16:00:21 <jlaska> #startmeeting Fedora QA Meeting
16:00:21 <zodbot> Meeting started Mon Jan 10 16:00:21 2011 UTC. The chair is jlaska. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:21 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
16:00:25 <jlaska> #meetingname fedora-qa
16:00:25 <zodbot> The meeting name has been set to 'fedora-qa'
16:00:31 <jlaska> #topic Gathering people
16:01:01 * kparal here
16:01:10 <jlaska> okay, show of nicks, who is here?
16:01:18 * mkrizek is here
16:01:47 <jlaska> hi kparal + mkrizek
16:02:07 <jlaska> anyone else lurking ... adamw Viking-Ice robatino bmwiedemann?
16:02:14 <adamw> yo
16:02:18 <bmwiedemann> lurking :)
16:02:21 * adamw busy playing with wikis
16:02:25 <Cerlyn> here
16:03:10 * Viking-Ice here
16:03:18 <jlaska> Cerlyn: adamw: Viking-Ice: bmwiedemann: hello all
16:03:42 <jlaska> okay, let's get started
16:03:52 <jlaska> quick recap from last week
16:03:56 <jlaska> #topic Previous meeting follow-up
16:04:14 <jlaska> #info Bodhi feedback patch from fcami (see infrastructure ticket #701), awaiting review
16:04:23 <jlaska> not much to add here ... looks like it's still pending review
16:04:58 <jlaska> I don't see lmacken around to ping on this
16:05:11 <jlaska> unless anyone has other ideas ...
16:06:05 <jlaska> #info Adamw and Hurry did some f15 test day rescheduling (adjusted Xorg dates and added preupgrade) -- https://fedoraproject.org/wiki/QA/Fedora_15_test_days
16:06:18 <adamw> that was in response to viking's suggestion last week
16:06:27 <jlaska> right on, thanks Viking-Ice and adamw
16:06:34 <adamw> X test week now comes between the first and second GNOME test days
16:07:10 * jlaska adjusts a cell in the test day schedule
16:07:15 <adamw> brb
16:07:46 <jlaska> adamw: you're next on the agenda, I'll shuffle that topic until you're back
16:08:03 <jlaska> kparal: are you ready to talk about autoqa updates?
16:08:10 <kparal> jlaska: yes
16:08:13 <jlaska> #topic Latest and greatest on autoqa-0.4.4
16:08:20 <jlaska> kparal: okay, take it away
16:08:35 <kparal> ok, welcome all to the regular news from the autoqa world :)
16:08:35 <adamw> back
16:08:50 <kparal> the changes for the past week:
16:08:52 <jlaska> adamw: okay, I'll queue you up right after kparal
16:09:03 <kparal> 1. mkrizek's patch "Add support for a staging server" has been pushed to master.
16:09:03 <kparal> https://fedorahosted.org/autoqa/ticket/241
16:09:03 <kparal> That means that AutoQA should now be fully configurable when it comes to interaction with various external services. Sending email with results (we have 3 different types of them) can be turned on or off, sending Bodhi comments can be turned on or off, and the URLs for the Koji and Bodhi instances are now configurable. There are a few more options available.
16:09:18 <jlaska> #info mkrizek's patch "Add support for a staging server" has been pushed to master (https://fedorahosted.org/autoqa/ticket/241)
16:09:40 <kparal> 2. my patch "Load config files from current directory by default" has been pushed to master.
16:09:40 <kparal> https://fedorahosted.org/autoqa/ticket/253
16:09:41 <kparal> All config files are now copied to the autotest client into the test's directory and used from there. That means we don't have to maintain /etc/autoqa conf files on each and every client anymore.
16:09:42 <jlaska> very cool, nice work mkrizek and kparal
16:09:49 <kparal> Unfortunately in one of the following patches we hit some issues and partly broke this functionality again. But it works for the most important config file - fas.conf. And I'll try to fix the rest soon :)
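To give a concrete picture of the change kparal describes, a minimal sketch of "load config files from the current directory first, fall back to the system location" might look like the following. The file name, fallback path, and function are illustrative assumptions, not AutoQA's actual code.

```python
# Minimal sketch of "load config files from the current directory by default".
# File name and fallback path are assumptions for illustration only.
import os
from ConfigParser import SafeConfigParser  # Python 2, matching 2011-era AutoQA/autotest

def load_config(name="fas.conf"):
    parser = SafeConfigParser()
    # Prefer the copy shipped alongside the test (the current working directory
    # on the autotest client); fall back to the old system-wide location.
    candidates = [os.path.join(os.getcwd(), name),
                  os.path.join("/etc/autoqa", name)]
    for path in candidates:
        if os.path.exists(path):
            parser.read(path)
            return parser
    raise IOError("no %s found in %s" % (name, candidates))
```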
16:09:54 <jlaska> #info kparal's patch "Load config files from current directory by default" has been pushed to master (https://fedorahosted.org/autoqa/ticket/253)
16:10:17 <jlaska> kparal: ah, so that's why you wanted site_tests writable by autotest?
16:10:48 <kparal> jlaska: yes, that's the reason, all the config files must first be copied to site_tests to be tarred and transferred to the client
16:11:05 <kparal> but as I say, maybe we will devise some other solution
16:11:22 <jlaska> yeah, I added a comment in the ticket regarding packaging guidelines
16:11:31 <kparal> because the current one causes some problems, we can't fully rely on CWD being set properly on the client
16:11:55 <kparal> next on
16:11:58 <kparal> 3. clumens asked for merging his clumens branch into master. His branch expanded again, now it contains not just the anaconda_storage test, but also the compose_tree and anaconda_checkbot tests and a git-post-receive hook. I started to review it, albeit slowly.
16:12:28 <jlaska> #info clumens asked for merging his clumens branch into master (tests: anaconda_storage, compose_tree, anaconda_checkbot, watcher: git-post-receive)
16:13:02 <kparal> clumens and jlaska implemented lots of stuff in that patchset, a lot of work went into it. so thanks.
16:13:29 * jlaska notes, it was fun writing test wrappers for existing tests
16:13:40 <kparal> 4. jlaska found out that the autoqa package will probably need to add autotest as a dependency. (This stuff is still a little unclear, maybe we will need to do a few things differently because of ticket https://fedorahosted.org/autoqa/ticket/256, so this may change).
16:14:16 <jlaska> #info jlaska found out that the autoqa package will probably need to add autotest as a dependency (due to fas.conf changes)
16:14:41 <kparal> everything's interconnected, this is the problem we spoke about a few paragraphs above :)
16:14:47 <kparal> 5. A few bugfixes found their way into the master branch. Kudos to jskladan for finding them.
16:14:58 <jlaska> #info A few bugfixes found their way into the master branch. Kudos to jskladan for finding them.
16:15:08 <kparal> 6. jskladan posted his "New Koji Watcher" patch to autoqa-devel:
16:15:08 <kparal> https://fedorahosted.org/pipermail/autoqa-devel/2011-January/001485.html
16:15:08 <kparal> That should allow us to get rid of the outdated post-bodhi-update watcher and make use of the new -pending tags in koji. Still waiting for review.
16:15:40 <jlaska> #info jskladan posted his "New Koji Watcher" patch to autoqa-devel, ready for review (https://fedorahosted.org/pipermail/autoqa-devel/2011-January/001485.html)
16:15:50 <jlaska> kparal: ah okay, I'll queue that up for some feedback today, thanks for the reminder
16:15:53 <kparal> it will also allow us to run depcheck-style tests efficiently, just once for many received update notifications
16:16:24 <kparal> 7. Documentation has been improved a little to accommodate the new changes.
16:16:31 <jlaska> #info Documentation has been improved a little to accommodate the new changes.
16:16:38 <kparal> 8. I just pushed a 'use self.__class__ in super() calls' patch that should simplify our test template a little (one less place requiring manual changes).
16:16:38 <kparal> https://fedorahosted.org/pipermail/autoqa-devel/2011-January/001494.html
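To illustrate the template simplification kparal mentions (see the link he posts above), here is a minimal sketch of the self.__class__ pattern. The class and method names are hypothetical, not AutoQA's real test template.

```python
# Minimal sketch of the 'use self.__class__ in super() calls' idea: with
# self.__class__, the super() call no longer hard-codes the class name, so a
# test copied from the template needs one less manual edit. Class names here
# are hypothetical, not AutoQA's actual template.

class AutoQATest(object):
    def run_once(self, **kwargs):
        print("common setup")

# Before: the class name appears twice, and both places must be edited
# whenever a new test is copied from the template.
class MyTestOld(AutoQATest):
    def run_once(self, **kwargs):
        super(MyTestOld, self).run_once(**kwargs)

# After: only the class statement itself needs renaming.
# (Caveat: super(self.__class__, ...) is only safe as long as these test
# classes are never subclassed further, otherwise it can recurse.)
class MyTestNew(AutoQATest):
    def run_once(self, **kwargs):
        super(self.__class__, self).run_once(**kwargs)
```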
16:16:42 <jlaska> kparal: feel free to prefix with #info if you have more :)
16:16:55 <kparal> jlaska: next time, sorry :)
16:17:03 <kparal> this is all for today :)
16:17:14 <jlaska> #info kparal just pushed a 'use self.__class__ in super() calls' patch that should simplify our test template a little (https://fedorahosted.org/pipermail/autoqa-devel/2011-January/001494.html)
16:17:22 <jlaska> kparal: awesome, thank you for the updates
16:17:32 <jlaska> how's it all looking for the next release?
16:17:48 <jlaska> is the end in sight for the original estimated date of January?
16:18:05 <kparal> we have to solve a few unexpected issues, but I don't see any major obstacle
16:18:12 <kparal> I'm just a little worried about testing
16:18:33 <kparal> we tested the individual patches manually
16:18:52 <kparal> but I can't really say everything's gonna work perfectly when deployed
16:18:59 <jlaska> kparal: yeah, it will be interesting to get a sense for how long integration testing takes
16:19:04 * kparal looking forward to a staging server in the future
16:19:18 <jlaska> kparal: note, our new staging hardware has been delivered ... I don't have any updates on ETA for setup
16:19:28 <kparal> jlaska: that sounds great
16:20:06 <jlaska> kparal: thanks again for the updates
16:20:15 <jlaska> adamw: you ready, we can touch on your topic next
16:20:21 <adamw> sure
16:20:23 <jlaska> #topic Update on critpath test definition (ticket#154)
16:20:34 <jlaska> adamw: alrighty, what's the latest n' greatest
16:21:07 <adamw> lemme remember, where were we up to last week
16:21:08 <adamw> just a sec
16:21:31 <jlaska> adamw: https://fedoraproject.org/wiki/QA/Meetings/20110103#Critical_Path_test_case_development
16:21:37 <adamw> yeah thanks
16:21:52 <adamw> I think the current pages are pretty much good to go, so i'll put them into production soon
16:22:11 <adamw> i think the last current easily-resolvable question (ignoring test dependencies for now) is whether we should standardize *naming*
16:22:46 <adamw> the benefit of that would be to allow tools like bodhi/f-e-k to display 'nice' test names, but it may be a bit cumbersome
16:23:07 <jlaska> adamw: this is for test case page names?
16:23:18 <adamw> yes
16:23:33 <adamw> so for instance if we standardised on the name QA:Testcase_package_(packagename)_testname
16:23:47 <adamw> bodhi could easily weed out all the parameters except 'testname'
16:24:01 <adamw> but it may not work well for some situations, i guess. it may be a step too far
16:24:31 <jlaska> what's the "standard" now?
16:24:36 <adamw> there isn't one
16:24:37 <jlaska> the page name has to be unique?
16:24:44 <adamw> it has to be QA:Testcase, but then you can do whatever you like
16:24:55 <adamw> so far the policy only requires test cases to be in the right categories
16:25:22 <adamw> we don't have a de facto standard either, all test cases tend to be named according to somewhat different schemes
16:25:45 <jlaska> okay
16:25:59 <adamw> (some of them aren't even in the correct QA:Testcase_blah format yet)
16:26:12 <jlaska> adamw: yeah, we've got a lot of migration still with older tests
16:26:18 <adamw> yeah
16:26:56 <adamw> i'm inclined towards not requiring a specific name format as it may restrict some cases
16:27:09 <adamw> like creating a test case that can apply as-is to several packages, and adding it to the categories for each package
16:27:21 <jlaska> I agree, my take ... let the categories you've previously defined handle the organization and metadata, and let the test case names be more human readable
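As a rough illustration of adamw's point above that, under a scheme like QA:Testcase_package_(packagename)_testname, bodhi could "weed out all the parameters except 'testname'", a small sketch follows. It is purely hypothetical; the meeting leaned against mandating such a scheme, and the page title and package name in the example are made up.

```python
# Hypothetical sketch of how a tool like bodhi could extract a 'nice' test
# name if wiki page titles followed the proposed
# QA:Testcase_package_(packagename)_testname scheme. Illustration only; this
# naming convention was discussed but not adopted.
import re

def nice_test_name(page_title, package):
    pattern = r"^QA:Testcase_package_%s_(?P<testname>.+)$" % re.escape(package)
    match = re.match(pattern, page_title)
    if match:
        return match.group("testname").replace("_", " ")
    return page_title  # fall back to the raw page title

print(nice_test_name("QA:Testcase_package_firefox_browse_web", "firefox"))
# -> "browse web"
```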
16:27:22 <adamw> and the benefit of being able to display a 'nice' test name isn't that great
16:28:07 <jlaska> adamw: that reminds me of one feature of the wiki that I'll need to list on Hurry's feature comparison: human-readable test case links
16:28:16 <adamw> yeah
16:28:21 <jlaska> instead of http://some.server.com/test?id=12435
16:28:29 <jlaska> oh right, test 12435! that's a good one
16:28:37 <adamw> one of my all-time favourites
16:28:51 <jlaska> alright ... so any next steps you wanted to highlight?
16:29:18 <adamw> i'm migrating some more test cases to the new format at present, still need to move the pages into 'production', announce to the lists, solicit test creation, and talk to tools maintainers
16:31:02 <jlaska> #info next steps ... migrating some more test cases to the new format at present, still need to move the pages into 'production', announce to the lists, solicit test creation, and talk to tools maintainers
16:31:10 <jlaska> adamw: thanks for the updates. Nice job on the SOPs too
16:31:18 <adamw> thanks
16:31:31 * jlaska adds a comment regarding human-readable test case URLs to https://fedoraproject.org/wiki/Talk:Rhe/tcms_Comparison
16:31:58 <jlaska> #topic SUSE openqa project
16:32:05 <jlaska> kparal suggested adding this topic
16:32:19 <jlaska> bmwiedemann joined #fedora-qa last week to let everyone know about a cool test automation effort he started
16:32:25 <jlaska> #info Project home - http://openqa.opensuse.org
16:32:33 <jlaska> #info Sample automated results available at - http://openqa.opensuse.org/results/
16:32:38 <jlaska> #info Source code hosted at http://www.os-autoinst.org/
16:33:11 <kparal> #info Example Fedora installation video - http://www3.zq1.de/fedora/Fedora-14-i386-netinst.iso.ogv
16:33:15 <jlaska> It was nice that bmwiedemann and _lmr_ were both in channel, so they could discuss similarities between the two screenshot-based GUI automation approaches (openqa and kvm-autotest)
16:33:15 <adamw> ooh! shiny! what's it do?
16:33:27 <kparal> adamw: click on the video link
16:33:36 <jlaska> adamw: it does GUI test automation by using screen region matching
16:33:45 <jlaska> so it's different from LDTP or dogtail, as those are at-spi based
16:34:10 <jlaska> I've reached out to hongqing and hurry with some links and for their thoughts on this approach
16:34:25 <jlaska> I'll be meeting with _lmr_ after this meeting to talk through his experiences with using GUI matching like this in kvm-autotest
16:34:57 <bmwiedemann> The effort started in May 2010 and has produced videos that were also used by openSUSE marketing on youtube
16:35:21 <kparal> bmwiedemann: what is it used for by opensuse - just installation testing, or also desktop applications testing?
16:35:26 <adamw> that looks like an awesome step towards project mojito
16:35:27 <jlaska> the source is all perl-based ... so if anyone in fedora-qa is a perl guru, this might be fun to get involved in
16:36:15 <bmwiedemann> kparal: it does test some important applications like KDE, GNOME, libreoffice, firefox - but the main reason is to prevent breakage in install, boot and updates
16:36:29 * adamw not a perl guru but he bets he knows where to hire one
16:36:29 <kparal> I believe this project could nicely supplement our current rats-install test
16:36:34 <bmwiedemann> aka critical-path
16:37:30 <kparal> maybe we can execute it from autoqa regularly for Branched composes
16:37:42 <jlaska> kparal: it could be ... I've asked hongqing for his thoughts on the approach. He's still coming up to speed with the roadmap
16:38:05 <jlaska> bmwiedemann: what do you call your GUI region matches again?
16:38:12 <jlaska> kvm-autotest calls them stepfiles
16:39:16 <bmwiedemann> I would say "test modules" would be the best equivalent. they contain a "run" method with the keypresses and waits, plus a checklist method with the MD5sums of the regions
16:39:45 <jlaska> bmwiedemann: how do you get the md5sums of the regions? Is there a tool that watches a manual install, and records keypresses/clicks etc.?
16:41:09 <kparal> bmwiedemann: are you able to ignore minor differences somehow? like a changed icon in the menu would not break the test
16:41:10 <bmwiedemann> jlaska: every automated install can also take manual interaction via VNC or qemu-monitor commands ("sendkey ctrl-alt-f2"), so I usually interact with the automation, which will write things to a log.
16:41:26 <bmwiedemann> but still a lot of things done manually there.
16:41:53 <bmwiedemann> kparal: one way to avoid breakage is to only use the MD5sum of a certain interesting region.
16:42:28 <kparal> bmwiedemann: I see, so I don't have to match the whole screen, just a part of it, right?
16:42:37 <bmwiedemann> the alternative is to live with it and have several "known-good" MD5sums for a test
16:42:44 <bmwiedemann> kparal: yes.
16:42:45 <jlaska> bmwiedemann: I tried running it against a Fedora 14 ISO, but it wasn't happy, and I wasn't sure how/where to make adjustments since I haven't touched perl in ages
16:43:27 <wwoods> augh, Perl
16:43:49 <bmwiedemann> jlaska: you should know that "wasn't happy" is not a good bug report ;) we can have a look at that later.
16:43:59 <jlaska> bmwiedemann: understood :)
16:44:02 <kparal> there are some tools in the world that are able to intelligently recognize "almost same" images - they give a percentage of similarity. it's done by image downscaling, re-coloring, etc. I am sure some of that approach could also be used here to improve image matching
16:44:43 <jlaska> there are some potential licensing things I wasn't sure about with the tool (since it's using some packaging from rpmfusion) to generate the videos. Someone smarter than me would need to work that out
16:44:44 <bmwiedemann> kparal: I have one simple approach: thresholding the image before checksumming
16:45:14 <kparal> bmwiedemann: how does it work?
16:45:28 <wwoods> I'm convinced that for certain apps we could be using a 'translation' that put unique unicode glyphs on each button
16:45:34 <wwoods> and then just scan for those glyphs
16:45:44 <bmwiedemann> kparal: substituting each byte < 128 with 0 and every other byte with 255
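A minimal sketch of the threshold-then-checksum idea bmwiedemann just described (map every pixel byte below 128 to 0 and everything else to 255, then MD5 an interesting region). The Pillow usage, file names, and coordinates are illustrative assumptions; os-autoinst itself is Perl and its real implementation may differ.

```python
# Minimal sketch of "threshold the image, then checksum a region". Pillow
# usage and coordinates are assumptions for illustration only.
import hashlib
from PIL import Image

def region_md5(screenshot_path, box, threshold=128):
    """MD5 of a screen region after mapping every pixel value < threshold to 0
    and everything else to 255, so small color/antialiasing drift is ignored."""
    img = Image.open(screenshot_path).convert("L").crop(box)  # grayscale crop
    binary = img.point(lambda value: 0 if value < threshold else 255)
    return hashlib.md5(binary.tobytes()).hexdigest()

# Usage: compare against a stored "known good" checksum for that region.
# expected = "3f2c..."  # hypothetical reference value
# ok = region_md5("screen.png", box=(100, 200, 400, 260)) == expected
```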
16:46:02 <wwoods> heh - we could use tiny QR codes
16:46:42 <jlaska> wwoods: would that be any better than just using at-spi instead?
16:47:26 <Cerlyn> bmwiedemann: So does your tool support detecting slight shifts of the image region in question (new field, moved for alignment purposes, etc.)?
16:47:30 <bmwiedemann> I know, there is "sikuli" out there, which uses opencv for computer vision to compensate for small changes
16:47:32 <wwoods> at-spi requires actual access to the accessibility layer - i.e. the dbus session in the client
16:47:32 <kparal> I think both approaches have their uses. os-autoinst is very usable for installation testing and boot process testing
16:47:38 <bmwiedemann> Cerlyn: no
16:48:15 <jlaska> kparal: yeah, I was thinking over the weekend that the perfect solution would likely be a mix of the two approaches
16:48:16 <kparal> Cerlyn: I am sure that can be worked out to some extent
16:48:26 <kparal> jlaska: agreed
16:48:27 <wwoods> but if we included a special 'translation' of the app, and provided a font to display its 'language', then you can find all the buttons by their labels using image recognition
16:48:33 <jlaska> kparal: as we do now with lili's sample DVD auto install proof of concept (uses dogtail + kickstart)
16:49:28 <wwoods> so then you could just set anaconda to lang=QR and the button for 'Next' would have the QR-ese word for 'Next', which would be some easily-recognized symbol
16:49:37 <wwoods> e.g. a QR code
16:49:38 <kparal> bmwiedemann: so what's the tool name - openqa or os-autoinst? :)
16:49:38 <bmwiedemann> wwoods: that would only be important if you wanted to click the buttons instead of using the keyboard.
16:49:51 <bmwiedemann> the test tool is "os-autoinst"
16:49:58 <jlaska> wwoods: sounds good, I'd have no idea where+how to start there :)
16:50:07 <bmwiedemann> openqa.opensuse.org is just the machine running it
16:50:13 <bmwiedemann> one of the two
16:50:17 <jlaska> bmwiedemann: thanks for stopping by for the meeting
16:50:22 <wwoods> bmwiedemann: true, keyboard will suffice if there are keyboard accelerators for everything you want to test
16:50:29 <wwoods> anyway, interesting stuff!
16:50:30 <jlaska> all: let's move discussion of further details into #fedora-qa
16:50:43 <jlaska> and in the interest of time ... we'll move on to open discussion
16:50:43 <kparal> yes, certainly a very nice tool
16:51:07 <jlaska> agreed, nice work bmwiedemann
16:51:17 <jlaska> #topic Open Discussion <your topic here>
16:51:27 <jlaska> Alright ... anything folks want to raise that we haven't already discussed?
16:52:07 <jlaska> if not ... we'll close out in 2 minutes
16:53:08 <jlaska> Meeting ends in 1 minute ...
16:54:12 <jlaska> thanks everyone for your time!
16:54:14 <jlaska> #endmeeting