f22_final_go_no-go_meeting_-_2
LOGS
17:01:33 <jreznik> #startmeeting F22 Final Go/No-Go meeting - 2
17:01:33 <zodbot> Meeting started Fri May 22 17:01:33 2015 UTC.  The chair is jreznik. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:33 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
17:01:34 <jreznik> #meetingname F22 Final Go/No-Go meeting - 2
17:01:34 <zodbot> The meeting name has been set to 'f22_final_go/no-go_meeting_-_2'
17:01:48 <jreznik> #topic Roll Call
17:01:49 <nirik> morning
17:01:50 <sgallagh> .hello sgallagh
17:01:51 <zodbot> sgallagh: sgallagh 'Stephen Gallagher' <sgallagh@redhat.com>
17:02:00 <jreznik> hola dgilmore and everyone!
17:02:04 * satellit listening
17:02:12 <jwb> hello
17:02:23 <jforbes> hello
17:02:31 <mattdm> .hello mattdm
17:02:32 <zodbot> mattdm: mattdm 'Matthew Miller' <mattdm@mattdm.org>
17:02:49 <pwhalen> .hello pwhalen
17:02:50 <zodbot> pwhalen: pwhalen 'Paul Whalen' <pwhalen@redhat.com>
17:02:50 <jreznik> #chair dgilmore sgallagh nirik mattdm roshi
17:02:50 <zodbot> Current chairs: dgilmore jreznik mattdm nirik roshi sgallagh
17:02:56 <danofsatx> .hello dmossor
17:02:57 <zodbot> danofsatx: dmossor 'Dan Mossor' <danofsatx@gmail.com>
17:03:04 * kparal is here
17:03:33 <jreznik> ok, let's start then!
17:03:42 <roshi> .hello roshi
17:03:43 <zodbot> roshi: roshi 'Mike Ruckman' <mruckman@redhat.com>
17:03:51 * nirik has a sense of deja-vu
17:03:53 <jreznik> #topic Purpose of this meeting
17:03:54 <jreznik> #info Purpose of this meeting is to see whether or not F22 Final is ready for shipment, according to the release criteria.
17:03:56 <jreznik> #info This is determined in a few ways:
17:03:57 <jreznik> #info No remaining blocker bugs
17:03:59 <jreznik> #info Release candidate compose is available
17:04:00 <jreznik> #info Test matrices for Final are fully completed
17:04:02 <jreznik> #link http://qa.fedoraproject.org/blockerbugs/milestone/22/final/buglist
17:04:03 <jreznik> #link https://fedoraproject.org/wiki/Test_Results:Fedora_22_Final_RC3_Installation
17:04:05 <jreznik> #link https://fedoraproject.org/wiki/Test_Results:Fedora_22_Final_RC3_Base
17:04:06 <jreznik> #link https://fedoraproject.org/wiki/Test_Results:Fedora_22_Final_RC3_Desktop
17:04:08 <jreznik> #link https://fedoraproject.org/wiki/Test_Results:Fedora_22_Final_RC3_Server
17:04:15 <kparal> jreznik: you're missing a cloud link
17:04:30 <kparal> https://fedoraproject.org/wiki/Test_Results:Fedora_22_Final_RC3_Cloud
17:04:31 <jreznik> #link https://fedoraproject.org/wiki/Test_Results:Fedora_22_Final_RC3_Cloud
17:04:47 <jreznik> kparal: thanks, I knew something was missing but I was just blind as...
17:05:10 <jreznik> #topic Current status
17:05:53 <jreznik> so today we have a follow-up go/no-go after yesterday's unsuccessful go/no-go
17:06:05 <nirik> there's 1 proposed blocker we should discuss.
17:06:22 <jreznik> RC3 is being validated and as nirik pointed out, there's one proposed blocker bug
17:06:23 <roshi> and another that needs to be discussed, I haven't proposed it yet
17:06:40 <jreznik> #info 1 Proposed Blocker
17:06:45 <jreznik> #undo
17:06:45 <zodbot> Removing item from minutes: INFO by jreznik at 17:06:40 : 1 Proposed Blocker
17:06:58 <dgilmore> jreznik: if it is not proposed it does not exist
17:07:18 <jreznik> dgilmore: are we going to be so fast we won't get to it? :)
17:07:44 <dgilmore> jreznik: maybe
17:07:49 <jreznik> #topic Mini blocker review
17:07:54 <roshi> it's being proposed right now
17:07:57 <roshi> so 2
17:08:10 <jreznik> #info 2 Proposed Blockers
17:08:25 <jreznik> roshi: you want to go through it or do you want me?
17:08:30 <roshi> sure, I can
17:08:36 <roshi> just waiting on blocker bugs to load...
17:08:56 <roshi> #topic (1224048) anaconda does not include package download and filesystem metadata size in minimal partition size computation and hard reboots during installation
17:08:59 <roshi> #link https://bugzilla.redhat.com/show_bug.cgi?id=1224048
17:09:01 <roshi> #info Proposed Blocker, anaconda, NEW
17:09:03 <nirik> note that it updates only every 30min, so it wont have any just proposed ones.
17:09:24 <roshi> yup, I'll manually write it out
17:09:42 <roshi> this bug is a direct violation of the criteria, as written
17:09:53 <nirik> anyhow. On this one I am -1 blocker for f22. It's annoying, but it's a corner case I don't think many people will hit.
17:09:57 <nirik> is it?
17:10:13 <kparal> only people with netinst and very small partition sizes
17:10:38 <danofsatx> as in vms, private clouds
17:10:41 <roshi> I think it's a direct violation of "When using the custom partitioning flow, the installer must be able to: Reject or disallow invalid disk and volume configurations without crashing."
17:10:49 <kparal> you can hit it with DVD, but you need to be extremely unlucky - specifying a partition just a bit bigger than the minimum size
17:10:59 <sgallagh> danofsatx: Even in VMs and private clouds, most people expect their content to be larger than the installed packages.
17:11:05 <sgallagh> They usually build in room for, say, data.
17:11:08 <roshi> but the times where you *would* hit this seem small
17:11:19 <danofsatx> understood.
17:11:26 <jreznik> yeah, based on how much of a corner case it seems to be, I'm -1 blocker
17:11:34 <kparal> I agree that this is a criterion violation, but it happens just in certain occasions, so it's a judgement call, I believe
17:11:41 <roshi> yeah, same here
17:11:46 <danofsatx> common bug?
17:11:55 <kparal> danofsatx: if rejected, definitely a common bug
17:11:57 <roshi> for sure
17:11:58 <nirik> and fix for f23 for sure.
17:11:58 <sgallagh> I'm also -1 as a blocker for F22, though as discussed in #fedora-qa last night, we may want to pre-emptively vote it an F23 blocker
17:12:19 <dgilmore> definitely violates the criteria
17:12:34 <kparal> if we reject something in the last minute, I think it's quite a good idea to already accept it for the next fedora release, to make sure it's not forgotten and fixed eventually
17:12:55 <dgilmore> but a system that has such a small amount of free space post install seems silly
17:13:18 <kparal> if this was discovered earlier in the cycle, we could have been more strict, but I think it makes sense here to waive it and accept it for F23, probably Beta or Final
17:13:30 <roshi> proposed #agreed - 1224048 - RejectedBlocker - This bug does violate the criteria, but the cases where a user can hit this are slim. Rejected as a blocker for F22, please document in Common Bugs and propose for F23 so it doesn't get forgotten.
17:13:31 <jreznik> kparal: sure, let's accept it for F23 - it's more about to disallow such config
17:13:31 * nirik nods.
17:13:42 <nirik> ack
17:13:45 <jreznik> roshi: maybe we can agree on it for F23
17:13:47 <kparal> roshi: patch
17:13:47 <dgilmore> though I guess if you build cloud images with the intention that the filesystems will be resized after it is less silly
17:13:48 <sgallagh> ack
17:13:56 <kparal> roshi: please also accept for F23 Final
17:14:03 <kparal> right away
17:14:33 <roshi> ok
17:14:51 <sgallagh> Beta?
17:15:06 <dgilmore> I would say f23 alpha
17:15:06 <kparal> I'm fine with either, not sure if it is serious enough for Beta
17:15:16 <roshi> proposed #agreed - 1224048 - RejectedBlocker F22Final AcceptedBlocker F23Final - This bug does violate the criteria, but the cases where a user can hit this are slim. Rejected as a blocker for F22, please document in Common Bugs. This has been accepted as a blocker for F23 so it doesn't get forgotten.
17:15:19 <kparal> dgilmore: it does not violate any alpha criteria
17:15:21 <sgallagh> dgilmore: It's not violating an Alpha criterion.
17:15:23 <dgilmore> just because it is known now
17:15:34 <roshi> well, we now go through all milestones for blocker reviews
17:15:38 <sgallagh> dgilmore: Nothing prevents it from landing earlier :)
17:15:38 <kparal> I wouldn't go that path, anaconda devs would hate us
17:15:41 <roshi> so it'll get looked at during alpha for sure
17:15:42 <kparal> more than they do now
17:16:05 <jreznik> dgilmore: I tried the same logic for other F23 bugs but still something is better than nothing
17:16:33 <dgilmore> jreznik: :)
17:16:36 <jreznik> just someone has to take a look at all blockers ahead of that milestone
17:17:01 <kparal> ack
17:17:12 <kparal> roshi: I'll do the bug secretary work
17:17:17 <roshi> thanks
17:17:41 <jreznik> ack
17:17:49 <danofsatx> ack
17:17:59 <roshi> #agreed - 1224048 - RejectedBlocker F22Final AcceptedBlocker F23Final - This bug does violate the criteria, but the cases where a user can hit this are slim. Rejected as a blocker for F22, please document in Common Bugs. This has been accepted as a blocker for F23 so it doesn't get forgotten.
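To make the rejected corner case concrete: the minimum size the installer reports omits the package download cache and filesystem metadata, so a partition sized only slightly above that minimum can still fill up mid-install on a netinstall. The sketch below is purely illustrative (it is not anaconda's code, and all sizes are made-up examples):

```python
# Illustrative only -- not anaconda's code. All sizes are made-up examples.
def required_space_mib(installed_payload_mib, download_cache_mib,
                       fs_metadata_overhead=0.05):
    """Padded minimum that also counts the RPM download cache kept on disk
    during a netinstall and a rough filesystem metadata overhead."""
    return (installed_payload_mib + download_cache_mib) * (1 + fs_metadata_overhead)

installed = 3800.0            # hypothetical installed package set
cache = 1200.0                # hypothetical download cache for a netinstall
naive_minimum = installed     # what a check that ignores the cache would use
safe_minimum = required_space_mib(installed, cache)

partition = 3900.0            # "just a bit bigger than the minimum"
print("naive check passes:", partition >= naive_minimum)   # True -> install starts
print("actually enough:   ", partition >= safe_minimum)    # False -> fills up mid-install
```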
17:18:29 <roshi> #topic (1224045) DeviceCreateError: ('Process reported exit code 1280: A volume group called fedora already exists.\n', 'fedora')
17:18:35 <roshi> #link https://bugzilla.redhat.com/show_bug.cgi?id=1224045
17:19:26 <sgallagh> roshi: What does "set up a raid array" mean in this bug?
17:19:27 <kparal> we think that this is connected with some leftover data after deleting/re-creating the RAID array
17:19:43 <sgallagh> bios raid? dmraid in a live image before starting the installer?
17:19:43 <roshi> well, it failed for me the first time
17:19:55 <tflink> fwiw, i just did a x86_64 server netinstall on a similar system
17:20:06 <roshi> intel bios raid, set up a RAID0 array, started the install
17:20:13 <kparal> cleaning the partitioning tables on raid disks seems to resolve that issue. right, roshi ?
17:20:16 * tflink is going to try workstation live after this install finishes and changes everything to RAID1
17:20:17 <roshi> the disks had previously been installed to in a non-raid setup
17:20:52 <roshi> kparal: I'm not sure
17:21:01 <roshi> I went to sleep before I figured that out
17:21:21 <dgilmore> i think I am +1 blocker here
17:21:40 <roshi> I'm not a raid expert - so there could be user error here
17:21:42 <dgilmore> though it seems it can be worked around
17:21:54 <kparal> I believe the experience pschindl had was that after clearing old partitioning tables it started working ok
17:22:08 <roshi> I'd like someone with more raid experience to double check my results
17:22:13 <kparal> a further complication was that he had old GPT tables on the disks, but booted an i386 image
17:22:21 <kparal> it's known that that confuses anaconda a lot
17:22:29 <kparal> and they rejected such bugs in the past, iirc
17:23:11 <kparal> we would need dlehman here, it seems
17:23:34 <roshi> so I'll defer to you all on the blockeryness of this bug - but I didn't want to *not* say something about it
17:23:51 <kparal> honestly, our original impression with pschindl was that this was not blocker material
17:24:10 * nirik isn't sure yet, still pondering and trying to work out how common this might be
17:24:29 <danofsatx> I'm leaning towards -1. This doesn't appear to be "normal" workflow.
17:24:42 <danofsatx> I understand this is bad, but you have to try and hit it.
17:25:06 <roshi> the original workflow was "take two discs that had normal installs on them, make an array, get a crash on installation start"
17:25:10 <dgilmore> to me it seems reinstalling triggers it
17:25:16 <nirik> roshi: was the orig install x86_64 and the second one i386?
17:25:19 <dgilmore> that's not really trying
17:25:33 <kparal> jreznik invited dlehman over here
17:25:38 <roshi> tbh, I'm not sure - haven't installed to this box in a while
17:25:52 <roshi> I'm wiping the disks now to retest with clean disks
17:26:10 <danofsatx> the problem is that anaconda sees a new device and tries to create the fedora VG on it, but that VG already exists as part of the underlying data structure on the old device.
17:26:36 <kparal> pschindl received a crash right on anaconda start, with the leftover partitioning tables (probably)
17:27:06 * danofsatx really needs a hardware budget for Fedora QA ;)
17:27:24 <kparal> pschindl_wfh: right on time, debating roshi's raid bug
17:27:26 * pschindl_wfh is here. Everything is -1.
17:27:29 <pschindl_wfh> :)
17:27:38 <sgallagh> danofsatx: Right, that would be my guess too: you'd only hit this if you tried to create a RAID array from at least one disk that previously had a Fedora install on *just* it
17:27:47 <sgallagh> (outside of an array)
17:28:00 <sgallagh> I also strongly suspect this has been this way for a long time.
17:28:16 <danofsatx> sgallagh: that's exactly how I'm reading it.
17:28:21 <kparal> I have the same impression as sgallagh in his last comment
17:28:54 <pschindl_wfh> As I wrote in the bug, I think it is caused by the firmware, by the way it handles creation and deletion of raid volumes.
17:29:41 * nirik loathes firmware raid, but oh well, we support it.
17:30:08 <kparal> we're not sure if the firmware itself should remove old partitioning tables and such from the disk when destroying the RAID volume. it seems it didn't, and that confused anaconda afterwards
17:30:53 <nirik> well, I think it might also be lvm's fault.
17:31:14 * kparal is not sure if he should repost dlehman's comment here
17:31:17 <nirik> boot, see old lvm, start wipe to do new install, but don't properly deactivate/destroy the old lv's
17:31:20 <kparal> you know, it's logged
17:31:51 <kparal> he's -1 blocker
17:31:53 <jreznik> if you're not in #anaconda - dlehman confirmed sgallagh's theory
17:31:56 <nirik> anyhow, I am leaning toward -1 blocker as it's so corner a case.
17:31:59 <jreznik> -1 blocker
17:32:07 <sgallagh> -1 blocker
17:32:11 <pschindl_wfh> -1 from me too
17:32:27 <danofsatx> -1 (already stated above)
17:32:33 <sgallagh> jreznik: In fairness, it was equal parts danofsatx's theory. (Don't want to steal credit)
17:32:47 <jreznik> ah, sorry
17:32:52 <jreznik> danofsatx++
17:32:52 <zodbot> jreznik: Karma for dmossor changed to 2:  https://badges.fedoraproject.org/tags/cookie/any
17:32:57 <danofsatx> yeah, sgallagh just wrapped it in a better wrapper
17:32:57 <jreznik> cookie :)
17:33:09 <jreznik> for you as my excuse
17:33:22 <roshi> proposed #agreed - 1224045 - RejectedBlocker - This bug is a corner case, and while it does violate the criteria it's not severe enough to block the release for Fedora 22.
17:33:24 <danofsatx> heh...
17:33:30 <danofsatx> ack
17:33:33 <jreznik> ack
17:33:34 <sgallagh> ack
17:33:43 <roshi> #agreed - 1224045 - RejectedBlocker - This bug is a corner case, and while it does violate the criteria it's not severe enough to block the release for Fedora 22.
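The workaround that came up above - clearing the stale partition tables and LVM metadata from the member disks before re-creating the array - might look roughly like the following. This is a hypothetical, destructive cleanup sketch, not installer code; the device names are placeholders, and only the underlying `vgchange` and `wipefs` tools are real:

```python
# Hypothetical, destructive cleanup -- run only on disks you intend to erase.
import subprocess

MEMBER_DISKS = ["/dev/sda", "/dev/sdb"]   # placeholder firmware-RAID member disks

def run(cmd, check=True):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=check)

# Deactivate a leftover "fedora" VG if one is still visible; this may fail
# harmlessly when no such VG exists, so don't treat failure as fatal.
run(["vgchange", "-an", "fedora"], check=False)

# Wipe filesystem, LVM and partition-table signatures from each member disk
# so the firmware-created array starts from genuinely blank members.
for disk in MEMBER_DISKS:
    run(["wipefs", "--all", disk])
```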
17:34:05 <roshi> that's it for blockers I think
17:34:13 * tflink has a question before we move on
17:34:27 <tflink> how common are promise raid controllers and dmraid?
17:34:46 <nirik> they were pretty common long ago, but I have not seen/heard of any much in the last few years.
17:34:52 <tflink> and are they worthy of blocking the release if they don't work?
17:35:05 <sgallagh> Same; they seem to have fallen out of favor
17:35:15 * danofsatx checks newegg
17:35:18 <tflink> fwiw, my AMD bios raid seems to be a promise variant
17:35:26 <tflink> and that's a new-ish system
17:36:27 * tflink hasn't been able to get arrays on either of his promise-ish boxes to show up as installation targets with f22
17:36:47 <danofsatx> only 2 Promise controllers on Newegg, zero reviews
17:36:48 <nirik> huh.
17:37:25 <nirik> I'd say not worth blocking over, especially when the bug is that they don't see them...
17:37:28 <tflink> I haven't filed a bug yet because I just hit the second one and I figured that the first one was just wonky hardware but wanted to mention it
17:37:58 <sgallagh> I'm with nirik; the worst case here is that the drives are unseen (and therefore we have no effect and cause no changes to their contents)
17:37:59 <jreznik> nirik: yep
17:38:00 <tflink> nirik: i'd say that not showing up as installation targets is almost as bad as not working well with the arrays
17:38:03 <danofsatx> not many reviews on Amazon, either
17:38:24 <tflink> danofsatx: they're usually embedded onto motherboards
17:38:32 <nirik> tflink: well, worse would be if it saw them and corrupted data on them.
17:38:34 <tflink> at least that's been my experience
17:38:36 <danofsatx> understood.
17:38:41 <nirik> tflink: you have f21 on those machines currently?
17:38:42 <tflink> nirik: that's why i said almost as bad :)
17:38:47 * sgallagh used to have a highpoint controller. Those never worked reliably either.
17:39:17 <tflink> nirik: kind of. f21 at least saw the array and installed to it
17:40:01 <nirik> ok. I was going to suggest trying the f21 4.0.4 updates-testing kernel on it and see if it sees them... that would tell us if it's a kernel driver issue or not.
17:41:22 <tflink> I can certainly try it
17:42:08 <tflink> but i think that the bigger question is whether or not this would be worth slipping over
17:42:17 <sgallagh> Right now we need to make a decision though on whether this is a blocking issue
17:42:18 <sgallagh> yeah
17:42:54 <nirik> jwb / jforbes: have you seen any reports of issues with promise raid and 4.x kernels?
17:42:56 <tflink> if I'm the first person to even try this for f22, I'm not sure it's all that common but then again, I suspect most folks have intel stuff
17:43:02 <sgallagh> Considering a web search on "linux promise raid" returns results primarily of the form "don't use it"
17:43:14 <sgallagh> I'm inclined to suggest that it's not worth slipping over
17:43:32 * kparal lost context, what's the topic now? that bug has already been #agreed
17:43:32 * nirik is with sgallagh
17:43:45 <tflink> sgallagh: my amd SB950 bios raid is also detected as promise and has similar symptoms
17:43:45 <sgallagh> kparal: There's no associated BZ
17:44:02 <nirik> kparal: it's tflink's promise raid systems. They don't see the raid at all.
17:44:05 <sgallagh> tflink: It's probably an OEM promise controller
17:44:16 <kparal> #topic tflink's promise raid systems
17:44:30 <roshi> #topic tflink's promise raid systems
17:44:35 <kparal> ;)
17:44:40 <tflink> sgallagh: yeah, but my point is that it could be more common than we think if it shows up as AMD bios raid instead of promise
17:44:45 <sgallagh> tflink: I'm not saying it isn't a bug. I'm just saying that cargo-cult internet wisdom supports the "you shouldn't be attempting this" argument
17:44:50 <sgallagh> ah
17:45:27 <fedorauser|26599> please...not for this bug...
17:45:29 <mattdm> I'm -1 to blocking on this.
17:46:04 <sgallagh> I am also -1 to blocking on this. I'm also -1 to trying to decide on the future of promise RAID criteria in this meeting where everyone is sleep-deprived.
17:46:08 * tflink isn't trying to make this out as more than it is but didn't want to not mention it
17:46:11 <danofsatx> -1
17:46:24 <danofsatx> tflink, it's mentioned ;)
17:46:36 <nirik> -1
17:46:44 <danofsatx> so, all blockers cleared?
17:46:48 <jreznik> -1 but yeah, let's sort out all RAIDs later
17:47:03 * dgilmore wonders if this would have been a blocker a week ago
17:47:04 <jreznik> (and understand that stuff I agree it could be tricky)
17:47:26 <sgallagh> dgilmore: A week ago I'd have been +1 FE, but I think still -1 blocker
17:47:27 <jreznik> dgilmore: it would likely be "punt, get more data"
17:47:51 <roshi> well, this wasn't proposed yet - so nothing to do here
17:47:58 <roshi> jreznik: back to you :)
17:48:21 <dgilmore> okay, I just think we need to take the "push to get it out the door" context out of the way
17:48:27 <sgallagh> jreznik: Stage cleared. Advance to the next level.
17:48:32 <jreznik> thanks roshi
17:48:39 <roshi> np :)
17:48:42 <dgilmore> it should be a blocker or not, and when in the cycle it hits should not change that
17:49:05 <jreznik> #topic Test Matrices coverage
17:49:09 <roshi> I concur dgilmore
17:49:19 <jreznik> but I'll bounce it back to QA
17:49:24 * nirik agrees too.
17:49:40 <kparal> the coverage is very good, many thanks to everyone who contributed
17:49:48 <sgallagh> dgilmore: I mostly agree, but in this case I'd have opposed it as a blocker earlier too
17:49:52 * jreznik is not sure - we have only so much energy to solve all the issues of this HW world...
17:49:55 <roshi> yeah, the matrices got torn through really well :)
17:49:57 * danofsatx apologizes for not representing during this period
17:50:13 <dgilmore> sgallagh: confirms my point
17:50:40 <kparal> we have a few blank spots in the matrices, let me print them out here
17:50:58 <kparal> xen is not tested
17:51:00 * danofsatx has to bow out of the meeting now.
17:51:06 <kparal> hardware raid is missing
17:51:16 <danofsatx> y'all have the con, danofsatx out.
17:51:16 <mattdm> kparal: is ec2 pvm tested?
17:51:16 <kparal> and fcoe
17:51:39 <kparal> mattdm: going through installation matrix now. what's pvm?
17:51:52 <mattdm> xen :)
17:52:11 <kparal> we have a few blank spots for arm, but I think the number of results is comfortable enough
17:52:23 <kparal> pwhalen, do you have any concerns about the few missing fields for arm?
17:52:42 <roshi> mattdm: I can test the EC2 images when I know what the AMIs are for RC3
17:52:46 <kparal> we're missing local and ec2 cloud tests
17:53:09 <kparal> I've performed some local tests according to kushal's instructions, but I'm not clear how much of it they covered
17:53:17 <nirik> roshi: I pasted them I thought? or you aren't sure those are right?
17:53:37 <roshi> I have no way to know if those are right or not :)
17:54:00 <roshi> they *look* right, but I don't really know if they are or not
17:54:09 <mattdm> roshi: should be what you get with searching for Fedora-Cloud-Base-22-20150521
17:54:15 <pwhalen> kparal, there are a couple I'm working on, but also some we need to filter out for arm. I think it's well covered.
17:54:23 <kparal> there are also missing results for server on arm
17:54:30 <kparal> apart from what's mentioned above, we're very much covered
17:54:45 <mattdm> roshi in US East, ami-76dfc41e for 64-bit HVM, ami-9ed8c3f6 for PV
17:54:57 <roshi> mattdm: nothing shows for me in that search
17:55:18 <roshi> search never seems to work for me, so I stopped trusting it
17:55:32 <roshi> I have no reason to believe they *dont* work
17:55:49 <kparal> do we have someone to test xen, hwraid and fcoe until the ec2 tests are executed?
17:56:03 <mattdm> roshi do you see those ami ids?
17:56:07 <mattdm> oddshocks: ping
17:56:07 <zodbot> mattdm: Ping with data, please: https://fedoraproject.org/wiki/No_naked_pings
17:56:17 * jwb high fives zodbot
17:56:26 <mattdm> lol
17:56:30 <roshi> when I search the actual AMI id yeah
17:56:33 <mattdm> oddshocks: ^
17:56:41 <mattdm> ha take taht zodbot
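For reference, the same image lookup can be done programmatically instead of through the console search box. This sketch uses the boto3 API, which is an assumption about tooling (it may not be what was used at the time); the image-name prefix is the one mattdm gave above:

```python
# Assumption: boto3 installed and EC2 credentials configured for us-east-1.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_images(
    Filters=[{"Name": "name", "Values": ["Fedora-Cloud-Base-22-20150521*"]}],
)
# Print AMI id, virtualization type (hvm/paravirtual) and full name.
for image in sorted(resp["Images"], key=lambda i: i["Name"]):
    print(image["ImageId"], image["VirtualizationType"], image["Name"])
```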
17:58:04 <roshi> testing 64bit hvm now
17:58:53 <nirik> ok, so we want to pause here for ami testing?
17:59:13 <jreznik> how much time it will take?
17:59:25 <jreznik> but yeah, we can have coffee/tea break
18:00:01 * sgallagh considers Irish Coffee
18:00:09 <mattdm> both hvm and pv amis pass basic smokescreen
18:00:14 <dgilmore> sgallagh: i suggest irish whiskey
18:00:16 <roshi> it takes me 5-8 minutes to run through some smoketests
18:00:18 <dgilmore> skip the coffee
18:00:26 <robyduck> dgilmore: +1
18:00:48 <sgallagh> dgilmore: I like the cut of your jib
18:00:53 <nirik> :)
18:07:46 <roshi> well, with ec2 things take longer...
18:08:34 <roshi> ami-76dfc41e seems to be working fine
18:08:46 <roshi> let me test ami-9ed8c3f6 next...
18:09:01 * mattdm thinks that sshd is maybe not the best test service for the cloud :)
18:09:14 <nirik> yeah, that needs changing IMHO.
18:09:50 <roshi> it does
18:09:55 <roshi> and I keep meaning to change that
18:09:59 <roshi> I use nginx
18:10:12 <roshi> lets me test services and installation all in one pass
18:11:29 * mattdm is old-school, uses apache httpd
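A smoke test along the lines roshi describes - boot one of the RC3 AMIs and check that a service answers - could look roughly like this. Everything beyond the AMI ID (boto3 itself, the instance type, key pair and security group) is an assumption, not what was actually used in the meeting:

```python
# Sketch only: launch the RC3 HVM image and wait for sshd to answer.
import socket
import time

import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
(instance,) = ec2.create_instances(
    ImageId="ami-76dfc41e",        # x86_64 HVM AMI mentioned above
    InstanceType="t2.micro",       # assumed instance type
    KeyName="my-keypair",          # placeholder key pair
    SecurityGroups=["allow-ssh"],  # placeholder group that opens port 22
    MinCount=1,
    MaxCount=1,
)
instance.wait_until_running()
instance.reload()                  # refresh to pick up the public IP

def port_open(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Give cloud-init and sshd a few minutes to come up, then report the result.
deadline = time.time() + 300
while time.time() < deadline and not port_open(instance.public_ip_address, 22):
    time.sleep(10)
print("sshd reachable:", port_open(instance.public_ip_address, 22))
```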
18:13:49 <mattdm> Also, I believe 32-bit ec2 images are no longer a thing. (they aren't linked on the web page at least, in any case)
18:14:13 <roshi> that jives with what I last heard
18:14:27 <roshi> might even drop it completely for F23
18:14:34 <roshi> things are looking good here
18:14:39 <mattdm> yeah same here
18:14:46 <jreznik> great!
18:14:48 <roshi> though I'd love to have more time to leave it running
18:14:56 <roshi> push it a little
18:15:38 <mattdm> roshi: a minute costs the same as an hour :)
18:16:04 <roshi> true
18:16:24 <sgallagh> During a meeting, they feel like the same thing
18:16:40 <roshi> I mean more of "I'd like cloud testing to be longer running processes that get *used* instead of simple tests."
18:17:26 <jreznik> what's the average time a cloud instance lasts? I read somewhere it's a minute or so
18:17:48 <roshi> depends on usecase
18:18:02 * jreznik understands what roshi means, but then we should have the same for everything; realistically, though...
18:18:22 <roshi> when I was doing web development, I'd run a cloud instance in DO or something and it ran until they no longer wanted their website
18:18:25 <dgilmore> if only we could test years uptime for everything :P
18:18:38 <roshi> well, when you put it like that...
18:18:38 <dgilmore> we can all run a 2.0 kernel
18:19:00 <jreznik> dgilmore: and find bug and try to reproduce it waiting for years!
18:19:10 <dgilmore> jreznik: exactly
18:19:38 <jreznik> ok, but time is running out - it's getting late here on Friday... so where are we now?
18:19:49 <dgilmore> cloud seems okay?
18:19:53 <nirik> on to decision?
18:19:54 <jreznik> yep
18:20:17 <roshi> from what I can tell
18:20:25 <jreznik> ok
18:20:53 <jreznik> #topic Go/No-Go decision
18:21:48 <kparal> I should mention that technically we're still blocked on the liveusb-creator bug, which we decided to remove from the docs if the fix doesn't make it
18:21:49 * mattdm gets ready to delete pending magazine article about delay
18:21:59 <kparal> the good news is - a new fix was published and it seems to work
18:22:05 <sgallagh> http://giphy.com/gifs/dr6toZX3D1O8
18:22:08 <kparal> so it just needs a package and an update
18:22:17 <jreznik> kparal: ah, I forget
18:22:29 <nirik> I'm +1 to go. :) Lets ship those bits.
18:22:30 <jreznik> kparal: we need to a) get an upstream release and package it, or b) patch it and build it
18:22:32 <mattdm> Suggestion: remove from docs until update is live, then revert once it goes live?
18:22:39 <kparal> we also need new spin-kickstarts, but that's just a technicality, as I understand it
18:22:41 <jreznik> mattdm: +1
18:22:51 <roshi> same here kparal
18:22:54 <maxamillion> http://bit.ly/1Hzjocd
18:23:04 <nirik> kparal: yep. As soon as we are go we can make that... but karma would be helpful after it's submitted.
18:23:22 <kparal> ok
18:23:34 <maxamillion> nirik: I can help with karma also if you're going to do the package build
18:23:43 <nirik> it would be good to push that, kernel and libblockdev all stable today.
18:23:46 * maxamillion thought he was on the hook for spin-kickstarts
18:23:55 <nirik> maxamillion: you can do the package thats fine with me. ;)
18:23:55 <maxamillion> nirik: +1
18:24:20 <dgilmore> nirik: indeed, then I can prepare the Everything repo and disable branched tomorrow
18:24:30 <maxamillion> nirik: either way, I don't have any desire to attempt to lay claim to it ... :)
18:24:39 <dgilmore> nirik: really it is a must they go stable today
18:24:57 <nirik> well, bruno usually does them, but I think there's a SOP
18:25:12 <nirik> anyhow, we can coordinate all that once we are go. ;)
18:25:53 <sgallagh> For the record, I vote "Go"
18:25:53 * mattdm is not opposed to early coordination :)
18:26:16 <jreznik> dgilmore: is that a go for releng? and QA too, kparal, roshi?
18:26:38 <roshi> looks like all the boxes are checked
18:26:47 <roshi> I's dotted and T's crossed
18:26:48 <kparal> except for fcoe and xen
18:26:51 <dgilmore> releng is go
18:27:00 <kparal> I believe we're fine with that
18:27:11 <kparal> QA is Go
18:27:19 <maxamillion> nirik: +1
18:27:30 <jreznik> proposal #agreed Fedora 22 Final status is Go by Release Engineering, QA and Development
18:27:36 <nirik> ack
18:27:40 <kparal> ack
18:27:56 <sgallagh> ack
18:28:00 <mattdm> The FPL casts an honorary figurehead "Go" vote too :)
18:28:01 <dgilmore> ack
18:28:11 <roshi> ack
18:28:26 <jreznik> #agreed Fedora 22 Final status is Go by Release Engineering, QA and Development
18:28:44 <nirik> hurray. Thanks everyone for all the hard work
18:28:51 * jreznik should do the last honorary Go vote too but it's too late!
18:29:02 <roshi> everyone++
18:29:13 <jreznik> yep, thanks and I'd say good night to many of us :)
18:29:37 <jreznik> #action jreznik to announce Go decision
18:29:44 <jreznik> #topic Open floor
18:30:06 * sgallagh passes out cigars
18:30:14 <jwb> bad for your health
18:30:26 * roshi passes out the cigar cutter
18:30:28 <striker> possible to get f22 cds in time for SELF?
18:30:32 <jforbes> so are more meetings
18:30:40 <sgallagh> *all testers simply pass out*
18:31:01 <sgallagh> striker: When is SELF? And what do you need?
18:31:12 <striker> June 12-14
18:31:12 <sgallagh> The ISOs are all available; they'll be unchanged from the RC3 content
18:31:25 * jreznik is not going to prolong it here longer than needed
18:31:29 <striker> ack - was looking for some nice branded ones :)
18:31:30 <sgallagh> striker: Contact the regional ambassador
18:31:31 <jreznik> 3...
18:31:35 <striker> sgallagh: ok
18:32:17 <maxamillion> you are all rockstars, thank you so much for all the amazing work! :D
18:32:44 * lkiesow would like to thank all of you for the hard work!
18:33:02 * jwb notes we should start working on f23 now
18:33:03 <mattdm> yes! thank you so much everyone!
18:33:07 <mattdm> jwb++
18:33:15 * mattdm notes that we already are!
18:33:19 <nirik> on to f23!
18:33:21 <jwb> (yes, i am a terrible person and taskmaster)
18:33:22 <jreznik> jwb: f23 already started!
18:33:35 <jreznik> (for many people)
18:34:01 <jreznik> 2...
18:34:16 <dgilmore> f23 started at f22 branching
18:35:06 <mattdm> anyway. advance thanks to all of you working the weekend and monday to make sure the bits get where they need to go!
18:35:09 <kparal> thanks everyone for doing a stellar job testing
18:35:23 <jreznik> mattdm: +1!
18:35:59 <jreznik> as I said, I'll try to help as much as I can on Monday - if something is still needed for the announcement etc.
18:36:04 <jreznik> 1...
18:36:17 <jreznik> thanks again!
18:36:22 <jreznik> #endmeeting