16:04:24 <roshi> #startmeeting F21-blocker-review
16:04:24 <zodbot> Meeting started Wed Dec  3 16:04:24 2014 UTC.  The chair is roshi. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:04:24 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
16:04:24 <roshi> #meetingname F21-blocker-review
16:04:24 <zodbot> The meeting name has been set to 'f21-blocker-review'
16:04:25 <roshi> #topic Roll Call
16:04:36 <roshi> who's around to knock out these blockers?
16:04:36 * pschindl is here
16:04:40 * kparal is here
16:04:43 * satellit listening
16:04:50 <roshi> #chair kparal pschindl satellit adamw
16:04:50 <zodbot> Current chairs: adamw kparal pschindl roshi satellit
16:05:25 <roshi> adamw: you around?
16:06:48 * nirik is lurking around
16:06:54 <roshi> well, we can move forward
16:06:58 <roshi> welcome nirik :)
16:06:59 <roshi> #topic Introduction
16:07:00 <roshi> Why are we here?
16:07:00 <roshi> #info Our purpose in this meeting is to review proposed blocker and nice-to-have bugs and decide whether to accept them, and to monitor the progress of fixing existing accepted blocker and nice-to-have bugs.
16:07:04 <roshi> #info We'll be following the process outlined at:
16:07:06 <roshi> #link https://fedoraproject.org/wiki/QA:SOP_Blocker_Bug_Meeting
16:07:09 <roshi> #info The bugs up for review today are available at:
16:07:11 <roshi> #link http://qa.fedoraproject.org/blockerbugs/current
16:07:14 <roshi> #info The criteria for release blocking bugs can be found at:
16:07:16 <roshi> #link https://fedoraproject.org/wiki/Fedora_21_Alpha_Release_Criteria
16:07:19 <roshi> #link https://fedoraproject.org/wiki/Fedora_21_Beta_Release_Criteria
16:07:22 <roshi> #link https://fedoraproject.org/wiki/Fedora_21_Final_Release_Criteria
16:07:25 <roshi> we've got 2 proposed blockers and one proposed FE
16:07:27 <kparal> I think we could wait for adamw. I suppose it will be very interesting today
16:07:28 <roshi> #topic (1170153) anaconda gets stuck during creating a partition, when there is some existing partition after that one
16:07:31 <roshi> #link https://bugzilla.redhat.com/show_bug.cgi?id=1170153
16:07:33 <roshi> #info Proposed Blocker, anaconda, NEW
16:07:34 * jreznik is here but has only one hour...
16:07:38 <roshi> sounds good to me kparal
16:07:39 <nirik> kparal: yeah, likely to be a bit heated. ;)
16:07:51 <kparal> also, we need anaconda representatives here
16:07:56 <roshi> yeah
16:08:05 <roshi> think you can get some here?
16:08:09 <sgallagh> I'm here, and I've just come from a conversation with dcantrell about this.
16:08:25 <kparal> it would be still nice if they joined
16:08:29 <jreznik> sgallagh: any details before we start?
16:09:04 <sgallagh> Yes, David's position is that we revert the change that caused this blocker and document the original bug as a known issue in F21
16:09:10 <sgallagh> I agree with this.
16:09:40 <kparal> there were several suggestions on #anaconda
16:09:42 <sgallagh> He does not want his team spending their time on this.
16:09:58 <kparal> today it seems they don't want to spend time on any of the blockers
16:10:05 <kparal> putting it mildly
16:10:19 * nirik re-reads the bug.
16:10:44 <sgallagh> kparal: The blocker process has worn them out, they're not going to work on a firedrill schedule and I can't blame them.
16:10:49 <roshi> are they going to come to the meeting?
16:11:03 <kparal> the ideas currently include: a) do nothing and risk user data loss b) restrict users in visiting the spoke again c) rescan all disks on every visit, thus throwing away all pending changes
16:11:19 <nirik> so, isn't this a pretty corner case? where would someone hit it? and it was possible in f20?
16:11:37 <kparal> sgallagh: I understand that, on the other hand we are in exactly the same position. and this cycle there has been the lowest number of blockers ever
16:11:42 <kparal> or, in a long time
16:11:48 <jreznik> sgallagh: well, everyone invests a lot of time into the release and prolonging makes it more work for everyone :(
16:12:10 <sbueno> kparal: we don't want to spend time on blockers today because tomorrow it would be the same thing all over again
16:12:33 <sbueno> and again, and again, ad infinitum
16:12:48 <kparal> nirik: this one is very easy to hit. you just need to be creating a partition before some other existing partition. therefore in the middle of the disk, or e.g. replacing one partition with a different one (while there is some persistent partition after that one)
16:12:57 <sbueno> yes, this is our lowest number of blockers this release -- you can say that all you want. that does *not* mean it has been less stressful
16:13:15 <kparal> so, should we stop releasing fedora?
16:13:18 <kparal> don't really understand
16:13:30 <sbueno> you don't understand that you have to draw the line somewhere and not drive developers into the ground?
16:13:41 <sbueno> you can not fix every bug. that is just impossible
16:13:45 <pschindl> let's stop testing anaconda and release it as it is. Best solution. No work needed.
16:13:48 <kparal> well, if we fix something and break something else, we can't just ignore it
16:14:00 <jreznik> sbueno: I don't want to argue but then, we can ship just some snapshot of release and completely resign from any testing... and as kparal said, this time it was really nice release compared to others and it still is, blockers were all real blockers (except that high contrast icons and we raised it to the team)
16:14:10 <kparal> we need to fix blockers = major bugs, not every bug
16:14:44 <sbueno> but you're nominating something as a blocker everyday
16:14:47 <kparal> data loss is pretty high on my list
16:15:02 <kparal> well, sorry, it's still broken
16:15:07 <kparal> it's my job
16:15:17 <kparal> over and over again, ad infinitum
16:16:19 <jreznik> well, let's go back to the topic - and find solution to make it lower burden for everyone, qe, anaconda, all great folks doing releases
16:16:28 <roshi> +1
16:16:32 <kparal> if we're really in a state where development teams can't keep up with the pressure anymore, maybe we should strike out a lot from our criteria. but data loss should still be there, imho
16:16:46 <roshi> data loss is a big deal
16:17:03 <nirik> well, there's lots of things that can cause data loss on an install... especially if you have lots of other stuff on the disks you are manipulating.
16:17:06 <roshi> kparal: with this bug, does manual fiddling with the disk allow things to start working?
16:17:52 <nirik> this bug was possible/there in f20? or only with multiple disks?
16:18:00 <kparal> roshi: anaconda freezes during partition creating, therefore the system is not installed at all. my existing systems were somehow made unbootable during the process on one of the machines, but I haven't had time to investigate why
16:18:14 <roshi> ok, that's what I was wondering
16:18:21 <kparal> nirik: that would be 1166598, not this one
16:18:27 <kparal> fix for 1166598 caused this one
16:18:30 <kparal> this one is worse
16:18:43 <kparal> that's the reason why anaconda devs proposed to revert that fix
16:18:58 <kparal> and release with bug 1166598 present
16:19:05 <kparal> (and 1170153 fixed)
16:19:12 <nirik> right.
16:19:44 <kparal> correction, it's quite hard to define which one is worse. each of them has a specific use case where things might go wrong
16:19:48 <kparal> potentially to data loss
16:20:13 <kparal> but if we revert the fix, we can still mitigate 1166598 somehow
16:20:24 <kparal> I'll re-paste the suggestions
16:20:32 <kparal> the ideas currently include: a) do nothing and risk user data loss b) restrict users in visiting the spoke again c) rescan all disks on every visit, thus throwing away all pending changes
16:20:49 <nirik> I think b and c are off the table...
16:20:53 <kparal> my impression is that anaconda devs strongly support a)
16:20:57 <jreznik> d) slip and let anaconda team time to fix both bugs properly
16:20:58 <kparal> nirik: why is it?
16:21:09 <kparal> jreznik: they say it's not that simple
16:21:18 <kparal> but I'd like them to talk about it
16:21:33 <nirik> I think it's a) mark this not blocker somehow and ship or b) revert 1166598 and ship c) slip and try and do something else.
16:22:08 <nirik> kparal: my understanding is that they wish to do b) and don't want to spend time working on your other options at this time.
16:22:13 <sgallagh> b) is the most appealing choice, honestly.
16:22:21 <kparal> nirik: yes
16:22:31 <jreznik> for c) it would mean getting a fix in less than a week, pretty unlikely, so it would probably lead to January
16:22:37 <kparal> in that case, we could potentially still ship tomorrow
16:22:41 <nirik> jreznik: yes
16:22:45 <roshi> if the choices are just those three, b would be best afaict
16:22:46 <sgallagh> The revert puts us in a better situation than we have now
16:22:51 * mattdm drops in to +1 to shipping tomorrow :)
16:22:58 <sgallagh> The remaining issue can be documented
16:23:06 <kparal> if we revert the patch *and* try to mitigate (not fix) 1166598, it's going to take more time probably
16:23:11 <jreznik> there's one more bug... so it's still not tomorrow :)
16:23:13 <Corey84-> .fasinfo Corey84-
16:23:14 <zodbot> Corey84-: User "Corey84-" doesn't exist
16:23:28 <kparal> jreznik: sure, we need to vote on it as well
16:23:28 <sgallagh> jreznik: I'm going to push for FESCo to vote to remove the dual-boot criterion
16:23:40 <nirik> note that the revert is actually in anaconda, blivit and pyparted I think... so it's not just a simple one liner
16:23:46 <sgallagh> (As a blocker, rather than a very-nice-to-have)
16:23:57 <Corey84-> + 1  sgallagh
16:24:00 <sgallagh> nirik: It's just blivet, according to dcantrell
16:24:10 <jreznik> sgallagh: on the other hand, we have a lot of users who ask for even more dual boot support... it will definitely shrink our user base
16:24:17 <nirik> oh? there were updates for the others in there... possibly because it was the same bodhi update I guess.
16:24:32 <kparal> let's not discuss dual boot now, it's off topic
16:24:33 <mattdm> jreznik This is just a case of "we don't yet support dual boot with a certain os"
16:24:37 <sgallagh> jreznik: I'm not saying we don't try to do it. I'm saying we don't block on it
16:24:41 <kparal> this is a different bug
16:24:41 <nirik> 1166598 is annoying to read due to the cloning. ;(
16:24:44 * mattdm shuts up
16:24:57 * satellit_e Dual Boot : Use a usb ext disk...boot from it
16:25:29 <jreznik> mattdm: you mean with both mac and windows? :) not speaking about other linuxes
16:25:39 <mattdm> jreznik: later :)
16:26:16 <jreznik> but I understand trying to support too much if we don't have resources to do it - it's better to admit it
16:26:18 <kparal> dcantrell: can you tell us what you think about reverting the fix for 1166598 and then rescanning all disk every time you enter storage spoke, to avoid 1166598?
16:26:21 <nirik> ok, I am in favor of reverting the fix for 1166598 and doing another rc I guess.
16:27:03 <Corey84-> another rc ? :(   but im with your logic
16:27:03 <kparal> I'm for reverting fix for 1166598 provided we at least somehow protect the users from it
16:27:06 <sgallagh> For a formal vote: Proposal: Revert the fix for 1166598 and spin RC5. No additional changes.
16:27:07 <dcantrell> kparal: reverting the fix is the least risky option at this point.  introducing new code introduces an unknown amount of risk.  in discussion the solution sounds nice and it may even work, but now is not the time to be introducing new things
16:27:15 <Corey84-> +1
16:27:29 <dcantrell> reverting the patch for 1166598 is the safest.  document the limitation as a known issue and move on
16:27:31 <kparal> dcantrell: what about disallowing users to enter the spoke again?
16:27:34 <jreznik> dcantrell: if we slip and give the team enough time to proper fix this issues, how much time do you expect we would need (and if you will be willing to do it?)
16:27:48 <dcantrell> kparal: departure from established workflow, which I would argue is an RFE
16:27:49 <kparal> that would be a pretty easy fix
16:27:52 <dcantrell> we don't need to be doing that either
16:28:06 <kparal> well, it would be to protect users from sounds not complicated
16:28:10 <Corey84-> kparal,   not all users are aware to just reboot  if they actually bugger it up first time tho
16:28:12 <kparal> from 1166598
16:28:21 <kparal> sorry, bad ctrl+v
16:28:29 <nirik> sgallagh: we shouldn't vote for rc5 yet, we have another blocker proposed after this one. ;)
16:28:35 <dcantrell> jreznik: unknown amount of time required, extremely risky given that we are in December.  I am not prepared to commit the team to that work
16:28:54 <sgallagh> nirik: If we revert this, RC5 is a given
16:29:01 <jreznik> dcantrell: I understand it would very likely lead to January
16:29:02 <Corey84-> ^
16:29:05 <sgallagh> I didn't say "instantly"
16:29:22 <Corey84-> sgallagh,  when then ?
16:29:26 <kparal> what about a big fat warning displayed the second time you enter the spoke?
16:29:31 <kparal> still too complex?
16:29:31 <dcantrell> jreznik: the proper solution for this is to bake it in rawhide and fix it in f22.  it's a complex problem with no quick fix
16:29:41 <dcantrell> kparal: UI changes now?  not a good idea
16:29:48 <Corey84-> kparal,  not likely but   confusing likely
16:29:51 <sgallagh> Corey84-: I just mean that one will have to happen as soon as possible. Don't read too much into it
16:29:55 <nirik> sgallagh: I'd rather say: +1 this being a blocker, fix is to revert fix for 1166598 and document that issue/-1 blocker it.
16:29:59 <kparal> dcantrell: better than partitioning changes, don't you think?
16:30:09 <jreznik> kparal: no, I'm not sure it's what we want now... common bugs and document it
16:30:18 <sgallagh> nirik: I'm fine with however it is phrased
16:30:25 <kparal> ugh
16:30:26 <Corey84-> kparal, sure
16:30:40 <dcantrell> kparal: no
16:30:49 <sgallagh> kparal: We can't fix every bug all the time. It's not ideal, but it's reality.
16:31:10 <kparal> sgallagh: agreed. that's why we have release criteria, to distinguish them.
16:31:45 <Corey84-> so its just the reentry into the spoke that bugs out yes?  (dont have the bz  in front of me  atm)
16:31:51 <sgallagh> Yes
16:32:28 <sgallagh> kparal: Sure, but the criteria are written somewhat ambiguously, and at times like this I think it's perfectly reasonable to play the "Okay, let's document that and move on" card
16:32:49 * nirik wonders if anyone has a way to wake the adamw.
16:33:14 <kparal> we have done this many times in the past. but those were minor bugs. this is potential data loss
16:33:27 <kparal> I'm not happy about it
16:33:34 <roshi> can you get the data off manually?
16:33:35 <sgallagh> kparal: It's an OS installer. That's *always* a risk.
16:33:48 <roshi> is the data "lost" or "hard to get?"
16:34:22 <mattdm> kparal: if it makes you feel better, there's probably plenty of other data loss bugs we just didn't happen to discover yet.
16:34:23 <jreznik> sgallagh: there's always risk but that doesn't mean we should make that risk too high
16:34:46 <kparal> roshi: anaconda can delete a wrong partition, other than you selected
16:34:52 <dcantrell> do we have any actual numbers on the people this problem affects (if 1166598 is reverted)?  what is the likelihood of users hitting this problem?  or are we all just speculating?
16:34:54 <jreznik> well, definitely this bug sounds worse than the original one, so
16:34:57 <nirik> wait, are we talking about just documenting this one and -1'ing it?
16:34:59 <sgallagh> jreznik: Of course, but I'm making the personal judgement that it's not too high in this case
16:35:00 <kparal> from comment 0: "3. There is a high risk of removing partition which was supposed to be kept."
16:35:43 <kparal> dcantrell: that's hard to tell. only a fraction of affected people will report it
16:35:57 <jreznik> nirik: I think it's revert this one as it's worse and document the original one as less likely happening? or maybe I'm already lost :)
16:36:15 <sgallagh> jreznik: Yes, that's my take
16:36:55 <jreznik> it's fudge but seems like the only way to release this year for me
16:36:56 <Corey84-> sgallagh,  how can it delete a  non declared partition tho
16:37:26 <sgallagh> Corey84-: We don't need to investigate the code here. We just need to decide how to proceed.
16:37:37 <nirik> FWIW, I have seen 0 reports of 1166598 in the wild, but it could be most people just give up and wipe everything instead of reporting or seeking help.
16:37:46 <Corey84-> sgallagh,  wasn't  suggesting a code review lol
16:38:42 <dcantrell> nirik: so with 0 reports of it in the wild, I find it hard to believe that it should get the status that is has received
16:38:52 <sgallagh> nirik: I don't have numbers to back this up, but I think most people try Fedora out with VMs or Lives these days and don't generally install locally until they're ready for it to take over completely.
16:39:06 <dcantrell> my position is still to revert 1166598 and document the problem as a known issue and how to not hit it
16:39:11 <sgallagh> So if they hit this, they probably just try another VM
16:39:20 <sgallagh> dcantrell: I still completely agree.
16:39:28 <nirik> sure, it's all speculation really. ;)
16:39:47 * nirik is also with dcantrell and sgallagh.
16:39:55 <Corey84-> im with dcantrell  on that   common bug and doc
16:40:22 <jreznik> sgallagh: I can't agree with that VM/Live (and to be honest, I recommend it to many folks who wants to try Fedora as best/safe way)
16:40:51 <sgallagh> jreznik: Sorry, I couldn't parse that. What do you recommend to people?
16:40:51 <kparal> I'm for reverting, but I'd like to see *some* improvement to protect users from de-fixed 1166598
16:41:19 <sgallagh> kparal: I just don't think that's likely to happen in this release. Certainly not if we want to ship in 2014
16:41:21 <jreznik> but without commitment from anaconda team, I don't think we have many options here, so revert
16:41:32 <roshi> ok, so to stick with the order of the meeting
16:41:41 <roshi> votes on this bug as a blocker for F21?
16:41:42 <jreznik> sgallagh: to install in vm or use live
16:42:05 <sgallagh> jreznik: It sounds like you were agreeing with me, then.
16:42:57 <sgallagh> Proposal (again): 1170153 is a blocker. Agreed resolution is to revert the fix for 1166598.
16:43:21 <mattdm> sgallagh +1. It sucks, but let's document it, ship it, move on
16:43:28 <sgallagh> +1
16:43:28 <Corey84-> +1
16:43:36 <kparal> we also need to agree that 1166598 not a blocker, or discuss it separately
16:43:41 <kparal> *is
16:43:53 <kparal> just patching the proposal
16:43:59 <Corey84-> +1 on 1166598
16:44:06 <nirik> +1
16:44:20 <dcantrell> +1
16:44:36 <roshi> we can discuss the other bug next
16:44:49 <jreznik> +1
16:45:21 <kparal> roshi: so let's do proposed #agreed
16:45:28 <roshi> working on it :)
16:45:31 <kparal> ok
16:46:09 <roshi> proposed #agreed - 1170153 - AcceptedBlocker - This bug is a clear violation of the Windows dual boot criterion and can lead to data loss.
16:46:44 <sgallagh> roshi: Uh, what?
16:46:52 <sgallagh> I think you may have jumped ahead...
16:47:02 <roshi> what do you mean?
16:47:07 <sgallagh> Sorry, I misread
16:47:11 <nirik> ack
16:47:13 <roshi> we're discussing 1170153 and if we should block on it
16:47:13 <sgallagh> ack
16:47:15 <Corey84-> ack
16:47:20 <jreznik> ack
16:47:26 <roshi> then discussing 1166598
16:47:28 <sgallagh> roshi: Sorry, I got confused with the *other* dual-boot bug that got opened.
16:47:35 <roshi> did I miss something?
16:47:40 <roshi> ah
16:47:41 <sgallagh> No, I did. Carry on
16:47:42 <roshi> ok :)
16:47:53 <roshi> #agreed - 1170153 - AcceptedBlocker - This bug is a clear violation of the Windows dual boot criterion and can lead to data loss.
16:48:25 <roshi> now we can talk about the one proposed to revert and document
16:48:26 <roshi> #topic (1166598) going back to installation destination picker swaps partitions on disks
16:48:29 <roshi> #link https://bugzilla.redhat.com/show_bug.cgi?id=1166598
16:48:31 <roshi> #info Accepted Blocker, anaconda, VERIFIED
16:48:44 <sgallagh> Proposal: Remove blocker status, revert this fix.
16:49:55 <pschindl> propose as f22 blocker ;)
16:49:55 <nirik> yeah... 'current fix causes worse issues and no better fix is available in near term, so remove blocker status and document'
16:49:59 <kparal> actually, 1170153 was not really about windows, and I've found that it works with windows in most cases.
16:50:07 <roshi> votes on reverting blocker status for this bug? we'll also have to provide a really clear justification in the bug itself
16:50:21 <kparal> but let's discuss 1166598 now
16:50:22 * adamw wakes up, reads back
16:50:31 <sgallagh> adamw: Trust me, go back to bed.
16:50:31 <mattdm> +1 remove blocker status with nirik's justification
16:50:32 <nirik> hey adamw. welcome to the fun. ;)
16:50:39 <roshi> proposal: give adam 5 minutes to catch up?
16:50:43 <nirik> sure
16:50:47 <kparal> +1
16:50:52 <adamw> eh, i'm not that important
16:50:53 <kparal> to 5 minutes
16:50:58 <adamw> what *exactly* is the data loss scenario for this bug?>
16:51:05 * adamw never actually hit it himself
16:51:15 <jreznik> you may remove partition you don't want to?
16:51:16 <kparal> adamw: visit storage spoke twice. partitions can get swapped numbers
16:51:26 <sgallagh> adamw: If you have existing partitions, you might remove the wrong one without being aware of it
16:51:50 <kparal> I'm not sure if it affects custom part. definitely guided part
16:52:13 <kparal> comment 10 has a reproducer
16:52:15 <Corey84-> not seen it in custom   (  my  forte)
16:52:25 <mattdm> kparal: does the confirm dialog show the right info?
16:52:51 <kparal> mattdm: the reclaim dialog shows partitions, but labels like 'vda1' and 'vda2' are swapped
16:52:57 <kparal> it is hardly noticeable
16:52:57 <adamw> you dont' get a confirm dialog on guided
16:53:11 <mattdm> ah. (I apparently never do "guided")
16:53:19 <kparal> yeah, no confirm dialog, just reclaim dialog
16:53:36 <mattdm> adamw: in the bug long ago, you said "My inclination is to vote -1 blocker on a bug which involves running through the spoke multiple times and changing your mind, at this point."
16:53:51 <mattdm> is that still basically true?
16:54:01 <adamw> mattdm: at that point i think the impact was believed lower
16:54:08 <adamw> has anyone checked if this happened in f20?
16:54:26 <kparal> my last comment never arrived to bugzilla for some reason. this happens in F20, but only for multi disk scenarios. I couldn't reproduce it with single disk scenario
16:54:26 <mattdm> kparal says that it is new in f21
16:54:44 <kparal> ah, I added this comment to a wrong bug
16:55:02 <Corey84-> imo if you have to enter the spoke more than twice you need to preplan deployment better
16:55:10 <kparal> fixed now
16:55:34 <adamw> Corey84-: there's that, but then there's also the fact that, well, we built this whole hub and spoke thing which expressly allows you to do that
16:55:43 <sgallagh> Right, I think the likelihood of re-entering the spoke is sufficiently small as to not be worth blocking on.
16:55:53 <kparal> I often do that
16:56:05 <kparal> just checking whether I set everything right
16:56:06 <adamw> it seems impolite to say "we built something that's clearly designed to allow you to go through spokes multiple times, but we're going to say any data-eating bugs that happen when you do aren't important". kind of a dissonance there.
16:56:07 <mattdm> kparal: yeah but you're trying to break it :)
16:56:08 <sgallagh> kparal: Yeah, but you're schizophrenic ;-)
16:56:11 <Corey84-> adamw,  not discounting that at all but when is too many times tho
16:56:17 <kparal> thanks guys
16:56:23 <sgallagh> kparal: You're welcome!
16:56:42 <kparal> but this time I meant real life scenario. installing on your home machine, along your precious data
16:56:44 <roshi> by design, there shouldn't be "too many times"
16:56:49 <kparal> you want to be sure you set everything right
16:56:52 <Corey84-> on a custom or guided i can see a second reentry but more than that is iffy imo
16:56:54 <sgallagh> adamw: I agree with you, but on the other hand, I don't know that it's a strong enough reason to block
16:57:11 <adamw> the second time through of kparal's single-disk reproducer is a *bit* pathological, though i guess you can do it by mistake
16:57:14 <jreznik> cautious people can hit it more likely by triple checking and retrying configuration several times, so they are in the end more likely to lose data :D
16:57:15 <sgallagh> And anyway, the discussion is somewhat moot, since the anaconda folks don't want to engineer a solution at this point.
16:57:25 <sgallagh> So it's rather academic, IMHO
16:57:28 <kparal> adamw: to summarize, anaconda devs say they can't fix this and 1170153 at the same time
16:57:30 <Corey84-> fair nuff adamw
16:57:38 <adamw> i guess i'd say that in a perfect world with perfect adherence to our policies i'd want us to block on this and slip for however long anaconda wanted to be happy they could fix both bugs properly
16:57:45 <sgallagh> This is the lesser of the two bugs here
16:57:45 <roshi> I would forsure propose this as an F22 blocker even if we revert it now
16:57:57 <jreznik> sgallagh: switch to calamares in f21? :)
16:57:59 <nirik> sgallagh: agreed.
16:58:10 <adamw> in an imperfect world where anaconda folks are sick to death and everyone else wants to be out of here for christmas i can be ok with not fixing it, i guess.
16:58:17 <sgallagh> .fire jreznik
16:58:17 <zodbot> adamw fires jreznik
16:59:06 <roshi> it sounds like reverting this and documenting it is the best course of action we have
16:59:08 <mattdm> +1 imperfect world. It's not just the anaconda team -- a lot of people want to get the release into the hands of users
16:59:26 <adamw> i'd like that too, but i'd also like it to be good =)
16:59:28 <sgallagh> yes
16:59:33 <roshi> even if we had someone who could patch both *now* we'd still be pressed for time to test thoroughly
16:59:50 <Corey84-> +1 revert and doc  again here is fine with me
16:59:57 <jreznik> roshi: yep, fix for this issue definitely means January
16:59:59 <adamw> roshi: well, if we're reverting something we need to re-test thoroughly.
17:00:07 <sgallagh> jreznik: If not February, yes.
17:00:09 <dcantrell> jreznik: jokes about calamares and other projects in fedora won't get us to take this entire process seriously like you ask us to.  I ask that you recognize the amount of work that we put in to the installer that everyone continually badmouths and always has
17:00:18 <roshi> yeah, but it takes less time to revert and test than to code build and test
17:00:40 <adamw> so, i'm gonna say +/-0 on this, but i'm ok with an overall -1.
17:00:49 <jreznik> dcantrell: well, I appreciate your work on anaconda and yesterday I repeated it like several times you did great job
17:00:57 <kparal> I'm not, but if there's no one to fix it, what can we do
17:01:13 <dcantrell> jreznik: thank you
17:01:30 <roshi> we're all friends here and we all want a solid release
17:01:35 <Corey84-> im for a fix  if we can test in time but not to block it
17:01:36 <nirik> -1 after reverting the fix, document as best we can and try and fix better for f22.
17:01:39 <roshi> assume good faith and all that :)
17:01:40 <kparal> I don't think a warning dialog would be that hard to code
17:01:50 <sgallagh> We're not friends, we're family. Families fight sometimes :)
17:01:51 <Corey84-> nirik, +1
17:01:58 <dcantrell> kparal: what part of code freeze don't you understand?
17:02:00 <roshi> +1 sgallagh :)
17:02:08 <jreznik> roshi: solitaire release? ;p
17:02:08 <dcantrell> sgallagh: nice  :)
17:02:45 <kparal> dcantrell: I probably don't understand your reply
17:02:53 * nirik asks sgallagh: "are we there yet!?"
17:03:12 <sgallagh> nirik: No, and stop teasing your sister.
17:03:13 <dcantrell> kparal: 12:00 < kparal> I don't think a warning dialog would be that hard to code
17:03:20 <sgallagh> So, back to the problem at hand.
17:03:35 <sgallagh> Are we agreed on reverting, documenting and fixing *early* in F22?
17:03:41 <kparal> it's not a fix, but it would at least help a bit. just a fraction of people will read commonbugs
17:03:42 <roshi> are we ready for votes on this? or could people still use some convincing?
17:03:42 <nirik> right, I think we are in broad agreement here, someone craft a proposal?
17:03:52 <nirik> sgallagh: +1
17:03:58 <roshi> I'll do it nirik
17:04:08 <roshi> votes first though :)
17:04:22 * nirik goes to get coffee.
17:04:23 <roshi> +/- 0, ok with general -1 if that's how people vote
17:04:42 <dcantrell> kparal: not disagreeing, but we either have a code freeze or not.  which means problem solving after a code freeze means working with the tools we have limited ourselves to.  such as reverting or documenting problems
17:04:47 <pschindl> +/- 0 from me too. I don't like it.
17:05:06 <jreznik> +1 to revert, document (to be clear) - I don't see way to get fix anytime soon and seems like even trying to mitigate it could lead to more issues (code changes, anaconda team burn out)
17:05:14 <Corey84-> +1 for proposing revert doc and early fix
17:05:38 <roshi> freeze means not adding new things unless something is broke - it's the point of freeze aiui
17:06:01 <roshi> ok, 2 +1 and 2 +/-0
17:06:08 <adamw> dcantrell: i don't know if you mean anaconda or fedora, but fedora doesn't have a 'code freeze' of that nature
17:06:32 <dcantrell> that is clearly evident, but it would be nice
17:06:38 <adamw> dcantrell: the codification of fedora's milestone freezes is 'only changes to fix blocker and freeze exception bugs will be accepted during these times'
17:06:47 <kparal> I'm abstaining and looking for some spirits
17:06:53 <sgallagh> I'm going to recommend that this is neither the time nor place for discussion of the freeze policy.
17:07:04 <roshi> true sgallagh
17:07:07 <roshi> votes?
17:07:10 <adamw> yeah, it was just a clarification, if we're going to discuss changing it that should happen elsewhere
17:07:22 <adamw> i'm assuming we're counting dcantrell as -1 ?
17:07:33 <roshi> well, +1 to revert this one
17:07:37 <roshi> aiui
17:07:59 <dcantrell> my position is still to revert 1166598 and document the problem and workaround
17:08:09 <dcantrell> however that works on the voting number line
17:08:12 <roshi> +1 for dcantrell :)
17:08:29 <roshi> ok, 3 +1 and 2 +/-0, one abstain
17:09:02 <Corey84-> +1 dcantrell
17:09:13 <sgallagh> If I wasn't counted, I'm +1 to my own proposal.
17:09:15 <adamw> wait, what's the proposal?
17:09:16 <Corey84-> still +1 rather
17:09:26 <adamw> oh, sgallagh's. gotcha.
17:09:28 <sgallagh> (12:03:22 PM) sgallagh: Are we agreed on reverting, documenting and fixing *early* in F22?
17:09:36 * nirik is +1
17:09:45 * Corey84- +1
17:09:59 <adamw> assuming a vote of -1 on the blockeriness of the bug, yes.
17:10:05 <roshi> proposed #agreed - 1166598 - RejectedBlocker - The provided fix for this bug caused a larger issue. At this point in the release it's better to revert and document the problem clearly. Repropose this as a F22 Alpha blocker to get a fix early in the next release.
17:10:10 <adamw> +1
17:10:14 <nirik> +1
17:10:16 <sgallagh> roshi: ack
17:10:16 <adamw> ack
17:10:20 <Corey84-> ack
17:10:37 <roshi> #agreed - 1166598 - RejectedBlocker - The provided fix for this bug caused a larger issue. At this point in the release it's better to revert and document the problem clearly. Repropose this as a F22 Alpha blocker to get a fix early in the next release.
17:10:51 <roshi> ok, next proposed blocker
17:10:51 <roshi> #topic (1170245) Win 8 UEFI don't start from grub: "error: cannot load image"
17:10:55 <roshi> #link https://bugzilla.redhat.com/show_bug.cgi?id=1170245
17:10:57 <roshi> #info Proposed Blocker, grub, NEW
17:11:00 <kparal> I updated the title
17:11:07 <kparal> the problem seems to be in secure boot
17:11:10 <adamw> man, i thought someone tested this.
17:11:12 <adamw> oh, SB.
17:11:13 <kparal> if I turn it off, everything works
17:11:18 * roshi will update all these bugs when the meeting ends
17:11:31 <kparal> and I'll repeat what pjones said on #anaconda
17:11:47 <kparal> <pjones> still unlikely to be fixed in F21 at all.
17:11:54 <kparal> <pjones> trouble is, if it worked before on a different machine, that would seem to imply that either a) that machine did not, in fact, have SB enabled, or b) the machine you're testing this on can't actually boot windows correctly
17:11:54 <kparal> <mjg59> pjones: Chainloading to something in the system db should work
17:11:54 <kparal> <mjg59> Anything else has no chance
17:11:54 <kparal> <mjg59> I think Suse have a patch that adds shim support to chainload
17:11:55 <kparal> <pjones> yeah
17:11:57 <kparal> <pjones> hence my last statement.
17:12:15 <kparal> I'm failing to find any criteria for SB
17:12:24 <roshi> I don't see one either
17:12:24 <nirik> I think we have one...
17:12:27 <adamw> it'd be a conditional violation of the windows dual boot install
17:12:27 <kparal> adamw: is it hidden under some generic term?
17:12:31 <adamw> the condition being 'sb enabled'
17:12:39 * nirik looks
17:12:43 <adamw> there's no explicit sb criterion iirc
17:12:58 <kparal> I don't see any
17:13:06 <kparal> https://fedoraproject.org/wiki/Fedora_21_Final_Release_Criteria#Windows_dual_boot
17:13:07 <roshi> me either
17:13:15 <sgallagh> proposal: Reject as blocker, document that installation with secure boot enabled may not work on all systems yet.
17:13:17 <mattdm> Suggestion: document this as "Dual boot not yet working with Win 8 UEFI with SecureBoot enabled", move on.
17:13:22 <adamw> "The installer must be able to install into free space alongside an existing clean Windows installation and install a bootloader which can boot into both Windows and Fedora. ", "The expected scenario is a cleanly installed or OEM-deployed Windows installation.", "This criterion is considered to cover both BIOS and UEFI cases."
17:13:39 <adamw> +1 sgallagh, when something can't be done it can't be done.
17:13:40 <roshi> do OEM installs have sb enabled?
17:13:47 <adamw> yes, usually.
17:13:53 <Corey84-> yep
17:14:00 <adamw> we can adjust the criterion, it's not evil to adjust the criteria in the face of harsh reality
17:14:01 <Corey84-> in 8 or 8.1 it is
17:14:04 <kparal> this particular machine had it off by default
17:14:08 <kparal> I enabled it before installation
17:14:20 <adamw> kparal: win8 oem boxes are required to have it on by default
17:14:21 <kparal> I've seen 2 more machines, all of them had SB off
17:14:28 <kparal> OTOH, none of them had Win8 preinstalled
17:14:31 <adamw> right
17:14:33 <Corey84-> newer OEM 8.1 machines are default SB on
17:14:35 <adamw> pre-win8 usually wouldn't
17:14:36 <kparal> I'm not sure how it looks when win8 is preinstalled
17:14:40 <adamw> but anyhow, it seems academic
17:14:56 <mattdm> adamw: yeah we shouldn't hang ourselves on new external factors
17:14:56 <roshi> well, like adamw  said, if it can't be done, it can't be done
17:15:05 <roshi> votes?
17:15:14 <nirik> we should try and narrow down the docs to the actual affected cases if we can.
17:15:16 <Corey84-> W8+ it's an MS-pushed requirement on clean OEM iirc
17:15:20 <sgallagh> I'm +1 to my proposal
17:15:26 <mattdm> +1 (although yes to narrowing down the docs as suggested)
17:15:27 <nirik> +1 to sgallagh's
17:15:34 <Corey84-> sgallagh,  +1   too
17:15:46 <nirik> and you can boot from efi ok still?
17:16:03 <kparal> nirik: yes I can
17:16:07 <Corey84-> efi iirc isn't the issue, it's sb that buggers it
17:16:12 <kparal> but not all machines have uefi boot menu
17:16:25 <adamw> kparal: just for the docs' sake, if you turn off SB *after installing* the dual boot starts working right away?
17:16:30 <nirik> right, but that should be in any documenting. ;)
17:16:30 <kparal> adamw: yes
17:16:33 <adamw> k.
17:16:43 <Corey84-> CSM mode is FINE, yes
17:16:47 <Corey84-> post or pre install
17:17:04 <jreznik> if it works post install, I see less push on getting this fixed
17:17:06 <Corey84-> even legacy first with sb on SHOULD work
17:17:13 <roshi> proposed #agreed - 1170245 - RejectedBlocker - This doesn't violate any specific release criterion. Document on common bugs that SB enabled dual boots might not work at this point. Workaround is to turn it off.
17:17:24 <sgallagh> roshi: ack
17:17:27 <Corey84-> +1
17:17:28 <jreznik> ack
17:17:30 <Corey84-> ack
17:17:35 <nirik> ack
17:17:45 <roshi> #agreed - 1170245 - RejectedBlocker - This doesn't violate any specific release criterion. Document on common bugs that SB enabled dual boots might not work at this point. Workaround is to turn it off.
17:17:55 <adamw> i'd phrase it as 'not serious enough violation of the windows criterion', but np.
17:18:08 <roshi> a fair point
17:18:20 <roshi> well, since we're rolling another RC regardless
17:18:26 <roshi> let's look at this FE
17:18:27 <kparal> ack
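For readers following the Secure Boot discussion above: whether SB is actually enabled can be checked from a running UEFI Linux system (kparal noted several test machines had it off by default). A minimal sketch; the efivar path and byte layout follow the UEFI spec, `mokutil --sb-state` reports the same information, and the helper names here are illustrative:

```python
# Sketch: detect Secure Boot state from a running Linux/UEFI system.
# The SecureBoot efivar holds 4 bytes of attribute flags followed by a
# single value byte: 1 = enabled, 0 = disabled.
from pathlib import Path

# GUID is the fixed EFI global variable GUID from the UEFI spec.
SECUREBOOT_EFIVAR = Path(
    "/sys/firmware/efi/efivars/"
    "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

def parse_secureboot(raw: bytes) -> bool:
    """Parse raw efivar contents: skip 4 attribute bytes, read the value."""
    if len(raw) < 5:
        raise ValueError("unexpected efivar length")
    return raw[4] == 1

def secure_boot_enabled() -> bool:
    """True if SB is on; False if off, or the system isn't booted via UEFI."""
    try:
        return parse_secureboot(SECUREBOOT_EFIVAR.read_bytes())
    except FileNotFoundError:
        return False  # BIOS/CSM boot, or efivarfs not mounted
```

On an affected dual-boot box this would report `True` until SB is disabled in firmware setup, matching the workaround in the #agreed above.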
17:18:41 <roshi> #topic (1169151) docker run fails with 'finalize namespace setup user setgid operation not supported'
17:18:41 * nirik hasn't looked, but is probably -1.
17:18:43 <roshi> #link https://bugzilla.redhat.com/show_bug.cgi?id=1169151
17:18:46 <roshi> #info Proposed Freeze Exceptions, docker-io, ON_QA
17:19:17 <Corey84-> not a docker guy but looks -1 to me
17:20:07 <nirik> this sounds like it's all 'nicer' in -4... but I see no reason that can't just be a 0 day
17:20:17 <adamw> honestly i have no clue what's going on here
17:20:30 <adamw> i keep asking for someone to just test -2 and -4 and tell me which one's better
17:20:42 <adamw> it seems like a simple request, but for some reason, no-one's done it
17:20:49 <jreznik> nirik: yep, for me it looks like 0 day
17:20:50 <sgallagh> -1 to an FE at this point.
17:20:50 <adamw> so, -1 on the basis of insufficient information.
17:20:53 <mattdm> ugh this is the first I've seen it
17:20:55 <roshi> jzb: you have any insight on this one?
17:20:58 <jreznik> -1 FE
17:20:58 <sgallagh> If there's no reason it can't be fixed in an update, leave it alone
17:21:00 <roshi> larsks: ^^
17:21:26 <Corey84-> -1
17:21:38 <nirik> -1 FE barring further info
17:21:58 <sgallagh> I'm -1 FE, period. No potentially destabilizing changes now, please.
17:22:06 <roshi> looks like it can be fixed with an update
17:22:44 <adamw> sgallagh: well, if someone said -2 was completely non-functional i'd consider it, but no-one has, so.
17:23:04 <Corey84-> if its a easy  0 day -1 FE  for sure
17:23:20 <sgallagh> Off to a meeting. Bye folks.
17:23:25 <roshi> proposed #agreed - 1169151 - RejectedFreezeException - Based on the information we have on hand this looks like it can be fixed with an update. No need for an exception to freeze.
17:23:29 <roshi> later sgallagh
17:24:07 <nirik> ack
17:24:18 <Corey84-> ack
17:24:26 <jreznik> thanks sgallagh, /me is supposed to be in another meeting but priorities are priorities :)
17:24:29 <jreznik> ack
17:24:41 <roshi> #agreed - 1169151 - RejectedFreezeException - Based on the information we have on hand this looks like it can be fixed with an update. No need for an exception to freeze.
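adamw's unanswered request above (run the same container under docker-io -2 and -4 and say which is better) amounts to a smoke test. A rough sketch of that triage; the error string is taken from the bug title of 1169151, while the function names and the busybox test image are illustrative:

```python
# Sketch: smoke-test `docker run` and flag the setgid failure from
# bz#1169151 so the -2 vs -4 comparison adamw asked for is mechanical.
import subprocess

# Failure string from the bug title.
SETGID_ERROR = "finalize namespace setup user setgid operation not supported"

def classify_run(stderr: str) -> str:
    """Map docker stderr to a rough verdict for blocker triage."""
    if SETGID_ERROR in stderr:
        return "hits-1169151"
    if stderr.strip():
        return "other-failure"
    return "ok"

def smoke_test(image: str = "busybox") -> str:
    """Run a trivial container and classify the result."""
    proc = subprocess.run(
        ["docker", "run", "--rm", image, "true"],
        capture_output=True, text=True,
    )
    if proc.returncode == 0:
        return "ok"
    return classify_run(proc.stderr)
```

Running this against both package builds would have answered the "which one's better" question with concrete output instead of guesswork.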
17:24:42 <Corey84-> jreznik, fesco ?
17:24:50 <roshi> well, that's all we have for now
17:24:53 <mattdm> I'm _super_ confused with this one because the last I saw in the cloud list was colin noting that everything with docker in the atomic image looked okay.
17:24:53 <nirik> fesco is in 35min.
17:24:55 <jreznik> Corey84-: nope, internal
17:25:05 <Corey84-> ah
17:25:15 <roshi> adamw: are you going to put in the RC request?
17:25:30 <nirik> roshi: we need a build with that fix reverted...
17:25:42 <Corey84-> ^
17:25:55 <roshi> who's going to handle that?
17:26:16 <jreznik> yep, we need a build and then request rc5
17:26:16 * roshi thought we just built the RC with the older package and didn't need to rebuild that package
17:26:52 <jreznik> the other question is how confident we will be to take older results to rc5 as there's not much time now
17:26:54 <nirik> that's a possibility I guess. I was thinking it was multiple places, but if it's just one package we might be able to use the older one.
17:27:15 <roshi> but hey, I just test them :) I don't know much about building them :)
17:27:34 <roshi> reverting something like this, reusing results is sketchy at best
17:27:36 <jreznik> it's possible but we have to double check it not to miss anything
17:28:00 <Corey84-> If we can get rc5 out by morning I'm down to test all day tomorrow
17:28:03 <jreznik> roshi: I agree, but we don't have much time... go/no-go is tomorrow
17:28:05 <adamw> roshi: can do once we have another anaconda.
17:28:11 <roshi> sounds good
17:28:16 <nirik> blivet apparently.
17:28:21 <roshi> adamw: thoughts on reusing results?
17:28:21 <adamw> whichever
17:28:26 <roshi> I don't think we can for this
17:28:31 <adamw> roshi: we can transfer stuff beyond the installer
17:28:40 <adamw> base, server, desktop
17:28:43 * satellit will also help test
17:28:53 <adamw> there's no changes to the installed package set or package deployment code so that should be safe
17:29:00 <Corey84-> can't do any efi on this box but the rest of it I'm down
17:29:02 <adamw> i'd want to re-run the whole installation page, i guess
17:29:03 <roshi> there's that 's' word :p
17:29:04 <nirik> we need a new build.
17:29:10 <adamw> yeah, tflink's favourite
17:29:11 <satellit> lives?
17:29:21 <nirik> there's changes after the one that had the fix and other changes mixed in.
17:29:24 <roshi> yeah, gotta redo all the install stuff for sure
17:29:30 <adamw> nirik: right, that's what i figured.
17:29:33 <satellit> fedup?
17:29:35 * nirik just checked
17:29:49 <adamw> satellit: fedup should be transferable, i guess.
17:29:50 <tflink> the s-word is always fun - usually an indication of something that needs to be tested :)
17:30:01 <adamw> tflink: I SEE A VOLUNTEER
17:30:18 <adamw> so, tflink has volunteered to re-run all the server, desktop and base tests, thanks tflink
17:30:21 * tflink runs away, isn't sure from what but runs anyways
17:30:52 <roshi> dude, you have to wait until he's actually *in* the net before you spring it
17:30:58 <Corey84-> i can pull down desktop tests for sure
17:31:02 <Corey84-> and base i guess
17:31:03 <roshi> you'll never catch people like that
17:31:06 <roshi> :p
17:31:26 <adamw> hehe
17:31:27 <Corey84-> lol
17:31:34 <adamw> Corey84-: it's ok, we were just giving tflink a hard time.
17:31:41 <larsks> roshi: just saw your notify earlier...what were you pointing at?
17:31:57 <roshi> the bug in the topic
17:32:01 <Corey84-> adamw,  i dont mind tho lol
17:32:13 <larsks> Ah, okay. Thanks.
17:32:33 <Corey84-> helps me learn the deeper stuff faster than some college course in OSes
17:32:55 <roshi> true that :)
17:33:09 <jreznik> just headsup for go/no-go tomorrow - I'm travelling to FAD tomorrow, the best way I think is to move to PRG tomorrow, have Go/No-Go as we depart early Friday... but with current weather situation, I may be trapped somewhere in the middle of nowhere in the train
17:33:29 <roshi> #topic Open Floor
17:33:40 <jreznik> forecast promises better weather but they promise it for the last two days
17:34:10 <Corey84-> so g/ng is tomorrow or friday am now ?
17:34:11 <jreznik> just in case I'll be offline, someone can help and start it :)
17:34:20 <jreznik> Corey84-: Thursday
17:34:23 <roshi> tomorrow
17:34:27 <Corey84-> k
17:34:34 <jreznik> 17:00 UTC
17:34:41 <roshi> jreznik: can you find someone for that?
17:34:42 <Corey84-> I'll be here
17:34:46 <roshi> volunteers?
17:34:52 <jreznik> and readiness meeting 19:00 UTC
* nirik will not be around for go/no-go either... might make readiness...
17:35:04 <jreznik> roshi: I hope I'll make it, just looking for back up
17:35:09 <Corey84-> might be late to readiness
17:35:36 <roshi> where are the docs on running that?
* Corey84- is too new to all that otherwise wouldn't mind
17:35:46 * roshi can be your backup if you don't find someone more suitable :)
17:36:39 <jreznik> roshi: thanks, LTE coverage is now better, so I hope even on the train I'll be able to connect :)
17:36:55 * adamw will be around all day.
17:37:08 <jreznik> I heard about folks being trapped for 17 hours in train yesterday
17:37:12 <roshi> oof
17:37:15 <roshi> that's not fun
17:37:35 <roshi> well, we'll make sure things get started tomorrow for the meeting
17:37:42 <roshi> anyone have anything else for this meeting?
* Corey84- really needs to get his replacement wwan card lol
17:37:47 <Corey84-> 17 hrs  --- refund ?
17:38:06 * roshi lights the fuse
17:38:21 <roshi> 3...
17:38:24 <jreznik> Corey84-: I read about refunds... but in this case, it wasn't railways fault, just weather
17:38:25 <Corey84-> 45 secs fuse ?
17:38:37 <roshi> depends on the day
17:38:45 <Corey84-> that's bs, even airlines will refund on that long a delay
17:38:59 <roshi> ACME Fuse Company doesn't do good QA - can never tell how long it'll burn
17:39:02 <roshi> 2...
17:39:45 <roshi> 1...
17:39:51 <jreznik> roshi: so it's you who cause ammunition storage explosions today? :D with your fuse
17:39:59 <roshi> thanks for coming folks!
17:40:19 <roshi> nah, it's our supplier :p
17:40:20 <jreznik> http://bit.ly/11UI4vL
17:41:24 <jreznik> thanks everyone!
17:41:25 <roshi> sheesh
17:41:38 <roshi> #endmeeting