fedora_coreos_meeting
LOGS
16:30:19 <dustymabe> #startmeeting fedora_coreos_meeting
16:30:19 <zodbot> Meeting started Wed May 13 16:30:19 2020 UTC.
16:30:19 <zodbot> This meeting is logged and archived in a public location.
16:30:19 <zodbot> The chair is dustymabe. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:30:19 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
16:30:19 <zodbot> The meeting name has been set to 'fedora_coreos_meeting'
16:30:23 <cyberpear> .hello2
16:30:24 <zodbot> cyberpear: cyberpear 'James Cassell' <fedoraproject@cyberpear.com>
16:30:26 <jdoss> .hello2
16:30:27 <zodbot> jdoss: jdoss 'Joe Doss' <joe@solidadmin.com>
16:30:29 <jlebon> .hello2
16:30:30 <zodbot> jlebon: jlebon 'None' <jonathan@jlebon.com>
16:30:32 <lorbus> .hello2
16:30:33 <zodbot> lorbus: lorbus 'Christian Glombek' <cglombek@redhat.com>
16:30:36 <dustymabe> #topic roll call
16:30:52 <dustymabe> #chair cyberpear jdoss jlebon lorbus
16:30:52 <zodbot> Current chairs: cyberpear dustymabe jdoss jlebon lorbus
16:30:58 <dustymabe> glad to see the ol' jdoss
16:31:11 * jdoss waves
16:31:13 <bgilbert> .hello2
16:31:14 <zodbot> bgilbert: bgilbert 'Benjamin Gilbert' <bgilbert@backtick.net>
16:31:34 <jdoss> I am still alive. Just been getting firehosed at my new job.
16:32:01 * gilliard__ listening (satellit)
16:33:21 <dustymabe> jdoss: it happens :)
16:33:39 <dustymabe> #chair bgilbert
16:33:39 <zodbot> Current chairs: bgilbert cyberpear dustymabe jdoss jlebon lorbus
16:33:47 <dustymabe> #topic Action items from last meeting
16:33:55 <dustymabe> no action items to speak of specifically :)
16:34:07 <dustymabe> #topic topics for this meeting
16:34:08 <lucab> .hello2
16:34:10 <zodbot> lucab: lucab 'Luca Bruno' <lucab@redhat.com>
16:34:29 <dustymabe> any topics anyone would like to discuss during this meeting? we have one meeting ticket and then we can discuss other topics
16:34:39 <dustymabe> otherwise we'll skip to open floor after the meeting ticket
16:34:41 <dustymabe> #chair lucab
16:34:41 <zodbot> Current chairs: bgilbert cyberpear dustymabe jdoss jlebon lorbus lucab
16:35:28 * dustymabe waits another minute for topic suggestions
16:36:11 <dustymabe> #topic F32 rebase tracker for changes discussion
16:36:15 <dustymabe> #link https://github.com/coreos/fedora-coreos-tracker/issues/372
16:36:38 <dustymabe> ok I'm re-using this ticket for "let's talk about the mechanics of switching to f32 for our testing/stable streams"
16:36:48 <dustymabe> I updated the ticket with our proposed timeline for switching to f32
16:37:16 <dustymabe> which IIUC means our next `testing` release is when we switch to f32
16:37:30 <jlebon> +1
16:37:51 <jlebon> i just linked https://github.com/coreos/fedora-coreos-config/pull/394 to that ticket
16:38:23 <dustymabe> any other things we need to consider?
16:38:34 <dustymabe> lorbus: how do we look from an OKD perspective ?
16:39:15 <lorbus> It should work out of the box as long as podman doesn't break :) We're still explicitly using cgroupsv1 there, too
16:39:50 <dustymabe> yes, we're still on cgroups v1
16:39:53 <lorbus> That explicit config will go away soon, but as long as we don't switch to cgroupsv2 with FCOS now, that won't be an issue
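(For context: a quick way to confirm which cgroup hierarchy a node is actually on — an illustrative check, not something shown in the meeting:)

    stat -fc %T /sys/fs/cgroup/
    # prints "tmpfs" on the legacy cgroups v1 layout, "cgroup2fs" on unified cgroups v2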
16:40:01 <dustymabe> lorbus: there is an iptables/nft change
16:40:24 <dustymabe> I'd feel much more comfortable if we could get you or vadim to try an OKD cluster on `next`
16:40:44 <lorbus> do you have a link? I thought there is a compat layer
16:40:45 <lucab> dustymabe: I still need to close on https://github.com/coreos/fedora-coreos-tracker/issues/468, I'll find some time before the end of the week
16:41:00 <dustymabe> with our currently proposed schedule we've got 3 weeks to fix any bugs we find
16:41:20 <dustymabe> lorbus: https://github.com/coreos/fedora-coreos-tracker/issues/372#issuecomment-588368597
16:41:28 <dustymabe> that links to the change proposal
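(The change being referenced is Fedora 32 moving the default iptables backend to nftables. A minimal way to see which backend a node's iptables binary is using — illustrative:)

    iptables --version
    # e.g. "iptables v1.8.x (nf_tables)" for the nft backend, "... (legacy)" otherwise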
16:42:02 <lorbus> +1 to the idea to test OKD on next
16:42:05 <lorbus> I'll see to that
16:42:33 <dustymabe> #action lorbus to try out OKD on our `next` stream so we can work out any kinks before switching `stable` to f32
16:43:00 <dustymabe> lorbus: vadim may have tried it already, so maybe work together with him on that Action Item
16:43:31 <lorbus> yep, definitely
16:43:55 <dustymabe> lucab: thanks! I think I need to respond upstream on that fstrim issue as well
16:44:18 <dustymabe> any other things we need to do or mechanics we need to discuss regarding the switch to f32?
16:44:36 <dustymabe> anyone here running next? did you rebase to it or did you start fresh?
16:45:05 <dustymabe> I'm running it for my IRC server
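(For anyone wanting to do the same, moving an existing node onto another stream is a single rebase plus a reboot — a minimal sketch, using the ref format documented for FCOS streams:)

    sudo rpm-ostree rebase "fedora:fedora/x86_64/coreos/next"
    sudo systemctl reboot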
16:46:27 <dustymabe> I think we've also said this before.. we need to create an update barrier so that all upgrades from f31 -> f32 go through the same path
16:46:29 <dustymabe> correct ?
16:46:40 <jlebon> dustymabe: upgraded from f31? re. IRC server
16:46:48 <jlebon> yes, indeed
16:47:06 <jlebon> https://github.com/coreos/fedora-coreos-streams/issues/99#issuecomment-625291969
16:47:29 <dustymabe> jlebon: I think my irc server started as rebased from stable to next (f32), but I've redeployed it since
16:48:08 <dustymabe> jlebon: lucab: ok so we agreed to that barrier.. should we also limit the paths to f32 ?
16:48:08 <jlebon> yeah, that's probably the bit we should test the most -- upgrade testing
16:48:31 <dustymabe> so let's say the final f31 release on stable is A
16:48:50 <dustymabe> we know we will filter all previous releases of f31 to A before allowing them to upgrade to f32
16:49:07 <lucab> I did a bunch of rebases (including stable -> testing -> next), I only spotted the Zincati fragment out of place
16:49:08 <dustymabe> but will we allow for A->B, A->C, A->D
16:49:23 <dustymabe> or will we only allow one path from f31 to f32
16:49:28 <dustymabe> i.e. only A->B exists
16:49:40 <dustymabe> and then you can update B->C or B->D etc..
16:49:48 <lucab> dustymabe: only one, that's the barrier
16:50:31 <dustymabe> lucab: and if B isn't the latest release we still make them go through B?
16:50:58 <jlebon> lucab: hmm, interesting. is there a way to get the other behaviour if we wanted?
16:51:22 <lucab> not sure I follow
16:51:38 <dustymabe> lucab: what i'm talking about is a double barrier essentially
16:52:11 <dustymabe> let's say A is the barrier release (the last release of stable f31)
16:52:14 <jlebon> lucab: does a barrier affect both inbound and outbound edges, or just inbound?
16:52:31 <dustymabe> what paths are available to systems currently on A?
16:52:34 <lucab> jlebon: only inbound
16:53:19 <lucab> dustymabe: whatever are defined by further rollouts and barriers
16:53:24 <dustymabe> lucab: right
16:53:33 <jlebon> that's what dustymabe is asking :)   i'm not sure if we *need* to have that
16:53:39 <dustymabe> so i think jlebon and I are asking if we can control it such that there is only one available path
16:53:52 <dustymabe> so A->B is the only upgrade path that exists for A
16:54:05 <dustymabe> it would limit our upgrade testing matrix
16:54:30 <dustymabe> but not sure how practical it is
16:54:34 <lucab> we can, with two barriers
16:54:42 <dustymabe> lucab: right. ok that's what I was thinking
16:55:02 <dustymabe> whether we want to do that or not can be another discussion probably
16:55:18 <lucab> your other idea/approach makes for a system that has to guess about future nodes
16:55:57 <dustymabe> the end result is what I was looking for.. creating two barriers achieves the goal
16:56:07 <dustymabe> ok any other things to discuss for moving to f32 ?
16:56:14 <lucab> (I can think about it a bit more, but I am not thrilled)
16:56:35 <dustymabe> lucab: yeah we may not want to do that, was an idea
16:57:00 <lucab> dustymabe: in general we shouldn't have barriers, unless we know we have to force a chokepoint for a migration script
16:57:25 <lucab> even this F31->F32 is not really needed
16:57:53 <dustymabe> lucab: but it's useful in that it allows us to have users follow a more tested path?
16:57:55 <lucab> it just makes the model easier to reason about for us humans
16:58:21 <dustymabe> i.e. we probably aren't going to spin up FCOS from january to test upgrading to f32
16:58:45 * jlebon has to drop off for other meeting, but overall i think i lean more towards not having a double barrier
16:58:50 <bgilbert> it might make sense for upgrade _testing_ to specifically test an F31 origin
16:59:00 <lucab> dustymabe: right, it trims the space of unknown things
16:59:04 <dustymabe> cool
16:59:05 <bgilbert> and then avoid barriers on the user side
16:59:05 <jlebon> we need to trust our CI testing, and expand it if we feel nervous
16:59:16 <dustymabe> i think we are all saying the same thing :)
16:59:23 <bgilbert> I don't think so?
16:59:40 <dustymabe> oh, k let me dig
16:59:53 <bgilbert> I tend to agree with Luca that barriers should exist for specific technical reasons
17:00:09 <dustymabe> bgilbert: "F31 origin" meaning the original gangsta ?
17:00:14 <dustymabe> the first release?
17:00:17 <lucab> bgilbert: the next CI run on `testing` will test that (but only from last F31 release)
17:00:28 <bgilbert> "origin" in the sense of the starting point for the upgrade test
17:00:36 <bgilbert> so probably one of the last F31 releases
17:00:59 <dustymabe> right, which is what I thought we were proposing.. introduce an update barrier so that we force users to follow the path that our CI tests
17:01:21 <bgilbert> I'm proposing that every future release should be upgrade-tested from F31
17:01:26 <bgilbert> and then we don't need a barrier
17:01:50 <dustymabe> bgilbert: I agree, but I think what you are saying is that we don't need a double barrier
17:01:59 <bgilbert> I'm saying we don't need a single barrier either
17:01:59 <dustymabe> we still need the single barrier
17:02:15 <cyberpear> bgilbert: do you want every F31 release, even from Jan to be able to upgrade to every single F32 release?
17:02:21 <bgilbert> yup!
17:02:23 <bgilbert> it's the more aggressive option, to be sure :-)  and if it turns out to be a disaster, we can use barriers for future releases
17:02:45 <bgilbert> but if we set the precedent, it'll only get harder to do barrierless upgrades later
17:03:03 <dustymabe> bgilbert: hmm. but what is the benefit of doing a barrierless upgrade in this case ?
17:03:14 <lucab> that's fair
17:03:26 <cyberpear> I guess the number of test runs would be "number of F31 FCOS releases in the wild" for each release
17:03:32 <bgilbert> barriers add extra friction
17:03:38 <bgilbert> cyberpear: I don't think we need to test everything -> everything
17:03:48 <lucab> dustymabe: less reboots for a new node created on an old release
17:03:49 <bgilbert> pick a representative F31 release and test upgrading from it
17:03:50 <cyberpear> just everything -> proposed
17:03:54 <dustymabe> it makes me feel better to know this upgrade path was tested for the f31 to f32 rebase
17:03:55 <walters> we're talking about unknown unknowns, but my instinct says that there aren't any bugs that would happen only when upgrading from an earlier version to the latest
17:04:14 <bgilbert> cyberpear: no, F31 -> proposed, forever
17:04:15 <cyberpear> or "oldest F31" -> current and "newest F31" -> current
17:04:25 <bgilbert> cyberpear: sure
17:04:48 <lucab> bgilbert: I think the idea was more or less "representative == barrier"
17:04:56 <bgilbert> in my experience, upgrade bugs are path-dependent anyway
17:04:59 <walters> the cases that exist are likely in things like moby/podman stored containers & images, but I think they need to deal with even old images in general anyway
17:05:21 <bgilbert> "this range of F31 releases wrote this file that we can't read anymore" type of thing
17:06:05 <bgilbert> so I guess what I'm saying is, I'm +1 to more CI, but think the barrier is too cautious
17:06:20 <bgilbert> the things that will break are not the things we think will break
17:07:03 <dustymabe> yeah I'm not sure where I stand
17:07:39 <dustymabe> seemed like a no-brainer to me. we've never had the ability before to force an upgrade path and match a known, tested major upgrade path
17:07:55 <lucab> for this specific F31->F32, I do not disagree
17:08:16 <lucab> last time we used a barrier was because we needed a real migration script
17:08:17 <dustymabe> but I think you're probably right that it's not absolutely necessary and we'd probably be fine
17:08:51 <lucab> this time we are not aware of any, so there wouldn't be any strict need to
17:09:02 <bgilbert> we will hopefully never have fewer users than right now who might be broken if we try being aggressive
17:09:15 <bgilbert> so now's the time :-P
17:09:24 <cyberpear> normal Fedora upgrades don't go thru any barrier
17:09:39 <dustymabe> cyberpear: because they don't have that ability
17:09:41 <dustymabe> we do
17:10:06 <lucab> bgilbert: OTOH, I'm still scared by the "let's go back and fingerprint every grub since day 0"
17:10:36 <bgilbert> it worked, didn't it?  :-D
17:10:38 <bgilbert> but yeah
17:10:45 <bgilbert> ...or, put a different way, not using a barrier won't create a precedent, but using one will.
17:10:48 <lucab> (grub is a bad example here)
17:11:11 <dustymabe> bgilbert: i don't think we'll have to create a barrier in the future (say f32->f33) if we do use one now
17:11:45 <bgilbert> dustymabe: there will be psychological pressure to.  it'll be the safe, conservative approach.
17:11:57 <bgilbert> we'll have more users then than we did during 31->32, etc.
17:12:14 <bgilbert> we'll be considered more stable then than now
17:12:53 <bgilbert> I've talked too much here, I'll stop
17:12:56 <cyberpear> I vote for "no barrier unless known to be technically required"
17:13:24 <dustymabe> ok maybe let's pick up this discussion again here in the next week (or maybe we start a separate tracker issue to capture the discussion)
17:13:40 <lucab> bgilbert: I think I'm personally fine with establishing a "barrier between majors" rule
17:14:07 <lucab> let's move to GH, we have till next week to come up with a decision
17:14:27 <dustymabe> bgilbert: does "let's continue to have the discussion" sound good? so we can move on to other topics?
17:14:51 <bgilbert> sure
17:14:53 <bgilbert> one other thought I just had
17:15:01 <lucab> (we can also retroactively put a barrier if we need to)
17:15:03 <bgilbert> barriers, by their nature, force us to live with destination-side bugs forever
17:15:24 <bgilbert> i.e., if the target side of the barrier has a kernel bug that causes boot problems on some boxes
17:15:30 <bgilbert> we're stuck with it, or have to retarget the barrier
17:15:35 <cyberpear> can we re-write that barrier later?
17:15:39 <bgilbert> yes
17:16:13 <bgilbert> but in principle the barrier becomes an artifact in its own right that might require maintenance
17:16:17 <bgilbert> EOF
17:16:39 <dustymabe> #action dustymabe to create a ticket where we discuss appropriate update barrier approach for major upgrades
17:17:01 <dustymabe> #topic podman corner case bug: when do we backport fixes?
17:17:13 <dustymabe> #link https://github.com/containers/libpod/issues/5950
17:17:18 <dustymabe> ok so mini story here
17:17:24 <dustymabe> i'm running `next` on my irc server
17:17:39 <dustymabe> it starts a rootless podman container via systemd on boot
17:17:53 <dustymabe> that spawns weechat inside of tmux (too much detail)
17:18:20 <dustymabe> anywho - after the last upgrade to `next` my podman containers stopped working
17:18:35 <dustymabe> https://github.com/containers/libpod/issues/5950#issuecomment-625450333
17:18:48 <dustymabe> it only applies to running a rootless container via systemd
17:18:55 <dustymabe> so it's a bit of a corner case
17:19:15 <dustymabe> but i'm wondering if that is something we should consider backporting a fix for in the future
17:19:24 <dustymabe> so far we haven't had any users other than me report the issue
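(Roughly the shape of that setup, as a sketch — the container name and image below are made up, and this is not the exact unit in use:)

    # allow the unprivileged user's services to start at boot
    sudo loginctl enable-linger core
    # create the rootless container and have podman emit a systemd unit for it
    podman create --name irc registry.example.com/weechat-tmux:latest
    mkdir -p ~/.config/systemd/user
    podman generate systemd --name irc > ~/.config/systemd/user/irc.service
    systemctl --user enable --now irc.service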
17:19:36 <bgilbert> does it only apply to `next`?
17:19:36 * dustymabe notes we should add a CI test for this specific problem
17:19:53 <dustymabe> bgilbert: good question. I think it's a podman 1.9 bug - so I think it applies to all our current streams
17:20:12 <bgilbert> is it a regression?
17:20:15 <dustymabe> but I need to confirm that
17:20:26 <bgilbert> (for us)
17:20:33 <dustymabe> bgilbert: yes, my system was working.. went down for upgrade and then stopped working
17:20:44 <bgilbert> whoops, sorry, so you said
17:21:09 <bgilbert> has it actually landed in the other streams?
17:21:22 <dustymabe> IIUC podman 1.9 is in all our streams right now
17:21:41 <dustymabe> it took us 3 weeks to do a new release of `next`
17:21:58 <cyberpear> what do you mean by backport? -- can't we just push a regular update and include it in the next release?
17:21:59 <dustymabe> so I didn't catch the bug when it was in `testing`, but not yet in `stable`
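(Double-checking which podman each stream currently ships is quick on a running node — illustrative:)

    rpm -q podman        # package version on the booted deployment
    rpm-ostree status    # shows the booted stream and FCOS version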
17:23:08 <bgilbert> IMO container runtime regressions are the sort of fix we should prioritize
17:23:28 <dustymabe> bgilbert: right, i agree
17:23:39 <dustymabe> but it does slightly depend on the case
17:23:50 <dustymabe> so in this case it's only a problem if you're starting a rootless container via systemd
17:23:50 <bgilbert> with the current FCOS model (less priority on stability) I agree with cyberpear that we should tend to pick up new packages
17:24:03 <bgilbert> maaaaaybe an actual backport to `stable` but meh
17:24:15 <bgilbert> dustymabe: understood
17:24:17 <dustymabe> if I was running `testing` I would have caught this before it hit stable
17:24:27 <dustymabe> but I was trying to get some coverage on `next` so i missed it :(
17:24:38 <bgilbert> in CL I think we would have rolled out-of-cycle releases on all channels
17:25:03 <dustymabe> so our options:
17:25:04 <bgilbert> (streams)
17:25:18 <dustymabe> my understanding is that podman is going to do a new release very soon with the fix in
17:25:50 <dustymabe> should we try to respin current testing with that new podman so that we can get stable fixed in the next round of releases (next week)?
17:26:09 <dustymabe> or do we just not do anything since no one has reported an issue ?
17:26:33 <dustymabe> it's hard for me to gauge if it's a real problem for people without having anyone complain about it :)
17:26:55 <bgilbert> dustymabe: if you give me a second I can solve your dilemma :-P
17:27:00 <cyberpear> I count 1 person who complained :P
17:27:06 <dustymabe> at the very least we *should* make sure the fix goes into the testing release that is cut next week
17:28:14 <dustymabe> bgilbert: :)
17:28:25 <bgilbert> all I can offer is my CL experience, which says: these things are judgment calls, and this feels like a case we care about
17:28:26 <bgilbert> complaints or no
17:28:37 <bgilbert> so I'd vote respin
17:28:47 <dustymabe> bgilbert: ok so you'd be a fan of respinning testing to get the new release into it?
17:28:48 <bgilbert> YMMV
17:28:49 <dustymabe> +1
17:28:50 <cyberpear> I'd also vote respin
17:28:56 <bgilbert> dustymabe: yup
17:29:16 <dustymabe> ok. i'll ping the podman guys on the release and try to respin testing
17:29:44 <dustymabe> #action dustymabe to get new podman release into testing release so we can fix stable in next week's releases
17:29:54 <bgilbert> +1
17:29:58 <dustymabe> bgilbert: as part of that I will also 100% confirm that the bug does affect stable and testing
17:30:04 <bgilbert> cool
17:30:09 <dustymabe> #topic open floor
17:30:09 <bgilbert> also, better to exercise the machinery
17:30:37 <cyberpear> I think for F33, we should rebase "next" in time for F33 Beta, then have the first F33-based "stable" based on exactly F33 final content
17:30:55 <bgilbert> +1 to rebasing earlier, that was always the plan
17:31:10 <bgilbert> what's the benefit to stable based on exactly F33?
17:31:13 <lorbus> +1 to that
17:31:18 <bgilbert> it'd be two weeks after F33 lands, of course
17:31:44 <bgilbert> two+ weeks
17:31:54 <dustymabe> hmmm
17:32:10 <cyberpear> understood, I just think it would be good to have a very well-defined "point-in-time" snapshot, and that one's been pre-defined
17:32:14 <dustymabe> I think the delay we have could be shortened slightly but probably not by much (my opinion)
17:32:27 <dustymabe> +1 to having a next stream earlier
17:32:29 <bgilbert> "+1 to rebasing earlier" = the next stream, not testing/stable
17:32:33 <bgilbert> yup
17:32:34 <dustymabe> based on f33
17:32:38 <lorbus> we're lagging behind rawhide quite a bit, so the earlier we move the next stream to it, the better imo
17:32:58 <bgilbert> cyberpear: we're effectively a rolling distro, though?
17:33:36 <cyberpear> yes, but if we eventually needed a barrier or double-barrier, the GA content would make a good place for it, IMO
17:33:37 <bgilbert> if the exact package set matters, I feel like we're doing something wrong
17:33:47 <dustymabe> there are a lot of 0day fixes that land after f33
17:34:02 <dustymabe> honestly the GA content is mostly about "does what's delivered on the media work right"
17:34:09 <cyberpear> so let's define some FCOS release criteria and have them be part of GA content?
17:34:55 <cyberpear> also, part of our artifacts are ISOs...
17:35:11 <dustymabe> cyberpear: we can certainly get some more hooks into the releng processes such that bugs that affect us are considered with higher priority
17:35:43 <bgilbert> I don't think what happened this cycle is representative
17:35:49 <bgilbert> we were running to catch up
17:35:53 <dustymabe> yep
17:35:57 <bgilbert> we'll get better at this
17:36:04 <cyberpear> yep, just trying to plan for the future
17:36:10 <bgilbert> +1 to hooks into releng where it makes sense.  also better CI.
17:36:16 <bgilbert> but: cyberpear, I'm not clear on what problem you're trying to solve
17:36:20 <dustymabe> cyberpear: yep, and you can help us with that too
17:36:35 <bgilbert> haphazard release process? something else?
17:37:46 <lorbus> I gotta drop. thanks all, and thanks for hosting dustymabe!
17:37:52 <cyberpear> "in 3 years, I want to go back and reproduce my system as it was on F33 FCOS" -- I know it's not a priority for most here, but having a release based on GA content would give it a good chance of succeeding, even w/ overlaid content
17:38:05 <dustymabe> lorbus++
17:38:05 <zodbot> dustymabe: Karma for lorbus changed to 1 (for the current release cycle):  https://badges.fedoraproject.org/tags/cookie/any
17:38:40 <bgilbert> cyberpear: the 33.20201105.3.0 release artifacts will still exist
17:38:46 <cyberpear> the GA content RPM set is kept forever; everything in between until EOL is discarded once there's an update made
17:38:47 <bgilbert> presuming you've saved the URLs :-/
17:38:56 <dustymabe> cyberpear: and the git history in the fcos configs has the exact rpm NVRAs
17:39:29 <bgilbert> cyberpear: and we protect those NVRAs from GC for some period of time
17:39:38 <cyberpear> good to know
17:39:52 <cyberpear> anyway, nothing actionable on this today, I think
17:39:54 <bgilbert> honestly I wouldn't trust that the release can be rebuilt from parts in 3 years
17:39:59 <walters> (and the source is always saved)
17:40:22 <bgilbert> even if you had the RPMs.  you'd pin to an old cosa, which might have who-knows-what bugs with your 3-year-newer kernel etc.
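(What does stay doable is rolling a node back, or forward, to a specific published release while its ostree commit is still around — a sketch, with a made-up version string:)

    sudo rpm-ostree deploy 32.20200601.3.0
    sudo systemctl reboot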
17:40:31 * dustymabe notes time
17:40:56 <cyberpear> bgilbert: that's why I'd like to see FCOS become part of the compose process so the GA artifacts are also preserved along w/ the RPMs
17:41:02 * cyberpear also sees we're over time
17:41:05 <bgilbert> cyberpear: thanks for bringing it up though.  release processes _always_ need improvement :-)
17:41:05 <dustymabe> will end meeting in two minutes
17:41:28 <bgilbert> cyberpear: outside the FCOS bucket, you mean?
17:42:00 <cyberpear> I mean, have the F33-GA-based FCOS be sent to the mirror network, as if it were part of the GA compose
17:42:04 <dustymabe> the problem here is mostly that we don't have a specific GC contract for FCOS artifacts that users can rely on
17:42:10 <bgilbert> dustymabe: true
17:42:27 <bgilbert> cyberpear: I'm still really really hoping no one ever references our artifacts except from stream metadata
17:42:37 <bgilbert> continued use of old releases = bad
17:42:43 <bgilbert> meanwhile, in the real world...
17:42:47 <cyberpear> (hence, sending it to the mirror network, so it can take advantage of the existing processes in place)
17:42:59 <bgilbert> (which is why I wouldn't be happy about sending it out to mirrors)
17:43:12 <bgilbert> that one might be a losing battle though
17:43:21 <cyberpear> (yeah, real world I find myself occasionally needing a RHEL 5 VM or container, to test something out for someone who's stuck on it for some reason)
17:43:26 <bgilbert> yeah :-(
17:43:41 <dustymabe> #endmeeting