fedora_coreos_meeting
LOGS
16:30:22 <lucab> #startmeeting fedora_coreos_meeting
16:30:22 <zodbot> Meeting started Wed Jan 20 16:30:22 2021 UTC.
16:30:22 <zodbot> This meeting is logged and archived in a public location.
16:30:22 <zodbot> The chair is lucab. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:30:22 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
16:30:22 <zodbot> The meeting name has been set to 'fedora_coreos_meeting'
16:30:32 <lucab> #topic roll call
16:30:36 <dustymabe> .hello2
16:30:37 <slowrie> .hello2
16:30:37 <zodbot> dustymabe: dustymabe 'Dusty Mabe' <dusty@dustymabe.com>
16:30:38 <lucab> .hello2
16:30:39 <zodbot> slowrie: slowrie 'Stephen Lowrie' <slowrie@redhat.com>
16:30:40 <olem> .hello2
16:30:42 <zodbot> lucab: lucab 'Luca Bruno' <lucab@redhat.com>
16:30:43 <lorbus> .hello2
16:30:45 <zodbot> olem: olem 'Olivier Lemasle' <o.lemasle@gmail.com>
16:30:45 <PanGoat> .hello2
16:30:48 <zodbot> lorbus: lorbus 'Christian Glombek' <cglombek@redhat.com>
16:30:51 <zodbot> PanGoat: Sorry, but you don't exist
16:30:53 <cyberpear> .hello2
16:30:57 <zodbot> cyberpear: cyberpear 'James Cassell' <fedoraproject@cyberpear.com>
16:31:01 <PanGoat> :)
16:31:25 <lucab> #chair lorbus PanGoat slowrie dustymabe jlebon cyberpear
16:31:25 <zodbot> Current chairs: PanGoat cyberpear dustymabe jlebon lorbus lucab slowrie
16:31:30 <slowrie> You can do '.hello <fas_username>' if it differs from your IRC nick
16:32:20 <lucab> #chair olem
16:32:20 <zodbot> Current chairs: PanGoat cyberpear dustymabe jlebon lorbus lucab olem slowrie
16:32:58 <lucab> #topic Action items from last meeting
16:33:49 <lucab> I think last one was pretty quick and no action items were added
16:33:52 <PanGoat> slowrie: thanks
16:34:22 <PanGoat> Only docs for two items and the twitter survey
16:34:33 <jlebon> redbeard's action items were there for a while, so i stopped re-actioning them
16:34:45 <PanGoat> I finished one doc item last night (#205) and will now move to the survey
16:34:55 <PanGoat> then the remaining doc
16:34:58 <dustymabe> PanGoat++
16:35:25 <lucab> I'll start from the oldest 'meeting' ticket from https://github.com/coreos/fedora-coreos-tracker/labels/meeting
16:36:01 <travier> .hello2 siosm
16:36:02 <zodbot> travier: Sorry, but you don't exist
16:36:09 <travier> .hello siosm
16:36:10 <zodbot> travier: siosm 'Timothée Ravier' <travier@redhat.com>
16:36:11 <lucab> well, the council status is not for today
16:36:18 <jberkus> .hello jberkus
16:36:19 <zodbot> jberkus: jberkus 'Josh Berkus' <josh@agliodbs.com>
16:36:33 <PanGoat> .hello2 jaimelm
16:36:34 <zodbot> PanGoat: Sorry, but you don't exist
16:36:34 <lucab> #topic Platform Request: CloudStack
16:36:41 <PanGoat> weird
16:36:46 <lucab> #link https://github.com/coreos/fedora-coreos-tracker/issues/716
16:36:58 <lucab> I think this is from olem
16:37:02 <travier> PanGoat: .hello jaimelm
16:37:16 <lucab> #chair travier jberkus
16:37:16 <zodbot> Current chairs: PanGoat cyberpear dustymabe jberkus jlebon lorbus lucab olem slowrie travier
16:37:34 <PanGoat> .hello jaimelm
16:37:35 <zodbot> PanGoat: jaimelm 'Jaime Magiera' <jaimelm@umich.edu>
16:37:40 <PanGoat> There we go. thanks.
16:37:43 <jberkus> lucab: I have a item for the other stuff section of the meeting
16:37:47 <lucab> olem: are you there for a quick summary?
16:37:51 <olem> Yes. I'm CloudStack user and Fedora packager. I started using fcos quite recently, and I try to make fcos work on CloudStack.
16:38:14 <olem> There's already some kind of support in Ignition and Afterburn from CoreOS Container Linux
16:38:53 <olem> but it is currently broken because it relies on systemd-networkd and fcos switched to NetworkManager
16:38:59 <dustymabe> wow, nice work olem filling out all the required info in the request template
16:39:01 <jbrooks> .hello2 jasonbrooks
16:39:01 <zodbot> jbrooks: Sorry, but you don't exist
16:39:09 <jbrooks> .hello jasonbrooks
16:39:10 <zodbot> jbrooks: jasonbrooks 'Jason Brooks' <jbrooks@redhat.com>
16:39:36 <olem> Also, CloudStack supports multiple hypervisors, hence requires multiple image formats.
16:40:18 <lucab> olem: it looks like a single VHD may cover most of them, other than vmware, right?
16:40:42 <dustymabe> olem: I'm trying to understand the requirement on systemd-networkd
16:40:57 <dustymabe> is that something we can easily fix?
16:41:11 <lucab> dustymabe: afterburn does parse the DHCP lease out of systemd-networkd
16:41:38 <dustymabe> so we could change that code?
16:41:39 <olem> lucab: I guess so. But I also think that vmware is the most used hypervisor in CloudStack community
16:41:43 <lucab> dustymabe: I still haven't found how to do the same with NM, but it may be possible
16:42:05 <slowrie> lucab: Ignition seems to do the same :\
16:42:21 <lucab> slowrie: ouch, I didn't know
16:42:21 <jlebon> i'm a little concerned about the ever-growing list of cloud images -- it's a good problem to have, but makes me want to revisit smarter ways of doing this
16:42:33 <dustymabe> lucab: we just need to more or less find the IP handed out?
16:43:22 <lucab> dustymabe: IIRC they used a custom DHCP option to signal where the metadata endpoint is
16:43:46 <olem> yes. Actually there's multiple ways to find the metadata server address (virtual router). E.g. cloud-init tries to get the virtual router address by DNS, then systemd-networkd, then dhcpd leases, then the default route
16:43:53 <olem> https://github.com/canonical/cloud-init/blob/bd76d5cfbde9c0801cefa851db82887e0bab34c1/cloudinit/sources/DataSourceCloudStack.py#L228
16:43:55 <PanGoat> jlebon++
16:44:12 <olem> However, most of these methods seem to be unavailable when Ignition runs...
16:44:21 <jlebon> maybe `coreos-installer` could learn to restamp VM images or something
16:44:37 <travier> jlebon: I like this idea
16:44:40 <PanGoat> nice
16:45:21 <travier> olem: How broken are current images on those platforms? If you boot an OpenStack image on KVM CloudStack, what happens?
16:45:34 <travier> And similarly for VMware
16:46:29 <slowrie> travier: At least for the Ignition side OpenStack assumes a static metadata endpoint (169.254.169.254) so it'd be the wrong metadata URL. It'd work for config drive based metadata but not http metadata service
16:47:01 <olem> The VMware ova image works on CloudStack but needs the ignition file passed as OVF parameter, not userdata. For KVM, the Ignition file cannot be found.
16:48:01 <jlebon> olem: hmm, so if we published only one image, would it make more sense to have just a vhd to cover the !vmware cases?
16:49:21 <olem> As a "degraded solution", yes. But a cloudstack specific image is still required to get afterburn to work
16:49:29 <olem> e.g. to get ssh keys, etc
16:50:08 <olem> I mean for vmware
16:50:09 <jlebon> right gotcha
16:51:00 <lucab> it sounds like we want a VHD plus an OVA as final artifacts
16:51:12 <jlebon> is there a repository of cloudstack images shared?
16:51:30 <lucab> but the real blocker is getting the metadata out of the DHCP lease from NM
16:51:55 <olem> There's http://dl.openvm.eu/cloudstack/ which is maintained by a member of cloudstack community
16:52:12 <dustymabe> for the DHCP options.. could we query NetworkManager for that information? It has it if you run `nmcli c show <connection_name>`
16:52:27 <dustymabe> alternatively we could write a dispatcher script that saved the info off somewhere
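The nmcli idea above can be sketched roughly as follows: a minimal Python parser for the `DHCP4.OPTION` lines that `nmcli connection show <name>` prints. This is an illustrative helper, not part of Ignition or Afterburn, and the specific option names (e.g. `dhcp_server_identifier`) are assumptions based on typical NetworkManager output:

```python
def dhcp_options(nmcli_output: str) -> dict:
    """Collect DHCP4.OPTION entries from `nmcli connection show <name>` output.

    Each matching line looks like:
        DHCP4.OPTION[1]:    dhcp_server_identifier = 10.0.0.1
    """
    opts = {}
    for line in nmcli_output.splitlines():
        if line.startswith("DHCP4.OPTION"):
            # Drop the "DHCP4.OPTION[n]:" prefix, then split "key = value".
            _, _, rest = line.partition(":")
            key, _, value = rest.strip().partition(" = ")
            opts[key] = value
    return opts
```

Whether this is usable from the initramfs (i.e. without D-Bus, as lucab notes below) is exactly the open question.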
16:53:01 <lucab> dustymabe: if that works without dbus
16:53:15 <dustymabe> ahh, so this is in the initramfs?
16:53:21 <slowrie> Yes. This is Ignition
16:53:26 <dustymabe> ahh
16:54:04 <dustymabe> i'm pretty sure the journal has the option information output, but grabbing it from there isn't fun either
16:54:13 <lucab> olem: is the metadata endpoint always on the default gateway?
16:54:47 <lucab> reading from http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.8/virtual_machines/user-data.html it seems so
16:54:52 <slowrie> dustymabe: this is the code in question https://github.com/coreos/ignition/blob/master/internal/providers/cloudstack/cloudstack.go#L135-L180
16:56:02 <lucab> if that is the case, something akin to iproute may be enough, without introspecting the DHCP lease
16:56:23 <olem> It's available in initramfs?
16:57:06 <olem> I tried parsing /proc/net/route as cloud-init does, and did not manage to get the default route.
16:57:38 <lucab> olem: I think so, but even if not it could be added or we can query via netlink, I guess
16:58:35 <olem> ok, so it could be a solution
16:58:49 <dustymabe> I feel like `ip` is in the initramfs
16:58:52 <dustymabe> if not we could add it
16:59:07 <jlebon> it definitely is, i used it yesterday :)
16:59:24 <jlebon> hmm, that might've been RHCOS :|
16:59:36 <dustymabe> `ip route show default` should get us there
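The default-gateway fallback discussed above can also be done without the `ip` binary by reading `/proc/net/route` directly, which is the approach olem mentions cloud-init taking. A minimal sketch, assuming the standard kernel table layout (whitespace-separated columns, destination and gateway as little-endian hex IPv4):

```python
import socket
import struct

def default_gateway(route_table: str):
    """Return the default gateway (dotted quad) from /proc/net/route content.

    The default route has an all-zero destination; the gateway column is a
    little-endian hex-encoded IPv4 address.
    """
    for line in route_table.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 3 and fields[1] == "00000000":
            return socket.inet_ntoa(struct.pack("<L", int(fields[2], 16)))
    return None
```

As olem notes, the catch is whether a default route even exists yet at the point in the initramfs where Ignition runs.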
17:00:01 <jlebon> while we work on adding NM support for it, i'll file a ticket as well so we can discuss ways to dedupe these images better
17:00:45 <lucab> even if not 100% correct, that at least should buy us time for the proper NM way
17:01:08 <olem> ok
17:01:43 <lucab> anything else on this topic?
17:02:20 <lucab> should we push to openvm.eu directly from our pipeline?
17:03:21 <jlebon> can we hold the actual artifact discussion for now?
17:03:30 <olem> Oh, I suppose getfedora.org is the place where people expect to get fcos images?
17:03:38 <lucab> yup
17:04:20 <lucab> olem: normally yes, but we also upload to some cloud platforms, if that makes it easier to consume
17:04:45 <lucab> ok, I'll move to the next ticket
17:05:00 <lucab> #topic Enabling DNF Count Me support in Fedora CoreOS
17:05:08 <lucab> #link https://github.com/coreos/fedora-coreos-tracker/issues/717
17:05:42 <lucab> travier: do you want to introduce this?
17:06:15 <travier> lucab: sure
17:06:38 <travier> So this is a first step to discuss planning for enabling DNF Count Me by default on Fedora CoreOS
17:07:03 <travier> DNF Count Me support has been added to rpm-ostree, mirroring the support in libdnf in classic Fedora
17:07:52 <travier> With this, Fedora CoreOS machines will anonymously report to the Fedora mirrors infrastructure that they are running, to enable statistics
17:08:18 <travier> The details of the implementation that make this OK from a privacy perspective are in the ticket
17:08:39 <travier> This can also be disabled if users do not want to be counted
17:09:09 <travier> So if we are to enable that by default, we need to announce it to let people opt out of it
17:09:28 <travier> Note that currently, if you overlay any RPM on top of the base image, this triggers counting already
17:09:45 <travier> (EOF)
17:10:02 <dustymabe> travier: i guess we could or should maybe follow what other Fedora editions are already doing
17:10:18 <travier> they have count me enabled by default
17:10:19 <dustymabe> Counting is on by default already in other editions, right?
17:10:23 <travier> yes
17:10:42 <jlebon> and IIUC, it was also enabled by default on upgrade, right?
17:10:47 <travier> And this will have to be discussed for other rpm-ostree editions but I don't expect resistance
17:10:57 <travier> yes I think so
17:10:59 <dustymabe> I'd say we match, but a fedora magazine article explaining would be really nice to have
17:11:12 <lucab> in theory we prepared the pinger config exactly for this case
17:11:22 <travier> This is really hard to weaponize for tracking as far as I know
17:11:25 <lucab> in practice I don't know whether we want/need to link the two
17:11:41 <jlebon> the plan you have in https://github.com/coreos/fedora-coreos-tracker/issues/717#issue-789135655 looks good to me
17:12:38 <dustymabe> yep sounds good to me
17:13:03 <dustymabe> what do the mechanics look like if I want to opt out of counting.. what would I do today?
17:13:12 <travier> yes, at some point we discussed doing that in a separate program (pinger) vs rpm-ostree
17:13:29 <PanGoat> I'm just catching up on the reading. In the implementations completed, is there notice that it's enabled when DNF is run?
17:13:41 <travier> but it felt easier doing it in rpm-ostree in the beginning due to libdnf support & repo parsing. Maybe this could be moved
17:14:23 <travier> https://coreos.github.io/rpm-ostree/countme/ > to disable
17:14:34 <lucab> travier: I mean, just the config, as we told people upfront how to tweak pinger
17:15:11 <travier> hum
17:15:24 <lucab> but I don't have big concerns on enabling this for new installs and updates
17:16:09 <jlebon> it's nice that the countme=0 bit is shared with how it's done on dnf-based Fedora
17:17:03 <dustymabe> jlebon: the problem with `sed -i 's/countme=1/countme=0/g' /etc/yum.repos.d/*.repo` is that those repo files will never get updated again
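The sed one-liner above can be expressed as a small helper. This is a hypothetical sketch, not part of any shipped tool, and it assumes repo files carry a plain `countme=1` line as on dnf-based Fedora:

```python
import re

def disable_countme(repo_text: str) -> str:
    """Flip countme=1 to countme=0 in yum repo file content.

    Equivalent to: sed -i 's/countme=1/countme=0/g' /etc/yum.repos.d/*.repo
    but anchored to whole lines so e.g. other values are left untouched.
    """
    return re.sub(r"(?m)^countme=1$", "countme=0", repo_text)
```

As dustymabe points out, editing the repo files in place has the downside that package updates will no longer replace them.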
17:17:07 <smooge> I want to thank travier for helping work on this
17:17:18 <travier> lucab: we could indeed add a check for that to skip counting
17:17:29 <travier> smooge: thanks
17:17:30 <jlebon> dustymabe: i think that's a problem shared with traditional too
17:17:37 <smooge> they were very open to suggestions and items
17:17:47 <dustymabe> smooge++
17:17:47 <zodbot> dustymabe: Karma for smooge changed to 10 (for the current release cycle):  https://badges.fedoraproject.org/tags/cookie/any
17:18:11 <lucab> travier: yeah that was my reasoning, but I don't think we really want it
17:18:19 <dustymabe> jlebon: i wonder if we should prefer the disable of the timer
17:18:43 <jlebon> dustymabe: that won't disable counting via pkglayering though i think
17:18:46 <dustymabe> well I guess it says "also"
17:18:49 <dustymabe> ahh, ok
17:19:09 <travier> yes, that's the current issue.
17:19:22 <travier> Disabling only the timer will not stop libdnf from trying to count
17:19:23 <dustymabe> so let's say in 2 months I start a new node and I layer a package
17:19:27 <dustymabe> will it get counted twice?
17:19:32 <travier> sort of
17:19:47 <travier> because the User Agent is different, the infra can distinguish the two
17:19:53 <travier> rpm-ostree vs libdnf
17:20:13 <travier> There is an open issue about that in libdnf and I could work on that if needed
17:20:29 <travier> because I agree that disabling the timer feels better as a UX
17:20:47 <jlebon> hmm, yeah.  if we can tell libdnf to turn that off, that'd be cleaner
17:21:03 <jlebon> because then it's just the timer, and there's no double counting under different UAs
17:21:23 <travier> https://github.com/rpm-software-management/libdnf/issues/1068
17:21:26 <dustymabe> travier: thanks so much for working on this.. and answering my questions to help me understand!
17:21:51 <travier> dustymabe: 👍
17:21:53 <jlebon> yeah, this is going to be really useful!
17:22:04 <jlebon> anyone know if the stats are available publicly somewhere?
17:22:23 <smooge> yes
17:22:26 <smooge> give me a moment
17:22:42 <smooge> https://data-analysis.fedoraproject.org/csv-reports/countme/
17:23:01 <PanGoat> nice
17:23:05 <jlebon> sweet, thanks smooge!
17:23:33 <smooge> older compilation of stats are at https://data-analysis.fedoraproject.org/csv-reports/mirrors/
17:25:20 <lucab> ok, I think we are done on this topic
17:25:57 <lucab> I'll go to open floor for the last 5 minutes of the meeting
17:25:57 <dustymabe> so all good on the currently proposed plan?
17:25:59 <lucab> #topic Open Floor
17:26:16 <jberkus> I'd like to ask about shirts/swag
17:26:26 <travier> jberkus: :)
17:26:45 <dustymabe> jberkus++
17:26:46 <jlebon> woah hey didn't know jberkus was here :)
17:26:47 <zodbot> dustymabe: Karma for jberkus changed to 1 (for the current release cycle):  https://badges.fedoraproject.org/tags/cookie/any
17:26:57 <lucab> dustymabe: I didn't see any objections, it doesn't feel like we need voting
17:27:12 <jberkus> We really ought to have some, both for the contributors and the users.  However, I'm not clear that Fedora CoreOS has an official logo I can use to produce those?
17:27:32 <dustymabe> lucab: +1
17:27:38 <travier> Please add objections and remarks to the countme issue :)
17:27:56 <travier> jberkus: we have an official logo !
17:28:07 <lucab> jberkus: there are logos in the official SVG
17:28:25 <jberkus> oh, good.  it wasn't clear that that was official.  link?
17:28:43 <travier> https://getfedora.org/static/images/fedora-coreos-logo.png
17:28:56 <lucab> https://pagure.io/fedora-web/websites/blob/master/f/sites/asset_sources/assets.svg
17:28:57 <travier> I want FCOS swag too :)
17:29:39 <dustymabe> +1 for being able to send swag to contributors
17:30:26 <cyberpear> seems like countme "done" status should be stored in /var -- does it?
17:30:49 <jberkus> if we need to tweak that logo for design reasons, what's the approval process?
17:30:59 <jberkus> also, someone want to work with me on picking out swag?
17:31:07 * dustymabe me me
17:31:07 <travier> cyberpear: it is stored in a private directory created by systemd in /var yes
17:31:15 <cyberpear> thanks!
17:31:25 <dustymabe> jberkus: sent you a DM
17:31:38 <jberkus> ok, good, Dusty to work with me on it
17:31:46 <jberkus> last thing: wanted folks to know about this: http://containerplumbing.org/
17:31:55 <jberkus> look for CfP next week
17:32:02 <travier> https://github.com/coreos/rpm-ostree/blob/master/src/app/rpm-ostree-countme.service#L8-L11
17:32:07 <dustymabe> thanks jberkus
17:32:18 <dustymabe> I have a topic for open floor - cgroups v2
17:32:25 <jberkus> ok, done, thanks
17:32:33 <dustymabe> we should probably open this up as a real topic in the next meeting
17:32:42 <PanGoat> ^^
17:32:50 <dustymabe> but in general, we should try really hard to make cgroups v2 happen for f34
17:32:58 <dustymabe> olem is updating docker, which supports v2
17:33:09 <dustymabe> podman is good already
17:33:19 <dustymabe> anyone know the status of upstream kube
17:33:20 <travier> 👍
17:33:31 <travier> 1.20 should be OK I think
17:34:14 <lucab> dustymabe: mid-release I guess? I think we want to let the new docker soak a bit first, as we can't go back after the cgroup switch
17:35:40 <dustymabe> lucab: I was hoping to get the new docker into `next` soonish along with the cgroups v2 change
17:35:58 <dustymabe> of course, people can choose to configure v1, right?
17:36:22 <lucab> yep, I'm only talking about our defaults
17:37:11 <dustymabe> do you think targeting f34 day 1 is too early?
17:37:17 <lucab> ok, anything else for today? otherwise I'm going to close this here in a few seconds
17:37:32 <dustymabe> lucab: +1 - can discuss later
17:37:35 <jlebon> https://v1-19.docs.kubernetes.io/docs/setup/release/notes/ says: "Support for running on a host that uses cgroups v2 unified mode", so sounds like it's good to go!
17:37:41 <jlebon> let's chat in the ticket?
17:37:46 <lucab> dustymabe: maybe not, I'm just scared of major docker changes
17:38:03 <travier> https://github.com/kubernetes/enhancements/issues/2254
17:38:27 <travier> jlebon: nice!
17:39:07 <lucab> ok, let's move the rest of the discussion to the ticket, I'm closing now
17:39:18 <lucab> #endmeeting