cloud_wg
LOGS
17:01:15 <kushal> #startmeeting cloud_wg
17:01:15 <zodbot> Meeting started Wed Dec 16 17:01:15 2015 UTC.  The chair is kushal. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:15 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
17:01:15 <zodbot> The meeting name has been set to 'cloud_wg'
17:01:20 <kushal> #topic Roll Call
17:01:29 <dustymabe> .hellomynameis dustymabe
17:01:29 <kushal> .hellomynameis kushal
17:01:32 <zodbot> dustymabe: dustymabe 'Dusty Mabe' <dustymabe@redhat.com>
17:01:35 <zodbot> kushal: kushal 'Kushal Das' <mail@kushaldas.in>
17:02:05 * gholms takes a seat in the bleachers
17:02:17 <kushal> gholms, :)
17:02:42 <kushal> #chair gholms dustymabe jzb jbrooks scollier
17:02:42 <zodbot> Current chairs: dustymabe gholms jbrooks jzb kushal scollier
17:02:48 <kushal> Did I miss anyone?
17:03:15 * jsmith is here
17:03:17 <dustymabe> haven't seen adimania in a while
17:03:23 <linuxmodder> .hello corey84
17:03:24 <zodbot> linuxmodder: corey84 'Corey Sheldon' <sheldon.corey@gmail.com>
17:03:29 <kushal> #chair linuxmodder
17:03:29 <zodbot> Current chairs: dustymabe gholms jbrooks jzb kushal linuxmodder scollier
17:03:39 <kushal> #topic Action items from last meeting
17:03:52 <kushal> * Kushal will write to list about #143 (f24 features)
17:03:52 <linuxmodder> still kinda new to cloud but  no better way to learn right
17:04:00 <kushal> * kushal to do more posts for attracting/guiding volunteers
17:04:06 <kushal> linuxmodder, correct
17:04:14 <kushal> linuxmodder, being in the meeting is a good start.
17:04:23 <dustymabe> linuxmodder right :)
17:04:23 <kushal> So I have started a thread.
17:04:24 <dustymabe> corey right?
17:04:34 <kushal> But it seems no reply yet.
17:04:45 <dustymabe> oops. I see your .hello from above
17:04:55 <kushal> dustymabe, feel free to put a change :)
17:05:16 <dustymabe> kushal: all of my ideas were half baked or for F24 time frame
17:05:22 <dustymabe> err.. F25
17:05:39 <kushal> dustymabe, then get the wiki pages ready for F25 :)(
17:05:41 <kushal> :)
17:05:48 <kushal> I will write to list for that too
17:05:54 <kushal> number80, you there?
17:06:36 <kushal> I guess maxamillion's openshift can go in as a change
17:06:45 <jzb> kushal: +
17:06:48 <jzb> er, +1
17:07:50 <kushal> let me find my change page
17:08:20 <rtnpro> .fas rtnpro
17:08:21 <zodbot> rtnpro: rtnpro 'Ratnadeep Debnath' <rtnpro@gmail.com>
17:08:28 <kushal> rtnpro, welcome
17:08:32 <dustymabe> I'm +1 for openshift change as well
17:08:44 * rtnpro reads above
17:08:50 <kushal> https://fedoraproject.org/wiki/Changes/Cloud_MOTD
17:08:50 <jzb> there was a question on the list recently about using Gluster w/ the host
17:09:05 <jzb> I'm poking some Gluster folks to see if we can't get that done in the F24 timeframe
17:09:14 <jzb> so you can just use Gluster in a container.
17:09:48 <kushal> jzb, I think we will have to help them in filing the change
17:09:55 <nzwulfin> jzb: we'll still need the gluster client on the host or somewhere that k8s can use it
17:09:58 <gholms> kushal: Why do that only on the cloud image?
17:10:03 <jzb> kushal: yeah
17:10:23 <jzb> nzwulfin: are we sure that can't be done in a privileged container?
17:10:39 * jzb wonders how much weight the gluster client would add
17:11:01 <kushal> gholms, I am creating the generic package, so it can be used in any place within Fedora
17:11:21 <nzwulfin> jzb: i'm not sure, i just want to make sure we're talking about the right goal
17:11:27 <kushal> gholms, jzb also iirc server is trying to do something for their usecases.
17:11:31 <gholms> kushal: That doesn't answer my question.  ;)
17:11:49 <linuxmodder> had a d/c  sorry
17:11:50 <linuxmodder> makes sense to amend the motd for that
17:11:58 <linuxmodder> blind assumption gholms  it changes more frequently than the other offerings
17:12:00 <jzb> gholms: only do what?
17:12:08 <kushal> gholms, I am trying to enable it for cloud first.
17:12:12 <linuxmodder> probably noticeable but doubtfully impactful
17:12:19 <nzwulfin> jzb: i've not found anything done currently that talked about clients in containers, just gluster servers
17:12:21 <linuxmodder> kushal,  the server wg is
17:12:27 <gholms> jzb: To only do MOTD updates on cloud
17:12:28 <linuxmodder> gholms,  have to start somewhere and make/get a functional template, why not cloud
17:12:30 <kushal> gholms, cloud base image.
17:12:52 <linuxmodder> big enough audience to test it, not frontline enough to kill things if they break, gholms
17:13:14 <dustymabe> i'm +1 for cloud MOTD
17:13:16 <kushal> gholms, also the atomic should be a bit different.
17:13:18 <gholms> Seems like if it's good enough for the product one is least likely to log into it's good enough for the rest, at least at first glance.
17:13:22 <linuxmodder> kill things == novice or non-dev user borking things and having no clue how to fix them
17:13:27 <rtnpro> +1 for MOTD
17:13:54 <linuxmodder> +1 MOTD as well
17:14:08 <jbrooks> jzb, glusterfs-fuse and ceph-common are in rhelah now, and they'll be in the next centos atomic host
17:14:09 <kushal> gholms, it is good for workstation too, but I don't have any say there directly.
17:14:50 <gholms> People introduce systemwide changes all the time.  Just propose it as one.  ;)
17:15:01 <kushal> Anyone else wants to put in any change?
17:15:12 <jzb> jbrooks: in the host itself?
17:15:20 <jbrooks> jzb, yes, in the tree
17:15:23 <jzb> sigh
17:15:29 * gholms is for it, for the record, though it should take instances without Internet access into consideration
17:15:32 <jbrooks> Just in this latest version
17:15:48 <kushal> gholms, in that case, it will have other messages :)
17:15:49 <jzb> ok
17:15:57 <kushal> gholms, I will keep in mind.
17:16:04 <jzb> well, if they've thrown in the towel for now, might as well pull those in.
17:16:09 <nzwulfin> jbrooks: probably for openshift support
17:16:14 <linuxmodder> gholms,  question how you plan to do the polling for deltas then
17:16:22 <gholms> kushal: If it doesn't take a minute longer to boot...  ;)
17:16:39 <gholms> linuxmodder: Do it asynchronously
17:16:40 <kushal> #chair nzwulfin jbrooks
17:16:40 <zodbot> Current chairs: dustymabe gholms jbrooks jzb kushal linuxmodder nzwulfin scollier
17:16:44 <jbrooks> I think it's a matter of... not everything can be in a container right now
17:17:17 <kushal> gholms, yup yup, I will ask for the ideas about how to implement it properly :)
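For readers following the Cloud_MOTD change linked above: the details live on that wiki page, but the "do it asynchronously so boot isn't slowed down" idea gholms raises could look roughly like the sketch below. This is an illustration only; the unit names and script path are made up and are not part of the actual change proposal.

    # /etc/systemd/system/cloud-motd-refresh.service  (hypothetical name)
    [Unit]
    Description=Refresh /etc/motd in the background
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=oneshot
    # A small script would write a short status summary to /etc/motd;
    # on instances without Internet access it can fall back to a static message.
    ExecStart=/usr/libexec/cloud-motd-refresh

    # /etc/systemd/system/cloud-motd-refresh.timer  (hypothetical name)
    [Timer]
    # Run a couple of minutes after boot and then once a day, so neither
    # boot nor login is ever blocked waiting on the refresh.
    OnBootSec=2min
    OnUnitActiveSec=1d

    [Install]
    WantedBy=timers.target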
17:17:18 <nzwulfin> can we propose cloning walters for F24 then ;)
17:17:31 <kushal> nzwulfin, +1 to that
17:17:42 <gholms> walters as a service (tm)
17:17:49 <jzb> nzwulfin: dude, we've had clone walters as a feature request for years.
17:17:52 <jbrooks> We hella need that
17:17:59 <nzwulfin> i only propose that b/c he'll read it in logs not live :) :)
17:18:18 <nzwulfin> i think a second dustymabe would help too
17:18:22 <kushal> So everyone please at least reply to that thread in the list about f24 features.
17:18:26 <dustymabe> :)
17:18:36 <dustymabe> I'm working on a 2nd dustymabe in the lab
17:18:47 <kushal> Moving to tickets then.
17:18:47 <dustymabe> frankenmabe
17:18:57 <kushal> #topic Fedora Cloud FAD (late 2015/early 2016) https://fedorahosted.org/cloud/ticket/115
17:19:17 <kushal> dustymabe, so it seems most of the Fedora Engineering team folks will stay back till the 9th
17:19:18 * dustymabe fail
17:19:25 <kushal> dustymabe, good name btw
17:19:41 <dustymabe> kushal: right. I am planning to try to stay an extra few days in Brno to attend those meetings
17:20:14 <kushal> dustymabe, +1 :)
17:20:21 <kushal> jzb, I hope you will be there.
17:20:55 <jzb> kushal: yes
17:21:23 <jzb> kushal: trying to cut down on the "on the road" time, but I'm in Europe from the Thursday before FOSDEM through Tuesday after DevConf.cz
17:21:26 <dustymabe> A proper FAD at some point would be nice too
17:21:30 <kushal> Nice, maybe we can start putting names in the ticket (whoever we know for sure is attending)
17:21:48 <tflink> as a heads up, there is a RH QE event in brno before devconf if that ends up being relevant
17:21:55 <number80> I'll be around from Feb 3 to Feb 9 (morning)
17:22:03 <kushal> dustymabe, I don't mind. Fly me to moon.
17:22:47 <linuxmodder> (having ISP stability issues today -- apologize in advance if I re-hash things)
17:22:51 <kushal> number80, hello there
17:22:54 <kushal> #chair number80
17:22:54 <zodbot> Current chairs: dustymabe gholms jbrooks jzb kushal linuxmodder number80 nzwulfin scollier
17:23:05 <kushal> Moving to next ticket.
17:23:22 * number80 preparing fesco meeting
17:23:32 <linuxmodder> nice number80
17:23:42 <kushal> #topic Migrate all Dockerfiles / Images to systemd where possible https://fedorahosted.org/cloud/ticket/121
17:24:06 <dustymabe> kushal: let's lump all dockerfile related tickets together and ask if anyone has anything to add
17:24:19 <dustymabe> I feel like these have kind of stagnated because we don't really know what we are doing with them
17:24:52 <jsmith> I feel the same :-p
17:25:00 <kushal> scollier, this one seems to be on you.
17:25:13 <jzb> dustymabe: I've been poking mattdm about this
17:25:22 <jzb> we kind of stalled with the build system "everything not on GitHub" etc thing
17:25:36 * jzb goes to see if he can summon mattdm
17:25:39 <number80> *nods*
17:25:48 <scollier> kushal, reading.
17:26:28 <scollier> kushal, right, so I spoke with jzb about this and we really need to follow up and get closure.  timelines, direction, etc...
17:26:37 <linuxmodder> stabbing in the dark on containers, but can't an unpriv'd user be given something similar to sudo access for such things
17:26:42 <nzwulfin> does the proposal on the list to add LABELs to the Dockerfiles help or hinder motion
17:26:45 <jzb> scollier: are you a proven packager?
17:26:53 <scollier> jzb, no
17:26:57 <jzb> or do we have one in the cloud wg to own packages?
17:26:59 <jzb> hrmph
17:27:03 <kushal> number80, is
17:27:10 <jzb> who owns the current fedora-dockerfiles package?
17:27:27 <kushal> .whoowns fedora-dockerfiles
17:27:27 <zodbot> kushal: adimania
17:27:31 <kushal> jzb, :)
17:27:36 <jzb> ah
17:27:41 <linuxmodder> .whoowns fedora-dockerfiles
17:27:41 <zodbot> linuxmodder: adimania
17:27:49 <linuxmodder> kushal, h^
17:27:51 <jzb> and adimania has been absent lately
17:27:53 <jzb> bonus
17:27:58 <jzb> ok
17:28:01 <linuxmodder> lovely
17:28:03 <kushal> jzb, yes, he is starting his own work iirc
17:28:11 <jzb> kushal: his own work?
17:28:13 <jzb> how do you mean?
17:28:37 <linuxmodder> likely  like me  a  startup (in my case)
17:28:46 <jzb> OK
17:28:56 <kushal> yes
17:28:59 <linuxmodder> or some similar such
17:29:00 <kushal> or consulting
17:29:06 <jzb> ok
17:29:08 <kushal> jzb, I don't have details.
17:29:13 <jbrooks> I don't think I've ever used these from the rpm
17:29:17 <jbrooks> just from git
17:29:22 <jzb> my proposal for now is this
17:29:32 <jzb> let's start by fixing the stuff where it lives now
17:29:33 <linuxmodder> jbrooks,  same
17:29:35 <kushal> jbrooks, yes, but rpm was for normal users to see the examples.
17:29:42 <kushal> jzb, +1 to that
17:29:53 <jzb> nothing is stopping anybody from moving the files at a later date for the buildsystem or whatever
17:29:54 <linuxmodder> +1 fixing "in-place"
17:30:01 <jbrooks> github is pretty easy for normal people, too -- you can browse right to the example
17:30:08 <jzb> if we want to put them in pagure later or split out into separate packages, then they can do so
17:30:13 <scollier> jzb, ack.
17:30:20 <jzb> any objections?
17:30:22 <linuxmodder> the cli bit is a bit  "oh shit" for normal users often tho
17:30:35 <linuxmodder> none from me
17:30:47 <jbrooks> +1 to in-place
17:30:58 <kushal> Cool
17:31:01 <scollier> jzb, i'll let the co-maintainer know
17:31:14 <kushal> So should I skip the rest of the dockerfiles tickets?
17:31:22 <jzb> #commands
17:31:22 <zodbot> Available commands: #accept #accepted #action #agree #agreed #chair #commands #endmeeting #halp #help #idea #info #link #lurk #meetingname #meetingtopic #nick #rejected #restrictlogs #save #startmeeting #topic #unchair #undo #unlurk
17:31:37 <jzb> ok, so proposed:
17:32:25 <linuxmodder> I'm good with a prop. to  fix in place  and  move to pagure
17:32:32 <jzb> Cloud group agrees to continue to maintain Dockerfiles on GitHub for now until a more concrete proposal and timeline is proposed to move them elsewhere. Work will re-commence on updating Dockerfiles for dnf, systemd, etc.
17:33:04 <jzb> can we vote on that (and cloud wg members, please add (binding) to +1 or -1)
17:33:08 <jzb> ?
17:33:18 <nzwulfin> +1
17:33:22 <linuxmodder> jzb,  did you mean #proposal Cloud group agrees to continue to maintain Dockerfiles on GitHub for now until a more concrete proposal and timeline is proposed to move them elsewhere. Work will re-commence on updating Dockerfiles for dnf, systemd, etc.
17:33:45 <jzb> linuxmodder: ish, doesn't look like #proposal is an accepted command :-)
17:33:47 <linuxmodder> +1
17:33:53 <kushal> +1
17:33:57 <dustymabe> +1 for me
17:34:14 <linuxmodder> #proposal Cloud group agrees to continue to maintain Dockerfiles on GitHub for now until a more concrete proposal and timeline is proposed to move them elsewhere. Work will re-commence on updating Dockerfiles for dnf, systemd, etc.
17:34:27 <linuxmodder> it works elsewhere in meetings, that's odd
17:34:30 <kushal> +1 again
17:34:35 <kushal> #chair linuxmodder
17:34:35 <zodbot> Current chairs: dustymabe gholms jbrooks jzb kushal linuxmodder number80 nzwulfin scollier
17:34:41 <kushal> you are there
17:34:54 <jzb> linuxmodder: I did a #commands to check
17:34:57 <linuxmodder> ah you chair'd my fas not nick lol
17:35:06 <jzb> it's not listed
17:35:16 <scollier> oh, jzb, my vote +1
17:35:20 <jzb> anyway - any -1's ?
17:35:26 <linuxmodder> +1
17:35:27 <jzb> going once...
17:35:28 <dustymabe> not that I can tell
17:35:28 <jbrooks> +1
17:35:31 <jzb> ok
17:35:46 <rtnpro> +1
17:35:48 <linuxmodder> looks like all +1 s
17:35:52 <jzb> #agreed Cloud group agrees to continue to maintain Dockerfiles on GitHub for now until a more concrete proposal and timeline is proposed to move them elsewhere. Work will re-commence on updating Dockerfiles for dnf, systemd, etc.
17:35:59 <jzb> thanks all
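For context on the "updating Dockerfiles for dnf, systemd" work just agreed on: the systemd pattern for a Fedora-based image is roughly the following. This is a generic sketch, not a file from the fedora-dockerfiles repository; the package and port are arbitrary examples.

    FROM fedora:23

    # dnf instead of yum; clean the cache to keep the layer small
    RUN dnf -y install httpd && dnf clean all

    # Enable the service so systemd starts it when the container boots
    RUN systemctl enable httpd
    EXPOSE 80

    # Run systemd as PID 1 inside the container
    CMD ["/usr/sbin/init"]

Containers built this way generally need the host's cgroup filesystem available, e.g. something like `docker run -d -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 80:80 <image>`; the exact flags depend on the Docker version in use.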
17:36:28 <kushal> jzb, so skipping the rest of the dockerfiles tickets?
17:36:49 <jzb> kushal: anything that's outside that scope we need to discuss?
17:37:02 <kushal> nope, all normal ones
17:37:16 <nzwulfin> do we need to talk about the LABEL proposal from the list?
17:37:29 <kushal> #topic make docker archived image get imported with lowercase tag https://fedorahosted.org/cloud/ticket/131
17:37:29 <linuxmodder> hailed mattdm in -admin he may be in shortly
17:37:36 <kushal> nzwulfin, may be on openfloor
17:37:40 <nzwulfin> kushal: ok
17:38:01 <dustymabe> fix is in upstream as far as I know
17:38:02 <kushal> iirc the patch was ready.
17:38:03 <jzb> oh, didn't Ian say last week this was solved?
17:38:15 <kushal> jzb, may be we missed it :(
17:38:32 <dustymabe> maybe solved upstream.. might have to trickle down and then we might have to change our tools to use it
17:39:11 <dustymabe> i would leave ticket open for now
17:39:17 <kushal> I will ask Adam for his input.
17:39:26 <linuxmodder> +1 for leaving open
17:39:39 <jzb> k
17:39:40 <kushal> Okay. Moving to next ticket then.
17:39:49 <kushal> #topic Fedora 23 Retrospective https://fedorahosted.org/cloud/ticket/135
17:40:12 <kushal> jzb, do you want to say thing on this?
17:40:29 <kushal> * anything
17:40:53 <jzb> I think we covered it last week, I probably need to go back and update the ticket.
17:41:41 <kushal> jzb, thanks. Adding an action item.
17:41:56 <linuxmodder> agree with comments  but not sure how to go about that
17:42:00 <kushal> #action jzb will update #135 on Fedora 23 retrospective
17:42:23 <kushal> #topic vagrant boxes fixups https://fedorahosted.org/cloud/ticket/136
17:42:24 <jzb> linuxmodder: "that" being...?
17:42:47 <linuxmodder> fixing the  lack of  autocloud  testing on docker and the like
17:43:14 <kushal> linuxmodder, we added those tests :)
17:43:30 <kushal> dustymabe, do you want to say anything on this?
17:43:37 <dustymabe> on the current ticket
17:43:40 <dustymabe> ?
17:43:41 <kushal> I thought we can just add a device from virt-manager
17:43:50 <kushal> dustymabe, Yes, #136
17:44:16 <dustymabe> kushal: right
17:44:37 <kushal> dustymabe, So do we need to do anything special to the Vagrant images?
17:44:52 <dustymabe> we should be able to easily add this, but I think it would take some coordination with one of us and maybe someone like Ian to pull it off
17:45:12 <dustymabe> If I had some dedicated time for it I could do it, but I don't right now with the wedding coming up and atomicapp work
17:46:07 <kushal> dustymabe, maybe I misunderstood, do we have to add that device in the image? or can we just do it in the instance from virt-manager or virtualbox?
17:46:24 <jbrooks> That's pretty easy
17:46:25 <jzb> dustymabe: is it possible to do that via a Vagrantfile?
17:46:37 <jbrooks> Someone could mod their vagrantfile to do it, ... like jzb said
17:46:51 <jzb> so maybe a sample vagrantfile is what's really called for?
17:46:53 <dustymabe> I think it is something we can add to the vagrant boxes
17:47:03 <dustymabe> to make it the default
17:47:05 <kushal> jbrooks, jzb so we should do a few blog posts + more docs
17:47:22 <kushal> dustymabe, I think we should keep to the Vagrantfile
17:47:34 <kushal> Vagrant users can decide what they want to do
17:48:50 <jzb> let's start there, and then see if there's a call to do it in the box by default?
17:49:12 <jzb> I don't know that I've ever been like "this vagrant box needs a cd device"
17:49:33 <dustymabe> jzb: the reason the guy wanted it was so he could follow the tutorial for installing vbox guest additions
17:49:38 <linuxmodder> (not a vagrant guy so staying out of it  but generally seems like a saen approach)
17:49:40 <dustymabe> it tells people to insert a cd
17:49:46 <linuxmodder> er,sane
17:50:09 <dustymabe> there is nothing wrong with adding it to the box in my opinion
17:50:16 <jbrooks> I bet we could do the whole thing in a vagrant provisioning script
17:50:17 <dustymabe> people who didn't use it before won't notice a difference
17:50:22 <dustymabe> people who want it will use it
17:50:22 <linuxmodder> dustymabe,  the vbox site  shows that you use the  cd adapter bit
17:50:24 <jbrooks> I bet it'd be super short
17:50:28 <linuxmodder> has since 4.3.x
17:50:55 <dustymabe> either way.. what I am saying is that we don't lose anything by adding a cd device to the vm
17:51:14 <dustymabe> and obviously somebody has asked for it, so some people would see it as a benefit
17:51:37 <linuxmodder> isn't vagrant more  lib-virt-ish than  vbox-ish?
17:51:55 <jzb> linuxmodder: probably more people using vbox than libvirt
17:51:56 <jbrooks> the opposite, actually
17:51:57 <linuxmodder> seems more like a lazy factor to me  tbh
17:51:58 <dustymabe> linuxmodder: not sure what that means.. it basically builds on the hypervisor
17:52:13 <jbrooks> dustymabe, but, yeah, nothing lost by adding it
17:52:14 <dustymabe> lazy factor.. meaning people shouldn't be lazy?
17:52:24 <nzwulfin> does adding it to the box means it's available to all providers without mods to the Vagrantfile?
17:52:33 <jzb> nzwulfin: yes
17:52:40 <nzwulfin> thanks
17:52:45 <linuxmodder> dustymabe, no, more like "I can't google this shit" lazy, which we all know hardly ever has a good turnout
17:53:02 <jbrooks> a plugin for installing/updating this: https://github.com/dotless-de/vagrant-vbguest
17:53:22 <dustymabe> yeah.. well part of the draw of vagrant is making things easy.. otherwise people would just use libvirt/vbox directly
17:53:30 <jzb> dustymabe: +1
17:53:36 <linuxmodder> jbrooks,  plugin being totally optional I assume  not forced on user?
17:53:41 * nzwulfin is going to stay out of that one...
17:53:43 <kushal> My search gave me this at the first result https://gist.github.com/leifg/4713995
17:53:44 <linuxmodder> if so I'm +1
17:53:50 <jbrooks> Yeah, you'd go do it yourself
17:54:01 <jzb> linuxmodder: so - odds are if you google "vagrant" and anything you're going to wind up on a tutorial for Ubuntu
17:54:03 <dustymabe> so.. that was just one case where someone might want a cd device
17:54:16 <jzb> linuxmodder: so sending folks to Google rather than helping them, probably not the best strategy
17:54:18 <linuxmodder> and ubuntu != fedora  right?
17:54:20 <dustymabe> there are others I'm sure
17:55:02 <dustymabe> anywho.. at least for the time being there is no one working on this so we'll move to next year probably
17:55:20 <jzb> dustymabe: I will however ask that we make tickets more specific ;-)
17:55:21 <kushal> dustymabe, I will look into this, but only after 23rd :)
17:55:42 <kushal> Okay we have only 5 minutes
17:55:43 <dustymabe> jzb: yeah we were getting some feedback on the fed mag article
17:55:47 <dustymabe> and I thought we might have more items
17:56:04 <kushal> Moving to next ticket?
17:56:10 <dustymabe> https://fedoramagazine.org/fedora-cloud-vagrant-boxes-atlas/
17:56:23 <jzb> updated ticket
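For reference on #136: the gist kushal found takes the Vagrantfile route, which for the VirtualBox provider would look roughly like the snippet below. The box name is a placeholder, and the "IDE" controller name plus port/device numbers are assumptions that vary between base boxes (check `VBoxManage showvminfo`). For the guest-additions case specifically, `vagrant plugin install vagrant-vbguest` (the plugin jbrooks linked) avoids needing a CD at all.

    Vagrant.configure(2) do |config|
      config.vm.box = "fedora-23-cloud-base"   # placeholder box name

      config.vm.provider "virtualbox" do |vb|
        # Attach an empty DVD drive so tutorials that say "insert a CD" work
        vb.customize ["storageattach", :id,
                      "--storagectl", "IDE",   # controller name differs per box
                      "--port", 1, "--device", 0,  # adjust to the box's layout
                      "--type", "dvddrive",
                      "--medium", "emptydrive"]
      end
    end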
17:56:28 <kushal> #topic Producing 2 week atomic images https://fedorahosted.org/cloud/ticket/139
17:56:37 <kushal> We had a release yesterday.
17:57:01 * jzb is working on a post for the Magazine/Project Atomic
17:57:19 <kushal> All is good on that side.
17:57:33 <kushal> Going to open floor?
17:57:40 <kushal> #topic Open Floor
17:57:46 * jsmith has nothing additional
17:57:47 <kushal> networkd :)
17:58:01 <nzwulfin> I'd like to +1 the proposal on the list to add LABELs to our example Dockerfiles :)
17:58:15 <dustymabe> nzwulfin: I'm ok with that
17:58:22 <kushal> nzwulfin, I am +1
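The LABEL proposal itself is on the mailing list and is not quoted in this log; purely as an illustration of the idea, Dockerfile LABELs attach key/value metadata that tools can read without running the image (the label names below are common conventions, not necessarily the ones proposed):

    FROM fedora:23

    # Arbitrary metadata, visible via `docker inspect`
    LABEL name="fedora/example-httpd" \
          version="0.1" \
          maintainer="you@example.com"

    # Some tooling (e.g. the Atomic CLI) reads RUN/INSTALL-style labels, with
    # its own substitution conventions, to know how a container is meant to run
    LABEL RUN="docker run -d -p 80:80 --name NAME IMAGE"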
17:58:31 <kushal> Any comments on networkd?
17:58:34 <dustymabe> kushal: re: networkd
17:58:50 <dustymabe> did anyone ever talk to the point of Atomic Host sharing tree with bare metal
17:58:52 <tflink> I'm still concerned about making sure it's well tested
17:59:02 <dustymabe> and the implications of moving to networkd for bare metal
17:59:11 <dustymabe> I feel like this is a discussion walters should be involved in
17:59:39 <kushal> tflink, right concern, but we have to do it someday. So why not now.
17:59:48 <tflink> swapping out the network stack means that all the other network testing for other fedora products would not be re-usable for the cloud image
17:59:53 <tflink> kushal: why do we have to do it?
18:00:04 <tflink> I still don't understand that
18:00:14 <kushal> tflink, 1. systemd 2. Other major cloud images are doing the same.
18:00:23 <tflink> the only things I've seen are "because it's smaller" and "because everyone else is doing it"
18:00:41 <kushal> tflink, everyone else doing it is a good reason for users.
18:00:55 <tflink> how would most users even notice?
18:00:57 <kushal> whereas it is also something already in the image.
18:01:14 <dustymabe> mhayden: might have some input
18:01:27 <kushal> tflink, I guess we should continue 1-2 more minutes :)
18:01:28 * mhayden stumbles in
18:01:29 <siXy> re: open floor: on the pre-meeting subject of adding stuff to the base image, I'd love for there to be a method for doing DNS lookups, to debug DNS issues.
18:01:48 <dustymabe> siXy: for atomic host you mean?
18:01:52 <siXy> yes.
18:02:01 <kushal> mhayden, we are having the Networkd discussion for F24 :)
18:02:08 <mhayden> oh, okay
18:02:18 <kushal> mhayden, also having a thread in the list,
18:02:30 <kushal> mhayden, may be you want to reply to that :)
18:02:35 <dustymabe> siXy: I'm +1 for that as well
18:02:40 <dustymabe> just haven't had time to submit a PR
18:02:43 <linuxmodder> other than the smaller bit, what other bonuses or pros does networkd bring exactly
18:02:44 <kushal> anyway, ending this meeting? Is that okay?
18:02:58 <kushal> linuxmodder, easy to use.
18:02:58 <nzwulfin> siXy: the tools container?
18:03:06 <dustymabe> kushal: let's finish out the discussion on networkd
18:03:11 <kushal> dustymabe, Okay
18:03:11 <siXy> nzwulfin: if DNS is broken, you won't be able to pull an image
18:03:16 <siXy> nzwulfin: therefore that doesn't help
18:03:21 <dustymabe> siXy: right
18:03:24 <nzwulfin> siXy: yeah i was about to SMH for that
18:03:25 <dustymabe> that has been my argument in the past
18:03:30 <nzwulfin> soon as i hit enter ...
18:03:33 <dustymabe> so it needs to be in there
18:03:33 <tflink> I'm still concerned that we're 1) making atomic the primary deliverable. and 2) making major changes in the cloud image which significantly increase its testing requirements
18:03:34 <siXy> :)
18:03:35 <linuxmodder> can't you still use IPs  in the interim
18:03:39 <kushal> mhayden, so here, people want to know about the + points of networkd.
18:04:03 <jzb> tflink: what's the concern about 1 exactly?
18:04:05 <tflink> when to be quite frank, there isn't an incredible track record of testing the cloud image when it mostly overlaps with a small server install
18:04:08 <kushal> tflink, I think we already took the decision of making atomic as the primary deliverable.
18:04:20 <tflink> jzb: removing focus at the same time we're changing scope
18:04:28 <tflink> I'm not saying we shouldn't
18:04:41 <jzb> tflink: that's good, because as kushal said it's already decided
18:04:42 <siXy> linuxmodder: not really.  I've no idea what the IP of docker hub might be, for a start.  adding nslookup or similar shouldn't be that expensive.
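A side note on the DNS-debugging request: until something like bind-utils is pulled into the host, `getent` (shipped in glibc-common, which should already be in the tree as a glibc dependency) exercises the same libc resolver path most daemons use, so a basic sanity check is possible without pulling any image:

    # Resolve a name through the libc resolver (nsswitch + /etc/resolv.conf)
    getent hosts fedoraproject.org || echo "DNS lookup failed"

    # Check which resolvers the host is actually configured to use
    cat /etc/resolv.conf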
18:04:49 <jzb> :-)
18:05:03 <tflink> just concerned that there's a change in focus at the same time there's a change in scope when there's not a great track record of testing in the first place
18:05:09 <tflink> of the cloud image
18:05:09 <kushal> mhayden, still there?
18:05:20 <jzb> I'm not grokking the "change in scope" bit
18:05:27 <tflink> jzb: networkd
18:05:32 <jzb> tflink: also looking for a proposed solution
18:05:38 <dustymabe> so I'm not too worried about networkd not working in cloud environments
18:05:39 <jbrooks> Does the rest of Fedora have an opinion on networkd?
18:05:41 <tflink> nothing else in fedora uses it
18:05:45 <tflink> jzb: solution for what?
18:05:46 <mhayden> kushal: i am
18:05:47 <kushal> tflink, that is why we are bringing in more volunteers for helping in testing.
18:05:57 <dustymabe> but the problem is that atomic host shares the tree with the bare metal version
18:06:00 <jzb> tflink: teh concern
18:06:02 <jzb> er, the
18:06:04 <tflink> kushal: that doesn't change the past track record
18:06:13 <kushal> dustymabe, I am not changing anything on the tree
18:06:18 <mhayden> it seems like it would be troublesome if cloud images shifted to networkd without the rest of fedora moving at some level
18:06:20 <kushal> dustymabe, but just enable network in the kickstart file
18:06:21 <tflink> but I'm more of a "believe it when I see it" kind of person
18:06:34 <mhayden> systemd-networkd on workstation isn't the best idea, but it's good for servers/cloud/containers
18:06:50 <tflink> afaik, the cloud image would be the only place where networkd is used by default
18:06:50 <dustymabe> kushal: so what is the point then..
18:06:52 <mhayden> IIRC, ArchLinux has completely gone to systemd-networkd
18:06:52 <kushal> mhayden, but WGs started so that we can take the best decision for us.
18:06:56 <jbrooks> Can we get the server WG to take it, too?
18:06:58 <dustymabe> if we are not going to remove networkmanager
18:07:02 <mhayden> kushal: good stuff
18:07:04 <jbrooks> Then we'd have more ppl testing
18:07:13 <mhayden> jbrooks: i'm on the server wg and i'm happy to bring it up there
18:07:28 <jbrooks> mhayden, cool, worth talking about, at least
18:07:33 <mhayden> definirely
18:07:36 <mhayden> err definitely
18:07:36 <kushal> dustymabe, we use the old initscripts now iirc
18:07:46 <dustymabe> kushal: only for the cloud image
18:07:51 <jzb> For some reason I thought the reason we were entertaining it was b/c other wgs wanted to move to it as well.
18:07:53 <dustymabe> I think atomic host cloud image uses nm
18:07:59 <siXy> FWIW, I'd be concerned about networkd -- it might add complexity to docker network plugins, and I doubt test coverage for this stuff would be easy to create.
18:08:08 <mhayden> systemd-networkd's resource requirements are a little lower than NM last time i looked
18:08:22 <tflink> if there are other WGs looking to use networkd as default, I have fewer concerns about testing scope
18:08:30 <kushal> siXy, mhayden is running networkd on fedora instances in production for a long time.
18:08:51 <jzb> https://fedoraproject.org/wiki/Cloud/Network-Requirements
18:08:57 <jzb> actually mhayden had written this up
18:09:05 <mhayden> oh those docs are awful, who wrote those
18:09:06 <siXy> kushal: yup.  but is he also testing the 10 or so common (fsvo "common") docker network plugins, such as calico?
18:09:28 <siXy> because I can pretty much guarantee that upstream won't be testing against networkd yet.
18:09:31 <kushal> siXy, I don't think even we test those.
18:09:32 <mhayden> i use docker with networkd at the moment... haven't seen any issues
18:09:38 <dustymabe> siXy: define upstream?
18:09:39 <mhayden> if networkd doesn't know a config for the device, it doesn't touch it
18:09:47 <siXy> kushal: right, but they work now, and breaking them would be unfortunate for us at least, and probably others
18:10:11 <siXy> dustymabe: upstream for the network plugin.  so for calico projectcalico.org
18:10:31 <dustymabe> so do they develop all of their stuff on Fedora?
18:10:36 <dustymabe> or other linux distros?
18:10:41 <kushal> siXy, I think that should be upstream's responsibility to test on Fedora.
18:10:45 <dustymabe> what other distros use network manager?
18:11:02 <kushal> dustymabe, both ubuntu and debian are moving to networkd
18:11:11 <tflink> kushal: for everything?
18:11:23 <kushal> tflink, for cloud (headless systems)
18:11:42 <kushal> tflink, for laptop/desktop things NM is the number one.
18:11:43 <jzb> hmmm
18:11:49 <jzb> didn't we agree to do this last release?
18:11:58 <siXy> kushal: reasonable.  but pushing ahead too aggressively with new stuff can make it hard work for users, as the upstream may take a while to catch up.
18:12:05 <tflink> i thought the agreement was not to do it for F23 and revisit later
18:12:13 <siXy> I'm just worried our networking might break, basically :)
18:12:28 <jzb> fwiw
18:12:32 <kushal> coreos has been using networkd since 2014
18:12:32 <tflink> but I wasn't paying a whole lot of attention when that came up, so i could be wrong
18:12:33 <jzb> I think networkd is used by coreos
18:12:37 <jzb> what kushal said :-)
18:12:49 <kushal> siXy, not new :)
18:13:08 <jzb> https://fedoraproject.org/wiki/Changes/Cloud_Systemd_Networkd
18:13:14 <siXy> ok, I haven't looked at coreos in ages.  ignore my objections then :)
18:13:16 <dustymabe> so I think either way we have to get walters input on this
18:13:46 <dustymabe> if he says no and has a good reason then it is basically a show stopper
18:13:46 <kushal> mhayden, have you used it on bare metal?
18:13:55 <dustymabe> networkmanager is what is used now on Atomic Host
18:14:01 <mhayden> kushal: using it on ~ 10-15 bare metal boxes at the moment
18:14:02 <dustymabe> in the cloud and on bare metal
18:14:10 <kushal> mhayden, so no issues.
18:14:20 <mhayden> kushal: including bonded interfaces, vlans, plain bridges, macvlan's, etc
18:14:38 <kushal> mhayden, can you please write these in reply to the cloud list thread.
18:14:43 <kushal> That way others will know.
18:14:45 <mhayden> absolutely
18:14:46 <tflink> that's still not enough testing to release it, in my opinion
18:14:54 * jzb notes we're 15 minutes over
18:15:14 <kushal> jzb, means we are actually working :)
18:15:24 <gholms> Heh
18:15:26 <mhayden> kushal: is that "Fedora cloud image feedback" thread the right one?
18:15:51 <dustymabe> Putting Networkd on cloud Atomic and base image for F24
18:15:52 <kushal> mhayden, nope, "Putting Networkd on cloud Atomic and base image for F24"
18:15:52 <tflink> mhayden: "Putting Networkd on cloud Atomic and base image for F24", I htink
18:15:53 <mhayden> kushal: oh, nevermind -- i see the one with "Putting networkd..."
18:15:55 <tflink> ah
18:15:58 <tflink> ha
18:15:59 <kushal> :)
18:15:59 <dustymabe> haha
18:16:58 <kushal> So after mhayden replies, we will have ask walters for his input.
18:17:12 <kushal> * have to
18:18:13 <kushal> and if he does not have any problem (that we cannot fix), we can go ahead with this.
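For anyone following the networkd thread: the configuration under discussion is small. A minimal sketch is below; the file name and interface glob are illustrative, and the actual proposal is the Cloud_Systemd_Networkd change page jzb linked above.

    # /etc/systemd/network/20-dhcp.network  (illustrative file name)
    [Match]
    # Match ethernet-style names (eth0, ens3, enp0s3, ...); networkd leaves
    # interfaces it has no [Match] for untouched, as mhayden notes above
    Name=e*

    [Network]
    DHCP=yes

Switching over would also mean enabling systemd-networkd (and typically systemd-resolved) in place of NetworkManager or the legacy initscripts, e.g. `systemctl enable systemd-networkd systemd-resolved`.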
18:18:27 <dustymabe> ok endmeeting time
18:18:31 <kushal> 3
18:18:33 <kushal> 2
18:18:36 <kushal> 0.5
18:18:42 <kushal> #endmeeting