17:02:54 <number80> #startmeeting Cloud WG
17:02:54 <zodbot> Meeting started Fri Aug 15 17:02:54 2014 UTC.  The chair is number80. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:02:54 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
17:03:07 <number80> #chair jzb agrimm imcleod
17:03:07 <zodbot> Current chairs: agrimm imcleod jzb number80
17:03:15 <number80> #topic roll call
17:03:23 <number80> .hellomyname is hguemar
17:03:30 <number80> .hellomynameis hguemar
17:03:31 <zodbot> number80: hguemar 'Haïkel Guémar' <karlthered@gmail.com>
17:03:35 <jzb> .hellomynameis jzb
17:03:36 <zodbot> jzb: jzb 'Joe Brockmeier' <jzb@zonker.net>
17:03:40 <agrimm> .hellomynameis arg
17:03:42 <zodbot> agrimm: arg 'Andy Grimm' <agrimm@redhat.com>
17:03:43 <stickster> Whoops, now in the right channel.
17:03:47 <imcleod> .hellomyname is imcleod
17:03:50 <stickster> .hellomynameis pfrields
17:03:50 <zodbot> stickster: pfrields 'Paul W. Frields' <stickster@gmail.com>
17:03:55 <number80> #chair stickster
17:03:55 <zodbot> Current chairs: agrimm imcleod jzb number80 stickster
17:04:01 <imcleod> .hellowmynameis imcleod
17:04:02 <oddshocks> hello!
17:04:09 <agrimm> try again, imcleod !  :)
17:04:14 <number80> #chair oddshocks
17:04:14 <zodbot> Current chairs: agrimm imcleod jzb number80 oddshocks stickster
17:04:16 <oddshocks> sorry many people are speaking to me at the same time
17:04:21 <imcleod> I can't seem to type.  Perhaps I should remain silent.
17:04:29 <stickster> I have that problem too, but the doctor gave me pills.
17:04:32 <number80> no quorum, but at least we can hold a proper meeting
17:04:33 <imcleod> .hellomynameis imcleod
17:04:34 <zodbot> imcleod: imcleod 'Ian McLeod' <imcleod@redhat.com>
17:04:36 <imcleod> yay
17:04:55 <number80> nobody from atomic ? walters ?
17:05:01 <jzb> number80: ahem
17:05:22 <jzb> walters: you around?
17:05:27 <number80> sorry, I know you're the lead for the atomic image :)
17:05:40 <agrimm> number80, I'm involved w/ atomic
17:05:48 <number80> awesome
17:05:49 <scollier> here
17:05:49 <jzb> number80: I'm just trying to herd cats.
17:06:00 <number80> maybe we could start now
17:06:01 * dustymabe here
17:06:14 <jzb> (Herding cats is really a weird expression if you know cats. Mostly, they sleep a lot.)
17:06:16 <agrimm> If I weren't in this meeting, I might finally be making our vagrant builds work again. :)
17:06:29 * jzb boots agrimm out of the meeting.
17:06:36 <agrimm> jzb, I can guarantee walters is one cat who's not sleeping much these days!
17:06:43 <number80> #topic automatic smoketests on image build
17:06:46 <number80> https://fedorahosted.org/cloud/ticket/38
17:06:55 <number80> I bet this is a fast one as roshi is away
17:07:21 <number80> dustymabe noticed that there is no cloud testing days in QA agenda
17:07:30 <imcleod> Also, images aren't quite building yet....  But we are close.
17:07:39 <number80> it was supposed to be held on the same day as RDO (but same as us)
17:08:32 <number80> #info define a testing day when images will build
17:09:01 <jzb> imcleod: define close? :-)
17:09:48 <agrimm> imcleod, are we still having the weird /dev/root issue?
17:09:52 <agrimm> or what?
17:10:09 * number80 hopes we didn't lose imcleod
17:10:13 <imcleod> Yes.  I added the ability to log copious amounts of early boot output.
17:10:35 <imcleod> Which gave us root cause behind that fairly cryptic dracut error.  The network is not coming up, so no stage2 installer, no install, etc.
17:10:46 <imcleod> Believe this is down to a change in the device name in F21.  Working it.
17:10:59 <imcleod> Also believe we may have seen this before but I cannot recall details or what the solution was. Sigh.
17:11:21 <agrimm> device naming always sucks, and changing it sucks more
17:11:36 <number80> imcleod: do you have a ticket number ?
17:11:42 <imcleod> number80: I do not.
17:11:44 <number80> ok
17:11:51 <dustymabe> imcleod: do we somehow specify ksdevice and give the device name?
17:12:00 <imcleod> jzb: I will define "copious" as https://kojipkgs.fedoraproject.org//work/tasks/475/7310475/oz-x86_64.log
17:12:15 <dustymabe> imcleod: and that device name is now wrong?
17:12:32 <imcleod> dustymabe: Oz injects the ks file into the ramdisk itself.  Original ks file explicitly tries to bring up eth0.
17:12:54 <dustymabe> imcleod: got it. thanks
17:13:05 <imcleod> dustymabe: Anaconda folk suggested removing explicit name reference.  That doesn't work either I'm afraid.  I need to spin up a test VM myself and poke around to get the name correct.
17:13:13 <imcleod> I'd prefer that we not have to hardcode a name into the kickstart.
17:13:20 <number80> *nods*
17:13:23 <agrimm> imcleod, so do you get the same error if you try to build with imagefactory elsewhere (e.g., on your laptop) where you can get a vnc access?
17:13:34 <imcleod> Oddly enough, no.  And I'm not sure why.
17:13:38 <dustymabe> imcleod: I know in the past we could use something like ksdevice=link but since we are injecting the kickstart that doesn't apply
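[editor's note: a sketch of the kickstart difference being discussed above. This is a hypothetical fragment, not the actual F21 cloud kickstart; `--device=link` is anaconda's name-agnostic form, which activates the first interface with a link up instead of pinning a device name.]

```
# Hypothetical kickstart fragment -- the real file is not quoted here.
# Explicit name; breaks when the guest device is no longer eth0:
network --bootproto=dhcp --device=eth0 --onboot=on

# Name-agnostic alternative: use the first NIC with carrier:
network --bootproto=dhcp --device=link --onboot=on
```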
17:15:02 <number80> agrimm: I just noticed that this ticket was assigned to red_trela, do you want ownership of this one ?
17:16:27 <agrimm> number80, I guess I could.  It might fit in well with smoke testing that walters and I are going to have to do for atomic images...
17:16:33 <number80> ok
17:16:59 <number80> I guess we could move to the next topic ?
17:17:08 <agrimm> I don't know anything about the tools I should be using for it, though.  probably will need advice on that (from roshi?)
17:17:41 <agrimm> yeah, go ahead and move on
17:17:52 <number80> roshi isn't around, but I will be helping him define test cases for taskotron
17:18:27 <number80> #topic start communication/collaboration on cloud image updates
17:18:29 <number80> https://fedorahosted.org/cloud/ticket/51
17:19:53 <number80> I don't think we moved forward on this matter
17:21:21 <jzb> number80: call it out on the mailing list, perhaps?
17:21:33 <stickster> This seems quite a bit wider than just a few people in this meeting.
17:21:42 <number80> jzb: do you want to do it ?
17:21:51 <jzb> number80: I can bring it up on list, sure.
17:22:01 <stickster> Do we know that relevant QE people are on the list?
17:22:13 <jzb> stickster: we do not, but I'll CC Adam
17:22:19 <jzb> stickster: per the discussion at Flock
17:22:20 <number80> #action jzb start discussion about cloud image updates policy on the cloud/releng list
17:22:30 <stickster> Even if we're looking for people to raise a hand and say, "Yeah, I'll own that"... makes sense. thanks jzb
17:23:12 <number80> so next topic ?
17:23:24 <jzb> number80: sounds good
17:23:43 <number80> #topic Process for determining when and why Docker trusted images need to be rebuilt
17:23:48 <number80> https://fedorahosted.org/cloud/ticket/59
17:23:58 <jzb> that would be me
17:24:03 <number80> yup
17:24:09 <jzb> I will bang out a draft this weekend and send to the list
17:24:48 <jzb> I don't expect my first cut to be the perfect criteria, but having a proposal will let people have something to hack on.
17:24:57 <jzb> number80: EOF
17:25:00 <number80> #action jzb send a draft for Docker trusted images updates policy on the list
17:25:04 <number80> thanks jzb
17:25:34 <number80> #topic Next step on the test plan
17:25:40 <number80> https://fedorahosted.org/cloud/ticket/61
17:26:10 <number80> I should assign this to myself and check with roshi when he's back to finalize this point
17:26:15 <number80> any comments ?
17:26:24 <jzb> number80: is this very much like #51?
17:26:49 <number80> it is similar but not the same
17:26:59 <number80> if you want ownership, no problem with that :)
17:27:22 <jzb> number80: nope, just wondering if they should be two tickets or not.
17:28:14 <number80> the former is about when and why we should trigger updates to the cloud images, the other is related to the generic QA release policy
17:28:44 <number80> It's close enough to be managed by the same person
17:29:10 <jzb> number80: OK
17:29:22 <jzb> number80: feel free to assign to me then
17:29:26 <number80> ok
17:30:08 <number80> #topic Project Atomic weekly status
17:30:15 <number80> https://fedorahosted.org/cloud/ticket/64
17:30:23 <number80> jzb again :)
17:31:17 <jzb> number80: I think we're still at the same place as last time. I haven't gotten any updates from walters on progress. I think he may be a wee bit busy this week.
17:31:23 <number80> ok
17:31:43 <jzb> stickster: any info on your end there?
17:31:56 <agrimm> jzb, he's in all next week, so we should make some progress.  but then he's out for two
17:32:06 * stickster looks to oddshocks and dgilmore
17:32:07 <agrimm> so we _really_ need to make progress next week.  :)
17:32:35 <jzb> agrimm: agreed. I am at LinuxCon next week but will try to make sure to sync with the folks on this...
17:32:38 <stickster> agrimm: One thing we have going for us is we have some more Fedora rel-eng help now, in the form of pbrobinson
17:32:50 <oddshocks> here
17:33:05 <agrimm> stickster, oh, good to know
17:33:34 <number80> awesome
17:34:10 <number80> should we move to the next topic (docker image deliverable)
17:34:18 <stickster> He's in UK TZ though, so this particular meeting might be tougher for him.
17:34:25 <dgilmore> hola stickster
17:34:48 <oddshocks> jzb: it's worth noting that internal Fedora and RH openstack instance image uploading is in the near future, to join with EC2 images
17:35:27 <dgilmore> oddshocks: we can't officially do internal red hat things
17:35:44 <dgilmore> since we generally cant push from fedora to them
17:35:51 <number80> #info upcoming support of internal Fedora openstack instance image uploading in fedimg
17:36:16 <oddshocks> dgilmore: This is news to me. Good to know
17:36:18 <dgilmore> unless the Red Hat openstack is public
17:36:24 <oddshocks> dgilmore: In that case,j
17:36:29 <imcleod> One is public, one is internal.
17:36:31 <oddshocks> In that case, just Fedora OpenStack ;)
17:36:35 * stickster notes https://fedorahosted.org/cloud/ticket/64 is not being updated, and that makes it not as useful as a "tracking" item.
17:36:44 <dgilmore> oddshocks: most internal Red Hat things are firewalled off, and we have no access from Fedora infra
17:36:53 <stickster> oddshocks++
17:37:09 <jzb> stickster: ack
17:37:36 <stickster> jzb: What would be a better way to summarize that ticket so we can answer the right questions in these meetings?
17:37:44 <oddshocks> dgilmore: OK, cool. That makes sense to me. In that case we'll do Fedora openstack, and from there move on to GCE, HP, and Rackspace, once legal clears.
17:37:45 <number80> #action Cloud WG to update #64 (Atomic tracker) every week
17:38:05 <stickster> jzb: I think there was a bug that walters pointed to which is relevant for OStree
17:38:17 <stickster> oddshocks: Do you have a reference to that bug? ^
17:38:23 <number80> oddshocks: you're still waiting clearance from legal ?
17:39:46 <jzb> stickster: just a sec
17:39:56 <oddshocks> stickster: I see the bug, do you think that's something that I could work on generating weekly?
17:40:30 <jzb> atm, really, summarizing the weekly infra/atomic meetings is probably the most useful.
17:40:33 <jzb> stickster: ^^
17:40:41 <oddshocks> number80: AFAIK no one has been given clearance or credentials on Rackspace, GCE, or HP, as legal has -- to my knowledge -- been working with them to secure accounts to host our images with.
17:40:58 <jzb> I don't think there's anything outside those issues right now that we are tracking for Atomic.
17:41:31 <stickster> sorry, pinged elsewhere, back now.
17:41:47 * oddshocks is being pinged like crazy today
17:41:59 <stickster> jzb: Yeah, agreed, if we could do that in the ticket it would be easier to tell what's blocked :-)
17:42:09 <imcleod> number80: HP is the only one I am actively working on with legal.  (Which does not mean nobody else is doing GCE or Rackspace)
17:42:18 <jzb> stickster: ack.
17:42:36 <number80> #info imcleod is working with legal on HP
17:42:49 <jzb> is Rackspace still a consideration?
17:42:51 <number80> I know that skottler had some discussions with legal about GCE
17:42:59 <number80> but no follow-up since he left
17:43:03 <jzb> IIRC they're kind of shuttering their public cloud.
17:43:22 <imcleod> jzb: AFAIK yes.  It's important for OpenStack testing.  Also, internal sources indicate that they are not, in fact, shuttering the public cloud.
17:43:45 <dustymabe> imcleod: this might be a weird question but is anyone working with digital ocean?
17:43:53 <imcleod> jzb: They are trying to emphasize their value add as a provider and this has been incorrectly interpreted as a shutdown of the public cloud.
17:43:56 <jzb> hmmm. OK. I must have gotten a wrong impression.
17:44:02 <agrimm> what do the legal hurdles tend to be?  I'm on two RH projects where we are moving forward with GCE, but I understand Fedora's needs are different
17:44:11 <number80> dustymabe: good point, since we already have a friendly contact there :)
17:44:18 <imcleod> jzb: You may have better info.
17:44:36 <imcleod> agrimm: For HP it was indemnification in their standard agreement.
17:44:37 <dustymabe> number80: :) - I also happen to work in NYC in the same building as DO
17:44:39 <oddshocks> hm,,,
17:45:10 <number80> dustymabe: do you want to check with them ? but we still someone in touch with legal (aka spot)
17:45:16 <number80> *need
17:45:18 <oddshocks> FWIW, any of these providers are super-easy to add support for in Fedimg: http://libcloud.readthedocs.org/en/latest/supported_providers.html
17:45:43 <dustymabe> number80: sure.. I am new though so I might need some guidance on what needs to be done/coordinated
17:46:02 <jzb> imcleod: in this case, almost certainly not.
17:46:03 <dustymabe> new to innerworkings of fedora that is
17:46:07 <number80> dustymabe: I could help you
17:46:15 <dustymabe> number80: awesome
17:46:37 <imcleod> oddshocks: Can we upload and register to all of those?  My understanding, based on the experience with Rackspace, was no.
17:46:41 <number80> #action dustymabe/number80 to work on Digital Ocean support
17:46:45 <frankieonuonga> sorry connection issues
17:46:46 <frankieonuonga> I am in
17:46:52 <frankieonuonga> how far are we ?
17:46:54 <number80> #chair frankieonuonga
17:46:54 <zodbot> Current chairs: agrimm frankieonuonga imcleod jzb number80 oddshocks stickster
17:47:09 <frankieonuonga> am I late ?
17:47:19 <number80> we're currently speaking about fedimg
17:47:32 <frankieonuonga> aaah ok
17:48:01 <oddshocks> imcleod: Right. That's the challenge we're soon to face (or already facing) -- how to host images on providers that aren't as easy to register images with as EC2
17:48:10 <number80> btw, if you're discussing with a cloud platform and with legal, please log it in the minutes using info
17:48:32 <oddshocks> imcleod: I think that's part of the reason for the legal talks we have to have with these providers if we want them to serve our latest cloud images
17:48:38 <number80> I already added imcleod with HP
17:49:51 <imcleod> oddshocks: Roger.  I think we're going to be stuck with a diverse collection of upload/registration techniques, even if our initial builds can be more or less unified.
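[editor's note: imcleod's point above — unified builds, but a diverse collection of per-provider upload/registration paths — is commonly handled with a provider-dispatch registry. The sketch below is hypothetical and not fedimg code; the provider names and routines are illustrative only.]

```python
# Hypothetical sketch (not actual fedimg code): one upload/registration
# routine per cloud provider, selected at publish time from a registry.
from typing import Callable, Dict

# Registry mapping provider name -> provider-specific upload routine.
UPLOADERS: Dict[str, Callable[[str], str]] = {}

def uploader(provider: str):
    """Decorator that registers a provider-specific upload routine."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        UPLOADERS[provider] = fn
        return fn
    return wrap

@uploader("ec2")
def upload_ec2(image_path: str) -> str:
    # EC2 shape: upload the volume, then register an AMI (details elided).
    return f"ami-registered:{image_path}"

@uploader("openstack")
def upload_openstack(image_path: str) -> str:
    # OpenStack shape: a single Glance image-create is typically enough.
    return f"glance-image:{image_path}"

def publish(image_path: str, provider: str) -> str:
    """Dispatch a built image to the named provider's routine."""
    try:
        return UPLOADERS[provider](image_path)
    except KeyError:
        raise ValueError(f"no uploader for provider {provider!r}")
```

The design choice here is that the build side stays provider-agnostic and only `publish()` knows which registration technique applies — e.g. `publish("Fedora-Cloud.raw.xz", "ec2")` returns `"ami-registered:Fedora-Cloud.raw.xz"`, and adding Rackspace or HP means registering one more routine.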
17:50:00 <oddshocks> number80: All I was told (by multiple folks on different days) is that Rackspace, GCE, and HP are all on hold until we clear things with legal, and figure out how things will work with them
17:50:02 * agrimm is happy that GCE is trivial for registering images
17:50:17 <number80> oddshocks: do you know the point of contacts ?
17:50:30 <oddshocks> agrimm: :)
17:50:54 <number80> #info agrimm working on clearance with legal on GCE, Rackspace
17:51:11 <number80> blame oddshocks for that ^^
17:51:14 <stickster> oddshocks: number80: I'd be happy to talk to the right legal person, whether that's spot or someone else. I'd like to know before bugging spot that it's something he's involved with.
17:51:16 <oddshocks> number80: No, unfortunately. I've just heard about this in #fedora-cloud or #fedora-meeting.
17:52:03 <oddshocks> It was said a few times that the ball wasn't in my field for those 3 providers, which were the ones people were hoping for next.
17:52:33 <number80> stickster: agreed, we need to know who is working with legal on which matters
17:52:54 <oddshocks> dgilmore: Do you know anything about this? ^
17:52:58 <number80> s/matters/issues/
17:53:05 <oddshocks> dgilmore: Regarding legal/accounts with Rackspace, GCE, and HP?
17:53:37 <stickster> number80: Typically spot has been main point of contact for all things related to both Fedora and legal matters. However, I have a long history of working with attorneys too (Red Hat and others) and would be able to assist if needed.
17:53:37 <oddshocks> walters: ^
17:53:56 <dgilmore> oddshocks: i know HP is underway, the rest I do not know
17:54:00 <oddshocks> stickster: that sounds good to me.
17:54:04 <oddshocks> dgilmore: OK, thanks.
17:54:08 <number80> stickster: great, we need to identify also the POC in the Cloud SIG side
17:54:38 * stickster is getting the feeling that this is splaying out into "too many roles" for legal stuff.
17:55:06 <oddshocks> stickster: It might be worth pining spot at least and letting him know that we are looking toward some sort of connection with Rackspace, GCE, and HP that would allow us to programmatically upload our latest cloud images, if possible.
17:55:07 <number80> #action number80 find and identify all our current requests with legal and their owners
17:55:19 <oddshocks> /s/pining/pinging
17:55:29 <number80> I'll start a thread in the mailing list
17:55:44 <stickster> number80: I think we're making this more complicated than needed. What we probably need is simply (1) to determine if spot is working on this; (2) if not, who is; (3) identify someone responsible for poking regularly to check progress and update somewhere central.
17:56:06 <oddshocks> stickster++
17:56:11 <number80> *nods*
17:56:14 <imcleod> I am working HP.
17:56:16 <imcleod> Full stop.
17:56:19 <number80> #undo
17:56:19 <zodbot> Removing item from minutes: ACTION by number80 at 17:55:07 : number80 find and identify all our current requests with legal and their owners
17:56:41 <imcleod> It's not a huge time suck and I have the internal legal contact at this point.  I could try to take on GCE and RackSpace as well.
17:56:43 <jzb> stickster: I could be wrong, but I don't think that this is on spot's plate atm.
17:56:53 <imcleod> jzb: I'm confident it is not.
17:57:09 <jzb> I'd ask, but I think he's traveling today
17:57:11 <stickster> imcleod: OK, that's helpful, thanks.
17:57:16 <oddshocks> jzb: I don't think he's aware of it either. IDK who would have brought it to him
17:57:35 <jzb> OK
17:57:36 * stickster leaves another 'spot' callout here to tell him: never mind
17:57:37 <oddshocks> I think we need to identify a single person who can figure out stickster's 3 points about Rackspace and GCE
17:57:38 <jzb> anything else ?
17:57:40 <number80> As long as we avoid duplicate efforts and be able to track properly progress on legal requests, I'm fine
17:57:50 <jzb> we're nearly at the end of the hour.
17:57:53 <number80> yup
17:57:58 <stickster> number80++
17:58:01 <imcleod> I have Rax contacts.  I will try to start that as well.
17:58:02 <number80> #topic open floor
17:58:08 <imcleod> Do we have any tie in to Google at all?
17:58:10 <oddshocks> imcleod: OK
17:58:12 <imcleod> Any starting contact?
17:58:30 <oddshocks> If imcleod has Rackspace and HP, I can try and investigate GCE with any legal folks
17:58:36 <number80> imcleod: you should ask mattdm about it
17:58:49 <oddshocks> I don't know about anyone with direct Google contacts unfortunately
17:58:50 <number80> he had some contacts at GCE
17:58:54 <jzb> oddshocks: I wonder if Matt Hicks can help
17:58:55 <agrimm> imcleod, I can connect you to the person who owns the multipurpose GCE account that we use for atomic.  they may know
17:59:05 <oddshocks> jzb: Haven't met him, but any leads are good
17:59:18 <oddshocks> agrimm: good idea.
17:59:28 <imcleod> agrimm: That'd be a start.  I also know several former VA Linux co-workers who are now at Google.  May try to work through them.
17:59:29 <jzb> oddshocks: can you shoot me a note specifically with what you need? I'll bridge the conversation.
17:59:39 <stickster> oddshocks: Are you waiting on anything from people here (or sometimes here) to do e.g. further OStree work in MirrorManager?
17:59:45 <jzb> I know Matt has offered to help us with other GCE things.
17:59:48 <imcleod> oddshocks: Apols.  Do you want to own GCE?  If so I'll bow out and focus on HP and Rax.
17:59:58 <oddshocks> jzb: Yeah, I'll send you the list of any creds we need to access the services and particularly what we're trying to do
18:00:07 <jzb> oddshocks: groovy, thansk
18:00:09 <jzb> er, thanks
18:00:10 <number80> great
18:00:13 * jzb cannot type this week
18:00:40 <number80> before we end this meeting, anyone has another topic to bring out ?
18:01:03 <oddshocks> imcleod: I'm just trying to ensure I have my share of the work by taking something off your shoulders, but if you're cool with being the initial POC with all 3 providers, that's fine with me
18:01:28 <imcleod> oddshocks: I'm happy to try.  Having a single Fedora person talking to internal legal about it might be helpful.
18:01:33 <oddshocks> stickster: I'm listening for any further clear tasks or action items. So far I just have those two tickets.
18:01:36 <imcleod> I expect the issues will be similar.
18:01:49 <oddshocks> imcleod: cool, feel free to let me know if I can help at all.
18:02:13 <number80> :)
18:02:13 <imcleod> oddshocks: Cheers.
18:02:17 <oddshocks> #action oddshocks Send out email to folks about what we need from Rackspace, GCE, and HP
18:02:56 <number80> May I close the meeting ?
18:03:03 <jzb> number80: +1
18:03:07 <oddshocks> the prosecution rests
18:03:22 <stickster> oddshocks: *nod
18:03:34 <number80> Thank you gentlemen for attending this meeting and see you next week !
18:03:38 <stickster> number80: Thank you for being here and for moderating :-)
18:03:39 <frankieonuonga> sure number80 +1
18:03:45 <number80> #endmeeting