cloud_sig
LOGS
18:59:43 <rbergeron> #startmeeting Cloud SIG
18:59:43 <zodbot> Meeting started Fri Sep  9 18:59:43 2011 UTC.  The chair is rbergeron. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:59:43 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
18:59:51 <rbergeron> #meetingname Cloud SIG
18:59:51 <zodbot> The meeting name has been set to 'cloud_sig'
19:00:01 <rbergeron> #topic Who's here?
19:00:06 * kkeithley is here
19:00:07 * clalance here
19:00:12 * tflink is lurking until the blocker bug review meeting is over
19:00:13 <rbergeron> #chair jforbes
19:00:13 <zodbot> Current chairs: jforbes rbergeron
19:00:29 * rbergeron notes she's at a conference and the wireless sucks, so...
19:00:59 <rbergeron> alrighty, well, let's get with it
19:01:04 <rbergeron> #topic EC2
19:01:40 <rbergeron> jgreguske - did i see you pop in before we started?
19:01:42 <jforbes> I saw some images posted
19:01:44 <tflink> you had to do that one first, didn't you?
19:01:49 <tflink> :)
19:01:49 <jgreguske> rbergeron: yes
19:01:50 <jforbes> though I am not sure what their status is
19:01:57 <rbergeron> tflink: historically yes :)
19:01:58 <jforbes> ahh, jgreguske :)
19:02:16 * rbergeron wonders if anyone knows what's shakin
19:02:27 <tflink> we could use some status update on https://bugzilla.redhat.com/show_bug.cgi?id=718722
19:02:31 <rbergeron> jforbes: so posted but haven't tested?
19:02:41 <tflink> supposedly it is blocking EC2 compose and is a beta blocker, but we don't have much information
19:02:57 <jforbes> rbergeron: not sure if they are meant to be tested, just noticed some koji generated images up that weren't there before
19:03:06 <rbergeron> #info bz 718722 needs more info
19:03:10 <rbergeron> #chair tflink
19:03:10 <zodbot> Current chairs: jforbes rbergeron tflink
19:03:22 <jgreguske> dgilmore: has been working on getting the koji images working
19:03:27 <rbergeron> tflink: can you give us the short version - i can't see bugzilla atm
19:03:36 <dgilmore> hola amigos
19:03:37 <tflink> rbergeron: will try, leading other meeting ATM
19:03:40 <rbergeron> who filed the ticket, what do we need to provide?
19:03:48 <rbergeron> tflink: ah, yeah
19:04:04 <tflink> this is an older bug that has been brought up as a possible issue for EC2 compose
19:04:09 <jforbes> tflink: Umm, not seeing anything EC2 related
19:04:11 <tflink> I don't know how exactly it fits in, though
19:04:20 <tflink> dgilmore should know more, honestly
19:04:20 <jforbes> tflink: it doesnt :)
19:04:20 <dgilmore> jforbes: the images dont boot due to grub being broken
19:04:37 <jforbes> dgilmore: no, EC2 doesnt use our grub
19:05:07 <jforbes> dgilmore: it uses grub.conf, and since grub1 isnt around anymore, you have to generate a grub1 config by hand, boxgrinder has code to do so if you want to check it out
19:05:07 <rbergeron> jforbes: can you elaborate on that in the bz?
19:05:18 <tflink> either way, we need more information in the bz
19:05:25 <tflink> the EC2 stuff came from IRC conversations :)
19:05:29 <dgilmore> jforbes: you dont have to use grub2
19:05:58 <jforbes> dgilmore: the work will have to be done at some point
19:05:59 <rbergeron> clalance: do we know if imagefactory or oz or whatever have the same type of problem?
19:06:06 <dgilmore> jforbes: i was trying to test the images locally, just to make sure they booted and were working
19:06:08 <jforbes> dgilmore: there is code for you to copy which does the work, and works
19:06:13 <clalance> rbergeron: I have not seen this problem with Oz.
19:06:24 <clalance> rbergeron: I just committed support for F-16 into Oz yesterday.
19:06:37 <tflink> boxgrinder works fine with FC16 EC2 AMIs too
19:06:39 <jgreguske> I would argue that code should have ended up in appliance-tools, so that both koji and bg would benefit
19:06:45 <clalance> rbergeron: But from reading the bug, it looks like it only happens on upgrades from F-15 grub1 -> F-16 grub1.
19:06:51 <clalance> (and a fresh install, such as Oz does, gets grub2)
19:06:54 <dgilmore> jgreguske: i wholeheartedly agree
19:07:14 <jforbes> jgreguske: perhaps a fair argument, but boxgrinder does all of the EC2 manipulation with libguestfs after appliance-tools is gone
19:07:19 <dgilmore> and thats where im working on patching up a fix
19:08:05 <dgilmore> jforbes: there really should be no image manipulation done after appliance-tools is run
19:08:13 <dgilmore> at least in an ideal world
19:08:14 <jforbes> because this is only a problem for things using pv-grub which is basically only EC2
19:08:33 <jgreguske> the only thing you'd need to change is to add menu.lst, right?
19:08:41 <jgreguske> but that can be done in ks %post
19:08:41 <jforbes> dgilmore: well, boxgrinder is a different beast, and it makes sense because you can have one config file generate images for multiple targets.
19:08:49 <jforbes> jgreguske: sure
19:09:40 <rbergeron> so......... what do we need at this point
19:09:46 <dgilmore> jgreguske: well likely my images will boot in ec2
19:10:00 <jgreguske> dgilmore: I think jforbes is right about ec2 not using the installed grub
19:10:07 <dgilmore> appliance-creator requires you have grub installed and sets up the config
19:10:07 <jgreguske> dgilmore: that's what pvgrub provides
19:10:21 <jforbes> It is exactly what pv-grub provides, all we really need is a menu.lst
19:10:24 <dgilmore> so really all you need is to do the link to menu.lst
19:10:26 <jgreguske> yeah it does the install manually, I think patching it is right
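For readers following the menu.lst thread above: pv-grub on EC2 only understands the legacy grub1 config format and reads it (menu.lst / grub.conf under /boot/grub) from inside the image, so the image needs that file but not an installed grub1 bootloader. A minimal sketch of such a config follows; the kernel version, root device, and console values are illustrative placeholders, not taken from the meeting:

    default=0
    timeout=0
    title Fedora 16
        root (hd0)
        kernel /boot/vmlinuz-3.1.0-1.fc16.x86_64 ro root=/dev/xvda1 console=hvc0
        initrd /boot/initramfs-3.1.0-1.fc16.x86_64.img

The symlink jgreguske and dgilmore mention would simply make grub.conf and menu.lst the same file; as the later grubby exchange notes, kernel updates only keep such an image bootable if a grub1-style config stays in place for grubby to rewrite.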
19:10:58 <dgilmore> rbergeron: i guess i need to upload a image to ec2 and test it there
19:11:27 <rbergeron> so the bz tflink referred to is blocking? or not
19:11:30 <rbergeron> or unknown
19:11:50 <jgreguske> well it's certainly a problem, but I'm no longer convinced it blocks ec2 specifically
19:11:53 <dgilmore> rbergeron: i guess all it does is makes some noise in the build log
19:12:02 <jforbes> rbergeron: it should have no impact on EC2
19:12:09 <dgilmore> and means you can only use the image in ec2 and not locally with libvirt
19:13:01 <rbergeron> so dgilmore is going to upload /test - what else needs doing? anyone?
19:13:25 <jgreguske> dgilmore: the candidate image(s) have networking and ssh enabled?
19:13:37 <jforbes> dgilmore: so don't install grub1, just create a menu.lst for grub1 with grub2 installed
19:13:39 <dgilmore> jgreguske: ssh definitely
19:13:41 <jgreguske> dgilmore: I heard that some systemd-related fixes needed to happen
19:13:57 <dgilmore> networking i need to double check, the kickstart enables it
19:14:05 <dgilmore> but i dont see it being enabled in the log
19:14:07 <jgreguske> that should be enough
19:14:19 <dgilmore> jgreguske: i set the run level in %post
19:14:24 <jgreguske> dgilmore: ok
19:14:37 <rbergeron> #action dgilmore is going to upload/test ec2 images
19:14:49 <dgilmore> though systemd really needs to be fixed in some way i think
19:15:01 <rbergeron> dgilmore: can you post to the list when they're uploaded if your basic testing works so others can give them a whirl?
19:15:11 <rbergeron> dgilmore: what's up with systemd
19:16:05 <dgilmore> rbergeron: it defaults to graphical.target which doesnt work right if you do not have x installed
19:16:18 <dgilmore> rbergeron: causes the boot to hang
19:16:43 <rbergeron> is there a bug filed anywhere?
19:16:59 <dgilmore> rbergeron: not that im aware of
19:17:04 <rbergeron> #info systemd needs fixing - defaults to graphical.target which doesn't work right if you don't have x installed, causes boot to hang
19:17:16 <rbergeron> does someone need to file one? /me looks around for a victim/volunteer
19:17:42 <tflink> minimal installs have been working fine
19:18:01 <tflink> that doesn't mean that there isn't an issue, though
19:18:27 <tflink> dgilmore: do you know if firstboot is installed and enabled?
19:18:50 <dgilmore> tflink: it might be that anaconda is setting things up on a minimal install
19:19:03 <dgilmore> tflink: so its only affecting things done outside of anaconda
19:19:15 <tflink> oh, I thought that anaconda was being used here
19:19:20 <dgilmore> tflink: i added -firstboot to the kickstart package list
19:19:23 <dgilmore> so its not installed
19:19:41 <dgilmore> tflink: no appliance-creator doesnt use anaconda
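A minimal sketch of the kind of %post workaround dgilmore describes above (the exact commands are not in the log, so this is an assumption): on F15/F16 the old "run level" is chosen by the default.target symlink, so an image built without X can be forced to a text boot like this:

    %post
    # assumed fix: boot to multi-user.target instead of the default graphical.target,
    # which hangs when X is not installed
    ln -sf /lib/systemd/system/multi-user.target /etc/systemd/system/default.target
    %end

The -firstboot entry dgilmore mentions just above is a separate measure; it simply keeps firstboot out of the image.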
19:20:50 <rbergeron> didn't we have this problem last time around? or am i forgetful
19:21:31 <jforbes> Yes, we covered it a couple of weeks ago, boxgrinder has mods for the systemd bits needed
19:21:31 <dgilmore> rbergeron: probably
19:21:52 <dgilmore> jforbes: again, it should get fixed in appliance-creator not boxgrinder
19:21:53 <jforbes> I would have to look them up, mgoldman gave the details then
19:21:55 <dgilmore> anyways
19:22:22 <dgilmore> jforbes: anything generic should get fixed as low as possible in the chain
19:22:29 <dgilmore> so it helps everyone
19:22:36 <jforbes> dgilmore: again, completely different targets... boxgrinder doesnt want an EC2 ready image spit out of appliance creator. the grub bit is not generic at all
19:23:02 <dgilmore> jforbes: its not ec2 specific
19:23:07 <jgreguske> a symlink doesn't really hurt anything
19:23:21 <jforbes> dgilmore: though with the systemd changes I am not sure.  I don't write that code, I only use it, and notice they have things fixed before I need them usually
19:23:40 <dgilmore> jforbes: the default target being set in appliance-creator means that everything consuming the image can boot it without modification
19:23:43 <jforbes> jgreguske: a symlink isn't, converting a grub2 config to a grub1 config so pvgrub can use it is
19:24:13 <dgilmore> jforbes: i noticed that debian and ubuntu added a grub-ec2 package
19:24:17 <dgilmore> which is just grub1
19:24:29 <jforbes> dgilmore: they did
19:24:38 <dgilmore> jforbes: i guess until pv-grub supports grub2  best to just keep grub1 installed
19:24:41 <jforbes> dgilmore: because they couldnt figure out how to generate a grub1 config from a grub2 config
19:25:15 <jforbes> dgilmore: from what I gathered talking to folks, there are not plans for pvgrub support of grub2
19:25:32 <dgilmore> jforbes: then you really need something in grubby to update the grub1 config on kernel updates
19:25:55 <jforbes> dgilmore: ideally, yes
19:26:06 * rbergeron tries to nail down what the plan here is, if any
19:26:45 <dgilmore> rbergeron: install grub1 into the appliances
19:26:48 <jforbes> dgilmore: though as I said, I am not writing this code, I don't have input to the design of it, I am simply pointing out a working solution
19:26:53 <dgilmore> then kernel updates will just work
19:27:18 <rbergeron> who is doing it, and does it need to be done before you upload/test images?
19:29:06 <dgilmore> rbergeron: i believe what i have will work
19:29:23 <dgilmore> rbergeron: we are probably making needless noise for this meeting
19:29:28 <rbergeron> okay. ;)
19:29:55 <rbergeron> #info We shall proceed ahead and fix things as we find them :)
19:30:10 <rbergeron> dgilmore: can you let us know how the test goes, either email on list or in the ticket?
19:30:53 <rbergeron> okay
19:31:00 * rbergeron shall move onwards
19:31:16 <rbergeron> #topic Aeolus
19:31:21 <rbergeron> clalance: hi
19:31:25 <clalance> rbergeron: Hello.
19:31:35 <clalance> rbergeron: I built the latest aeolus git into F-16 yesterday.
19:31:45 <clalance> Barring additional bugfixes, that will be the one that will go out with Beta.
19:32:00 <rbergeron> #info latest aeolus is in f-16, will be the one that goes into beta.
19:32:32 <rbergeron> cool. :)
19:32:43 * rbergeron kind of generalized that info bit but close enough
19:32:53 <rbergeron> I saw yet another excellent video blog.
19:32:53 <clalance> There are a couple of package dependency problems for the conductor-devel that we are working through.
19:33:08 <clalance> We hope to have them for beta, but it's not the end of the world if they don't make it.
19:33:19 <clalance> (they don't affect the runtime operation of aeolus, just the running of the rspec and cucumber tests)
19:33:28 <rbergeron> #link http://www.youtube.com/watch?v=mOkfK89CEyE
19:33:49 <rbergeron> #info Nice video on workload manipulation using aeolus
19:33:52 <clalance> rbergeron: Cool, I hadn't seen that.
19:34:08 <rbergeron> #info a few pkg dep problems for conductor-devel that they're working through, but won't kill beta if they aren't available
19:34:16 <markmc> clalance, f-16 aeolus will still use condor, then?
19:34:41 <clalance> markmc: Yeah, we've been working through issues with the condor replacement, but it still has 2 outstanding issues that I know about.
19:34:51 <markmc> clalance, yeah, pity :)
19:34:51 <clalance> markmc: We have to fix those before we go forward.
19:34:59 <clalance> markmc: But I don't see a reason we can't do that in an update.
19:35:38 <clalance> But we'll cross that bridge once we are closer to having working code.
19:35:59 <rbergeron> :)
19:36:24 <markmc> +1 to vinny's screencast
19:36:34 <rbergeron> #info f-16 aeolus will still be using condor, condor replacement still ahve 2 outstanding issues
19:36:41 <rbergeron> markmc: it's pretty cool
19:36:42 <rbergeron> ugh
19:36:44 <rbergeron> #undo
19:36:44 <zodbot> Removing item from minutes: <MeetBot.items.Info object at 0x1867626c>
19:36:57 <rbergeron> #info f-16 aeolus will still be using condor, condor replacement still has 2 outstanding issues
19:37:15 <markmc> rbergeron, I preferred the first version, 'ahve' has character
19:37:29 <rbergeron> ahve ahve character :)
19:37:38 <markmc> :)
19:37:45 <clalance> rbergeron: That's pretty much it from our side.
19:37:53 <rbergeron> okay, moving on then.
19:37:59 <rbergeron> #topic CloudFS
19:38:01 <rbergeron> err
19:38:03 <rbergeron> #undo
19:38:03 <zodbot> Removing item from minutes: <MeetBot.items.Topic object at 0x1826dbcc>
19:38:06 <kkeithley> #topic HekaFS
19:38:09 <rbergeron> #topic HekaFS
19:38:12 <clalance> Old habits die hard ;).
19:38:15 <rbergeron> #chair kkeithley
19:38:15 <zodbot> Current chairs: jforbes kkeithley rbergeron tflink
19:38:25 <kkeithley> :-)
19:38:29 * rbergeron is literally watching a gluster presentation right now
19:38:35 <kkeithley> cool
19:38:37 <kkeithley> or kewl
19:38:38 <jdarcy> Which one?  There are so many.
19:38:54 <rbergeron> i'm watching john mark walker.
19:39:02 <jdarcy> Nothing to report here, BTW.  Got a few patches pushed upstream, that's about it.
19:39:29 <rbergeron> #info patches going upstream, not much to report in hekaFSland.
19:39:43 <rbergeron> :)
19:39:57 * rbergeron knows kkeithley is working on some instructions for usage as well
19:39:58 <kkeithley> I did my start-from-scratch-document-along-the-way write-up, which I sent to various people
19:40:11 * rbergeron nods - are you planning on pushing that a bit wider?
19:40:20 <kkeithley> then I moved it to the fedoraproject wiki under HekaFS
19:40:35 <rbergeron> oh
19:40:37 <kkeithley> got a few nits in the management UI I'll probably fix
19:40:42 <rbergeron> sweet, can you link that real quicklike?
19:40:51 <kkeithley> https://fedoraproject.org/wiki/SimpleHekaFS
19:41:09 <rbergeron> #info kkeithley working on a document for usage of HekaFS
19:41:25 <rbergeron> #info Has been pushed to the wiki
19:41:37 <rbergeron> Cool. Thanks, guys :)
19:41:41 <kkeithley> gluster 3.2.3 has come out the other end of the packaging tube and is in the f16 fedora repo
19:41:53 <jsmith> Funny... I was just looking at the HekaFS section of the Cloud Guide :-)
19:41:59 <rbergeron> #info gluster 3.2.3 is in the f16 repo
19:42:05 <kkeithley> the latest hekafs rpm (0.7-11) should come out in another day
19:42:25 <rbergeron> #info latest hekafs rpm (0.7-11) should be out in another day
19:42:38 <rbergeron> thank you :)
19:42:46 <rbergeron> #topic OpenStack
19:42:51 <rbergeron> #chair markmc
19:42:51 <zodbot> Current chairs: jforbes kkeithley markmc rbergeron tflink
19:42:54 <rbergeron> hi there.
19:43:07 * markmc puts down his wine
19:43:17 <rbergeron> hey, pass that over!
19:43:24 <markmc> rbergeron, so, I think my status mail covers it
19:43:28 <markmc> rbergeron, steady progress
19:43:37 <markmc> rbergeron, diablo release only a couple of weeks away now
19:43:49 <markmc> rbergeron, patches going upstream without much trouble
19:44:23 <markmc> rbergeron, still some issues around SELinux, networking, iscsi etc.
19:44:30 <markmc> rbergeron, but nothing to worry about
19:44:48 <markmc> rbergeron, http://lists.fedoraproject.org/pipermail/cloud/2011-September/000813.html
19:44:48 * mdomsch needs to apply markmc's keystone packaging patches
19:45:02 <markmc> rbergeron, tempo and nova-adminclient are nice packaging tasks for anyone interested
19:45:09 <markmc> mdomsch, cool, I was going to ask about that
19:45:21 <mdomsch> highly distracted this week
19:45:22 <markmc> ke4qqq, did you see mdomsch's keystone packaging?
19:45:32 <markmc> ke4qqq, you want that for swift, right?
19:45:36 <markmc> mdomsch, cool
19:45:58 <rbergeron> ke4qqq is in a presentation, just less rude than i am
19:46:01 <rbergeron> ;)
19:46:13 <markmc> ah :)
19:46:27 <rbergeron> #info See openstack status mail to cloud-sig list
19:46:37 <rbergeron> #link http://lists.fedoraproject.org/pipermail/cloud/2011-September/000813.html
19:47:01 <rbergeron> #info tempo and nova-adminclient are nice packaging tasks available right now for those interested in helping
19:47:12 <rbergeron> #action mdomsch to apply markmc's keystone packaging patches
19:48:17 <rbergeron> markmc: maybe you should ask him on-list
19:48:42 <markmc> rbergeron, ah, I'm sure he's following, really
19:48:54 <markmc> rbergeron, oh, karma for packages in updates-testing plz!
19:48:59 <mdomsch> markmc, consider your patches pulled and pushed to github; I'll update the bz
19:49:07 <markmc> mdomsch, excellent
19:49:19 <rbergeron> #info karma for packages in updates-testing please :)
19:50:46 <rbergeron> okay, anything else? :)
19:50:50 <rbergeron> oh, test day?
19:50:58 <rbergeron> have we gotten any movement on that?
19:51:10 <rbergeron> we should probably consider adding our names to their list
19:51:12 <rbergeron> err
19:51:14 <markmc> no, dropped the ball completely
19:51:15 <rbergeron> cloud to that list
19:51:22 <markmc> yeah, we should snag the date at least
19:51:33 * markmc will be there with OpenStack at least
19:51:46 <rbergeron> shall i do that?
19:51:55 <markmc> that would be superb :)
19:51:57 * rbergeron can organize :)
19:52:06 * markmc can see that :)
19:52:09 <rbergeron> #action rbergeron to get us a cloud test day reserved
19:52:21 <rbergeron> markmc: that's where the magic ends though :)
19:52:21 <tflink> the only open test day ATM is 2011-10-20
19:52:34 <rbergeron> tflink: yeah, that's what we were thinking
19:53:24 <rbergeron> okay, i'll get that, and then i'll start flogging everyone for test plans and so forth.
19:53:36 <mdomsch> markmc: patch1 is obsolete now, upstream took it :-)
19:54:03 <rbergeron> markmc: is the plan to push the systemd additions upstream?
19:54:11 <rbergeron> or is angus better to ask on that?
19:54:38 <markmc> mdomsch, right, it was cherry picked from bzr
19:54:49 <mdomsch> k
19:54:56 <markmc> mdomsch, only needed it because newer tarball snapshots aren't available
19:55:08 <mdomsch> f14 doesn't have %{_unitdir} :-(
19:55:19 * mdomsch finds a rawhide box to work with...
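On the %{_unitdir} complaint just above: that RPM macro only exists where newer systemd packaging provides it, so spec files that must also build on older releases usually define a fallback themselves. A sketch of the common guard, assuming the F15/F16 unit directory path:

    %{!?_unitdir: %global _unitdir /lib/systemd/system}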
19:55:38 <markmc> rbergeron, I hadn't immediate plans to push systemd units upstream
19:55:46 <markmc> rbergeron, there are no initscripts or anything upstream currently
19:55:47 <rbergeron> okay. just was curious
19:55:52 <markmc> rbergeron, maybe at some point though
19:56:12 <rbergeron> interesting
19:56:12 <rbergeron> okay
19:56:43 <rbergeron> #topic Any other business?
19:57:51 * rbergeron looks at markmc's wine glass
19:58:18 * clalance is about to go get some beer :)
19:58:32 <clalance> The sun is actually out here.
19:58:34 <clalance> First time in days.
19:58:35 <markmc> clalance, oi! bit early for you :)
19:58:49 <clalance> markmc: Yeah, I know.  Oh well :).
19:58:55 <markmc> slacker :)
19:59:06 <rbergeron> lol
19:59:39 <rbergeron> okay, folks. have a good weekend, good work as usual :)
19:59:46 <markmc> ttyl
19:59:47 * rbergeron salutes everyone for coming today
19:59:49 <clalance> rbergeron: Thanks!
19:59:53 <rbergeron> thanks :)
19:59:55 <rbergeron> #endmeeting