19:00:12 <rbergeron> #startmeeting Cloud SIG
19:00:12 <zodbot> Meeting started Fri Mar 23 19:00:12 2012 UTC.  The chair is rbergeron. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:12 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
19:00:18 <rbergeron> #meetingname Cloud SIG
19:00:18 <zodbot> The meeting name has been set to 'cloud_sig'
19:00:33 <rbergeron> oh wait, i see multiple rackers now
19:00:41 * mdomsch is here
19:00:48 * rackerjoe Joe Breu is here
19:00:52 * skvidal is here
19:00:55 <rbergeron> #topic Gathering of the Peeps
19:01:01 * rackerhacker might work at rackspace, can't remember
19:01:30 * rackerhacker could go for a cold breu about this time on a friday
19:01:34 <rbergeron> rackerhacker: who? never heard of 'em
19:01:35 <rbergeron> ;)
19:01:41 <gholms> Peeps!
19:01:45 * gholms noms
19:01:50 <rbergeron> yes, it's that time of the year
19:01:53 <rackerhacker> rbergeron: http://i3.kym-cdn.com/entries/icons/original/000/003/617/okayguy.jpg
19:01:54 * tdawson is here.
19:02:10 <rbergeron> hey mr. dawson :)
19:02:24 <gholms> mull: Just in time!
19:02:45 <rbergeron> mull, coming to us live from a plane?
19:02:55 <rbergeron> dgilmore: you about by chance?
19:03:03 <dgilmore> rbergeron: si amiga
19:03:05 <mull> I'm in Rochester
19:03:09 <rbergeron> word, okay
19:03:15 <rbergeron> mull: ROC...'in
19:03:18 <mull> :)
19:03:20 <rbergeron> with dokken
19:03:43 * rbergeron puts a blanket over her friday sillies
19:03:45 <mull> wokkin' actual (chinese food for lunch)
19:03:55 <rackerhacker> well played
19:04:00 <rbergeron> mull: ohhhhhhhhhhh, nice
19:04:13 <rbergeron> i will have to eventually "wien" you off the punniness as well
19:04:27 * rackerhacker gets out a shovel
19:04:33 * rbergeron waits for gholms to link the sad trombone sound
19:04:42 <rbergeron> okay, shall we dance?
19:04:50 <rbergeron> #topic EC2, images, and so forth
19:05:13 <rbergeron> dgilmore: heya, can you tell us what's shakin with the whole beta thing and other image types :)
19:05:33 <dgilmore> rbergeron: sure
19:05:45 <dgilmore> rbergeron: earlier this week i got ec2 images that booted
19:05:46 <rbergeron> #chair gholms
19:05:46 <zodbot> Current chairs: gholms rbergeron
19:06:02 <dgilmore> but for whatever reason i couldnt ssh in
19:06:37 <gholms> Hiya, gregdek
19:06:40 <dgilmore> im as we speak spinning up some f17 Beta rc images
19:06:43 * gregdek hullos.
19:06:52 <rbergeron> gregdek: yo
19:06:54 <dgilmore> so ill see how they go
19:07:08 <rbergeron> dgilmore: any thoughts yet on why not ssh'ing in?
19:07:23 <dgilmore> if i can still boot but not login ill take some more drastic measures to see what's going on
19:07:38 <dgilmore> rbergeron: server side says client disconnected
19:07:42 * rbergeron wonders if error messages might give anyone else here an idea
19:07:46 <dgilmore> client side says key was rejected
19:07:53 * gholms is still incredibly confused about that one
19:07:56 * rbergeron wonders if she could get faster freaking internet that isn't lagged to all hell
19:07:57 <dgilmore> rbergeron: really not error messages
19:08:06 <rbergeron> gholms: have you seen the same thing or just in discussion?
19:08:14 <gholms> Discussion with dgilmore
19:08:33 <rbergeron> #info ec2 images are booting; you can boot but not log in
19:08:50 <rbergeron> #info server side says client disconnected; client side says key was rejected; not sure wtf is going on at the moment, but still plugging away
19:09:01 <rbergeron> #info thank you to dgilmore for plugging away at this :)
19:09:10 <gholms> It's almost like sshd doesn't like who it's talking to.
19:09:25 <rbergeron> dgilmore: this is making from scratch or making from... the... um.. normal process?
19:10:07 <dgilmore> rbergeron: same processes as was used for f16
19:10:25 <rbergeron> okay
19:10:37 <rbergeron> well, perhaps meetingminutesviewing will prod people into having some miraculous idea
19:10:44 <rbergeron> how about $otherimages that have been requested?
19:10:46 <dgilmore> oh and to make the ec2 images work they only support pv-grub
19:11:04 <dgilmore> so we would need to make a completely different set for use in kvm etc
19:11:09 <rackerhacker> dgilmore: i can lend a hand on xen, not on ec2 specifically
19:11:38 <skvidal> dgilmore: the whole image? or just the kernel + ramdisk?
19:11:50 <dgilmore> rackerhacker: ok. well the current images will boot if using pv-grub
19:12:07 <dgilmore> skvidal: to work with our upload scripts i had to exclude grub2
19:12:07 <rackerhacker> dgilmore: that's the only way we've done it since F16 (w/pv-grub)
19:12:16 <dgilmore> there is no bootloader in the images installed
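For context on the pv-grub arrangement dgilmore describes: the image ships no installed bootloader, so EC2's pv-grub AKI looks for a grub-legacy menu.lst inside the image to locate the kernel. A hypothetical fragment of what such an image carries (kernel version, device name, and console argument are illustrative, not taken from the actual images):

```
# /boot/grub/menu.lst -- read by EC2's pv-grub kernel,
# not by any bootloader installed in the image itself
default=0
timeout=0
title Fedora 17 (illustrative kernel version)
    root (hd0)
    kernel /boot/vmlinuz-3.3.0-1.fc17.x86_64 root=/dev/xvda1 ro console=hvc0
    initrd /boot/initramfs-3.3.0-1.fc17.x86_64.img
```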
19:14:04 <dgilmore> rbergeron: so if we want otherimages its fine
19:14:11 <dgilmore> we just need to do them separate
19:14:31 <rbergeron> okay, have we come to a concrete list of what those are or the one is, iirc?
19:14:51 <rbergeron> and are we going to post that for beta for sanity check or not really planning on it
19:15:13 <dgilmore> we should post for Beta
19:15:22 <rbergeron> okay
19:15:54 <rbergeron> can you drop a note to the mailing list letting everyone know what it is or i guess if nothing else "that what it is is available" when we hit beta?
19:16:13 <dgilmore> sure
19:16:21 <rbergeron> #action dgilmore to post info about additional non-ec2 images being "ready to try" at beta (or additionally beforehand with a heads-up of what is coming)
19:16:28 <rbergeron> coolio, i'm gonna move on, thanks for being here :)
19:16:32 * rbergeron looks at clock
19:16:44 <rbergeron> #topic Special Guest Star skvidal and The Cloud Stuff He's Doing
19:16:48 <skvidal> hi
19:16:48 <rbergeron> #chair skvidal
19:16:48 <zodbot> Current chairs: gholms rbergeron skvidal
19:16:57 <rbergeron> HI SETH
19:17:01 <skvidal> so fedora infrastructure has a plan
19:17:12 <skvidal> we're going to be building an eucalyptus cluster
19:17:20 <skvidal> for unicorns and magical ponies
19:17:28 <skvidal> and for random builders
19:17:32 <skvidal> and random testing
19:17:44 <skvidal> we've setup a test cluster using some out-of-warranty hw
19:18:07 <skvidal> and we have the structure/concept under our belts (we think :) and now we're in the get real hw mode
19:18:30 <skvidal> right now we're hoping to have a blade center to devote to it with some goodly sized machines
19:18:32 <rbergeron> #info Fedora Infra has a plan to build a eucalyptus cluster (for unicorns and magical ponies, and for builders and random testing)
19:18:46 <skvidal> the cluster will be eucalyptus 3.<mumble>
19:18:51 <gholms> Heh
19:19:03 <rbergeron> #info test cluster is setup using out of warranty hw currently; structure/concept is under our belts, waiting on actual hardware to arrive
19:19:08 <skvidal> and I've been keeping track of everything I'm doing so, hopefully, others can duplicate this effort
19:19:13 <rbergeron> #info will be eucalyptus 3.<mumble>
19:19:23 <rbergeron> skvidal: as in "how i did it so you can too" type of documentation?
19:19:23 <skvidal> the timeline for the hw and for the networking changes we have to get made is.... unknown at this point
19:19:44 <skvidal> rbergeron: well, to be fair it is more "how I did it so when I get eaten by a grue someone else can figure this out" documentation
19:19:49 <skvidal> but that's more or less the same thing
19:19:52 <rbergeron> skvidal: is this all gonna be hooked up with puppet and whatnot too?
19:19:56 <skvidal> no
19:20:02 <rbergeron> skvidal: raptor/bus
19:20:05 <skvidal> I doubt that.
19:20:18 <skvidal> the guest instances won't be puppeted
19:20:26 <skvidal> and I wouldn't want to force that on people I didn't like
19:20:40 <rbergeron> <snicker>
19:20:42 <skvidal> the cluster itself may or may not be w/the rest of infrastructure
19:20:57 <skvidal> mainly b/c of network isolation
19:21:13 <rbergeron> and infra will be the point of requesting "i need SOME CLOUD PLZ"?
19:21:23 <skvidal> the thing about this cluster is that we'd like to make it easier for fedora contributors to take advantage of
19:21:39 <skvidal> but we'd rather to keep them separated from our protected infrastructures
19:21:42 <skvidal> for obvious reasons
19:22:19 * rbergeron nods
19:22:51 <rbergeron> so i guess that part of the plan is probably still being worked out (how to request / how much you get / what you can use it for)
19:23:04 <skvidal> that's all a bit up in the air, I think
19:23:09 <rbergeron> okay
19:23:24 <rbergeron> up and running first? :) lol
19:23:25 * mdomsch would love a contributor to pick up the FTBFS work and run it on there
19:23:34 <gholms> Ooh, there's an idea.
19:23:35 <skvidal> mdomsch: so would we all, I think.
19:24:06 <skvidal> but let's not get too far ahead of ourselves :)
19:24:15 <skvidal> I just wanted to update people on the general plan
19:24:17 * nirik wondered if that would be a good GSoC... but perhaps not.
19:24:19 <skvidal> 1. test out euca - done
19:24:29 <skvidal> 2. get hw/network/etc
19:24:38 <skvidal> 3. get stuff in 2 setup into production
19:24:42 <skvidal> 4. profit?
19:25:08 <rbergeron> #info the hope is that this cloudy space will be easier for contributors to take advantage of, as it ideally will be separated from the more protected pieces of infrastructure
19:25:09 <skvidal> the euca folks have been helping me with issues
19:25:22 <skvidal> as I've gotten it setup on the junk boxes
19:25:23 <rbergeron> underpants?
19:25:33 <skvidal> in case anyone is wondering we're using kvm - not xen
19:25:47 <skvidal> and I cannot, personally, imagine that changing.
19:25:54 <rbergeron> #info it is using KVM, in case anyone is wondering
19:26:13 <skvidal> I think that's all?
19:26:22 <rbergeron> cool. anyone have questions? /me thanks seth for coming into the meeting today :)
19:27:27 <rbergeron> skvidal: are we going to vote on the name of the cloud
19:27:39 <rbergeron> </troll>
19:27:43 <rbergeron> #topic Events incoming
19:27:51 <skvidal> righty-o
19:27:55 <rbergeron> #info OpenStack Summit/Conf is coming, see email I sent to list
19:27:58 <gholms> Thanks, skvidal!
19:28:04 <skvidal> gholms: thank you
19:28:12 <rbergeron> https://fedoraproject.org/wiki/OpenStackSummitConf_April2012
19:28:31 <rbergeron> #info if you're going, plz sign up so I can harass you into sitting at the booth for at least a wee bit of time on Thurs/Fri
19:28:39 <rbergeron> #info err, sweetly ask you
19:28:55 <rbergeron> #link http://lists.fedoraproject.org/pipermail/cloud/2012-March/001333.html
19:29:16 <rbergeron> I'm also kind of hoping for some help with general cloud sig collateral type stuff, wouldn't it be fun to have a flyer, etc.
19:29:56 <rbergeron> So if you are willing and able or at least able to put some ideas that i can transform into nicer english, because I'm clearly qualified with my great grammar and sentence structure, feel free to add in (links in mail link above)
19:30:01 <rbergeron> So that's the first one.
19:30:03 <rbergeron> Second one is
19:30:46 <rbergeron> #info OpenCloudConf - April 30 - May 3
19:30:52 <rbergeron> #link http://www.opencloudconf.com/
19:31:36 <rbergeron> johnmark is wrangling this with dave nielsen at the silicon valley cloud center
19:31:51 <rbergeron> I am guessing that at some point given how soon it is that there will be information about how to actually participate
19:32:00 <rbergeron> <insert me trolling on johnmark here>
19:32:33 <rbergeron> But there are opportunities for cloudiness of all types so keep it in mind, i guess.
19:32:38 <rbergeron> And that's about all i have on that.
19:32:46 <rbergeron> More as that unfolds :)
19:32:52 <rbergeron> #topic Feature Funj
19:33:00 <rbergeron> #undo
19:33:00 <zodbot> Removing item from minutes: <MeetBot.items.Topic object at 0x2d5c4850>
19:33:01 <rbergeron> #topic Feature Fun
19:33:06 <rbergeron> That's better.
19:33:25 <rbergeron> russellb, ayoung, i am not sure who else i saw roll in from the openstack crowd: wassup?
19:33:34 <rbergeron> dprince perhaps
19:33:46 <ayoung> um
19:33:59 <russellb> we've been doing a bunch of updates in the last week or so to RC1 of the various projects for the Essex release
19:34:10 <ayoung> Biggest thing in the Openstack world is the incipience of Essex
19:34:18 <ayoung> And the planning stage for Folsome
19:34:18 <russellb> so, continued testing to help make sure we don't break the world is good
19:34:27 <ayoung> Folsom
19:34:46 <ayoung> We will break the world, though
19:35:10 <ayoung> So really, we are asking you to help figure out early in which particular way we've broken it
19:35:13 * rbergeron hands ayoung a foot-long hot dog for his use of the word "incipience" in a sentence
19:35:50 <ayoung> \m/ _ _ \M/
19:35:54 <rackerjoe> Rackspace is working on a set of chef recipes for installation and testing of the fedora packages.  any bugs we'll report upstream
19:36:31 <ayoung> rbergeron, one thing that has come up, and is a Fedora wide issue, not just cloud, is the means by which we deploy web apps
19:36:32 <rbergeron> rackerjoe: chef recipes for fedora packages... on your own infrastructure or to go with openstack or ...?
19:37:11 <rbergeron> rackerjoe: also upstream to... chef or openstack or ?
19:37:11 <rackerjoe> This will primarily be used for our deployments but it is on github.com/rcbops for anyone to utilize
19:37:53 <rackerjoe> Our plan is to report openstack specific bugs upstream to the openstack project.  Any packaging bugs we'll file a bugzilla for ATM
19:37:54 <rbergeron> rackerjoe: that is super cool, would you be willing to be connived into dropping a line to the mailing list about that? i am sure there are people who would be interested who may not actually slog through reading meeting minutes :)
19:38:24 <rbergeron> #info rackerjoe is working on chef recipes for fedora packages, mostly for rax infrastructure deployments but you can find it....
19:38:26 <rackerjoe> Once it is in a state where it.. eh em..  always works.  Will do
19:38:28 <rbergeron> #link github.com/rcbops
19:38:39 <russellb> and dprince and derekh have been working on puppet stuff
19:38:42 <rbergeron> rackerjoe: some people may be willing to help get it to that state :)
19:38:47 <russellb> it's used with smokestack.openstack.org
19:39:12 <rackerjoe> correction: the chef recipes are for RCB Deployments of OpenStack (not the RAX public cloud infrastructure)
19:39:13 <rbergeron> #info OpenStack: "the incipience of essex" - lots of updates in last week or so to RC1 for various essex projects
19:39:21 <rbergeron> #info plz test, wtb help there
19:39:46 <rbergeron> #info correction: the chef recipes are for rcb deployments of openstack, not rax public cloud infrastructure
19:39:50 <rbergeron> rackerjoe: gotcha
19:40:01 <rbergeron> #info dprince and derekh are working on puppet-y things
19:40:10 <rbergeron> russellb: what's this whole devstack business about
19:40:27 <russellb> devstack is a script primarily used for upstream development
19:40:39 <russellb> gets the whole stack up and running quickly from git checkouts
19:40:47 <rackerjoe> it should never be used for operational deployments
19:40:54 <russellb> right
19:41:05 <rackerhacker> i'm currently hacking on some automated kickstarts to get openstack set up within VM's/servers (devstack-ish approach, but more for production)
19:41:19 <russellb> but I find it handy while hacking on OpenStack code ... it previous was Ubuntu only, now it's getting Fedora support
19:42:49 <rbergeron> #info devstack is a script primarily used for upstream dev; gets the whole stack up and running quickly from git checkouts; not designed for operational deployments...
19:43:06 <rbergeron> #info was previously ubuntu-only, now getting fedora support
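Since devstack keeps coming up as the quick way to get a hacking environment: it is driven by a `localrc` file at the top of the git checkout, which `stack.sh` sources before cloning and starting the services. A minimal hypothetical fragment (the variable names are the commonly used ones from devstack of that era; the checkout's own stackrc is the authoritative list):

```
# localrc -- sourced by devstack's stack.sh from the top of the checkout;
# after writing it, running ./stack.sh brings the whole stack up from git
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
SERVICE_TOKEN=some-random-token
```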
19:43:29 <rbergeron> rackerhacker: anyplace where people can look or just still in the toying with it on your own phase
19:43:48 <rackerhacker> rbergeron: i'm sidelined right now by BZ801650
19:43:55 <rbergeron> #info rackerhacker is working on automated kickstarts to get openstack set up within vms/servers (devstack-ish approach, but more for production)
19:43:58 <rackerhacker> law has a fix but it's not yet pushed
19:44:40 <rackerhacker> err well, it's still in testing, but can be pushed to f17-stable if he so desires
19:45:16 <rbergeron> and since we're frozen probably not until after beta i am guessing unless it's flagged as NTH fix?
19:45:40 <dprince> rackerjoe: Are your Chef changes public?
19:45:49 <rbergeron> rackerhacker: well, cool, i guess same to you as rackerjoe: mails are nice :)
19:45:56 <rackerhacker> i'm not 100% sure... jeff law seemed to say that the first change was relatively trivial, but i'm not sure about the second :/
19:46:06 <rbergeron> rackerhacker: ahhh
19:46:14 <rackerhacker> rbergeron: i'll email jeff to find out
19:46:20 <rackerjoe> dprince: yes they are.  Don't let the name of the branch fool you.. https://github.com/rcbops/chef-cookbooks/tree/ubuntu-precise
19:46:29 <rbergeron> LOL
19:46:32 <rackerhacker> precisely
19:46:49 <dprince> rackerjoe: I like to obfuscate my branch names as well ;)
19:46:51 * rbergeron puts on her hot pangolin costume
19:47:09 <rbergeron> hrm, it's just not me
19:47:19 <rackerjoe> I haven't tested the recipes on the latest cut of the f17 packages yet but it is on my todo list
19:47:42 <rbergeron> okay, i think that's about all things openstacky, unless you guys want to go on, i'll just start yelping at ke4qqq or sparks,  or spstarr for cloudstack / opennebula stuff if they're here
19:48:16 <rbergeron> or mgoldmann about as7, or mull/gregdek/gholms/obino if any of them want to give any mini-updates on euca progress for f18 and beyond
19:48:32 * rbergeron notes as7 is more java-ish but that it has plenty of overlap with some of the packages we do over here
19:49:00 <gholms> Sorry, was distracted
19:49:11 <mull> rbergeron, jhernandez and mgoldman did a few more reviews for me.  My dep list has about 4 things left on it
19:49:31 <rbergeron> gholms: yeah yeah
19:49:34 <gholms> mull did my last review.
19:49:54 <rbergeron> #info Euca still plugs along, dep list has about 4 things left
19:50:56 <mull> not much else from us
19:50:57 * rbergeron takes other silence as things being golden (like a nice toasty bun)
19:51:04 <rbergeron> mull: gotcha
19:51:29 <rbergeron> #info anyone in Rochester area, head over to CloudCampRoc and harass mull and gregdek about how ncsu is gonna implode tonight
19:51:48 <mull> rbergeron, ncsu is not my team ... ou is
19:51:55 <gregdek> mull doesn't care about NCSU.  Harass him about Ohio.
19:51:56 <rbergeron> oh
19:52:00 <mull> :)
19:52:04 <gregdek> Even better, harass him about Ohio State.
19:52:06 <gholms> Hehe
19:52:11 * mull kicks gregdek
19:52:13 <rbergeron> well, we want them to win
19:52:24 <rbergeron> http://ianweller.fedorapeople.org/brackets/rbergero.html
19:52:34 <rbergeron> for the love of god, save my bracket
19:52:41 <rbergeron> okay, moving on :)
19:52:41 <gholms> Haha
19:52:56 <rbergeron> #topic S3 & Mirrors
19:53:00 <rbergeron> mdomsch: HI!
19:53:07 <mdomsch> IT'S ALIVE!
19:53:09 <rbergeron> it's the moment you've been waiting a long long time for :)
19:53:28 <mdomsch> I flipped the switch in MM a couple days ago
19:53:32 <mdomsch> random stats:
19:53:39 <mdomsch> Unique IP checkins since 3/21/2012 to the S3 mirror:
19:53:39 <mdomsch> EL5: 5296
19:53:39 <mdomsch> EL6: 1683
19:53:39 <mdomsch> Fedora: 170
19:53:52 * mdomsch is disappointed in the fedora numbers
19:53:56 <mdomsch> but there you have it
19:54:01 <rbergeron> Fedora ... is that F8, F16, ?
19:54:06 <rbergeron> or too hard to parse that out
19:54:07 <mdomsch> it's nearly all f16
19:54:17 <rbergeron> interesting
19:54:17 <mdomsch> it's easily parsable
19:54:43 <mdomsch> though note - I only have F15-16-17 in the mirror
19:54:59 <mdomsch> so anyone still running f8 won't get directed there
19:55:17 <mdomsch> aside from my script spamming sysadmin-main (which skvidal has offered to fix)
19:55:28 <mdomsch> it's working as expected, absent any complaints
19:56:17 <mdomsch> any questions?
19:56:29 * rbergeron notes that in the past month or so, in wiki/Statistics ... F8 has gone from 7,288,234 to 7,320,091 unique connections to repository
19:56:40 <rbergeron> mdomsch: when did you turn it on?
19:56:40 <mdomsch> hmmm
19:56:55 <mdomsch> 3/21
19:57:13 <rbergeron> in the past week from 7312439 to 7320091
19:57:33 * rbergeron notes we've always speculated that that large # was from amazon (if you compare the numbers on the wiki page, it's totally out of whack with other releases)
19:57:45 <rbergeron> Fedora 7: 4409781 | Fedora 8: 7312439 | Fedora 9: 4119425 | Fedora 10: 4791180 | Fedora 11: 5021611
19:57:51 <rbergeron> Fedora 12: 5554257 | Fedora 13: 4596972 | Fedora 14: 5253673 | Fedora 15: 2771852 | Fedora 16: 1631638
19:58:03 <rbergeron> (those are last week's numbers)
19:58:26 <mdomsch> rbergeron, so you think we should sink F8 content into there?
19:58:26 <rbergeron> mdomsch: i assume that there's no way someone could have hardcoded in an ami where to connect to for a repository, right?
19:58:38 <mdomsch> rbergeron, sure they could have
19:58:48 <mdomsch> /etc/yum.repos.d/*.repo
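To illustrate the kind of hardcoding mdomsch means: an AMI author can replace the usual mirrorlist line in a repo file with a fixed baseurl, so every instance launched from that image checks in against one specific host forever. A hypothetical fragment (the hostname and path are made up):

```ini
# /etc/yum.repos.d/fedora.repo -- hypothetical hardcoded variant
[fedora]
name=Fedora $releasever - $basearch
# instead of the stock mirrorlist=https://mirrors.fedoraproject.org/... line:
baseurl=http://mirror.example.com/pub/fedora/linux/releases/$releasever/Everything/$basearch/os/
enabled=1
gpgcheck=1
```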
19:58:53 <rbergeron> mdomsch: I am just curious about how many people are still using F8, really
19:58:59 <rbergeron> and if a huge number of that is from ec2
19:59:03 <rbergeron> (mostly the latter point)
19:59:23 <mdomsch> I suppose we could upload F8 content to the mirror, and then watch the mirror logs
19:59:27 <rbergeron> there are tons of various amis out there that are like... built for hadoop-specific things that hadoop folks/cloudera have put out
19:59:41 <rbergeron> remixes if you will based off f8 :)
20:00:16 <mdomsch> so you want me to put f8 there too?
20:00:25 <rbergeron> mdomsch: i think it might be interesting. i am guessing that finding out what ami people are actually using is asking too much :)
20:00:36 <mdomsch> yes, we have no way to get that
20:00:38 <rbergeron> mdomsch: maybe for a bit? just to satisfy curiosity? does that seem reasonable to people?
20:01:53 * rbergeron doesn't think it would kill us
20:01:59 <mdomsch> k
20:02:32 <rbergeron> mdomsch: is there a way we could see those numbers of updates ... regularly-ish?
20:02:32 <mdomsch> now, we're in US-East-1 only
20:02:42 <mdomsch> any reason to think we should upload to other regions too?
20:02:44 <rbergeron> or is it an "on-demand ask mdomsch when he's not busy with $rest of life" thing :)
20:03:09 <mdomsch> rbergeron, great question.  I've started plumbing the S3 logs into FI's awstats tool, but it's not done yet
20:03:12 <rbergeron> again, i guess it's hard for us to know what usage we have in other regions, we're just doing this as we all know it's the most used?
20:03:36 <mdomsch> maybe spevack can give us some insight
20:03:47 <mdomsch> #action mdomsch to upload F8 into the S3 mirror
20:03:51 <rbergeron> O MIGHTY SPEVACK
20:04:28 <mdomsch> that's all then
20:04:39 <rbergeron> #info S3 / mirrors is now ON! unique ip checks to s3 mirror since 3/21/2012: EL5: 5296; EL6: 1683; Fedora (15-16-16): 170
20:04:50 <rbergeron> mdomsch: you're awesome, thank you :)
20:04:54 <mdomsch> 17
20:04:58 <mdomsch> rawhide
20:05:25 <rbergeron> #info S3 / mirrors is now ON! unique ip checks to s3 mirror since 3/21/2012: EL5: 5296; EL6: 1683; Fedora (15-16-17): 170
20:05:57 <rbergeron> rawhide also? how does that wind up getting aggregated over time... do we just have an always ever-growing rawhide # or ...
20:06:07 <rbergeron> #info SUPER HUGE MIRACULOUS HUGS to mdomsch for all his work on this, thank you!
20:06:28 <mdomsch> no, it deletes content that's been removed from the master mirror
20:06:43 <rbergeron> i guess devil is in the details and probably not going to assess it deeply right this second :)
20:06:44 <mdomsch> that's one somewhat painful point - the mirrors use hardlinks - S3 doesn't
20:06:53 <mdomsch> so content moving from rawhide to f17 gets copied up again
20:07:03 * rbergeron nods
20:07:08 <mdomsch> and content moving from updates-testing to updates gets copied again
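The hardlink point is easy to demonstrate locally: on a POSIX filesystem, "promoting" a package from updates-testing to updates can be just a second directory entry for the same inode, costing no extra space or transfer, whereas S3 has no hardlink concept, so the same bytes go over the wire again. A small sketch of the filesystem side (filenames are made up):

```python
import os
import tempfile

# A file standing in for a package that landed in updates-testing/.
d = tempfile.mkdtemp()
testing = os.path.join(d, "foo-1.0.fc17.rpm.testing")
stable = os.path.join(d, "foo-1.0.fc17.rpm.stable")
with open(testing, "wb") as f:
    f.write(b"rpm payload " * 1000)

# "Promote" it with a hardlink: a second name for the same inode,
# consuming no additional space -- this is what the mirrors do.
os.link(testing, stable)

st_testing, st_stable = os.stat(testing), os.stat(stable)
print(st_testing.st_ino == st_stable.st_ino)  # both names share one inode
print(st_testing.st_nlink)                    # link count for that inode
```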
20:07:56 <rbergeron> okeedokee
20:08:15 <rbergeron> well, i guess some info is better than no info
20:08:33 <rbergeron> #action rbergeron to harass spevack to read these logs and give input to above thoughts on other zones
20:08:49 <rbergeron> #topic Open Floor / Your topic here
20:08:59 * rbergeron yields the floor to others, since she is a floor-hog
20:09:31 <tdawson> Just want to say that OpenShift is still working towards being open sourced in time for F18.
20:10:25 <ayoung> I'd like to ask if there is any interest in Diskless booting with Ramdisk RootFS out there?
20:10:54 <rbergeron> #info openshift is still working towards being open sourced in time for F18 (YAY)
20:11:12 <rbergeron> tdawson: just let us know when you are ready to get on the train of packagership if you're not already
20:11:13 <ayoung> I mean, besides my obvious interest...
20:11:28 <tdawson> There has been a lot of finger slapping and "you can't make a spec file that way" ... so I guess we're making progress.
20:11:44 <rbergeron> LOL
20:11:57 <rbergeron> tdawson: who's gonna be the lucky packager people in fedoraland?
20:13:41 <tdawson> I'm not positive who will be doing what, but me, rharrison, J5, and at least two others ... I'm terrible with names.
20:13:48 <rbergeron> ayoung: i'm not seeing a lot of ... feedback, i wonder if mailing list might have more opinion (perhaps with use cases or something)
20:14:17 <ayoung> rbergeron, I'll write it up...probably a blog post
20:14:45 <ayoung> the idea is that cloud nodes are ideally diskless, at least in certain usages
20:14:50 <rbergeron> tdawson: awesome, so you have at least 2 packagers already to help you out
20:15:07 <rbergeron> one a sponsor even :)
20:15:35 <rbergeron> #info ayoung curious about interest in diskless booting with ramdisk rootFS; will blog
20:15:45 <rbergeron> #info idea is that cloud nodes are ideally diskless, at least in certain usages
20:15:52 <tdawson> Yep ... but there are a few packages that still scare me when I look at their spec files.  I'll be cleaning one of those up next week.
20:16:03 <rbergeron> tdawson: well, we are happy to welcome that stuff (FINALLY OMG)
20:16:26 <rbergeron> anyone else? :)
20:16:30 <rbergeron> ayoung: sounds like a plan
20:16:31 <rbergeron> :)
20:17:14 * rbergeron holds the meeting for a minute before closing out and thanks everyone from the bottom of her cold little heart for coming
20:17:20 <rbergeron> :)
20:18:14 <rbergeron> [cue gholms witty comment here]
20:19:07 <rbergeron> mestery :)
20:19:35 <rbergeron> alrighty folks, thanks for coming :)
20:19:41 <rbergeron> #endmeeting