Cloud SIG meeting - logs (2010-08-12)
21:00:38 <rbergeron> #startmeeting cloud SIG
21:00:38 <zodbot> Meeting started Thu Aug 12 21:00:38 2010 UTC.  The chair is rbergeron. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:38 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
21:00:45 <rbergeron> #meeting name cloud SIG
21:00:49 <gholms|work> #meetingname cloud
21:00:49 <zodbot> The meeting name has been set to 'cloud'
21:01:03 <rbergeron> #chair gholms|work
21:01:03 <zodbot> Current chairs: gholms|work rbergeron
21:01:27 * rbergeron looks around to see who else is here
21:01:31 <rbergeron> #topic roll call
21:01:33 * rbergeron 
21:01:38 * creiht bows
21:01:49 <brianlamere> hola
21:01:49 * gholms|work groggily waves
21:02:09 <rbergeron> jforbes is at linuxcon this week
21:02:28 <rbergeron> gholms - why the groggy? :)
21:02:52 <gholms|work> Spending too much time at work; I need more sleep.
21:03:07 * rbergeron throws gholms a pillow
21:03:22 <gholms|work> Shall we begin?
21:03:25 <rbergeron> allllrighty then - yes plz
21:03:29 <rbergeron> #topic agenda
21:03:36 <rbergeron> #link http://fedoraproject.org/wiki/Cloud_SIG#Upcoming_meeting_agenda
21:03:50 <rbergeron> ^^^ actually ... agenda from 2 weeks ago? -
21:04:00 <rbergeron> i was out last week - not sure where we're at - maybe gholms wants to lead a bit?
21:04:32 * jsmith is here
21:04:35 <gholms|work> I wasn't here last week either.
21:04:44 <rbergeron> ahhh.
21:05:02 * rbergeron wonders who was here :)
21:05:58 <gholms|work> There appears to have been no meeting last week.
21:05:59 <brianlamere> I was here, but I only listened for things that pertained to me
21:06:09 <rbergeron> here's logs from last week:
21:06:11 <rbergeron> http://meetbot.fedoraproject.org/fedora-meeting/2010-08-05/fedora-meeting.2010-08-05-21.00.html
21:06:13 <gholms|work> Or if there was it didn't get #meetingnamed.
21:06:46 <brianlamere> no, there was - there was a bit of a discussion about whether a #fedora-cloud channel should be created, and we explained that with the new pv-grub it will be transparent; it will be just as though it's a normal vm somewhere
21:07:16 <gholms|work> #fedora-cloud already exists.
21:07:20 <rbergeron> i think we do have a #fedora-cloud channel :)
21:08:08 <brianlamere> ah well, see - I retained a lot from the meeting.
21:08:25 <rbergeron> you did :)
21:09:19 <brianlamere> I don't really remember talking about much else though
21:09:24 * rbergeron nods
21:09:26 <gholms|work> It appears as if people on #fedora are unsure of how to handle support for cloud-based images.
21:09:56 <rbergeron> yes - i don't know exactly how to handle that - other than to make sure we have some decent documentation they can point to.
21:10:25 <brianlamere> yeah, hopefully in the end he was convinced that with the new images he'll be able to just treat it like a normal xen guest.  Cause really, that's all it will be.  He had apprehension due to the fed8 images avail currently
21:10:41 * rbergeron nods
21:10:46 <gholms|work> pvgrub ought to make that a non-issue.
21:11:30 * rbergeron nods
21:11:42 <brianlamere> they should aim at just adding xen support, and not worry about aws-centric stuff.  That's the point of pv-grub
21:12:20 <brianlamere> with the concern that fedora doesn't do xen anymore really ;)
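(For reference, what makes this "just a normal xen guest": pv-grub reads an ordinary grub menu.lst from inside the image, so an image needs only a standard grub config and a xen-capable kernel. A minimal sketch; the kernel/initrd versions and root device below are illustrative, not a tested recipe:)

    # kernel/initrd versions and root device are examples only
    cat > /boot/grub/menu.lst <<'EOF'
    default 0
    timeout 1
    title Fedora 13
        root (hd0)
        kernel /boot/vmlinuz-2.6.33.6-147.fc13.x86_64 ro root=/dev/xvda1
        initrd /boot/initramfs-2.6.33.6-147.fc13.x86_64.img
    EOF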
21:12:57 <gholms|work> On to the next topic?  (EC2)
21:13:24 <rbergeron> indeed
21:13:30 <rbergeron> #topic EC2 status
21:13:35 <rbergeron> What do we know here?
21:14:01 <rbergeron> i see in the last meeting logs -
21:14:10 <rbergeron> 21:04:29 <jforbes> As for the feature, we missed the chopping block with agreement that working proof of concept would constitute 100% done for feature freeze purposes
21:14:23 <rbergeron> do we know when we need to have working proof of concept done?
21:14:57 <gholms|work> I'm of the opinion that being on the feature list isn't that important - we can still publish F14 images when we're ready.
21:15:12 <brianlamere> well, if we missed it then it may not matter?
21:15:47 <rbergeron> we're still on feature list.
21:16:23 <rbergeron> it's nice to be there, because Feature list is one place that press people look when trying to figure out 'what's new in fedora' - also, things like talking points and one page release notes come from things on feature list.
21:16:31 <rbergeron> basically, it's a good way for us to get some attention for doing new stuff.
21:17:28 <rbergeron> so - it looks like I need to poke justin for some of that information - what do we need to have in place, and when, for a working proof of concept
21:17:33 <gholms|work> Well, we need a documented image-building process (along with the relevant kickstart) and a documented uploading process.
21:17:40 <rbergeron> yup.
21:18:05 <gholms|work> Once an image is uploaded it just becomes standard fodder for AWS tools and euca2ools.
21:18:27 <gholms|work> Maybe a few tests to make sure the image works?  (e.g., Can I log in?)
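(A smoke test along the lines gholms suggests could be as small as launching the image and checking that ssh login works. The AMI ID, keypair, and hostname below are placeholders:)

    # ami-xxxxxxxx and mykey are placeholders for the candidate image
    ec2-run-instances ami-xxxxxxxx -t m1.small -k mykey
    # once the instance reaches "running", grab its public hostname
    ec2-describe-instances
    # can we log in, and is it really Fedora?
    ssh -i mykey.pem root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com 'cat /etc/fedora-release'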
21:18:28 * rbergeron wonders where we should start  as far as documentation
21:18:41 <gholms|work> Well, who here has built an EC2 image before?
21:19:00 <brianlamere> I've built a "few"
21:19:23 <brianlamere> I think I built/created about a dozen a day avg for a couple weeks
21:19:24 <rbergeron> not many.
21:19:28 <rbergeron> wow
21:19:46 <gholms|work> That's a start; then you know at least part of the process.
21:19:59 <brianlamere> I was working out a few kinks, with no local xen server, so I was just building it at ec2
21:20:28 <gholms|work> Are you able to build it in a chroot and package that?
21:20:51 <gholms|work> I tried doing that once but then my grid project ended.
21:20:51 <brianlamere> once the kinks were out I then made my blessed images for the various tasks I do.  That doesn't include the AMIs I had for other things, that's just the fedora13 pv-grub images
21:21:14 <brianlamere> yeah, I posted a tiny, terrible, ugly script to the list that does it for you
21:21:30 <brianlamere> pvgrub2ebs or something like that
21:21:50 <gholms|work> #link http://github.com/dazed1/pvgrub2ebs
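(The chroot approach brianlamere describes boils down to populating a loopback filesystem with yum's --installroot. A rough sketch of the kind of flow pvgrub2ebs automates; sizes and package set are illustrative:)

    dd if=/dev/zero of=f13.img bs=1M count=2048   # empty 2 GB image file
    mke2fs -F -j f13.img                          # ext3 filesystem on it
    mkdir -p /mnt/f13 && mount -o loop f13.img /mnt/f13
    yum --installroot=/mnt/f13 -y groupinstall Base
    yum --installroot=/mnt/f13 -y install kernel openssh-server
    # ...then fstab, network config, and /boot/grub/menu.lst for pv-grub...
    umount /mnt/f13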
21:22:25 <rbergeron> brianlamere: would you be willing to take a first stab at writing some documentation - i'm sure on the wiki would be fine
21:22:33 <rbergeron> basic instructions
21:22:45 <rbergeron> or even to the mailing list - i can convert to wiki
21:22:50 <brianlamere> yeah, that will make an ebs-boot image in a chroot.  justin is working on ephemeral-backed though, built more intelligently (kickstart on a local system, etc)
21:23:26 <gholms|work> There has got to be a way to make anaconda do this.
21:23:40 <brianlamere> rbergeron:  for justin's work?  I can, I'm sure.
21:24:27 <rbergeron> usually, just having a basic framework gives some other people a place where they can fill in what they know - but it's harder to get people to help when nothing's been started yet.
21:24:59 <rbergeron> do you want to just send to the mailing list? or would you be willing to do it on the wiki page :)
21:25:11 <brianlamere> gholms:  oh, yeah, I'm sure there is (a way to make anaconda do it).  But I cheated on fedora with gentoo some years back, and just got used to chroot builds.  I've been doing ugly stuff for embedded or from-scratch systems for so long I am used to it anyway
21:25:35 <gholms|work> What you're doing is basically what I initially did.  I never got any further.
21:26:08 <brianlamere> yeah, for my purposes I didn't really need to get further since at that point you can do "yum install happypackage"
21:26:25 <brianlamere> short of dealing with the key imports, of course.  but I don't use those anyway.
21:26:55 <brianlamere> so my needs sort of diverge from the general population's at that point; I do auth for the nodes based on public keys in ldap
21:27:11 <gholms|work> If we can figure out how to make anaconda do most of the work then we can just throw most everything in a kickstart.
21:27:23 <gholms|work> I think huff made one at one point.
21:28:10 <brianlamere> gholms:  yes, the way to do that is to kickstart locally on a xen host/server you can better control, then copy the image that was created out to S3 and create an AMI that boots from it
21:28:16 <brianlamere> which is what justin is doing
21:28:34 <brianlamere> at least, I couldn't really think of how to make anaconda do it "out there" for me
21:28:59 <brianlamere> (though really, I only needed a working AMI, so I didn't spend terribly much time thinking about it)
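(One plausible shape for the "kickstart locally, then push to EC2" flow being described; the kickstart URL, names, and sizes are hypothetical, and this is a sketch rather than justin's actual process:)

    # ks URL, guest name, and disk size are placeholders
    virt-install --name f13-ec2 --ram 1024 \
        --disk path=f13-ec2.img,size=10 \
        --location http://download.fedoraproject.org/pub/fedora/linux/releases/13/Fedora/x86_64/os/ \
        --extra-args "ks=http://example.com/ec2.ks" --nographics
    # then bundle and upload the resulting f13-ec2.img and register the AMI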
21:30:43 <gholms|work> brianlamere: If you could write about your experiences that would make a good starting point.  Is that all right with you?
21:31:30 <jsmith> What about booting an AMI and using VNC to control it locally?
21:31:52 <rbergero1> wow - fail
21:31:56 <jsmith> (control the Anaconda installation, that is)
21:31:58 <brianlamere> gholms:  yeah, that would be easy enough
21:32:20 <gholms|work> jsmith: Chicken and egg problem
21:32:25 * rbergero1 wonders if anyone can see her
21:32:34 <jsmith> rbergero1: Yeah, we see you
21:32:37 * gholms|work pokes rbergero1
21:32:45 <rbergero1> weiiiird
21:32:50 * rbergero1 waits for rbergeron to die off
21:32:51 <creiht_> heh
21:33:09 <brianlamere> jsmith:  I thought about making a tiny AMI that would run as a network-boot kickstart, and then create yet another AMI...but you have to have the chicken lay the egg.
21:33:16 <gholms|work> rbergero1: /msg nickserv ghost rbergeron  (IIRC)
21:34:12 <gholms|work> #action brianlamere to document basic AMI build process; we can refine it later
21:34:14 <brianlamere> jsmith: like maybe just an AMI that is the contents of the installation DVD, and that takes user arguments during the instance's creation, and....yeaaahhh, certainly there's possibilities, but in the end you're just trying to make an AMI ;)  so just make one
21:34:28 <jsmith> brianlamere: Gotcha
21:36:01 <gholms|work> Oh, before I forget:
21:36:23 <gholms|work> #info New euca2ools build; please try building images with it:  http://repos.fedorapeople.org/repos/gholms/cloud/
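(For anyone trying that build: the euca2ools equivalents of the classic bundle/upload/register steps look roughly like this. The certificate paths, account number, and bucket name are placeholders:)

    # cert.pem, pk.pem, the account number, and the bucket are placeholders
    euca-bundle-image -i f13-ec2.img -r x86_64 \
        -c cert.pem -k pk.pem -u 123456789012
    euca-upload-bundle -b my-fedora-bucket -m /tmp/f13-ec2.img.manifest.xml
    euca-register my-fedora-bucket/f13-ec2.img.manifest.xml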
21:38:02 <brianlamere> let me know what sort of documentation you'd like on it; I could just expand the readme at http://github.com/dazed1/pvgrub2ebs to be more friendly to people who don't know what they're doing at all
21:38:34 <gholms|work> brianlamere: Any progress on co-maintaining boto?
21:39:37 <gholms|work> Heck, the readme you have can probably just go on the wiki; it's pretty good.
21:41:17 <gholms|work> [You hear in the distance the chirping of a song bird]
21:42:11 <brianlamere> I haven't heard back from Robert yet; I'll re-poke him
21:42:50 <gholms|work> brianlamere: If all else fails he's on IRC as rsc if you can't reach him by email.
21:44:14 <brianlamere> ok, I'll give him a couple days on the new email and then check irc
21:48:13 * rbergeron nods
21:48:16 <gholms|work> Anything else for EC2 or shall we move on?
21:48:17 <brianlamere> freenode is not cooperating with the meeting
21:48:21 <rbergeron> no, it's not.
21:50:19 <gholms|work> What's next on the agenda?
21:51:17 <gholms|work> (Can anyone hear me?)
21:52:37 <brianlamere> yes, then you left
21:52:43 <gholms|work> Oh, good.
21:53:01 <brianlamere> but I'm just me, question is whether others can ;)
21:53:21 <gholms|work> rbergeron, jsmith:  Can you hear me at all?
21:53:35 <rbergeron> i can hear you now :)
* rbergeron has no idea how long it's been since you talked, but... :)
21:53:56 <gholms|work> About five minutes
21:53:59 * rbergeron had changed the topic from ec2 like 20 minutes ago before she split off
21:54:11 <brianlamere> he asked what was next on the agenda
21:54:23 <gholms|work> Awesome.  Looks like it was just me and brianlamere for a while.
21:54:23 <jsmith> gholms|work: Yeah, I'm here
21:54:34 * jsmith thinks we're all back
21:54:36 <gholms|work> Anything else for EC2, or shall we move on?
21:55:01 <jsmith> Greg had talked about some deal with Amazon for some free instances or something
21:55:06 <jsmith> Anybody know what ever happened with that?
21:55:14 * jsmith assumes it got lost in the shuffle
21:55:21 * gholms|work mentioned http://repos.fedorapeople.org/repos/gholms/cloud/ for those who were not here to see it
21:55:22 <rbergeron> no - i'm not sure
21:55:30 <rbergeron> ke4qqq: do you know anything about the above? ^^^
21:55:48 <rbergeron> i know there was some contact at amazon we have
21:55:54 <brianlamere> oops, ec2 question: is this where you'd like me to put the document?  https://fedoraproject.org/wiki/Publishing_image_to_EC2
21:56:17 <ke4qqq> rbergeron: I emailed and was absolute fail on calling - no response to email
21:56:42 <gholms|work> What we really need as far as EC2 instances go is a yum mirror in every availability zone and a mirrormanager configuration that points instances to the correct mirrors.
21:57:09 <rbergeron> do you still have his number / name?
21:57:12 <gholms|work> If we don't have that infrastructure set up then people are going to have to pay for yum update bandwidth.
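(The end state gholms describes would look, on an instance, something like the repo file below; the mirror hostname is hypothetical, and in practice mirrormanager would hand out the right URL automatically:)

    # us-east-1.mirror.example.com is a placeholder hostname
    cat > /etc/yum.repos.d/fedora-ec2.repo <<'EOF'
    [fedora-ec2]
    name=Fedora $releasever - $basearch - in-region mirror
    baseurl=http://us-east-1.mirror.example.com/fedora/linux/releases/$releasever/Everything/$basearch/os/
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch
    EOF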
21:57:56 <jsmith> Aye...
21:58:16 <ke4qqq> gholms|work: do they do that for other distros?
21:58:27 <brianlamere> well, considering that you don't get charged for traffic, what we need is just people (several per region) who are willing to pay for storage
21:58:36 <gholms|work> I'm not sure.
21:58:53 <gholms|work> EC2 charges for outbound traffic.  Inbound is free for a limited time.
21:58:59 <brianlamere> err...charged for traffic intra-region, that is
21:59:10 <gholms|work> Intra-region or intra-zone?
21:59:30 <brianlamere> they keep moving that date out for the inbound traffic; it was Feb, then June, now Nov...
21:59:46 <brianlamere> I'm pretty sure it's free intra-region.  quick enough to check, sec
22:00:16 <gholms|work> Ok, they charge for "all data transferred between instances in different Availability Zones in the same region"
22:00:39 <brianlamere> "There is no Data Transfer charge between Amazon EC2 and other Amazon Web Services within the same region"
22:01:28 <gholms|work> Yeah, but then it says, "Data transferred between AWS services in different regions will be charged as Internet Data Transfer on both sides of the transfer."
22:01:31 <brianlamere> ha ha - we quoted conflicting info.  what's the scoop on if the files are on RRS?
22:01:48 <gholms|work> This is one EC2 instance talking to another EC2 instance - not EC2 talking to S3.
22:03:39 <gholms|work> Either way we need someone to figure out what, if any, courtesies Amazon is willing to extend to us.
22:03:52 <rbergeron> yes
22:03:56 <rbergeron> ke4qqq - want to try again?
22:04:05 <ke4qqq> sure
22:04:16 * rbergeron gets out her bus keys
22:04:17 <rbergeron> thanks :)
22:04:42 <rbergeron> #action ke4qqq to look into what amazon can give us as far as courtesies / help / etc
22:05:25 <gholms|work> We also need to know how to upload official images.  Do we need a shared account, or...?
22:06:03 <brianlamere> the bulk of the data transfer could be from S3 (or RRS) buckets, with an ec2 just serving as a head...then we get the benefit of "There is no Data Transfer charge for data transferred within an Amazon S3 Region via a COPY request" ( http://aws.amazon.com/s3/#pricing )
22:06:17 <brianlamere> sorry, didn't hit return on that earlier
22:07:04 <brianlamere> by "official" do you mean, ones that replace the options amazon gives when you try to create an instance from the webpage?
22:07:18 <brianlamere> (ie, replacing the ugly fed8 AMIs)
22:07:25 <gholms|work> There should be an "official" Fedora Project bucket.
22:07:31 <ke4qqq> didn't justin already have the account to do stuff like that
22:07:46 <gholms|work> Can you directly serve up the contents of an S3 bucket in a form that yum can use?
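(On gholms' question: yes, in principle. yum only needs HTTP, and public S3 objects are served over plain HTTP, so a bucket populated with createrepo output can act as a repo. A minimal sketch with a hypothetical bucket name:)

    createrepo /srv/repo
    # fedora-ec2-repo is a placeholder bucket name
    s3cmd put --acl-public --recursive /srv/repo/ s3://fedora-ec2-repo/
    # instances would then use:
    #   baseurl=http://fedora-ec2-repo.s3.amazonaws.com/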
22:07:58 <brianlamere> because it seems like for any other purpose, something is "official" just by declaring it so; make a space, make the images public, advertise it on the webpage
22:08:42 * rbergeron isn't sure
22:08:47 <gholms|work> The official, RH-sponsored RHEL images are in a particular bucket, for example.
22:10:36 <rbergeron> okay - so we need to figure out
22:10:47 <rbergeron> #1 if we have an official fedora bucket
22:11:01 <rbergeron> #2 if there is a shared account for uploading official images
22:11:11 <rbergeron> what else
22:11:12 <brianlamere> I didn't even know there were official redhat public images :)
22:11:23 <brianlamere> when I do:      ec2-describe-images -H --region us-west-1 -x all|grep -i "redhat"
22:11:25 <brianlamere> I get nada
22:12:50 * rbergeron notes we're a bit overdue - thank you, freenode :)
22:14:17 <brianlamere> oh wait, nm - I hadn't described others' images in a while ;)  yeah, they have redhat-cloud for rhel5
22:15:28 <gholms|work> Anyone know who manages those images?
22:16:13 <rbergeron> not sure.
22:16:19 <rbergeron> shall we find out?
22:16:21 * rbergeron can ask around
22:16:37 <gholms|work> I *think* it's huff, judging by my IRC logs.
22:16:48 <brianlamere> ok, it seems it's much like the other naming issues there; it's a first-come, first-served basis; you claim a name, then that name is now yours
22:16:55 <gholms|work> Yeah
22:17:18 <brianlamere> so someone just needs to grab whatever "official" name the group wants to use, before someone outside the group does
22:17:42 <brianlamere> might as well grab the appropriate s3 bucket names, too
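(Claiming a bucket name is a one-liner once credentials are set up; the name below is only an example, not an agreed-upon official one:)

    # example name only; S3 bucket names are globally unique, first come first served
    s3cmd mb s3://fedora-official-images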
22:19:40 * rbergeron feels like we're naming off a lot of stuff to do here - not sure what i should be capturing exactly, or when we need to be doing it, etc
22:20:23 <gholms|work> Let's find out how the rhel images are uploaded.  That's mostly independent of the image building process.
22:21:38 <rbergeron> #action rbergeron find out who owns the rhel images - figure out how they get uploaded
22:21:47 <brianlamere> yeah, there's certainly some sort of way to do uploads to those as a group; as a solo person I haven't even looked at that personally.
22:21:57 * rbergeron wonders if she still has control of the bot after all the splitting
22:22:11 <gholms|work> You'll find out afterwards!
22:22:26 <brianlamere> try changing the meetings name, and see ;)
22:22:34 <gholms|work> So that's image-building and image-uploading.  Anything else, or shall we move on?
22:23:22 <gholms|work> brianlamere: If you can, please see how much of the process you can use euca2ools for.  A new enough version is in the repo I linked to above.
22:23:53 * rbergeron nods
22:24:00 <rbergeron> #meetingname fedora_cloud_continued
22:24:00 <zodbot> The meeting name has been set to 'fedora_cloud_continued'
22:24:03 <rbergeron> yay
22:24:06 <gholms|work> #undo
22:24:06 <zodbot> Removing item from minutes: <MeetBot.items.Action object at 0x12f5c290>
22:24:21 <gholms|work> Action object?  Whaa?
22:24:38 <gholms|work> #action rbergeron find out who owns the rhel images - figure out how they get uploaded
22:24:46 <gholms|work> #meetingname cloud
22:24:46 <zodbot> The meeting name has been set to 'cloud'
22:24:56 <brianlamere> gholms:  ok
22:24:58 <gholms|work> I'll have to keep that in mind.
22:26:12 <gholms|work> Next topic?
22:27:07 <rbergeron> yup.
22:27:12 <rbergeron> #topic openstack
22:27:48 <rbergeron> I think we're a bit hung up here from what i've seen - i know ian reviewed the swift package, and silas is just trying to get a chance to wrap up the loose ends
22:28:29 <rbergeron> as of the last mail i saw - august 8
22:28:31 <rbergeron> ianweller: you around?
22:30:00 <ianweller> rbergeron: maybe
22:30:13 <ianweller> i think he did post a new spec. i haven't got to it yet
22:30:21 <rbergeron> https://bugzilla.redhat.com/show_bug.cgi?id=617632 <--- he asked for a little guidance on the upstream question - not sure if you've seen that
22:30:32 <ianweller> yeah. i got the email and got distracted.
22:30:36 * ianweller sighs  :)
22:31:20 * gholms|work is out of tea
22:31:29 * ianweller puts that bug on his todo list for saturday
22:31:46 <rbergeron> ianweller: no worries :)
22:32:26 <gholms|work> #action ianweller to continue working on openstack-swift package review
22:33:38 <rbergeron> okay.
22:33:52 <rbergeron> #topic open floor
22:35:38 <rbergeron> anyone else have anything?
22:36:05 <gholms|work> [Everyone stares at rbergeron]
22:38:16 * rbergeron laughs
22:38:19 <rbergeron> i guess not.
22:38:23 * rbergeron will end the meeting in like 30 seconds
22:39:27 <gholms|work> Thanks for coming, everyone!
22:40:00 <rbergeron> #endmeeting