Cloud SIG (23 Sep 2010)
LOGS
21:00:19 <gholms> #startmeeting Cloud SIG (23 Sep 2010)
21:00:19 <zodbot> Meeting started Thu Sep 23 21:00:19 2010 UTC.  The chair is gholms. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:19 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
21:00:22 <rbergeron> crap
21:00:25 <gholms> #chair rbergeron
21:00:25 <zodbot> Current chairs: gholms rbergeron
21:00:26 <rbergeron> wrong key :)
21:00:28 <gholms> ;)
21:00:37 * rbergeron just had her power go out so is flustered
21:00:40 <gholms> #meetingname cloud
21:00:40 <zodbot> The meeting name has been set to 'cloud'
21:00:50 <gholms> All yours!
21:01:06 <rbergeron> #topic Roll Call!
21:01:09 <rbergeron> Who's here? :)
21:01:12 <gholms> bacon
21:01:35 <rbergeron> omg, WHERE
21:01:57 <gholms> Well, not any more.  You should've been here for the last meeting.
21:02:12 <rbergeron> hey obino :)
21:02:20 <rbergeron> hey, i came back at the 30 minute mark as planned :)
21:02:26 <obino> hello rbergeron
21:02:35 <rbergeron> hi brianlamere :)
21:02:37 <gholms> jforbes: You around?
21:02:59 <rbergeron> I think jforbes is doing virt test day, but maybe he can pop in and update us real quick :)
21:03:13 <rbergeron> #topic EC2 status
21:03:24 <rbergeron> Or if anyone else knows what's going on as of late - maybe they can fill us in :)
21:03:38 <brianlamere> greetings!
21:03:52 <rbergeron> Or beyond that - if we want to talk about anything else EC2-related.
21:04:05 <rbergeron> I know we've been talking about documentation a bit, i have no idea where that's at as of this moment.
21:04:16 <brianlamere> do you know if jforbes got back with Ben@amazon about the AMIs?
21:05:11 <gholms> I heard most of his time went toward preparing for the test day today.
21:05:28 <brianlamere> I only ask because if he responded, he took me off the cc-list (which is entirely ok obviously, I'm just eager to see those Fed8 AMIs gone)
21:05:38 <brianlamere> ah, ok.
21:06:03 * rbergeron looks to see if she saw anything on that
21:06:09 <brianlamere> Ben seemed very interested in removing those images and I didn't really have a good answer for him
21:06:30 <brianlamere> so I pointed him to jforbes, but that was...errr...nearly 2 weeks ago?  at least a week and a half
21:07:01 <gholms> We should probably try to keep people from instantiating more systems that are two years behind on errata.
21:07:20 <rbergeron> Or beyond that - if we want to talk about anything else EC2-related.es.
21:07:24 <rbergeron> woops
21:07:27 <rbergeron> fail keyboard
21:07:29 <brianlamere> we should keep people from using any AMI that isn't using pv-grub, at this point
21:07:32 * rbergeron meant to say, yes :)
21:07:46 <brianlamere> regardless how outdated the packages are ;)
21:07:59 <gholms> I want to know how to restrict an S3 bucket to a given EC2 region.
21:08:04 <rbergeron> what about people who are already using F8 on ec2 - will that disappear? will they know they need to migrate?
21:08:09 <rbergeron> orrrr ?
21:08:29 <gholms> rbergeron: Their instances *should* keep running; they just won't be able to create new ones.
21:08:32 <brianlamere> restricting on a region is something Ben said he'd work with us on, last I heard on that he was waiting for some info back from us
21:08:41 <gholms> What does he need?
21:09:15 * gholms also thinks Ben should mail the list
21:09:18 <brianlamere> there are ACLs that can be set on the buckets to restrict it to a region; we could do that ourselves really, but the tools for doing that type of ACL are a bit...err...underdeveloped
21:09:36 <gholms> How do you select a specific region?
21:09:40 <rbergeron> is ben on the list? :)
21:10:01 <gholms> If he is he's never posted to it.
21:10:09 <rbergeron> oh, right, i could probably look at that
21:10:16 <brianlamere> Not that I know of; I can suggest it might be a quicker way for getting communication accomplished though ;)
21:10:24 <gholms> Ohh yeah.
21:10:46 <gholms> If their IP blocks were fixed that would work instead, but AFAIK that isn't the case right now.
21:11:02 <rbergeron> nada. :)
21:11:04 <gholms> Did I talk about how Ubuntu's EC2 mirroring system works right now?
21:11:04 <rbergeron> yes, it would be.
21:11:23 <rbergeron> I think you did, but feel free to run it by us again. :)
21:12:22 <brianlamere> there are ACLs other than IP blocks.  There's a bit of convo on that I had with Ben and Matt; it was waiting on whether there was opposition to making an account per region, err...something else...and then finally, Ben was going to get with some people there to see what could be comp'd
21:13:18 <gholms> Yeah, I didn't see anything like a region block in the ACL docs, so I figured I'd ask.  *shrug*
21:13:32 <brianlamere> you can actually ACL based on region.  That, and just do a metadata lookup on your instance of what region you're in, and then point to fedora-(that region name)-s3.amazonaws.com for the bucket ;)
21:13:35 <gholms> As it starts up, an Ubuntu instance looks up what region it's in and adds a region-specific mirror to its apt configs.  We can do the same thing by naming our S3 buckets accordingly.  I think brianlamere talked a bit about that sort of thing before.
21:14:05 <gholms> brianlamere: Any links or examples or anything?  Because that would be awesome.
21:14:12 <brianlamere> aye - my thought was we could cheat by replacing the preferred mirror with the mirror named after the region :)
21:15:09 <gholms> We don't even have to do that; we can just add a baseurl line to the main repo configs and that mirror will take precedence over those given by mirrormanager.
21:15:23 <gholms> As an added bonus, that makes stuff fall back gracefully.
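(A minimal sketch of the repo override gholms describes, assuming the per-region bucket naming floated above; the bucket URL and path are placeholders rather than an agreed layout. With both lines present, the region-local baseurl is tried first and the mirrormanager list remains as the graceful fallback:)

    [fedora]
    name=Fedora $releasever - $basearch
    # hypothetical region-local S3 mirror, tried before the mirrormanager mirrors
    baseurl=http://fedora-us-east-1.s3.amazonaws.com/releases/$releasever/Everything/$basearch/os/
    # normal metalink kept so instances fall back gracefully if the bucket is unreachable
    mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
    enabled=1
    gpgcheck=1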
21:15:55 <rbergeron> can we document what would be the best thing to do somewhere? if it's not done already
21:15:59 <rbergeron> :)
21:16:54 <brianlamere> the cloud-init stuff ubuntu uses can tell you region name, but it's also available from any ec2 instance by doing a simple curl - it's all just REST stuff
21:17:00 <gholms> Yeah
21:17:05 <brianlamere> "curl http://169.254.169.254/latest/" and you get fun info
21:17:34 <gholms> Are there any docs or examples that talk about restricting an S3 bucket by region?  I must've missed them.
21:18:11 <smooge> here
21:18:12 <smooge> sorry
21:18:12 <brianlamere> specifically, "curl http://169.254.169.254/latest/meta-data/placement/availability-zone/"
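(A short sketch of the lookup brianlamere describes: turn the metadata answer from that curl into a region name, and from that into the hypothetical region-local bucket URL used in the repo snippet above:)

    #!/bin/sh
    # ask the instance metadata service which availability zone we are in
    az=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
    # drop the trailing zone letter, e.g. us-east-1a -> us-east-1
    region=${az%?}
    # hypothetical per-region bucket to use as the yum baseurl host
    echo "http://fedora-${region}.s3.amazonaws.com/"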
21:18:24 <brianlamere> gholms:  nope, unfortunately.  Not that I've seen, at least
21:18:31 <rbergeron> hiya smooge :)
21:18:45 * gholms hands out more coffee
21:19:05 <brianlamere> it's in the REST documentation somewhat, but none of the tools that are most-often used will handle it
21:19:12 <brianlamere> but, Ben said he'd help with it ;)
21:20:18 <gholms> Are you collaborating with him on that, or...?
21:21:45 <brianlamere> I think the idea was we're waiting on actually organizing the game plan in a way that can be presented; I had suggested that maybe we look at our 3 options:
21:22:34 <brianlamere> err...what were those options...eh, there was an email about it I sent out, I can try to dig it up ;)
21:22:45 <brianlamere> I tend to write such long emails they get boring, I know
21:22:52 <brianlamere> it's a character flaw
21:23:34 <brianlamere> Amazon has what they're willing to do already, what they /might/ be willing to do if they knew what we wanted, and then we have what we're willing to do if it's done without any help from them at all
21:25:01 <brianlamere> the issue is just then that I can't speak for Fedora about what they want or are willing to do - I could help make a mini proposal though, if that would help
21:26:02 <gholms> Well, we have one bucket per region, each of which is restricted to its respective region's instances, and an instance (either one or one per region) that pulls packages from upstream mirrors and adds them to those buckets using an as-yet unwritten script.
21:26:12 <gholms> Does that reflect what we discussed so far?
21:27:52 <brianlamere> are those things already in place?  He had suggested doing something I already do myself for such things, which was an account per region, for security reasons; I didn't see anyone say whether they were against that idea, but I also didn't see anyone say they were for it ;)
21:28:20 * rbergeron is leaving decisions to those who know better :)
21:28:25 <gholms> I don't think anyone has done anything like that yet.
21:28:48 <brianlamere> Amazon would be willing to cover the buckets, and possibly the ec2 instances as well; Ben was going to discuss it with mgmt there, but I think he was hoping for a more precise gameplan before thinking about the ec2 portion
21:29:25 <brianlamere> insert "most likely" before the "would" at the start, there
21:29:35 <gholms> One account per region still sounds good to me, though I'm not sure how we would manage that.
21:30:03 <gholms> There being multiple people who will probably need to manage this and all...
21:30:12 <brianlamere> the accounts can be set to inherit up to one account, so you really only see the one.  At my work, we create an account per customer, but we only get one bill ;)
21:30:47 <gholms> Sure, but then how do we *do* stuff with these sub-accounts?
21:31:58 <brianlamere> well each would then have their own set of keys, and since they'd have an ec2 head per region, that head would write to its S3 buckets with its own keys; same as one would do with a single account, except instead of using the same keys everywhere you only use that region's keys in that region
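(The per-region head's job is the "as-yet unwritten script" gholms mentioned earlier; one possible shape, assuming rsync access to an upstream Fedora mirror and s3cmd as the S3 client - the mirror URL, bucket name, and config path are all placeholders:)

    #!/bin/sh
    # pull the current tree from an upstream mirror, then push it into this
    # region's bucket using only this region's credentials
    set -e
    rsync -av --delete rsync://some.fedora.mirror/fedora/linux/releases/ /srv/fedora/releases/
    s3cmd -c /etc/fedora-mirror/us-east-1.cfg sync /srv/fedora/ s3://fedora-us-east-1/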
21:32:57 <gholms> How do we log in and otherwise manage them?
21:34:18 * rbergeron has no idea :)
21:34:22 <brianlamere> well that would be however it is that Fedora does things; that's not the keys for login accounts on the servers
21:35:18 <brianlamere> Amazon just does a weird thing where they create master keys for the account that are what are used to do the REST queries, including operations on S3
21:36:05 <brianlamere> that's the /aws/ account, not the unix account.  How people log in is done via...I dunno...FreeIPA?  =D
21:36:40 * gholms should speak with mmcgrath about making this sort of thing work
21:36:46 <brianlamere> AWS does need to do a lot to split out the master account into sub accounts per service though, I agree
21:37:17 <gholms> Ooh!  Have you looked at IAM yet?
21:38:10 <gholms> IAM does almost exactly what you're talking about.  It's a preview right now, though.
21:38:10 <gholms> http://aws.amazon.com/iam/
21:38:58 <brianlamere> ...why no, I haven't.
21:39:23 <brianlamere> Hadn't heard about it.  Gosh, I...hope they get that out of beta soon, that will make my life at work SOOOOO much easier
21:39:38 <brianlamere> by a factor of 42 million soybeans
21:39:44 <gholms> O_o
21:40:08 <brianlamere> (I can't quantify it)
21:40:17 <gholms> Someone (infrastructure, maybe?) would still need to hold the keys to the main AWS accounts, but that might help alleviate keys-to-the-kingdom problems on the AWS side anyway.
21:40:19 <gholms> Grr...
21:40:23 * gholms kicks itunes
21:41:06 <gholms> Can you dole out permissions on instances and buckets to specific AWS accounts?
21:42:15 <rbergeron> lol
21:42:31 <rbergeron> I have no idea.
21:42:40 <brianlamere> well...ok, lemme reclarify ;)
21:42:54 <brianlamere> those keys have nothing to do with permissions *on* an instance
21:43:10 <brianlamere> it is solely permission to *create* an instance, modify it, delete it, etc
21:43:39 <gholms> Everything on the AWS side, not the VM's side.
21:43:51 <brianlamere> you can make a million unix accounts on the instance and Amazon doesn't have any say over those.  The AWS key is just for doing REST commands.
21:44:18 <brianlamere> the S3 buckets though - if you set them with sane settings, only a particular account can write to the bucket
21:45:22 <brianlamere> and, as it sits, the keys to do that writing are the same keys that allow you to create instances, EBS volumes, ELB instances, etc etc
21:45:23 <gholms> All I'm wondering is this:  *someone* is going to need the authority to invoke and cancel mirror instances.
21:48:07 <brianlamere> that's what the per-region accounts facilitate; allow that account *only* access to S3 and no other services, and it isolates that.  You can also then add other accounts as having limited access to the buckets (you could allow my account to modify 1 particular file in a bucket, for instance)
21:48:56 <brianlamere> I guess I should just write down what I'm trying to express in as few words as possible; the ability to quickly isolate a problem, revoke access in seconds without crashing the kingdom, etc is the reason for the separate accounts ;)
21:50:55 <brianlamere> then if someone has a non-admin account on an ec2 instance, and is given a tool that allows them to send files to a bucket (which the tool does by a very restricted sudo or setuid access to reading a file with the key info) then voila - unvetted people get the work done, but don't have god-powers ;)
21:51:08 <brianlamere> trick is then just making that tool
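(One possible shape for that tool, assuming s3cmd as the client; the config path, bucket naming, and sudo arrangement are assumptions, not anything decided in the meeting:)

    #!/bin/sh
    # push-to-mirror FILE DEST -- the only command the unvetted account may run
    # via sudo; the AWS keys in the config file stay readable by root only
    set -e
    file="$1"; dest="$2"
    az=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
    region=${az%?}    # us-east-1a -> us-east-1
    s3cmd -c /etc/fedora-mirror/s3.cfg put "$file" "s3://fedora-${region}/${dest}"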
21:51:20 <gholms> That's the automated "service" aspect of this.  At some point a human with separate credentials is going to have to be able to manage these instances (e.g., creating them).  How does *that* work?
21:51:42 <gholms> Sounds like a great hackfest at fudcon.  You should go.  :)
21:51:48 <rbergeron> yes, you should :)
21:52:01 <rbergeron> though we need to have some of that worked out before F14 is out :)
21:52:10 <gholms> We do?
21:52:36 <rbergeron> Well, maybe we don't need to just to get images up... but it sounds like you guys think we need to?
21:52:54 <gholms> It's not a blocker, but it would cut costs for users.
21:53:00 <brianlamere> I would suggest that the account that creates the ec2 instances be a completely different account than the one(s) that are used to write to S3.  Once the instances are created they're just normal machines managed however you want, with the caveat that someone somewhere does need to sometimes create new instances, change the firewall (security group) rules, etc
21:53:39 <gholms> That's the account I have been asking about.  Who holds the keys to *that* one?
21:53:45 <brianlamere> aye - the end goal is to cut costs and improve user experience.  getting a local repo would be much faster
21:53:53 <brianlamere> I nominate...you!
21:53:59 <gholms> D:
21:54:08 <brianlamere> lol - jforbes?
21:55:03 <rbergeron> I have a password there. :)
21:55:04 <brianlamere> yes, in the end someone will have to be G-d.  Probably either someone at RH, or a key Fedora person.  They wouldn't need to do day-to-day stuff though.
21:55:05 <gholms> How about I try to brainstorm with mmcgrath and the rest of the infrastructure people.
21:55:19 <rbergeron> gholms: that sounds excellent.
21:55:22 <rbergeron> unless smooge has an idea. :)
21:55:31 * rbergeron lights up smooge's irc window
21:55:46 <gholms> smoooooge!
21:55:54 <brianlamere> but until IAM gets out the door, unless you want everyone to be G-d, those broken-out accounts are needed ;)
21:56:16 <gholms> Would it be worth trying out IAM in its current form and seeing how well that works?
21:56:28 <brianlamere> and until Ben@Amazon knows what Fedora-Cloud specifically needs/wants, he doesn't have much to hand his mgmt to ask for :)
21:57:04 <brianlamere> I can't speak on IAM much; I'm glancing at it, and it is certainly addressing that problem, but...I've never used it
21:58:24 <brianlamere> in fact, with IAM they only have 1 large complaint that I know of in the community ;)  which is that an instance can only have 1 public IP
21:58:40 <gholms> How about this:  #action gholms to brainstorm mirror management ideas with infrastructure and hopefully write a very rough proposal
21:58:58 <rbergeron> worksforme
21:59:05 <gholms> Good point - we would probably need DNS entries, too.  :)
21:59:38 <brianlamere> do you need any help writing the really rough specs of the platform that is needed?  ie, head/bucket per region?
22:00:16 <brianlamere> the tools that would need to be developed, how MM needs to be adjusted, etc
22:00:21 <rbergeron> wiki wiki wiki :)
22:00:38 <brianlamere> oh, yeah, guess that would work too.
22:00:43 <gholms> One bucket and instance per region with an associated account (or IAM credential); I think we have that down.
22:01:34 <gholms> brianlamere: How about you jot down what you have in mind now and we can edit and discuss it on the list or something.
22:02:16 <gholms> Then I will ping the infrastructure people to see what ideas we can come up with.
22:04:22 <smooge> sorry feeding kid
22:04:27 <smooge> what did you need
22:04:31 <brianlamere> "now" as in, here in the channel?
22:04:44 <rbergeron> i think as in what you currently have in mind :)
22:04:49 <smooge> was feeding the kid
22:04:57 <smooge> now am focused on window
22:05:10 <gholms> brianlamere: Yeah, throw what you have in mind up on the wiki or something so we have something to start tweaking.
22:05:29 <brianlamere> ok, I'll toss it on wiki.  that's better than a wall-o-text here ;)
22:05:37 <rbergeron> smooge: they were talking about talking to infrastructure about some stuff
22:05:50 <brianlamere> can I create new pages on the wiki?  or just modify current ones
22:06:02 <smooge> if you have an account you can create new pages
22:06:14 <gholms> https://fedoraproject.org/wiki/EC2_Mirrors
22:06:25 <smooge> unless they have created a "provenwiki" yet
22:06:26 <gholms> ^ Just edit that
22:07:49 <smooge> ok gholms you will want to talk with mdomsch as he is the mirror master. From there we can work out other details.
22:08:22 <gholms> I really wish we could make this work with mirrormanager...
22:08:36 <smooge> well its going to be ip based somewhere.
22:08:39 <brianlamere> aye - Matt and I have talked (briefly) about MM needing to be modified.  It will need to be changed to understand S3 as a backend
22:08:43 <smooge> unless the ips wander around
22:08:54 <gholms> The IPs *do* wander around.
22:08:54 <brianlamere> the IPs wander around.  a *lot*
22:09:05 <gholms> That's why it'll be really hard to make it work with MM.
22:09:19 <brianlamere> hey look, this IP was in Japan yesterday, now it's in NY!
22:09:19 <smooge> well then they should stop that
22:09:40 <smooge> wandering ips... whats next systems you can bring up and down at a moments notice?
22:09:51 <gholms> Man, what a great idea...
22:09:51 <brianlamere> craziness!
22:10:02 <smooge> you crazy kids and your ideas about freedom.
22:10:07 <gholms> #action brianlamere to jot down some ideas on the wiki
22:10:19 * rbergeron has to go to school for pickup time
22:10:27 <rbergeron> gholms, can you end? :)
22:10:33 <gholms> I suppose...  :P
22:10:46 <brianlamere> but before you do that sort of thing, you'd want to make some sort of automatic machine image of sorts...we could call it an AMI...something that you could say "make a machine that looks like this AMI" and boom, there it would be
22:11:07 <brianlamere> ;)
22:11:16 <gholms> #action gholms to brainstorm ideas with "MirrorMaster" mdomsch and the Infrastructure people
22:11:27 <gholms> How does that sound?
22:11:47 <rbergeron> sorry :(
22:11:58 <gholms> rbergeron: No worries.  ;)
22:12:19 <brianlamere> ok, now that EC2 is done, what other topics are there?  (heh - kidding)
22:12:31 <gholms> #topic Eucalyptus
22:12:37 <gholms> obino: You still here?
22:12:42 <obino> yep
22:12:49 <gholms> Anything of interest?
22:12:57 <obino> I don't have much on this front: I just got back from vacation
22:13:04 <obino> still catching up on email :(
22:13:08 <gholms> How's the office move going?
22:13:18 <obino> geez ... lotsa work
22:13:26 <obino> hopefully will do it on the 1st
22:13:30 <obino> or around there
22:13:44 <gholms> Hope that goes well.
22:13:51 <gholms> Next up is open floor.
22:14:07 <gholms> #topic Open floor
22:14:12 * gholms waits
22:14:40 <daMaestro> Is there going to be a Fedora Infrastructure hosted instance of all of the software we are working on packaging and integrating?
22:14:55 <gholms> Such as?
22:15:04 <daMaestro> Aka... are we going to utilize the work that we are doing here to host Fedora stuff?
22:15:08 <daMaestro> trac, hosted, people, etc
22:16:13 <gholms> When this is all up and running then Fedora (or anyone else, really) can use them just like any other Fedora VMs.
22:16:21 <gholms> They'll just be hosted by Amazon.
22:16:24 <daMaestro> K
22:16:45 <daMaestro> So we won't be building a Fedora-hosted infrastructure for a private cloud?
22:16:45 <brianlamere> well, I think the idea here is to get services and tools that make cloudness easier with Fedora; Eucalyptus over with obino, making sure python-boto is up to date, getting local repos in the major cloud services, working with openstack, etc
22:16:53 <daMaestro> I know mmcgrath was working on something a while ago...
22:16:55 <gholms> Yeah
22:17:03 <daMaestro> but never heard of it again (it went the way of ovirt)
22:17:33 <gholms> Ahh, you mean an independent instance of the whole FAS/trac/people stack?
22:17:58 <daMaestro> I mean, are we going to eat our own dogfood ... so to write.
22:18:10 <brianlamere> daMaestro:  are you saying as a comparison to the Ubuntu Cloud Services stuff?  the integration in to Landscape, etc?  I think the idea is to use more open tools (openstack, etc) and get tools out there happier with Fedora (Euca), versus making something proprietary
22:18:17 <daMaestro> Is Fedora Infrastructure going to be utilizing the work this SIG is doing?
22:18:36 <brianlamere> oh, nm
22:18:56 <gholms> How would they use it?
22:19:09 <daMaestro> Cool, so it sounds more like integration of the tools and maintenance than actually building things inside the Fedora Infrastructure.
22:19:24 <daMaestro> Not providing "cloudy services" to Fedora contributors.
22:19:46 <gholms> Yeah, we're making it possible to run Fedora in cloud services, mostly.
22:19:50 <daMaestro> K
22:20:05 <gholms> The servers we were talking about are essentially yum mirrors in convenient places.
22:20:28 <gholms> I would like to see if we can link these mirrors we're setting up to the main infrastructure for auth purposes and whatnot.  That's the big thing I want to speak with infrastructure about.
22:20:38 <daMaestro> K
22:20:42 <brianlamere> there are lots of tools out there already, enough so that splintering further would likely just create more confusion than any gains from "competition"
22:21:45 <gholms> brianlamere: What sort of thing are you referring to?
22:21:50 <brianlamere> the above is just for yum, yeah - though it uses other tools that are themselves useful for people ;)  cloud-init, etc
22:22:28 * gholms realizes we would probably need RHEL instances; wonders if we could get RH support for that as well
22:22:33 <daMaestro> Cool. I'd like to see the work that is done here to be able to be used inside the Fedora Infrastructure to provide contributors services such as "packagers get 1G of storage, 256M of ram and a shell, N number of compute units to be able to run mock builds and issue koji commands, etc"
22:23:04 <daMaestro> However, it sounds as if that discussion needs to be held outside of this SIGs meeting.
22:23:28 <gholms> That's within the realm of possibility, but largely independent of the get-fedora-on-ec2 effort.
22:23:30 <brianlamere> you know, if IAM works well, then that could get easy - but that seems outside of what people here are working on
22:23:49 <obino> we have the ECC and I want to put a new fedora image on it: would it be useful for mock operation (for example)?
22:23:52 <brianlamere> things here would serve as the tools for helping those things along.  that's more a policy thing, though - not a technical thing
22:25:17 <daMaestro> Thanks, that is all I have right now.
22:25:19 <gholms> Mock is very I/O-heavy.
22:25:45 <gholms> It would be easier to set up shared instances that auth against FAS than it would be to allow contributors to invoke their own on Fedora's tab.
22:26:26 <gholms> Since instances run pennies-per-hour I think it would be realistic to just tell people, "Go run a Fedora instance of your own for a couple hours."
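(With the classic ec2-api-tools that really is a couple of commands; the AMI id and keypair name are placeholders:)

    # launch a small Fedora instance for a few hours of builds...
    ec2-run-instances ami-xxxxxxxx -t m1.small -k my-keypair
    # ...and tear it down when done
    ec2-terminate-instances i-xxxxxxxx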
22:26:44 <brianlamere> is there a err...pam-fas module?
22:28:15 <gholms> Doesn't look like it.  I wonder how people log into shared servers...
22:28:48 <gholms> There's Eucalyptus's public cloud if Eucalyptus is willing to deal with the load.
22:29:34 <obino> well we can try: we don't monitor what people do with it yet (and I don't think we will if we don't receive complaints)
22:30:01 <gholms> If a Fedora image makes it onto Eucalyptus then we can point people to that, too.
22:30:14 <obino> it's high on my todo list
22:30:16 <gholms> obino: Do you guys support self-hosted kernels yet?
22:30:26 <gholms> That's critical for Fedora images.
22:30:28 <obino> only for admin user :)
22:30:35 <brianlamere> there are several attempts at pam-openid, I've just never seen one that actually works.  I would have been all impressed and stuff if there had been a pam-FAS ;)
22:31:04 <brianlamere> gholms:  remember, pv-grub means you hand off to whatever kernel the system wants
22:31:18 <gholms> That's what I'm referring to.
22:31:25 <gholms> e.g., no-AKI/ARI images
22:31:29 <obino> gholms: if you need a kernel I can probably upload it anyway
22:31:33 <obino> we just need to test it
22:32:01 <gholms> obino: Multiple kernel updates per month, though?
22:32:22 <obino> I guess I'll need more beers then
22:32:26 <brianlamere> obino:   does the Euca cloud do pv-grub, though?  I think that's the question
22:32:28 <gholms> ;)
22:32:28 <obino> ok I need to look into it :)
22:32:42 <gholms> pv-grub support would make this problem go away.  ;)
22:32:44 <brianlamere> (or other methods of chainloading)
22:33:09 <gholms> Or anything else, for that matter.  VMware obviously supports self-hosted kernels.
22:33:11 <obino> brianlamere: I don't think we do as yet. What is needed for doing it?
22:33:17 * obino not familiar with pv-grub
22:33:39 <gholms> obino: It lets an image boot off the kernel stored on its EMI.
22:33:59 <brianlamere> obino:  all created instances would just need to use the pv-grub aki for the availability zone they're in; if that's done, then we're set
22:34:07 <gholms> That way you don't have to register and choose EKIs and ERIs.
22:34:20 <obino> do you need EBS booting for it?
22:34:25 <brianlamere> nope
22:34:25 <gholms> Nope
22:34:47 <brianlamere> it's just the aki the instance is registered with.  It should always be exactly one particular AKI depending on where it is
22:34:52 <obino> so it's in the initrd to chainboot to the other kernel?
22:35:01 <brianlamere> err..well, one of two (depending on the hd0 versus hd0,0 bit)
22:35:17 <brianlamere> it's in grub.conf to chain boot to the other kernel
22:35:35 <obino> ok, I'll need to look into it and perhaps a bit of help
22:35:37 <brianlamere> pv-grub looks for grub.conf, reads what is supposed to be chained, and runs that
22:35:56 <obino> I know the engineers have been looking at it but I don't know the timeline
22:35:57 <brianlamere> point is, a year from now it will still be that same AKI
22:36:34 <brianlamere> the aki never changes, but you get the benefit of having a fresh, updated kernel  regardless :)
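(For reference, pv-grub simply reads the instance's own /boot/grub/grub.conf and chainloads whatever kernel it names, so a kernel update is just a yum update plus reboot. A minimal illustrative grub.conf - the kernel version and root device are placeholders, and the (hd0) vs (hd0,0) root line corresponds to the two AKI flavours brianlamere mentions:)

    default=0
    timeout=0
    title Fedora
        root (hd0)
        kernel /boot/vmlinuz-<version> ro root=/dev/xvda1 console=hvc0
        initrd /boot/initramfs-<version>.img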
22:36:36 <obino> brianlamere: perhaps I can hit you offline for this? I wonder if we can do something for the ECC
22:36:53 <gholms> Sounds like a good discussion for the list or #fedora-cloud.
22:37:07 <obino> or on #fedora-cloud :)
22:37:18 <brianlamere> yeah, I'm sure it would be easy to catch up on.  I can shoot an email to the fedora@euca list and the fedora list both
22:37:21 * gholms rereads what he just wrote, grins
22:37:34 <gholms> Oh, about that.  I can't post to that list.
22:38:06 <brianlamere> yeah, you need to get with obino and he'll make your account able to post there - mine can now
22:38:18 <gholms> I see.
22:38:27 <obino> brianlamere: sure that would be great. I still want to follow up for the new fedora images
22:38:52 <obino> I think everybody can use that email address
22:38:55 <obino> at least I hope
22:39:07 <gholms> Everything I sent to it bounced.
22:39:13 <brianlamere> obino:  you or someone else there had to change something to allow me to send email to it
22:39:28 <gholms> As long as instances can be constantly kept up to date I'm fairly certain fesco won't complain.
22:39:33 <obino> hold on let me look into it
22:39:56 <gholms> Let's continue this in #fedora-cloud.  Anything else for the meeting?
22:40:17 <brianlamere> I gotta run give a pill to a puppy 1 mile away, but I'll be back in a bit
22:40:40 <gholms> Thanks for coming, people!
22:40:42 <gholms> #endmeeting