17:01:57 #startmeeting cloud WG weekly meeting
17:01:57 Meeting started Wed Jan 15 17:01:57 2014 UTC. The chair is samkottler. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:57 Useful Commands: #action #agreed #halp #info #idea #link #topic.
17:02:08 howdy
17:02:25 #chair rbergeron mattdm number80 geppetto jzb
17:02:25 Current chairs: geppetto jzb mattdm number80 rbergeron samkottler
17:02:45 welcome to 2014's first official meeting :-)
17:02:58 * samkottler wonders if special guest davidstrauss is gonna join us
17:03:30 #topic rollcall
17:03:45 jzb is here
17:03:48 \o/
17:03:53 .fasinfo hguemar
17:03:55 number80: User: hguemar, Name: Haïkel Guémar, email: karlthered@gmail.com, Creation: 2006-07-18, IRC Nick: number80, Timezone: Europe/Paris, Locale: en, GPG key ID: 41CF9CEA, Status: active
17:03:58 number80: Approved Groups: +packager cla_fedora cla_done fedorabugs ambassadors cla_fpca gitbeefymiracle
17:03:59 .hellowmynameis skottler
17:04:04 .hellomynameis skottler
17:04:06 samkottler: skottler 'Sam Kottler'
17:04:11 * samkottler struggles
17:04:22 .hellomynameis jzb
17:04:22 .fasinfo mattdm
17:04:27 jzb: jzb 'Joe Brockmeier'
17:04:30 mattdm: User: mattdm, Name: Matthew Miller, email: mattdm@mattdm.org, Creation: 2005-04-13, IRC Nick: mattdm, Timezone: US/Eastern, Locale: en, GPG key ID: 72CF3A1B, Status: active
17:04:33 mattdm: Approved Groups: @gitdockerfiles gitspin-kickstarts +gitspins @gitcloud-image-service fi-apprentice gitcloud-kickstarts cla_done fedorabugs packager ambassadors cla_fedora cla_fpca
17:04:41 hey! finally found a way to get it to limit to matching my username...
17:05:05 i didn't know that command either :)
17:05:29 .hellomynameis slim_shady
17:05:31 rbergeron: Sorry, but you don't exist
17:05:32 .hellomynameis geppetto
17:05:35 darnit
17:05:37 geppetto: Sorry, but you don't exist
17:05:42 cool :)
17:05:44 lol
17:05:49 geppetto: Has to be your FAS ID
17:06:04 this meeting has entered the land of meta
17:06:08 .hellomynameis sgallagh
17:06:10 sgallagh: sgallagh 'Stephen Gallagher'
17:06:19 .hellomynameis james
17:06:21 geppetto: james 'James Antill'
17:07:14 #topic PRD update and pending issues
17:07:23 it's due on the 20th
17:07:38 it's looking a lot better than it was a week ago!
17:07:39 and there are still some things we need to talk with the server WG about, sgallagh is here so maybe we should do that now?
17:07:55 yep, it's definitely shaping up
17:08:02 I've been trying to spend an hour or so per day on it
17:08:14 awesome.
17:08:42 I'm for starting with talking with sgallagh re server/cloud
17:08:48 * number80 gives kudos to samkottler
17:08:51 we need to figure out the features we're gonna support
17:09:12 the use cases are mostly done, which was a big hurdle
17:10:01 features to support, or features to enable the .. use cases
17:10:02 Do we want to try to hash out some specific things while we have everyone here
17:10:11 or should we look at some of the bigger picture things?
17:10:26 rbergeron: yes
17:10:35 i'm not opposed, though it might be useful to know if any of those things are influenced by the server-ish stuff sgallagh is here for?
17:10:36 both
17:10:49 yeah, let's talk to sgallagh first :-)
17:10:55 sgallagh go!
17:10:57 * sgallagh feels honored
17:11:34 go sgallagh // right syntax
17:11:39 The Server WG had its hopefully-final PRD meeting yesterday. We invited jzb and number80 to represent Cloud interests
17:11:59 * sgallagh really wants to make a Zork joke right now
17:12:46 sgallagh: what's stopping you?
17:13:19 I'm stuck in a maze of twisty cubicles, all alike...
17:13:51 Ok, so there was a lot of discussion yesterday about where the dividing line is for things like OpenStack and OpenShift
17:15:15 I think that right now at least, cloud is focused on guest images and excludes openstack execution nodes
17:15:25 that was our sense as well
17:15:39 that might change in the future, but leaves "runs in virt" as a common thread
17:15:58 we proposed that OpenStack nodes should be a Server Role, possibly contributed by this group
17:16:17 but "owned" by server
17:16:39 I think that's always been the assumption
17:16:48 the hypervisor'ish stuff would be owned by server
17:17:03 any dissent? speak now. :)
17:17:32 That's been my assumption, but there was a lot of discussion about it yesterday
17:18:09 It's also ambiguous where "managing an OpenShift deployment" belongs.
17:18:10 sgallagh: more or less, though I sort of envisioned OpenStack nodes as closer to the server side than the cloud side.
17:18:39 jzb: I agree, but it's fair to say that the implementation will need to involve Cloud heavily
17:18:53 sgallagh: indeed
17:19:15 I think there's room to have some overlap. You can do e.g. a web server or web application in _either_ a cloud paradigm (<-bingo!) or the traditional server way.
17:19:22 Why don't we turn the conversation around a bit. I had jzb and number80 in our meeting to answer our questions. Why don't you raise your own for us.
17:19:32 yay!
17:19:38 * jzb tries to think of a stumper
17:20:05 Where do you guys see ambiguity in our missions?
17:20:08 ask something about configuration management or orchestration tools :)
17:20:19 that is a good Q
17:20:28 rbergeron: Config management is clearly in our laps
17:20:51 "Orchestration tools" is a harder question
17:21:09 I think the biggest source of ambiguity is that people equate "openstack" (and all the others) with cloud, and i can see how that might be cause for misdirected questions.
17:21:09 sgallagh: I think some of the ambiguity is that we're both doing "here's a base thing which can be adapted for different roles".
17:21:12 stuff like mesos blurs the lines
17:21:16 We've got a use case for orchestration of containers, but that's not a complete answer
17:21:17 rbergeron: that too
17:21:38 for server, it's the base platform plus application stacks
17:21:46 for cloud it's... well, possibly the same thing.
17:21:47 "Server Roles"
17:22:26 Logging? Things like checkpoint/restore? Storage (gluster or other cloud-ish storage)? Are we going to try and ... match up where we can on things? Or... "you can go your own wayyyyyyyyyyyy"
17:23:13 logging is a much more complex question than a single word.
17:23:25 I think we can safely draw the line at 'do you need this thing on a physical machine'
17:23:32 logging is a yes and storage is a yes
17:23:35 Checkpoint/restore is probably not all that useful to Cloud images, since you're aiming for mostly-disposable anyway, yes?
17:23:37 at least in my view
17:23:47 sgallagh: right
17:23:50 yeah
17:24:01 samkottler: if the users are doing it right
17:24:01 I think we want to keep the line more at cattle/pets than hw/virt
17:24:01 sgallagh: processes running and such? i don't know
17:24:09 although people run stuff like...database servers in the cloud, too
17:24:10 moving vm's and such
17:24:10 sorry that was for sgallagh
17:24:26 sgallagh: from working on CloudStack, I found a lot of questions around snapshotting/restore.
17:24:50 people do care about their instances being easy to restore
17:24:54 (sorry, got disturbed by coworkers)
17:24:59 sgallagh: it's also used a lot to create a golden image. Spin one up, modify, then snapshot -> template
17:25:04 even just from a convenience perspective
17:25:13 jzb: :/ but yeah, common
17:25:34 but we expect the cloud env to handle that, right?
17:25:40 in other words, ideally, no - we'd generally not care. In the real world, we do.
17:25:44 mattdm: yes.
17:25:48 it's a hypervisor thing anyhow
17:26:09 yup
17:26:10 samkottler: it's a hypervisor thing except that the IaaS exposes the hypervisor features.
17:26:17 I'm not sure the PRDs need to be exhaustive either
17:26:26 jzb: ah, that's a dandy point there
17:26:33 It's probably worth noting that individual implementation goals can be solved through collaboration
17:27:04 jzb: like in the case where a guest requests a snapshot of itself?
17:27:14 I'm not sure it's sensible or feasible to enumerate every possible aspect of the implementation.
17:27:30 * rbergeron needs to snapshot/restore her soda. where did it disappear to? ugh
17:27:39 samkottler: I was thinking of when a user uses the API or UI to create one.
17:27:53 sgallagh: just trying to make sure we're not missing anything significant in overlap, so just suggesting All The Things
17:27:54 I view "ownership" of these areas as more of shepherding than dictating.
17:27:57 let's move on with the general understanding of where the line stands
17:27:59 samkottler: again, I ran into a lot of users who basically used the IaaS as Advanced Virtualization
17:27:59 sgallagh: +1
17:28:03 sgallagh: +1
17:28:06 sgallagh: +1
17:28:45 If we can do it without confusion, I would like to steer users who are interested in IaaS-as-server-in-the-sky towards using Fedora Server rather than the cloud image
17:28:52 I'm sure as hell not going to make a decision on a major change to hypervisor tech without consulting this team, for example.
17:28:52 but that might be hard to message.
17:28:57 do we want to do a vote on the line or just move forward with everyone in agreement?
17:29:21 mattdm: I always looked at it as, they can start using the tool the wrong way and learn to do it right
17:29:21 Just as one more aside:
17:29:36 One thing that came up yesterday that I'd like to have buy-in from both sides on is this:
17:29:38 mattdm: so if they adopt OpenStack/CloudStack/Euca, it makes it easier to learn the right habits eventually.
17:30:09 We need to share a "Universe" package repository, so that if someone installs the base cloud image, they will be able to "yum install fedora-server-release" and have any and all capabilities that Server offers (such as Roles)
17:30:46 The subtext being: we are not producing disparate Fedora repositories (excepting the install media)
17:31:02 Okay, that's an interesting one. Maybe we should make that explicit in the PRD?
17:31:03 separate repositories would only make things worse (think RPM dependency hell)
17:31:18 I mean, yes, I think there needs to be a universal repo
17:31:33 the question is: is going from cloud base -> server an expected normal path?
17:31:35 sgallagh: So is the intention that people can/will be able to install multiple *-release packages?
17:31:35 sgallagh: as yesterday, agreed
17:31:52 sgallagh: I think there may have been some accidental FUD that made the idea of separate repos even surface.
17:31:53 what about the other way around? (I think it might be okay to define this as a one-way path)
17:31:55 geppetto: I don't want to make a general statement on that.
17:32:14 I'm fine with "Cloud can become Server" and not the reverse (or Server->Workstation, etc.)
17:32:14 you can adopt one of your cattle as a pet. but you can't really send your kitty out to live in the herd.
17:32:40 mattdm: My parents told me they sent my dog to the farm. Are you telling me they lied?
17:33:24 jzb: Right, the separate-repos thing is related, but not the whole statement I was just making.
17:33:36 sgallagh: Why not server => cloud?
17:33:56 mattdm: my cats agree
17:34:11 server to cloud makes perfect sense IMO
17:34:12 geppetto: I'm not ruling it out. But I don't know if I see obvious value there, unless I misunderstand your target
17:34:18 i may be confused here - i sort of thought people were like
17:34:20 moving from the server
17:34:21 to the cloud
17:34:34 rbergeron: well the intermediate step is just taking a server and putting it on kvm/xen
17:34:43 samkottler: right, babysteps
17:34:58 * samkottler thinks a server to cloud path is necessary
17:35:03 My initial statement was merely this: You want to virtualize/containerize/stick in public IaaS some of your critical infra.
17:35:09 which basically involves virtio drivers, cloud-init, whatever else
17:35:25 sgallagh: you mean having a p2v-like tool?
17:35:29 You can spin up the Amazon Fedora Cloud instance and then easily make that become a Server instance with the frameworks we provide
17:35:48 number80: Really not talking at that level of detail at the moment.
17:36:10 ok
17:36:14 this is traditional server -> server-in-the-sky. that's fine, but server-in-the-sky isn't necessarily Fedora Cloud
17:36:17 is my thinking.
17:36:18 Merely that a machine (virtual or not) installed from the Cloud Image can *become* a "full" Server install simply through yum/dnf/packagekit
17:36:45 in order to really migrate to take advantage of cloud computing, you rearchitect.
17:37:01 my POV is that fedora-cloud is basically a trimmed-down image on which you can install whatever "role" you wish to
17:37:12 mattdm: Well, no matter what, you'll still probably want a fairly traditional Domain Controller and DNS environment
17:37:18 But that's a different discussion
17:37:47 number80: agreed, but there may also be roles that make sense only in the cloud.
17:37:49 number80: and Fedora Cloud provides some cloud-computing-focused roles, and a way to go to fedora server for roles that aren't necessarily covered but still always useful
17:37:59 jzb: +1
17:38:29 so assuming there's a universe repo, what makes directly separating the roles necessary?
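[Editor's note: sgallagh's "Universe" point above is about repository configuration rather than new tooling — both products would ship repo definitions pointing at the same package collection, so "yum install fedora-server-release" works from either starting point. A minimal illustrative sketch of what such a shared definition could look like; the repo id and name below are assumptions for illustration, not the actual Fedora configuration.]

```ini
# Hypothetical shared "universe" repo definition -- the same file shipped on
# both Cloud and Server images. The [fedora-universe] id and name are made up;
# the metalink follows the usual mirrors.fedoraproject.org pattern.
[fedora-universe]
name=Fedora $releasever - $basearch - shared package universe
metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
enabled=1
gpgcheck=1
```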
17:38:53 I guess my disconnect is why I wouldn't, say...install freeipa on a cloud instance just like I would on a regular box in a DC
17:39:25 samkottler: You can. But we're (Server) going to be doing some additional packaging work to make role deployment simpler
17:39:25 okay, so: do we want to tie this more closely together and share the roles?
17:39:38 But it's going to require additional infrastructure that you may or may not want.
17:39:52 sgallagh: ah okay that makes more sense now
17:40:15 so it'd basically expose comps to you or something?
17:40:22 whatever, I get the point, no need to dive so deep
17:40:26 what are the pieces of a "role" exactly?
17:40:32 wait, i'm slower than samkottler
17:40:33 samkottler: Things like interaction with the Cockpit Project so that you could deploy the role with minimal interaction (just answer three questions and go)
17:40:36 (obviously)
17:40:59 is a role == a set of packages known to work in a specific way?
17:41:05 together to do something?
17:41:11 rbergeron: Essentially yes.
17:41:22 Though our definition of "set of packages" is currently intentionally vague
17:41:42 Because people get hung up on packaging==rpm
17:41:56 how does it overlap with something like ... heat templates
17:42:07 FYI: https://fedoraproject.org/w/index.php?title=Server/Product_Requirements_Document_Draft#Featured_Server_Roles
17:42:12 sgallagh: role == software required for a specific task?
17:42:22 jzb: That's a better description, yes
17:42:36 or .. docker containers for various pieces of things
17:42:43 rbergeron: it wouldn't orchestrate across-server stuff
17:42:50 just on a single node
17:42:57 rbergeron: docker containers are one possible packaging implementation (and are being considered)
17:43:58 samkottler: yeah, but i am just wondering how this maps to the "known templating things" out there - which are basically all defining roles of some sort, just implementing them in ways where they can (or can't) be auto-magically scaled
17:44:46 the perennial wordpress example - do we want a wordpress role (if that's what we're sort of thinking) to look the same as a heat template for wordpress, etc
17:44:49 rbergeron: wouldn't that be closer to config management's realm?
17:44:51 or are we not really concerned with that at this point
17:45:10 samkottler: "best tool for the job" - people are gonna do it however they want :)
17:45:32 I guess separating cfg mgmt from one-time ops isn't really something that you do in, say...docker land
17:45:40 rbergeron: Wordpress is probably not the sort of Role we'd (Server) be most interested in.
17:46:09 From the Use Cases section of our PRD: Examples may include: FreeIPA Domain Controller, BIND DNS, DHCP, Database server, iSCSI target, File/Storage server, OpenStack Hypervisor Node.
17:46:14 sgallagh: i know, it's just a sample example.
17:46:36 I'm pretty much entirely uninterested in treating specific applications as Server Roles
17:46:51 (others on the WG may have a different opinion on that matter)
17:46:54 i'm not going to be cruel and bring up triple-o :)
17:47:18 * sgallagh looks forward to quintuple-o. It's the next logical step, right?
17:47:20 rbergeron: SCORN
17:47:30 okay, i think i get the picture
17:47:32 sorry to be all questiony
17:47:44 Questions now are more useful than questions in six months.
17:47:45 * mattdm thinks questiony is very helpful
17:48:37 here's an idea: what if we make server-in-the-sky one of the fedora cloud spins?
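[Editor's note: for context on the comparison rbergeron is drawing, a Heat template also describes a "role" declaratively, just aimed at orchestration and scaling. A minimal, hypothetical HOT sketch for the perennial wordpress example — the resource name, image name, flavor, and package list are illustrative assumptions, not an actual Fedora or Heat artifact.]

```yaml
# Hypothetical single-node wordpress "role" expressed as a Heat (HOT) template.
# Image name, flavor, and package list below are illustrative assumptions.
heat_template_version: 2013-05-23
description: Minimal wordpress-on-one-instance sketch
parameters:
  key_name:
    type: string
    description: Nova keypair for SSH access
resources:
  wordpress_server:
    type: OS::Nova::Server
    properties:
      image: Fedora-Cloud-Base   # assumed image name
      flavor: m1.small
      key_name: { get_param: key_name }
      user_data: |
        #!/bin/bash
        # the same software a packaged "wordpress role" would carry,
        # here installed at boot via cloud-init user data
        yum -y install wordpress httpd mariadb-server
        systemctl enable httpd mariadb
        systemctl start mariadb httpd
```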
17:48:59 right now, we (tentatively) have a generic base, docker, and big data
17:49:00 * rbergeron was going to bring up load balancing but i think we can skip-it
17:49:08 hmm
17:49:23 but what do you run on the server in the sky?
17:49:28 we could also have one that's basically got the server role-enablement stuff in place.
17:49:30 is it a db server or an IPA server or what
17:49:37 mattdm: We were probably going to request help implementing this. It's one of our preferred delivery mechanisms.
17:49:52 samkottler: "Yes"
17:50:12 how will server and server-in-the-sky roles be different?
17:50:13 samkottler: it's still also a base, but a base tailored for running the Fedora Server roles.
17:50:13 I guess the other issue is that one of the things we might have is a custom kernel build, so it might just be bloated
17:50:56 * samkottler will have to think more about what it'd actually look like
17:51:11 samkottler: didn't the kernel team say that maintaining multiple kernels would be a PITA for them?
17:51:32 in the meantime we can force people to ephemeralize data by having jwb remove fsync from our kernel build
17:52:04 done
17:52:19 number80: Yes, they did. I think they were okay with breaking up the module subpackages, but not actual custom kernels. jwb?
17:52:36 yeah. i'm already working on splitting up the packaging
17:52:42 i have kernel-core and kernel-drivers
17:52:57 kernel-core is what you guys keep saying should be the small thing, without actually telling me what you need in it.
17:53:13 so right now, it literally just has the contents the kernel RPM puts in /boot.
17:53:21 which... won't really work for anyone
17:53:28 so. discuss. then inform.
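[Editor's note: jwb's kernel-core/kernel-drivers split amounts to subpackaging within the kernel spec — one source package, multiple binary RPMs. A heavily simplified, hypothetical sketch of the shape; the real kernel.spec is far more involved, and the summaries and dependency line here are illustrative only.]

```spec
# Hypothetical sketch of the kernel-core / kernel-drivers subpackage split.
# The real kernel.spec is much more complex; this only shows the structure.

%package core
Summary: Minimal kernel for virtualized guests
%description core
The kernel image and the small module set a VM guest needs (e.g. virtio) --
roughly "what the kernel RPM puts in /boot" plus essential modules.

%package drivers
Summary: Hardware driver modules split out of the core package
Requires: %{name}-core%{?_isa} = %{version}-%{release}
%description drivers
The large pile of hardware drivers that bare-metal installs need but
trimmed-down cloud images can drop.
```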
17:53:53 we should probably include a general list of kernel requirements in the PRD
17:54:00 \o/
17:54:26 I think most of the requirements will be 'rip out this giant pile of drivers'
17:54:32 (small caveat: it's not actually working perfectly yet, but i am working on it at least)
17:54:37 so core won't be affected
17:54:43 * number80 gotta go or he'll be deafened by the office alarm
17:54:46 should those kernel requirements include networking/ovs-type stuff?
17:54:49 jwb: I can help make that more specific. Is there a bug or other place to help track what should actually be in there?
17:54:56 * rbergeron doesn't know that she's seen networky stuff anywhere really
17:55:16 rbergeron: I think ovs is in the server realm?
17:56:00 mattdm, no bug. we could create one, or just use the kernel list. i have a new copr project i'm going to use to get people to test builds.
17:56:12 rbergeron, ovs?
17:56:18 oh, open-vswitch?
17:56:18 jwb: openvswitch
17:56:38 yeah, we build that. i figured that and the virtio stuff would be good things to have in -core, but beyond that...
17:57:08 and if this grows beyond "core", we can call it "base" instead
17:57:25 jwb: hah
17:57:32 samkottler: i don't know what's actually needed to be enabled where, but ... i'm pretty sure you would use it to manage guests / connect them to each other in various ways?
17:58:04 mattdm, i wasn't intending a dig. but it would be hard to say e.g. ovs is really "core" functionality
17:59:07 and if you get to "do we want a firewall?" then you just grabbed a ton of netfilter drivers, etc
17:59:19 which is why i would like you to tell me what you want :)
17:59:35 because if it winds up being most of what we already have... then the exercise is rather pointless
17:59:39 samkottler: http://www.itworld.com/virtualization/335244/rhev-upgrade-saga-rhel-kvm-and-open-vswitch
17:59:40 mattdm: I can work with you on kernel requirements
18:00:00 sounds good.
18:00:37 anyway: i don't know my head from my ass on this, but if it hasn't been thought about at all it might be worth thinking about. or consulting with someone
18:00:57 who is more in-depth in ovs/neutron/OpenDaylight land
18:01:04 (mestery, cdub, etc)
18:01:53 * rbergeron shuts up
18:02:05 e.g. also not me. :)
18:02:17 you cloud people have names that are amazing at not describing wtf they do
18:02:20 oh look. fesco time! i will keep this window open
18:02:24 * samkottler doesn't know enough about SDN to actually make decisions
18:02:27 jwb: openstack is the WORST
18:02:36 it's got some sort of horrible name virus
18:02:52 lol
18:03:01 jwb: openvswitch is the most descriptive
18:03:04 and it's still crap
18:03:09 samkottler, was just going to say that
18:03:21 need magic cloud decoder ring
18:03:40 anyway, i'm awaiting further direction/discussion
18:03:49 and i'd be happy to start spitting out copr builds
18:03:51 jwb: it's simple really...hook your heat up to your nova to route with your neutron to connect to cinder
18:04:02 welp, computers are bad
18:04:11 at least they all have something to do with temperature!
18:04:30 * rbergeron notes that heat makes clouds rise (deploy, whatever), but i may have been involved with that naming
18:04:49 ANYWAY
18:05:02 rbergeron: heat makes it rain
18:05:04 * rbergeron notes the time, in the interest of decision making or otherwise
18:05:19 okay yeah let's keep it moving
18:05:36 any more PRD stuff that's pressing?
18:05:47 #action mattdm and samkottler to work on getting kernel requirements together for the PRD
18:07:17 bueller?
18:08:03 #topic open floor
18:08:24 Nothing here.
18:08:28 so with the centos news and such, there is going to be a cloud SIG for centos
18:08:37 do we want to have some representation with them?
18:08:45 samkottler: yes, I think we do.
18:08:57 jzb: twas kind of a loaded question
18:09:15 I've seen the idea of a shared cloud SIG between fedora and centos
18:09:18 what do people think of that?
18:10:01 samkottler: would the scope be similar?
18:10:05 that sounds epic.
18:10:07 oh, sorry
18:10:09 LOL
18:10:25 rbergeron++
18:10:31 jzb: yeah I believe so
18:10:32 rbergeron: I see what you did there.
18:10:39 centos might have some good guidance for us
18:10:45 they already build lots of divergent kernels and stuff
18:10:55 true true
18:11:23 samkottler: what do we need to do to make that happen?
18:11:37 samkottler, actually, not lots
18:11:43 i've been poking at them
18:11:46 jzb: I guess the big thing is just for people to be looking for convos on centos-devel
18:11:51 and make sure you're subscribed
18:12:15 OK
18:12:28 at least in my initial conversations, most of their kernels are still based on RHEL, with the xen one being the only one i've found that actually diverged at a base kernel level (3.10 iirc)
18:12:36 and their reasons were decent
18:12:41 * jwb shuts up
18:13:09 jwb: they have someone from citrix who manages the 3.10 builds?
18:13:17 the core team doesn't do it
18:13:34 samkottler, yep
18:13:57 gotcha
18:14:02 well that's all I've got for the open floor
18:14:14 more PRD work to be done, but we're getting closer
18:14:27 jzb: rbergeron: mattdm: number80: geppetto: anything else?
18:14:38 nope
18:14:57 samkottler: nope
18:15:09 #endmeeting