Infrastructure meeting (2011-05-19) - IRC log
19:00:01 <nirik> #startmeeting Infrastructure (2011-05-19)
19:00:01 <zodbot> Meeting started Thu May 19 19:00:01 2011 UTC.  The chair is nirik. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:01 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
19:00:01 <nirik> #meetingname infrastructure
19:00:01 <zodbot> The meeting name has been set to 'infrastructure'
19:00:01 <nirik> #topic Robot Roll Call
19:00:01 <nirik> #chair goozbach smooge skvidal codeblock ricky nirik abadger1999
19:00:01 <zodbot> Current chairs: abadger1999 codeblock goozbach nirik ricky skvidal smooge
19:00:05 <nirik> ooh...
19:00:09 <nirik> rbergeron: did we do it again?
19:00:20 <rbergeron> Oh. We did.
19:00:22 <nirik> readiness and infra at the same time? ;(
19:00:24 <rbergeron> I did.
19:00:32 <rbergeron> Fail.
19:00:52 <nirik> you want to do readiness in meeting-1? or should we move there?
19:00:53 <goozbach> oh noes!
19:01:01 * abadger1999 here
19:01:03 <goozbach> who had first dibs? :)
19:01:07 <rbergeron> I can move.
19:01:10 <rbergeron> Darnit.
19:01:13 <abadger1999> and figures out where we're going :-)
19:01:16 <goozbach> rbergeron: we're sorry
19:01:17 * rbergeron makes a note on the time confusion.
19:01:18 <dgilmore> rbergeron: :P
19:01:22 * rbergeron will head to #fedora-meeting-1
19:01:28 <goozbach> really really really sorry
19:01:48 <nirik> sorry about that. I should have noticed too. ;(
19:01:51 <StylusEater_work> shrug, we still here?
19:01:53 <noriko> rbergeron: are we moving to another channel?
19:02:08 <rbergeron> noriko: yeah, #fedora-meeting-1
19:02:12 <rbergeron> sorry
19:02:17 <nirik> we are still here. They are taking #fedora-meeting-1
19:02:22 <nirik> rbergeron: sorry.
19:02:23 <StylusEater_work> nirik: kk
19:02:28 <StylusEater_work> rbergeron: sorry
19:02:49 <goozbach> present!
19:02:55 * StylusEater_work is here
19:03:04 * janfrode is here too :-)
19:03:23 <goozbach> janfrode!!!!
19:04:01 <nirik> cool.
19:04:12 <Southern_Gentlem> Fedora Readiness meeting moved to #fedora-meeting-1
19:04:15 <nirik> so, I guess lets get started... lets start with new folks intros...
19:04:40 <zodbot> Announcement from my owner (jsmith): Fedora Readiness Meeting now in #fedora-meeting-1
19:05:05 <nirik> #topic New folks
19:05:13 <nirik> Meeting Agenda Item: Introduction Jason Shive
19:05:33 <nirik> Meeting Agenda Item: Introduction Jan-Frode Myklebust
19:05:51 <nirik> Warren Thomas
19:05:55 <nirik> any of you 3 here?
19:06:00 <Klainn> <--- Jason
19:06:01 <janfrode> I'm a sysadmin at an ISP, using much the same software/tools as you fedora infrastructure guys.. Running RHEL5/6, puppet, kickstarts, bind, KVM, RHEV, etc..
19:06:06 <nirik> or any other new folks who would like to say hi? :)
19:06:20 <StylusEater_work> janfrode: welcome!
19:06:27 <janfrode> Wanting to broaden my horizons by joining the infrastructure group here :-)
19:06:30 <goozbach> welcome to jason and jan-frode
19:06:32 <fabiocruz> hello
19:06:34 <nirik> Welcome to you both.
19:06:35 <StylusEater_work> Klainn: welcome!
19:06:47 <wileyman> wileyman is here
19:06:47 <Klainn> Yo yo
19:07:01 <mosburn> I'm new here, used to almost all sysadmin tasks, looking for interesting tickets to work on
19:07:03 <StylusEater_work> wileyman: <--- Jason Shive no?
19:07:10 <Klainn> Negative, I am
19:07:20 <goozbach> mosburn: welcome as well
19:07:25 <Klainn> I may be wiley, but that is not my nickname.
19:07:25 <goozbach> so who wants to work on what?
19:07:33 <nirik> So, I'd like to point to https://fedoraproject.org/wiki/Infrastructure/GettingStarted for everyone.
19:07:35 <wileyman> Warren Thomas <---- wileyman yes
19:07:49 <nirik> Please do read that thru and hang out in #fedora-admin and/or #fedora-noc...
19:07:59 <nirik> ask about tickets you are interested in or tasks you would like to help with.
19:08:32 <nirik> I'm going to try and weed out some easyfix tickets I can point folks at to get started sometime in the next week or so.
19:08:57 <nirik> but again, welcome and hopefully we will see more of you all in the coming weeks/months. ;)
19:09:15 <Klainn> I'm not too picky, if it's something I'm not familiar with I probably should be so hit me with whatever.
19:09:16 <liknus> rbergeron: readiness meeting is coming here?
19:09:28 <nirik> liknus: over in #fedora-meeting-1
19:09:34 <liknus> oops thanks!
19:10:04 <nirik> if any of you folks would like to be in the apprentice group so you can look around at our setup and decide what you might want to work on, see us in #fedora-admin after the meeting.
19:10:23 <nirik> I've started a page on the apprentice group: https://fedoraproject.org/wiki/Infrastructure_Apprentice
19:10:35 <nirik> please do edit, add, ask questions on it, etc.
19:10:57 <nirik> ok, moving on to upcoming tasks:
19:11:02 <nirik> #topic Upcoming tasks
19:11:18 <nirik> REMINDER: we are in deep freeze for f15 release, which will be next tuesday.
19:11:29 <nirik> so, avoid any changes to any systems without approval.
19:11:51 <nirik> Other upcoming tasks: https://fedoraproject.org/wiki/Infrastructure_upcoming_tasks
19:12:14 <nirik> I need to update the ical file, but that will have our upcoming stuff in an ical feed.
19:12:19 <goozbach> nice!
19:12:47 <nirik> after freeze, I will publish that to an easy-to-remember set of urls.
19:13:23 <nirik> Anyone have any other upcoming items they wish to schedule, see me.
19:13:33 <nirik> Ideally we will be scheduling any outages a week in advance.
19:13:51 <StylusEater_work> ?
19:14:04 <nirik> StylusEater_work: go ahead
19:14:12 <StylusEater_work> nirik: puppet changes need to be scheduled?
19:14:27 <skvidal> StylusEater_work: posted and then approved with 2 +1's
19:14:33 <StylusEater_work> skvidal: roger
19:14:53 <nirik> https://fedorahosted.org/fedora-infrastructure/browser/architecture/Environments.png
19:15:11 <nirik> if they affect any machine in the freeze zone, then yeah, must be approved.
19:15:42 <nirik> ok, moving on to meeting tickets then...
19:15:48 <nirik> #topic Meeting tickets
19:15:56 <nirik> I'd like to go over the f15 final ones quickly.
19:16:03 <nirik> https://fedorahosted.org/fedora-infrastructure/query?status=new&status=assigned&status=reopened&group=milestone&keywords=~Meeting&order=priority
19:16:24 * nirik waits for hosted
19:16:43 <nirik> .ticket 2709
19:16:44 <zodbot> nirik: #2709 ([Fedora 15 Final] Communication with RH IS) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/2709
19:16:59 <nirik> I talked with them and they are aware to watch for any problems on release day or before.
19:17:17 <nirik> .ticket 2711
19:17:18 <zodbot> nirik: #2711 ([Fedora 15 Final] Modify Template:FedoraVersion) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/2711
19:17:23 <nirik> thats on release day I think.
19:17:45 <nirik> .ticket 2721
19:17:46 <zodbot> nirik: #2721 ([Fedora 15 Final] Mirror space) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/2721
19:17:49 <nirik> .ticket 2723
19:17:50 <zodbot> nirik: #2723 ([Fedora 15 Final] Permissions on mirrors) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/2723
19:18:04 <nirik> I think those are good, need to check to make sure.
19:18:20 <nirik> .ticket 2720
19:18:21 <zodbot> nirik: #2720 ([Fedora 15 Final] New Website) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/2720
19:18:37 <nirik> I guess we don't have anyone from websites here?
19:19:00 <goozbach> nirik: they're discussing it in F-M-1
19:19:02 <nirik> I can check with them.
19:19:03 <goozbach> :)
19:19:03 <nirik> yeah.
19:19:47 <nirik> ok, I think those are the release tickets that can be looked at right now.
19:19:47 <goozbach> sounds like elad661 says they're good to go
19:19:55 <nirik> yeah.
19:20:10 <nirik> anyone have any other release tickets we should address? or just meeting related tickets?
19:21:04 <StylusEater_work> just meeting here
19:21:17 <nirik> StylusEater_work: go ahead.
19:21:24 <StylusEater_work> .ticket 2777
19:21:25 <zodbot> StylusEater_work: #2777 (/etc/system_identification missing on some hosts?) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/2777
19:21:35 <nirik> smooge: is ticket 2721 ok and safe to close?
19:21:54 <smooge> one sec
19:21:57 <StylusEater_work> with the help of smooge and skvidal I managed to document the puppet setup for /etc/system_identification
19:22:32 <StylusEater_work> my question is we seem to be using servergroups (and the associated classes) for the file definition in puppet ... can I instead move this file definition to the host configs?
19:22:40 <StylusEater_work> does that make sense?
19:23:15 <StylusEater_work> or do we want to stick with servergroups and somehow generalize it or overwrite it per node?
19:23:30 <nirik> cool. Thanks for working on it!
19:23:53 <nirik> so, the question becomes are all machines in a servergroup the same security profile, or is that more a per host decision?
19:23:53 <smooge> nirik, done
19:24:14 <StylusEater_work> nirik: without verifying my suspicion is it varies
19:24:41 <smooge> StylusEater_work, we have used servergroups because we may have 20 app servers and we are good at forgetting to update 9 of them to the same level
19:24:53 <ranjibd_> how do we enforce security policies once /etc/system_identification is in place?
19:25:35 <StylusEater_work> smooge: I understand and it makes sense but I believe we can make per node changes for things like node specific files. I'm willing to do it.
19:25:40 <nirik> it's informational more than an enforcement item.
19:25:52 <nirik> ie, it's to let you know when you login there what kind of machine you are dealing with.
19:26:04 <StylusEater_work> smooge: I just wanted to get feedback before generating a large patch list for skvidal
19:26:09 <nirik> StylusEater_work: I'd say propose your changes and we can look at the diff?
19:26:26 <skvidal> nirik: I suggested he attach it to the ticket on this
19:26:29 <StylusEater_work> nirik: kk
19:26:41 <CodeBlock> hi there, late but here
19:26:46 <nirik> sounds good.
19:26:46 <StylusEater_work> skvidal: yes you did, but there are a lot of hosts to generate patches ... I know code talks but...
19:26:48 <nirik> welcome CodeBlock
19:27:03 <CodeBlock> On tethered wifi on our way to dayton \o/, going 65 mph down a highway. <3 technology
19:27:04 <nirik> StylusEater_work: yeah, sometimes it's a ton of small changes to many many files. ;(
19:27:12 <skvidal> CodeBlock: you're not driving, right?
19:27:26 <CodeBlock> skvidal: do you need to even ask that :|
19:27:30 <CodeBlock> No, I am not driving
19:27:34 <skvidal> good
19:27:37 <nirik> does anyone have any other tickets they would like to discuss at this time?
19:27:46 <nirik> StylusEater_work: again, thanks for looking at that
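[Editor's note: for context, a minimal sketch of the Puppet pattern being debated above. Everything below is hypothetical, not the real Fedora Infrastructure manifests: a servergroup class manages /etc/system_identification for every member, and a per-host subclass overrides just that one file via class inheritance. This keeps smooge's "update all 20 app servers at once" property while still allowing the per-node variation StylusEater_work asked about.]

    # Hypothetical servergroup class: every member gets the same banner.
    class servergroup::appserver {
        file { "/etc/system_identification":
            owner   => "root",
            group   => "root",
            mode    => "0644",
            content => "Security profile: standard app server\n",
        }
    }

    # Hypothetical per-host override: inherit the servergroup defaults,
    # replacing only this file's content for the one node that differs.
    class host::special01 inherits servergroup::appserver {
        File["/etc/system_identification"] {
            content => "Security profile: hardened, restricted access\n",
        }
    }

    node "special01.example.org" {
        include host::special01
    }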
19:28:27 <goozbach> nirik: there was an item by codeblock
19:28:29 <nirik> #topic Gather ideas for splitting hosted to more than one box
19:28:34 <nirik> yep. it's next. ;)
19:28:35 <goozbach> that's the one
19:28:38 * nirik was jumping around.
19:28:51 <nirik> CodeBlock: care to discuss this one?
19:29:18 <CodeBlock> oh ..right
19:29:47 <CodeBlock> Yeah, so...as a summer project, I would like to discuss what needs to be done to split hosted off to several boxes, rather than just one (at serverbeach, no less)
19:30:00 <CodeBlock> Load on the primary hosted box is constantly 3-5ish
19:30:20 <CodeBlock> skvidal had some ideas on this a while back, but.. I'd like to revisit it
19:30:37 <janfrode> Will it work to have one editable host, and several readonly ?
19:31:03 <CodeBlock> skvidal: you here?
19:31:09 * skvidal is here
19:31:18 <CodeBlock> I think skvidal's original idea was to have some kind of central storage that each box would see and use for stuff
19:31:26 <skvidal> well it would be something like
19:31:29 <nirik> that would be nice, but not easy to do. ;)
19:31:32 <skvidal> central, replicated storage
19:31:42 <skvidal> and then other boxes tying into that for accessing data
19:31:44 <skvidal> per-service
19:32:03 <skvidal> the services provided by hosted are:
19:32:11 <skvidal> $scm-access
19:32:20 <skvidal> $scm-ssh-commit-access
19:32:28 <skvidal> website
19:32:38 <skvidal> (trac)
19:32:41 <skvidal> (download)
19:32:57 <skvidal> mailman
19:33:19 <skvidal> that's it, right?
19:33:40 <CodeBlock> sounds right
19:33:45 <skvidal> if nothing else we can break mailman out - trivially
19:34:11 <nirik> I think the big load is http...
19:34:18 <nirik> search engines hitting the scm repos.
19:34:20 <smooge> I think breaking out mailman would be good.
19:34:40 <nirik> skvidal: what would you suggest for the storage?
19:34:54 <nirik> http://fedoraproject.org/awstats/fedorahosted.org/ now has some web stats for us.
19:34:55 <skvidal> that's a great question :)
19:35:11 <skvidal> I had considered gluster/cloudfs but it is just not ready
19:35:17 <skvidal> it won't work for the way we need groups to work
19:35:20 * nirik has to step away for a sec. continue on
19:35:22 <skvidal> so that's a non-starter
19:36:07 <CodeBlock> So for the search engine stuff, .. after-freeze I'm wanting to get robots.txt in place. I tried twice at the start of the freeze, and both attempts failed/robots.txt still isn't working
19:36:09 <CodeBlock> sigh, this wifi is getting laggy
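[Editor's note: a crawler policy like the following is the usual fix for spiders hammering Trac/scm web views, sketched here as a hedged illustration only; the paths are hypothetical and the real fedorahosted.org layout may differ.]

    # Hypothetical robots.txt for a hosted web frontend: keep crawlers
    # out of the expensive source-browser views while leaving normal
    # project pages indexable.
    User-agent: *
    Crawl-delay: 10
    # Path wildcards are a common crawler extension (not core
    # robots.txt); Google and Bing honor them.
    Disallow: /*/browser/
    Disallow: /*/changeset/
    Disallow: /*/log/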
19:36:41 <skvidal> lustre might be something to investigate.
19:36:50 <CodeBlock> so where would we move mailman off to, first off?
19:36:52 <skvidal> or we could do something as simple as shared nfs
19:37:08 <skvidal> CodeBlock: anywhere, really - smtp is so easy to move around
19:37:22 <abadger1999> <nod>'s at nirik's evaluation of where load is coming from.  web-repo viewers are a big part (and we have both trac and a standalone web viewer in many cases)
19:38:23 <skvidal> abadger1999: so that's the other weird issue
19:38:25 <skvidal> trac
19:38:27 <nirik> I suspect mailman is almost none of the load. ;)
19:38:31 <skvidal> nirik: I do too
19:38:40 <skvidal> nirik: but it is also the easiest to move :)
19:38:54 <janfrode> Wouldn't also splitting off scm be easy, and free up file cache for the web viewers ?
19:39:15 <skvidal> janfrode: gotta have access to the data,
19:39:21 <skvidal> janfrode: and ssh-commit to that location, too
19:39:31 <CodeBlock> well another thing splitting stuff off gives us is.. right now if hosted01 dies, we lose...mailing lists, trac, all repos. If we split it off, if one thing dies, ideally it won't bring *everything* down
19:39:51 * nirik wishes they had pulled drbd into rhel6. ;(
19:40:08 <skvidal> nirik: drbd really only gives us a copy
19:40:13 <skvidal> last time I looked it doesn't do two-way writes
19:40:35 <nirik> it does in fact have a master/master mode. ;) (which you can put something like gfs2 on)
19:40:40 <skvidal> ah
19:40:42 <skvidal> I didn't know that
19:40:48 <nirik> but gfs2 is kinda a pain.
19:40:54 * StylusEater_work has to leave
19:40:57 <nirik> and it's not in anyhow, so don't mind me.
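[Editor's note: the master/master DRBD mode nirik refers to is dual-primary. A hedged sketch of what an 8.3-era resource might look like, with invented hostnames, devices, and addresses; a cluster filesystem such as GFS2 is still required on top of it, which is the pain he alludes to.]

    # Hypothetical DRBD 8.3 resource in dual-primary (master/master) mode.
    resource hosted-data {
        protocol C;                   # synchronous replication
        net {
            allow-two-primaries;      # the master/master switch
        }
        startup {
            become-primary-on both;
        }
        on hosted1 {
            device    /dev/drbd0;
            disk      /dev/vg0/hosted;
            address   192.0.2.1:7789;
            meta-disk internal;
        }
        on hosted2 {
            device    /dev/drbd0;
            disk      /dev/vg0/hosted;
            address   192.0.2.2:7789;
            meta-disk internal;
        }
    }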
19:41:11 <skvidal> janfrode: so your suggestion
19:41:12 <nirik> we could also split things by projects
19:41:18 <skvidal> break each scm out to a host?
19:41:29 <skvidal> nirik: by letter?
19:41:33 <nirik> 500 projects on hosted1, 500 projects on hosted2, alternate when adding new.
19:41:37 <CodeBlock> nirik: yes, but that kills the last thing I said ;)
19:41:38 <skvidal> janfrode: the issue of course is most of our projects are git
19:41:45 <nirik> or some other criteria that makes them about the same number.
19:41:49 <skvidal> CodeBlock: how?
19:42:15 <skvidal> nirik: I think the question I have is simple - how much can we spend on hosted?
19:42:16 <janfrode> No, I was thinking one host for scm/ssh, one for website/trac
19:42:30 <skvidal> nirik: what's our budget and how important is hosted to our mission
19:42:36 <skvidal> imo hosted seems pretty damned important
19:42:52 <nirik> janfrode: yeah, but scm has to be mounted on website... and thats most of our load with spidering the scm content. ;(
19:42:54 * CodeBlock agrees, which is why I'd like to spend some of my summer giving it some love
19:43:05 <skvidal> so let's give the big example
19:43:13 <nirik> skvidal: I agree. I think it's important and I'd like to see us enhance and make it awesome.
19:43:16 <skvidal> 1 box for the mailing lists for collab and hosted
19:43:42 <janfrode> nirik: so proxy it in the web frontend ?
19:43:45 <skvidal> 3 boxes for the trac/git/websites for the projects (hosted1/2/3 - divided up by project name)
19:44:09 <skvidal> 1 box which has write access to all of the above 3 by nfs for commits?
19:44:28 <skvidal> so the 3 boxes for the websites are essentially proxies
19:44:38 <nirik> janfrode: yeah, we could potentially do some caching there...
19:44:42 <CodeBlock> hmm
19:44:42 <skvidal> which redirect you around internally to get to the right data directly from the host by http
19:45:22 <CodeBlock> it would be better than what we have, that's for sure
19:45:38 <skvidal> so if we were to lose one of those 3 boxes we'd only lose 1/3rd of the projects websites
19:45:47 <CodeBlock> yeah
19:45:59 <skvidal> and the mailing lists are immaterial here
19:46:17 <CodeBlock> so mailing lists would go on the same box as fpo lists?
19:46:44 * janfrode likes creating active/active/* cluster with IBM GPFS storage.. but that's probably not an option here..?
19:46:45 <nirik> skvidal: and also could move those sites to one of the other ones.
19:47:02 <nirik> but if the backend box died they would all die
19:47:15 <skvidal> nirik: I'm saying no backend
19:47:39 <skvidal> nirik: the 3 boxes host the actual data
19:47:39 <nirik> oh I see...
19:47:49 <skvidal> and if we wanted to limit impact due to outage
19:47:57 <skvidal> each of those is at a different center, I'd think
19:48:41 <nirik> yeah.
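[Editor's note: a hedged sketch of the "divide projects by name" idea in shell; the split points and hostnames are invented for illustration. Any frontend that knows the rule can compute where a project lives without a lookup service.]

    #!/bin/bash
    # Hypothetical: map a project name to one of three hosted boxes by
    # its first letter, so the mapping is stable and needs no database.
    project=$1
    first=$(printf '%s' "${project:0:1}" | tr '[:upper:]' '[:lower:]')
    case $first in
        [a-g]) shard=hosted1 ;;
        [h-p]) shard=hosted2 ;;
        *)     shard=hosted3 ;;   # q-z, digits, everything else
    esac
    echo "${project} -> ${shard}.fedoraproject.org"

[A first-letter split is transparent but uneven; hashing the full name would balance better at the cost of an opaque mapping.]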
19:49:01 <nirik> we could possibly cross replicate too if we wanted... rsync or something
19:49:01 <skvidal> and by doing the internal redirects from their apache servers
19:49:04 <skvidal> nirik: +1
19:49:17 <skvidal> hmmm
19:49:19 <skvidal> that would be nutty
19:49:26 <skvidal> do a raid config
19:49:30 <CodeBlock> so how would this work, budget wise? I realize I'm not the "right" person to ask that question, but .. could we afford something like that?
19:49:30 <skvidal> hosted1,2,3
19:49:39 <skvidal> each copies half of each of the others
19:49:49 <smooge> oh no... not raid over NFS... please no
19:49:50 <nirik> CodeBlock: it depends. ;) we would have to see where and what things we will have...
19:49:54 <skvidal> notactual raid
19:50:06 <smooge> oh ok
19:50:07 <nirik> rsync probably would be good enough. ;)
19:50:15 <skvidal> smooge: I meant that if 2 hosts were up
19:50:23 <skvidal> we could reassemble all the data from the other 2
19:50:35 <skvidal> CodeBlock: so instead of budget
19:50:40 <skvidal> let's look at a plan
19:50:43 <skvidal> and figure out cost
19:50:49 <skvidal> a couple of things which would help us, i think
19:50:52 * nirik thinks that sounds good.
19:50:57 <skvidal> 1. how much space/mem we're using now
19:51:03 <skvidal> 2. how many connections in any given month
19:51:04 <smooge> yeah.. you will need to work out the locking mechanism and what happens when bob commits to X, joe commits to Y, and they haven't synced up
19:51:08 <skvidal> 3. growth rate of the above
19:51:12 <skvidal> so we can predict
19:51:19 <skvidal> smooge: there would not be any multiple writers
19:51:21 <skvidal> smooge: a single writer
19:51:24 <skvidal> just read-only replicated
19:51:36 <skvidal> no one is suggesting anything crazy - just keeping a hot copy
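[Editor's note: a hedged sketch of the single-writer rsync "hot copy" being described, with hypothetical paths and hostnames; something like this would run from cron on the one box that accepts commits.]

    #!/bin/bash
    # Hypothetical: push a read-only copy of the project data from the
    # single writable box to the other hosted boxes over ssh.
    SRC=/srv/hosted/
    for mirror in hosted2.fedoraproject.org hosted3.fedoraproject.org; do
        rsync -aH --delete --delay-updates "$SRC" "${mirror}:/srv/hosted/" \
            || echo "rsync to ${mirror} failed" >&2
    done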
19:52:15 * nirik notes the awstats stuff is at least a start. We need to add in git.fh.o, etc after freeze.
19:52:27 <skvidal> nod
19:52:36 <skvidal> CodeBlock: so a suggestion
19:52:39 <skvidal> you want to work on this this summer?
19:52:54 <CodeBlock> that was my plan, yes
19:53:02 <skvidal> CodeBlock: how about you come up with a game plan by the week after next
19:53:22 <nirik> that sounds good...
19:53:46 <CodeBlock> skvidal: sounds good - I'll throw something out on the list and get ideas too
19:54:31 <skvidal> the total disk space in use on hosted1 is actually quite small
19:54:31 <nirik> #action CodeBlock to gather more ideas and put together a longer term hosted plan by week after next.
19:54:39 <nirik> yep.
19:54:48 <nirik> ok, anything more on this? or shall we move on?
19:55:22 <nirik> I had one more quick item before open floor I forgot I wanted to mention...
19:55:26 <nirik> #topic Tickets
19:55:51 <nirik> I'd like to try and get together for a few minutes sometime in the next few weeks with folks who have tickets assigned to them...
19:56:00 <skvidal> oh dear
19:56:12 <nirik> so we can see whats done, whats easy, what can be nuked.
19:56:14 * skvidal goes to look
19:56:33 <nirik> also, we have a bunch of tickets for things like pkgdb, fas, or websites that might be better on their own trackers.
19:56:52 <nirik> skvidal: you only have 1 ticket I think. ;)
19:56:56 <skvidal> I just checked
19:56:58 <skvidal> I think I'm free!
19:57:08 <skvidal> https://fedorahosted.org/fedora-infrastructure/report/4
19:57:10 <skvidal> tickets by owner
19:57:21 <nirik> Anyhow, if folks would like to corner me when they have time, or I can find them... it would be nice to get our trac down to sanity.
19:57:32 <abadger1999> <nod> WRT the coding tickets -- we need to figure out how to make EasyFix easy to find.
19:57:55 <abadger1999> Since putting them on fas, python-fedora, pkgdb, etc... makes it hard to find for people coming into infra.
19:58:02 <nirik> abadger1999: yeah. I think keyword would be ok for that.
19:58:17 <nirik> sure, but some of them would be best moved... but it will be on case by case basis.
19:58:23 <abadger1999> But putting them all in fedora-infrastructure isn't logical with someone coming into it from the individual web apps.
19:58:40 <abadger1999> Yeah.
19:58:47 <abadger1999> well.. not sure about case-by-case.
19:59:15 <nirik> it may work best that some are just pointers...
19:59:20 <nirik> ie, see otherticket foo
19:59:25 <abadger1999> If there's less places to search and a simple logic to figuring out where to search, I think that's much better than some easyfix here, some there.
19:59:33 <abadger1999> <nod> that would work.
19:59:53 <nirik> #topic Open Floor
20:00:01 <nirik> ok, anything for open floor in the last few seconds?
20:00:04 <nirik> too late. ;)
20:00:07 <skvidal> hah
20:00:37 * nirik will wait a few to see if anyone comes up with anything.
20:01:13 * CodeBlock just throws out a "good job" to everyone, with the freeze and such so far, etc.
20:01:23 <CodeBlock> encouragement++;
20:01:28 <CodeBlock> ;)
20:01:29 <nirik> yeah, I am hoping for a nice smooth release. ;)
20:01:49 <nirik> Thanks for coming everyone. :) Lets continue in #fedora-admin and #fedora-noc.
20:01:51 <nirik> #endmeeting