infrastructure
LOGS
19:59:45 <mmcgrath> #startmeeting Infrastructure
19:59:45 <zodbot> Meeting started Thu Jul 29 19:59:45 2010 UTC.  The chair is mmcgrath. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:59:45 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
19:59:51 <mmcgrath> #meetingname infrastructure
19:59:51 <zodbot> The meeting name has been set to 'infrastructure'
19:59:52 <smooge> hi there
20:00:01 <onekopaka_laptop> 'ello.
20:00:03 <abadger1999> Yo!
20:00:09 * CodeBlock 
20:00:10 <abadger1999> :-) just in the nick of time
20:00:24 <mmcgrath> #topic who's here?
20:00:50 <smooge> here
20:00:51 <onekopaka_laptop> I'm here according to what I see.
20:00:53 * CodeBlock ... again ;)
20:01:15 <smooge> skvidal is blogging about bikers
20:01:16 * sijis is around
20:01:19 <mmcgrath> Ok, well lets get started.
20:01:22 <mmcgrath> #topic Meeting tickets
20:01:24 * skvidal is here
20:01:27 <mmcgrath> .tiny https://fedorahosted.org/fedora-infrastructure/query?status=new&status=assigned&status=reopened&group=milestone&keywords=~Meeting&order=priority
20:01:27 <zodbot> mmcgrath: http://tinyurl.com/47e37y
20:01:30 <mmcgrath> .ticket 2275
20:01:32 <zodbot> mmcgrath: #2275 (Upgrade Nagios) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/2275
20:01:40 <mmcgrath> CodeBlock: what's the word here?
20:02:33 <CodeBlock> well, we have a noc3 running now, and have moved nagios-external to it, which is now nagios3
20:02:34 <CodeBlock> so... for now if anyone sees any problems with it, please let me know
20:02:45 <mmcgrath> CodeBlock: are we getting alerts from noc3 yet?
20:03:34 <CodeBlock> mmcgrath: I think we should be, I don't really have a way to test that - I might add a fake check just to see
20:03:38 <CodeBlock> but theoretically, we should be
20:03:49 <mmcgrath> k
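
(A fake check, as CodeBlock suggests, is the usual way to prove the notification path from noc3 end to end: Nagios only looks at a plugin's exit code and first line of output, so a script that always reports CRITICAL will do. A minimal sketch, assuming a Python plugin wired to a throwaway service definition - the name is illustrative, not the actual noc3 config:)

    #!/usr/bin/env python
    # check_always_critical - dummy plugin used only to confirm alerts get delivered.
    # Nagios plugin exit codes: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
    import sys

    print "CRITICAL - intentional test alert from noc3, please ignore"
    sys.exit(2)

(Once a notification for that service arrives, the new host is known to alert; the check can be deleted afterwards.)
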
20:03:58 <smooge> kill a server and find out
20:04:04 <mmcgrath> CodeBlock: well, lets let it run until next week, if all's good we'll rename and have at it.
20:04:08 <smooge> no wait, not a good idea
20:04:11 <mmcgrath> CodeBlock: anything else?
20:04:15 <CodeBlock> smooge: hehe
20:04:19 <CodeBlock> mmcgrath: Don't think so
20:04:27 <mmcgrath> k
20:04:29 <mmcgrath> next ticket
20:04:31 <mmcgrath> .ticket 2277
20:04:32 <zodbot> mmcgrath: #2277 (Figure out how to upgrade transifex on a regular schedule) - Fedora Infrastructure - Trac - https://fedorahosted.org/fedora-infrastructure/ticket/2277
20:04:44 <mmcgrath> abadger1999: I think you created this one.
20:04:50 <mmcgrath> AFAIK we still have no one maintaining it in EPEL
20:04:53 <mmcgrath> any translators here?
20:05:13 <abadger1999> mmcgrath: Yeah -- talked with translators at FUDCon Santiago and we thought that it might be good to get this on a regular schedule
20:05:34 <abadger1999> That way we don't disrupt translator workflow at a critical point in the Fedora release schedule.
20:05:42 <mmcgrath> abadger1999: the funny thing here is I don't think this really has much to do with us beyond a yum update.
20:05:55 <mmcgrath> if someone from the translators team wants to build the rpm at specific times, we do monthly updates anyway
20:05:56 <abadger1999> mmcgrath: Well.. don't we want to test the upgrades too?
20:06:01 <mmcgrath> and we could certainly do others
20:06:03 <mmcgrath> we do in staging first
20:06:11 <mmcgrath> but without any updates coming, we've got nothing to test :(
20:06:26 <smooge> and without a test plan...
20:06:27 <abadger1999> And last time... the update required we do a bit of db work before/during/after the code was updated.
20:06:33 <Ttech> oops.
20:07:02 <abadger1999> Also... the way transifex works keeps changing -- so it's probably not going to see many updates in EPEL.
20:07:22 <abadger1999> but the updates bring new features that our translators will want so we probably have to stay on the treadmill.
20:07:44 <mmcgrath> We'll make whatever they want us to run work, but someone still needs to package it
20:07:52 <mmcgrath> and I don't think that's us since it's just an upstream now.
20:08:11 <mmcgrath> we're too far removed from that group and workflow to know when they want what, what's important, etc.
20:08:21 <mmcgrath> surely some translator can take it.  it's orphaned in EPEL
20:09:00 <abadger1999> mmcgrath: Well -- otoh they're too far away from us to know what happens when you try to upgrade from 0.6 to 12.15 in one smooth go.
20:09:00 <smooge> I think because it didn't fit well with EPEL's no-broken-updates policy
20:09:19 <mmcgrath> abadger1999: nothing's going to fix that though
20:09:29 <mmcgrath> we have staging, it's not hard for us to upgrade and give it a look, let them know it's ready
20:10:01 <abadger1999> mmcgrath: Sure.  But we need to coordinate schedules and such.
20:10:11 <mmcgrath> what is there to coordinate though?
20:10:34 <mmcgrath> if that team decides they want a new transifex, they can package it and make it ready.  They can email us if they want to update it.
20:10:38 <abadger1999> Remember last time?  with stickster asking if we should move our translation infrastructure to indifex.org for F14?
20:10:50 <mmcgrath> I remember a package not being ready.
20:10:54 <abadger1999> mmcgrath: Does docs package mediawiki?
20:11:12 <mmcgrath> the translations team doesn't have to package transifex if they don't want but someone does.
20:11:41 <mmcgrath> and that someone's not us.  Stuff like this is a partnership.  They gotta pony up and do some work too.
20:11:50 <abadger1999> I think packaging is our job.
20:11:58 <mmcgrath> I think it's absolutely not.
20:12:11 <mmcgrath> since we don't know anything about it, nor the needs, nor what translations needs are
20:12:16 <mmcgrath> we're not upstream for transifex.
20:12:19 <abadger1999> There's certainly a partnership here but why would it be the translators' job to package something?
20:12:20 <mmcgrath> we're not the users of transifex
20:12:35 <mmcgrath> because they're the team that wants to use it.
20:12:37 <abadger1999> There's no guarantee that they're system admins and know the first thing about how to install software.
20:12:52 <mmcgrath> then it's their job to find a packager.
20:13:14 <mmcgrath> we don't run around looking for apache packagers
20:13:32 <abadger1999> mmcgrath: If RHEL dropped it, wouldn't we?
20:13:45 <mmcgrath> maybe
20:13:51 <abadger1999> mmcgrath: The EPEL maintainer dropped the ball with mod_wsgi so we package it in infra.
20:13:53 <mmcgrath> I'm just pissed that this work got dropped in our lap.
20:14:00 <mmcgrath> It makes me not want to accept any hosting without a written agreement.
20:14:14 <abadger1999> mmcgrath: Didn't it get dropped in our lap when we chose to install transifex in our environment?
20:14:19 <mmcgrath> Teams cannot, ever, come to us just with uses in mind.  they need to take part ownership.
20:14:25 <mmcgrath> .whoowns transifex
20:14:25 <zodbot> mmcgrath: ivazquez (orphan in Fedora EPEL)
20:14:31 <mmcgrath> orphan in epel.
20:14:36 <mmcgrath> you're suggesting we now do that work too
20:14:42 <abadger1999> Which is why I support you when you say we shouldn't be accepting new apps without more manpower coming with it :-)
20:15:23 <mmcgrath> but just like with docs, the websites team, etc.  They all have major ownership over those services.
20:15:26 <mmcgrath> we just host it.
20:15:53 <mmcgrath> have we even asked the translations team to find a packager?
20:16:26 <abadger1999> mmcgrath: I doubt it.  But it can't go into EPEL -- it needs to be a packager that adds it into the infrastructure repo.
20:16:35 <mmcgrath> why can't it go in epel?
20:16:45 <abadger1999> mmcgrath: Because it changes too much.
20:16:51 <mmcgrath> isn't that for the packager to decide?
20:16:59 <abadger1999> [13:09:00] <smooge> I think because it didn't fit well with EPEL's no-broken-updates policy
20:17:13 <mmcgrath> do we know an upgrade would break updates or are we assuming it?
20:17:14 <abadger1999> mmcgrath: No.  EPEL policy dictates what can go into EPEL.
20:17:30 <mmcgrath> since the current upgrade hasn't even been tested.
20:17:42 <onekopaka_laptop> I don't think we should just forbid transifex from being in EPEL
20:17:43 <mmcgrath> I don't care where the RPM goes, but someone on that team (who knows their schedule) needs to package it.
20:17:48 <abadger1999> mmcgrath: yes, upgrades from transifex violate the epel policies on updates quite frequently.
20:18:12 <abadger1999> mmcgrath: Okay, so what can we give them?  Sponsor them into sysadmin-web  so they can use staging?
20:18:27 <mmcgrath> naw, sysadmin-test so they can use dev.
20:18:30 <mmcgrath> unless they want web.
20:18:39 <mmcgrath> and want to take ownership of hosting it as well.  I'm fine with that.
20:18:45 <mmcgrath> but that's a much bigger commitment than just packaging.
20:18:51 <mmcgrath> and I don't think they'd need it.
20:18:59 <mmcgrath> sure, if they want it we can help them.  But I wouldn't think that's a requirement.
20:19:10 <abadger1999> Okay -- I just think that only having control over packaging doesn't help much -- packaging is part of deployment.
20:19:28 * mmcgrath asks ignacio why no one's maintaining it in EPEL.
20:19:43 <onekopaka_laptop> I'm looking at the Transifex download page
20:19:55 <onekopaka_laptop> Soon, Tx will land in a yum repo near you, and you'll be able to install it with something like yum install transifex.
20:20:01 <mmcgrath> in fairness, packaging is the only thing standing between upstream and us at the moment.  It can be part of anything in that process, at the moment it's the only thing missing.
20:20:42 <mmcgrath> ignacio is going to ask diegobz if he'll continue to maintain it.
20:20:52 <mmcgrath> it's quite possible we've made a problem where there isn't one, just miscommunication.
20:21:24 <mmcgrath> abadger1999: I have no problem with someone from this team who wants to do the packaging for it.  but I'm just doubtful anyone will step up since that's not what we do.
20:21:28 <onekopaka_laptop> upstream clearly is pointing their potential users to the yum repositories
20:22:09 <mmcgrath> and I don't think it should be required of us.
20:22:17 <mmcgrath> anywho, anyone have any additional questions on that?
20:23:01 <onekopaka_laptop> I don't think we have to do it, but if others are unable to find packagers, we should help make a few packagers out of some translators
20:23:03 <abadger1999> mmcgrath: I would tend to disagree.  I think that packaging is absolutely something that we do... but we can talk about that in some other, bigger arena.
20:23:12 <mmcgrath> abadger1999: k
20:24:06 <mmcgrath> #topic updates
20:24:15 <mmcgrath> smooge: how'd this go, what went wrong, what needs to be done still and how can we avoid it next time?
20:24:27 <mmcgrath> seems like for the last 4 months of updates something has gone wrong or it's taken longer than the outage window, etc.
20:24:31 <stickster> mmcgrath: abadger1999: Tuning in just now to this conversation -- As a data point in your conversation: http://lists.fedoraproject.org/pipermail/trans/2010-July/007819.html
20:24:43 <smooge> ok we are working on 144 servers and 96 still need to be rebooted / given final updates
20:25:19 <mmcgrath> so two outage windows and less than half of the servers actually got updated and rebooted?
20:25:22 <mmcgrath> what happened?
20:25:54 <smooge> ok a couple of issues. We wanted to use func-yum and I found some bugs for skvidal
20:26:09 <smooge> Second we had a conflict between git branching and the outage
20:26:50 <smooge> so nothing inside PHX2 was going to be rebooted because of that. Last week I ran into an issue with TG2 and moksha that took my time downgrading
20:27:39 <skvidal> the func-yum issue was not with func-yum but with func and the groups setup
20:27:39 <mmcgrath> yeah that was a bummer, we had dmalcolm's and Oxf13's outages scheduled over the top of ours
20:27:46 <skvidal> I fixed it late yesterday
20:27:49 <skvidal> sorry for the hassle
20:28:26 <smooge> skvidal, it was a small thing actually once it worked it was pretty quick
20:28:28 <mmcgrath> it's fine, I'm just trying to figure out what's going on because I know we've not had a clean upgrade process in some months
20:28:29 <onekopaka_laptop> I would think we would be able to avoid scheduling outages over each other
20:28:48 <smooge> then I ran into a couple of issues where a box that had been set up to xen shutdown decided to do a xen save
20:28:49 <mmcgrath> onekopaka_laptop: well dmalcolm's was just a fat finger in the scheduling
20:28:55 <smooge> this meant I had to go and reboot again and such
20:29:35 <onekopaka_laptop> mmcgrath: and Oxf13's outage?
20:29:47 <Oxf13> massive.
20:29:50 <skvidal> mmcgrath: what part of the update process has been unclean? I'm not being defensive - I want to make sure I've gotten all the shit fixed :)
20:30:13 <onekopaka_laptop> Oxf13: was the problem that your outage was larger than you thought?
20:30:18 <mmcgrath> onekopaka_laptop: you'd have to ask smooge  there, I'm not sure he even scheduled a second outage
20:30:21 <Oxf13> no, in fact it may be shorter.
20:30:23 <mmcgrath> skvidal: it's just not worked right.
20:30:27 <smooge> most boxes were updated last week. It's just about 8 updates and then trying to figure out why some boxes in the cnode and virtweb do not show up in puppet/func
20:30:33 <mmcgrath> skvidal: we've always ended up going way past the outage window or not getting boxes updated or rebooted.
20:30:34 <smooge> then I need to get vpn working to a couple others
20:30:35 <Oxf13> onekopaka_laptop: but I was late in requesting the outage
20:30:45 <smooge> mmcgrath, I scheduled a second outage over the weekend
20:31:01 <skvidal> mmcgrath: ah - gotcha.
20:31:22 <mmcgrath> smooge: which weekend, this last one or the next one?
20:31:27 * mmcgrath might have just missed it
20:31:38 <smooge> last weekend
20:31:58 <onekopaka_laptop> July 23rd
20:31:59 <smooge> I got one of three dates wrong STILL
20:32:04 <onekopaka_laptop> I saw an email
20:32:14 <mmcgrath> so that didn't overlap with Jesse's update right?
20:32:20 <mmcgrath> err jesse's outage
20:32:22 <onekopaka_laptop> or rather I went back now and saw
20:32:35 <mmcgrath> smooge: so looking forward, what needs to happen in the future?
20:33:56 <smooge> ok we need to better advertise and work with our customers/partners on scheduling down time
20:34:29 <smooge> Wednesday had been picked because Mon/Fri are usually bad and Thursday is meeting day for many of us
20:34:39 <smooge> Tuesday was causing issues too
20:34:39 <onekopaka_laptop> mmcgrath: from what I see, Jesse's update, quoted at 48 hours, would have been happening at the same time as updates
20:34:43 <Oxf13> I was also bad about communicating how much outage I would need
20:35:15 <mmcgrath> smooge: how long would a total upgrade and reboot take?
20:35:49 <smooge> well it takes me about 30 minutes per xen server and clients to update and then a reboot is usually adding in 10-15 minutes
20:35:58 <smooge> not counting ping/irqs
20:36:26 <smooge> some of that can be parceled out to more people but other parts can't
20:36:27 <sijis> would having additional hands help?
20:36:39 <smooge> we can't reboot all the app or proxy servers at the same time
20:36:57 <smooge> and then you have bapp01/db02 which have been now missed through 2-3 reboot cycles
20:36:59 <onekopaka_laptop> smooge: I agree, that'd cause massive amounts of panic
20:36:59 <mmcgrath> 30 minutes per xen server?
20:37:04 <mmcgrath> that seems way too high.
20:37:23 <mmcgrath> smooge: so what does that mean in total outage window?
20:37:28 <onekopaka_laptop> mmcgrath: that's including the guests
20:37:35 <mmcgrath> yeah that still seems way too high.
20:37:43 <mmcgrath> I'd think we could do all the external machines at once.
20:37:51 <mmcgrath> or pretty close to that.
20:37:59 <mmcgrath> I don't see any reason to do updates in serial.
20:38:02 <smooge> mmcgrath, the func-yum speeds it up some but then there is the "oh why is postgres83 yelling that 84 showed up." or other things that require a little bit of hand holding
20:38:15 <skvidal> smooge: okay I have a couple of ideas here
20:38:51 <skvidal> 1. we should be able to use func-list-vms-per-host to know which hosts are where
20:38:52 <smooge> mmcgrath, I did that once :). I remember being advised to be less adventurous :)
20:39:13 <skvidal> 2. then dump the hosts back out to func-yum for doing updates on a set of items under a vm at a time - in parallel
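
(A rough sketch of the two steps skvidal outlines, using func's Python overlord API: ask each dom0 which guests it carries, then run updates on one guest per dom0 at a time so the parallelism follows the Xen hosts instead of hammering a single box. The host names, the 'xm list' parsing, and the async/nforks knobs are illustrative assumptions, not our actual layout:)

    import func.overlord.client as fc

    XEN_HOSTS = ["xen01.example.org", "xen02.example.org"]   # hypothetical dom0 names

    def guests_on(dom0):
        # 'xm list' prints a header line, then one row per domain; skip Domain-0 itself.
        rc, out, err = fc.Client(dom0).command.run("xm list")[dom0]
        names = [line.split()[0] for line in out.splitlines()[1:]]
        return [n for n in names if n != "Domain-0"]

    def update_one_round(guest_per_host):
        # func accepts multiple targets separated by ';' and can fork per target.
        spec = ";".join(guest_per_host)
        client = fc.Client(spec, async=True, nforks=len(guest_per_host))
        return client.command.run("yum -y update")   # job id, poll it with client.job_status()

(Looping that over the per-host guest lists, taking one guest from each dom0 per round, walks every guest while never updating two guests of the same Xen host at once.)
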
20:39:22 <mmcgrath> smooge: I think doing that should be a goal of ours.  lets fix things that are broken (like the postgres clusterfuck)
20:39:29 <mmcgrath> though I think that only impacted 2 publictest servers
20:39:32 <mmcgrath> I could be wrong.
20:39:59 <smooge> mostly pt. I ran into something similar on app01.stg and bapp1, I think
20:40:02 <smooge> no thats nagios
20:40:04 <skvidal> I can add a test-run func-yum run
20:40:15 <smooge> skvidal, a test run would be nice
20:40:46 <mmcgrath> at 30 minutes per xen server we're looking at something like 12-15 hours of outage per month for upgrades for less than 200 hosts and that doesn't seem reasonable to me.
20:40:46 <smooge> I end up thinking I remember all the crap hosts and then end up with "oh wait that needs something"
20:41:38 <smooge> mmcgrath, I agree. I need to come up with something better
20:42:20 <smooge> ideally it should be func-yum upgrade and find out what needs to be restarted but they keep updating the kernel and glibc
20:42:22 <sijis> would upgrading staging first give some indication on what needs to be done to the prod boxes?
20:42:30 <skvidal> sijis: only sometimes
20:42:38 <CodeBlock> speaking of upgrading stuff, mmcgrath any idea when the fas update is going to go live?
20:42:46 <skvidal> smooge: restarted I can make happen using needs-restarting
20:42:54 <skvidal> smooge: just gotta deploy it
20:43:05 <mmcgrath> CodeBlock: depends on when the sqlalchemy bits are fixed, abadger1999's got them on his todo but he's a busy dood.  we need like 8 more toshios
20:43:08 <smooge> skvidal, I 'deployed' a version in my home directory on a bunch of boxes.
20:43:24 <CodeBlock> mmcgrath: hehe, alright
20:43:43 <skvidal> smooge: I  have a couple of other items like
20:43:48 <skvidal> 'is running kernel latest'
20:43:52 <smooge> mmcgrath, my 30 minutes is also me being a bit more cautious than I probably should be
20:44:15 <skvidal> mmcgrath, smooge: let me ask a question
20:44:17 <smooge> I just keep knocking off someone important when I start doing it faster
20:44:20 <mmcgrath> shoot
20:44:30 <skvidal> why do we target updates as monthly?
20:44:44 <mmcgrath> because it has to be some interval and that seemed reasonable.
20:44:46 <skvidal> why not weekly - to keep the sheer overwhelmingness of the change to a smaller amount
20:44:48 <mmcgrath> automatic updates == teh fail.
20:44:54 <skvidal> I wasn't suggesting automatic
20:44:59 <skvidal> I understand wanting to watch them
20:45:07 <mmcgrath> I think the problem isn't the number of packages.  it's getting the updates done.
20:45:51 <smooge> skvidal, I tried doing that last year I think. I ended up with issues that func-yum should fix now (not getting all app servers in sync etc).
20:46:29 <smooge> the big one comes out that kernel updates end up being the big time killer
20:46:37 <skvidal> b/c of the reboots?
20:46:42 <smooge> yeah.
20:47:06 <skvidal> okay - we can also do update targets
20:47:22 <smooge> take it out of DNS; shutdown all the domU's; reboot the dom0; make sure the domU's come up; put it back in DNS
20:47:47 <skvidal> 'take it out of dns'
20:47:51 <skvidal> take _what_ out of dns?
20:47:56 <mmcgrath> skvidal: 'dig fedoraproject.org'
20:48:01 <smooge> we have had an issue where domU's sometimes come up perfectly and other times the dom0 says "Oh I know I just started but you can't have 2 GB of ram"
20:48:09 <smooge> wildcard and @
20:48:13 <skvidal> oh
20:48:15 <skvidal> you mean the app server
20:48:17 <skvidal> I'm sorry
20:48:24 <smooge> most of the outside servers have 1 proxy on them somewhere
20:48:25 <mmcgrath> well the proxy servers
20:48:34 <skvidal> i thought you were talking about the xen server
20:48:36 <mmcgrath> I do wish we had automated proxy recovery
20:48:41 <abadger1999> mmcgrath, CodeBlock: btw, what did you think about just having fas conflict with the earlier sqlalchemy?
20:48:42 <smooge> then check to make sure that haproxy says the app box came up ok
20:48:50 <smooge> skvidal, sorry wasn't clear
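
(smooge's sequence reads like a checklist that wants to become a script. A very rough sketch of its shape - the DNS and haproxy steps are stubs because that part lives in the zone files/puppet and isn't shown, and every helper name here is hypothetical:)

    import subprocess, time

    def run(host, cmd):
        # minimal remote-exec over ssh; func's command module would work just as well
        return subprocess.call(["ssh", host, cmd])

    def pull_from_dns(proxy):   pass   # stub: drop the proxy from the wildcard/@ records
    def put_back_in_dns(proxy): pass   # stub: restore it once everything checks out
    def check_haproxy(proxy):   pass   # stub: e.g. scrape the haproxy stats page for DOWN backends

    def wait_for_ssh(host):
        while subprocess.call(["ssh", "-o", "ConnectTimeout=5", host, "true"]) != 0:
            time.sleep(30)

    def reboot_dom0(dom0, proxy=None):
        if proxy:
            pull_from_dns(proxy)
        # cleanly shut the domU's down (not xen-save them), then reboot the dom0 itself
        run(dom0, "xm list | awk '$1 != \"Name\" && $1 != \"Domain-0\" {print $1}'"
                  " | xargs -r -n1 xm shutdown -w")
        run(dom0, "shutdown -r now")
        wait_for_ssh(dom0)
        run(dom0, "xm list")               # eyeball that the domU's came back with their full RAM
        if proxy:
            check_haproxy(proxy)
            put_back_in_dns(proxy)
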
20:48:54 <onekopaka_laptop> mmcgrath: define "automated proxy recovery"
20:49:01 <abadger1999> Since fas is on its own servers, it should work.
20:49:08 <abadger1999> But it's icky icky.
20:49:11 <mmcgrath> abadger1999: I'm fine with that if we won't run into any issues there.
20:49:27 <mmcgrath> abadger1999: we don't have any need for fas0X to actually have the older alchemy right?
20:49:33 <mmcgrath> python-fedora doesn't need it for fasClients or anything?
20:49:47 <skvidal> agreed on the automated proxy recovery - otherwise all the dns dreck would be simpler
20:49:50 <abadger1999> Try it out.  I think that just having the sqlalchemy that fas needs installed will work.  python-fedora doesn't need it.
20:49:51 <mmcgrath> onekopaka_laptop: right now if app1 goes down, no one notices because the load balancers take it out.  When it's back they add it back in.  We don't have to do anything
20:50:03 <mmcgrath> onekopaka_laptop: with the proxy servers it's a lot more complicated since it's dns balance based.
20:50:09 <onekopaka_laptop> mmcgrath: okay.
20:50:12 <abadger1999> hmm... although the python-fedora package seems to require sqlalchemy.
20:50:24 * abadger1999 will figure out why that is.
20:50:28 <mmcgrath> abadger1999: k, I didn't realize it would be that easy.  so the upgrade process would just be to remove the old version and upgrade
20:50:35 <mmcgrath> abadger1999: oh yeah that is a little weird :)
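
(The "try it out" step is cheap: after installing only the SQLAlchemy version fas wants on a staging fas box, a few lines in a python shell confirm which version actually gets imported and whether python-fedora still loads. A sketch, with nothing fas-specific in it:)

    # run in a python shell on the trial box (e.g. fas01.stg)
    import pkg_resources
    print pkg_resources.get_distribution("SQLAlchemy").version

    import sqlalchemy
    print sqlalchemy.__version__        # should match the version fas needs

    import fedora.client               # python-fedora should import without the old sqlalchemy
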
20:50:37 <onekopaka_laptop> mmcgrath: so treating DNS round-robin same as haproxy
20:50:52 <mmcgrath> onekopaka_laptop: ehh, sort of.  we also have geodns in there but that's something else.
20:51:00 <onekopaka_laptop> mmcgrath: yeah.
20:51:02 <mmcgrath> the big thing is when a proxy server goes down, dns keeps sending people there.
20:51:03 <mcloaked> ?
20:51:13 <onekopaka_laptop> mmcgrath: which is very bad.
20:51:18 <mmcgrath> yeppers
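
(What "automated proxy recovery" would roughly mean: a health check driving the DNS round-robin the way haproxy already drives the app servers. A sketch of the monitoring half only - the proxy names are made up, the actual zone/geodns edit is deliberately left out, and checking from one vantage point would of course miss geodns-specific failures:)

    import socket, urllib2
    socket.setdefaulttimeout(10)

    PROXIES = ["proxy01.example.org", "proxy02.example.org"]   # hypothetical names

    def healthy(host):
        try:
            urllib2.urlopen("http://%s/" % host)
            return True
        except Exception:
            return False

    def proxies_to_pull():
        # anything unhealthy here would need its A records dropped from the
        # wildcard and @ entries until it recovers; re-adding is the reverse step
        return [p for p in PROXIES if not healthy(p)]

    if __name__ == "__main__":
        for p in proxies_to_pull():
            print "would pull %s out of DNS" % p
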
20:51:41 <mmcgrath> holy moly the meeting has 10 minutes left :)
20:51:47 <mmcgrath> anyone mind us moving on for now?
20:51:59 <skvidal> mmcgrath., smooge : I'd like to talk more - later about updates
20:52:00 <onekopaka_laptop> I"m pretty sure nobody wants to rewrite BIND to have that functionality too
20:52:04 <skvidal> but I'm fine moving on for now
20:52:07 <mmcgrath> skvidal: sure, I'm around :)
20:52:18 <mmcgrath> onekopaka_laptop: I bet there are options I just don't know of any
20:52:19 <mmcgrath> ok
20:52:22 <mmcgrath> #topic Open Floor
20:52:23 <onekopaka_laptop> mmcgrath: okay
20:52:25 <mmcgrath> anyone have anything to discuss?
20:52:36 <onekopaka_laptop> blogs.fp.o, I guess
20:52:39 <onekopaka_laptop> I'm back
20:52:52 <onekopaka_laptop> and I'm actually going to get the documentation done
20:52:54 <smooge> ok
20:52:58 <mmcgrath> Just a heads up, I'm working on a personal repo setup thing on people - http://repos.fedorapeople.org/
20:53:00 <onekopaka_laptop> I have all these nice screenshots
20:53:03 <mmcgrath> onekopaka_laptop: yeah we need SOP and stuff.
20:53:04 <mmcgrath> perfect
20:53:21 <onekopaka_laptop> so people will be able to easily make their own blogs!
20:53:21 <mdomsch> mmcgrath: different than throwing repos in ~/public_html ?
20:53:22 <smooge> I am building a fakefas for the Insight project. Then I need to work on how to deal with updates better
20:53:46 <mmcgrath> mdomsch: a little yeah, take a look - https://fedoraproject.org/wiki/User:Mmcgrath/Fedorapeople_repos
20:54:02 <mmcgrath> mdomsch: if you have anything you can host or you'd like to please run through the steps.  I'd like to announce it soon but it could use testing :)
20:54:18 <sijis> onekopaka_laptop: i did notice a bug.. you get an error in the control panel when you first log in. i think that should only be displayed after you log in.
20:54:53 <onekopaka_laptop> sijis: do you have anything like a screenshot?
20:55:04 <onekopaka_laptop> sijis: because I know there's one ugly ugly ugly bug
20:55:05 <sijis> you can reproduce it everytime
20:55:20 <sijis> as long as you aren't logged in
20:55:25 <onekopaka_laptop> where if you go to log into the admin
20:55:36 <onekopaka_laptop> it throws you to a non-existent page
20:55:45 <sijis> we could be talking about the same one
20:55:51 <sijis> it's a redirect problem, I believe
20:56:03 <onekopaka_laptop> yeah
20:56:03 <mmcgrath> <nod>
20:56:13 <mmcgrath> k, we've got 5 minutes left.  Anyone have anything else they'd like to discuss?
20:56:22 <mmcgrath> I did forget to mention varnish is all in place now for smolt and the wiki
20:56:23 <onekopaka_laptop> I think it's because WordPress is all unaware of the SSL fun
20:56:25 <mmcgrath> it's been working fine
20:56:27 <mmcgrath> anyone seen any issues?
20:56:34 <onekopaka_laptop> mmcgrath: nope.
20:56:39 <onekopaka_laptop> however
20:56:51 <onekopaka_laptop> addendum to my email about assets
20:57:06 <onekopaka_laptop> last I checked, /static is served by Apache on the proxy
20:57:12 <smooge> not me
20:57:19 <onekopaka_laptop> and it never goes past that
20:57:33 <mmcgrath> onekopaka_laptop: correct, that actually never gets as far as the varnish server.
20:57:35 <onekopaka_laptop> so we have nothing to worry about there.
20:57:40 <mmcgrath> it gets served directly from the proxy servers
20:58:07 * gholms invites everyone interested to the Cloud SIG meeting right after this
20:58:12 <onekopaka_laptop> so just so nobody gets hung up on that topic
20:58:30 <gholms> Nice
20:58:35 * mdomsch needs to roll a new MM release out
20:58:45 <mmcgrath> coolz
20:58:49 <mdomsch> not critical, but geppetto wanted it
20:58:55 <mmcgrath> <nod>
20:58:55 <onekopaka_laptop> mdomsch: is it feature packed?
20:59:03 <mmcgrath> anyone have anything else to discuss?  If not we'll close the meeting in 30
20:59:15 <onekopaka_laptop> mdomsch: showing up Apple with over 9000 new features? ;-)
20:59:19 <mdomsch> it's mostly bugfixes, but one feature (marking private=True on private mirrors in metalinks)
20:59:34 <mdomsch> which rawhide yum uses to let people use only private mirrors if that's their policy
20:59:41 <mmcgrath> Ok I'm going to close so the cloud guys can get going :)
20:59:46 <mmcgrath> #endmeeting