fedora-meeting
LOGS

20:00:16 <mmcgrath> #startmeeting
20:00:23 * ricky 
20:00:26 <mmcgrath> #topic Infrastructure -- Who's here?
20:00:31 * dgilmore 
20:00:31 * ricky (oops)
20:00:40 * LinuxCode 
20:01:10 * johe here
20:01:14 * nirik is around.
20:01:21 <mmcgrath> K.  Let's get started on tickets
20:01:30 <mmcgrath> #topic Infrastructure -- Tickets
20:01:32 <mmcgrath> .tiny https://fedorahosted.org/fedora-infrastructure/query?status=new&status=assigned&status=reopened&group=milestone&keywords=~Meeting&order=priority
20:01:37 * davivercillo is here
20:01:47 <mmcgrath> So the first and only meeting item is, again,
20:01:49 <mmcgrath> .ticket 1503
20:01:54 <onekopaka> hello
20:01:55 <mmcgrath> last time we talked about the whole meeting
20:01:59 * onekopaka is here.
20:02:01 <mmcgrath> this time lets kill it after 10 minutes max.
20:02:03 <mmcgrath> abadger1999: around?
20:02:05 <dgilmore> mmcgrath: i think we could again
20:02:07 <smooge> here
20:02:20 * abadger1999 arrives
20:02:36 <dgilmore> did we get input on what source we have to make available? and how we have to make it available if we went with AGPL everywhere?
20:02:46 <mmcgrath> abadger1999: so give us the latest poop
20:02:48 <smooge> dgilmore, spot is working on it with legal
20:02:50 <LinuxCode> I assume this is the AGPLv3 issue ?
20:02:59 <smooge> or I should wait until abadger1999
20:03:00 * skvidal is here
20:03:03 <LinuxCode> wasn't somebody supposed to be here ?
20:03:20 <ricky> LinuxCode: They were at OSCON
20:03:27 <LinuxCode> ohh
20:03:28 <abadger1999> spot is talking to legal.  So I think we don't have much to say here.
20:03:32 <LinuxCode> the person spot mentioned ?
20:03:36 <abadger1999> Unless people have new questions since last week
20:03:46 <mmcgrath> abadger1999: so no progress since last week?
20:03:52 * LinuxCode hasn't, but sees both sides of the argument
20:04:15 <dgilmore> abadger1999: right. until we clear up the legal requirements we can't do anything
20:04:16 <abadger1999> mmcgrath: Well, spot has the list of questions now and it has gone from him to legal.  But we haven't gotten a writeup yet.
20:04:22 <abadger1999> So we can make no progress.
20:04:45 <mmcgrath> k
20:04:56 <mmcgrath> Is Bradley Kuhn here?
20:04:59 * mmcgrath notes domsch invited him
20:05:11 <ricky> I think he was OSCONing, so mdomsch moved it up a week
20:05:22 <mmcgrath> ah, k
20:05:26 <abadger1999> I'd like to go ahead with relicensing python-fedora to lgplv2+ since that won't be affected by whatever we decide regarding agpl.
20:05:28 <mmcgrath> ah.  so he did :)
20:05:49 <mmcgrath> abadger1999: What all are we accomplishing with that?
20:06:20 <abadger1999> mmcgrath: Right now it's gplv2.  LGPLv2+ will make it so more people can use it.
20:06:46 <abadger1999> for instance, if someone writes code with the apache license, python-fedora will work under the LGPLv2+.
20:06:58 <abadger1999> also, mdomsch would like it to change.
20:07:10 <mmcgrath> k
20:07:16 <mmcgrath> well, if thats all we have on that I'll move on
20:07:17 <abadger1999> mirrormanager is MIT licensed.  But when used with python-fedora, the combined work is GPLv2.
20:07:35 <skvidal> together - they fight crime!
20:07:52 <abadger1999> with python-fedora LGPLv2+, mirrormanager remains MIT.
20:07:52 <onekopaka> hmm.
20:08:07 <mmcgrath> skvidal: that's good, I'm pretty sure smolt is committing some....
20:08:14 <mmcgrath> abadger1999: ok, thanks for that
20:08:20 <mmcgrath> anyone have anything else on this topic before we move on?
20:08:52 * LinuxCode shakes head
20:08:59 <mmcgrath> k
20:09:13 <mmcgrath> #topic Infrastructure -- Mirrors and 7 years bad luck.
20:09:19 <LinuxCode> haha
20:09:20 <smooge> you broke it
20:09:25 <LinuxCode> its your fault
20:09:26 <LinuxCode> lol
20:09:27 <mmcgrath> So, as far as I know the mirrors are on the mend.
20:09:41 <LinuxCode> +1 can confirm that
20:09:41 <smooge> I believe so.
20:09:47 <LinuxCode> had first updates today
20:10:04 <mmcgrath> jwb just did a push that finished.
20:10:09 <mmcgrath> so there's a bash update coming out.
20:10:17 <mmcgrath> we're trying to time how long that update takes to get to our workstations
20:10:25 <jwb> mmcgrath, i see it on d.f.r.c, but i haven't gotten it via yum yet
20:10:26 <mmcgrath> so if any of you see a bash update available, please do ping me and let me know.
20:10:37 <jwb> (and i'm doing yum clean metadata/yum update every little bit)
20:10:55 * mmcgrath verifies he also doesn't have it
20:11:21 * LinuxCode cleans up and sees if it has made it to the UK
20:11:24 <mmcgrath> yeah, no update yet.
20:11:27 <mmcgrath> So keep an eye on that.
20:11:32 <mmcgrath> Here's some of the stuff that's been done
20:11:42 <mmcgrath> 1) we've put limits on the primary mirrors
20:11:56 <mmcgrath> 2) we've started building our own public mirrors system which, for now, will be very similar to the old mirrors system.
20:11:59 <mmcgrath> but we control it
20:12:22 <mmcgrath> 3) we've leaned harder on the various groups we're blocked on to get our i2 mirror and our other primary mirror back up
20:12:26 <mmcgrath> we're supposed to have 3 of them.
20:12:43 <mmcgrath> But still no root cause, though it sounds like a combination of things.
20:13:05 <mmcgrath> To me the biggest issue isn't that the problem came up, it's that it took so long to fix and our hands were largely tied for it.
20:13:16 * davivercillo came back...
20:13:18 <mmcgrath> So we're working hard to build our own mirrors out that we can work on, monitor, etc.
20:13:34 <Southern_Gentlem> su
20:13:38 <Southern_Gentlem> su
20:13:42 <ricky> Password:
20:13:43 <LinuxCode> yeh, was a bummer that it took that long
20:13:43 <dgilmore> mmcgrath: how is that going to work?
20:13:52 <nirik> Southern_Gentlem: sudio? :)
20:14:07 <Southern_Gentlem> wrong window sorry
20:14:09 <ricky> dgilmore: It'll just be rsync servers that mount the netapp
20:14:12 <mmcgrath> dgilmore: for now we've got sync1 and sync2 up (which are round-robin behind sync.fedoraproject.org) which we're going to dedicate to our tier0 and tier1 mirrors.
20:14:23 <mmcgrath> They mount the netapp and basically do the same thing download.fedora.redhat.com did
20:14:28 <mmcgrath> Long term though...
20:14:33 <ricky> Have we decided to dedicate it, or just have connection slots reserved for tier 0/1?
20:14:46 <dgilmore> mmcgrath: ok what about from other data centres?
20:14:58 <dgilmore> RDU and TPA?
20:15:02 <ricky> rsync's connection limiting allows us to be pretty flexible with how we do that
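[A minimal sketch of the per-module connection limiting ricky is describing, in rsyncd.conf form; the module name, path, and limit below are illustrative assumptions, not the production config. Note that "max connections" needs a shared "lock file" to be enforced across rsync processes:

    [fedora-tier0]
        path = /srv/pub/fedora
        read only = yes
        # the lock file is what lets the limit be enforced across forks
        lock file = /var/run/rsyncd-fedora.lock
        # cap simultaneous clients; tier 0/1 hosts could get their own,
        # more generous module so slots stay reserved for them
        max connections = 20
]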
20:15:03 <mmcgrath> ricky: right now we're not going to tell others about it and we might explicitly deny access to the non-tier0/1 mirrors
20:15:05 <smooge> TPA?
20:15:07 <mmcgrath> notting: FYI this might interest you.
20:15:08 <LinuxCode> mmcgrath, so, the other mirrors grab from sync1 and sync2 ?
20:15:08 <ricky> OK
20:15:13 <ricky> smooge: Tampa, I think
20:15:14 <mmcgrath> LinuxCode: only tier0 and 1
20:15:17 <LinuxCode> k
20:15:21 <smooge> oh I thought Tampa was gone
20:15:22 <mmcgrath> dgilmore: so the future of that is going to look like this.
20:15:26 <ricky> The other mirrors should technically grab from tier 0 or 1
20:15:42 <mmcgrath> TPA's mirror has been offline since February, but it is physically in PHX2 now, just not completely hooked up.
20:15:55 <smooge> ah ok
20:15:59 <mmcgrath> They're going to get it setup, get the snapmirror working again, then we'll have some servers there that mount that netapp and share.
20:16:07 <mmcgrath> it'll be similar if not identical to what we have in PHX1.
20:16:15 <mmcgrath> for me the concern is whether the limiting factor is bandwidth or disk space.
20:16:33 <mmcgrath> and if it's bandwidth, we might need additional servers in PHX2 which I understand has a much faster pipe.
20:16:38 <mmcgrath> That's all regular internet stuff.
20:16:45 <LinuxCode> mmcgrath, what about failure ?
20:16:45 <mmcgrath> on the I2 side we're going to get RDU setup
20:16:56 <ricky> And will we get access to the rsync servers on the non-PHX sites?
20:17:01 <dgilmore> mmcgrath: so the same thing in RDU?
20:17:01 <mmcgrath> LinuxCode: well we'll have one in PHX and one in PHX2 so we'll be redundant in that fashion.
20:17:08 <LinuxCode> k
20:17:10 <mmcgrath> dgilmore: similar in RDU, though probably not a whole farm of servers.
20:17:13 <dgilmore> couple of boxes in front of the netapp?
20:17:15 <mmcgrath> we'll have proper I2 access there.
20:17:16 * SmootherFrOgZ is around btw
20:17:26 <mmcgrath> but one thing I'm trying to focus on there is using ibiblio as another primary mirror.
20:17:43 <mmcgrath> Or at least work it into our SOP so it can be pushed to very quickly and easily instead of pulled from.
20:17:52 <mmcgrath> if we see one sign of problems from our primary mirrors, that can be set up and going.
20:17:55 <mmcgrath> we were lucky this last week.
20:18:03 <mmcgrath> no ssh vulnerabilities were actually real for example :)
20:18:40 <mmcgrath> so that's really what it's all going to look like.
20:18:52 <mmcgrath> smooge has some concerns about IOPS on the disk trays.
20:19:04 <mmcgrath> and we may have to take a more active role in determining what kind of trays we want in the future.
20:19:05 <dgilmore> mmcgrath: cool
20:19:13 <mmcgrath> this one was worked out between the storage team and netapp over months of their research.
20:19:34 <dgilmore> mmcgrath: its all sata right?
20:19:40 <smooge> yes.. the trays and how they were 'set up' were based on the assumption they'd be FC.
20:19:41 <dgilmore> how big are the disks?
20:19:47 <smooge> and now they are 1TB SATAs
20:20:02 <smooge> the issue is that the SATAs perform at 1/3 the rate FC would
20:20:13 <dgilmore> so we have one shelf in each location?
20:20:19 <mmcgrath> smooge: I'd have hoped that months of research would have shown that though.
20:20:21 <smooge> but the FC would cost 8x more
20:20:27 <mmcgrath> I think their thoughts were that our FC rates were very underutilized.
20:20:42 <dgilmore> smooge: right id expect that kind of decrease in performance
20:21:06 <mmcgrath> So the longer term future on all of this is still in question.
20:21:09 <LinuxCode> mmcgrath, for the cost, you could make more mirrors
20:21:12 <smooge> mmcgrath, it could have been, but more like 3/5ths of capacity
20:21:26 <mmcgrath> and I'm pretty sure our problems caused kernel.org's problems last week as well.
20:21:31 <LinuxCode> and maybe raid6+0 them
20:21:32 <mmcgrath> and their machines are f'ing crazy fast.
20:21:40 <LinuxCode> ehh raid5+0
20:21:52 <LinuxCode> raid6 be slow
20:21:58 <smooge> LinuxCode, the issue comes down to the number of spindles either way
20:22:07 <LinuxCode> hmm
20:22:08 <smooge> and the bandwidth of the controllers
20:22:09 <mmcgrath> LinuxCode: raid6 and raid5 with lots of disks have nearly identical read performance.
20:22:19 <LinuxCode> true that mmcgrath
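[A back-of-envelope Python sketch of mmcgrath's point (disk counts and sizes below are made up): with distributed parity, large reads stripe across every spindle in both RAID5 and RAID6, so read throughput is nearly identical; the layouts differ in usable capacity and in the per-write I/O penalty.

    def raid_tradeoffs(n_disks, disk_tb):
        # Reads hit all n_disks spindles in both layouts; RAID6 gives up
        # one more disk to parity and pays extra I/Os per random write.
        return {
            "raid5": {"usable_tb": (n_disks - 1) * disk_tb, "write_ios_per_update": 4},
            "raid6": {"usable_tb": (n_disks - 2) * disk_tb, "write_ios_per_update": 6},
        }

    # e.g. a hypothetical 14-disk tray of the 1TB SATA drives smooge mentioned
    print(raid_tradeoffs(14, 1))
]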
20:22:36 <mmcgrath> But still, no ETA on any of that.
20:22:38 <LinuxCode> even with striping applied too ?
20:22:43 <smooge> anyway.. it is what it is or whats done is done or some other saying
20:22:57 <LinuxCode> smooge, hehe
20:22:58 <mmcgrath> I have a meeting with Eric (my primary RH contact) to find out about funding for new servers and what not for all of this.
20:23:05 <mmcgrath> and the scary part is we had these issues with just 1T of storage.
20:23:15 <mmcgrath> these trays were purchased so we could have closer to 8T of storage to use.
20:23:32 <LinuxCode> hmmm
20:23:39 <mmcgrath> If we find the trays can't handle it.... then I don't know what's going to happen but I know the storage team won't be happy.
20:23:49 <mmcgrath> So anyone have any additional questions on any of this?
20:23:59 <LinuxCode> are these san trays ?
20:24:16 <smooge> netapp
20:24:21 <LinuxCode> k
20:24:50 <mmcgrath> K, so that's that.
20:25:01 <mmcgrath> #topic Infrastructure -- Oddities and messes
20:25:22 <mmcgrath> So have things seemed more fluxy than normal to anyone else or is it just me?
20:25:33 <mmcgrath> We've largely corrected the ProxyPass vs RewriteRule [P] thing
20:25:49 <mmcgrath> but I still feel there's lots of little outstanding bugs that have crept in over the last several weeks that we're still figuring out.
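[For context on the ProxyPass vs RewriteRule [P] cleanup just mentioned, a hedged sketch of the two Apache styles being reconciled; the path and backend host are illustrative, not the actual proxy config:

    # mod_proxy style
    ProxyPass        /smolt http://app1.example.internal/smolt
    ProxyPassReverse /smolt http://app1.example.internal/smolt

    # mod_rewrite style, proxying via the [P] flag
    RewriteRule ^/smolt/(.*)$ http://app1.example.internal/smolt/$1 [P]

The two look equivalent, but small differences (ProxyPassReverse handling, trailing slashes, rewrite ordering) are exactly the kind of thing that produces the little bugs just described.]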
20:25:58 <mmcgrath> of particular concern to me at the moment is smolt.
20:26:12 <mmcgrath> but there were other things like the openvpn issue ricky discovered yesterday.
20:26:23 <dgilmore> it seems like nagios has been having moments where we get a lot of alerts
20:26:31 <ricky> Are we sure that the smolt issues were necessarily from the merge?
20:26:45 <ricky> smolt was one of the ones whose proxy config was complex enough that I didn't touch it much
20:27:08 <mmcgrath> ricky: I actually think the smolt issues were discovered not because of the change but because of a cache change you made.
20:27:15 <ricky> Ahhh, yeah.
20:27:22 <mmcgrath> I think nagios had been checking a cached page the whole time so even when smolt went down, nagios just didn't notice.
20:27:40 <LinuxCode> hehe
20:27:46 <mmcgrath> or at least didn't notice it unless things went horribly bad.
20:27:52 <mmcgrath> I'd like to have more people looking at it though
20:28:01 <mmcgrath> onekopaka has been doing some basic hits from the outside.
20:28:07 <mmcgrath> basically a "time smoltSendProfile -a"
20:28:10 <onekopaka> mmcgrath: I have.
20:28:12 <mmcgrath> and the times were all over the place.
20:28:13 * sijis is sorry for being late.
20:28:18 <mmcgrath> including about a 5% failure rate.
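[A Python sketch of the outside-in check onekopaka has been running: repeatedly time smoltSendProfile -a and track failures. Only the command itself comes from the discussion; the run count and reporting are assumptions.

    import subprocess, time

    runs, failures, durations = 20, 0, []
    for _ in range(runs):
        start = time.time()
        # the client-side submission command mentioned above
        rc = subprocess.call(["smoltSendProfile", "-a"])
        durations.append(time.time() - start)
        if rc != 0:
            failures += 1

    print("min/avg/max: %.1fs / %.1fs / %.1fs"
          % (min(durations), sum(durations) / len(durations), max(durations)))
    print("failure rate: %.0f%%" % (100.0 * failures / runs))
]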
20:28:42 <mmcgrath> Of course I hate to be spending time on something that is clearly not in Fedora's critical path, but we've got to knock it out
20:28:59 <LinuxCode> does smolt provide some debugging output that's useful ?
20:29:13 <mmcgrath> LinuxCode: it's almost entirely blocking on db.
20:29:13 <LinuxCode> as to network, dns issues
20:29:16 <mmcgrath> we even have the queries.
20:29:18 <LinuxCode> hmm
20:29:24 <LinuxCode> weird
20:29:40 <davivercillo> mmcgrath: I can try to help you with this ...
20:29:44 <ricky> Do we know which queries are causing the locking though?
20:30:05 <mmcgrath> ricky: not really
20:30:12 <mmcgrath> I still don't even understand why they're being locked
20:30:19 <mmcgrath> and why does locktime not mean anything?
20:30:20 <LinuxCode> is there a conn limit set up on the db end for the smolt unit ?
20:30:28 <ricky> locktime?
20:30:35 <mmcgrath> LinuxCode: it's not that weird, it's got 80 million rows :)
20:30:40 <mmcgrath> ricky: yeah in the slow queries log
20:30:41 <ricky> The time column on processlist is the time that the query has been in its current state
20:30:42 <abadger1999> Do we have any reproducers?  I can try with postgres but we'd need to know whether we've gained anything or not.
20:30:51 <ricky> Hm, I remember looking the slow queries one up
20:31:11 <mmcgrath> davivercillo: how's your db experience?
20:32:01 <LinuxCode> mmcgrath, so queries get processed or a connection passes to the db server, but it doesn't handle it, correct ?
20:32:02 <davivercillo> mmcgrath: not so much yet... but I can learn fast ! :D
20:32:16 <mmcgrath> LinuxCode: the queries take several seconds to complete
20:32:18 <mmcgrath> for example
20:32:55 <LinuxCode> hmmm
20:33:01 <mmcgrath> I don't even have an example at the moment.
20:33:06 <LinuxCode> np
20:33:10 <ricky> Ah, lock_time is the time the query spent waiting for a lock
20:33:11 <mmcgrath> but they're there.
20:33:27 <ricky> So for the queries in the lock state with high times in processlist, they should have high lock_time if they're in the slow query log
20:33:33 <mmcgrath> ricky: so if a query is running on a table for 326 seconds... does that mean it was locked that whole time?
20:33:47 <ricky> Depends on where the 326 number came from
20:34:26 <mmcgrath> ricky: in the slow queries log, do you see any queries that have a Lock_time above 0?
20:34:49 <mmcgrath> oh, there actually are some.
20:35:13 <mmcgrath> only 56 of 2856 though
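[A hedged Python sketch of the count mmcgrath just did by hand: scan the MySQL slow query log and tally entries whose Lock_time is above zero. The log path is an assumption; the header line format ("# Query_time: ...  Lock_time: ...") is the standard slow-log form.

    import re

    pattern = re.compile(r"# Query_time: ([\d.]+)\s+Lock_time: ([\d.]+)")
    total = locked = 0
    with open("/var/log/mysqld-slow.log") as log:  # path is a guess
        for line in log:
            m = pattern.match(line)
            if m:
                total += 1
                if float(m.group(2)) > 0:
                    locked += 1
    print("%d of %d slow queries spent time waiting on a lock" % (locked, total))
]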
20:35:16 <mmcgrath> So anyway
20:35:24 <mmcgrath> davivercillo: how's your python?
20:35:25 <LinuxCode> could it be that smolt sends some weird query, that then causes it to hiccup ?
20:35:40 <mmcgrath> LinuxCode: nope, it's not weird queries :)
20:35:48 <LinuxCode> just a wild thought
20:35:51 <mmcgrath> it's just the size of the db
20:35:51 <davivercillo> mmcgrath: I think that is nice...
20:35:52 <LinuxCode> hehe
20:36:01 <onekopaka> joins + size = slowness
20:36:22 <mmcgrath> well and that's something else we need to figure out, we've spent so much time optimizing render-stats (which is still pretty killer)
20:36:29 <mmcgrath> but we haven't looked at optimizing sending profiles.
20:36:30 <LinuxCode> mmcgrath, yeh but if you do something funky + huge db = inefficient
20:36:31 <davivercillo> mmcgrath: I did that script checkMirror.py, do u remember ?
20:36:37 <mmcgrath> huge db == inefficient :)
20:36:42 <davivercillo> :P
20:36:48 <LinuxCode> mmcgrath, haha of course
20:36:50 <mmcgrath> davivercillo: yeah but that was smaller :)
20:36:57 <LinuxCode> but there is no way around that
20:36:58 <mmcgrath> davivercillo: ping me after the meeting, we'll go over some stuff.
20:37:01 <davivercillo> mmcgrath: yep, I know... :P
20:37:03 <mmcgrath> if any of you are curious and want to poke around
20:37:07 <davivercillo> mmcgrath: Ok !
20:37:09 <mmcgrath> you can get a sample db to download and import here:
20:37:21 <mmcgrath> https://fedorahosted.org/releases/s/m/smolt/smolt.gz
20:37:24 <mmcgrath> It's about 500M
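[A hedged sketch of pulling down that dump and loading it, assuming a local MySQL database named smolt already exists; the database name and client invocation are assumptions, only the URL is from the meeting.

    import subprocess
    import urllib.request

    url = "https://fedorahosted.org/releases/s/m/smolt/smolt.gz"
    urllib.request.urlretrieve(url, "smolt.gz")

    # stream the decompressed dump into the mysql client,
    # the moral equivalent of: zcat smolt.gz | mysql smolt
    zcat = subprocess.Popen(["zcat", "smolt.gz"], stdout=subprocess.PIPE)
    subprocess.run(["mysql", "smolt"], stdin=zcat.stdout, check=True)
    zcat.stdout.close()
    zcat.wait()
]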
20:37:27 <thekad> mmcgrath, yes! thanks!
20:37:40 * thekad has been waiting to load something like that
20:37:59 <mmcgrath> Ok, I don't want to take up the rest of the meeting with smolt stuff so we'll move on.
20:38:20 <mmcgrath> #topic Infrastructure -- Open Floor
20:38:27 <mmcgrath> Anyone have anything they'd like to discuss?
20:39:03 <dgilmore> importing meat pies from australia?
20:39:18 * mdomsch invited Bradley Kuhn to a future meeting to talk about agplv3
20:39:25 <thekad> mmcgrath, actually, about this smolt stuff, is there a ticket where we can track it?
20:39:28 <mdomsch> we may have it cleared up by then, maybe not.
20:39:33 <SmootherFrOgZ> dgilmore:  :)
20:39:33 <dgilmore> mdomsch: have at it
20:39:38 <mmcgrath> thekad: not actually sure.  I'll create one if not.
20:39:45 <LinuxCode> mmcgrath, I'd just like to know when you guys have time to help me do that new mapping of infra
20:39:57 <mmcgrath> mdomsch: yeah we were talking about it a bit earlier.  I saw your first email but not the second email :)
20:39:57 <LinuxCode> it will probably take a few weeks, if not longer
20:40:01 <smooge> dgilmore, are they mutton meat pies?
20:40:03 <thekad> I've seen this topic pop up several times, but we start from scratch every time, I think we could benefit there :)
20:40:09 <dgilmore> smooge: no
20:40:11 <LinuxCode> if that ticket still even exists
20:40:17 <smooge> dgilmore, then no thankyou
20:40:31 <dgilmore> smooge: four'n'twenty pies
20:40:37 <dgilmore> smooge: best ones ever
20:40:59 <mmcgrath> Ok, anyone have anything else they'd like to discuss?
20:41:03 * thekad is being dragged away by his 2yo daughter, back in 5
20:41:04 <smooge> dgilmore, as long as they don't have raisins and such in them
20:41:14 <LinuxCode> mmcgrath, see above
20:41:19 <LinuxCode> to replace this
20:41:22 <LinuxCode> https://fedoraproject.org/wiki/Infrastructure/Architecture
20:41:27 <LinuxCode> was in the talk some time ago
20:41:46 <mmcgrath> LinuxCode: yeah you were going to add docs to git.fedorapeople.org
20:41:48 <LinuxCode> there was a ticket, but not sure what happened to it
20:41:54 <mmcgrath> err git.fedorahosted.org/git/fedora-infrastructure.git :)
20:41:57 <LinuxCode> k
20:42:15 <LinuxCode> well I will have time now, but need you guys to explain to me exactly what's where
20:42:20 <mmcgrath> .ticket 1084
20:42:29 <LinuxCode> so I just ask some stupid questions now and then
20:42:39 <mmcgrath> LinuxCode: Do you have some time to work on it this afternoon?
20:42:50 <LinuxCode> it's kinda late now
20:42:51 <LinuxCode> ;-p
20:42:57 <LinuxCode> 21:42
20:43:01 <mmcgrath> LinuxCode: yeah I'll add some stuff.
20:43:17 <LinuxCode> k
20:43:20 <mmcgrath> for those docs I think it's less about where stuff physically is, and more about how the pieces fit together.
20:43:21 <LinuxCode> a list would be ok
20:43:26 <LinuxCode> that'd give me a starting point
20:43:33 <mmcgrath> that's really what people are talking about when they do architecture
20:43:34 <LinuxCode> yah of course
20:43:40 <LinuxCode> to give people a better idea
20:43:40 <mmcgrath> LinuxCode: <nod>  i'll update that ticket shortly actually
20:43:45 <LinuxCode> excellent
20:43:53 <smooge> I have an open floor question
20:43:53 <mmcgrath> I think starting on the Proxy servers first would be a good way to go.
20:43:57 <mmcgrath> smooge: have at it
20:44:00 <LinuxCode> def
20:44:09 <LinuxCode> we talk another time
20:44:31 <smooge> Someone was working on an inventory system earlier. Does anyone remember who it was, where it was, etc?
20:44:37 <smooge> I can't find any reference outside of IRC :)
20:44:44 <LinuxCode> inventory....
20:44:51 <LinuxCode> kinda rings a bell....
20:44:51 * nirik thinks it was ocsinventory. Not sure who was doing it tho.
20:45:12 <mmcgrath> smooge: I think it was boodle
20:45:14 <sijis> i saw something on the list about ipplan. is that it?
20:45:15 <mmcgrath> .any boodle
20:45:20 <smooge> boodle is a tool?
20:45:25 <mmcgrath> mdomsch: you work with boodle right?
20:45:25 <smooge> boodle is a person?
20:45:35 <mmcgrath> boodle is a dude(le)
20:45:40 <ricky> Heh
20:45:42 <LinuxCode> http://publictest10.fedoraproject.org/ocsreports/
20:45:44 <LinuxCode> thats in my ticket
20:45:48 <LinuxCode> not sure if that helps
20:45:51 <mdomsch> mmcgrath, yes
20:45:53 <LinuxCode> the machine ain't up
20:45:55 <smooge> LinuxCode, what ticket
20:46:00 <mmcgrath> mdomsch: he was working on the inventory stuff
20:46:02 <LinuxCode> https://fedorahosted.org/fedora-infrastructure/ticket/1084
20:46:05 <LinuxCode> scroll to bottom
20:46:12 <LinuxCode> 03/16/09 20:36:44 changed by boodle
20:46:13 <mdomsch> mmcgrath, I remember; I haven't seen anything on that in a bit
20:46:16 <mdomsch> ha
20:46:22 <smooge> LinuxCode, thanks.. my browser skills FAILED
20:46:22 <mdomsch> yeah, since about then
20:46:32 <LinuxCode> smooge, haha
20:46:33 <mmcgrath> mdomsch: I just didn't know if he was still working on it or what
20:46:34 <LinuxCode> ;-D
20:46:41 <mmcgrath> but I think smooge has an itch to get it going.
20:46:48 <mmcgrath> and it's probably best to let him scratch it :)
20:46:56 <mdomsch> smooge, go for it
20:47:07 <mdomsch> just put a note in that ticket so he knows
20:47:08 <LinuxCode> that'd be something useful to me actually
20:47:15 <thekad> bump the ticket
20:47:16 <smooge> ok cool. mdomsch can you send me an email address so I can contact him too
20:47:19 <LinuxCode> to make those updated diagrams
20:47:24 <ricky> smooge: What was the software you had experience with again?
20:47:33 <smooge> exactly what he was using
20:47:35 <mmcgrath> I swear there was an inventory ticket he was working on
20:47:38 <ricky> Oh
20:47:48 <ricky> ocsinventory?  That might have ended...  a bit poorly
20:47:50 <smooge> mmcgrath, probably I have epic fail this week with searching
20:48:04 <ricky> I remember one of the ones he was trying, I found bad security problems on a quick lookover
20:48:16 <LinuxCode> ricky, with the app ?
20:48:23 <mmcgrath> ricky: do you know what happened with pt10?
20:48:24 <ricky> Yeah, grepping my IRC logs now
20:48:34 * mdomsch has to run; later
20:48:37 <mmcgrath> mdomsch: laterz
20:48:46 <LinuxCode> http://publictest10.fedoraproject.org/glpi/
20:48:49 <LinuxCode> there is that one too
20:48:55 <LinuxCode> also kinda rings a bell
20:48:57 <smooge> yeah.. they tie into one another
20:48:58 <ricky> I have no idea, it might have just not gotten started on reboot
20:49:05 <LinuxCode> smooge, kk
20:49:06 <smooge> ocsng is the tool that polls the boxes
20:49:17 <smooge> glpi is the perty front end where you can enter data
20:49:22 <mmcgrath> .ticket 1171
20:49:30 <mmcgrath> smooge: see that ticket as well
20:49:49 <thekad> mmcgrath, that's the one
20:49:49 <ricky> Yeah, OCS was the security hole one
20:50:27 <smooge> geez I really failed
20:50:28 <ricky> Like I was able to delete a row from some table without logging on or anything
20:50:30 <thekad> .ticket 1084
20:50:37 <smooge> I went looking for GLPI
20:50:41 <thekad> that's the next one
20:50:43 <ricky> I didn't look much closer at the security stuff after an initial look at it though.
20:50:44 <nirik> ricky: nasty. ;( It's in fedora, you might note that to the maintainer.
20:50:44 <LinuxCode> ricky, did you report that ?
20:51:11 <mmcgrath> well
20:51:13 <mmcgrath> just the same
20:51:21 <mmcgrath> smooge: you want to open up an "inventory management" ticket?
20:51:26 <ricky> mmcgrath: Looks like publictest10 just didn't get started on a reboot - should I start it again?
20:51:42 <smooge> mmcgrath, put that down as an action please
20:51:46 <mmcgrath> ricky: sure, smooge might be able to use it
20:51:48 <smooge> I will start on it right away
20:51:55 <mmcgrath> #action Smooge will create a ticket and get started on inventory management
20:52:14 <smooge> ricky, we will see if the updated version has the bug and then work it out
20:52:31 * davivercillo needs to go home now! See you later!
20:52:39 <ricky> OK.  I just remember getting a really bad impression from that and the other code, but hopefully some of this is fixed.
20:52:46 <davivercillo> Good Night !
20:52:54 <mmcgrath> davivercillo: ping me when you get time later
20:52:57 <mmcgrath> or tomorrow :)
20:52:59 <mmcgrath> or whenever
20:53:00 <davivercillo> mmcgrath: for sure
20:53:13 <davivercillo> bye
20:53:32 <mmcgrath> So we've only got 7 minutes left, anyone have anything else to discuss?
20:54:05 * ricky wonders if sijis wanted to say anything about blogs
20:54:09 <mmcgrath> sijis: anything?
20:54:13 <mmcgrath> abadger1999: or anything about zikula?
20:54:27 <sijis> yeah, as you saw, the authentication part on the blogs is working.
20:54:54 <ricky> Thanks for working on that plugin
20:54:55 <abadger1999> mmcgrath: When should we get docs people started in staging?
20:55:01 <sijis> we are also able to verify that minimum group memberships are met before allowing a login
20:55:05 <abadger1999> I think they have all of the packages in review.
20:55:17 <abadger1999> But they're not all reviewed yet; some are blocking on licensing.
20:55:28 <thekad> sijis, which groups are those? cla_done?
20:55:45 <sijis> a person has to be in cla_done and one other non-cla group
20:55:46 <ricky> Is http://publictest15.fedoraproject.org/cms/ really as far as they're going to take the test instance?  Not trying to complain, but I'm just used to seeing slightly more complete setups in testing first
20:55:46 <mmcgrath> abadger1999: how long till the licensing is resolved do you think?
20:56:59 <sijis> there are a few minor things to work out.. but it should be ready to be tested.
20:57:12 <ke4qqq> ricky - we need to spend more time on pt15 - we largely haven't done anything with it in months. specifically we need to get all of the pieces that we have packaged, and beat on it
20:57:23 <abadger1999> mmcgrath: I encountered problems in both packages I reviewed.  One has been resolved (I just need to do a final review); the other is waiting on upstream.  docs has contacted several people related to that
20:57:27 <ricky> Ah, cool, so maybe not quite staging-ready yet
20:57:46 <ke4qqq> ricky: hopefully not far off
20:57:55 <abadger1999> ianweller also encountered some major problems in one that he reviewed -- but I think it might have been optional.
20:57:56 <ricky> Cool, thanks
20:58:24 <abadger1999> ke4qqq and sparks would know for sure.
20:59:01 <ke4qqq> we still have three (and maybe four, though that includes the one that's waiting on abadger1999's final approval) that are blocked on licensing problems
20:59:01 <mmcgrath> abadger1999: hmm
20:59:08 <mmcgrath> abadger1999: what are the odds they won't be resolved?
20:59:36 <abadger1999> ke4qqq: Want to field that?  And any contingency if that happens?
20:59:59 <ke4qqq> mmcgrath: I think we'll workaround - upstream is pretty committed to fixing stuff
21:00:04 <ke4qqq> there is just a ton of stuff
21:00:15 <mmcgrath> <nod>
21:00:23 <mmcgrath> Ok, so we're at the end of the meeting time, anyone have anything else to discuss?
21:00:31 <jayhex> just want to say hi before we end. Julius Serrano here.
21:00:40 <mmcgrath> jayhex: hello Julius!
21:00:43 <mmcgrath> thanks for saying hey.
21:00:43 <thekad> welcome jayhex
21:00:48 <ricky> jayhex: Hey, welcome!
21:01:06 <sijis> jayhex: welcome.
21:01:29 <mmcgrath> Ok, if no one has anything else, we'll close in 30
21:02:06 <mmcgrath> #endmeeting