gluster-meeting
15:01:03 <jclift> #startmeeting Weekly Community Meeting
15:01:03 <zodbot> Meeting started Wed Mar 19 15:01:03 2014 UTC.  The chair is jclift. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:03 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:01:24 <jclift> Hi everyone, who's here?
15:01:26 <jclift> (hi Lala) :)
15:01:36 * msvbhat is present
15:01:48 <msvbhat> Hello all
15:01:58 <ndevos> hya!
15:02:09 <jclift> Etherpad for this meeting is here: http://titanpad.com/gluster-community-meetings
15:02:19 <tdasilva> o/
15:02:59 <discretestates> Hey (this is RJ)
15:03:15 <jclift> Hi RJ, thanks for being part of this. :)
15:03:31 <jclift> #topic Action Items from previous minutes
15:03:48 * xavih is here
15:03:49 <discretestates> Glad to be here :)
15:04:02 <jclift> "lalatenduM to set up bug triage process page in wiki"
15:04:12 <jclift> That's done earlier today yeah?
15:04:17 <lalatenduM> jclift, yes
15:04:27 <jclift> Cool. :)
15:04:29 <jclift> "jclift_ and johnmark to update guidelines on community standards"
15:04:36 <jclift> I still haven't done this
15:04:38 <jclift> Bad me
15:04:45 <jclift> Punting that to later
15:04:59 <jclift> "hagarth to send out a note on abandoning patches over 1yo"
15:05:20 <jclift> Haven't seen that happen, so I'll add an AI for him to do it this week
15:05:54 <jclift> #action hagarth to send note to gluster-devel, that patches older than 1 year will be abandoned
15:06:47 <jclift> Hmmm, has TitanPad dropped out for anyone else just now?
15:07:09 <jclift> k, it's back
15:07:16 <ndevos> F5 worked for me
15:07:22 <semiosis> :O
15:07:39 <jclift> :)
15:07:40 * sas_ was late to the party :(
15:07:41 <jclift> "jclift to get the Rackspace info and credentials from lpabon + johnmark"
15:07:44 <jclift> That's done
15:07:52 <jclift> "jclift to give the Rackspace credentials to lalatenduM + purpleidea so they can setup the Gluster puppet rackspace testing stuff"
15:07:56 <jclift> That's done too
15:08:04 <lalatenduM> jclift, thanks
15:08:09 <jclift> "lalatenduM + purpleidea to try setting up Rackspace vm's for automatic testing using puppet-gluster"
15:08:20 <jclift> lalatenduM: How's that going?
15:08:38 <lalatenduM> jclift, purpleidea is trying puppet + vagrant on rackspace
15:08:51 <jclift> Ahhh.  Badly written action item?
15:08:54 <lalatenduM> jclift, once it's ready we will deploy jenkins
15:09:04 <jclift> Cool
15:09:23 <jclift> Don't suppose you spun up the new jenkins instance in Rackspace?
15:09:43 <lalatenduM> jclift, nope
15:09:45 * jclift is just trying to find the owner of the new VM, since purpleidea said it's not him
15:09:53 <jclift> Heh, it's still a mystery then. ;)
15:09:54 <lalatenduM> jclift, I am not the one :)
15:09:59 <ndevos> jclift: that's probably lpabon
15:10:02 <lalatenduM> jclift, I think we should carry this AI to next week
15:10:13 <jclift> ndevos: Ahhh, cool.  I'll ask him.
15:10:28 <jclift> "jclift will include lpabon in the jenkins testing stuff"
15:10:41 <semiosis> jclift: when you find out who's responsible for jenkins let me know.  i'll add projects for the java stuff.
15:10:51 <lalatenduM> jclift, yup, that would be right
15:11:01 <semiosis> jclift: just need a jenkins login, i'm familiar with it
15:11:03 <jclift> In progress.  He has a Rackspace account, but I haven't pinged him yet to see if he's had time to do stuff
15:11:18 <jclift> Cool
15:11:50 <jclift> #action jclift will ping lpabon to find out if he's the owner of the new Jenkins instance in Rackspace
15:12:24 <jclift> #action jclift will put semiosis in touch with whoever the owner of the new Jenkins instance is, so he can get an account
15:12:30 <tdasilva> jclift: do you mean a new jenkins instance different than build.gluster.org?
15:12:37 <jclift> tdasilva: Yep
15:12:41 <semiosis> #action semiosis will add java projects to jenkins
15:12:47 <jclift> Good thinking :)
15:13:31 <tdasilva> jclift: I'm not aware of a new jenkins instance on rackspace, but lpabon has login credentials to build.gluster.org and the Rackspace slave VMs
15:13:38 <jclift> tdasilva: build.gluster.org is old + having issues, so we're looking to implement a better approach
15:13:41 <tdasilva> and he can create new slaves
15:13:59 <jclift> tdasilva: Cool.  He definitely sounds like the right guy then :)
15:14:14 <jclift> "msvbhat Will email Vijay to find out where the geo-replication fixes for beta3 are up to, and try to get them into 3.5.0 beta4 if they're not already"
15:14:22 <jclift> msvbhat: How's that looking?
15:15:23 <jclift> msvbhat: ping?
15:15:46 <msvbhat> jclift: Done
15:15:52 <jclift> Cool. :)
15:16:24 <jclift> k, does anyone know what this one is about? "several new xlators (encryption, cdc, changelog, prot_client, prot_server, readdir-ahead, dht) have .so.0 and .so.0.0.0 in both release-3.5 and master branch"
15:16:39 * lalatenduM is wondering if we have documentation available for Geo-rep in 3.5
15:16:41 <jclift> Seems to have been added to agenda in the wrong spot
15:16:59 <ndevos> sounds like autotools added libtool versioning to the xlators
15:16:59 <jclift> #action lalatenduM to find out if we have docs available for geo-rep in 3.5
15:17:01 <msvbhat> jclift: I think it's about removing some xlators from the spec, so that they don't get built
15:17:09 <lalatenduM> jclift, haha :)
15:17:11 <jclift> :)
15:17:26 <msvbhat> jclift: lalatenduM: The geo-rep docs are not complete :)
15:17:43 <ndevos> msvbhat: nah, the .so for a xlator should not be versioned, that is some switch in the Makefile.am
15:17:45 <jclift> lalatenduM: Well, that's your AI done easily then
15:17:53 <lalatenduM> msvbhat, can you take the documentation part?
15:18:34 <msvbhat> jclift: lalatenduM: Sure, they are already available as part of rhs-2.1. We need to port it to upstream
15:18:47 <jclift> ndevos: k.  That's definitely bug-sounding.  There's no mention on the etherpad of whether there's an associated BZ for it
15:18:53 <lalatenduM> msvbhat, cool
15:19:05 <jclift> msvbhat: Are you ok to do that porting?
15:19:24 <lalatenduM> jclift, I think I know abt the so file bug
15:19:31 <lalatenduM> i mean .so file
15:19:41 <ndevos> jclift: xlators with a .so.0.0.0 is surely a bug we want to have fixed in 3.5
15:19:46 <msvbhat> jclift: I will do it with some help from doc team. I need to know the best way to do it
15:20:04 <lalatenduM> jclift, that is on kaleb actually
15:20:08 <jclift> lalatenduM: k.  Are you ok to find out if there's an appropriate BZ for it, and if so to make sure it's on the 3.5.0 blocker list?
15:20:15 <jclift> Ahh, cool
15:20:28 <lalatenduM> jclift, I know of a RHS bug but not sure if we have cloned it
15:20:29 <jclift> So it's already all in progress, we don't need to do anything about it now?
15:20:35 <lalatenduM> jclift, yes
15:20:44 <jclift> k, moving on then
15:21:06 <jclift> msvbhat: For the geo-rep docs to be ported to upstream, any idea if that's already under way?
15:21:07 <ndevos> lalatenduM: can you add me on CC of that bug? I can patch that pretty quickly
15:21:20 <lalatenduM> ndevos, sure
15:21:44 <jclift> lalatenduM: Do it now, or should we make an action for that
15:21:45 <jclift> ?
15:21:46 <msvbhat> jclift: Not sure. I will find out and take it up if it's not already underway
15:21:59 <lalatenduM> jclift, yup on it :)
15:22:25 <jclift> #action msvbhat to find out if the porting of geo-rep docs from RHS to upstream is already underway
15:22:29 <jclift> lalatenduM: :)
15:22:49 <jclift> k, moving on
15:22:53 <ndevos> jclift: #action the .so.0.0.0 one too please, kaleb then can read up quicker :)
15:23:00 <jclift> k
15:23:21 <jclift> Hmmm, what's a good AI for it?
15:23:33 <jclift> ndevos: Can you action it, as you'll word it more clearly
15:23:50 <lalatenduM> jclift, put AI on me :)
15:23:59 <lalatenduM> ndevos, it's BZ 1076127
15:24:00 <ndevos> #action lala to find the bug for the xlator .so.0.0.0 and inform ndevos so he can fix it
15:24:18 <lalatenduM> ndevos, let's talk about it on #gluster-devel
15:24:21 * jclift hopes MeetBot is working today. Manual note creation took a while last time. ;)
15:24:37 <ndevos> #link https://bugzilla.redhat.com/1076127
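(Context on the .so.0.0.0 issue above: xlators are dlopen()'d plugins, so their shared objects should not carry libtool version suffixes. The "switch in the Makefile.am" ndevos mentions is most likely libtool's -avoid-version link flag; a minimal sketch, assuming the usual automake/libtool setup for an xlator, with "example" standing in for the real xlator name:)

    # xlators/.../src/Makefile.am (illustrative only)
    xlator_LTLIBRARIES = example.la
    # -module builds a dlopen()-able plugin; -avoid-version produces example.so
    # instead of example.so.0 / example.so.0.0.0
    example_la_LDFLAGS = -module -avoid-version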
15:24:55 <jclift> #topic Gluster 3.6
15:25:37 <jclift> We had the Planning Meeting last week, which seemed good.  But there's still mention in the Etherpad about a Go/No-Go meeting.
15:25:49 <jclift> Does anyone know what that's about? Because I don't.
15:26:29 <jclift> k, moving on
15:26:30 * msvbhat thinks the Go/No-Go meeting will be held later, when the development is done
15:26:37 <jclift> #topic 3.5.0
15:26:43 <ndevos> I guess it's about branching from master? But that should only be done when all the core features are in
15:26:57 * jclift shrugs
15:27:07 <jclift> We can ask Vijay when he's back :)
15:27:35 <ndevos> :)
15:27:56 <jclift> Asked Vijay about this earlier today, and he said:
15:27:57 <jclift> 11:55 <hagarth> 3.5 is in the final lap
15:27:57 <jclift> 11:55 <hagarth> most of the items in the blocker list have been addressed
15:28:00 <jclift> 11:56 <hagarth> excepting quota and glupy possibly ..
15:28:02 <jclift> 11:56 <hagarth> We can provide an update that the release might happen over the next two weeks
15:28:26 <jclift> It sounds like the .so naming thing above needs to be done too
15:28:42 <jclift> Atin also has a patch up for review that we'd like in 3.5.0:
15:28:54 <jclift> http://review.gluster.org/#/c/7292/
15:29:04 <jclift> Does anyone have time to look over that?  It seems pretty simple
15:29:23 <jclift> It's changing the default behaviour of remove-brick so it no longer force-commits
15:29:37 <jclift> e.g. to stop data-loss risk from a simple mistake
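(For reference, the remove-brick workflow in question; the command names below are from the existing gluster CLI, while the exact change to the default behaviour is what Atin's patch under review defines:)

    # safe path: migrate data off the brick, then commit
    gluster volume remove-brick VOLNAME HOST:/brick start
    gluster volume remove-brick VOLNAME HOST:/brick status
    gluster volume remove-brick VOLNAME HOST:/brick commit
    # 'force' skips the data migration and can lose data
    gluster volume remove-brick VOLNAME HOST:/brick force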
15:30:01 <jclift> lalatenduM: Your kind of thing?
15:30:13 <lalatenduM> jclift, yeah I will take a look
15:30:18 <jclift> Thanks. :)
15:30:35 <lalatenduM> jclift, somehow missed the patch till now :)
15:30:37 <jclift> #action lalatenduM To do a review of Atin's patch http://review.gluster.org/#/c/7292/
15:30:46 <jclift> lalatenduM: No worries. :)
15:31:10 <jclift> Anyone else have anything to mention for 3.5.0?
15:31:48 <jclift> k, moving on
15:31:48 * kshlm is here
15:31:58 <jclift> Cool :)
15:31:59 <lalatenduM> jclift, I think we still have a mem leak issue
15:32:05 <lalatenduM> jclift, in 3.5.0
15:32:07 <kshlm> jclift, atin's patch is on master.
15:32:14 <jclift> Oh, cool
15:32:19 <kshlm> he'll send a separate patch for 3.5 with just a deprecation message.
15:32:37 <jclift> kshlm: We need it accepted into master first don't we?
15:32:41 <jclift> e.g. to cherry-pick back?
15:32:46 * kshlm had forgotten today was Wednesday.
15:32:53 <jclift> Heh ;>
15:33:05 <jclift> Ahhh
15:33:07 <jclift> Sorry
15:33:10 <kshlm> Both patches are gonna be separate.
15:33:12 <jclift> I understand what you mean
15:33:14 <jclift> yeah
15:33:21 <jclift> Different one.  Got it.
15:33:31 <lalatenduM> jclift, the mem leak issue from Emmanuel's mail
15:33:40 <jclift> We need Atin to write the release-3.5 branch patch for it
15:33:55 <kshlm> Just asked him for it on the review.
15:34:02 <jclift> Thanks
15:35:12 <jclift> Anyone know how we should approach this memory leak problem?
15:35:46 <lalatenduM> jclift, I think hagarth is our answer :)
15:35:58 <jclift> Does anyone have time to run NetBSD up in a VM, and then isolate the commit that caused the problem?
15:36:34 <jclift> lalatenduM: He's been super busy recently, so we probably shouldn't rely solely on him for this.  Emmanuel Dreyfus seems to want assistance.
15:37:18 <jclift> I'm putting this down as "We're open to ideas on how to solve this one."
15:37:43 <jclift> lpabon: Cool.
15:38:07 <lpabon> lpabon: sorry im late.. took a little while to drive here
15:38:24 <lpabon> lol, i sent a message to myself...i need sleep
15:38:27 <jclift> lpabon: Np.  We had some stuff regarding you before
15:39:14 <jclift> lpabon: There's a VM in Rackspace that has -jenkins in its name.  Seems to be new-ish, started within the last month.  Is that one of yours?
15:39:56 <lpabon> yes, i am playing with making it a regression system.  Once I do, i'll clone it N times to make regressions run in parallel
15:40:05 <lpabon> but it's low priority atm
15:40:31 <lpabon> is the meeting over? i have a suggestion on the build system
15:40:32 <jclift> Cool.  I just wanted to know who the owner is, as it wasn't obvious in the Rackspace GUI
15:40:44 <jclift> lpabon: No, the meeting is still very much going
15:40:49 <jclift> http://titanpad.com/gluster-community-meetings
15:41:07 <jclift> lpabon: We're on the 3.5.0 item.  Just finished it mostly.
15:41:21 <jclift> TitanPad keeps dropping though :/
15:41:47 <lpabon> yeah, i don't see anyone else talking.. i think i'm in the correct channel, no?
15:41:54 <jclift> You are
15:41:59 <jclift> I just got busy typing
15:42:10 <jclift> And TitanPad has dropped out.
15:42:13 <jclift> Gah
15:42:24 <jclift> Moving on
15:42:33 <jclift> #topic 3.4.3
15:43:07 <jclift> In the now-down etherpad it mentioned that hagarth is planning to release it this week
15:43:16 <jclift> Anyone have objections to that?
15:43:53 <lalatenduM> jclift, nope
15:43:53 <jclift> #action jclift to find a more stable Etherpad than TitanPad
15:44:15 <jclift> GluserPad
15:44:20 <jclift> GlusterPad ;)
15:44:32 <lalatenduM> jclift, nice :)
15:44:52 <jclift> k, it's back
15:45:06 <jclift> On the etherpad it mentions a few items for 3.4.3
15:45:18 <jclift> There's https://bugzilla.redhat.com/show_bug.cgi?id=859581
15:45:23 <glusterbot> Bug 859581: high, unspecified, ---, vsomyaju, ASSIGNED , self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
15:46:22 <jclift> Looks like we need reviewers for this change: http://review.gluster.org/#/c/6737/
15:46:42 <jclift> The patch is in the Posix handling code
15:47:19 <jclift> lalatenduM kshlm msvbhat ndevos: Are any of you guys familiar with that section of the code, and could take a look?  It seems simple code-wise
15:47:25 <jclift> http://review.gluster.org/#/c/6737/2/xlators/storage/posix/src/posix-handle.h
15:47:51 <lalatenduM> jclift, nope, :(, but will be some day :)
15:48:00 <jclift> :)
15:48:16 <lalatenduM> will vote for ndevos :)
15:48:18 <lpabon> jclift: maybe Kaleb?
15:48:19 <ndevos> jclift: I think that's a backport...
15:48:23 <kshlm> I'm not familiar with it but I'll take a look.
15:48:28 <jclift> kshlm: Thanks
15:48:50 <jclift> #action kshlm will look into the review of http://review.gluster.org/#/c/6737/2 so we can get it into 3.4.3
15:49:21 <jclift> ndevos: Yeah, it could be.  Still needs reviewers though, etc. ;)
15:49:35 <ndevos> kshlm: that patch is available for review on master, release-3.5 and release-3.4; none seems to have been merged yet (bug 859581)
15:49:37 <glusterbot> Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=859581 high, unspecified, ---, vsomyaju, ASSIGNED , self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
15:50:05 <jclift> On the Etherpad it mentions two other 3.4.3 related items:
15:50:07 <kshlm> pk seems to have reviewed it.
15:50:22 <kshlm> pk has reviewed it.
15:50:35 <jclift> A BZ about a bug that Susant can't reproduce, so that one's been dropped
15:50:42 <ndevos> yeah, pk +1'd it, it only needs a +2 for the maintainers to merge it
15:51:07 <ndevos> kshlm: maybe you can ask pk to +2 those patches instead?
15:51:13 <kshlm> who's the maintainer for posix?
15:51:20 <jclift> And an item about the patch that Yang Feng requested on gluster-users, but can't be merged because the code around it has changed substantially between 3.4 and 3.5
15:51:34 <msvbhat> kshlm: Avati I suppose
15:51:38 * jclift nukes both of these old items from the etherpad
15:51:39 <kshlm> ndevos, I will.
15:52:05 <ndevos> the MAINTAINERS file does not have someone specific for the posix xlator
15:52:46 <jclift> k, do you guys want to continue this on gluster-devel after this meeting?
15:52:58 <jclift> Moving on...
15:53:00 <ndevos> #action kshlm to talk to pk about +2'ing the patches for bug 859581
15:53:00 <kshlm> so avati or vijay can +2 it and take it in then.
15:53:01 <glusterbot> Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=859581 high, unspecified, ---, vsomyaju, ASSIGNED , self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
15:53:12 <jclift> #topic other items
15:53:50 <jclift> kshlm: We're in the last few minutes of allotted time.  So, do that in gluster-devel. ;)
15:53:55 <semiosis> #action jclift to do a live webcast with johnmark talking about glusterflow!
15:54:02 <semiosis> it's your turn
15:54:10 <ndevos> hehe, he's putting it off
15:54:20 <jclift> discretestates: Want to intro yourself?
15:54:36 <discretestates> hey all
15:54:40 <semiosis> hi again
15:54:42 <discretestates> I'm RJ
15:54:48 <discretestates> I know Jay Vyas
15:54:49 <jclift> :)
15:54:55 <discretestates> He sent me your emails about Gluster and GSoC
15:54:59 <discretestates> I'm really interested in working with you
15:55:03 <ndevos> welcome discretestates!
15:55:15 <discretestates> You saw the proposal I sent to the list -- I've edited it more and am looking into people's suggestions
15:55:38 <discretestates> Since Jay has volunteered to mentor, that's great -- seems everyone is friendly so I can ask questions if needed
15:55:51 <discretestates> We will need to talk with Fedora since it's under them
15:55:56 <discretestates> I spoke briefly with John Mark
15:55:56 <jclift> discretestates: We're happy to have you.  You're enthusiastic, positive, and have a clue.  That's all good. :)
15:56:25 <discretestates> but I'm not sure if Fedora is expecting a Gluster project or not
15:56:30 <discretestates> Thanks, jclift
15:56:46 <discretestates> So, I'll work on all that and track John Mark and others down :)
15:56:55 <jclift> discretestates: Cool, was just about to ask.
15:57:09 <discretestates> If you want to introduce me to people, that's always helpful, too :)
15:57:16 <discretestates> I just don't want to catch Fedora off guard
15:57:23 <kshlm> I sent out a mail regarding that to the Fedora GSOC admin. I haven't heard back yet.
15:57:29 <jclift> Apparently there's not much time left to chase stuff up (3 days?) so if you're trying to get a hold of someone, but can't, let us know.
15:57:34 <discretestates> Thanks!
15:57:40 <semiosis> discretestates: unclear to me from the ML post, are we talking about a RESTful API for glusterfs control, or data/filesystem access?
15:57:47 <discretestates> data / file system access
15:58:00 <lpabon> discretestates: hi, do you mind taking a look at gluster-swift?
15:58:11 <discretestates> lpabon: That was on my list.  I'd love to
15:58:11 <semiosis> discretestates: right, we have UFO already
15:58:16 <discretestates> UFO?
15:58:20 <jclift> ndevos: You might be able to reverse proxy that too. ;)
15:58:25 <lpabon> please don't call it ufo
15:58:31 <lpabon> lol
15:58:44 <lpabon> ufo is definitely a nice goal, but we are far far from that
15:58:45 <semiosis> discretestates: also I'd like to talk to you about possibility of using my gluster java libs for this
15:59:01 <semiosis> lpabon: woops, noted
15:59:05 <discretestates> semiosis: Jay mentioned your libs.  That'd be great.  Someone also mentioned Python libs
15:59:16 <semiosis> python meh
15:59:19 * lalatenduM thinks it is g4s now :)
15:59:19 <semiosis> java woo!
15:59:19 <discretestates> If people could respond to my email on the dev list, I'd appreciate it
15:59:25 <discretestates> g4s?  okay
15:59:28 <lpabon> python rules-- java meh
15:59:28 <ndevos> jclift: you can reverse proxy almost anything, but it's the protocol/API the client speaks that you need to support (a web browser doesn't speak Swift)
15:59:35 <semiosis> lpabon: :D
15:59:38 <lpabon> :-D
15:59:46 <jclift> ndevos: Thanks. :)
15:59:47 <discretestates> I want to make sure I know about the current work in the community so I don't duplicate
15:59:54 <discretestates> I'll look into the swift backend and g4s
16:00:11 <lpabon> yes, it may provide most of what you are looking for...
16:00:14 <discretestates> Jay and I were thinking of emulating the WebHDFS API to allow Gluster to be used with any WebHDFS client (Spring, Fluentd, others have them)
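(Roughly what "emulating the WebHDFS API" means: clients speak plain HTTP against /webhdfs/v1/... paths, so an existing WebHDFS client could point at Gluster unchanged. The operations below are standard WebHDFS ones; the host and port are placeholders:)

    # list a directory
    curl -i "http://gluster-host:14000/webhdfs/v1/tmp?op=LISTSTATUS"
    # read a file (follows the redirect WebHDFS normally issues)
    curl -i -L "http://gluster-host:14000/webhdfs/v1/tmp/data.log?op=OPEN"
    # create a directory
    curl -i -X PUT "http://gluster-host:14000/webhdfs/v1/tmp/newdir?op=MKDIRS"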
16:00:37 * jclift points out we're at the end of our time limit
16:00:39 <lpabon> discretestates: the best part is that it uses wsgi, so you can write your own *filter*
16:00:50 <discretestates> oh awesome, okay
16:01:01 <lpabon> discretestates: and easily insert it in the I/O path
16:01:05 <discretestates> very nice
16:01:21 <jclift> Cool.  That sounds like a successful intro discretestates :)
16:01:23 <semiosis> jclift: final note, let me know if you have logstash related questions for glusterflow.  i'm a logstash dev too
16:01:35 <jclift> semiosis: Oh sweet
16:01:35 <lpabon> discretestates: you can also subclass a gluster-swift class and write your own app if you need to
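(The "write your own *filter*" idea lpabon describes is the standard paste-deploy WSGI middleware pattern used by Swift-based services; a minimal hypothetical sketch, with made-up names rather than gluster-swift's actual classes:)

    # example_filter.py -- generic WSGI filter in the Swift/paste-deploy style
    class RequestLogger(object):
        def __init__(self, app, conf):
            self.app = app    # next element in the WSGI pipeline
            self.conf = conf

        def __call__(self, environ, start_response):
            # inspect or modify the request here, then hand it down the pipeline
            print("handling %s %s" % (environ["REQUEST_METHOD"],
                                      environ["PATH_INFO"]))
            return self.app(environ, start_response)

    def filter_factory(global_conf, **local_conf):
        conf = dict(global_conf, **local_conf)
        def factory(app):
            return RequestLogger(app, conf)
        return factory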
16:01:35 <discretestates> thanks, semiosis!
16:01:48 <semiosis> yw
16:01:49 <discretestates> thank you everyone :0
16:01:52 <jclift> :)
16:01:53 <discretestates> * :)
16:02:00 <lalatenduM> semiosis, good to know :)
16:02:16 <ndevos> lpabon: now you only need to make g4s location-aware, maybe change the storage-url according to which server hosts a brick with the file?
16:02:20 <jclift> re GlusterFlow... I need to get Glupy working in master then 3.5.0 before I do any real promo of it
16:02:22 <lpabon> one thing on REST -- at some point we will need a method to communicate with glusterd for management other than the CLI
16:02:35 <lpabon> ndevos: O.o
16:02:40 <jclift> Once that's done I'll promo the heck out of it :)
16:02:57 <lalatenduM> lpabon, I think that's already on the cards for 3.6
16:03:09 <jclift> k, going to endmeeting in a sec
16:03:16 <lpabon> ndevos: maybe if we send some metadata with it, a pipeline filter can redirect
16:03:23 <jclift> Any objections?  e.g. other topics ppl want to bring up?
16:03:38 <ndevos> lpabon: and you think I can make sense out of that?
16:03:47 <lpabon> fyi, i have to leave..(im running late for my next meeting) .. ttyl
16:03:54 <semiosis> i'm setting up a SonarQube instance for gluster projects.  if anyone is interested please ping me later
16:03:56 <lpabon> ndevos: :-D.. we can discuss offline.. ttyl
16:04:01 * ndevos really isn't a swift guy, unless it's on a squash court
16:04:01 <jclift> :)
16:04:09 <jclift> #endmeeting