gluster-meeting
LOGS
12:02:54 <atinm> #startmeeting
12:02:54 <zodbot> Meeting started Wed Jul 22 12:02:54 2015 UTC.  The chair is atinm. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:02:54 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:03:18 <atinm> #info agenda is at https://public.pad.fsfe.org/p/gluster-community-meetings
12:03:25 <atinm> #topic Roll Call
12:03:31 * shyam is here
12:03:31 <atinm> who all do we have here today?
12:03:38 <atinm> Welcome shyam :)
12:03:39 * krishnan_p _o/
12:03:49 * rtalur is here
12:04:01 * overclk is here
12:04:41 * hchiramm_ is here
12:04:50 <atinm> Come on, I expect a few more names :)
12:05:48 <atinm> All right, since no more responses on the roll call, moving on
12:05:51 <krishnan_p> atinm, I have sent a message on #gluster-dev
12:06:06 <atinm> krishnan_p, nice, thanks
12:06:20 <atinm> krishnan_p, probably the same applies at #gluster as well?
12:06:42 * jiffin is here
12:06:44 * anoopcs is here
12:06:48 * kkeithley is here
12:06:53 <atinm> #topic Action items from last week
12:06:57 <krishnan_p> atinm, I am not currently logged in #gluster
12:07:06 * rafi is here
12:07:13 * meghanam is here
12:07:19 * kotreshhr is here
12:07:25 <shyam> did the honors at #gluster
12:07:43 <atinm> shyam, you raced me o-/
12:07:49 <atinm> Moving on
12:08:08 <atinm> #info hchiramm to update about packaging emails - what's the status on this, hchiramm?
12:08:15 <hchiramm_> this AI has to be deferred
12:08:27 <atinm> till?
12:08:53 <hchiramm_> atinm, I would like to revisit this in Aug 2nd week
12:09:00 <atinm> hchiramm, All right
12:09:45 <atinm> #info hchiramm pushes the AI on updating the packaging emails to the 2nd week of August
12:09:57 <hchiramm_> atinm, I am marking it in etherpad
12:10:14 <atinm> #info Next AI is on tigert - summarize options for integrating a calendar on the website, call for feedback
12:10:42 <atinm> tigert, anything on this front?
12:10:56 * atinm believes tigert is not here
12:10:58 <atinm> moving on
12:11:16 <atinm> #info hagarth should have published the 4.0 roadmap to gluster.org last week
12:11:42 * msvbhat arrives late :)
12:11:48 <atinm> hagarth is not here today, and I believe this AI will carry over to next week as I don't see any mail yet
12:12:05 * kshlm is here now.
12:12:15 <atinm> #info hagarth to publish the 4.0 roadmap to gluster.org
12:12:24 <atinm> next AI
12:12:37 <atinm> #info hchiramm will send an update on WIP documentation issue - can you update on this hchiramm ?
12:12:42 <hchiramm_> atinm, its done.
12:12:47 <atinm> hchiramm, great
12:12:50 <atinm> hchiramm, thanks
12:12:57 <hchiramm_> today we sent an email about the future plan
12:13:13 <atinm> hchiramm, yes I did see a mail from Anjana
12:13:13 <hchiramm_> kshlm++ rtalur++ ppai++ jeff++
12:13:14 <kkeithley> from Anjana
12:13:21 <hchiramm_> yep..
12:13:33 <atinm> #info next AI, kshlm to add himself to the gluster-office-hours schedule
12:13:54 <kshlm> I've done better.
12:14:04 <kshlm> I moved the table to https://www.ethercalc.org/gluster-community-volunteer-schedule
12:14:15 <kshlm> and added myself on that spreadsheet.
12:14:24 <rtalur> Yay!
12:14:25 <atinm> kshlm, but have you shared the link with the community?
12:14:36 <kshlm> It should be easier for people to add themselves to the schedule now.
12:14:48 <kshlm> atinm, I've added it to the top of the etherpad.
12:14:52 <atinm> kshlm, I remember a mail coming from rtalur
12:14:56 <atinm> kshlm, all right
12:14:59 <atinm> kshlm, thanks
12:15:04 <kshlm> I'll send an update on the mail thread.
12:15:27 <kshlm> #action kshlm to send update on gluster-office-hours spreadsheet
12:15:39 <kshlm> #info https://www.ethercalc.org/gluster-community-volunteer-schedule
12:15:40 <atinm> #info next AI was to send a request to the mailing list asking for volunteers to be release-manager for 3.7.3
12:15:54 <kshlm> This was not needed.
12:15:56 <atinm> I believe kshlm has volunteered for it last week
12:16:05 <kshlm> I volunteered last week during the meeting.
12:16:22 <atinm> IIRC, we were supposed to release by last week, weren't we?
12:16:28 <atinm> kshlm, so what's the plan now?
12:16:40 <atinm> Probably we can discuss this in 3.7 topic
12:16:47 <kshlm> Yes.
12:16:58 <atinm> #info Next AI: Get hagarth to clear up the speculation about 'release-manager'
12:17:12 <kkeithley> what was the speculation?
12:17:15 <kshlm> Done as well.
12:17:26 <atinm> I've no idea what the status is on this
12:17:34 <kshlm> kkeithley, obnox and I were wondering out loud what a 'release-manager' was.
12:17:48 <kkeithley> and the answer is?
12:18:44 <kkeithley> (for the record)
12:18:58 <atinm> kkeithley, :)
12:18:59 <kshlm> A release-manager is like a release-branch maintainer, but instead of maintaining a whole branch, the release-manager just does the tasks for a particular release.
12:19:17 <kkeithley> okay. thanks
12:19:19 <krishnan_p> how does one pass on her responsibility after the particular release?
12:19:20 <atinm> kshlm, cool
12:19:30 <kshlm> krishnan_p, ask for volunteers?
12:19:38 <kkeithley> release managers just fade away?
12:19:42 <krishnan_p> hmm
12:19:45 <atinm> krishnan_p, we should do it on a rotational basis
12:19:45 <kshlm> Or hagarth asks for volunteers.
12:19:58 <atinm> Moving on
12:19:58 <krishnan_p> kkeithley, it appears so.
12:20:29 <atinm> #info the last AI was for overclk to create a feature page about lockdep
12:20:35 <atinm> overclk, do we have it now?
12:20:44 <overclk> atinm: move it to 2 weeks from now. ENOTIME.
12:20:58 <atinm> overclk, :)
12:21:01 <kshlm> We've still not concluded on the location though.
12:21:17 <krishnan_p> kshlm, we still have the good old email :)
12:21:27 <hchiramm_> kshlm, it can go in glusterfs-specs :)
12:21:31 <overclk> atinm: I do not want to commit for next week and push it again. Probably 2 weeks looks reasonable for me as of now.
12:21:46 <kshlm> overclk, absolutely.
12:21:48 <atinm> #info overclk will put up a feature page on lockdep in a couple of weeks' time
12:21:57 <overclk> thanks atinm
12:22:11 <atinm> kshlm, I saw a mail thread on the location
12:22:21 <atinm> kshlm, but as you said nothing is final yet
12:22:22 <kshlm> hchiramm_, we've just proposed it and haven't had any feedback. But I don't expect any opposition.
12:22:34 <hchiramm_> kshlm, true..
12:22:38 <kshlm> Anyways, we've gone off-topic.
12:22:44 <kshlm> Let's get back on track.
12:23:08 <atinm> #topic Gluster 3.7
12:23:33 <atinm> kshlm, what's the plan for 3.7.3?
12:23:51 <atinm> kshlm, should we release it by end of this week?
12:24:15 <kshlm> I hope to. I've not actually gone through the pending bugs.
12:25:16 <kshlm> I'll do that today and take a decision.
12:25:16 <atinm> #info Gluster 3.7.3 is to be released by end of this week
12:25:40 <atinm> Anything on 3.7 before I move to the other branches?
12:26:24 <kshlm> Nothing else.
12:26:28 <atinm> As I get no response, moving on
12:26:29 <pkalever> do we need to consider http://review.gluster.org/#/c/11512/ for 3.7.3
12:26:32 <kkeithley> fyi, I'll be on PTO on Friday, so most likely no packages will get built until Monday.
12:26:53 <pkalever> If so, we need to review it and merge it in mainline first
12:26:53 <kkeithley> In case you want to take that into consideration.
12:27:26 <atinm> pkalever, has any user reported this problem in the community?
12:27:49 <atinm> pkalever, we need to have a strong reason to take this patch in and consider it as a blocker for 3.7.3
12:27:53 <pkalever> yes, especially the ovirt guys have been expecting this since 3.7.1
12:27:55 <krishnan_p> pkalever, specifically on 3.7.2?
12:28:25 <krishnan_p> pkalever, hmm.
12:28:34 <atinm> pkalever, then probably you should send a note on gluster-devel seeking review help and mentioning how important the patch is
12:29:14 <atinm> #topic Gluster 3.6
12:29:25 <atinm> is raghu around?
12:29:40 <pkalever> Okay
12:29:44 <hchiramm_> atinm, no :(
12:29:59 <atinm> ok
12:30:00 <kshlm> raghu gave his update to me.
12:30:12 <pkalever> atinm: check this out https://bugzilla.redhat.com/show_bug.cgi?id=1181466
12:30:14 <atinm> kshlm, great
12:30:24 <kshlm> (IIRC what he said was)
12:30:52 <kshlm> He hasn't merged enough patches, so he hasn't made a release yet.
12:31:09 <kshlm> But he hopes to get this done by the end of next week.
12:31:18 <atinm> kshlm, but I saw a mail from him on 15th July saying 3.6.4 is released
12:31:22 * kshlm is paraphrasing, but that should be the gist.
12:31:31 <kshlm> This is about 3.6.5
12:31:37 <atinm> kshlm, ok
12:31:49 <kshlm> I forgot to update the agenda correctly.
12:32:06 <krishnan_p> pkalever, this is a rpc related change. Raghavendra Gowdappa is the maintainer. He should be able to help with reviews.
12:32:08 <kshlm> He'd done the 3.6.4 release before last week's meeting.
12:32:16 <kkeithley> 3.6.4 was only released on 13 July. I'm all for release-early-release-often, but weekly releases are a bit over the top
12:32:52 <kkeithley> especially with three active, supported releases.
12:33:04 <atinm> kkeithley, I agree
12:33:30 <pkalever> krishnan_p: I shall check with him, thank you :)
12:33:45 <atinm> kkeithley, Probably one month is what we should look at?
12:34:06 <atinm> So shall we reassess the timeline for 3.6.5?
12:34:22 <krishnan_p> pkalever, yw!
12:34:29 <kshlm> kkeithley, I agree.
12:34:31 <hchiramm_> I do second kkeithley .
12:34:38 <kkeithley> yes, as a target or guideline. If something really pressing comes along we can adjust accordingly.
12:34:56 <atinm> #info Reassess timeline for 3.6.5 considering 3.6.4 was released just a week back
12:35:00 <kshlm> I might have just paraphrased incorrectly.
12:35:15 <kshlm> I wasn't paying proper attention when he spoke with me.
12:35:43 <atinm> One more thing I wanted to bring up here is the netbsd smoke test failing on the 3.6 branch
12:35:54 <atinm> kshlm, I believe you are aware of it
12:36:05 <atinm> kshlm, some undeclared variable issue
12:36:22 <atinm> ndevos reported it again, I didn't get a chance to look into it
12:36:43 <atinm> From what I remember, kshlm tried different BSD versions but all of them compiled properly
12:36:55 <atinm> I am not sure whether that should be a blocker for the release
12:37:05 <kshlm> atinm, that doesn't seem to be happening now.
12:37:21 <kshlm> Or maybe not enough release-3.6 patches are being posted.
12:37:33 <atinm> kshlm, I doubt
12:37:53 <atinm> anyways, if it happens again, we would need to look at it
12:38:09 <atinm> so any more topics/questions on 3.6?
12:38:35 <atinm> #topic Gluster 3.5
12:38:51 <atinm> #info 3.5.5 came out a few days back
12:39:16 <atinm> ndevos, are you here by chance ;)
12:39:42 <atinm> Do we need anything to discuss on 3.5?
12:39:54 <kshlm> atinm, If he were, you wouldn't be chairing this meeting
12:40:05 <atinm> kshlm, :D
12:40:15 <atinm> kshlm, that's why I said 'By chance' :)
12:40:29 <atinm> Moving on
12:40:34 <atinm> Gluster 4.0
12:40:37 <atinm> oops
12:40:41 <atinm> #topic Gluster 4.0
12:41:12 <kshlm> Neither hagarth nor jdarcy is around.
12:41:19 <atinm> krishnan_p, ?
12:41:29 <atinm> krishnan_p, do you have anything to share?
12:41:49 <krishnan_p> I sent an email on possible improvements on epoll - http://www.gluster.org/pipermail/gluster-devel/2015-July/046107.html. So far no response.
12:41:58 <kshlm> krishnan_p, that's for 4.0?
12:42:20 <krishnan_p> kshlm, epoll becomes significant with brick multiplexing
12:42:38 <shyam> krishnan_p: I will add something to that thread today...
12:42:41 <krishnan_p> i.e, if we had more than one 'brick' being served by a single glusterfsd process
12:42:52 <atinm> #info krishnan_p sent an email on possible improvements to epoll - http://www.gluster.org/pipermail/gluster-devel/2015-July/046107.html and is awaiting responses from folks
12:42:52 <krishnan_p> shyam, thanks.
12:43:01 <overclk> krishnan_p: I'll probably take a look at the thread today/tomo
12:43:09 <kshlm> krishnan_p, Ah! I wasn't thinking along those lines.
12:43:29 <krishnan_p> kshlm, neither did I, when I sent it :)
12:44:23 <atinm> From the glusterD side, we are working on the high-level plan for glusterd 2.0, which will be shared on the devel list soon
12:44:53 <krishnan_p> I have been looking at Apache Thrift as a more modern RPC-like system for communication between glusterd (written in golang) and glusterfsd brick processes (written in C).
12:45:22 <atinm> #info GlusterD 2.0 outline will be shared on devel mailing list soon
12:45:23 <krishnan_p> Jeff had this in his 4.0 proposal initially. Has anyone else looked at Thrift before?
12:45:54 <shyam> DHT 2.0 docs are still open for comments
12:46:26 <krishnan_p> As it stands, Thrift's C language bindings don't support SSL, rdma or asynchronous communication.
12:46:54 <krishnan_p> So, does anyone know of a cross-language RPC library/ecosystem that could be used instead of Apache Thrift?
12:47:59 <atinm> #info krishnan_p would appreciate help if anyone is aware of a cross-language RPC library/ecosystem that could be used instead of Apache Thrift
12:48:03 <shyam> Active recruitment in progress for more DHT 2 developers; interested people, reach out to me :)
12:48:25 <overclk> krishnan_p: what about the zerorpc stuff you were exploring a while back? Same problems?
12:48:34 <overclk> shyam: count me in.
12:48:41 <kkeithley> do we really need rdma for the management plane?
12:48:42 <krishnan_p> overclk, zerorpc doesn't provide C language support.
12:49:03 <krishnan_p> kkeithley, not really. Current glusterd is capable of it, when rdma is present on the node.
12:49:14 <shyam> I will also take an action to post a breakdown of tasks/high level items that need addressing for DHT 2 for Gluster 4.0
12:49:19 <krishnan_p> kkeithley, if we do find such a library, we could use it for our I/O plane too.
12:50:09 <shyam> Thanks overclk (you will hear from me soon :) )
12:50:11 <krishnan_p> Jeff's idea was to replace ON/SunRPC with a more modern library/ecosystem. I was thinking along similar lines
12:50:22 <overclk> shyam: thanks a lot.
12:50:23 * shyam let's kp continue
12:50:32 <rtalur> krishnan_p: kkeithley: actually, removing that support seems like a better thing to do
12:50:41 <kkeithley> ganesha uses ntirpc, 'n' for "new"
12:50:48 * krishnan_p is done with pouring my heart out.
12:50:58 <krishnan_p> kkeithley, Hmm, I will look into it. Does it use XDR underneath?
12:51:18 <kkeithley> I believe it's just an extension of old tirpc, so probably
12:51:42 <overclk> krishnan_p: we might also want to look at ceph's messenger library.
12:51:54 <kkeithley> scratch the "I believe" part. It's just an extension of old tirpc
12:52:16 <rtalur> any reason why there is no mention of dbus?
12:52:50 <kshlm> rtalur, dbus is local. We need an over the wire RPC.
12:53:00 <rtalur> not between glusterd and bricks
12:53:38 <kshlm> We don't want to be using 3 different RPC frameworks for slightly different things.
12:54:12 <kshlm> krishnan_p just mentioned glusterd-bricks, but we also need an rpc for glusterd-glusterd.
12:54:21 <krishnan_p> rtalur, kshlm is right.
12:54:21 <kkeithley> Lots of things use DBUS. I was a bit surprised, way back when, that gluster didn't have support. But it's orthogonal to rpc.
12:54:47 <ira> What is the goal?
12:54:48 <kshlm> Talking about dbus, krishnan_p might have something to share.
12:54:58 * krishnan_p wonders why we shouldn't move to ntirpc for glusterfs
12:55:01 <kkeithley> Use DBUS instead of signals/kill
12:55:13 <rtalur> kshlm: krishnan_p : thanks for the explanation
12:55:23 <ira> why move?  :)
12:55:37 <ira> What is motivating us... what are the things we want to fix?
12:55:49 <krishnan_p> ira, to stop maintaining our custom rpc implementation.
12:56:37 <kkeithley> +1 for that. We should be using standard, off-the-shelf tools as much as possible
12:56:42 <atinm> sorry to interrupt, but the requirement can probably be best understood once we send out the mail for glusterd 2.0
12:56:49 <atinm> does it make sense?
12:57:08 <krishnan_p> atinm, yes. I was too tempted to share my woes in finding a suitable RPC library :)
12:57:12 <atinm> and continue our discussion on that mail thread
12:57:38 <ira> atinm: That makes sense... without "why" I have no real thoughts on what, besides "don't" ;).
12:57:56 <atinm> Since we have only 3 more minutes, can we move to Open Floor?
12:58:11 * atinm believes we have utilized the time for open floor discussion :)
12:58:35 <shyam> rtalur: distaf update if possible here, or on the ML?
12:58:47 <atinm> #topic Open Floor
12:58:48 * shyam hopes he pinged the right talur
12:59:13 <kshlm> or msvbhat could chime in.
12:59:20 <rtalur> shyam: Will be sending a mail by next week about transition plan
12:59:21 <atinm> shyam, you did :)
12:59:39 <shyam> rtalur: Thanks.
13:00:27 <atinm> All, ndevos has created a public pad for capturing things which we can share on a weekly basis as gluster weekly news
13:00:37 <atinm> #info gluster weekly news @ https://public.pad.fsfe.org/p/gluster-weekly-news
13:00:42 <shyam> Who is doing client-side caching improvements? For G4.0 or before?
13:01:16 * shyam thinks it is xavi, ndevos (but could be wrong)
13:01:18 <atinm> we would need to populate it
13:01:37 <rtalur> shyam: from what I know, poornimag, soumya and xavih are the ones interested in that
13:01:49 <shyam> rtalur: Neat! ty again
13:01:54 <atinm> I am sorry, but I used the info keyword instead of action in the required places
13:02:23 <atinm> So anything more to be discussed
13:02:25 <atinm> ?
13:03:04 <krishnan_p> None from me
13:03:31 <atinm> All right
13:03:38 <atinm> thanks all for attending
13:03:43 <atinm> It was quite productive
13:04:09 <atinm> I am recollecting all the action items again just so that the bot collects them properly
13:04:31 <atinm> #action hchiramm to update about packaging emails - Deferred (Aug 2nd week)
13:04:43 <atinm> #action tigert summarize options for integrating a calendar on the website, call for feedback
13:04:54 <atinm> #action hagarth will publish 4.0 roadmap this week
13:05:09 <atinm> #action kshlm to release 3.7.3 by this week
13:05:31 <atinm> #action raghu to reassess timeline for next 3.6 release
13:06:16 <atinm> #action overclk to create a feature page about lockdep in a couple of weeks' time
13:06:37 <atinm> #endmeeting