gluster_community
LOGS
12:00:49 <kkeithley> #startmeeting Gluster Community
12:00:50 <zodbot> Meeting started Wed Oct 19 12:00:49 2016 UTC.  The chair is kkeithley. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:00:50 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:00:50 <zodbot> The meeting name has been set to 'gluster_community'
12:01:06 <kkeithley> #topic roll call
12:01:09 <kkeithley> who is here?
12:01:10 <post-factum> \o/
12:01:19 * kshlm is here
12:01:30 * Saravanakmr is here
12:01:56 * atinm is partially here
12:02:03 <kshlm> I guess it's been 4 weeks since I was here last time.
12:02:13 * samikshan is here
12:02:15 <post-factum> we missed you kshlm
12:02:16 <kkeithley> good to have you back
12:02:29 <kshlm> :)
12:02:53 * obnox waves
12:02:58 <kshlm> Before you ask, I'm volunteering to host the next meeting.
12:03:26 * karthik_us is here
12:03:34 * rjoseph is here
12:03:38 * msvbhat is present
12:03:39 <kkeithley> kshlm: you're a gentleman and a scholar
12:03:57 <post-factum> quite a lot of ppl this time
12:04:14 <kkeithley> one more minute to let people arrive, then we'll start
12:05:13 <kkeithley> #topic: host for next week
12:05:16 <kkeithley> is kshlm
12:05:19 <kkeithley> thanks kshlm
12:05:32 <kkeithley> #topic Gluster 4.0
12:05:59 <post-factum> i saw glusterd 2.0 update on ml
12:06:10 * jiffin is here
12:06:14 <kshlm> Yup.
12:06:23 <kkeithley> got a link?
12:06:26 <kshlm> I did a 2nd preview release.
12:06:47 <post-factum> #link http://www.gluster.org/pipermail/gluster-devel/2016-October/051198.html
12:06:47 <kshlm> https://www.gluster.org/pipermail/gluster-devel/2016-October/051198.html
12:07:04 <kshlm> This release has gRPC and the ability to start bricks.
12:07:11 <kkeithley> #info https://www.gluster.org/pipermail/gluster-devel/2016-October/051198.html
12:07:24 <kshlm> Unfortunately our volfile generation isn't correct so the bricks don't start.
12:07:32 <post-factum> hmm
12:07:47 <kshlm> I'll be working on fixing that in the coming week.
12:08:09 <kshlm> I'll also be creating docker images and vagrantfiles to make it easier to test/develop GD2.
12:08:35 <kshlm> ppai is working on embedding etcd. This will make it easier to deploy GD2.
12:08:49 <kshlm> That's all the updates from this week.
12:09:25 <kkeithley> jdarcy just joined.  any updates jeff?
12:10:36 <jdarcy> None really.  Been working on some of the underpinnings for brick multiplexing.
12:10:53 <post-factum> any news on memory management?
12:11:49 <jdarcy> Yes, but let's defer that until later.
12:12:03 <kkeithley> okay, moving on
12:12:09 <kkeithley> #topic Gluster 3.9
12:13:06 <kkeithley> pranithk, aravindak: ???
12:13:13 <atinm> this needs a serious discussion
12:13:31 <kkeithley> atinm: please proceed
12:13:45 <atinm> Is there any deadline we have in mind for releasing it? This can't wait forever
12:14:12 <kkeithley> 30 September, wasn't it?
12:14:31 <atinm> that's history :)
12:15:13 <atinm> how about setting a deadline and doing constant follow-ups on the maintainers list for the ack for every component
12:15:39 <post-factum> we need a benevolent dictator for that
12:16:05 <kshlm> or at least a dedicated release-engineer/manager.
12:16:28 <kkeithley> We have a benevolent triumvirate: pranithk, aravindak, and dblack.
12:16:43 <post-factum> and none of those are here :)?
12:16:44 <kshlm> All of whom are missing here.
12:17:17 <kshlm> atinm, Would you be willing to raise a storm about this on the mailing lists? :)
12:17:56 <atinm> kshlm, I have already passed on my concerns to pranithk
12:18:33 <kkeithley> fwiw, there's a backport of gfapi/glfs_upcall* that needs to be merged into 3.9 before 3.9 can ship, so gfapi as a component isn't ready
12:18:37 <atinm> kshlm, having said that I will see what best I can offer from my end
12:18:49 <kshlm> atinm, in private? It would be good if it's out in the open.
12:19:00 <kkeithley> give them a poke. Be gentle
12:19:28 <kkeithley> but not too gentle
12:19:33 <kkeithley> this needs to get done
12:19:53 <kkeithley> shall we have an action item on atinm to poke pranithk and aravindak?
12:20:20 <atinm> kkeithley, go ahead
12:20:51 <kkeithley> #action atinm to poke 3.9 release mgrs to finish and release
12:21:03 <post-factum> here is our dictator!
12:21:15 <kkeithley> #topic Gluster 3.8
12:21:43 <kkeithley> ndevos: Any news
12:21:45 <kkeithley> ?
12:21:46 <kshlm> ndevos is away. But he provided an update on the maintainers list.
12:22:05 <kshlm> "The 3.8.5 release is planned to get announced later today. 3.8.6 should be following the normal schedule of approx. 10th of November."
12:22:30 <kshlm> #link https://www.gluster.org/pipermail/maintainers/2016-October/001562.html
12:22:32 <kkeithley> #info The 3.8.5 release is planned to get announced later today. 3.8.6 should be following the normal schedule of approx. 10th of November.
12:22:40 <kkeithley> #info https://www.gluster.org/pipermail/maintainers/2016-October/001562.html
12:22:44 <kkeithley> is #link a thing?
12:22:58 <kkeithley> yup, it is
12:22:58 <post-factum> Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:23:13 <kkeithley> moving on
12:23:18 <kkeithley> #topic Gluster 3.7
12:23:27 <kkeithley> kshlm: your turn
12:23:33 <kshlm> I finally announced 3.7.16 yesterday.
12:23:54 <kshlm> It was tagged early this month on time. But the announcement got delayed due to various reasons.
12:24:20 <kshlm> #link https://www.gluster.org/pipermail/gluster-devel/2016-October/051187.html
12:24:23 <kkeithley> and we're looking for someone to take the reins for 3.7.17, correct?
12:24:30 <kshlm> Yeah.
12:24:37 <kshlm> So any volunteers?
12:24:58 <kkeithley> we'll pause so everyone can look away and shuffle their feet
12:25:45 <kshlm> I'll ask again on the mailing lists.
12:25:59 <samikshan> kshlm: I'll try it. You'd need to mentor me or something. :D
12:26:25 <kshlm> samikshan, I can do that.
12:26:32 <kkeithley> excellent, thanks
12:26:47 <kkeithley> anything else for 3.7?
12:27:03 <samikshan> Cool. Okay that's settled then at last :)
12:27:06 <kshlm> Just that .17 is still on target for 30th.
12:27:14 <kkeithley> kewl
12:27:20 <kkeithley> next....
12:27:26 <kkeithley> #topic Gluster 3.6
12:27:26 <kshlm> samikshan or I will send out a reminder early next week.
12:27:46 <kshlm> Nothing here.
12:27:48 <kkeithley> #action kshlm or samikshan will send 3.7.17 reminder
12:28:04 <kshlm> Just counting down the days till 3.9 appears.
12:28:07 <kkeithley> okay, 3.6?  Probably nothing to report
12:28:35 <kkeithley> #topic Infrastructure
12:28:42 <kshlm> kkeithley, you can move on.
12:28:44 <kkeithley> misc, nigelb: are you in the building
12:28:51 <misc> kkeithley: yes, I am
12:29:15 <misc> nothing to announce that would be relevant to the rest of the community
12:29:25 <misc> (I mean, I wrote docs, moved and patched stuff)
12:29:45 <kkeithley> okay
12:29:54 <misc> not sure for nigelb, he may have more
12:30:10 <kkeithley> we'll back up if he shows up
12:30:19 <kkeithley> #topic: nfs-ganesha
12:31:00 <skoduri> issues firing up in bakeathon :)
12:31:02 <kkeithley> Red Hat is hosting the NFS Bake-a-thon in Westford this week.  skoduri and I are testing the bleeding edge bits and have found a couple of bugs.
12:31:20 <kkeithley> Actually it was Sun/Oracle that found them.
12:31:36 <skoduri> right.. and even steved (redhat client) as well
12:31:38 <kkeithley> 2.4.1 will GA after Bake-a-Thon
12:31:40 <skoduri> issue with ACLs
12:32:23 <kkeithley> yup
12:32:26 <kkeithley> okay, next
12:32:30 <kkeithley> #topic Samba
12:32:40 <kkeithley> obnox: are you lurking?
12:33:04 <kkeithley> no, guess not
12:33:26 <obnox> yes
12:33:37 <kkeithley> proceed
12:33:48 <kkeithley> please proceed
12:33:54 <obnox> yeah
12:34:16 <obnox> samba upstream: 4.5.1 will be released next week
12:34:28 <obnox> along with 4.4.7
12:34:35 <obnox> gluster related:
12:34:59 <obnox> the md-cache improvements are almost done in gluster master, one last patch under review.
12:35:30 <obnox> this speeds up samba metadata-heavy workloads a *lot* on gluster
12:35:40 <obnox> (readdir and create /small file)
12:35:46 <kkeithley> that's excellent news
12:36:06 <kkeithley> #info md-cache improvements speed up samba metadata-heavy workloads a *lot* on gluster
12:36:09 <post-factum> will md-cache improvements affect fuse users?
12:36:13 <obnox> other scenarios are affected very much as well
12:36:21 <obnox> post-factum: actually yes
12:36:26 <kkeithley> rising tide lifts all boats
12:36:42 <obnox> i just think that samba benefits most due to its heavy use of xattrs and metadata
12:36:43 <post-factum> will that be backported into stable branches?
12:37:02 <obnox> not 3.8 probably, but that's not my call
12:37:07 <obnox> 3.9  I can imagine
12:37:32 <obnox> but currently sanity testing is done with these changes on 3.8 to rule out regressions
12:37:45 <obnox> because it affects virtually every setup
12:37:49 <post-factum> is 3.8 lts?
12:37:58 <atinm> yes
12:38:00 <obnox> poornima could detail more, but she's not here today
12:38:12 <atinm> but my take is not to backport them to 3.8
12:38:26 <obnox> already a few issues have been found and they are being RCA'd
12:38:31 <post-factum> that is why i'm asking. 3.9 then
12:38:51 <obnox> atinm: agreed, some people are testing on 3.8 anyways, without the intention to do an official upstream backport ;-)
12:39:19 <obnox> other things: multi-volfile support has been merged into the glusterfs vfs-module in samba upstream
12:39:27 <obnox> by rtalur
12:39:51 <obnox> and then there is the memory issue
12:39:56 <obnox> is everybody aware?
12:40:03 <post-factum> which one :)?
12:40:18 <obnox> i'll attempt a very brief summary:
12:40:44 <obnox> samba has a multi-process architecture: each tcp connection forks a new dedicated child smbd process
12:40:56 <obnox> the connection to the gluster volume is done by the child process
12:41:09 <post-factum> oh, i've reported that issue twice here, iirc
12:41:30 <jdarcy> Sort of the client-side version of what multiplexing addresses on the server side.
12:41:32 <obnox> that means when the glusterfs vfs module is used, each process loads libgfapi and thereby the whole client xlator stack
12:41:54 <obnox> with the memory use for an instance ranging between 200 and 500 MB
12:42:03 <post-factum> so you have a solution?
12:42:08 <obnox> we can only serve 2-3 clients per GB of free RAM :-/
12:42:19 <obnox> as opposed to 200 clients / GB with vanilla
12:42:33 <obnox> solution would result in a proxy of sorts
12:42:52 <post-factum> like fuse but not fuse
12:42:58 * obnox is thinking about prototyping one from samba's vfs code
12:43:22 <jdarcy> I'm trying to remember what's different between this and GFProxy.
12:43:23 <obnox> post-factum: indeed using fuse instead of libgfapi is such a proxy, but there were reasons originally to move to vfs
12:43:42 <post-factum> obnox, because vfs should introduce much less overhead, obviously
12:43:47 * shyam throws in the thought of AHA + GFProxy presented at the summit
12:43:55 <obnox> jdarcy: gfproxy is on the server (does not need to be, maybe)
12:44:03 <post-factum> shyam, do you have video/slides?
12:44:07 * shyam and realizes jdarcy has already done that...
12:44:08 <obnox> so, this is going into a discussion...
12:44:33 <obnox> i suggest taking the discussion out of the community meeting
12:44:37 <obnox> unless you want to have it now
12:44:42 <obnox> this is the status
12:45:08 <obnox> post-factum: gfproxy is done by fb and not (yet) complete or published
12:45:10 <jdarcy> So the AI would be to cue up a separate discussion?
12:45:32 <obnox> jdarcy: yeah. continuing the discussion from the bof at the summit
12:45:34 <shyam> post-factum: GDC videos and slides are being (post) processed, slides should land here: http://www.slideshare.net/GlusterCommunity/presentations
12:45:49 <kkeithley> #action obnox to start discussion of Samba memory solutions
12:45:56 <post-factum> shyam, thanks!
12:46:20 <kkeithley> shall we move on?
12:46:25 <post-factum> i hope Amye will post the links to ML :)
12:46:27 <obnox> kkeithley: yes. thanks
12:46:33 <kkeithley> #topic Heketi
12:46:50 <kkeithley> will lpabon or someone else give status?
12:46:58 <kkeithley> is lpabon lurking?
12:47:03 <obnox> i can, let me think...
12:47:12 <obnox> we are currently preparing version 3 of heketi
12:47:27 <obnox> will be announced this week (currently waiting on finishing documentation)
12:47:39 <obnox> this carries quite a few bug fixes and enhancements
12:47:57 <obnox> it is now possible to run heketi directly in kubernetes.
12:48:42 <obnox> some error reporting when creating gluster volumes has been clarified, and it is now possible to create plain replicate (w/o distribute) volumes
12:48:56 <obnox> just some examples.
12:49:41 <obnox> this version of heketi will be used in conjunction with kubernetes 1.4 and openshift 3.4 as dynamic provider of persistent storage volumes for the containers
12:50:01 <shyam> obnox: Are there any Heketi asks/needs from core Gluster? If so where can I find them?
12:50:11 <obnox> this is a tremendous enabler of gluster in the cloud
12:50:28 <obnox> shyam: in the near future. i think.
12:50:58 <kkeithley> great news. anything else on Heketi?
12:50:59 <obnox> shyam: for the next generation, we are thinking about using block storage / iscsi. i think pkalever is working on that
12:51:11 <obnox> kkeithley: that's it from my end
12:51:23 <kkeithley> time check: 9 minutes remaining....
12:51:27 <amye> post-factum, yep, I will post links this week.
12:51:27 <kkeithley> moving on
12:51:45 <kkeithley> #topic last week's action items
12:51:46 <obnox> shyam: but we should take the AI to create a place to communicate on the intersection between heketi and gluster proper
12:52:09 <kkeithley> rastar/ndevos/jdarcy to improve cleanup to control the processes that test starts.
12:52:21 <post-factum> amye, thanks!
12:52:54 * obnox signs off -- next meeting coming up
12:53:01 <shyam> obnox: AI in progress, being discussed at the maintainers meeting later in the day (basically planning pages, hope that solves this problem)
12:53:15 <kkeithley> guess not, leaving open for next week
12:53:18 <jdarcy> Haven't done anything on that front myself, nor heard from rastar/ndevos.
12:53:29 <kkeithley> okay
12:53:43 <kkeithley> unassigned: document RC tagging guidelines in release steps document
12:53:51 <kkeithley> anyone do anything with this?
12:54:35 <kkeithley> I guess I'll take this since it affects building Fedora packages.
12:54:45 <kkeithley> #action kkeithley to document RC tagging guidelines in release steps document
12:54:54 <kkeithley> ditto for the other one
12:54:59 <kkeithley> moving on
12:55:17 <kkeithley> #topic open floor
12:55:20 <kkeithley> dbench in smoke tests - lots of spurious failures, little/no info to debug
12:55:36 <kkeithley> jdarcy, that was you I believe
12:55:58 <jdarcy> Yeah.  Should we keep it as part of smoke, or ditch it?
12:56:13 <jdarcy> It doesn't seem to be helping us in any way AFAICT.
12:56:58 <jdarcy> It's not in our source tree or packages or vagrant boxes, so developers running it themselves is a pain.
12:57:13 <jdarcy> It provides practically no diagnostic output when it fails.
12:57:29 <shyam> Additional context, bug for the same here: https://bugzilla.redhat.com/show_bug.cgi?id=1379228
12:58:21 <kkeithley> doesn't sound very useful to me the way it is now
12:58:39 <kkeithley> how's that for a run-on sentence
12:59:16 <shyam> One of the concerns is why it is failing more now; is there an issue in the code that got introduced?
12:59:28 <jdarcy> When it's debugged, it might be useful.  Until then, I propose reverting whichever change added it.
12:59:41 <kkeithley> is it voting?
12:59:46 <jdarcy> Excellent point, shyam.  Its addition seems to be rather unilateral.
12:59:51 <shyam> jdarcy: To my reading this was there from ancient past...
13:00:02 <jdarcy> kkeithley: It causes smoke to fail, changing smoke's vote.
13:00:49 <jdarcy> I've had, and seen, patches delayed unnecessarily because of this.
13:01:58 <shyam> jdarcy: Agreed it is causing more pain at present
13:01:59 <kkeithley> okay, doesn't seem like we can resolve it here. Can it be discussed in IRC or email after the meeting?
13:02:12 <jdarcy> I'll start an email thread.
13:02:28 <kkeithley> thanks
13:03:26 <kkeithley> we're past the hour.  Looks like an announcement that Go bindings to gfapi are moving to github/gluster. If there's more to say about that we'll come back after jdarcy's memory topic
13:03:34 <kkeithley> jdarcy: you have the floor
13:03:44 <jdarcy> Nothing else for now.
13:03:55 <kkeithley> #action: jdarcy to discuss dbench smoke test failures on email
13:04:00 <jdarcy> We can defer the memory stuff to email.
13:04:06 <jdarcy> (Kind of already there)
13:04:06 <kkeithley> okay
13:04:35 <kkeithley> go bindings? anything to say kshlm?
13:04:57 <kshlm> I just wanted to bring it back to everyone's notice.
13:05:12 <kkeithley> #info Go bindings to gfapi are moving to github/gluster
13:05:17 <kshlm> I moved it to the Gluster organization. It was under my name till now.
13:05:18 <kkeithley> more good news.
13:05:45 <kkeithley> #topic recurring topics
13:05:47 <kshlm> I'll send out a mailing list announcement about it soon.
13:06:04 <kkeithley> #action kshlm to send email about go bindings to gfapi
13:06:08 <kkeithley> If you're attending any event/conference please add the event and yourselves to Gluster attendance of events: http://www.gluster.org/events (replaces https://public.pad.fsfe.org/p/gluster-events)
13:06:08 <kkeithley> Put (even minor) interesting topics on https://public.pad.fsfe.org/p/gluster-weekly-news
13:06:18 <kkeithley> anything else?
13:06:24 <kkeithley> motion to adjourn?
13:06:32 * shyam seconded
13:06:48 <kkeithley> going once
13:06:54 <kkeithley> going twice!
13:07:01 <kkeithley> three.....
13:07:04 <kkeithley> #endmeeting