gluster_community_weekly_meeting_26-oct-2016
LOGS
12:00:11 <kshlm> #startmeeting Gluster Community Weekly Meeting 26-Oct-2016
12:00:11 <zodbot> Meeting started Wed Oct 26 12:00:11 2016 UTC.  The chair is kshlm. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:00:11 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:00:11 <zodbot> The meeting name has been set to 'gluster_community_weekly_meeting_26-oct-2016'
12:00:21 <kshlm> Hey all!
12:00:41 <kshlm> I'll wait a couple of minutes for people to filter in.
12:01:57 <aravindavk> Hi kshlm
12:02:19 <kshlm> Hey aravindavk! Glad you're here.
12:02:22 <rastar> Hello kshlm
12:02:48 <kshlm> Hello.
12:02:58 <atinm> Hello everyone!
12:03:10 <jiffin> Hi kshlm
12:03:44 <kshlm> Okay. Let's start.
12:03:50 <kshlm> #topic Roll call
12:04:01 <kshlm> Let's see who all are actually tuned in.
12:04:03 <ShwethaHP> Hi kshlm
12:04:05 * kshlm o/
12:04:11 * anoopcs is here
12:04:13 * Saravanakmr here
12:04:15 <kshlm> Hi ShwethaHP
12:04:22 <kshlm> Welcome.
12:04:26 <ShwethaHP> kshlm, :-)
12:04:33 <jiffin> o/
12:04:34 * rastar is here
12:04:39 * kkeithley is here
12:04:48 <atinm> \0/
12:05:06 <kshlm> atinm seems very excited today :)
12:05:15 * aravindavk is here
12:05:20 <kshlm> I think we can start.
12:05:21 <atinm> ;)
12:05:32 <kshlm> Welcome everyone to this week's meeting.
12:05:51 <kshlm> The first meeting to follow the new, no-updates format.
12:06:10 <kshlm> Thank you, everyone who provided updates.
12:06:20 <kshlm> Keep this up.
12:06:43 <kshlm> I'll note some highlights from the updates at the end of the meeting.
12:06:55 <kshlm> Now let's start with the meeting.
12:07:00 <kshlm> #topic Next week's host
12:07:13 <kshlm> I volunteer to host next week's meeting.
12:07:45 <kshlm> If anyone else wants a chance to host the meeting speak up now.
12:08:47 <kshlm> No one?
12:09:13 <kshlm> Ok then.
12:09:20 <kshlm> #info kshlm will host the next meeting
12:09:25 <kshlm> Moving on.
12:09:29 <kshlm> #topic Open floor
12:09:30 * kotreshhr is here
12:10:04 <kshlm> We have 2 topics that have been entered by Manikandan.
12:10:10 <kshlm> Is he around?
12:10:33 <atinm> kshlm, could you paste the agenda etherpad link?
12:10:40 * post-factum|afk is late
12:10:47 <kshlm> If he is not around, does anyone else have any background on them.
12:10:52 <kshlm> atinm, Sorry about that.
12:11:05 <kshlm> The agenda is at https://public.pad.fsfe.org/p/gluster-community-meetings
12:11:11 <kshlm> ... as always.
12:11:38 <post-factum> can we talk about memory management?
12:11:56 <kshlm> post-factum sure. Add it to the agenda.
12:12:20 <kshlm> I'll wait for a little more time before moving on.
12:12:31 <kshlm> Topics added by Manikandan are
12:12:41 <post-factum> added
12:12:45 <kshlm> - Recognising community contributors(Manikandan Selvaganesh)
12:12:54 <kshlm> - Sending up EasyFix bugs (assigned with an owner) on the mailing list so that new contributors can get started (maybe?) (Manikandan Selvaganesh)
12:12:57 <atinm> On Recognising community contributors - I'd say that we already do that from Amye's monthly report with top 5 contributors (at least in some form)
12:13:06 <kshlm> Does anyone else want to discuss these?
12:13:40 <ShwethaHP> kshlm, yes
12:13:44 <kshlm> atinm, Is that just the top 5 committers?
12:13:55 <kshlm> ShwethaHP, Go on.
12:14:08 <atinm> We could additionally have top 5 contributors on the mailing list, Bugzilla, et al.
12:14:21 <atinm> kshlm, yes most number of patches contributed
12:14:42 <kshlm> atinm, patches to just glusterfs.git? That's not the whole community, is it?
12:15:19 <kshlm> I would be on top if we started counting GD2.
12:15:27 <atinm> kshlm, that's the reason I am saying its not complete ;)
12:15:38 <kshlm> Yeah.
12:15:43 <rastar> we should track other repos in Gluster org, yes
12:15:44 <kshlm> So what do we do about it?
12:15:54 <sankarshan> The topic of recognition of contributions and contributors has come up in the past. But I think one of the issues is that the conversations have been in separate and often isolated threads. Is it worthwhile to bring together the collection of data points which should provide a good gauge of the contributions as well as highlight contributors?
12:16:19 <sankarshan> (This is assuming that there does not exist one place to build out the topic from)
12:16:27 <kshlm> sankarshan, Yes. It would be helpful.
12:16:46 <kshlm> A lot of the developers seem to get more motivated when these sorts of stats are available.
12:17:23 <sankarshan> Oh! No doubt. I do not think that the necessity of tracking these data points need further investigation. It is a given
12:17:45 <kshlm> We had this page which collected information from various glusterfs community sources and displayed it.
12:17:56 <kshlm> Anyone remember it?
12:18:07 <sankarshan> The bitergia page?
12:18:10 <atinm> bitergia
12:18:16 <kshlm> Yeah that.
12:18:41 <anoopcs> #link http://projects.bitergia.com/redhat-glusterfs-dashboard/browser/
12:18:44 <kshlm> #topic Recognition of community contributors
12:18:54 * kshlm forgot to begin the topic
12:19:11 <kshlm> Okay.
12:19:39 <kshlm> So we agree that getting stats on commits, mails, IRC etc. is useful for recognizing contributors.
12:19:53 <rastar> yes
12:19:56 <kshlm> We also have a dashboard that collects these stats.
12:20:06 <kshlm> Where do we proceed from here?
12:20:22 <kshlm> #link http://projects.bitergia.com/redhat-glusterfs-dashboard/browser/
12:20:40 <kshlm> (getting the link recorded under the topic)
12:21:04 <sankarshan> I would guess, by the discussions on the topic, that (a) the points/coverage on the bitergia instance is not enough (b) there are well identified additional aspects which need to be considered (c) a method of recognition would be useful
12:21:56 <kshlm> sankarshan, agree with everything.
12:22:02 <obnox_> i am not too big a fan of collecting such stats
12:22:14 <obnox_> at least not of over-estimating those
12:22:32 <obnox_> it's also more of a company, mgmt vehicle than an open source upstream thing, imho
12:23:18 <sankarshan> obnox_: recognition of contributions would possibly require ability to first estimate the areas to be aware of.
12:23:27 <obnox> i mean collecting is fine, but whatever stats you collect, they'll only ever give some aspect of the overall impact and value of a contributor
12:23:28 <sankarshan> I understand that this particular topic was what kept coming up
12:23:52 <sankarshan> obnox: That is true. The absoluteness of the numbers isn't what is being discussed here, is how I read this
12:24:08 <obnox> but there is a lot of danger in that, and creating false incentives
12:24:10 <obnox> and competitions
12:24:17 <misc> obnox: and it also tends to skew people towards working more on recognized stuff rather than on parts where, for time, technological or other reasons, we didn't collect stats
12:24:37 <misc> as it happened to Fedora
12:25:15 <kshlm> As sankarshan says, this topic keeps coming up again and again.
12:25:22 <obnox> kshlm: right, but why?
12:25:32 <obnox> kshlm: and from where
12:25:34 <kshlm> And we always start with building stats.
12:26:08 <kshlm> And later the discussion reaches a point where all the bad stuff about stats is mentioned.
12:26:14 <kshlm> And it drops off from there.
12:26:19 <obnox> kshlm: lol
12:26:29 <obnox> i'll always mention the dangers that lie therein
12:26:38 <kshlm> only for another member to resurrect it later.
12:26:53 <kshlm> So we want recognition.
12:27:02 <obnox> kshlm: but I fully agree that if stats are to be collected and presented, then they better be as fine-grained and thorough as possible.
12:27:06 <sankarshan> The dangers are manifestations of what happens when the numbers are misused. The numbers themselves aren't the potential cause of malice
12:27:11 <kshlm> Stats seem like a good thing, but are really not.
12:27:17 <kshlm> We need to find something else.
12:27:49 <obnox> sankarshan: sure. the numbers are not bad per se
12:28:16 <kshlm> We have 2 more topics to discuss, and we're halfway through this meeting with just this topic.
12:28:23 <sankarshan> Either way, I don't have a dog in this hunt. Aside from highlighting that this is not the first time we have talked numbers, stats and dashboards.
12:28:35 <kshlm> I suggest we take this discussion up later, on the mailing lists.
12:28:41 <sankarshan> And that it is obvious (fairly) that the bitergia data sets are not quite sufficient
12:28:42 <obnox> kshlm: agreed
12:28:43 <kshlm> Or in further community meetings.
* obnox will shut up about the numbers.
12:28:55 <obnox> (for now)
12:29:14 <kshlm> Do we have a volunteer to take this onto the mailing lists?
12:29:25 <kshlm> If not, I'll just keep the topic open for next week.
12:29:39 <obnox> kshlm: so it was me who this time was the bad guy to destroy the illusion of brave new world of stats for recognition? ;-)
12:30:09 <kshlm> obnox, Thank you for doing it. :)
12:30:19 <kshlm> Okay. I'll keep the topic on for next week.
12:30:21 * amye is here and catching up on backscroll
12:30:25 <obnox> happy to discuss it to a much greater extent later on.
12:30:26 <kshlm> Let's move on.
12:30:31 <kshlm> Hey amye!
12:30:41 <kshlm> #topic Memory management (memory pools, jemalloc, FUSE client leaks etc)
12:30:48 <kshlm> post-factum, You're up!
12:31:18 <post-factum> kshlm, aye, it would be better to see nbalacha here and jdarcy, but the key points are:
12:31:40 <kshlm> post-factum, FYI there was a BOF session on memory management at the Gluster Summit. There are people here who were involved in it.
12:31:57 * kshlm is looking at obnox
12:32:00 <obnox> :-)
12:32:03 <post-factum> 1) it seems that we are stuck while investigating increased memory usage on client side (FUSE client) due to possible memory fragmentation
12:32:04 <kshlm> post-factum, Go on.
12:32:19 <amye> I will catch up with the discussion of recognition on the mailing lists, I have some solutions beyond 'dashboards' that I haven't put to paper yet.
12:32:45 <post-factum> nbalacha certainly did a great job investigating it and preparing some patches, but simply disabling memory pools, as she said, almost mitigates the memory leaks
12:32:53 <post-factum> so here goes 2)
12:33:20 <post-factum> shouldn't this be of high priority, as it heavily affects the client side and future brick multiplexing?
12:34:37 <kshlm> post-factum, I know that jdarcy is also facing problems with mempools with brick multiplexing.
12:35:41 <kshlm> He will most likely be working on solving this. But I'm not sure of the priority of this work.
12:35:55 <post-factum> in fact, nothing more to add from me, because i'm looking at it through all my experience as a customer
12:36:48 <obnox> i can add some aspect to memory topic
12:36:52 <kshlm> One of the memory management BOF attendees should give the community an overview of what was discussed.
12:37:01 <obnox> i will start
12:37:43 <kshlm> obnox Go on. I'll add an AI on myself to make someone provide the details of memory management bof.
12:37:53 <obnox> so, one thing was the leaking. as mentioned. if I got it right it is partly due to the (lack of) cleanliness in programming, using these mem pools
12:37:59 <obnox> this is about the bof
12:38:09 <kshlm> #action kshlm will ping someone to share details around the bof.
12:38:22 <amye> kshlm, there's a GDS etherpad that I dropped into the mailing list that might have details (but I doubt it)
12:38:23 * obnox is currently sharing details around the bof
12:38:35 <kshlm> obnox, In that case. I'll mark my AI done.
12:38:36 <obnox> others can fill in more details or correct me
12:39:05 <obnox> so, it was mentioned that samba has a hierarchical memory allocator with destructors called "talloc" that is a wrapper around malloc/free
12:39:39 <obnox> it facilitates clean programming, reduces leaks (by the hierarchical structure) and has a 'talloc-report' feature for debugging
12:39:49 <obnox> it should be thread safe, but this needs checking
12:39:53 <post-factum> is there any reason to reinvent the wheel and not using jemalloc?
12:40:05 <post-factum> ganesha, afaik, already uses that
12:40:15 <obnox> i don't know jemalloc. talloc is in all the linux distros. it is 20 years old
12:40:18 <obnox> or so
12:40:45 <obnox> the guys who have touched samba code confirm that once you've written C code with talloc you don't want to do anything else any more
12:40:55 * obnox needs to google jemalloc
12:41:14 <post-factum> obnox, you mean, they are so exhausted :)?
12:41:28 <obnox> post-factum: no, so thrilled by the pure joy
12:41:38 <obnox> so it was said that someone could go and check out whether talloc could be used (in some components for a start) in gluster
12:41:41 <post-factum> so talloc is better than lsd, i guess
12:41:49 <obnox> probably ;-P
12:41:49 <amye> Gluster: thrilled by the pure joy
12:42:11 <obnox> amye: that's the new marketing slogan
12:42:29 <post-factum> anyway, personally /me welcomes any idea of reusing existing code that is proven to work instead of reinventing own bells
12:42:42 <obnox> post-factum: ack
12:42:44 <amye> obnox: we could have a few of those. </helpful>
12:42:56 <obnox> next aspect in the bof was the problem with samba's setup where the memory problem is multiplied
12:43:26 <post-factum> obnox, you mean, multiple and separate instances of gfapi
12:43:32 <kkeithley> IIRC jdarcy benchmarked his multiplexing work with libc malloc and jemalloc and found almost no difference
12:43:45 <obnox> every (tcp) connection of an smb client to the samba on top of gluster consumes the full memory of the libgfapi instance
12:44:11 <obnox> tcp conn <--1:1--> smbd child process <--1:n--> instances of libgfapi
12:44:27 <obnox> each system that does multiple mounts per volume can have a similar problem
12:44:27 <post-factum> kkeithley, the point is how it performs against memory pools, and how that helps with memory fragmentation which is pretty hard to reproduce by silly synthetic tests
12:44:46 <kkeithley> you can tell jdarcy that.
12:45:12 <post-factum> kkeithley, he is not here, unfortunately
12:45:21 <kkeithley> he found memory pools -- and the way we use them -- are not helping performance any.
12:45:24 <obnox> talloc also features memory pools / pooled objects, which saves many malloc calls and can even be faster than malloc for workloads with lots of smaller allocations
12:45:50 <obnox> talloc itself comes with a small overhead (low single-digit percentage), but the pools mitigate that
12:46:22 <obnox> the pools of talloc do help, because it's per object+child objects, not global
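[For readers of the log: obnox's description of talloc above, with parent contexts, recursive freeing, and destructors, can be sketched conceptually. The sketch below is plain Python, not talloc's actual C API; the class and context names are invented purely for illustration of the hierarchy idea.]

```python
# Conceptual sketch of talloc's core idea: every allocation hangs off a
# parent context, and freeing the parent recursively frees all children,
# running any attached destructor first.

class Ctx:
    def __init__(self, parent=None, name=""):
        self.name = name
        self.children = []
        self.destructor = None
        self.freed = False
        if parent is not None:
            parent.children.append(self)

    def free(self, log):
        # Depth-first: children are released before the parent,
        # mirroring how a hierarchical free walks the tree.
        for child in self.children:
            child.free(log)
        if self.destructor is not None:
            self.destructor()
        self.freed = True
        log.append(self.name)

# Hypothetical hierarchy: one top-level context per request, with a
# connection context and a buffer hanging off it.
pool = Ctx(name="request")
conn = Ctx(parent=pool, name="conn")
buf = Ctx(parent=conn, name="buf")

order = []
pool.free(order)  # one call releases the whole tree, leaf-first
```

In real talloc, this is what makes leaks harder to create: nothing is orphaned, because every allocation has exactly one parent whose lifetime bounds it.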
12:46:52 <obnox> so for the samba (+others) multiplied memory problem, we'd need some kind of proxy
12:47:14 <obnox> several approaches: 1. go back to re-exporting fuse mount (should work today)
12:47:26 <obnox> 2. see what the facebook gfproxy looks like once it's available
12:47:41 <obnox> 3. implement some kind of proxy daemon ourselves.
12:47:41 <kshlm> I look away for a moment and overshot the time for this topic.
12:48:05 <kshlm> So do we have some outcome / AIs on memory management?
12:48:06 <obnox> poornima, rjoseph_ and I will likely discuss #3 and come back to the community later
12:48:12 <obnox> yeah
12:48:15 <kshlm> post-factum, ?
12:48:18 <post-factum> kshlm, should some thread be started on ML for this?
12:48:28 <amye> please. :)
12:48:36 <kshlm> post-factum, Please do it if you feel so.
12:48:54 <obnox> kshlm: I can follow up on the samba triggered problem myself
12:49:03 <kshlm> obnox cool.
12:49:03 <post-factum> kshlm, i guess this is the only option now because it is pretty hard to discuss without having key ppl here
12:49:21 <amye> On the other hand, we have 10 minutes left for Other Things
12:49:25 <kshlm> obnox, You already have an AI on the same from last week's.
12:49:34 <kshlm> Thanks post-factum.
12:49:36 <kshlm> Let's move on.
12:49:41 <obnox> kshlm: oh. well. yeah. was there an ETA? ;-)
12:49:56 <kshlm> #topic Glusto-Tests: Libs/Tests
12:50:09 <kshlm> obnox, I've had an AI for almost a year.
12:50:14 <kshlm> ShwethaHP, Are you here?
12:50:21 <ShwethaHP> kshlm, Yes.. i am here
12:50:24 <obnox> :-)
12:50:35 <kshlm> You have the stage.
12:50:40 * obnox afk for another meeting
12:50:40 <kshlm> ShwethaHP,
12:51:08 <ShwethaHP> So, there is a bunch of glusto-gluster libraries added to the glusto-tests repo for writing up gluster tests.
12:51:31 <ShwethaHP> #link https://github.com/gluster/glusto-tests
12:51:53 <ShwethaHP> I have added up a simple case for reference and that's the BVT case too.
12:52:41 <kshlm> ShwethaHP, That is good news.
12:52:54 <kshlm> Are we finally ready to start writing glusto tests?
12:53:27 <ShwethaHP> Yes, most of the basic libs required for writing the tests are in place.
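[For log readers unfamiliar with Glusto: a test built on these libs is plain Python driving gluster commands on remote machines. The sketch below is hypothetical; `g` here is a local stub standing in for Glusto's remote-execution object (the real API lives in the glusto-tests repo linked above), and the host and volume names are made up for illustration.]

```python
# Hypothetical sketch of the shape of a Glusto-style test.
class FakeGlusto:
    """Stub mimicking a remote-runner with run(host, cmd) -> (rc, out, err)."""

    def run(self, host, cmd):
        # A real runner would execute `cmd` over ssh on `host`;
        # here we just pretend the volume started successfully.
        if "volume start" in cmd:
            return (0, "volume start: testvol: success\n", "")
        return (0, "", "")

g = FakeGlusto()

def test_volume_start(mnode="server1.example.com", volname="testvol"):
    # Run a gluster CLI command on the management node and check the result.
    rc, out, err = g.run(mnode, "gluster volume start %s" % volname)
    assert rc == 0, "volume start failed: %s" % err
    return out

out = test_volume_start()
```

The real libraries wrap patterns like this (volume setup, mount helpers, peer ops) so individual tests stay short and declarative.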
12:53:38 <kshlm> Awesome.
12:54:09 <kshlm> Now we can start enforcing a 'Glusto test required' requirement on 3.10 changes.
12:54:19 <kshlm> Thanks for all this work.
12:54:24 <rastar> Good news.
12:54:27 <ShwethaHP> Volume-related libs are almost there. All other component-related libs are in place. If there are any new things to be added, we should take that up
12:54:30 <amye> kshlm++
12:54:30 <zodbot> amye: Karma for kshlm changed to 5 (for the f24 release cycle):  https://badges.fedoraproject.org/tags/cookie/any
12:54:56 <kshlm> ShwethaHP++ for the work on Glusto-libs.
12:55:10 <kshlm> loadtheacc++ for his work on Glusto.
12:55:19 <sankarshan> What would be a workflow to ensure that the tests (which are written for Glusto) are correct, appropriate, relevant and maintained? Or, is that something which comes later?
12:55:45 <kshlm> ShwethaHP, What are your next steps?
12:56:01 <kshlm> sankarshan, First we start writing tests.
12:56:24 <kshlm> And make up the rest as we go along. :D
12:56:39 <ShwethaHP> kshlm, My next step is contributing component-specific BVT cases, so that we cover all the BVTs first.
12:57:08 <msvbhat> Setting up a CI running these tests maybe? Against git master?
12:57:15 <kshlm> ShwethaHP, Good to know. Let the community know if you need more help doing that.
12:57:20 <sankarshan> Well, that was what I was intending to ask - I read your statement as eventually leading to "no merge without a valid test for Glusto". Is that what you were intending to convey?
12:57:21 <ShwethaHP> msvbhat, nigelb is working on that as well
12:57:30 <kshlm> msvbhat, I think nigelb and loadtheacc are working on that.
12:57:33 <msvbhat> Ah, Okay
12:57:49 <kshlm> Last I heard nigelb said he was ready to launch jobs.
12:58:04 <kshlm> I have to stop this topic here.
12:58:12 <kshlm> Thanks for all the good news ShwethaHP.
12:58:13 <ShwethaHP> loadtheacc, and nigelb are working to get it done.
12:58:30 <ShwethaHP> kshlm, :-)
12:58:36 <kshlm> From the next week, you have a Testing/Glusto section in the meeting agenda.
12:58:46 <ShwethaHP> kshlm, will update any new changes in the ML
12:58:51 <kshlm> You can provide weekly updates there.
12:58:56 <ShwethaHP> kshlm, sure
12:59:02 <kshlm> Or to the mailing list and link it in the agenda.
12:59:08 <kshlm> Thank you again.
12:59:18 <kshlm> #topic Important updates.
12:59:42 <kshlm> The important update this week is that aravindavk has called out for a 3.9 test day tomorrow.
12:59:54 <amye> Are we ready for it?
12:59:58 <kshlm> #link https://www.gluster.org/pipermail/gluster-devel/2016-October/051262.html
13:00:06 <kshlm> aravindavk, Do you have anything to add?
13:00:38 <aravindavk> kshlm: No, we are yet to tag rc2, will do that in some time
13:00:42 <kshlm> amye, We need packages built for rc2. That requires rc2 to be tagged.
13:00:53 <amye> Ack
13:00:55 <kshlm> aravindavk, Will it be ready before tomorrow?
13:01:07 <aravindavk> kshlm: yes
13:01:14 <kshlm> Thanks.
13:01:36 <kshlm> I expect everyone here to be testing the rc2 packages tomorrow.
13:01:54 <kshlm> And I expect aravindavk to ensure that everyone does it.
13:01:55 <kshlm> :)
13:02:03 <sankarshan> haha
13:02:04 <sankarshan> :D
13:02:08 <kshlm> Alright, that's all the time we have.
13:02:18 <kshlm> #topic Regular announcements
13:02:27 <kshlm> If you're attending any event/conference please add the event and yourselves to Gluster attendance of events: http://www.gluster.org/events (replaces https://public.pad.fsfe.org/p/gluster-events)
13:02:27 <kshlm> Put (even minor) interesting topics on https://public.pad.fsfe.org/p/gluster-weekly-news
13:02:27 <kshlm> Remember to add your updates to the next meetings agenda.
13:02:37 <kshlm> Thanks everyone.
13:02:43 <kshlm> This was a good meeting. :)
13:02:57 <kshlm> We'll be continuing this format for the next 2 meetings.
13:03:04 <kshlm> Thanks again.
13:03:07 <kshlm> #endmeeting