gluster-meeting
LOGS
17:30:40 <kshlm> #startmeeting gluster
17:30:47 <kshlm> Let's begin!
17:30:53 <kshlm> Welcome everyone.
17:30:58 <kshlm> #topic Roll Call
17:31:34 * kshlm _o/
17:31:40 * kkeithley is here
17:31:58 * anoopcs is here
17:32:09 <nigelb> o/
17:33:08 * rastar is here
17:33:34 * msvbhat is here
17:33:43 <samikshan> o/
17:33:47 <kshlm> Let's start.
17:33:53 <kshlm> Welcome again everyone.
17:34:10 <kshlm> The agenda is always available at https://public.pad.fsfe.org/p/gluster-community-meetings
17:34:26 * ira is here.
17:34:40 <kshlm> Add any topics you want to discuss that aren't already in the agenda under Open-floor
17:34:53 <kshlm> #topic Next weeks host
17:35:15 <kshlm> First up, do we have volunteers to host the next meeting?
17:35:27 * jiffin is here
17:36:11 <kshlm> No one?
17:37:13 <samikshan> I could try doing that next week. Haven't done it before though.
17:37:28 <kshlm> samikshan, Cool!
17:37:36 <kshlm> It's not too hard.
17:37:59 <kshlm> Just follow the agenda and what we're doing today.
17:38:03 <samikshan> Yep will just follow the process.
17:38:09 <kshlm> There should be people around to help.
17:38:18 <kshlm> #info samikshan is next week's host
17:38:33 <kshlm> Thanks samikshan++
17:38:34 <zodbot> kshlm: Karma for samxan changed to 1 (for the f24 release cycle):  https://badges.fedoraproject.org/tags/cookie/any
17:38:42 <samikshan> :)
17:38:52 <kshlm> #topic GlusterFS-4.0
17:39:10 <kshlm> I don't have much to report with regards to GD2 for this week.
17:39:15 <kshlm> I've not progressed a lot.
17:39:46 <kshlm> And I don't see anyone else around to provide updates on DHT2 and JBR.
17:40:00 <kshlm> I'm assuming no progress there either.
17:40:28 <kshlm> I'll move on to the next topic unless we have anything else related to 4.0 to discuss.
17:41:35 <kshlm> Okay.
17:41:41 <kshlm> #topic GlusterFS-3.9
17:42:01 <kshlm> aravindavk, Any updates for us?
17:42:38 <kshlm> He doesn't seem to be around.
17:42:41 <aravindavk> kshlm: 3.9 branch created, we are working on stabilizing the branch
17:42:48 <kshlm> Oh. There you are.
17:42:58 * ndevos _o/
17:43:06 <aravindavk> Pranith is collecting Tests details which we can run for 3.9
17:43:31 <kshlm> That's cool. I'm just adding tests for GlusterD.
17:44:36 <kshlm> aravindavk, Please make sure to announce the branching on the mailing-lists. (If you'd already done it, I must have missed it. Sorry)
17:44:48 <aravindavk> kshlm: I did
17:44:57 <kshlm> Link?
17:45:29 <aravindavk> kshlm: http://www.gluster.org/pipermail/gluster-devel/2016-September/050741.html
17:45:39 <kshlm> #link https://www.gluster.org/pipermail/gluster-devel/2016-September/050741.html
17:45:42 <kshlm> Thanks aravindavk
17:46:23 <kshlm> aravindavk, Could you also update the mail with the dates you're targeting?
17:46:35 <kshlm> For the rc, beta and final release?
17:47:09 <aravindavk> kshlm: sure
17:47:13 <kshlm> We can't keep accepting patches right up until the release date.
17:47:34 <kshlm> I'll add an AI for this on you.
17:48:06 <kshlm> #action aravindavk to update the lists with the target dates for the 3.9 release
17:48:43 <kshlm> aravindavk, Do you have anything else to share?
17:48:55 <aravindavk> kshlm: thanks. thats all from my side
17:49:10 <kshlm> aravindavk, Okay.
17:49:29 <kshlm> Everyone make sure to fill up pranithk's list of tests for your components soon.
17:49:34 <kshlm> Let's move on.
17:49:39 <kshlm> #topic GlusterFS-3.8
17:49:55 <kshlm> ndevos, You're up.
17:50:06 <kshlm> Everything fine in release-3.8 land?
17:50:07 <ndevos> nothing special to note, all goes according to plan
17:50:39 <ndevos> I'll probably start to release tomorrow or friday, and will ask the maintainers to refrain from merging patches
17:50:44 <kshlm> So the next release will happen on the 10th?
17:51:04 <ndevos> yes, I plan to do that
17:51:12 <kshlm> ndevos, Sounds good.
17:51:20 <kshlm> You've got it under control.
17:51:33 <ndevos> (might be another airport release, we should give the releases useless names)
17:51:34 <kshlm> If there's nothing else, I'll move on.
17:51:46 <ndevos> nope, move on :)
17:51:51 <kshlm> Thank you.
17:51:56 <kshlm> #topic GlusterFS-3.7
17:52:16 <kshlm> 3.7.15 has been released.
17:52:35 <jkroon> bug fixes?  or release notes?
17:53:04 <jkroon> (we're seeing "random" insane load on 3.7.14, specifically with many, many small files)
17:53:11 <kshlm> jkroon, Notes are available at https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.15.md
17:53:32 <kshlm> The release happened on time, and was smooth.
17:53:38 <kshlm> Thanks everyone who helped.
17:54:10 <kshlm> jkroon, Have you discussed the issue before?
17:54:19 <kshlm> I mean on the mailing lists or on IRC?
17:54:50 <jkroon> kshlm, only saw it yesterday.
17:55:17 <jkroon> we had a VERY long discussion between JoeJulian and myself a while back - that one was resolved by a kernel downgrade from 4.6.4 to 3.19.5 (an older version).
17:55:31 <jkroon> seems that was caused by underlying mdadm problems. In this case, however, that did not help.
17:56:12 <kkeithley> 3.8.4 will be the Humberto Delgado release.
17:56:56 <kshlm> jkroon, Okay. Please let's discuss your issue further after this meeting.
17:57:13 <kshlm> It can continue on #gluster{,-dev} or on the mailing lists.
17:57:32 <jkroon> kshlm, i'd really appreciate that, i'm in #gluster.
17:57:38 <kshlm> We should be able to help fix it.
17:58:13 <kshlm> That's it for 3.7
17:58:27 <kshlm> I'll move on if we have nothing further to discuss. I don't have anything else.
17:59:35 <kshlm> I'll skip 3.6. Nothing to add there, other than the pending EOL.
18:00:02 <kshlm> #topic Project Infrastructure
18:00:15 <kshlm> nigelb, misc, ^
18:00:23 * post-factum is late
18:00:55 <nigelb> Hello!
18:01:09 <kshlm> nigelb, Hi :)
18:01:19 <nigelb> I've done one round of cleaning up the gluster-maintainers team as I discussed on the mailing list.
18:01:19 <kshlm> Hey post-factum!
18:01:28 <post-factum> kshlm: o/
18:01:39 <nigelb> So if you suddenly lost access to merge, file a bug (if you lost access, you should have gotten an email from me though)
18:02:01 <nigelb> I'm working with Shyam to get some upstream performance numbers.
18:02:20 <kshlm> nigelb, Was there a lot to cleanup?
18:02:28 <nigelb> About 5 or 6.
18:02:54 <nigelb> The rough strategy is we'll publish the scripts to get performance numbers, our machine specs and the numbers we get on our infra.
18:03:21 <kshlm> nigelb, Where will these be running?
18:03:34 <nigelb> It may be running on internal hardware.
18:03:37 <kshlm> On centos-ci? Or do we have new infra available?
18:03:43 <nigelb> This is why we'll publish the specs.
18:04:07 <nigelb> I'm still working out the details (in fact, I have a call after this)
18:04:09 <kshlm> So, it's still unknown then.
18:04:16 <kshlm> Okay.
18:04:29 <kshlm> We all look forward to seeing this done.
18:04:33 <nigelb> I've been putting some work into the failure dashboard
18:04:41 <nigelb> The latest is here -> https://2586f5f0.ngrok.io/
18:04:49 <nigelb> I think it's ready to be our single source of truth.
18:04:55 <nigelb> It has information from the last 2 weeks.
18:05:07 <nigelb> (you're hitting my laptop, so go easy there)
18:05:55 <kshlm> Oh. I thought it was static.
18:06:01 <nigelb> There's a bit more clean up I need to do so it doesn't get DDoSed. And I'll get it on a public URL.
18:06:10 <nigelb> Nope, there's a good amount of db queries.
18:06:44 <nigelb> Rather than sending emails, I wanted one place that would record the information.
18:06:53 <nigelb> And you could do an archive search if you wanted.
18:07:21 <kshlm> nigelb, I'd still want emails.
18:07:43 <kshlm> It's a good way to nag people to fix whatever needs fixing.
18:07:51 <nigelb> Sure, but now we can do targeted emails. Or bugs.
18:07:52 <ndevos> yeah, emails are easier to follow up and check progress
18:08:02 <nigelb> Like that list has 3 failures which need looking into
18:08:08 <nigelb> because they've happened 10+ times.
18:08:47 <ndevos> what job results are these?
18:08:50 <kshlm> Targeted is good.
18:08:54 <nigelb> centos and netbsd
18:08:58 <kshlm> As long as we don't swamp -devel or maintainers as we do now.
18:09:14 <ndevos> no, I mean, is it running git/HEAD, or patches that are posted
18:09:34 <ndevos> in the 2nd case, many patches need to update/correct the .t files, and that is done only after it fails a couple of times
18:09:51 <nigelb> Yeah, this is from patches submitted.
18:09:55 <nigelb> so that sort of failure is not excluded.
18:10:05 <ndevos> (of course developers should run the tests before posting patches, but that rarely happens)
18:10:13 <nigelb> next week on, we'll have enough data from regression-test-burn-in as well
18:10:22 <nigelb> which should give us the status of master.
18:10:35 <kshlm> We fixed regression-test-burn-in yesterday!
18:10:38 <ndevos> ok, results from regression-test-burn-in would definitely be more useful
18:10:50 <nigelb> indeed.
18:10:58 <nigelb> And we also have a working strfmt_errors job.
18:11:08 <nigelb> master is green and a few release branches need to be green.
18:11:11 <nigelb> then I can turn it on.
18:11:22 <nigelb> (turn voting on it, rather)
18:11:34 <ndevos> ah, yes, but strfmt is not correct for release-3.7 and 3.8 yet, I've sent backports to get those addressed
18:11:46 <nigelb> ndevos: can you link me to the patches?
18:11:57 <nigelb> I'll watch them, so I can turn them to vote after those are merged.
18:12:09 <ndevos> #link http://review.gluster.org/#/q/I6f57b5e8ea174dd9e3056aff5da685e497894ccf
18:12:09 <nigelb> misc is working on getting VMs on the new hardware.
18:12:15 <kkeithley> strfmt fixes for 3.7 is/are already merged
18:12:24 <ndevos> thanks kkeithley!
18:12:24 <nigelb> We're targeting moving rpm jobs to our hardware in the near future.
18:12:30 <nigelb> And then centos regression.
18:12:47 <nigelb> hopefully, in a few months, we shouldn't have any rackspace machines.
18:12:57 <kkeithley> \o/
18:13:08 <nigelb> and we may have machines that we can use and rebuild for tests.
18:13:30 <nigelb> The next round of cleanup is going to target jenkins users.
18:13:43 <nigelb> I'll send an email to devel once I have the energy to fix it up.
18:13:50 <ndevos> we want to drop all rackspace machines? or did their sponsoring completely stop?
18:14:02 <nigelb> the sponsoring situation hasn't changed.
18:14:11 <misc> I do not think we should drop, but reduce for sure
18:14:24 <nigelb> drop all rackspace machines -> for CI.
18:14:30 <nigelb> we'll still have some machines there.
18:14:34 <mchangir> oh ... we could have a build farm and a test farm
18:14:45 <nigelb> But sort of as a burstable capacity.
18:14:55 <nigelb> mchangir: we only have 4 machines, not enough to run a test farm.
18:15:06 <nigelb> Our test farm will have to be centos CI.
18:15:22 <nigelb> That's all from infra, unless you have questions for me.
18:15:24 <kshlm> nigelb, Awesome news!
18:15:33 <kkeithley> we have other machines (internal) for running test farms on
18:15:36 <kshlm> But I'd like to move on to the next topic now.
18:15:51 <kshlm> Thanks nigelb
18:15:59 <kshlm> #topic NFS-Ganesha
18:16:01 <ndevos> and the CentOS CI has many machines we can use for testing too - just needs someone to write/maintain tests
18:16:03 * ** rastar is now known as rtalur_afk
18:16:12 * ** rtalur_afk is now known as rastar_afk
18:16:43 <kshlm> kkeithley, ndevos ?
18:16:52 <kkeithley> ?
18:16:56 <kkeithley> oh
18:17:16 <kshlm> kkeithley, #topic NFS-Ganesha
18:17:35 <kkeithley> 2.4rc3 was tagged yesterday.  expect GA in two weeks if we can figure out the perf regressions from dev29 by then
18:18:11 <kkeithley> (dev29 was the last -dev release before rc1)
18:18:35 <kshlm> kkeithley, Thanks.
18:18:40 <kkeithley> I believe skoduri has some patches out for review to address some of the perf regressions. so we're optimistic
18:18:58 <kshlm> The regression was in gluster?
18:19:18 <kkeithley> no, in nfs-ganesha
18:19:33 <kkeithley> all testing has been on top of glusterfs-3.8.2
18:20:21 <kkeithley> or 3.8.1
18:20:39 <kshlm> kkeithley, I hope the regressions get fixed :)
18:20:51 <kshlm> I'll move on if there is nothing else.
18:20:52 <kkeithley> you're not the only one. ;-)
18:21:51 <kshlm> Thanks again kkeithley
18:21:55 <kshlm> #topic Samba
18:21:57 * obnox is summoned
18:22:07 <kshlm> obnox, Yes you are.
18:22:13 <obnox> it seems samba 4.5.0 is going to be released today
18:22:22 <obnox> (instead of 4.5.0 rc4)
18:22:52 <obnox> rastar_afk: has proposed a patch to samba upstream to teach the gluster vfs module about multiple volfile servers
18:23:01 <obnox> (as supported by libgfapi)
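(Context: libgfapi lets a client register more than one volfile server by calling glfs_set_volfile_server() repeatedly before glfs_init(); the extra entries act as fallbacks if the first server is unreachable. The sketch below is a hypothetical illustration of such a caller, not the actual Samba patch; the volume name, hostnames, and log path are made-up examples.)

    /* Hypothetical sketch only: how a libgfapi consumer (such as Samba's
     * vfs_glusterfs module) might register multiple volfile servers.
     * Volume name, hostnames, and log path are illustrative assumptions. */
    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("testvol");
        if (!fs)
            return 1;

        /* Each call adds another volfile server to try, in order. */
        glfs_set_volfile_server(fs, "tcp", "server1.example.com", 24007);
        glfs_set_volfile_server(fs, "tcp", "server2.example.com", 24007);
        glfs_set_volfile_server(fs, "tcp", "server3.example.com", 24007);

        glfs_set_logging(fs, "/tmp/gfapi.log", 7);

        if (glfs_init(fs)) {   /* fetches the volfile and connects */
            fprintf(stderr, "glfs_init failed\n");
            glfs_fini(fs);
            return 1;
        }

        /* ... normal glfs_open()/glfs_read()/glfs_write() usage ... */

        glfs_fini(fs);
        return 0;
    }

(Would be compiled with something like `gcc example.c -lgfapi`, assuming the glusterfs-api development headers are installed.)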
18:23:19 <obnox> performance testing with samba with the md-cache improvements is ongoing
18:23:30 <obnox> that's all i can think of right now
18:23:49 <obnox> ah on a related note:
18:24:10 <kshlm> obnox, How is the performance? All good?
18:24:19 <obnox> there are requests of extending CTDB to provide better ha management for gluster-nfs
18:24:31 <obnox> kshlm: poornima would have the latest info on that
18:24:47 <obnox> i think it is going well, nice improvements
18:25:05 <obnox> currently testing also that no general functional regressions are introduced when using samba on top of those patches
18:25:59 <kshlm> obnox, I hope there are none.
18:26:02 <kshlm> Thanks obnox.
18:26:12 <kshlm> I'll move on if you're done.
18:26:16 <obnox> yep, please
18:26:40 <kshlm> Thanks again.
18:26:49 <kshlm> #topic Last weeks AIs
18:26:57 <kshlm> #topic improve cleanup to control the processes that test starts
18:27:05 <kshlm> Doesn't have an assignee.
18:27:19 <kshlm> Who was this AI on?
18:27:47 <ndevos> either nigelb, jdarcy or me, I guess?
18:28:01 <ndevos> or maybe rastar_afk
18:28:16 <jiffin> kshlm: I guess it was on rastar
18:28:32 <kshlm> I'm looking at the logs,
18:28:34 <ndevos> I've not seen any progress there, but I might have missed it...
18:28:43 <kshlm> and ndevos jdarcy and rastar_afk are involved.
18:28:54 <kshlm> ndevos, Keep it open for next week?
18:28:54 <ndevos> it was about killing processes that started in the background
18:29:12 <ndevos> yes, thats fine
18:29:41 * ndevos doesn't know how urgent it is anyway
18:29:49 <kshlm> #action rastar_afk/ndevos/jdarcy to  improve cleanup to control the processes that test starts
18:30:02 <kshlm> No more AIs. So...
18:30:07 <kshlm> #topic Open Floor
18:30:17 <post-factum> memory leaks! i guess we need dedicated topic for them
18:30:20 <kshlm> Looking at the list looks like just announcements.
18:30:30 <kshlm> From me and kkeithley
18:30:41 <kkeithley> well, info, not announcements.
18:31:01 <kshlm> yeah.
18:31:01 <kkeithley> also no hot sauce, no Ghiradelli chocolate until I get my reviews.
18:31:22 <kkeithley> ;-)
18:31:54 <kshlm> post-factum, You're the force wiping the leaks.
18:32:03 <post-factum> kshlm, I've figured out how to use the Valgrind massif profiler and updated the BZ about the FUSE client leak. Nithya should take a look at that.
18:32:05 <kshlm> Just keep going.
18:32:11 <kshlm> :)
18:32:35 <kshlm> Awesome! So finally you got it figured.
18:33:36 <kshlm> post-factum, I'm okay with adding a mem-leak topic, if you agree to provide updates on your adventures wiping leaks.
18:33:57 <kkeithley> I should set up some VMs to run longevity on 3.7.x. (current longevity is on bare metal.) I'd also like to test other things besides DHT+AFR volumes, e.g. tiering
18:34:25 <post-factum> kshlm: unfortunately, no. Those are non-systematic, and my plan is to move to RH in the next 2 weeks, so I'm not sure I'll be able to do much with Gluster for the next couple of months
18:35:14 <kshlm> post-factum, That's okay. Just pop in the topic under open-floor whenever you have anything.
18:35:27 <kshlm> And nbalacha has just shown up.
18:35:30 <post-factum> kshlm: okay, sure
18:35:35 <post-factum> nbalacha: just in time :)
18:35:43 <kshlm> If you need to discuss leaks, do it on #gluster-dev
18:35:51 <kshlm> I want to end this meeting soon.
18:36:02 <post-factum> okay
18:36:15 <kshlm> I'll dump all the Open-floor topics here.
18:36:34 <kshlm> More review is needed for https://github.com/gluster/glusterdocs/pull/139; this is the update to the release process doc.
18:36:38 <kshlm> It needs to merge.
18:36:51 <kshlm> From kkeithley
18:37:00 <kshlm> 'please review nine remaining 'unused variable' patches, otherwise I'm bringing my laser eyes to BLR next week to glare at you.
18:37:00 <kshlm> reviewers, you know who you are! (And so do I)'
18:37:16 <kkeithley> chocolate and hot sauce are the reward!
18:37:31 <kshlm> kkeithley again on the longevity cluster
18:37:34 <kkeithley> for everyone
18:37:44 <kshlm> '''longevity cluster (36 days running) https://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/
18:37:44 <kshlm> glusterd                 RSZ 22132 -> 22132, VSZ 673160 -> 673160 (small change in VSZ from last week)
18:37:44 <kshlm> glusterfs (fuse client)  RSZ 59628 -> 61460, VSZ 800044 -> 947508 (small change in RSZ, no change in VSZ from last week)
18:37:44 <kshlm> glusterfs (shd)          RSZ 58928 -> 59388, VSZ 747476 -> 813012 (tiny change in RSZ, no change in VSZ from last week)
18:37:44 <kshlm> glusterfsd               RSZ 47452 -> 47832, VSZ 1319908 -> 1584108 (small change in RSZ, no change in VSZ from last week)
18:37:44 <kshlm> (no, I haven't added state dumps yet. still running 3.8.1)
18:37:44 <kshlm> '''
18:37:46 <ndevos> #link http://review.gluster.org/#/q/status:open+project:glusterfs+branch:master+topic:bug-1369124
18:38:05 <kshlm> And finally, the regular announcements.
18:38:17 <kshlm> If you're attending any event/conference, please add the event and yourselves to Gluster attendance of events: http://www.gluster.org/events (replaces https://public.pad.fsfe.org/p/gluster-events)
18:38:24 <kshlm> Put (even minor) interesting topics on https://public.pad.fsfe.org/p/gluster-weekly-news
18:38:37 <kshlm> This should be fortnightly news now.
18:38:57 <ndevos> hmm, I thought I sent a pull-request for the events page last week...
18:39:02 <kshlm> Anyways, I'm ending this meeting.
18:39:03 * ** skoduri|training is now known as skoduri
18:39:08 <kshlm> Thanks everyone.
18:39:13 <ndevos> thanks kshlm!
18:39:20 <kshlm> Let's meet next week.
18:39:24 <kshlm> samikshan will be hosting.
18:39:27 <kshlm> #endmeeting