weekly_community_meeting_03-aug-2016
LOGS
12:01:38 <kkeithley> #startmeeting Weekly community meeting 03-Aug-2016
12:01:38 <zodbot> Meeting started Wed Aug  3 12:01:38 2016 UTC.  The chair is kkeithley. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:01:38 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:01:38 <zodbot> The meeting name has been set to 'weekly_community_meeting_03-aug-2016'
12:01:38 <zodbot> hagarth: Error: Can't start another meeting, one is in progress.
12:01:41 <post-factum> zod's dead, baby
12:01:46 <hagarth> ah good
12:01:49 <kkeithley> #chair ankitraj
12:01:49 <zodbot> Current chairs: ankitraj kkeithley
12:01:56 <ankitraj> #topic Roll call
12:02:00 <post-factum> o/
12:02:05 * kkeithley is here
12:02:05 * jiffin is here
12:02:06 <kshlm> o/
12:02:12 * anoopcs is here
12:02:17 * loadtheacc is here
12:02:26 <ankitraj> I'll wait a minute for everyone to open the agenda.
12:02:26 <ndevos> hi!
12:02:26 * skoduri is here
12:03:21 <ankitraj> Welcome everyone!
12:03:36 <ankitraj> #topic Next weeks meeting host
12:03:51 * ira is here.
12:04:11 <kshlm> I'll do it next week.
12:04:14 <ankitraj> anyone interested in hosting the meeting next week?
12:04:28 <ankitraj> ok kshlm, I will assign it to you
12:04:29 <kshlm> I'll alternate every other week from now on.
12:04:32 * aravindavk is here
12:05:02 <ankitraj> #topic
12:05:02 <ankitraj> GlusterFS 4.0
12:05:08 * hagarth is here (on and off)
12:05:09 <ankitraj> #topic GlusterFS 4.0
12:05:28 <kshlm> Are the 4.0 developers around?
12:05:32 <kshlm> atinm, jdarcy?
12:05:40 <kshlm> I'll give an update on GD2.
12:05:44 <ankitraj> ok
12:05:52 <kshlm> I've nearly got the multi-node txn working.
12:06:08 <kshlm> It's been moving very slowly.
12:06:13 <atinm> jdarcy is on vacation and there is no movement on JBR as he is occupied with brick multiplexing stuff
12:06:19 <kshlm> Some existing code needed refactoring.
12:06:30 <kshlm> Also, I set up CI for GD2 on centos-ci this week.
12:06:38 <anoopcs> Cool
12:06:44 <kshlm> (One AI ticked for this week)
12:06:47 <atinm> I haven't heard any progress on DHT2 from shyam
12:07:06 <kshlm> But it has some poor github integration causing builds to be marked as failures even when they aren't.
12:07:45 <kshlm> Anyone else around to provide updates on DHT2 and JBR?
12:07:58 * msvbhat arrives bit late
12:08:55 <ankitraj> is there anything more to discuss on this topic?
12:09:21 <ankitraj> ok now moving to next topic
12:09:25 <ankitraj> #topic GlusterFS 3.9
12:09:39 <kshlm> aravindavk, Any updates?
12:09:40 <ankitraj> any updates
12:09:56 <kshlm> Where is pranithk? He should attend these meetings.
12:10:49 <post-factum> he is idle in -dev
12:10:50 <kshlm> aravindavk, ?
12:10:50 <aravindavk> nothing much from last week
12:10:57 <atinm> we have got a bunch of eventing patches getting into mainline, that's what I can update from the 3.9 perspective
12:10:58 <skoduri> aravindavk, I added a feature-spec for posix-locks reclaim support
12:11:01 <skoduri> http://review.gluster.org/#/c/15053/
12:11:07 <aravindavk> expecting some more pull requests for roadmap page
12:11:09 <skoduri> aravindavk, I'd appreciate it if you could add it to 3.9
12:11:40 <aravindavk> skoduri: edit the roadmap page by clicking "Edit" link in footer
12:11:52 <skoduri> aravindavk, okay will do..thanks
12:12:11 <kshlm> pranithk and I have been discussing sub-directory mounts on and off.
12:12:28 <kshlm> Still no idea if we can get it done for 3.9
12:12:28 <ndevos> ... they still work with NFS and Samba ;-)
12:12:36 <post-factum> kshlm: that is long-awaited killer-feature
12:13:01 <kshlm> ndevos, Yeah. They're always there.
12:13:04 <kshlm> post-factum, Yup.
12:13:07 <hagarth> +1 to sub-directory mounts .. would be really cool to have with fuse.
12:13:18 <kshlm> But pranithk is just so busy right now.
12:13:29 <hagarth> kshlm: anything worth discussing on -devel wrt sub-dir mounts?
12:13:33 <kshlm> I can hear him around, but can't see.
12:13:50 <aravindavk> kshlm: will call him, I can see
12:13:54 <kshlm> hagarth, Mainly on the UX.
12:14:10 <kshlm> The UX for authentication.
12:14:38 <kshlm> We also need to better understand the patch under review sent by jdarcy.
12:14:46 <hagarth> kshlm: ok
12:15:04 <kshlm> We wanted to sit together to understand it better before starting a mailing list conversation.
12:15:33 <kshlm> I guess that's all for 3.9 this week.
12:15:35 <ankitraj> ok now moving to next topic
12:15:39 <ankitraj> #topic GlusterFS 3.8
12:15:49 <ankitraj> any updates?
12:16:15 <ndevos> nope, nothing special
12:16:32 <kshlm> Still on track for 10th?
12:16:33 <ndevos> 3.8.2 will get done in a week from now, things look good
12:16:40 <kkeithley> infra team was planning things for 8-9th.
12:16:42 <kshlm> Awesome.
12:16:59 <kshlm> kkeithley, I forgot that. Thanks for bringing that up.
12:17:02 <kkeithley> if their things don't go as planned, then what?
12:17:02 <ndevos> oh, I thought that got rescheduled by amye, kkeithley?
12:17:08 <kkeithley> did it?
12:17:17 <kshlm> I'm not clear about it either.
12:18:01 <kshlm> amye did mention that the move would be scheduled after 3.8.2
12:18:08 <kshlm> No specific date was given.
12:18:11 <ndevos> but yes, if that happens, Jenkins/Gerrit might misbehave and the release gets delayed
12:18:41 <misc> mh  ?
12:19:05 <misc> I said in meeting after the release
12:19:15 <misc> cause I do not want to make any change while a release is going on
12:19:25 <ndevos> thanks misc!
12:19:44 <ndevos> what meeting was that, and is there a chat log?
12:19:53 <kshlm> https://www.gluster.org/pipermail/gluster-infra/2016-July/002523.html was the mail amye sent.
12:20:14 <misc> my team meeting, and the cage one (both internal meeting)
12:20:27 <ndevos> ah, ok
12:20:36 <misc> mhh so yeah, i did miss that email
12:20:55 <misc> but yeah, not gonna do when a release is out
12:21:11 <kkeithley> sounds like we're good then
12:21:16 <misc> and so far, we still haven't got the server physically moved, waiting on network
12:21:17 <kshlm> There should be a big enough window between 3.82. and 3.7.15
12:21:36 <misc> kshlm: yeah
12:21:37 <post-factum> 3.82 is somewhat distant...
12:21:38 <kkeithley> 3.6.10?
12:21:50 <kkeithley> 3.8.2 is in 7 days
12:22:05 <kshlm> kkeithley, We're not doing any more regular 3.6.x releases
12:22:19 <misc> and no irregular release :) ?
12:22:24 <kshlm> post-factum, :)
12:22:34 <kshlm> misc, That we're not sure of yet.
12:22:50 <ndevos> kshlm: there were some 3.6.x bugs reported recently, some looked quite severe
12:22:52 <kkeithley> there was something on #gluster yesterday about 3.6 that sounded like it was worthy of a 3.6.10 release
12:22:55 <post-factum> when we are sure 3.9 is out, and then we are certainly sure
12:22:56 <misc> kshlm: how long before would it be decided ?
12:23:15 <kshlm> ndevos, I didn't know that.
12:23:42 <kshlm> kkeithley, Any links?
12:23:44 <kkeithley> and yes, I know we're not doing _regular_ 3.6 releases
12:23:57 <misc> so basically, if we want to touch the service, doing that more than 1 week before a release would be ok?
12:24:15 <kkeithley> I'll have to look at the #gluster logs to reconstruct
12:24:21 <kshlm> kkeithley, Thanks.
12:24:27 <kshlm> misc, I think so.
12:24:41 <ndevos> latest 3.6 bugs are on https://bugzilla.redhat.com/buglist.cgi?f1=bug_status&f2=version&list_id=5591260&o1=notequals&o2=regexp&order=changeddate%20DESC%2Ccomponent%2Cbug_status%2Cpriority%2Cassigned_to%2Cbug_id&product=GlusterFS&query_based_on=&query_format=advanced&v1=CLOSED&v2=%5E3.6
12:24:51 <ndevos> well, that's an ugly URL
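That long Bugzilla URL is just an advanced-search query string; each (fN, oN, vN) triple is one criterion (field, operator, value). A sketch of rebuilding it with Python's `urllib` — the parameter values are taken from the URL above, the reconstruction itself is illustrative:

```python
from urllib.parse import urlencode

# Rebuild the Bugzilla advanced-search query from the URL above.
# Each (fN, oN, vN) triple is one search criterion: field, operator, value.
params = [
    ("f1", "bug_status"), ("o1", "notequals"), ("v1", "CLOSED"),
    ("f2", "version"), ("o2", "regexp"), ("v2", "^3.6"),
    ("product", "GlusterFS"),
    ("query_format", "advanced"),
    ("order", "changeddate DESC,component,bug_status,priority,assigned_to,bug_id"),
]
url = "https://bugzilla.redhat.com/buglist.cgi?" + urlencode(params)
print(url)  # open 3.6 bugs, newest change first
```

`urlencode` takes care of escaping characters like `^` (which becomes `%5E`), which is why the pasted URL looks so noisy.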
12:24:52 <misc> kshlm: unless there is a emergency security release, but I can imagine that being "exceptional"
12:25:00 <misc> and well, if we are unlucky, we are unlucky
12:25:23 <ndevos> ... also, wasnt this the 3.8 topic?
12:25:31 <kshlm> Yeah. We got carried away.
12:25:36 <misc> oups sorry :(
12:25:49 <ankitraj> ok we are moving to next topic
12:25:53 <ankitraj> #topic GlusterFS 3.7
12:26:02 <kshlm> So 3.7.14 was done.
12:26:04 <post-factum> v3.7.14 is the first release that contains all memleak-related fixes planned after 3.7.6. yay!
12:26:11 <kshlm> post-factum, Yay!!!
12:26:23 <post-factum> so long road
12:26:23 <kshlm> This was a pretty good release.
12:26:40 <kshlm> This went out nearly on time.
12:26:55 <kshlm> The builds were done really quickly.
12:26:58 <kkeithley> hurray for going out (nearly) on time.
12:27:01 <kshlm> Thanks ndevos and kkeithley
12:27:07 <kshlm> ndevos++ kkeithley++
12:27:07 <zodbot> kshlm: Karma for devos changed to 1 (for the f24 release cycle):  https://badges.fedoraproject.org/tags/cookie/any
12:27:26 <kshlm> I haven't heard of any complaints yet.
12:27:44 <kshlm> And I think this has also fixed the problems with proxmox.
12:27:52 <kshlm> A really good release overall.
12:28:02 <ndevos> nice!
12:28:06 <kshlm> 3.7.15 is now on track for 10th September
12:28:14 <post-factum> let 3.7.15 be as good as 3.7.14
12:28:20 <ndevos> and lets try not to break it with the next release ;-)
12:28:35 <kshlm> The tracker is open https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.7.15
12:28:36 <glusterbot> Bug glusterfs: could not be retrieved: InvalidBugId
12:28:37 <ndevos> 3.7.15 would be 30 August?
12:28:54 <kshlm> Oops! Got confused
12:29:03 <kshlm> It's 30 August.
12:29:04 <ndevos> :)
12:29:13 <kshlm> I'll continue as the release-manager.
12:29:24 <kshlm> kkeithley++
12:29:25 <glusterbot> kshlm: kkeithley's karma is now 9
12:29:30 <kshlm> You didn't get the karma earlier.
12:29:35 <kkeithley> heh
12:29:41 <kshlm> And that's it.
12:30:02 <ankitraj> #topic GlusterFS 3.6
12:30:21 <post-factum> goto out;
12:30:30 <kshlm> post-factum, not so fast.
12:30:35 <post-factum> :)
12:30:52 <ndevos> I did not get any reactions on http://www.mail-archive.com/gluster-devel@gluster.org/msg09771.html yet
12:31:10 <ndevos> that is for going through the open 3.6 bugs and closing the non-important ones out
12:31:35 <ndevos> or moving them to mainline, in case it is a feature request
12:31:46 <kshlm> ndevos, I saw a few bug-closed messages from bugzilla in my inbox.
12:31:58 <kshlm> I thought someone was already looking into this.
12:32:18 <ndevos> kshlm: ah, good, maybe people started to look into it without letting anyone know
12:32:32 <kshlm> Do you think it'll be good to set up a time for people to get together on this?
12:33:27 <ndevos> could be the most efficient, I was hoping for suggestions from others ;-)
12:34:36 <aravindavk> ndevos: will add needinfo on the reporter for geo-rep bugs. If a fix is not expected in 3.6, I will close it
12:35:04 <kshlm> I would join if a time was setup. But I'll not be available for the rest of the week.
12:35:39 <kshlm> Also, if we're planning on doing another 3.6 release, we need to find a new release-manager.
12:35:44 <kshlm> The current manager is MIA.
12:36:07 <kshlm> ndevos, I'll reply to the mail thread.
12:36:17 <ndevos> aravindavk: thats a good approach, mention a deadline for the reply in the comment too, and close it after it passed
12:36:22 <kshlm> We can continue with the other topics here.
12:36:31 <aravindavk> ndevos: ok
12:37:08 <ankitraj> #topic Community Infrastructure
12:37:39 <kshlm> misc, nigelb any updates?
12:37:55 <kshlm> I have a complaint though. I'll go after you've given your updates.
12:38:03 <misc> kshlm: besides the server stuff, not much
12:38:17 <misc> nigelb has maybe more (but right now, i am in another meeting)
12:38:25 <kshlm> misc, Ok.
12:38:39 <kshlm> I wanted to check about planet.gluster.org.
12:38:45 <misc> also, if complain is not in bugzilla, it doesn't exist :p  </nigelb>
12:38:57 <kshlm> It doesn't seem to have synced my post.
12:39:01 <kshlm> To my blog.
12:39:06 <misc> mhh, again ?
12:39:14 <kshlm> s/To/from/
12:39:29 <kshlm> I had a 3.7.14 announcement.
12:39:31 <misc> it kinda fixed itself last time, so I didn't find the problem, I guess I need to investigate again
12:39:36 <kshlm> It hasn't shown up yet.
12:39:47 <misc> kshlm: can you fill a bug ?
12:39:53 <kshlm> misc, Okay.
12:40:10 <kshlm> I've got nothing more.
12:40:43 <misc> mhh, the build is showing error
12:40:55 <misc> mhh, nope
12:41:13 <kshlm> misc, We can take it up after the meeting.
12:41:21 <kshlm> Let's move on to the next topic.
12:41:46 <ankitraj> ok  we are moving to next topic
12:42:01 <ankitraj> #topic Community NFS Ganesha
12:42:22 <kkeithley> nothing, just waiting to wrap up development and GA 2.4
12:43:09 <ndevos> kkeithley: when is that planned?
12:43:26 <kkeithley> don't know the exact date yet. Soon. (I hope)
12:43:38 <post-factum> "when it is ready"
12:44:01 <ndevos> ok, that gives a little more time to get the cache invalidation memory corrections in
12:44:04 <kkeithley> Was originally scheduled for February.
12:45:03 <kkeithley> It was originally....
12:46:40 <ankitraj> Anything else to discuss on NFS Ganesha?
12:46:48 <ankitraj> or we move to next topic
12:47:14 <ankitraj> #topic Community NFS Samba
12:47:35 <anoopcs> ??
12:47:36 <post-factum> nfs samba?
12:47:37 * ira didn't know Samba served NFS.
12:47:47 * ira learned something new today.
12:48:19 <kkeithley> I am trying to carve out time to take in Jose's extended HA (storhaug) for 3.9
12:48:26 <kkeithley> just Samba
12:48:29 <post-factum> ira: even windows has bash now, so why not
12:48:40 <kkeithley> Windows has NFS too these days
12:48:41 <ira> Samba is for SMB... not NFS. :)
12:49:00 <ndevos> oh, I thought someone finally did the NFS part for Samba!
12:49:07 <ira> Ok... Samba 4.5.0rc1, has been cut.
12:49:08 <ankitraj> that was a typo
12:49:13 <ankitraj> #topic Community Samba
12:49:18 <ira> Ok... Samba 4.5.0rc1, has been cut.
12:49:23 <anoopcs> And we now have the required changes for exposing gluster snapshots to Windows clients (as shadow copies, known as Volume Shadow Copy Service or VSS) merged into Samba's master branch via the vfs shadow_copy module. Kudos to rjoseph for getting it done.
12:49:40 <ira> Yep.
12:49:47 <kshlm> Cool!
12:49:56 <kshlm> Is this in the rc?
12:50:04 <anoopcs> It should be..
12:50:15 <ira> yeah, as is some ACL code refactoring between Ceph and Gluster.
12:50:28 <kshlm> Cool! I need to test out samba and nfs-ganesha sometime.
12:50:47 <kshlm> The gluster integration in these.
12:52:20 <ankitraj> anything else on samba or we move to next one
12:52:24 <ira> Please do, BZ and patches accepted ;)
12:52:29 <ira> Move along afaik.
12:52:49 <ankitraj> #topic Community AI
12:53:09 <kshlm> All done! Yay!
12:53:35 <kshlm> Possibly for the first time ever.
12:53:41 <ndevos> ankitraj: hmm, that #topic had a space prepended, and some other #topic commands did too, I guess you need to edit the minutes before sending them out
12:54:01 <ndevos> we need to find something to assign to kshlm now...
12:54:20 <ira> Running a meeting 2 weeks from now :P
12:54:48 <kshlm> That's not an AI for this week.
12:54:56 <kkeithley> maybe he can find the 3.6 crash discussion from yesterday or the day before in #gluster. The one I can't find now.
12:55:05 <kkeithley> or #gluster-dev
12:55:22 <kshlm> kkeithley, I could try.
12:55:26 <kkeithley> ;-)
12:56:00 <ndevos> hehe, I think kshlm does not sit idly around, there are many things he needs to do :)
12:56:25 <kkeithley> when you want something done, ask a busy person
12:56:27 <kshlm> I'll probably set a time to weed out the 3.6 list as ndevos wanted.
12:56:42 <kshlm> That should be my 1 AI for the week.
12:57:21 <ndevos> oh, that would be very good!
12:57:40 <kshlm> #action kshlm to setup a time to go through the 3.6 buglist one last time (everyone should attend).
12:58:05 <kkeithley> maybe do that in the bug triage meeting?
12:58:28 <kshlm> kkeithley, That would be good as well.
12:58:46 <kshlm> I'll continue this discussion on the mailing lists.
12:58:57 <kshlm> Let's finish this meeting first.
12:59:01 <ankitraj> ok
12:59:38 <ankitraj> #topic Open Floor
13:00:00 <kshlm> kkeithley and loadtheacc have things to share.
13:00:26 <kkeithley> what do people think about switching to the CentOS Storage SIG for the 3.7 EL packages? (except epel-5 perhaps?)
13:00:28 <kshlm> loadtheacc, Thanks for putting up the videos. I've not gone through them all yet, but I plan to do it soon.
13:00:57 <ndevos> kkeithley: ok from me, and yes, no epel-5 because that is not an option for the SIG at the moment
13:00:59 <misc> kkeithley: will it break the upgrade for people using the current 3.7 package ?
13:00:59 * post-factum does not care since builds packages by himself
13:01:06 <kshlm> kkeithley, +1 from me if there are no problems.
13:01:14 <jiffin> kkeithley: +1
13:01:37 <kkeithley> it will certainly break people who have /etc/yum.repos.d/gluster*.repo files pointing at d.g.o
13:01:47 <post-factum> misc: probably, upgrade could be fixed with proper redirection
13:01:48 <kkeithley> they will have to switch to the Storage SIG repos
13:01:53 <ndevos> misc: maybe we need some redirection on download.gluster.org, or something like that
13:02:11 <misc> well, I guess we can try the redirect
13:02:20 <kkeithley> that could work, although off the top of my head I'm not sure of the details
13:02:22 <misc> provided yum does follow redirects and this kind of stuff
13:02:31 <kshlm> #link https://www.gluster.org/pipermail/gluster-devel/2016-July/050307.html
13:02:37 <kshlm> For the glusto videos.
13:02:39 <kkeithley> s/could/might
13:02:42 <ndevos> misc: it should, isnt that how some of the mirrors work?
13:03:05 * ndevos still needs to watch the Glusto videos
13:03:11 <misc> ndevos: better double check :)
13:03:19 <kshlm> kkeithley, Could you just sync the SIG packages to d.g.o?
13:03:26 <misc> I think mirror work by getting a list of mirror, then using that
13:03:32 <misc> but yeah, dnf do that too
13:03:50 <kkeithley> we originally said we were only going to force people over to the Storage SIG starting with 3.8
13:03:57 <ndevos> misc: on Fedora yes, for CentOS I thought it was just redirection/geo-ip
13:04:38 <misc> also, it will break people using curl directly
13:04:46 <misc> but I doubt there is a lot of people doing that
13:04:51 * kshlm notes we're over time
13:04:55 <post-factum> -L switch :)?
13:04:59 <kkeithley> and FYI, after almost six months of continuous uptime running 3.7.8 under load (https://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity378/)
13:05:05 <misc> post-factum: not default, hence the breakage :)
13:05:14 <kkeithley> I updated the longevity cluster to 3.8.1
13:05:25 <loadtheacc> kshlm, thanks. looking forward to feedback and next steps.
13:05:26 <kkeithley> https://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/
13:05:41 <kshlm> kkeithley, What load is it serving?
13:06:06 <kkeithley> fsstress.py.  Ben England's home grown load generator
13:06:26 <post-factum> kkeithley: it would have leaked ~40G of RAM for us, I guess, over those six months
13:06:42 <kshlm> Cool.
13:06:53 <kkeithley> you can see how much it leaked in the logs at the above location
13:07:01 <kkeithley> we'll see how 3.8.1 compares
13:07:09 * post-factum should show his zabbix charts sometime
13:07:38 * kshlm wants to leave, big team dinner tonight!
13:07:43 <kkeithley> post-factum should write a blog post, or submit a talk at the upcoming Gluster Summit in Berlin
13:07:54 <ndevos> kshlm: I hope you're not wearing slippers!
13:07:58 <post-factum> kkeithley: let me join RH first ;)
13:08:08 <kkeithley> finally, any thoughts on new pkg signing keys for 3.9?
13:08:21 <kshlm> ndevos, Thankfully I read the mail before coming into work today.
13:08:23 <kkeithley> are slippers the same as sandals?
13:08:47 <kshlm> kkeithley, What would new signing keys for each release help with?
13:08:49 <ndevos> kkeithley: I was wondering about that too - and also how it would apply to foreigners
13:09:06 <ndevos> kkeithley: yes, what is the advantage of new keys?
13:09:23 <kshlm> OT, slippers in India generally stand for flip-flops.
13:10:02 <ankitraj> I think we are done with the meeting
13:10:04 <kkeithley> more secure. Even if I'm not paranoid, someone could compromise pkgs if they've managed to break the key.
13:10:18 <ndevos> kkeithley: ok, then go for it :)
13:10:48 <ndevos> kkeithley: and distribute the keys from a different (or just more) server(s) than the one where the packages are
13:11:08 <kshlm> It's possibly more work for you. But I say go for it as well.
13:11:30 <kkeithley> the private key(s) are not on d.g.o
13:12:01 <ndevos> no, but what sense does it make to sign packages when the public key used to check the signature comes from the same server?
13:12:19 <post-factum> ankitraj: you may use the force to end this overtime ;)
13:12:20 <ndevos> at least there should be the option to verify the key from a different server
13:13:05 <ankitraj> Thanks to all for participating in the community meeting
13:13:09 <kkeithley> if someone changes the pub key on d.g.o but the pkgs are signed with the private key on another server, then the verification would fail.
13:13:11 * msvbhat leaves the meeting goes for his evening run
13:13:15 <ankitraj> I am ending it now
13:13:17 <ndevos> thanks ankitraj!
13:13:25 <ankitraj> #endmeeting