gluster_community_weekly_meeting
LOGS
12:00:23 <kkeithley> #startmeeting Gluster community weekly meeting
12:00:23 <zodbot> Meeting started Wed Jul 20 12:00:23 2016 UTC.  The chair is kkeithley. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:00:23 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:00:23 <zodbot> The meeting name has been set to 'gluster_community_weekly_meeting'
12:00:40 <kkeithley> #topic roll call
12:01:05 <kkeithley> who is here
12:01:17 * Saravanakmr is here
12:01:30 <nigelb> o/
12:01:43 * karthik_ is here
12:01:48 * loadtheacc is here
12:01:54 * atinm waves his hands
12:02:11 * samikshan is here
12:02:19 * rjoseph o/
12:02:44 * anoopcs is here
12:03:02 * rafi is here
12:03:20 * ndevos is here
12:03:23 <kkeithley> good, looks like a reasonable turnout
12:03:31 * jdarcy o/
12:03:33 <kkeithley> #topic next week's chair/host/moderator
12:03:38 <kkeithley> this time for real
12:03:53 <kkeithley> may we please have a volunteer?
12:04:26 * kshlm is here
12:04:29 <kshlm> I volunteer.
12:04:33 * ramky is here
12:04:46 * aravindavk is here
12:04:47 <jdarcy> kshlm++
12:04:48 <zodbot> jdarcy: Karma for kshlm changed to 1 (for the f24 release cycle):  https://badges.fedoraproject.org/tags/cookie/any
12:04:48 <glusterbot> jdarcy: kshlm's karma is now 4
12:05:04 <kkeithley> excellent, thank you.
12:05:05 * jiffin will be here for half an hour
12:05:18 <kkeithley> good, we can quickly progress now
12:05:27 <kkeithley> #topic GlusterFS 4.0
12:05:50 <kkeithley> Or I should probably do AIs from last week first
12:06:14 <kkeithley> #topic last week's AIs
12:06:24 * post-factum is here
12:06:28 <atinm> GD2 updates - kshlm continues to work on the txn rpc framework and ppai has done some amount of bug fixes in etcd clustering
12:06:48 <kshlm> We'd changed the order because AIs used to take up a lot of time.
12:07:01 <kkeithley> ah, then let's continue with 4.0 status
12:07:18 <kkeithley> #topic GlusterFS 4.0 again
12:07:33 <kkeithley> #info kshlm continues to work on the txn rpc framework and ppai has done some amount of bug fixes in etcd clustering
12:08:01 <kkeithley> anything else to report on 4.0?
12:08:04 <kkeithley> anyone?
12:08:17 <kshlm> dht2 and jbr?
12:08:24 <atinm> jdarcy, do you want to add anything from jbr side?
12:08:43 <ndevos> brick multi-plexing?
12:08:53 <atinm> good point ndevos :)
12:09:04 <atinm> I was about to come to that, you stole it :D
12:10:10 <jdarcy> Multiplexing is progressing nicely.
12:10:20 <jdarcy> http://review.gluster.org/#/c/14763/
12:10:35 <jdarcy> I have some additional changes to address pid-file issues, but those aren't fully baked yet.
12:10:46 <jdarcy> Reviews much appreciated, even though it's not ready for merging yet.
12:10:55 <jdarcy> JBR is on hold for now as far as I'm concerned.
12:11:06 <jdarcy> That's it for me.
12:11:23 <kkeithley> is jbr blocked by something? Or just lack of cycles to work on it?
12:11:39 <jdarcy> Lack of cycles.  Lots of meetings this week, so only time left to work on one thing.
12:11:45 <atinm> jdarcy, one quick question, do you think brick multiplexing can make it into 3.9?
12:11:54 <ndevos> jdarcy: is there a description of the protocol changes to enable brick-multiplexing, or should I just read the code?
12:11:57 <jdarcy> atinm: Maybe.
12:12:11 <atinm> jdarcy, ok, thanks!
12:12:19 <post-factum> when 3.9 merge window closes?
12:12:19 <jdarcy> ndevos: There was email a while ago, let me find that.
12:12:45 <atinm> post-factum, I saw an email from aravindavk that rebase to happen on 31st Aug
12:12:57 <ndevos> jdarcy: yeah, I got that somewhere too, did not have time to go through yet... will try to do that this week
12:13:01 <post-factum> atinm: oh, ok
12:13:02 <kkeithley> rebase, or branch?
12:13:14 <atinm> sorry, branch
12:13:25 <jdarcy> http://www.gluster.org/pipermail/gluster-devel/2016-June/049801.html (this is the "single graph" model)
12:13:34 <kkeithley> #info 3.9 branch scheduled for 31 August (2016)
12:13:47 <jdarcy> Shouldn't be any protocol changes, mostly it's in glusterd/glusterfsd.
12:14:30 <kkeithley> nice segue to the next topic
12:14:34 <kkeithley> #topic GlusterFS 3.9
12:14:46 <kkeithley> is aravindavk here?
12:14:55 <aravindavk> sent pull request to update roadmap page https://www.gluster.org/community/roadmap/3.9/
12:15:33 <aravindavk> next step is to add missing features to roadmap if any. and collect status from each feature owners and send mail to devel
12:16:13 <ndevos> aravindavk: I've spoken to Dustin Black yesterday, he would be willing to assist documenting/tracking features too
12:16:31 <ndevos> aravindavk: he should get in touch with you, but dont hesitate to reach out to him
12:16:46 <aravindavk> ndevos: sure, thanks
12:17:25 <kkeithley> my Keen Eye For The Obvious tells me that 31 August is less than six weeks away.  It'll be here before you know it.
12:17:38 <kkeithley> okay, next topic
12:17:44 <kkeithley> #topic GlusterFS 3.8
12:17:58 <post-factum> 3.8 is in arch linux already
12:18:05 <ndevos> nice!
12:18:20 <ndevos> 3.8.1 got released on time, early last week
12:18:28 <post-factum> glusterfs 3.8.1 built on Jul 18 2016 17:29:49
12:18:47 <ndevos> the blog post was done on Monday, and I've sent the email with it earlier today
12:18:28 <kshlm> First distro to get it?
12:19:01 <post-factum> seems so
12:19:05 <kshlm> Cool!
12:19:06 <ndevos> Fedora 24 has it too, I think
12:19:28 <post-factum> at least, it works with 3.7 server, and that is ok for me now :)
12:19:30 * msvbhat joins bit late
12:19:32 <ndevos> there is also a tweet from me, atinm retweeted it, and others may want to do so as well
12:20:13 <ndevos> 3.8.2 is scheduled for ~10th of August, I do not foresee any issues with that
12:20:15 <atinm> ndevos, yup, it's all in my social accounts now :)
12:20:24 <kkeithley> Sorry, Fedora 25 (rawhide) had it at 2016-07-08 17:13:42
12:21:01 <ndevos> seems you won, kkeithley
12:21:19 <kkeithley> ;-)
12:21:19 <post-factum> what a victory
12:21:37 <ndevos> kshlm: there was the etherpad with the release things and backport criteria, is that part of the docs yet?
12:21:51 <kshlm> ndevos, Nope! I need to get back to it.
12:22:12 <kshlm> I should send a PR. That should help it get more attention.
12:22:28 <ndevos> kshlm: ok, let's get that done soon, there have been some patches proposed for 3.8 that are dubious
12:22:55 <kshlm> Ok. I'll try to get a PR sent later today.
12:23:02 <ndevos> many thanks!
12:23:11 <kkeithley> anything else on 3.8?
12:23:15 <kshlm> #action kshlm to send PR to update release/backport criteria
12:23:25 <ndevos> no, nothing from me
12:23:53 <kkeithley> #topic GlusterFS 3.7
12:24:09 <kkeithley> hagarth is not around
12:24:16 <anoopcs> Do we have a tracker for 3.7.14?
12:24:55 <kshlm> anoopcs, Not yet.
12:25:02 <ndevos> anoopcs: doesnt look like it, it does not show up on http://bugs.cloud.gluster.org/
12:25:17 <kshlm> I need to close the 3.7.13 bugs and open the new tracker.
12:25:17 <anoopcs> ndevos, kshlm : Ok.
12:25:23 <anoopcs> Cool.
12:25:28 <kshlm> I've got a lot of TODOs.
12:26:13 <kkeithley> #action kshlm to  close the 3.7.13 bugs and open the new tracker.
12:26:30 <kshlm> I need to send an announcement first!
12:27:16 <kkeithley> 3.7.14 release on the 30th?  Or does that seem unlikely?
12:28:23 <kkeithley> judging by the lack of response I'm guessing it's highly unlikely.
12:28:29 <post-factum> 3.7.13+extra patches works well for us
12:28:37 <kshlm> I'll do it if any patches get merged by then.
12:28:43 <kshlm> I've not checked the branch yet.
12:28:45 <post-factum> but release-3.7 already contains nice fixes
12:29:00 <post-factum> 9 in total
12:29:25 <kshlm> post-factum, These are on top of 3.7.13?
12:29:40 <post-factum> did git shortlog v3.7.13..upstream/release-3.7 just now
12:29:52 <post-factum> http://termbin.com/b1g8
12:29:58 <kkeithley> #info kshlm will release 3.7.14 on or around 30 July if there are patches merged. (and there are.)
12:30:15 <kshlm> Thanks post-factum! git pull is weirdly slow.
12:30:20 <post-factum> and we cherry-picked these http://termbin.com/wrhr for us
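The range syntax post-factum used can be sketched against a throwaway repository (the commits and tag below are stand-ins invented for illustration; in the real tree the command was `git shortlog v3.7.13..upstream/release-3.7`):

```shell
# Build a disposable repo just to demonstrate the tag..branch range syntax;
# the commits here are empty stand-ins for real release-3.7 backports.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "base release"
git tag v3.7.13
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "backport: fix one"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "backport: fix two"
# Summarize everything committed since the v3.7.13 tag:
git shortlog v3.7.13..HEAD
# ...or just count the commits, as in post-factum's "9 in total":
git rev-list --count v3.7.13..HEAD
```

`A..B` selects commits reachable from B but not from A, so against the real branch this lists exactly the patches that would land in 3.7.14.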
12:30:39 <kkeithley> anything else on 3.7?
12:31:00 <kkeithley> no?
12:31:04 <post-factum> no
12:31:08 <kkeithley> #topic GlusterFS 3.6
12:31:15 <post-factum> also no :)
12:31:23 <kkeithley> rabhat? rtalur?
12:31:51 <kshlm> I'm yet to send the EOL announcement. Another TODO for me.
12:32:05 <kkeithley> yup, that'll come up in the AIs. ;-)
12:32:06 <post-factum> has it been agreed to EOL 3.6 now?
12:32:11 * kshlm thinks it's time he kept a todo list.
12:32:23 <kshlm> post-factum, it EOLs when 3.9 arrives.
12:32:44 <kshlm> Till then only security fixes.
12:32:55 <kkeithley> #topic Infrastructure
12:32:56 <kshlm> And fixes for something seriously broken
12:33:00 <kkeithley> nigelb? misc?
12:33:40 <nigelb> hello!
12:33:47 <nigelb> I have an announcement and so does misc.
12:33:53 <kkeithley> the floor is yours
12:33:59 <kkeithley> also the ceiling and walls
12:34:09 <ndevos> and my balcony
12:34:16 <post-factum> but no windows allowed
12:34:43 <nigelb> Mine is simple: The NetBSD machines are slowly getting their disk space cleaned up. Hopefully this means less bustage in the next few weeks. Detailed email coming to gluster-devel soon
12:35:28 <nigelb> misc may have stepped afk for lunch. His announcement is that cage is ready and we have two new powerful machines in the cage for us to build VMs on.
12:35:49 <misc> iota: yeah
12:35:50 <ndevos> oh, that's great, but I was expecting an announcement that you guys were pregnant
12:35:51 <misc> oups
12:35:54 <misc> yeah, exactly that
12:36:02 <nigelb> ndevos: lol
12:36:14 <nigelb> pregnant with VMs.
12:36:21 <misc> I thought it was beer
12:36:28 <misc> but as I do not drink, that explains it
12:36:29 <post-factum> it was
12:36:44 <ndevos> nigelb: so, we can actually have clean VMs to run tests on, and not re-use them?
12:36:54 <kkeithley> @@
12:37:01 <msvbhat> nigelb: Does it mean we get to make use of the vagrant setup suggested by rtalur for running tests?
12:37:24 <nigelb> ndevos: Eventually, yeah. Once we set everything up.
12:37:36 <msvbhat> nigelb: Along with the CentOS setup, that is
12:37:40 <ndevos> nigelb: that is really awesome!
12:37:41 <nigelb> msvbhat: I haven't yet gotten details of this. So I can't say for sure.
12:37:55 <misc> ndevos: we could have in the past, we just needed someone to write the tool to do that
12:38:08 <nigelb> ndevos: I want it to be like the centos system. Use and throw VMs.
12:38:17 <nigelb> So we don't carry over side effects into the next build.
12:38:23 <kkeithley> #info public cage is operational, with new machines that will support lots of VMs for our gerrit and jenkins CI infra
12:38:47 <ndevos> misc: yeah, and make sure we can install the VMs correctly etc... the work you guys have done for the automation is great
12:38:47 <misc> also, i want to move the server someday in the future
12:38:57 <misc> so we need to plan downtime some time
12:38:58 <kkeithley> +1 to moving the server
12:39:09 <misc> (hopefully, it will be smoother than the first move from iweb)
12:39:44 <kkeithley> anything else on Infra?
12:39:58 <nigelb> Not at the moment :)
12:40:03 <kkeithley> #topic Ganesha
12:40:53 <kkeithley> we finally got soumyak's EX (extended FSAL API) changes merged. Winding down to GA sometime in August hopefully
12:41:32 <kkeithley> that's it for Ganesha
12:41:35 <kkeithley> #topic Samba
12:41:52 <kkeithley> ira? obnox?
12:42:13 <ndevos> anoopcs, rjoseph: ?
12:42:51 <anoopcs> Nothing that I am aware of.
12:43:16 <anoopcs> Nothing important related to the integration with GlusterFS, that is.
12:43:19 <ndevos> once both protocols are supported at the same time, we should consider an animation of Ganesha dancing the Samba
12:43:37 <kshlm> ndevos, :)
12:43:49 <kshlm> I'm trying to picture it now!
12:43:59 <ndevos> :D
12:44:00 <post-factum> new logo idea
12:44:11 <post-factum> instead of ant
12:45:11 <kkeithley> #topic Last Week's AIs
12:45:23 <kkeithley> kshlm and ndevos to respond to
12:45:23 <kkeithley> http://www.gluster.org/pipermail/maintainers/2016-July/001063.html
12:46:07 <kshlm> None of my AIs are done. :(
12:46:23 <ndevos> uh, I dont even remember that AI...
12:46:42 * kshlm has been playing too much Pokemon.
12:46:53 <kkeithley> we'll skip the next one because I know it's not going to be done for a while (setting up faux/pseudo user  email for gerrit, bugzilla, github)
12:47:04 <kkeithley> rastar to look at 3.6 builds failures on BSD
12:47:15 <kkeithley> rastar aka rtalur
12:47:17 <kshlm> kkeithley, I think we can drop this.
12:47:35 <kkeithley> okay, I'll take it out
12:48:15 <kshlm> Cool!
12:48:16 <kkeithley> and the last two also aren't done (3.6 EOL mail and glusterd-2.0 in CentOS CI)
12:48:25 <kshlm> Yes.
12:48:36 <kkeithley> #topic Open Floor
12:48:57 <kshlm> I'm dropping out now. Need to go pick-up my mother.
12:49:06 <kkeithley> Glusto/Distaf discussion?  Whose is that?
12:49:14 <kkeithley> ciao
12:49:17 <loadtheacc> that's mine
12:49:18 <ndevos> I'd like to request others to think about what testing we should do, like integration testing with other projects
12:49:35 <kkeithley> the floor is yours loadtheacc
12:50:26 <loadtheacc> discussion has been around moving to a standard Python format (PyUnit, PyTest, Nose) for test scripts, with the Glusto framework supporting that.
12:50:47 <ndevos> #link http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/15946
12:51:16 <loadtheacc> I sent a "brief" email introducing the concept last night and looking for opinion from the community
12:51:33 <msvbhat> https://www.gluster.org/pipermail/gluster-devel/2016-July/050137.html
12:52:09 <ndevos> loadtheacc: like kshlm mentioned, a demo would be really nice
12:52:41 <ndevos> https://asciinema.org/ seems very suitable for that, I think others used it before already
12:53:15 <loadtheacc> ndevos, I can put one together
12:53:42 <ndevos> that would be great, it makes it so much easier to understand what it is about :)
12:53:44 <kkeithley> #action loadtheacc will make a demo of Glusto
12:54:28 <kkeithley> anything else?
12:55:02 <ndevos> loadtheacc: I'll reply to your email too, but I would like to see an example of a test-case, just to get the idea
12:55:47 <kkeithley> #topic Recurring Topics
12:55:57 <loadtheacc> ndevos, cool cool. and we have a gluster base class prototype coming together as well.
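As a rough sketch of what such a PyUnit-format test with a gluster base class could look like (the `GlusterBaseTest` class and everything inside it are invented for illustration, not Glusto's actual API):

```shell
# Write a hypothetical PyUnit-style test module and run it with the stock
# unittest runner; all class and method names are illustrative only.
tmpdir=$(mktemp -d)
cat > "$tmpdir/test_gluster_demo.py" <<'EOF'
import unittest

class GlusterBaseTest(unittest.TestCase):
    """Hypothetical base class holding shared volume setup/teardown."""
    @classmethod
    def setUpClass(cls):
        # A real base class would create and start an actual test volume.
        cls.volume = {"name": "testvol", "started": True}

class TestVolumeStatus(GlusterBaseTest):
    def test_volume_is_started(self):
        self.assertTrue(self.volume["started"])
EOF
# Because this is plain unittest, pytest and nose can collect it unchanged,
# which is the portability point of the proposal.
(cd "$tmpdir" && python3 -m unittest -v test_gluster_demo)
```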
12:56:16 <ndevos> kkeithley: I'd like to request others to think about what testing we should do, like integration testing with other projects
12:56:54 <kkeithley> indeed.
12:57:02 <ndevos> loadtheacc: sounds promising, I hope some of the distaf work can be (or already is) re-used
12:57:43 <ndevos> so on the integration testing, we'll be running some Gluster + NFS-Ganesha in the CentOS CI, and it'll run standard cthon04 and pynfs tests
12:57:50 <loadtheacc> ndevos, definitely. not a total rewrite for libraries, etc. mostly simple s/this/that syntax with sed and some config
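A mechanical s/this/that conversion of that kind might look like the following (both the distaf-style call and the target spelling are invented here, purely to show the shape of the sed rewrite):

```shell
# Rewrite a hypothetical distaf-style helper call into a unittest-style
# self.* call; the names on both sides are illustrative, not real APIs.
echo 'tc.run_and_check("gluster volume start testvol")' \
  | sed -e 's/^tc\./self./' -e 's/run_and_check/assert_run/'
# -> self.assert_run("gluster volume start testvol")
```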
12:58:24 <ndevos> I also want to run the upstream QEMU tests (Avocado) in the CentOS CI, just to make sure we do not break QEMU+libgfapi again
12:58:55 <ndevos> or, as they will be daily/weekly runs, we at least find the new bugs early and can fix them soon
12:59:35 <kkeithley> almost out of time
12:59:42 <kkeithley> #topic Recurring Topics
13:00:12 <kkeithley> FYI FOSDEM announced dates,  4-5 Feb 2017.  Probably means DevConf will be 10-12 Feb 2017
13:00:31 <kkeithley> If you're attending any event/conference please add the event and yourselves to Gluster attendance of events: https://public.pad.fsfe.org/p/gluster-events
13:00:32 <ndevos> #help ndevos would like to receive ideas for more testing related to integration with other projects
13:00:39 <kkeithley> Put (even minor) interesting topics on https://public.pad.fsfe.org/p/gluster-weekly-news
13:00:50 <kkeithley> Use the following etherpad for backport requests  https://public.pad.fsfe.org/p/gluster-backport-requests
13:00:58 <kkeithley> anything else before we adjourn?
13:01:45 <kkeithley> going once?
13:01:56 <kkeithley> going twice
13:02:20 <kkeithley> #endmeeting