weekly_community_meeting_25may2016
LOGS
12:02:07 <rastar> #startmeeting Weekly community meeting 25/May/2016
12:02:07 <zodbot> Meeting started Wed May 25 12:02:07 2016 UTC.  The chair is rastar. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:02:07 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:02:07 <zodbot> The meeting name has been set to 'weekly_community_meeting_25/may/2016'
12:02:23 <rastar> #topic Rollcall
12:02:29 <post-factum> o/
12:02:33 <nigelb> o/
12:02:42 * anoopcs is here
12:02:46 * Saravanakmr here
12:02:57 * jiffin is here
12:03:01 * karthik___ is here
12:03:12 * ndevos _o/
12:03:22 * jdarcy \o
12:03:29 <rastar> awesome, we have a good number of participants
12:03:53 * partner late :/
12:04:04 <rastar> partner: just in time
12:04:10 <rastar> partner: we are doing roll call
12:04:22 <rastar> ok then, next topic
12:04:24 <jdarcy>
12:04:48 <rastar> #topic Next weeks meeting host
12:04:50 <anoopcs> jdarcy, That's cool.
12:05:10 <rastar> jdarcy: I wish you had raised your hand after I asked that question :)
12:05:12 * skoduri is here
12:05:26 <jdarcy> rastar: Yeah, dodged a bullet there.
12:05:39 <rastar> anyways, I am unsure of my availability next week..
12:05:50 <rastar> kshlm is out next week too
12:05:58 <rastar> anyone wants to volunteer
12:06:00 <rastar> ?
12:06:56 <rastar> anoopcs: ?
12:07:35 * shyam is here at times...
12:07:36 <rastar> no answer
12:08:00 <rastar> ok, I will put my name for now
12:08:16 <rastar> oh, hi shyam and atinm
12:08:40 <rastar> This time we thought we could do interesting stuff early in the meeting
12:08:48 <rastar> changing the order of topics
12:09:09 <rastar> #topic GlusterFS 4.0
12:09:37 <rastar> jdarcy: shyam atinm all yours
12:10:25 <jdarcy> For JBR, there's a patch out that adds (very) basic reconciliation.  Finishing that up, and starting on the "in memory map" (to speed reads from the journal).
12:10:49 <jdarcy> Shyam, Atin, and I are also discussing some ideas about how to identify subvolumes.  Expect -devel mail soon.
12:11:27 <shyam> Ok, here I go... kickstarted DHT2 upstream with the first commit for xlator init and layout management abstraction
12:12:02 <jdarcy> 👏
12:12:06 <shyam> Next tasks are to close the init routines for the server side DHT2 xlator and add a posix2 xlator shell. At which point we should start with the FOPs (when it gets interesting)
12:12:51 <rastar> Thats cool
12:13:00 <shyam> Then we can start making some noise in the community I hope ;)
12:13:08 <jdarcy> 👍
12:13:27 <rastar> shyam: with respect to FOPs, are we going to merge them on the master branch?
12:14:04 <shyam> Yup, all work is now on the master branch, the gerrithub instance where we were working is closed (or not being worked on any more, and that was the POC)
12:14:48 <rastar> shyam: ok!, looking forward
12:14:52 * atinm is little late to the meeting
12:15:11 <rastar> atinm: we started with Gluster 4.0
12:15:16 <atinm> From GlusterD2 side, we have started working on the establishing the txn framework support for multi nodes
12:16:06 <atinm> I've sent a pull request https://github.com/gluster/glusterd2/pull/89 to get started on that
12:16:27 <rastar> nice, we should really start having show and tell sessions for all this new stuff , to see them in action
12:16:33 <atinm> Also we are trying to work on a PoC for the flexi volgen idea, which has evolved through iterative discussions
12:17:20 <rastar> so that makes a good update on 4.0
12:17:31 <rastar> #topic GlusterFS 3.8
12:17:48 <ndevos> we're making progress there
12:17:55 <atinm> rastar, yes, we do have a plan on that, I am waiting on hagarth to finalize the plan
12:18:02 <ndevos> release candidate 2 has just been tagged an hour(?) ago
12:18:22 <ndevos> #link https://github.com/gluster/glusterfs/blob/release-3.8/doc/release-notes/3.8.0.md
12:18:54 <ndevos> #halp need assistance from feature owners and maintainers to get the release-notes in order
12:19:58 <ndevos> there are also patches under review that need more +1/+2's : http://review.gluster.org/#/q/status:open+project:glusterfs+branch:release-3.8
12:20:38 <ndevos> at the moment, packages are being build for the CentOS Storage SIG, and maybe kkeithley is doing the Fedora 24 ones
12:20:47 <kkeithley> indeed
12:20:49 <kkeithley> I am
12:21:05 <ndevos> I'll move all the bugs to ON_QA later today
12:21:42 <ndevos> I am not aware of any blockers that prevent releasing 3.8.0 next week
12:21:56 <rastar> ndevos: Thanks a lot for all the work , that changelog is really long and useful
12:22:13 <ndevos> #link https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-3.8.0&hide_resolved=1
12:22:49 <ndevos> rastar: yeah, lets show what's been done - and the description for the features are still missing
12:23:08 <jdarcy> 2/3 of those patches failed regression because of tests/bugs/replicate/bug-977797.t
12:23:18 <rastar> ndevos: yes, all of us have got requests from amye to fill out the feature descriptions
12:23:47 <ndevos> rastar: thats nice, but the etherpad only has very few details :-(
12:24:01 <jdarcy> Is anybody actively looking into why that particular test keeps failing in 3.8?
12:24:05 <ndevos> rastar: it would be good if people actually add the descriptions at one point!
12:24:09 * msvbhat arrives late to the meeting and reads the chat log
12:24:47 <rastar> ndevos: I agree, we should all start poking the feature owners
12:25:08 <rastar> jdarcy: I have not looked into it.. and not aware of anyone who is
12:25:45 <ndevos> jdarcy: I did not check that, but want to note the test failure with the "recheck ... because tests/.../...t failed" comments, it would help in seeing the details in Gerrit
12:26:26 <ndevos> rastar: please help jiffin with that if you can :)
12:26:28 <jdarcy> ndevos: Good idea.  I assume our trigger code will still work if there's extra text like that.
12:26:41 <ndevos> jdarcy: yes it does :)
12:26:49 <rastar> jdarcy: ndevos yes, it should work
12:27:03 <jiffin> rastar, ndevos: I started to poke some of feature owners
12:27:15 <ndevos> jiffin: thanks!
12:27:29 <ndevos> any questions for 3.8?
12:28:04 <shyam> ndevos: Looks like a much better job than previous releases... :) good going
12:28:18 <ndevos> shyam: cool, good to hear!
12:28:33 <rastar> ndevos: thats true
12:28:38 <ndevos> ... I guess rastar can continue with the next topic
12:28:41 <rastar> no more questions I guess
12:28:59 <rastar> #topic GlusterFS 3.7
12:29:48 <atinm> ndevos++
12:29:48 <rastar> hagarth would be sending out a mail on this soon
12:29:48 <glusterbot> atinm: ndevos's karma is now 12
12:30:42 <atinm> I must say ndevos has stuck to the schedule
12:30:53 <rastar> as of now, we are looking at component based checks by maintainers before a release is done
12:31:15 <rastar> moving on
12:31:21 <rastar> #topic GlusterFS 3.6
12:31:58 <vangelis> hi! sorry, it's my first time here... for the 3.6 just one question related to the smoke tests in BSD, it appears that they are failing
12:32:34 <vangelis> does anyone have an idea on how to fix this?
12:33:05 <rastar> vangelis: do you have a link for one such failure?
12:33:15 <ndevos> vangelis: I thought the voting for *BSD on 3.6 was disabled, and the errors would not get propagated to Gerrit?
12:33:23 <atinm> vangelis, indeed it's failing, http://review.gluster.org/#/c/12710/ tried to fix it, but it didn't
12:33:37 <atinm> rastar, http://build.gluster.org/job/freebsd-smoke/10844/
12:34:35 <rastar> ndevos: that was mostly for regressions
12:34:43 <rastar> ndevos: not for smoke
12:34:50 <rastar> atinm: thanks!
12:34:50 <vangelis> you were faster :-) actually there are 3 review requests that fail with compile errors
12:34:51 <vangelis> http://review.gluster.org/#/c/14007/ http://review.gluster.org/#/c/14403/ http://review.gluster.org/#/c/14418/
12:35:25 <rastar> vangelis: thanks for pointing it out, I will have a look
12:35:35 <vangelis> thx a lot!
12:35:40 <rastar> #action rastar to look at 3.6 build failures on BSD
12:35:41 <ndevos> rastar: oh, but maybe you can disable the smoke job in a similar way?
12:36:19 <jdarcy> rastar++
12:36:19 <zodbot> jdarcy: Karma for rastar changed to 3 (for the f23 release cycle):  https://badges.fedoraproject.org/tags/cookie/any
12:36:20 <glusterbot> jdarcy: rastar's karma is now 6
12:36:27 <rastar> ndevos: we can, if it is incompatibility then I will, just want to see if it is anything genuine
12:36:36 <atinm> rastar, look at this : http://review.gluster.org/#/c/13633/
12:36:49 <atinm> rastar, this indicates we don't trigger smoke
12:37:17 <atinm> rastar, somehow I have a feeling that this got enabled again
12:37:17 <ndevos> rastar: I tried to fix it before, but that didn't work out so well, I *think* it has to do with parallel make or something
12:37:36 <rastar> atinm: we don't trigger smoke as in?
12:38:04 <rastar> ndevos: worst case, change smoke.sh to have a if condition around make?
12:38:05 <atinm> rastar, patch hasn't considered any vote from smoke is what I can see
12:38:31 <atinm> rastar, rephrase, *BSD smoke
12:39:01 <rastar> atinm: you are right
12:39:11 <ndevos> rastar: maybe, but that is my guess... and I failed to fix the Makefile.am :-/
12:39:12 <rastar> atinm: we are hitting the status overwrite error again
12:39:27 <ndevos> rastar: but, its appreciated if you have a look at it :)
12:39:49 * shyam needs to drop the kids to school, but would like it if "gluster design summits" were discussed in the open floor time (like have a virtual one every 4 months etc. to shore up designs for gluster, get more focused attention on things under design/development and seed future needs)
12:39:52 <rastar> ndevos: :), not much hopeful after you try
12:40:20 <ndevos> rastar: you never know!
12:41:03 <rastar> shyam: would you prefer to send a mail?
12:41:38 <rastar> ok, point noted, moving on
12:41:48 <rastar> #topic GlusterFS 3.5
12:42:01 <rastar> no updates here I guess, waiting for EOL mostly
12:42:11 <ndevos> indeed, nothing of interest to note
12:42:14 <kkeithley> Nuke it from orbit, it's the only way to be sure.  3.5 that is.
12:42:24 <rastar> kkeithley: :)
12:42:51 <rastar> would like to skip last weeks AIs unless people have update on it
12:43:01 <ndevos> lets get 3.8.0 out and we'll add a "dead road ahead" commit in 3.5 :)
12:43:27 <rastar> I don't see any one here who should respond on the AIs except Saravanakmr
12:44:05 <rastar> ok, that was a heads up to save time
12:44:11 <rastar> I will come back to AIs later
12:44:19 <rastar> #topic NFS Ganesha and Gluster
12:45:10 <ndevos> kkeithley, skoduri or jiffin?
12:45:25 <kkeithley> heh
12:45:41 <kkeithley> nothing much to report,  work on 2.4 continues
12:46:29 <kkeithley> won't GA until all the FSALs have been ported to ex_api (i.e. the new FSAL interface)
12:46:45 <kkeithley> but nobody is asking for 2.4 yet anyway.
12:47:10 <kkeithley> Gluster has already submitted patches for FSAL_GLUSTER
12:47:39 <skoduri> yes..under review & more patches yet to come
12:48:12 <rastar> cool
12:48:28 <rastar> #topic Samba and Gluster
12:48:50 <rastar> not much to report here too
12:49:13 <rastar> but the work on integrating vfs_shadow_copy2 and Gluster is under testing stage now
12:49:39 <rastar> it feels good to see file snapshots in Windows the way they are meant to be seen
12:50:05 <ndevos> rastar: oh, do you know if anyone is looking into adding glfs_lseek() with SEEK_DATA/HOLE to Samba vfs_gluster?
12:50:41 <rastar> ndevos: no
12:51:03 <ndevos> ok, I'll keep it in the back of my head then
12:51:12 <rastar> I will put it on our backlog list
12:51:23 <rastar> next topic
12:51:56 <rastar> #topic Saravanakmr to add documentation on how to add blogs
12:52:05 <Saravanakmr> this is done #link https://github.com/gluster/glusterdocs/pull/114
12:52:11 <rastar> Thanks Saravanakmr
12:52:58 <ndevos> #link http://gluster.readthedocs.io/en/latest/Contributors-Guide/Adding-your-blog/
12:53:16 <Saravanakmr> ndevos, Thanks! makes sense :)
12:53:32 <rastar> I will skip the rest of the AIs
12:53:44 <rastar> #topic Open Floor
12:53:59 <post-factum> #link http://review.gluster.org/#/c/14399/
12:54:07 <post-factum> would like to get some review please
12:54:39 <rastar> that is a  nice feature to have
12:54:58 <ndevos> "glusterfsd/main: Add ability to set oom_score_adj"
12:55:31 <ndevos> should be interesting for anyone that wants to control memory usage of gluster processes
12:55:46 <post-factum> iow, for everyone
12:55:48 <partner> speaking of memory..
12:55:48 <rastar> post-factum: added myself as reviewer, you should add others to notify them of the patch
12:55:59 <atinm> post-factum++
12:56:01 <glusterbot> atinm: post-factum's karma is now 2
12:56:07 <ndevos> rastar: who do you suggest to get added?
12:56:13 <post-factum> hagarth should do the review, but i so not see him here
12:56:16 <atinm> post-factum, I liked the commit heading, but didn't have chance to go through it
12:56:18 <post-factum> *do not
12:56:23 <partner> i can take this to other channel but libgfapi seems to make libvirtd leak memory badly and they forward to here..
12:56:37 <rastar> ndevos: considering that all files are under core component, all the core component owners?
12:56:59 <rastar> partner: what version of gluster?
12:57:04 <partner> 3.6.6
12:57:39 <partner> some rumours around its still in 3.7.11 too
12:58:10 <partner> but if anybody knows anything on the topic i'd be happy to hear and perhaps get some help on debugging the issue.
12:58:25 <partner> other than that it seems after a year of gone i seem to be again using glusterfs here :)
12:58:39 <rastar> partner: we fixed many memory leaks in later 3.6 releases, so you should pick them up
12:58:52 <ndevos> partner: do you have a way to reproduce the leak? something scripted maybe?
12:58:52 <partner> rastar: rgr, will try out 3.6.9
12:58:59 <rastar> partner: we know of some areas where there are more leaks
12:59:13 <partner> yes, when used with openstack a simple attach/detach will make it visible
12:59:23 <rastar> partner: does your use case involve a lot of VMs spin up and shutdowns?
12:59:28 <partner> yes
12:59:49 <post-factum> rastar: added hagarth explicitly ;)
12:59:56 <ndevos> partner: how about one VM, and attach/detach a 2nd disk repeatedly?
13:00:19 <partner> ndevos: same things happen with a single vm
13:00:32 <partner> i can try to collect some evidence to make it visible
13:00:35 <rastar> partner: yes, a little more detail is that whenever a gfapi init fini cycle is done, some memory is leaked
13:00:48 <ndevos> partner: ok, that should make it scriptable, and much easier than requirng openstack :)
13:01:29 <rastar> you wouldn't see the problem if your VMs just kept running. Not to say there is no problem, this is just the root cause. This certainly needs to be fixed.
13:01:38 <ndevos> yes, and recent 3.7 versions should reduce the leaks a little more too
13:01:45 <rastar> thats right
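(The scripted reproducer ndevos suggests could look roughly like the sketch below: cycle glfs_new()/glfs_init()/glfs_fini() the way libvirt does on every attach/detach and watch RSS grow across iterations if the init/fini path leaks. The volume name "myvol" and host "server1" are placeholders; this needs a reachable Gluster volume and linking with -lgfapi.)

```c
/* Hedged sketch of a gfapi init/fini leak reproducer, not a tested
 * tool: each iteration builds and tears down a full gfapi instance,
 * which is the cycle rastar identifies as leaking memory. */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    for (int i = 0; i < 100; i++) {
        glfs_t *fs = glfs_new("myvol");  /* placeholder volume name */
        if (!fs)
            return 1;
        glfs_set_volfile_server(fs, "tcp", "server1", 24007);  /* placeholder host */
        if (glfs_init(fs) == 0)
            glfs_fini(fs);  /* the leak shows up across these cycles */
        /* sample RSS from /proc/self/statm here to chart the growth */
    }
    return 0;
}
```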
13:01:53 <partner> but, feel free to close the meeting, i'll get back on the other channel later on, thanks for your comments!
13:02:29 <rastar> ok, we have run out of time and it does not look good if we cross the time limit in every meeting
13:02:45 <rastar> we were not able to discuss the virtual design summits that shyam proposed
13:03:00 <partner> oh, sorry :o
13:03:46 <rastar> partner: not a problem at all. Knowing about what bothers community the most is equally important
13:04:01 <rastar> I hope shyam sends a mail or else we will discuss it in next meet
13:04:08 <rastar> Thanks for attending the meeting everyone
13:04:14 <rastar> Until next week
13:04:24 <rastar> #endmeeting