weekly_community_meeting_18may2016
LOGS
12:06:54 <rastar> #startmeeting Weekly community meeting 18/May/2016
12:06:54 <zodbot> Meeting started Wed May 18 12:06:54 2016 UTC.  The chair is rastar. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:06:54 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:06:54 <zodbot> The meeting name has been set to 'weekly_community_meeting_18/may/2016'
12:07:08 <rastar> #topic Rollcall
12:07:12 * post-factum is here
12:07:19 * kkeithley is here
12:07:40 * hagarth is here
12:08:50 <rastar> I know there are others who joined too
12:09:10 * jdarcy is here
12:09:28 <rastar> #topic Next weeks meeting host
12:10:13 <rastar> I guess this is on me, given that I skipped the last time.
12:10:37 <post-factum> great admission
12:10:55 <rastar> #topic kshlm & csim to set up faux/pseudo user email for gerrit, bugzilla, github
12:11:07 * anoopcs is also here..
12:11:34 * ndevos _o/
12:11:54 <rastar> ndevos: Saravanakmr anoopcs hi :)
12:12:21 <rastar> we don't have kshlm or csim here
12:12:23 <rastar> moving on
12:12:40 <ndevos> I guess you can add nigelb to that AI too
12:13:19 <rastar> #action kshlm/csim/nigelb to set up faux/pseudo user email for gerrit, bugzilla, github
12:13:34 <rastar> #topic amye to check on some blog posts being distorted on blog.gluster.org, josferna's post in particular
12:13:39 <rastar> wasn't this resolved?
12:14:00 <ndevos> aravinda offered to look into that too, I've not heard about any outcome
12:14:57 <rastar> don't see any update..
12:15:09 <rastar> #action aravinda/amye to check on some blog posts being distorted on blog.gluster.org, josferna's post in particular
12:15:27 <rastar> #topic pranithk1 sends out a summary of release requirements, with some ideas
12:16:21 <rastar> is this the same topic as release cycles or something else?
12:16:40 * ira is here.
12:16:54 <hagarth> rastar: I think so
12:17:02 <rastar> hagarth: ok
12:17:16 <hagarth> rastar: however I think we need to close that discussion soon
12:17:18 <ndevos> I think that was similar to what aravinda wrote, but more with respect to what users and devs expect from releases
12:17:29 <hagarth> rastar: it is pending for a while now
12:17:52 <rastar> hagarth: yes
12:18:13 <ndevos> it would be good to know what we'll follow for the 3.8 release, we should tell our users what the plan for that version is :)
12:18:22 <rastar> I think apart from the inputs in the email thread, we won't get input from anyone who isn't in this meeting
12:18:25 <hagarth> ndevos: agree
12:19:03 <hagarth> ok, let us take an action item to close it out this week
12:19:15 <ndevos> I think amye was collecting some information about a LTS release, maybe she can weigh in with that?
12:19:43 <rastar> ok then, we will have an action item but resolve it in email
12:20:05 <rastar> who should own that? pranith has done his work
12:20:36 * ndevos points to hagarth
12:20:47 <rastar> perfect timing :)
12:20:51 <ndevos> hehe
12:21:15 <rastar> I would still say it should be hagarth
12:21:29 <ndevos> sure, put it on him
12:21:50 <rastar> #action hagarth to announce release strategy after getting inputs from amye
12:21:54 <jdarcy> ndevos apparently has the pointing-finger of death
12:22:07 <post-factum> that is magic wand
12:22:08 <rastar> hagarth: we assigned that AI to you
12:22:37 <rastar> #topic kshlm to check with reporter of 3.6 leaks on backport need
12:22:59 <rastar> kshlm is not here, moving on..
12:23:12 <rastar> #action kshlm to check with reporter of 3.6 leaks on backport need
12:23:14 <hagarth> rastar: cool, thanks
12:23:38 <rastar> #topic GlusterFS 3.7
12:24:15 <post-factum> hagarth: 3.7.12?
12:24:37 <ndevos> we missed the 3.7.12 release date by almost 3 weeks now :-/
12:25:17 <hagarth> post-factum: will take that topic this week with maintainers
12:25:46 <rastar> hagarth: I remember, 3.7.12 is what led to the idea of maintainers signoff
12:25:46 <hagarth> ndevos: if there's no urgency, I don't mind missing a scheduled date
12:25:53 <kkeithley> well, 3.7.11 was released only about 12 days before 3.7.12 was due
12:26:38 <ndevos> hagarth: yeah, thats fine, I would expect 3.7.12 at the end of this month, just one date skipped
12:26:38 <rastar> kkeithley: yes, thats true
12:26:44 <kkeithley> which itself was three weeks late
12:27:04 <hagarth> ok, I will follow up on 3.7.12
12:27:20 <ndevos> skipping a release should be fine, as long as there are no urgent issues to fix
12:27:36 <rastar> hello misc
12:27:49 <misc> hi
12:28:36 <rastar> misc: we have covered AIs, nothing to update there I guess
12:28:36 <kkeithley> who decides that there are no urgent issues? Is it a community decision?
12:29:03 <hagarth> kkeithley: yes, if there is an urgent issue I would expect to hear about it in community forums
12:29:05 <ira> kkeithley: Lack of patches? :)
12:29:25 * kkeithley believes there are plenty of 3.7 patches queued up
12:29:29 <post-factum> ira: there are lots of commits merged for .12 already
12:29:29 <rastar> ira: sometimes it is the other way around
12:29:49 <rastar> some commits don't get backported till a bug is filed for 3.7
12:30:28 <ira> Then why not release?  If there's patches made...
12:30:46 <post-factum> ira: oh :)
12:30:47 <ndevos> kkeithley: I would say it is up to the assigned release engineer, and I suggest that even 5+ minor patches would be sufficient for a release
12:30:52 * kkeithley kinda expects that if someone fixes a bug in master/mainline that the dev will automatically file a bug for it in 3.8 and 3.7
12:31:17 <rastar> kkeithley: I expect that too, just my observation that it is not always true
12:31:24 <kkeithley> at this stage of the game
12:31:40 <ndevos> and even 3.6, if the bug is there too
12:32:37 <rastar> ok, i guess we have decided on required things, hagarth will talk to maintainers and we will have a 3.7.12 release by end of this month
12:32:37 <kkeithley> you're correct, it's not always true. But if I keep saying it enough maybe it'll start happening on a more regular basis
12:32:47 <rastar> kkeithley: :)
12:33:06 <kkeithley> I want world peace too.
12:33:06 <rastar> #topic GlusterFS 3.6
12:34:32 <ndevos> raghu isn't here... and nobody stepped up to help him with 3.6 for all I know
12:34:44 <rastar> rabhat had requested help in maintaining 3.6
12:34:58 <rastar> ndevos: yes
12:35:22 <rastar> anyone?
12:35:38 <hagarth> the patch volume is low atm in release-3.6: http://review.gluster.org/#/q/status:+open+branch:+release-3.6
12:36:31 <rastar> hagarth: yes, just looking for a backup
12:36:42 <rastar> ok , I will put my name here
12:36:49 <rastar> will work with rabhat on this
12:37:32 <hagarth> rastar: great, thank you!
12:37:32 <rastar> i updated the etherpad
12:37:39 <rastar> I think that is enough.
12:37:50 <rastar> #topic GlusterFS 3.5
12:38:18 <ndevos> no bugs have been brought to my attention, and I have not noticed any patches that got submitted
12:38:42 <rastar> ndevos: cool, anyways it is at the end of life
12:38:47 <ndevos> there is no plan to release an other 3.5 update
12:39:04 <ndevos> yes, it'll be officially EOL when 3.8.0 ships
12:39:12 <rastar> next topic, the coolest thing for now
12:39:15 <rastar> #topic GlusterFS 3.8
12:40:06 <ndevos> maintainers and packagers are already looking into 3.8rc1
12:40:11 <ndevos> #link http://thread.gmane.org/gmane.comp.file-systems.gluster.maintainers/727
12:40:23 <ndevos> we got some feedback from the Debian maintainer
12:40:33 <kkeithley> 3.8(.0)rc1 is packaged for Fedora24 and F25. It'll be in Fedora24 Updates-Testing repo soon.
12:40:38 <ndevos> ... and that is the only one that gave direct feedback :-/
12:41:18 <ndevos> cool, thanks kkeithley! please remind everyone by sending a reply to the announcement
12:41:19 <kkeithley> I'm debating (or wondering) whether to package it for other distributions, e.g. Ubuntu or SuSE.
12:41:41 <post-factum> no el7 packages?
12:41:46 <hagarth> I see two problems so far in my limited testing with 3.8:
12:42:09 <hagarth> 1. afr op-version dependency problems. itisravi is aware of this problem.
12:42:11 <kkeithley> I believe ndevos is getting el7 and el6 in the CentOS Storage SIG. TBA
12:42:24 <ndevos> post-factum: those are in the CentOS Storage SIG, there is a link in the email for those
12:42:38 <post-factum> ndevos: my bad, already looking at koji
12:42:46 <rastar> I don't feel good about shipping rc1 to users
12:42:59 <ndevos> el6 isnt ready yet, that is something the CentOS team still needs to setup for us
12:43:04 <post-factum> hagarth: i was unable to mount 3.7 volume by master client... probably, it is related
12:43:33 <hagarth> 2. rolling upgrades being broken due to additional things we are looking for in dictionary during handshake
12:43:46 <hagarth> possibly related to 2bfdc30e0e7fba6f97d8829b2618a1c5907dc404
12:44:01 <post-factum> hagarth: kinda of "Unable to fetch afr pending changelogs. Is op-version >= 30707? [Invalid argument]"
12:44:13 <hagarth> post-factum: this is problem 1. for me
12:44:21 <ndevos> hagarth: file a bug with steps to reproduce and send a mail (one per issue) to the -devel list?
12:44:25 <post-factum> hagarth: no rolling upgrades O_o?
12:44:43 <hagarth> post-factum: yes, rolling upgrades (with clients online) is broken atm
12:44:45 <kkeithley> FYI (reminder) we don't (can't) ship in EPEL because RHS/RHGS client-side pkgs are in RHEL.  We took a decision to not provide el[567] pkgs on download.gluster.org because they will be in the CentOS Storage SIG
12:44:50 <post-factum> crap :(
12:45:17 <post-factum> hagarth: hope it could be fixed
12:45:24 <ndevos> kkeithley: you should #info that :)
12:45:43 <kkeithley> We took a decision to not provide el[567] 3.8 pkgs on download.gluster.org because they will be in the CentOS Storage SIG
12:45:44 <hagarth> ndevos: too much of overhead with my limited cycles but will try doing that
12:46:22 <ndevos> hagarth: sharing experience is rather important, but yes, we're all quite busy...
12:46:23 <kkeithley> #info FYI (reminder) we don't (can't) ship in EPEL because RHS/RHGS client-side pkgs are in RHEL.  We took a decision to not provide el[567] 3.8 pkgs on download.gluster.org because they will be in the CentOS Storage SIG
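The SIG route kkeithley describes above can be sketched roughly as follows; the release-package name is an assumption based on CentOS Storage SIG conventions and should be verified against the SIG documentation before use.

```shell
# Hypothetical sketch: installing Gluster packages from the CentOS
# Storage SIG on an el7 host, instead of EPEL or download.gluster.org
# (package names assumed, not confirmed in the meeting).
yum install -y centos-release-gluster    # enables the Storage SIG repo
yum install -y glusterfs-server          # server + client packages
systemctl enable --now glusterd          # start the management daemon
```

These commands require a CentOS 7 host with root access, so they are shown as an illustrative fragment only.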
12:47:06 <ndevos> anyway, 3.8 is progressing
12:47:16 <hagarth> kkeithley: would we be able to get download stats from centos SIG?
12:47:36 <ndevos> we still need all feature owners to provide release-notes on https://public.pad.fsfe.org/p/glusterfs-3.8-release-notes
12:47:54 <kkeithley> hagarth: that's a good question. I'll ask kbsingh
12:47:56 <ndevos> hagarth: not really, it is just like other distributions
12:48:36 <rastar> ok, is that all on 3.8?
12:49:01 <rastar> moving on, we are running out of time
12:49:04 <rastar> #topic GlusterFS 4.0
12:49:06 <ndevos> the release for 3.8.0 is still planned for the end of this month (or the 1st few days in June)
12:49:22 <ndevos> ... and thats all :)
12:49:27 <post-factum> ndevos: is that even possible given only one rc is released?
12:49:33 <jdarcy> I pushed a big blob of crappy reconciliation code for people to laugh at.
12:49:39 <ndevos> post-factum: more will follow
12:49:53 <post-factum> ndevos: ah (where is my popcorn)
12:49:54 <rastar> jdarcy: atinm all yours :)
12:50:13 <jdarcy> Haven't heard much from the other 4.0 leads, so I'll let them speak for themselves.
12:50:54 <hagarth> post-factum: rolling upgrades can be fixed. release cannot happen till we fix that problem.
12:51:19 <post-factum> hagarth: glad to hear that
12:51:26 <rastar> hagarth: nice
12:52:02 <rastar> jdarcy: ok
12:52:19 <rastar> I don't see any other updates on 4.0
12:52:35 <rastar> new topics on the agenda :)
12:52:53 <rastar> #topic NFS Ganesha and Gluster updates
12:53:12 * ndevos points to kkeithley
12:53:28 <kkeithley> oh
12:53:57 <post-factum> wingardium leviosa
12:54:08 <kkeithley> NFS-Ganesha is set to become the default NFS server in 3.8
12:54:13 * ndevos "avada ka..."
12:54:22 <kkeithley> work is progressing on 2.4 with dev-18 posted on Friday
12:54:30 <post-factum> ndevos: with great power comes great responsibility
12:54:32 <kkeithley> obliviate
12:54:53 <hagarth> lumos
12:54:58 <ndevos> and we'll start testing the combination more in the CentOS CI soon too, hopefully
12:55:05 <jdarcy> Ash nazg durbatuluk . . . oops, wrong canon.
12:55:55 <atinm> sorry I was afk :(
12:56:17 <rastar> Sonorus: That is how we refer to harry potter, Gluster community++
12:56:21 <post-factum> kkeithley: had some issues with nfs-ganesha+dovecot storage :((
12:56:40 <kkeithley> lease work in gluster is progressing.  Leases are the basis for NFS reservations and Samba op-locks, and will even use the lease framework for pNFS layout recall.
12:57:05 <post-factum> kkeithley: with nfs kernel client blocking in D state. so if it really will replace builtin nfs server, i need to find out what happens there
12:57:32 <rastar> post-factum: was the nfs client on the same node as server?
12:57:37 <post-factum> nope
12:57:44 <kkeithley> post-factum: let's follow up in #gluster-dev after the meeting
12:57:52 <post-factum> kkeithley: okay
12:57:55 <ndevos> post-factum: replacement will happen at one point, 3.8 will have Gluster/NFS disabled by default to encourage users to migrate to Ganesha
12:58:22 <post-factum> ndevos: ah ok, so it could be enabled again
12:58:39 <ndevos> post-factum: yes, it's only one volume option away
12:58:58 <hagarth> nfs.disable off will bring back gNFS right with 3.8?
12:59:08 <ndevos> post-factum: but we want to fix any issues that you have with ganesha
12:59:10 <kkeithley> just `gluster v set $vol nfs.disable false` to bring it back.  (in 3.8)
12:59:15 <ndevos> hagarth: yes
12:59:18 <kkeithley> it's not disabled in the build or anything.
12:59:21 <post-factum> ndevos: so want I :)
12:59:33 <hagarth> ok, cool!
12:59:42 <rastar> ok, next topic
12:59:48 * ndevos points to https://public.pad.fsfe.org/p/glusterfs-3.8-release-notes
13:00:22 <rastar> #topic Samba and Gluster
13:00:23 <post-factum> ndevos: after kkeithley's obliviate it is hard to remember...
13:00:54 <kkeithley> be glad I didn't use sectumsempra
13:01:06 <ndevos> who wants Samba if you can have Ganesha?
13:01:09 <rastar> Some updates from my side
13:01:21 <post-factum> ndevos: we want ;)
13:01:25 <rastar> :)
13:01:31 <kkeithley> crazy Windows users
13:01:39 <Saravanakmr> :)
13:01:51 * ndevos ... heh
13:01:54 <post-factum> but it is all about memory consumption for samba+gfapi
13:02:31 <rastar> ok , leases xlator was merged in 3.8 and will be basis for leases in Samba too.
13:02:51 <kkeithley> isn't that called op-locks in Samba?
13:03:01 <ira> kkeithley: No, leases.
13:03:10 <ndevos> no, leases, and delegations in NFS
13:03:10 <rastar> post-factum made a good observation: gfapi-based access takes a lot of RAM when used with Samba, because every connection spawns a new smbd process
13:03:32 <post-factum> :(
13:03:39 <rastar> kkeithley: think of leases in SMB as op-locks done right, that is op-locks2
13:03:48 * kkeithley wonders where he got op-locks from
13:04:05 <ira> it used to be oplocks, pre 2.1.
13:04:17 <ndevos> kkeithley shows his age
13:04:24 <kkeithley> Only 30
13:04:33 <post-factum> *still 30
13:04:33 <rastar> hagarth: do we strictly say no to FUSE reexport method for Samba use cases?
13:04:38 <ira> kkeithley: In which base? ;)
13:04:50 <hagarth> rastar: are there any benefits in doing so?
13:04:52 <ira> rastar: Yes.
13:05:03 <ndevos> rastar: I do not think we can prevent users from setting that up...
13:05:13 <rastar> yes, one glusterfs process instead of as many as smbd processes
13:05:21 <rastar> hagarth: ^^
13:05:22 <ira> ndevos: We can't stop them from using a POSIX FSAL with Ganesha.
13:05:35 <hagarth> rastar: what is the footprint of samba without libgfapi?
13:05:46 <rastar> hagarth: around 10 MB
13:05:55 <ndevos> ira: indeed, we can neither prevent them from exporting fuse mounts with kernel-nfs, but we can strongly recommend against it
13:05:56 <rastar> hagarth: with Gluster it is around 200MB
13:06:06 * ndevos *cough*!
13:06:13 <kkeithley> gfapi has some memory issues itself (that I'm looking into)
13:06:24 <post-factum> rastar: I'd say, 100–120, but that does not change things much
13:06:39 <hagarth> rastar: would that be true if we disable all perf xlators?
13:06:40 <kkeithley> other people can look too.
13:06:53 <rastar> hagarth: that would reduce it to 60 maybe
13:07:08 <kkeithley> and then what happens to performance?
13:07:27 <ira> kkeithley: For non-metadata ops, it'll probably improve ;)
13:08:10 <rastar> looks like it deserves a mail thread of its own, I will start it
13:08:14 <hagarth> rastar: agree
13:08:21 <rastar> we have crossed our time limits
13:08:26 <rastar> #topic open floor
13:08:44 <post-factum> hagarth: http://review.gluster.org/14399 passed regression tests, need code review :)
13:09:54 <hagarth> post-factum: will do :)
13:10:05 <post-factum> hagarth: thanks!
13:10:14 <hagarth> did you folks check out lio + tcmu-runner with libgfapi?
13:10:36 <ndevos> oh, thats reminds me
13:10:57 <kkeithley> thought for the day: devs should occasionally do a build on a 32-bit system. It's a good way to catch log/printf format string mistakes
13:11:04 <ndevos> #help packagers for lio/tcmu-runner with libgfapi wanted to get the packages in the CentOS Storage SIG
13:11:24 <ndevos> kkeithley: just #idea that
13:11:41 <kkeithley> #idea thought for the day: devs should occasionally do a build on a 32-bit system. It's a good way to catch log/printf format string mistakes
13:11:52 <hagarth> ndevos: let us send out a note on -devel and -users. actually I want to write up about this integration
13:12:06 <hagarth> as it addresses a long pending request in the community about block storage
13:12:14 <ndevos> hagarth: I assume that with "us" you mean yourself ;-)
13:12:35 <hagarth> ndevos: check the second part of the same sentence ;)
13:12:43 <ndevos> hagarth: I also expect to see it integrated with storhaug (sp?) at one point
13:13:01 <hagarth> ndevos: possibly yes
13:13:42 <rastar> I have a few things to add: there is SDC happening in BLR next week. We have presentations from rafi on tiering in Gluster, surabhi is presenting on Multichannel in Samba, and I am sure there was a third presentation that I am not remembering now.
13:14:13 <hagarth> rastar: from atinm?
13:14:16 <ndevos> rastar: cool! is that on the event page already?
13:14:35 <post-factum> rastar: is multichannel like multipath-tcp but for poor people :)?
13:14:43 <rastar> hagarth: i am not aware of but atinm might have
13:14:50 <ndevos> #link https://www.gluster.org/events/
13:14:53 <rastar> ndevos: it has not been updated yet, I will ask them to
13:15:00 <Saravanakmr> I have something to ask about gluster blog..
13:15:16 <Saravanakmr> Do we have some document which explains how to add a blog in gluster.org?
13:15:38 <rastar> post-factum: I don't know about multipath-tcp but this enables windows clients to contact server on as many NICs as it can get route to
13:15:49 <ira> post-factum: No, it is the enabling technology for working SMB Direct, and RDMA ;).
13:15:56 <ndevos> Saravanakmr: add an entry in https://github.com/gluster/planet-gluster/blob/master/data/feeds.yml and it'll get synced on planet.gluster.org
13:16:10 <post-factum> ira: sounds like mature enterprise, ok
13:16:24 <rastar> post-factum: and the connection dies only when the last tcp route is gone
13:16:43 <ira> allows for multiple TCP connections per nic for better throughput... etc.
13:16:48 <ira> Good stuff.
13:16:53 <Saravanakmr> ndevos, thanks! can have this added as part of documentation somewhere ?
13:17:14 <ira> obnox_: can fill you in with lots, and lots of details ;)
13:17:26 <ndevos> Saravanakmr: sure, probably suitable under http://gluster.readthedocs.io/en/latest/Contributors-Guide/Index/
13:17:44 <rastar> #action Saravanakmr to add documentation on how to add blogs
13:17:47 <Saravanakmr> ndevos, ok..will check and add here
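For reference, an entry in that feeds.yml would presumably look something like the following; the field names are an assumption and should be checked against the existing entries in the gluster/planet-gluster repository before submitting a change.

```yaml
# Hypothetical feeds.yml entry -- field names assumed, verify against
# the existing entries in the repository before use.
- name: Jane Doe                         # author shown on planet.gluster.org
  feed: https://example.com/blog/feed/   # RSS/Atom feed URL of the blog
```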
13:18:01 <Saravanakmr> another one, related to blog
13:18:03 <Saravanakmr> what is the difference between planet.gluster.org and blog.gluster.org ?
13:18:18 <Saravanakmr> I can see different blogs updated in both the pages -  can we have one single blog interface please?
13:19:23 <ndevos> Saravanakmr: blogs.gluster.org was supposed to get removed, not sure why it is still around
13:19:23 <kkeithley> ndevos: should maintainers merge patches for 3.8 or do you want to manage those?
13:19:27 <rastar> we have really run out of time. kkeithley changed the topic too
13:19:41 <kkeithley> that was subtle, wasn't it. ;-)
13:19:42 <rastar> kkeithley: yes, relevant question
13:19:56 <ira> kkeithley: as subtle as a brick through a window.
13:20:21 <ndevos> kkeithley: I do not need to be the only one that merges backports in 3.8, maintainers are free to merge too
13:20:35 <rastar> ndevos: cool, thanks!
13:20:37 <Saravanakmr> ndevos, but I can see "Using LIO with Gluster" blog is updated in blogs.gluster.org and not in planet.gluster.org
13:20:42 <amye> Saravanakmr, blog.gluster.org is a WordPress blog -
13:21:04 <amye> planet.gluster.org is a feed aggregator
13:21:25 <ndevos> Saravanakmr: well, I dont know, but I thought the infra team didnt want to keep maintaining their own blog instance, and the planet.gluster.org was the better approach
13:21:50 <rastar> kkeithley: ira :)
13:22:03 <ndevos> but, maybe amye decided differently and we'll keep the wordpress application around anyway?
13:22:26 <amye> ndevos, the blog.gluster.org is how we're able to put out posts that don't need to be on the main webpage -
13:22:38 <amye> this is the first I've heard that the infra team didn't want to maintain it anymore
13:22:55 <rastar> I will end this meeting now, please discuss the rest of the topics in a mail thread or on gluster-devel
13:23:09 <ndevos> amye: oh, well, the plan to drop it caused planet.gluster.org to popup...
13:23:42 * ndevos isnt much in favour of two sources with the same(?) information
13:23:43 <Saravanakmr> amye, ndevos I think it is better to have one single link for all blogs related to gluster from gluster.org TOP page.
13:23:55 <amye> ndevos, aha, makes sense, and that feed makes sense to keep around. Blog.gluster.org is something that causes the twitter feed to automagically link. :)
13:24:05 <obnox_> pong
13:24:13 <obnox_> ira: sorry, late pong - what's up?
13:24:23 <amye> Saravanakmr, so things like the posts for the newsletter?
13:24:39 <ira> just multichannel discussion.
13:24:53 <rastar> Thank you everyone for attending.
13:24:58 <obnox> ah
13:25:03 <rastar> #endmeeting