weekly_community_meeting_21-sep-2016
LOGS
11:59:57 <kshlm> #startmeeting Weekly community meeting 21-Sep-2016
11:59:57 <zodbot> Meeting started Wed Sep 21 11:59:57 2016 UTC.  The chair is kshlm. Information about MeetBot at http://wiki.debian.org/MeetBot.
11:59:57 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
11:59:57 <zodbot> The meeting name has been set to 'weekly_community_meeting_21-sep-2016'
12:00:06 <kshlm> #topic Roll Call
12:00:31 <kshlm> I'll wait for 2 minutes for people to filter in.
12:02:11 <kshlm> Is no one joining the meeting today?
12:02:16 * kkeithley is here
12:02:21 * samikshan is here
12:02:22 <kshlm> kkeithley, Hey!
12:03:05 <kkeithley> kshlm: hey
12:03:31 <kshlm> Hey samikshan!
12:03:32 * rjoseph is here
12:03:38 <kshlm> Hey rjoseph!
12:03:45 <kshlm> Is it just the 4 of us here?
12:03:53 <rjoseph> hi kshlm
12:04:01 <rjoseph> seems like
12:04:11 * ndevos _o/
12:04:20 <kshlm> I'd rather end the meeting than continue with such low attendance
12:04:28 * ankitraj here
12:04:29 <kshlm> Hey ndevos!
12:04:39 <samikshan> Hey kshlm
12:04:46 <kkeithley> I see 38 users, and I know a few of them are actually on-line
12:04:51 <kshlm> Looks like people are coming in.
12:05:00 <kshlm> I'll announce on the other channels again.
12:05:29 <Klas> kkeithley: some of us are just idlers though (just interesting to read)
12:05:44 <nigelb> o/
12:06:11 <kshlm> Hey nigelb!
12:06:17 <kshlm> Welcome Klas.
12:06:35 <kshlm> I'll start the meeting. We've got enough attendees.
12:06:39 <nigelb> (I'll drop out early because I have another meeting)
12:06:47 <kshlm> We can finish quickly.
12:07:04 <kshlm> nigelb, In that case, can you provide infra updates now?
12:07:57 <kshlm> nigelb, ?
12:08:46 <kshlm> Looks like he dropped off already.
12:08:57 <kshlm> #topic Next week's host
12:09:02 <kshlm> Do we have a volunteer?
12:10:04 * samikshan will do it
12:10:06 <nigelb> woops
12:10:11 <kshlm> samikshan, Thanks!
12:10:12 <nigelb> I'm still here.
12:10:19 <kshlm> #info samikshan is next week's host
12:10:21 <kshlm> nigelb, cool
12:10:26 <kshlm> #topic Project Infrastructure
12:10:38 <nigelb> okay, we're still debugging our first VM in the cage.
12:10:44 <nigelb> The tests keep failing at random points.
12:10:51 <nigelb> Officially, we need help :)
12:11:20 <kshlm> Awesome!
12:11:25 <misc> we did fix a few things however
12:11:32 <nigelb> If someone has time, talk to me or misc.
12:11:33 <kshlm> misc, Awesome!
12:11:46 <nigelb> We'll get you access so you can help debug.
12:11:57 <misc> and a cookie if that works
12:12:06 <nigelb> We've done a massive Jenkins access clean up.
12:12:08 <kshlm> nigelb, misc, Get this onto gluster-devel@. More people follow that.
12:12:13 <nigelb> Noted.
12:12:21 <nigelb> Most people who need access will continue to have it.
12:12:27 <nigelb> we have removed a lot of dormant accounts
12:12:36 <nigelb> and cycled out the jenkins password across all nodes.
12:12:43 <nigelb> and the ssh keys for root and jenkins user.
12:12:59 <nigelb> reasonably confident that the CI system is more secure, though there's no easy way to tell.
12:13:36 <nigelb> that's it for infra from me.
12:13:44 <kshlm> That's awesome work!
12:13:47 <kshlm> nigelb++
12:13:50 <kshlm> misc++
12:13:50 <zodbot> kshlm: Karma for misc changed to 9 (for the f24 release cycle):  https://badges.fedoraproject.org/tags/cookie/any
12:14:59 <kshlm> Thank you misc and nigelb.
12:15:13 <kshlm> I'll move onto the other topics if you have nothing more to add.
12:15:45 <kshlm> Thanks again.
12:15:49 <nigelb> Nope
12:15:50 <kshlm> #topic GlusterFS-4.0
12:16:10 <kshlm> GD2 has moved quite a bit this week.
12:16:29 <ndevos> nice!
12:16:33 <kshlm> I got the multi-node transaction framework changes merged!
12:16:45 <kshlm> We've also been doing some cleanups to the code.
12:17:03 <kshlm> I hope to do a build/release for people to test.
12:17:15 <kshlm> Expect this before this time tomorrow.
12:17:26 <kshlm> I'll also be sending an update out to gluster-devel.
12:18:28 <kshlm> In other 4.0 news, jdarcy updated gluster-devel on his progress on brick-multiplexing.
12:18:55 <kshlm> #link https://www.gluster.org/pipermail/gluster-devel/2016-September/050928.html
12:19:11 <kshlm> Good progress there.
12:19:30 <kshlm> He's asked for help to move progress forward, and there are volunteers already.
12:19:52 <kshlm> Refer to the linked mail-thread for more information.
12:20:01 <kshlm> That's all I have for 4.0.
12:20:14 <kshlm> Do we have anything else re. 4.0 to discuss?
12:20:27 <kshlm> I'll move on if not.
12:21:30 <kshlm> #topic GlusterFS-3.9
12:21:40 <kshlm> aravindavk, Are you here?
12:22:25 <kshlm> I haven't noticed any updates on 3.9 on the lists.
12:22:26 <aravindavk> kshlm: hi
12:22:46 <kshlm> aravindavk, Do you have anything to share on 3.9?
12:22:50 <aravindavk> kshlm: Today, Pranith will be tagging RC1
12:23:20 <kshlm> aravindavk, Cool!
12:23:32 <kshlm> It would be nice if this was announced on the lists.
12:23:43 <ndevos> pranith was doing some release notes just before the meeting, it should be up there soon
12:23:49 <aravindavk> kshlm: Pranith sent WIP patch for release notes http://review.gluster.org/#/c/15538/
12:24:25 <kshlm> ndevos, Yup. He asked me about the release-notes script.
12:24:26 <ndevos> aravindavk: just update the release notes for each rc that follows :)
12:24:37 <kshlm> Though I don't see him at his place now.
12:24:41 <ndevos> kshlm: I was standing next to him when he executed it ;-)
12:25:09 <aravindavk> kshlm: he is not able to connect to freenode IRC, looks like some issue
12:25:10 <kshlm> aravindavk, Does the review-request require more reviews?
12:25:41 <kshlm> I had that problem yesterday. Seems like too many users from the same location.
12:25:50 <aravindavk> kshlm: for RC1 the patch is fine, we will send a mail asking for updates to the release notes
12:25:56 <kshlm> s/yesterday/two days before/
12:26:03 <kshlm> aravindavk, Awesome!
12:26:20 <kshlm> Keep up the awesome work you guys are doing.
12:26:39 <kshlm> aravindavk, Shall I move onto the next topic if you're done?
12:26:58 <aravindavk> kshlm: sure. Thanks
12:27:05 <kshlm> #topic GlusterFS-3.8
12:27:17 <kshlm> ndevos, Your turn.
12:27:41 <ndevos> ... I dont think there is anything to mention
12:27:55 <kshlm> That's good!
12:28:03 <kshlm> Everything's moving along smooth.
12:28:06 <ndevos> oh, the next release is basically when the Gluster Summit is happening :D
12:28:26 <kshlm> We'll be having a summit release then!
12:28:47 <ndevos> yes, looks like it
12:29:00 <kshlm> Cool!
12:29:33 <kshlm> Shall I move on?
12:29:43 <ndevos> sure
12:29:45 <kshlm> #topic GlusterFS-3.7
12:29:55 <kshlm> Not much to add here.
12:30:10 <kshlm> I asked for volunteers to take over release maintenance
12:30:23 <kshlm> Got no replies.
12:30:35 <kshlm> So I guess I'm doing the release again this time.
12:31:01 <kshlm> The release is still on for 30th September.
12:31:20 <kshlm> I guess 3.7 could also do a summit release.
12:31:39 <kshlm> But I'd rather not.
12:32:05 <kshlm> I'll be sending out reminders about the release over the weekend.
12:32:12 <kkeithley> I think it'd be okay to skip this cycle due to the summit,
12:32:19 <kkeithley> but your call
12:32:48 <kkeithley> how many patches are there?
12:32:58 <kshlm> kkeithley, 16 right now.
12:33:06 <kshlm> I think I'll do it.
12:33:50 <kshlm> I'll move on to the next topics if there is nothing else about 3.7 to discuss.
12:34:27 <kshlm> #topic NFS Ganesha
12:35:07 <kkeithley> 2.4 is set to GA later today or early tomorrow, US west coast time
12:35:07 <kshlm> kkeithley, ndevos, What's new in Ganesha land?
12:35:29 <kshlm> Nice!
12:35:30 <kkeithley> the libntirpc dependency GA'd the other day
12:36:56 <kshlm> So we'll be having a new NFS-Ganesha to play with soon.
12:37:04 <kkeithley> yup
12:37:10 <kshlm> Shall I move on?
12:37:19 <kkeithley> yes
12:37:28 <kshlm> Thanks kkeithley
12:37:34 <kshlm> #topic Samba
12:37:59 <kshlm> Any Samba devs active here right now?
12:38:06 <kshlm> obnox, ?
12:38:51 <kshlm> Doesn't seem to be here.
12:38:53 <kshlm> I'll move on.
12:39:14 <kshlm> #topic Last weeks AIs
12:39:26 <kshlm> #topic rastar_afk/ndevos/jdarcy to improve cleanup to control the processes that test starts.
12:39:39 <kshlm> I don't think anything's been done about this.
12:40:00 <kshlm> ndevos, Have you done anything about this?
12:40:27 <kshlm> #action rastar_afk/ndevos/jdarcy to improve cleanup to control the processes that test starts.
12:40:32 <kshlm> I'll carry this forward.
12:40:40 <ndevos> no, I dont
12:41:07 <kshlm> ndevos, Any idea if either rastar or jdarcy are looking at this?
12:41:22 <ndevos> sorry, I dont know
12:41:43 <kshlm> Okay.
12:41:47 <kshlm> #topic RC tagging to be done by this week for 3.9 by aravindavk.
12:41:53 <kshlm> This is in progress.
12:42:11 <kshlm> I'll keep it open for this week as well.
12:42:19 <kshlm> #topic RC tagging to be done by this week for 3.9 by aravindavk/pranithk
12:42:34 <kshlm> #action RC tagging to be done by this week for 3.9 by aravindavk/pranithk
12:42:39 <kshlm> #topic jdarcy will bug amye regarding a public announcement for Gluster Summit talks
12:42:50 <kshlm> amye sent out an announcement to the mailing list.
12:43:02 <kshlm> jdarcy's bugging worked!
12:43:08 <samikshan> :D
12:43:27 <kshlm> #link https://www.gluster.org/pipermail/gluster-devel/2016-September/050888.html
12:43:36 <kshlm> That's it with the AIs.
12:43:42 <kshlm> #topic Open floor
12:43:58 <kshlm> #topic RHEL5 build issues
12:44:02 <kshlm> nigelb, added this.
12:44:12 <kshlm> What are these issues?
12:44:33 <ndevos> there was an email about that
12:44:36 <rjoseph> I have an update on our upstream documentation improvement effort
12:44:49 <ndevos> we dont build for rhel5, because it does not have all the dependencies
12:44:56 <kshlm> rjoseph, Give us a minute. We'll get to you next.
12:45:07 <rjoseph> sure
12:45:07 <aravindavk> kshlm: one issue was related to running the eventskeygen.py file
12:45:41 <aravindavk> kshlm: rhel5/centos5 has python 2.4, which doesn't support the syntax used
12:45:43 <ndevos> we need a './configure --without-server' to only build the client side bits, or maybe additional --without-... options
12:45:44 <kshlm> aravindavk, A particular python feature not supported on el5?
12:46:01 <ndevos> and userspace-rcu is too old there
12:46:02 <kshlm> ndevos, That would be a very good thing to have.
12:46:16 <kshlm> ndevos, We've worked around rcu issues on el5.
12:46:25 <ndevos> kshlm: ah, ok :)
12:46:26 <aravindavk> kshlm: str.format() and the "with" statement were not available in py 2.4
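(For context, a minimal hypothetical sketch, not taken from eventskeygen.py; the macro name and header filename below are made up. It shows the kind of constructs that need Python 2.5/2.6 or newer, and so fail on the Python 2.4 shipped with el5, alongside 2.4-safe equivalents.)

    # Hypothetical illustration only -- not the actual eventskeygen.py code.

    # str.format() was added in Python 2.6 and does not exist on 2.4:
    line = "#define EVENT_EXAMPLE {0}\n".format(1)

    # the 'with' statement needs Python 2.6 (or 2.5 with a __future__ import):
    with open("example-eventtypes.h", "w") as header:
        header.write(line)

    # Python 2.4-compatible equivalents:
    line = "#define EVENT_EXAMPLE %d\n" % 1
    header = open("example-eventtypes.h", "w")
    try:
        header.write(line)
    finally:
        header.close()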
12:46:28 <nigelb> yeah what ndevos said.
12:46:40 <nigelb> so, I'm thinking of setting up mock builds for el5
12:46:58 <nigelb> I'll report back once the fixes are merged in and we have something stable.
12:47:02 <kkeithley> is it that userspace-rcu is too old, or that RHEL5 doesn't have preadv and pwritev?
12:47:04 <kshlm> Just a thought, should we still be supporting el5?
12:47:20 <kshlm> It's getting a little too old now.
12:47:24 <misc> it will soon no longer be supported, no ?
12:47:33 <nigelb> *clients* will be supported, not server.
12:47:38 <misc> (iirc, we have a "no el5 support" party planned for ansible)
12:47:49 <ndevos> aravindavk: if the event things are usable for el5, you may want to add a configure.ac detection or just have a --without-eventing or something?
12:48:16 <ndevos> we haven't had packages for el5 for a long time now, nobody has requested them either
12:48:38 <aravindavk> ndevos: Not events feature, that script generates C header file for clients
12:48:45 <kshlm> misc, el5 has already reached end of prod phase 1.
12:48:49 <nigelb> If downstream wants to support them and provide patches, I'm guessing we're okay with supporting it?
12:49:09 <misc> kshlm: yeah, but centos 5 is still supported
12:49:24 <misc> so I would suggest, at a minimum, not to support it longer than centos
12:49:35 <ndevos> aravindavk: yes, but if it is missing, should building not be possible without eventing?
12:49:44 <kkeithley> atm we don't have a way to elide the -server subpackage for EL5 builds
12:49:44 <kshlm> misc, Agree.
12:50:04 <misc> also, would an older version of the client be compatible with gluster ?
12:50:28 <ndevos> 'mostly'
12:50:55 <nigelb> so the way I see it, if we get the logic to separate client/server and the patches to fix issues, we're good?
12:51:35 <misc> seems so, yes
12:51:37 <kshlm> If we could get `--no-glusterd --no-geo-rep --no-events`, that should be good.
12:51:46 <kshlm> Looks easy to do as well.
12:51:51 <misc> now, we do not run any tests on EL5, so we might as well declare it unsupported
12:52:08 <nigelb> so, I'll circle the wagons to getting this done.
12:52:10 <kshlm> That's if it's just those 3, but I know I've missed others.
12:52:11 <kkeithley> per http://www.gluster.org/pipermail/gluster-devel/2016-April/048955.html we said we're not providing 3.8.x packages for EL5
12:52:20 <nigelb> and see if I can add a minimal test so we know if we broke it.
12:52:56 <kshlm> kkeithley, Oh! Thanks for reminding all of us of it.
12:53:27 <aravindavk> kshlm: I will send fix soon to make it work with centos5
12:53:28 <kshlm> So with this in mind, should we still be talking of el5?
12:53:36 <ndevos> hah, nope!
12:53:48 <kshlm> This is basically a downstream issue now.
12:54:08 <ndevos> but we'll accept patches for more --without-... options
12:54:18 <nigelb> again, if downstream is going to bear the pain.
12:54:18 <kshlm> aravindavk, Please do send it. It'll be good to have as many configure flags as possible.
12:54:21 <nigelb> why not?
12:54:27 <nigelb> (the pain of supporting it)
12:54:55 <kshlm> nigelb, We'll accept such changes upstream. But we won't be providing el5 packages.
12:55:05 <ndevos> nigelb: and additional work for packagers, and the CentOS SIGs do not have el5 either
12:55:09 <misc> but so, how do we test the changes ?
12:55:51 <ndevos> build changes are simple, just add --without-server and see if server bits get built
12:56:14 <ndevos> the other one is ssl, which is an older version in el5 than the one we depend on...
12:56:19 <kkeithley> we're allowed to change our mind about providing packages. If a --without-server option gets added we can revisit the decision
12:56:40 <nigelb> misc: that one, we have to figure out :)
12:56:41 <kkeithley> but don't expect it to be made lightly
12:56:49 <nigelb> we don't want to provide packages.
12:56:56 <nigelb> we want to make sure we don't make breaking changes.
12:57:00 <nigelb> subtle difference.
12:57:16 <misc> maybe we can contact downstream and ask if they wish to support el5 ?
12:57:22 <nigelb> They do.
12:57:30 <nigelb> which is why I added this to the agenda.
12:57:52 <kshlm> nigelb, It's only for the client bits.
12:57:55 <nigelb> yes.
12:58:04 <nigelb> so we'll figure out how to make sure the client bits can be tested
12:58:05 <kkeithley> and the bits can be in the source without us being required to provide community packages
12:58:11 <ndevos> they just should use NFS or Samba ;-)
12:58:18 <misc> nigelb: a centos 5 in cage ?
12:58:19 <kshlm> Okay.
12:58:24 <nigelb> misc: even a mock will do.
12:58:29 <nigelb> just make sure it builds.
12:58:39 <misc> oh just that
12:58:40 <nigelb> we'll discuss this on devel in the coming days.
12:58:48 <nigelb> with downstream figuring out their plan of action.
12:59:03 <kshlm> So to summarize the el5 issue, we'll be willing to accept changes which help build gluster on el5. But we'll not be providing any el5 packages upstream.
12:59:18 <nigelb> and we may test el5 builds
12:59:23 <kshlm> nigelb, Yup. Let's continue this discussion on the mailing lists.
12:59:24 <misc> but not the code
12:59:32 <kkeithley> we never provide packages "upstream" ;-)
12:59:52 <kkeithley> pfft.  ignore my last
12:59:58 <kshlm> s/upstream/from the community/
13:00:25 <kshlm> We're out of time.
13:00:45 <misc> there is still rjoseph and the docs
13:00:50 <kshlm> Can anyone add a summary of what was discussed here to the mail-thread?
13:00:56 <kshlm> misc, I'll get to it.
13:01:44 <rjoseph> Anyway, I am planning to send a mail to devel
13:01:52 <kshlm> #link https://www.gluster.org/pipermail/gluster-infra/2016-September/002821.html
13:02:03 <rjoseph> If the time is up we can discuss on devel
13:02:06 <kshlm> #topic Updates on documentation
13:02:20 <kshlm> rjoseph, If you can provide a brief update, do it.
13:02:27 <rjoseph> ok
13:02:32 <kshlm> We can continue any discussion on the mailing lists.
13:02:33 <rjoseph> As mentioned by Amye, Red Hat is planning to contribute RHGS documentation upstream
13:02:53 <rjoseph> I am working with the Red Hat documentation team on this.
13:02:57 <kkeithley> hurray
13:03:03 <amye> Totally hurray
13:03:06 <rjoseph> I merged our upstream documentation with the Red Hat documentation and removed a few pieces of Red Hat-specific content. As a POC I hosted this combined doc on gitbook.com
13:03:07 <ndevos> wohooo!!! \o/
13:03:31 <misc> mhh I did clean the doc this weekend, bad timing I guess
13:03:51 <rjoseph> https://rajeshjoseph.gitbooks.io/test-guide/content/
13:04:05 <rjoseph> misc: that will also help
13:04:08 <kkeithley> is there a plan to keep it in sync?
13:04:10 <kshlm> gitbooks is cool!
13:04:10 <rjoseph> I can take a look
13:04:33 <misc> rjoseph: that was mostly typo and stuff like this
13:04:34 <rjoseph> we still need to do some cleanup
13:04:37 <kkeithley> in sync, iow up to date
13:04:46 <misc> and well, the doc had lots of out of date stuff
13:04:53 <amye> Ideally, we're going to be able to have downstream continually help with upstream but we started with the admin guide only
13:05:16 <kshlm> This is good news rjoseph and amye!
13:05:29 <misc> rjoseph: where is the current code hosted ?
13:05:33 <kshlm> Please share this on the lists.
13:05:39 <kkeithley> Upstream First!
13:05:39 <misc> rjoseph: and you used pandoc for converting to asciidoc ?
13:05:40 <amye> There's been a long thread on the mailing lists but telling everyone else here first is more fun.
13:06:16 <rjoseph> the content is hosted on github
13:06:23 <rjoseph> https://github.com/rajeshjoseph/doctest
13:06:38 <rjoseph> misc: I used pandoc for conversion
13:06:50 <kshlm> amye, Yup. But we're out of time. I want to end this meeting.
13:07:15 <kshlm> We're over time now! I'll be ending the meeting.
13:07:17 <amye> yes of course.
13:07:20 <rjoseph> sure
13:07:26 <kshlm> rjoseph, Please share this good news on the lists.
13:07:27 <rjoseph> I will provide more updates in a mail
13:07:37 <kshlm> Thanks for attending today's meeting everyone.
13:07:53 <kshlm> #Announcements
13:08:02 <kshlm> #topic Announcements
13:08:09 <kshlm> If you're attending any event/conference please add the event and yourselves to Gluster attendance of events: http://www.gluster.org/events
13:08:16 <kshlm> Put (even minor) interesting topics on https://public.pad.fsfe.org/p/gluster-weekly-news
13:08:22 <kshlm> Thanks again everyone!
13:08:26 <kshlm> #endmeeting