15:00:29 <hagarth> #startmeeting
15:00:29 <zodbot> Meeting started Wed Feb 19 15:00:29 2014 UTC. The chair is hagarth. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:29 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:00:41 <sas> hagarth, hi
15:00:46 <hagarth> who do we have here today?
15:00:49 * lalatenduM here
15:00:54 * purpleidea ay
15:00:55 <lpabon> Hello
15:00:57 <ndevos> hello!
15:00:59 * sas here
15:01:02 * jclift_ gets coffee
15:01:11 <jclift_> But I'm here. :)
15:01:12 * kkeithley aqui
15:01:22 <hagarth> #topic AI from last week
15:01:34 * xavih here
15:01:47 <hagarth> I think xavih's patches did get reviewed over last week
15:01:54 <hagarth> we need to get that in.
15:01:57 * ira is here.
15:02:04 <dbruhn> Here
15:02:28 <hagarth> I have updated the 3.6 schedule in the planning page as per last week's discussion
15:02:41 <hagarth> #link http://www.gluster.org/community/documentation/index.php/Planning36
15:02:48 <kkeithley> xavih's patches only have +1 on release-3.4 and release-3.5. Both pass regression. Did it get +2 on master?
15:03:24 <hagarth> kkeithley: good catch, we still need to review it on master
15:03:34 <hagarth> will do that for master
15:03:35 <xavih> kkeithley: +1 on 3.4 and 3.5, not reviewed on master
15:04:08 <hagarth> lalatenduM managed to send out an email on CentOS SIG, thanks for that!
15:04:29 <hagarth> I think we are fairly covered on AIs, let us move on
15:04:44 <hagarth> #topic 3.5.0
15:05:11 <hagarth> beta3 has got some test coverage
15:05:43 <hagarth> we have encountered issues in encryption and compression - my take is to call them beta features for 3.5.0. any thoughts?
15:06:02 <jdarcy> Sounds reasonable.
15:06:13 <jdarcy> Have the issues been BZed?
15:06:18 <ira> Depends on the issues?
15:06:29 <hagarth> jdarcy: yes, they have been BZed.
15:06:39 <lalatenduM> hagarth, agree with ira
15:07:04 <hagarth> ira: I encountered one data corruption with both compression + encryption loaded
15:07:10 <lalatenduM> What if the feature is mostly broken
15:07:22 <ira> hagarth: Do you know which was the culprit?
15:07:37 <hagarth> both xlators cannot work well with most of our performance xlators
15:08:03 <ira> Are they even beta quality?
15:08:23 <hagarth> ira: I suspect that it is related to compression - https://bugzilla.redhat.com/show_bug.cgi?id=1065634
15:08:24 <glusterbot> Bug 1065634: urgent, unspecified, ---, vbellur, NEW , Enabling compression and encryption translators on the same volume causes data corruption
15:08:43 <hagarth> ira: I think they are ready to be tested out in this release
15:09:07 <lalatenduM> I also encountered an issue with compression xlator, input/output error https://bugzilla.redhat.com/show_bug.cgi?id=1065644
15:09:09 <glusterbot> Bug 1065644: unspecified, unspecified, ---, vraman, NEW , With compression translator for a volume fuse mount I/O is returning input/output error
15:09:30 <lalatenduM> however needinfo is on me now for the bug
15:09:45 <hagarth> lalatenduM: right
15:09:46 <jdarcy> For encryption, I'm a little more worried about 1065639 (Crash in nfs with encryption enabled).
15:10:11 <ira> jdarcy: Ironically, crashes worry me less than the straight out data corruptions ;)
15:10:15 <jdarcy> If encryption and compression can't be used together, well that's sad, but OK. If encryption can't be used with *NFS*, that seems more serious.
15:10:21 <jclift_> Hmmm, do we have an existing way to include new features in a release (eg the network.compression stuff), but state that it's "for preview only". Eg don't use it for production data
15:10:33 <jclift_> ?
15:10:44 <hagarth> jdarcy: encryption cannot be used with NFS - crypt blocks all access fops
15:10:47 <jdarcy> ira: In my world, data loss/corruption is way worse than a crash, because it implies *permanent* loss of access to that data.
15:11:08 <ira> jdarcy: (nod) We're in agreement.
15:11:16 <jdarcy> hagarth: I'd call that a blocker for including encryption in a release.
15:11:17 <hagarth> jclift_: we will obviously release note it
15:11:24 <jclift_> np
15:11:46 <ira> +1 on NFS + encryption.
15:11:47 <hagarth> jdarcy: even for getting some testing coverage with other access protocols?
15:11:59 * jclift_ apologises in advance if his communication is a bit crap. Very stressed.
15:12:21 <hagarth> other access protocols being fuse mostly :)
15:12:37 <ira> I assume libgfapi also ;)
15:12:58 <hagarth> right, libgfapi as well
15:12:59 <kkeithley> fuse=gluster=gfapi
15:13:02 <jdarcy> hagarth: My gut says yes, even then. We just don't have enough control over that, and if they *just once* try to access that volume over NFS then they risk screwing it up even for subsequent FUSE access.
15:13:09 <kkeithley> versus NFS
15:13:40 <hagarth> I think the crash in bz 1065639 is due to a graph cleanup change which has been added in 3.5
15:13:46 <ira> How much of our client access is over NFS?
15:13:58 <jdarcy> hagarth: OTOH, if it's just a locking problem, then they wouldn't actually be losing data.
15:14:07 * jdarcy waffles. Send syrup.
15:14:23 <purpleidea> jdarcy: canada has syrup. you send waffles
15:14:24 <hagarth> init() failed with encryption and the resulting graph cleanup caused the process to crash.
15:14:52 <hagarth> ira: for most non-linux, non-windows use cases, users fall back to NFS.
15:15:32 <jdarcy> I'd feel a bit more comfortable if we could temporarily *enforce* mutual exclusivity of NFS and encryption.
15:15:32 <ira> hagarth: 1% 5% 10% 25% 50% 90%?
15:15:33 <hagarth> ok, here's a quick poll - should we not package encryption given that it doesn't work over NFS? options:
15:16:06 <purpleidea> hagarth: rephrase to remove the not?
15:16:18 <hagarth> purpleidea: here goes
15:16:34 <hagarth> should we package encryption since it does not work over nfs?
15:16:36 <hagarth> options:
15:16:57 <hagarth> 1. yes 2. No, let us get some testing coverage by bundling it in a release
15:17:31 <lalatenduM> 3. package it but document it as broken
15:17:43 <hagarth> jdarcy: crypt blocks access() and sends back an error to the client, hence operations from nfs clients fail when crypt is loaded.
15:17:45 <lalatenduM> does the 3rd option make sense
15:17:49 <kkeithley> er, where do we want encryption with NFS? We're not talking about NFS+krb5p
15:18:02 <hagarth> lalatenduM: 2 and 3 seem to be related
15:18:10 <purpleidea> 3
15:18:14 <sas> 2
15:18:17 <ira> 2. I don't want to corrupt someone's data.
15:18:32 <jdarcy> I vote yes, package it, but let's make sure front-line folks are prepared for calls about NFS access hanging.
15:18:40 <lalatenduM> hagarth, agree if option 2 includes documentation to declare it broken
15:18:47 <social> Isn't the fix just to require nfs.disable option on volume and still go with option 2?
15:18:57 <lalatenduM> 3
15:18:59 <ira> One sec... BOTH options result in release.
15:19:05 <ira> MU!
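
(For reference: a minimal sketch of the per-volume settings under discussion, assuming GlusterFS 3.5 CLI behaviour. nfs.disable is the standard option for turning off the built-in NFS server; the keys used here for the crypt and on-wire compression translators, features.encryption and network.compression, are illustrative and should be checked against the 3.5 option table. The volume name is hypothetical.)

    # Hypothetical workaround matching the NFS/crypt mutual-exclusion idea discussed below:
    # stop serving the volume over Gluster NFS before enabling the crypt translator.
    gluster volume set myvol nfs.disable on
    gluster volume set myvol features.encryption on   # assumed option key for the crypt xlator
    gluster volume set myvol network.compression on   # assumed option key for on-wire compression
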
15:19:05 <jclift_> ira: OSX users use NFS a fair bit, depending on their environment (can be a pain)
15:19:08 <hagarth> hmm, I think I screwed up on the options :)
15:19:33 <jdarcy> social: I would be in favor of that too.
15:20:00 <ira> I'd release it if it forced the issue.
15:20:02 <hagarth> but we seem to be trending towards packaging with adequate documentation + warnings & possibly disabling nfs when crypt is enabled
15:20:05 <ndevos> kkeithley: I think like this: client <- NFSv3 -> nfs-server / glusterfs-client <- encrypted -> glusterfs-server
15:20:17 <hagarth> ndevos: that is right
15:20:18 <lalatenduM> hagarth, yes
15:20:28 <lalatenduM> hagarth, I mean thats right
15:20:35 <ndevos> isnt it just safer to enforce nfs.disable=on when encryption is enabled?
15:20:43 <hagarth> ndevos: right
15:21:01 <jdarcy> I think it would be fairly easy for the crypt translator to check whether it's running in the same graph as NFS.
15:21:17 <social> also you could prepare whole documentation and say it's demo and it might get enabled later if the nfs gets fixed
15:21:43 <lalatenduM> social, yeah agree
15:21:53 <jclift_> We shouldn't rely on people just reading docs. Just saying.
15:21:53 <hagarth> social, jdarcy: agree, let us ensure that we don't give users a chance to have nfs and crypt work together
15:22:00 <hagarth> till issues are addressed
15:22:02 <jclift_> Yeah
15:22:18 <jdarcy> What about compression?
15:22:44 <hagarth> I think the same applies to compression
15:22:54 <hagarth> IMO it is not clean enough to be declared GA
15:23:03 <jdarcy> As long as there's potential for data corruption *with* encryption, I'll bet there's potential *without* as well.
15:23:41 <jdarcy> So I'd say turn it off until we better understand that corruption.
15:23:43 <purpleidea> jdarcy: ^^ then it should be nacked
15:24:14 <hagarth> none of these translators should be enabled by default
15:24:36 <hagarth> I will get some more tests going with compression
15:24:40 <jdarcy> Not much information in https://bugzilla.redhat.com/show_bug.cgi?id=1065634 unfortunately.
15:24:42 <glusterbot> Bug 1065634: urgent, unspecified, ---, vbellur, NEW , Enabling compression and encryption translators on the same volume causes data corruption
15:24:54 <jdarcy> Also unknown: how much does compression even help?
15:25:06 <ndevos> yes, dont ship anything (without a BIG warning) where adventurous users suddenly have some corruption
15:25:20 <hagarth> mea culpa - will try to add some more detail to that bz
15:25:31 <hagarth> jdarcy: that needs to be quantified as well
15:25:37 <hagarth> ndevos: +1
15:25:54 <overclk_> jdarcy: with zlib .. not much.
15:26:00 <lalatenduM> ndevos, +1
15:26:35 <hagarth> ok, let us take a call on compression in our next meeting or over a ML discussion after we get some more detail.
15:27:10 <lalatenduM> hagarth, how much time do we have for the 3.5 release?
15:27:23 <social> Will there be a possibility to drop the warning and say it's now safe in a minor release, or will it have to wait in the off mode till the next major?
15:27:29 <hagarth> we also had a 3.5 test week in BLR Red Hat office last week, lalatenduM - would you want to provide some updates from there?
15:27:41 <lalatenduM> hagarth, sure
15:28:06 <hagarth> lalatenduM: there is a memory leak reported on gluster-devel, want to understand that better and get convinced myself that geo-rep & quota work fine
15:28:07 <lalatenduM> We had organised a test day for 3.5beta3 and a hackathon
15:28:20 <hagarth> once those are addressed, we can firm up on a release date for 3.5
15:28:26 <lalatenduM> around 20 people turned up
15:28:31 <kkeithley> my experience years ago with lbx in X11 is that compression was a win only on low speed links and some of the win was probably lost due to the cpu (50MHz 486 and Pentium Pro days) being a bottleneck.
15:28:33 <lalatenduM> hagarth, yeah
15:29:18 <ira> kkeithley: for a wan link, I can see it, especially if we are using high speed compressors like lz4...
15:29:19 <lalatenduM> There are 21 bugs logged on the qa day and 2 code patches and 1 doc patch
15:29:51 <kkeithley> lalatenduM: nobody tested 3.4.3alpha1 on the test day?
15:30:00 <hagarth> ira: for most client - server transfers we normally do not go over a wan link
15:30:13 <lalatenduM> kkeithley, nope, maybe we missed that one
15:30:15 <ira> kkeithley: georep.
15:30:50 <ira> kkeithley: Otherwise, I pretty much agree... little use.
15:30:53 <social> kkeithley: we are running most of 3.4.3 patches already on production (really needed the memleak fix)
15:31:14 <hagarth> lalatenduM: thanks for the update
15:31:22 * lalatenduM not sure if he is interrupting the other discussion
15:31:26 <lalatenduM> :)
15:31:27 <kkeithley> social: which BZ for the memleak?
15:31:31 <hagarth> we are moving to 3.4, so let me update the topic :)
15:31:35 <hagarth> #topic 3.4
15:31:59 <social> kkeithley: sec I'll just open our git and paste the stuff we have
15:32:07 <hagarth> social: are you referring to the libxattr mem leak?
15:32:25 <hagarth> s/libxattr/libxlator/
15:33:00 <social> BUGS: 977497 841617 1057846 971805
15:33:01 <hagarth> seems to be bz 841617
15:33:37 <social> we better pulled in stuff that seemed urgent as we won't have another window for some time
15:34:19 <jclift_> kkeithley: I used to use the compression in X11 remotely (eg to data centers in other countries), because it made it workable. Seriously annoyed they dropped it because it made things unworkable. :(
15:34:20 <kkeithley> I already have it as a (candidate) blocker in the tracker bz
15:34:55 <hagarth> kkeithley: what would be our preferred mode for tracking - backport wishlist or the tracker bz?
15:35:14 <kkeithley> tracker bz I suppose.
15:35:15 <social> I'd note that we'd love to see this fixed 1063832 - it's annoying
15:35:23 * ndevos prefers a tracker, that gets updated automatically
15:35:53 <hagarth> right, we probably should add a pointer to this bz from the backport wishlist wiki page
15:36:34 <hagarth> ok, anything more on 3.4?
15:37:03 <hagarth> social: please feel free to update the tracker bz or let kkeithley know
15:37:09 <hagarth> moving on
15:37:16 <hagarth> #topic rpm packaging changes
15:37:18 <kkeithley> looks like the fix is in master, needs backport to 3.4
15:37:35 <hagarth> ndevos, jclift_: do we need any action on rpm packaging?
15:37:44 <jclift_> Yeah
15:37:58 <ndevos> hagarth: no not really ... oh well, I'll leave that to jclift_
15:38:20 <hagarth> jclift_: what are the changes that we need now?
15:38:51 <jclift_> Niels has proposed to split out glupy and other less used translators into a glusterfs-extra-xlators rpm
15:39:02 <ndevos> (glusterfs-server and glusterfs-geo-replication stay as it currently is)
15:39:10 <hagarth> ndevos: right
15:39:28 <jclift_> AFAIK, it's so there no longer needs to be a dependency on Python for the base glusterfs rpm
15:39:40 * jclift_ isn't against it
15:40:02 <hagarth> jclift_: sounds like a good idea, shall we consider this for 3.6?
15:40:13 <jdarcy> I'm generally in favor of splitting out parts that add extra dependencies.
15:40:18 <jclift_> I'm more wondering if we want it for 3.5 (Niels is more for this), or for 3.6
15:40:34 <hagarth> jdarcy: +1
15:40:37 * jdarcy hates it when other packages pull in dependencies specific to features he doesn't even use.
15:40:45 <jclift_> I'm kind of thinking it's a bit late in the cycle for 3.5, but I've already written the code to do it, and it's in Gerrit waiting for review
15:41:06 <ndevos> so, glupy in the glusterfs (base) package introduces a dependency on Python - thats the reason to put it in glusterfs-extra-xlators
15:41:07 <jclift_> So we could literally do it today (I need to fix a glupy test first though, I realised earlier)
15:41:21 <hagarth> jclift_: I am open to 3.5 as well. Let us review it and take it further?
15:41:26 <ira> Do you want to break out glupy on its own because it hauls in python?
15:41:39 * jclift_ shrugs
15:41:50 <hagarth> #action /me to consider new rpm packaging for 3.5
15:41:59 <ira> mark it as a python module...
15:42:03 <jclift_> Honestly I'm not bothered. Glupy isn't the only thing that uses python
15:42:03 <jdarcy> ndevos: I'd actually say we should have gluster-python to include the gfapi bindings as well.
15:42:18 <lalatenduM> kkeithley, should put samba hook script pkg also in 3.5?
15:42:20 <jclift_> Heh. I suggested something like this. :)
15:42:29 <lalatenduM> s/should/should we/
15:42:35 <purpleidea> what machines running gluster don't already have python installed?
15:42:38 <ndevos> jdarcy: that is currently in glusterfs-api, together with libgfapi
15:42:59 <ira> You shouldn't get python unless you ask for it here... ;)
15:43:01 <jclift_> purpleidea: Ndevos mentioned that some minimal cloud images are able to exclude it as part of their slimming down
15:43:09 <purpleidea> jclift_: ah
15:43:09 <jdarcy> purpleidea: It's actually not just python, but python-devel as a build dependency too.
15:43:16 <ndevos> purpleidea: it is intended to keep images for cloud environments small, these dont seem to have python installed
15:43:21 <kkeithley> we have samba4 rpms on download.g.o for 3.4.2+
15:44:05 <jclift_> In short, it seems like re-arranging the rpms isn't a problem
15:44:16 <jclift_> We just need to figure out how, and if for 3.5/3.6
15:44:20 <hagarth> jclift_: right, let us get it in soon if we need it for 3.5
15:44:24 <ira> I'd agree with that.
15:44:45 <hagarth> ok, anything more on rpm packaging?
15:44:45 <jdarcy> If I saw a patch to split that stuff out, I'd +1 it.
15:44:51 <purpleidea> +1 it would be great to get this in so people can use it sooner. it's a great project
15:44:58 * jclift_ points out that even for 3.5, we need _something_ for glupy which has to have gluster.py renamed anyway
15:45:15 <jclift_> (eg glupy is currently broken in release-3.5 and master branches)
15:45:16 <kkeithley> geo-rep has a lot of python in it. I guess those cloud environments just aren't going to do geo-rep.
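
(For reference: a minimal sketch of the kind of spec-file split being proposed; the actual change is the patch later linked at http://review.gluster.org/#/c/6979/. The subpackage name comes from the discussion above, but the summary text, file paths, and dependency lines here are assumptions, not the real glusterfs.spec contents.)

    # Hypothetical excerpt from glusterfs.spec; paths and dependencies are illustrative.
    %package extra-xlators
    Summary:  Extra GlusterFS translators that are not commonly used (e.g. glupy)
    Requires: %{name} = %{version}-%{release}
    Requires: python

    %description extra-xlators
    Rarely used translators split out of the base package so that a plain
    glusterfs install no longer depends on Python.

    %files extra-xlators
    %{_libdir}/glusterfs/%{version}/xlator/features/glupy.so
    %{_libdir}/glusterfs/%{version}/xlator/features/glupy/
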
15:45:28 <jclift_> kkeithley: Yeah, the syncdaemon
15:45:32 <jdarcy> kkeithley: Apparently not.
15:45:43 <hagarth> kkeithley: we probably should re-write syncdaemon in C too ;)
15:45:56 * ndevos has python on all his systems...
15:46:07 <lpabon> <cough> erlang </cough>
15:46:19 <jdarcy> There used to be ways to turn Python blobs into standalone executables, not sure if any of them still work.
15:46:36 <jclift_> jdarcy: Here's a patch to split Glupy into a new glusterfs-extra-xlators rpm: http://review.gluster.org/#/c/6979/
15:46:50 <hagarth> let us move on folks - we seem to have a lot more topics than time permits us today.
15:46:58 <hagarth> #topic 3.6
15:46:59 <jclift_> jdarcy: It's failing on glupy.t regression test though. /me will fix today.
15:47:02 <jclift_> np
15:47:28 <kkeithley> I'm surprised someone hasn't written a python front end for gcc.
15:47:45 <hagarth> as noted earlier, I have moved the 3.6 schedule as per last week's meeting.
15:47:50 <purpleidea> kkeithley: http://gcc.gnu.org/wiki/PythonFrontEnd
15:48:07 <lalatenduM> kkeithley, http://gcc.gnu.org/wiki/PythonFrontEnd
15:48:08 <hagarth> if you have a feature to submit, please do so by the feature proposal deadline
15:48:14 <kkeithley> yup, I just googled it myself and found it
15:48:16 <lalatenduM> purpleidea, you beat me :)
15:48:19 <hagarth> #link http://www.gluster.org/community/documentation/index.php/Planning36
15:48:24 <purpleidea> lalatenduM: hehe
15:48:42 <hagarth> we also have had snapshot patches that landed on master for review yesterday
15:49:02 <hagarth> those patches almost DDOS'd our build infra :)
15:49:03 <jclift_> Didn't thousand-node-glusterd get moved to 4.x?
15:49:03 <lalatenduM> hagarth, cool :)
15:49:25 * lpabon dreams of glusterfs all in python. no memory allocations, buffer overruns, classes...
15:49:26 <social> hagarth: we have geo-rep on cloud >.>
15:49:29 <hagarth> jclift_: it did, we can update the page after we have a meeting in 2 weeks
15:49:42 <purpleidea> lpabon: +1
15:49:50 <lalatenduM> lpabon, haha
15:49:58 <hagarth> rjoseph updated the whole of snapshot feature into a single patch today
15:50:03 <lalatenduM> lpabon, what about performance :)
15:50:14 <lalatenduM> hagarth, that's nice
15:50:22 <hagarth> we need some help in reviewing that patch
15:50:29 <jdarcy> Kudos to rjoseph for that. :)
15:50:31 <purpleidea> lalatenduM: something, something, only a linear difference + rewrite slow parts in c
15:50:41 <jclift_> np
15:50:42 <hagarth> #link http://review.gluster.org/7128
15:50:58 <lpabon> concurrent python is fast enough (but i don't want to steer the conversation) we can take it offline :-) .. (fyi look at openstack swift)
15:51:09 <hagarth> I will start a mailing list discussion on that - we probably can divide and conquer
15:51:18 * lalatenduM will look for probable Coverity complaints :)
15:51:30 <hagarth> lalatenduM: thanks!
15:51:55 <hagarth> so one more topic that spans both 4.0 and 3.6
15:51:56 <lalatenduM> lpabon, yeah another perspective :)
15:52:04 <kkeithley> hagarth: so no more 50+ patch patchbombs in gerrit?
15:52:35 <hagarth> kkeithley: no thankfully, I just have to figure out the magic gerrit gsql query to abandon those patches
15:52:52 <hagarth> #action hagarth to start a thread on review of snapshot patch
15:52:53 <jclift_> DROP TABLE
15:53:02 <hagarth> jclift_: :)
15:53:05 <jclift_> :)
15:53:15 <hagarth> we have revamped stripe proposal in 4.0
15:53:36 <hagarth> should we consider getting that into 3.6 as our current implementation is mostly unusable?
15:54:00 <jclift_> Damn. Didn't know that.
15:54:07 <hagarth> this scheme also has the benefit of offloading something from 4.0
15:54:47 <hagarth> If we don't have immediate ideas, I can follow up on this with a broader discussion on gluster-devel
15:55:26 <hagarth> that seems to be it. #action hagarth to start a mailing list discussion on striping for 3.6
15:55:27 <jdarcy> hagarth: I'd favor bringing it forward if you think it's feasible
15:55:54 <hagarth> jdarcy: thanks, let us try evolving more details on the ML thread
15:55:55 <jclift_> Prob better to discuss on mailing list. :)
15:56:04 <lalatenduM> jclift_, +1
15:56:14 <hagarth> #topic open discussion
15:56:16 * jdarcy wasn't aware that the current implementation was mostly unusable. Can someone please send a short email explaining why?
15:56:40 <jdarcy> Our good buddies in Ceph-land have erasure coding and tiering as of yesterday.
15:56:43 <hagarth> jdarcy: sure, will do or have a2 follow up on that.
15:56:45 <lpabon> I have a topic: https://bugzilla.redhat.com/show_bug.cgi?id=1067059 - Unit tests
15:56:46 * lalatenduM agrees with jdarcy
15:56:50 <glusterbot> Bug 1067059: low, unspecified, ---, lpabon, ASSIGNED , Support for unit tests in GlusterFS
15:57:10 <purpleidea> jdarcy: btw, thank you for your email. i haven't had time to go through it in detail yet. of course patches are always welcome too! but now i have some homework.
15:57:17 <hagarth> lpabon: shall we queue it for the last?
15:57:21 <lpabon> sure
15:57:33 <jclift_> For discussion. Jenkins.
15:57:46 <jdarcy> lpabon: Should the xlator-test framework be considered part of that, or separate?
15:57:54 <hagarth> jclift_: go ahead with your points for discussion on Jenkins
15:57:59 <lpabon> i think of it as "phase 2"
15:58:02 <lpabon> of that project
15:58:11 <lpabon> ^^ @jdarcy
15:58:21 <jdarcy> purpleidea: I actually have a much more developed Python script (written on the plane) containing the recursive implementation.
15:58:39 <purpleidea> jdarcy: ! sweet... can you send it to me?
15:58:43 * lalatenduM looking forward to unittests in GlusterFS
15:58:47 <jclift_> Re: Jenkins. In right-now terms, we seem to be hitting lots of regression failures in master, on rpm.t.
15:58:52 <purpleidea> jdarcy: s/sweet/totally awesome/
15:59:09 <hagarth> jclift_: right, have we debugged that failure?
15:59:22 <jclift_> Re: Jenkins. Someone that knows rpm.t should be able to figure it out pretty quickly. I don't think anyone's looked into it yet.
15:59:31 <kkeithley> I'll take a look
15:59:35 <jclift_> Tx
15:59:42 <hagarth> #action kkeithley to look into rpm.t failure
15:59:43 <jdarcy> Thank you, kkeithley.
15:59:56 <kkeithley> been meaning to, just haven't gotten there yet
15:59:58 <jclift_> kkeithley: There are two example failing URLs in the Etherpad that might be helpful
16:00:13 <hagarth> we need to scale out jenkins - maybe I'll push this topic for the next meeting.
16:00:13 <kkeithley> yup
16:00:29 <jclift_> hagarth: Yeah, it'd be useful to have JM around
16:00:34 <hagarth> topic 2. bug triage guidelines
16:00:38 <jclift_> He can likely do a better call to action, etc.
16:00:44 <jdarcy> For scaling Jenkins (kinda related), I'm willing to pay for a couple of extra build servers in DigitalOcean or similar out of my own pocket as long as the setup's not too onerous.
16:00:54 <hagarth> lalatenduM: would you want to update on bug triaging?
16:00:59 <lalatenduM> hagarth, regarding bug triage sas is interested to do bug triage for GlusterFS, I think JM will be happy to hear that
16:01:01 <ira> jdarcy: It shouldn't come to that....
16:01:11 <purpleidea> hagarth: i'm happy to help scaling jenkins on the sysadmin side if there is funding for it
16:01:17 <lalatenduM> sas, welcome to qe team :)
16:01:20 <kkeithley> fwiw, eng-ops is starting to rack 20+ servers from the old Sunnyvale lab (although nothing in racks yet.) Once they go in we can add them as jenkins slaves.
16:01:28 <hagarth> jdarcy, purpleidea: thanks, let us figure out more when JM is around.
16:01:29 <sas> lalatenduM, thanks :)
16:01:34 <jdarcy> kkeithley: Ah, excellent idea.
16:01:37 <lpabon> jdarcy: I have access to Rackspace from johnmark, I can spin up N number of VM nodes
16:01:39 <jclift_> jdarcy: (we should be able to put out a "call to action" and get donated resources. lets discuss next meeting?)
16:01:42 <hagarth> sas: welcome aboard!
16:01:44 <kkeithley> add some of them
16:01:51 <lalatenduM> hagarth, will come up with the doc for bug triage
16:01:51 <ira> lpabon's idea seems good ;)
16:01:53 <sas> hagarth, yes!!
16:01:54 <jdarcy> Can we repurpose some of the heka machines as a temporary stopgap? Should we?
16:02:04 <lalatenduM> lpabon, cool
16:02:06 <hagarth> lalatenduM: adding an AI for you
16:02:12 <lalatenduM> hagarth, sure
16:02:17 <lpabon> That is why we use 4 for gluster-swift
16:02:23 <hagarth> #action lalatenduM to set up bug triage process page in wiki
16:02:30 <lpabon> We can also spin up CentOS and Fedora VMs
16:02:38 <lpabon> and other linuxes :-)
16:02:41 <kkeithley> I think people are using the heka machines. At least they're reserved in beaker.
16:02:41 <hagarth> lpabon: certainly
16:03:04 <jdarcy> I'll take an action item to talk to my friends @RAX about developer/open-source discounts.
16:03:22 <lpabon> jdarcy: we don't need at , afaik, we have an account
16:03:29 <lpabon> s/at/to/
16:03:29 <hagarth> since we seem to be running out of time, shall we just have the unit test discussion and carry over the other "open topics" to next meeting?
16:03:35 <jclift_> Just a general note, the machines that get set up need to have a publicly accessible interface.
16:03:36 <kkeithley> and we do have one jenkins slave available now, running NetBSD, but it's not in use yet.
16:03:43 <jdarcy> lpabon: Cheaper is still cheaper. :)
16:03:51 <jclift_> So stuff behind corp firewall with no way inside isn't workable for this
16:03:53 <hagarth> kkeithley: will sync up with you on NetBSD later this week
16:04:05 <lpabon> jdarcy: good point
16:04:06 <kkeithley> ssh reverse tunnels ftw
16:04:13 <hagarth> I take that as a yes to my question ;)
16:04:22 <jclift_> hagarth: Can we extend the meeting by 20 mins?
16:04:24 <jdarcy> kkeithley: I guarantee InfoSec would come down on us for that. Don't ask how I know.
16:04:29 <kkeithley> and eng-ops is going to work with IT on an "official" solution to that
16:04:34 <hagarth> jclift_: I am fine
16:04:45 <jclift_> np here either.
16:04:51 <hagarth> Can most of us stay back for 20 more minutes?
16:04:58 <kkeithley> eng-ops as much as told me to use reverse tunnels.
16:05:02 <purpleidea> yes
16:05:10 <jdarcy> kkeithley: That's eng-ops. Different group.
16:05:14 <jclift_> kkeithley: Lets discuss in -devel or somewhere else?
16:05:35 <jclift_> Next topic?
16:05:38 <lpabon> hagarth: i can continue on -devel if that is ok
16:05:46 <hagarth> lpabon: that works too, thanks.
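
(For reference: a minimal sketch of the reverse-tunnel approach kkeithley mentions, using standard OpenSSH flags. The host names, user names, and port are hypothetical, and whether such tunnels are permitted is a separate question, as noted in the discussion above.)

    # Run from a build slave behind the firewall: publish the slave's sshd as
    # port 2222 on a publicly reachable Jenkins host, so the master can reach it.
    ssh -N -R 2222:localhost:22 jenkins@build.example.org
    # The Jenkins host can then connect back with: ssh -p 2222 builder@localhost
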
16:05:57 <hagarth> #topic Mailing list vs IRC for "binding" stuff
16:06:01 <hagarth> jclift_: all yours
16:06:05 <jdarcy> I don't mind extending a bit BTW, but still need to shower for 9am PST keynote.
16:06:20 <ira> Along with this: Can we get all the tests run on build.gluster.org checked into the tree?
16:06:27 <kkeithley> jdarcy: I don't need to ask, I can imagine.
16:06:42 <jclift_> Yeah. I'm just a bit confused about if stuff discussed on IRC but not on the mailing-list is considered "good enough" for major changes
16:06:46 <jclift_> eg rpm packaging
16:06:53 <jclift_> By IRC, I mean -devel, not here.
16:07:02 <hagarth> jclift_: my take would be this
16:07:26 <jclift_> Recent example was seeing a patch to merge geo-rep into gluster-server. I'm not against it, but was surprised at it only having been discussed on -devel.
16:07:36 <hagarth> irrespective of where we have discussions
16:07:39 <jclift_> (the IRC channel)
16:07:51 <hagarth> it would be good to communicate the proposal and the rationale on gluster-devel
16:07:58 <jclift_> (mailing list?)
16:08:32 <jclift_> "on gluster-devel", the _mailing list_ yeah?
16:08:35 <kkeithley> but the patch was reviewed, and much discussion ensued; 'though not much was captured anywhere
16:08:37 <hagarth> jclift_: yes
16:08:39 <jdarcy> hagarth: Agreed. It's reasonable to assume that people can come to IRC for a discussion *if they're notified on the ML*, but not that they hang out there to see everything.
16:09:07 <jclift_> No worries. Am just clarifying, because I wasn't sure.
16:09:27 <jdarcy> This is especially important because of time-zone issues.
16:09:39 <hagarth> should we just lay down this rule somewhere so that newcomers understand better?
16:09:49 <jclift_> We had significant problems with a previous project (Aeolus) having stuff discussed only on IRC, which excluded a lot of people that were affected/involved in things.
16:09:49 <jdarcy> Topic in -devel?
16:10:16 <hagarth> jdarcy: right, who can set the topic in IRC -devel?
16:10:35 * sas leaves now, will sync up with recorded chats
16:10:51 <hagarth> sas: thanks
16:10:53 <jclift_> hagarth: yeah, we should probably write this rule/guideline down somewhere
16:11:01 <hagarth> jclift_: AI for you? ;)
16:11:06 <jclift_> Community Standards or Governance or something
16:11:08 * lpabon going to discuss unit tests on -devel
16:11:11 <jclift_> Bleargh
16:11:14 <jclift_> Yeah, I suppose
16:11:16 <jclift_> ;D
16:11:27 <jclift_> Actually, how about we JM it? :D
16:11:44 <hagarth> #action jclift_ and johnmark to update guidelines on community standards
16:11:47 <jdarcy> We might need a bot to maintain that.
16:11:49 <jclift_> Heh
16:11:50 <jclift_> ;)
16:12:07 <hagarth> we probably could program glusterbot :)
16:12:11 <hagarth> #topic Policy on build breakers in git
16:12:19 <jclift_> Heh, me again.
16:12:21 <purpleidea> jdarcy: someone likes my bot! https://github.com/purpleidea/jmwbot/ free for all to make your own
16:12:49 <jclift_> Again a clarification thing. There's a change in git master that's causing make glusterrpms to fail on F19/F20/EL7, etc.
16:12:59 * jdarcy LOLs @ JMWbot
16:13:03 <hagarth> jclift_: My take would be this -
16:13:03 <jclift_> It's been hanging around for days. A fix has been proposed.
16:13:27 <hagarth> 1. Let the build breaker hang around if a patch is available.
16:13:35 <hagarth> 2. If not, we revert it.
16:13:50 <jdarcy> +1
16:13:51 <jclift_> k, that's good clarity
16:14:07 <hagarth> #topic Patch queue cleanup
16:14:10 <ira> hagarth: 3, ask how it got there, so we don't see it again? ;)
16:14:25 <hagarth> ira: +1 :)
16:14:33 <jclift_> ira: No automatic testing of things on stuff other than CentOS 6.x atm.
16:14:34 <hagarth> jclift_: your topic again :D
16:14:52 <hagarth> jclift_: right, scaling jenkins instances will definitely help here
16:14:58 <jclift_> Yeah. It's a concern that we have outstanding open changes for review in Gerrit dating back to May 2012.
16:14:58 <ira> jclift_: Doesn't even sound hard to countermeasure ;)
16:15:03 <jdarcy> hagarth: I suggest that we send out a call to update or abandon patches over 1yo.
16:15:11 <hagarth> jclift_: +1
16:15:16 <jclift_> +1
16:15:18 <hagarth> jdarcy: right, will do that.
16:15:22 <jdarcy> I know the May 2012 one (which is mine) is so stale that I'd have to redo it anyway.
16:15:33 <hagarth> I also have been thinking of auto-abandoning aged patches
16:15:39 <jclift_> Yeah, some of the stuff looks potentially useful, but no idea if it's relevant, etc. Such a call should help cull it.
16:15:42 <hagarth> via a gerrit script
16:15:49 <jdarcy> It's not like we totally lose the history. Those patches are still there.
16:15:57 <hagarth> jdarcy: right
16:16:12 <jdarcy> IMO we should auto-prune anything that's greater than 1yo *and* rfc.
16:16:27 <hagarth> #action hagarth to send out a note on abandoning patches over 1yo
16:16:32 <hagarth> jdarcy: +1
16:16:33 <jdarcy> If it has a bug number, perhaps we should treat that differently.
16:16:37 <kkeithley> I'm wonder what we want to do with all of Amar's outstanding patches?
16:16:44 <kkeithley> s/wonder/wondering/
16:16:54 * lalatenduM needs to leave now, will catch the log later
16:17:03 <hagarth> Amar is still active in the community - some of us can inherit them if he happens to be busy
16:17:23 <jclift_> Let's ask him if he can kill the ones he's not interested in finishing, etc (as per the note)
16:17:24 <jdarcy> Who's the best person to solicit Amar's input?
16:17:37 <hagarth> I still have some thoughts on review discipline - will take that up in a subsequent meeting
16:17:44 <hagarth> jdarcy: I can reach out to Amar
16:18:17 <hagarth> ok, we seem to have come to the end of our listed topics
16:18:24 <hagarth> one more quick topic
16:18:38 <hagarth> I don't think I will be able to attend next week's meeting at this hour
16:18:47 <ira> I doubt I will.
16:18:59 <ira> (likely the same reason.)
16:19:05 <hagarth> can anybody run the meeting or should we cancel next week's instance?
16:19:07 <jdarcy> hagarth: To amend the previous suggestion, let's queue up an IRC meeting to decide disposition of old patches where the authors ignored the call-to-action.
16:19:26 <hagarth> jdarcy: right, noted.
16:19:35 <kkeithley> aren't about half the people here now going to be unavailable next week for the same reason?
16:19:41 <jclift_> hagarth: Lets re-schedule it?
16:19:46 <ira> That's my guess.
16:19:48 <hagarth> kkeithley: mostly yes
16:19:57 <hagarth> jclift_: most of us will be busy next week
16:20:00 <kkeithley> wish I had that excuse
16:20:01 <jdarcy> I'll be available, but it might not be worth having the meeting without quorum.
16:20:05 <jclift_> ahhh, k.
16:20:12 <jclift_> yeah
16:20:26 <hagarth> ok, we seem to be trending towards a cancellation. Let us do that and meet again in 2 weeks
16:20:30 <jclift_> Lets punt to the weel after then
16:20:33 <jclift_> week
16:20:44 <jclift_> Last topic?
"< lpabon> I have a topic: https://bugzilla.redhat.com/show_bug.cgi?id=1067059 - Unit tests" ? 16:20:47 <glusterbot> Bug 1067059: low, unspecified, ---, lpabon, ASSIGNED , Support for unit tests in GlusterFS 16:21:04 <hagarth> I thought lpabon was about to send out a mail on gluster-devel 16:21:09 <lpabon> i can discuss now or in gluster-devel 16:21:11 <jclift_> Oops, missed that 16:21:14 <ira> -devel. 16:21:14 <jclift_> I'm ok either way 16:21:17 <lpabon> sure 16:21:20 <jclift_> np :) 16:21:21 <lpabon> -devel it is 16:21:21 <hagarth> lpabon: gluster-devel would be better 16:21:35 <hagarth> thanks everyone for staying back, talk to you all in 2 weeks. 16:21:41 <hagarth> #endmeeting