#gluster-meeting log: Weekly GlusterFS Community Meeting
12:07:33 <JustinClift> #startmeeting Weekly GlusterFS Community Meeting
12:07:33 <zodbot> Meeting started Wed Sep 24 12:07:33 2014 UTC.  The chair is JustinClift. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:07:33 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
12:07:42 <JustinClift> kshlm: Yeah, I'm way behind some threads too
12:07:47 <JustinClift> k, Roll call
12:07:59 <Humble> kshlm, we discussed that in the last meeting iirc
12:08:17 * kshlm is here.
12:08:23 * Humble here
12:08:24 * lalatenduM is here
12:08:32 <kshlm> Humble, I wasn't part of the last meeting either.
12:08:37 <Humble> ok :)
12:08:45 * itisravi is here
12:08:45 <JustinClift> overclk: Are you "here" for the meeting? ;)
12:08:57 * overclk is here
12:08:58 <JustinClift> Cool, at least it's not just me. :D
12:09:05 <Humble> I think till November we are planning to continue at the same time..
12:09:13 <JustinClift> #topic Action items from the last meeting
12:09:19 <JustinClift> "ndevos and hagarth to discuss on gluster-devel (by 24th Sept) the outstanding 3.5.3 blocker BZ's, and which ones to move to 3.5.4"
12:09:28 <JustinClift> I don't remember seeing that ^
12:09:32 <JustinClift> Anyone else know?
12:09:41 <Humble> neither me
12:09:56 <itisravi> nope :(
12:10:03 * JustinClift has a feeling this will be a short meeting
12:10:14 <JustinClift> k, it's marked as still TBD
12:10:16 <JustinClift> "davemc to get the GlusterFS Consultants and Support Company's page online"
12:10:26 <JustinClift> That's not there yet either
12:10:54 <JustinClift> Soumya Deb has been sharing some useful thoughts on overall web strategy
12:11:09 <Humble> yeah, discussions are going on about that
12:11:13 * JustinClift would prefer that to be on gluster-infra instead of in private email
12:11:18 <hagarth> JustinClift: +1
12:11:24 <JustinClift> s/prefer/strongly prefer/
12:11:27 <Humble> JustinClift, it will be in gluster infra soon
12:11:37 <Humble> just putting together a decent draft before taking it there
12:11:44 <hagarth> Humble: I think we should extend an invite to Soumya Deb to these meetings
12:11:52 <Humble> I will let him know
12:12:09 <JustinClift> Humble: tx :)
12:12:14 <Humble> np :)
12:12:28 <JustinClift> #action Humble to invite Soumya Deb to the Weekly GlusterFS Community Meetings
12:12:46 <JustinClift> hagarth: how did "ndevos and hagarth to discuss on gluster-devel (by 24th Sept) the outstanding 3.5.3 blocker BZ's, and which ones to move to 3.5.4" go?
12:12:52 <JustinClift> Still TBD?
12:13:02 <hagarth> JustinClift: yes
12:13:46 <JustinClift> k.  What's a realistic eta for this?  1 week?  2 weeks? x weeks? :)
12:14:06 <hagarth> JustinClift: I expect this week .. but will check with ndevos
12:14:29 <JustinClift> k. Won't attach an eta to it then
12:14:54 <JustinClift> The GlusterFS Consultants and Support Company page isn't having much luck, is it?
12:15:20 <JustinClift> Humble: Should we wait for Soumya Deb for that?
12:15:38 <hagarth> JustinClift: might be a good idea to have davemc and Soumya Deb work together on this one
12:15:40 <Humble> depends.
12:16:19 <JustinClift> k.  I'll just mark it as TBD and we can figure it out later
12:16:20 <Humble> I can pass this to Soumya and discuss..
12:16:34 <JustinClift> Humble: Please do
12:16:39 <Humble> k..
12:16:51 <JustinClift> #action Humble to discuss the GlusterFS Consultants and Support Company page with Soumya
12:16:57 <JustinClift> "JustinClift to retrigger regression runs for the failed release-3.4 CR's"
12:17:05 <JustinClift> Done.  They failed too (so, non-spurious)
12:17:11 * JustinClift hasn't looked at them since though
12:17:17 <JustinClift> "hagarth to work with Raghavendra G on dht bug fixes for 3.4.x"
12:17:24 <JustinClift> hagarth: How'd that go?
12:17:37 <hagarth> JustinClift: the patches are landing
12:18:02 <hagarth> JustinClift: http://review.gluster.org/#/q/status:+open+branch:+release-3.4,n,z
12:18:33 <hagarth> we possibly need to overcome a regression failure.. (possibly a spurious one)
12:19:04 <JustinClift> hagarth: k. :)
12:19:17 <JustinClift> "Humble to email gluster-devel and gluster-users about the upcoming test day, and including the 3.6_test_day page"
12:19:21 <JustinClift> Done.
12:19:23 <Humble> done :)
12:19:23 <JustinClift> Humble: Thanks. :)
12:19:26 <Humble> ;)
12:19:30 <hagarth> Humble++ :)
12:19:32 <JustinClift> #topic 3.4
12:19:41 <Humble> hagarth++ :)
12:20:02 <JustinClift> Also: http://review.gluster.org/#/q/status:open+project:glusterfs+branch:release-3.4,n,z
12:20:29 <JustinClift> Looks like lots of dht stuff still needs to get in?
12:20:33 <hagarth> maybe we should re-trigger regression for the 3.4 dht patches
12:20:34 <kkeithley_> nothing much to report on 3.4. Now that Denali is released maybe....
12:20:44 <kkeithley_> 1. Raghavendra's DHT
12:20:49 <kkeithley_> 2. memory leak
12:20:56 <kkeithley_> will get some love
12:21:03 <JustinClift> Yeah.
12:21:07 * JustinClift hopes so
12:21:20 <JustinClift> People seem to have been pretty slammed the last few weeks
12:21:30 <JustinClift> k, let's see next week how we're tracking
12:21:41 <JustinClift> #topic 3.5
12:22:07 <JustinClift> As per the above 3.5.3 thought, it's still in progress
12:22:14 <JustinClift> So, we'll figure this one out next week too :)
12:22:21 <JustinClift> #topic 3.6
12:22:38 <JustinClift> hagarth: How's the beta2 preparation looking?
12:22:47 <hagarth> 3.6.0beta1 was released over the weekend
12:22:51 <Humble> hagarth++
12:22:52 <glusterbot> Humble: hagarth's karma is now 3
12:23:02 <hagarth> a few bugs were identified and patches have landed since then
12:23:13 <hagarth> plan to do beta2 later today
12:23:15 <JustinClift> hagarth: Should we build beta2 rpms and sanity test them before announcing the new tarball + rpms?
12:23:34 <hagarth> JustinClift: there are a few folks who make use of the tarball too
12:23:35 <JustinClift> Or would this stuff not show up in sanity tests, and we need active testers?
12:23:49 <kkeithley_> the beta1 RPMs were broken, so yes, we should build beta2 RPMs
12:24:13 <hagarth> yes, testing of RPMs would be good before we announce them.
12:24:16 * JustinClift is just thinking we might not want to announce the tarball separately from the rpms
12:24:21 <andersb_> As I have said on the mailing lists, I currently have problems building Qemu on Fedora-20
12:24:27 <kkeithley_> well, only the -server RPM was broken
12:24:35 <lalatenduM> kkeithley_,  we have updated d.g.o with 3.6.0-0.2.beta1
12:24:37 <hagarth> andersb_: I face that on one of my systems too
12:24:42 <lalatenduM> and it is not broken anymore
12:24:44 <kkeithley_> lalatenduM++
12:24:45 <glusterbot> kkeithley_: lalatenduM's karma is now 2
12:24:52 <Humble> yeah, beta1 rpms are available
12:24:54 <JustinClift> So, maybe we prepare the initial tarball, build rpms from that, do basic sanity testing (e.g. qemu building on F20), and then announce them if it all passes
12:24:58 <Humble> and it's usable ..
12:25:17 <hagarth> JustinClift: building qemu is a different problem .. maybe I'll hold it for beta3
12:25:26 <JustinClift> k
12:25:36 <lalatenduM> andersb_, hagarth yeah, that's an issue
12:25:40 <hagarth> I can hold the announcement for beta2 and notify the rpm packagers about the tarball
12:25:46 <lalatenduM> not sure how to fix it
12:25:49 <Humble> hagarth, yeah
12:25:50 <JustinClift> hagarth: Lets try that
12:25:54 <hagarth> we can send out an announcement once the rpms are built
12:25:59 <hagarth> ok, cool
12:26:02 <Humble> indeed, that's better
12:26:16 <lalatenduM> hagarth +1
12:26:18 <JustinClift> hagarth: Yeah, that's also more in line with how other projects do it
12:26:24 <JustinClift> In this case, I think it's a good thing :)
12:26:43 <lalatenduM> at least in beta builds, it is a good idea
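A minimal sketch of the "build, then sanity-check, then announce" flow discussed above, assuming the release tarball ships its own spec file (as glusterfs dist tarballs normally do) and that rpmbuild/rpm are on the path; the tarball name and the libgfapi check are illustrative, not the project's actual release tooling:

    #!/usr/bin/env python3
    # Sketch: build RPMs from a release tarball using the spec embedded in it,
    # then run basic metadata queries on every resulting package before any
    # announcement goes out. Paths and names below are illustrative.
    import glob
    import os
    import subprocess
    import sys

    TARBALL = sys.argv[1] if len(sys.argv) > 1 else "glusterfs-3.6.0beta2.tar.gz"
    RPM_DIR = os.path.expanduser("~/rpmbuild/RPMS")

    def run(cmd):
        print("+", " ".join(cmd))
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # rpmbuild -ta builds every sub-package defined by the spec inside the tarball.
    run(["rpmbuild", "-ta", TARBALL])

    # Basic sanity pass: every package must at least answer metadata queries.
    for rpm in sorted(glob.glob(os.path.join(RPM_DIR, "*", "*.rpm"))):
        info = run(["rpm", "-qpi", rpm])
        provides = run(["rpm", "-qp", "--provides", rpm])
        print(os.path.basename(rpm), "-", info.splitlines()[0])
        if "glusterfs-api" in rpm:
            # The SONAME exported by the -api package matters for qemu/samba rebuilds.
            print("  provides:", ", ".join(p for p in provides.splitlines() if "libgfapi" in p))

Whether a pass like this would have caught the beta1 breakage depends on what exactly was broken; an actual install test on a scratch VM remains the stronger check.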
12:27:00 <hagarth> It would also be good if we review documentation for 3.6 now
12:27:04 <JustinClift> #action hagarth to send beta2 announcement once the tarball and rpms are ready
12:27:10 <hagarth> or at least new features introduced in 3.6
12:27:12 <lalatenduM> hagarth, kkeithley_ we need http://review.gluster.org/#/c/8836/ for beta2
12:27:14 * kkeithley_ wonders if DPKGs for Debian and Ubuntu are useful.
12:27:16 <JustinClift> hagarth: That's a very good idea
12:27:27 <hagarth> lalatenduM: will push that soon
12:27:34 <JustinClift> kkeithley_: Yes, is it feasible to have them available at the same time?
12:27:49 <lalatenduM> hagarth, thanks, the same patch is available for master too
12:27:53 <hagarth> kkeithley_: +1 to that, getting the builds done on time is hard
12:27:58 <Humble> hagarth, once u push the release, we will try our best to make rpms available asap
12:28:02 <andersb_> putting up the Qemu RPMs as well would work for me (hopefully the patch in BZ1145993 is enough), machine is still building
12:28:05 <JustinClift> If we can have tarball, rpms, and debs all at the same time, that would be pretty optimal
12:28:06 <hagarth> Humble: cool
12:28:14 <kkeithley_> I would not hold the announcement waiting for DPKGs
12:28:34 <JustinClift> kkeithley_: No worries. :)
12:28:34 <hagarth> maybe we should have one grand script that automates building packages for all distros :)
12:28:35 <kkeithley_> I agree it'd be a "nice to have"
12:29:06 <hagarth> I'll send a list of documents that we have for new features in 3.6
12:29:24 <lalatenduM> hagarth, pkg building for EL and Fedora is not difficult :) but the steps after that take time :)
12:29:32 <hagarth> we can start reviewing and polishing those docs
12:29:38 <lalatenduM> when the specfile is ready
12:29:39 <kkeithley_> hagarth: maybe we can get an intern to work on that
12:29:47 <Humble> lalatenduM, the above applies only when there is not much change in the spec file
12:29:47 <JustinClift> hagarth: make an AI for it :)
12:29:56 <lalatenduM> Humble, right , agree
12:29:59 <kkeithley_> i.e. the one, grand, build everything script
12:30:01 <Humble> for the first build of 3.6.0, that was not the case
12:30:10 <kkeithley_> written in DTRT.
12:30:10 <lalatenduM> Humble, yup
12:30:15 <hagarth> #action hagarth to send out documentation pointers to new features in 3.6
12:30:39 <JustinClift> #action JustinClift to update GlusterFS OSX Homebrew formula for beta2
12:30:57 <JustinClift> ^ that will let people on OSX test the client-side bits more easily
12:31:03 <hagarth> JustinClift: cool
12:31:16 <JustinClift> hagarth: If you let me know when the tarball for beta2 is online, I'll do that.  It's pretty quick
12:31:27 <hagarth> JustinClift: should we reach out to FreeNAS to see if they have any interest in the FreeBSD port?
12:31:35 <hagarth> JustinClift: will do
12:31:40 <JustinClift> Can't see why not. :)
12:31:41 <Humble> hagarth, by documentation pointers, r u referring to the 'admin guide' as well?
12:31:48 <hagarth> Humble: primarily admin-guide
12:31:54 <Humble> ok.. cool
12:32:10 <hagarth> in the absence of relevant chapters in admin-guide, I'll resort to feature pages :)
12:32:17 <JustinClift> #action JustinClift to reach out to the FreeBSD and FreeNAS Communities, asking them to test beta2
12:32:19 <Humble> that would be good,
12:32:25 <hagarth> we do need more testing of beta2
12:32:28 <Humble> our 'features' folder looks ok now :)
12:32:46 <JustinClift> Corvid Tech may be able to throw a few hundred nodes at it
12:32:47 <hagarth> please drag and involve whomever you can find to test beta2 :D
12:32:59 <JustinClift> Probably just using the existing features rather than new ones, but I'm unsure
12:33:21 <hagarth> JustinClift: yes, feedback on both old & new would be great
12:33:48 <JustinClift> k.  We need to plan out the marketing activities for 3.6.0 as well
12:34:12 <JustinClift> hagarth, davemc, johnmark and I have started initial planning discussions off-list
12:34:24 <JustinClift> But, I'm not really seeing why we shouldn't do this on-list somewhere
12:34:32 <andersb_> Would it be possible to let new versions include a compatibility libglusterfs.so.0, to avoid having to rebuild everything?
12:34:54 <lalatenduM> JustinClift, +1 for moving discussion to a list
12:34:58 <hagarth> andersb_: I think we could do that
12:35:14 <lalatenduM> andersb_, let's discuss this in gluster-devel, we need to fix this issue
12:35:33 <JustinClift> lalatenduM: With PostgreSQL we set up an "Advocacy and Marketing mailing list" and people got involved in that
12:35:42 <hagarth> lalatenduM: http://www.gluster.org/community/documentation/index.php/Planning/GlusterCommunity has some early thoughts
12:35:45 <JustinClift> But I think gluster-devel would be suitable for us for now
12:35:53 <lalatenduM> hagarth, andersb_, else qemu users, samba users can't upgrade glusterfs from 3.5 to 3.6
12:35:56 <kkeithley_> libglusterfs.so.0?  AFAIK we're only bumping the SO_VERSION in libgfapi.so
12:36:10 <lalatenduM> kkeithley_, yes
12:36:20 <Humble> for rpm based installation, we need to uninstall the old glusterfs-api and install the new glusterfs-api
12:36:34 <lalatenduM> Humble, that is not working
12:36:40 <lalatenduM> Humble, I just tried that
12:36:41 <Humble> I feel it should work..
12:37:08 <lalatenduM> Humble, I have uninstalled qemu and samba-vfs-glusterfs, then installed 3.6
12:37:15 <hagarth> let us think through this .. it is an important issue to be fixed for 3.6
12:37:19 <andersb_> OK, probably sloppy interpretation of upgrade failure reasons on my part :-(
12:37:30 <lalatenduM> and again tried installing qemu and samba-vfs-glusterfs, it is failing
12:37:57 <lalatenduM> hagarth, yup, else it would block upgrades from 3.5 to 3.6
12:38:30 <Humble> I think only glusterfs-api is affected lalatenduM
12:38:49 <lalatenduM> Humble, yes
12:39:07 <Humble> which has dependencies through libgfapi versioning, not the entire set of glusterfs packages
12:39:08 <andersb_> Correct, should be libgfapi.so.0 :-(
12:39:49 <kkeithley_> That's why we need to notify devel@fedoraproject.org, so that those maintainers will rebuild and new versions of qemu, samba, and ganesha land at the same time as glusterfs-3.6.0
12:39:57 <lalatenduM> andersb_, nope, if the api is changed the version should be bumped up
12:40:09 <lalatenduM> kkeithley_, +1
12:40:25 <Humble> kkeithley_, yep..
12:40:25 <lalatenduM> kkeithley_, but I am wondering how that will work for EL
12:40:50 <andersb_> Not possible to do a compatibility lib then? Makes upgrading much harder :-(
12:40:55 <kkeithley_> we don't have glusterfs in EPEL
12:40:55 * lalatenduM thinking of EL5, 6, 7
12:41:05 <lalatenduM> kkeithley_, CentOS SIG
12:42:09 * lalatenduM thinks we should discuss this in gluster-devel ML
12:42:18 <kkeithley_> and we (as in you) are the maintainer of samba and ganesha in the CentOS Storage SIG.
12:42:23 <hagarth> lalatenduM: +1
12:42:33 <kkeithley_> so you'll be on top of rebuilding those.
12:42:34 <lalatenduM> kkeithley_, yeah, I can rebuild :)
12:42:37 <lalatenduM> kkeithley_++
12:42:39 <glusterbot> lalatenduM: kkeithley_'s karma is now 5
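For context on the libgfapi point above, a small sketch of how a packager could list the installed packages that still require the old SONAME and therefore need a rebuild. The SONAME is passed on the command line rather than hard-coded, since the exact new value is whatever the 3.6 build exports; the script only wraps rpm's standard dependency query.

    #!/usr/bin/env python3
    # Sketch: list installed RPMs that depend on a given libgfapi SONAME.
    # These are the packages (qemu, samba-vfs-glusterfs, ganesha, ...) that
    # need a rebuild when the SONAME is bumped.
    import subprocess
    import sys

    old_soname = sys.argv[1] if len(sys.argv) > 1 else "libgfapi.so.0"

    # RPM auto-generates shared-library dependencies as "libgfapi.so.0()(64bit)".
    dep = f"{old_soname}()(64bit)"
    result = subprocess.run(
        ["rpm", "-q", "--whatrequires", dep],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(f"nothing installed requires {dep}")
    else:
        print(f"packages requiring {dep} (rebuild needed after the bump):")
        for pkg in result.stdout.splitlines():
            print("  ", pkg)

This only identifies what is linked against the old SONAME; whether a compatibility symlink (as floated above) is safe depends on whether the ABI actually changed, which is exactly the question proposed for gluster-devel.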
12:43:25 <JustinClift> k, is there an action item here?
12:43:30 <JustinClift> or items? :)
12:43:35 <lalatenduM> kkeithley_, but we don't have qemu in the SIG yet
12:43:48 <lalatenduM> will discuss this offline with you
12:43:55 <kkeithley_> who maintains qemu in CentOS SIG?
12:44:10 <lalatenduM> kkeithley_, not sure , will find out
12:44:18 <lalatenduM> JustinClift, you can put that on me
12:44:37 <JustinClift> lalatenduM: You can make it :)
12:44:44 * JustinClift isn't sure of the details for the ai
12:44:45 <lalatenduM> #AI notify devel@fedoraproject.org, so that those maintainers will rebuild and new versions of qemu, samba, and ganesha land at the same time as the glusterfs-3.6.0
12:44:46 <Humble> Michael Tokarev <mjt@tls.msk.ru> not 100%sure though
12:44:58 <JustinClift> lalatenduM: thx
12:45:11 <JustinClift> It's more like this though:
12:45:21 <JustinClift> #action lalatenduM to notify devel@fedoraproject.org, so that those maintainers will rebuild and new versions of qemu, samba, and ganesha land at the same time as glusterfs-3.6.0
12:45:36 <lalatenduM> yeah ;)
12:45:38 <JustinClift> (in theory, that should work.  we get to find out)
12:45:50 <JustinClift> k, anything else for 3.6?
12:46:18 <JustinClift> Cool, moving on. :)
12:46:28 <JustinClift> #topic Other items to discuss
12:46:43 <JustinClift> The etherpad doesn't have any
12:46:48 <JustinClift> Anyone got stuff?
12:46:52 <overclk> yep
12:46:59 <itisravi> overclk:  BTRFS stuff
12:47:00 <JustinClift> The floor is yours... :)
12:47:02 <overclk> hagarth, want to discuss about our meeting last week
12:47:31 <overclk> itisravi, yes, thanks!
12:47:32 <hagarth> overclk: go ahead!
12:48:03 <overclk> so, last week a handful of folks met and discussed using BTRFS as bricks and using some of its "killer" features :)
12:48:13 <JustinClift> Cool
12:48:19 <kkeithley_> btw, the longevity cluster is running 3.6.0beta1
12:48:29 <JustinClift> kkeithley++ :)
12:48:30 <Humble> kkeithley++
12:48:31 <glusterbot> Humble: kkeithley's karma is now 2
12:48:33 <lalatenduM> kkeithley_, awesome :)
12:48:39 <overclk> features such as data/metadata checksumming, subvolumes, etc...
12:48:47 <kkeithley_> http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/
12:48:50 <JustinClift> Yeah.  checksumming sounds super useful :)
12:48:52 <JustinClift> kkeithley_: Wait
12:48:59 <JustinClift> kkeithley_: Let overclk finish first
12:49:05 <kkeithley_> sry
12:49:09 <overclk> kkeithley_, np
12:49:45 <overclk> From the handful of features, we decided to explore checksumming and subvolumes to start with. Checksumming helps with offloading bitrot detection to btrfs
12:50:11 <hagarth> overclk: might be worth a check to see if folks are using glusterfs with btrfs in the community already
12:50:26 <JustinClift> overclk: Would it be useful to have a vm in rackspace with btrfs attached disk, for running regression tests on?
12:50:44 <itisravi> JustinClift: That would be cool.
12:50:46 * kkeithley_ has used btrfs for bricks in his testing. Hasn't done anything exotic with it though
12:51:08 <hagarth> JustinClift: yes, that would be great!
12:51:09 <overclk> hagarth, yep! I plan to mail gluster-devel@ asking for inputs on our strategy
12:51:22 <hagarth> overclk++ :)
12:51:58 <JustinClift> itisravi hagarth: I can create a basic slave node in Rackspace pretty easily, but it would be better if someone clueful with btrfs did the btrfs setup bit
12:52:03 <JustinClift> Any volunteers?
12:52:14 <overclk> JustinClift, me
12:52:24 <itisravi> I can pitch in too
12:52:27 <JustinClift> overclk: Cool. :)
12:52:28 <overclk> cool!
12:52:33 <JustinClift> itisravi: :)
12:52:46 <itisravi> JustinClift: :)
12:52:51 <overclk> That's about it for BTRFS as of now ... hopefully next week we'll have more to talk about :)
12:53:08 <JustinClift> #action JustinClift to setup new slave vm in Rackspace for btrfs testing, and pass it across to overclk + itisravi for btrfs setup
12:53:11 <overclk> Now, w.r.t. BitRot (without BTRFS :))
12:53:30 <JustinClift> overclk itisravi: Which base OS should go on it?  CentOS 7?
12:53:46 <hagarth> JustinClift: yes, CentOS 7 would be right
12:53:50 <overclk> JustinClift, I'm OK with that. Anyone else have another opinion?
12:54:01 <itisravi> JustinClift: what kernel version does CentOS 7 run on?
12:54:07 * JustinClift looks
12:54:16 <lalatenduM> we can use Fedora as well
12:54:20 <hagarth> overclk: go ahead with bitrot (without btrfs)
12:54:25 <kkeithley_> it's something like 3.10 AFAIK
12:54:33 <lalatenduM> Fedora would have the latest kernel
12:54:33 <itisravi> Fedora 20 has 3.16.x as of now, so maybe that's a better choice
12:54:40 <JustinClift> yeah, 3.10 with backported patches
12:54:44 <overclk> I would like 3.17-rc6 to be used and the latest btrfs-progs (just to minimize data losses :P)
12:55:01 <hagarth> overclk: and the deadlock too ;)
12:55:08 <overclk> hagarth, :)
12:55:22 <JustinClift> Right, we're installing illumos then :p
12:55:53 <JustinClift> k, so Fedora <latest available> that Rackspace has then
12:55:54 <kkeithley_> Fedora 21 alpha seems pretty stable too. You might get a longer run over the life of F21
12:56:04 <lalatenduM> kkeithley_,+1
12:56:22 <JustinClift> Sure.  I'll see what Rackspace has, and also if we can force it up to F21 alpha
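As a rough illustration of the brick layout being explored, a sketch of carving per-brick btrfs subvolumes on a scratch device and running a scrub, the checksum-verification pass that bitrot detection would lean on. The device path, mount point, and brick names are placeholders, the commands need root, and mkfs destroys whatever is on the device:

    #!/usr/bin/env python3
    # Sketch: prepare btrfs-backed bricks on a scratch disk and run a scrub
    # (btrfs's checksum verification pass). Device and mount point are
    # placeholders -- mkfs.btrfs wipes the device.
    import os
    import subprocess
    import sys

    DEVICE = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdX"   # placeholder
    MOUNT = "/bricks/btrfs0"
    BRICKS = ["brick1", "brick2"]

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["mkfs.btrfs", "-f", DEVICE])   # data + metadata checksums by default
    os.makedirs(MOUNT, exist_ok=True)
    run(["mount", DEVICE, MOUNT])

    # One subvolume per brick, so bricks can be snapshotted and scrubbed independently.
    for brick in BRICKS:
        run(["btrfs", "subvolume", "create", f"{MOUNT}/{brick}"])

    # Foreground scrub: walks the data, verifies checksums, reports errors.
    run(["btrfs", "scrub", "start", "-B", MOUNT])
    run(["btrfs", "scrub", "status", MOUNT])

How scrub results would be surfaced to the proposed BitRot daemon is the kind of question the planned gluster-devel thread would need to settle.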
12:56:27 <JustinClift> Anyway, moving on...
12:56:42 <overclk> OK, so on to bitrot
12:56:43 <JustinClift> overclk: You were saying bitrot detection without btrfs
12:57:20 <overclk> yep.. So, I had sent out a basic approach for BitRot (basically a BitRot daemon) a while ago.
12:57:46 <hagarth> overclk: right..
12:57:50 <overclk> That was based on a long mail thread started by Shishir...
12:58:08 <overclk> with a few changes here and there... but the approach is pretty much the same.
12:58:18 <hagarth> yeah..
12:58:41 <overclk> So, before I send out the task breakup I would appreciate some inputs on the doc.
12:59:13 <hagarth> overclk: will do.
12:59:21 <itisravi> overclk: I will go through it  over the weekend
12:59:23 <overclk> Once we all agree what's expected and the overall approach, things can move forward.
12:59:40 <overclk> hagarth, itisravi thanks!
12:59:58 <kkeithley_> Joe Fernandes is working on compliance and tiering, which includes bitrot. He should be involved in bitrot
12:59:58 <andersb_> is that http://www.gluster.org/community/documentation/index.php/Features/BitRot
13:00:02 <hagarth> overclk: cool, thanks for the detailed update!
13:00:23 <overclk> andersb_, yep
13:00:33 <kkeithley_> Just make sure everything plays well together
13:00:46 <overclk> kkeithley_, I'll loop in joe
13:00:57 <JustinClift> Make an AI for this :)
13:01:08 <JustinClift> eg: "#action [name] [stuff to be done]"
13:01:18 <kkeithley_> dan lambright too
13:01:40 <overclk> #AI overclk to loop in Joe|Dan regarding BitRot stuffs
13:01:51 <JustinClift> #action overclk to loop in Joe|Dan regarding BitRot stuffs
13:01:56 <JustinClift> Close ;)
13:01:57 <hagarth> overclk: do sync up with rabhat too :)
13:02:02 <overclk> JustinClift, Thanks! :)
13:02:08 <overclk> hagarth, sure
13:02:18 <JustinClift> k, we're outta time.
13:02:26 <overclk> I'm done :)
13:02:29 <lalatenduM> the cppcheck fixes from kkeithley_  are not merged yet ;(
13:02:36 <JustinClift> Thanks for attending everyone.  kkeithley_, thanks for the longevity cluster too
13:02:38 <hagarth> lalatenduM: on my list for this week
13:02:43 <hagarth> JustinClift: one sec
13:02:46 <JustinClift> hagarth: Make an ai
13:02:48 <lalatenduM> hagarth, thanks
13:02:48 <JustinClift> Sure.
13:02:51 * JustinClift waits
13:03:04 <hagarth> next Wed happens to be the eve of a long weekend in India
13:03:17 <JustinClift> Ahhh.  Skip next week then?
13:03:29 <hagarth> JustinClift: yeah, that might be better
13:03:32 <lalatenduM> hagarth, http://review.gluster.org/#/c/8213/ too :)
13:03:34 <JustinClift> np
13:03:49 <hagarth> #action hagarth to review cppcheck and http://review.gluster.org/#/c/8213/ this week
13:03:54 <JustinClift> :)
13:04:15 <JustinClift> k, all done?
13:04:17 <hagarth> that's all from me
13:04:24 <JustinClift> Cool.  :)
13:04:28 <JustinClift> #endmeeting