gluster-meeting
LOGS
11:05:36 <jdarcy> #startmeeting
11:05:36 <zodbot> Meeting started Thu Feb 26 11:05:36 2015 UTC.  The chair is jdarcy. Information about MeetBot at http://wiki.debian.org/MeetBot.
11:05:36 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
11:05:46 <jdarcy> #topic Roll call
11:06:05 * jdarcy is here (obviously).
11:06:07 * xavih is here
11:06:10 * ndevos is here too
11:06:33 <jdarcy> #topic Current status
11:06:58 <jdarcy> Basically, most people are tied down with 3.7 stuff.
11:07:04 * kshlm is here
11:07:16 <jdarcy> Hi kshlm.
11:07:24 <kshlm> Hi :)
11:07:37 <ndevos> yes, 3.7 seems to be a hot topic at the moment...
11:07:43 <jdarcy> I know Dan and Joseph have put up their tiering code for review.
11:08:02 <jdarcy> That's a step toward data classification, or at least not a step away.
11:08:30 <jdarcy> Shyam is still chugging along on DHT2 designs.
11:08:56 <jdarcy> I've decided to split the NSR work into a server-side/leader-based part and a log-based part.
11:09:12 <jdarcy> The server-side part is also potentially applicable to AFR and EC, and can be done sooner.
11:09:42 * JustinClift arrives late
11:09:45 <kshlm> WRT GlusterD scalability, KP and I are planning to spend 1 day every week on it.
11:10:01 <kshlm> (till our 3.7 stuff gets done)
11:10:07 <jdarcy> Did some experiments with up to four clients: server-side AFR (otherwise unchanged) got nearly twice as much bandwidth *and twice as many IOPS* compared to client-side AFR.
11:10:28 <ndevos> wow, impressive!
11:10:33 <jdarcy> Median latency slightly better, 99th-percentile latency slightly worse.
11:10:52 <jdarcy> That was on Digital Ocean.  Rackspace results weren't quite as impressive.
11:11:08 <jdarcy> I expect bare-metal results to track DO more than RS.
11:11:34 <jdarcy> So there'll be some patches for that next week.
11:11:34 <JustinClift> Did we fill available disk or network bandwidth?
11:12:01 <jdarcy> JustinClift: The disks were bored.  Pretty much filled up the network.
11:12:14 <jdarcy> The instances I was using were all SSD.
11:12:16 * krishnan_p is here
11:12:19 <JustinClift> jdarcy: Cool :)
11:12:43 <kshlm> jdarcy, awesome!
11:13:03 <JustinClift> jdarcy: It'll be interesting to test with IB, and see what limits it hits
11:13:33 <jdarcy> Yeah, there'll be a lot more testing.  If I make it configurable via an option, should be easy enough.
11:13:43 * JustinClift nods
11:14:15 <jdarcy> Funny thing is, performance isn't really the reason I care about this.  As far as I'm concerned, split-brain resistance is.
11:14:35 * hagarth joins in now
11:14:37 <JustinClift> Makes sense to me
11:14:41 <jdarcy> Hi hagarth.
11:14:43 * shyam1 joined
11:14:58 <hagarth> jdarcy: hello
11:14:58 <JustinClift> However, I always like seeing things hit the physical limits performance wise :)
11:15:04 <jdarcy> So, anything else for current status?  Shyam or KP, perhaps?
11:15:32 <JustinClift> Btw, are we ok to communicate the above testing results via blog post / mailing list?
11:15:41 <krishnan_p> I haven't been fortunate enough to work on my 4.0 items. No update this week
11:15:50 <jdarcy> JustinClift: I'll be posting something soon-ish, with pretty graphs etc.
11:16:03 <JustinClift> :)
11:16:15 * atinmu is here too
11:16:15 <shyam1> Well, on DHT2, thoughts and discussions on possibilities and problems are ongoing, but not enough time has been spent formalizing anything close to a design
11:16:25 <krishnan_p> jdarcy, look forward to the blog. Would love to understand how moving afr to server-side is giving it split-brain resistance.
11:17:42 <jdarcy> krishnan_p: Basically it's because there are never two clients issuing writes simultaneously with different connectivity to servers or ideas about quorum.
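(Editor's note: a minimal, hypothetical sketch of the split-brain argument above; all names are invented for illustration and this is not GlusterFS code. It contrasts client-side replication, where each client writes to whichever replicas it can reach, with a server-side, leader-based scheme where a single leader refuses to commit without quorum.)

```python
# Hypothetical sketch: why one server-side leader avoids the split-brain
# that independent client-side writers can cause. Not GlusterFS code.

class Replica:
    def __init__(self, name):
        self.name = name
        self.log = []          # ordered list of committed writes

def client_side_write(data, reachable):
    """Client-side style: apply the write to every replica this client can reach."""
    for rep in reachable:
        rep.log.append(data)

def leader_write(data, reachable, quorum):
    """Leader-based style: commit only if a quorum of replicas is reachable."""
    if len(reachable) < quorum:
        raise IOError("no quorum: write rejected rather than risking divergence")
    for rep in reachable:
        rep.log.append(data)

# Network partition: client A sees only r1, client B sees only r2.
r1, r2 = Replica("r1"), Replica("r2")
client_side_write("A's data", [r1])
client_side_write("B's data", [r2])
split_brain = r1.log != r2.log     # replicas have diverged

# Same partition, leader-based: the lone leader sees one replica out of
# two and (with quorum = 2) rejects the write, so replicas never diverge.
r1, r2 = Replica("r1"), Replica("r2")
try:
    leader_write("A's data", [r1], quorum=2)
except IOError:
    pass
no_divergence = r1.log == r2.log   # both logs are still empty
```

With a single leader there is only ever one writer deciding what quorum means, which is the property jdarcy describes.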
11:18:18 <jdarcy> Moving along...
11:18:34 <shyam1> Interestingly, on DHT2, with limited subvols holding the name/dentry information for directories, we may need to consider metadata-server-_like_ approaches in the future, or at least see how best to keep size and time information on the name servers updated
11:18:47 <krishnan_p> jdarcy, thanks.
11:18:52 <shyam1> Just wanted people to know the broad direction of thought with the above statement
11:19:39 <jdarcy> shyam1: But still more of a metadata *cluster* (like Ceph) than a single server (like crappy file systems) right?
11:20:06 <shyam1> Yes absolutely :) , so correction: metadata serverS
11:20:17 <jdarcy> (That wasn't a leading question at all.)
11:20:23 * krishnan_p is wondering if we should consider moving towards a mon cluster that many services interested in consensus could use ...
11:20:49 <jdarcy> krishnan_p: Yes, that's still essential for many things.
11:22:01 <shyam1> krishnan_p: I would say the layout updates, and tracking who holds what regions, need some form of consensus - so what you are thinking is essential for that as well
11:22:14 <jdarcy> #topic New business (and plans)
11:22:16 <JustinClift> Interesting.  So at low node counts we have some services doing centralised stuff (but resilient/redundant), and as we scale out we drop the need for them?
11:22:28 <JustinClift> Gah, I type too slow pre-coffee
11:22:33 <jdarcy> JustinClift: Yeah, pretty much.
11:22:49 <JustinClift> Cool :)
11:23:08 <jdarcy> JustinClift: That way we can maintain strong consistency of our config data, and high-level operational state.
11:23:40 <jdarcy> Anybody have any specific plans for 4.0-related work over the next couple of weeks?
11:23:54 <JustinClift> I'm all for this.  I was making snarky hints/comments about having stuff like that last year :)
11:23:57 <jdarcy> I'll be posting patches and blocks about the server-side framework, as mentioned.
11:24:00 <ndevos> well, not so much of a plan, more something of an idea?
11:24:08 <jdarcy> s/blocks/blogs
11:24:16 <jdarcy> (Too early in the morning, obviously)
11:24:18 <krishnan_p> I could review those patches as a start.
11:24:37 <jdarcy> ndevos: Ideas are good.
11:24:39 <ndevos> I'd like to make gluster/nfs completely optional for 4.0, and not have it installed by default - giving way for NFS-Ganesha
11:24:52 <krishnan_p> kshlm and I intend to spend some time thinking about how we should design glusterd geared for 4.0
11:25:28 <jdarcy> ndevos: Interesting.  Seems reasonable to me.  Any thoughts on that, hagarth?
11:25:31 <krishnan_p> jdarcy, any thoughts on revisiting the stacked volumes idea?
11:25:55 <hagarth> jdarcy: sounds good to me too.
11:26:04 <jdarcy> krishnan_p: Yes, but probably not in the next couple of weeks.  Any particular concern there?
11:26:22 * shyam1 intends to beef up some parts of the posted document on DHT2 with current thoughts on the way forward. Whether that will be done in 2 weeks is still in doubt, considering other priorities at the moment, but he hopes to
11:26:25 <krishnan_p> jdarcy, no concern. Would like to know how that shapes in the context of the next-gen glusterd.
11:27:20 <shyam1> ndevos: considering the way forward as Ganesha, I would want the same for Gluster NFS
11:27:40 <hagarth> I would like to drop FS from GlusterFS for 4.0. We should probably brand ourselves as GlusterDS - Gluster Distributed Storage ;)
11:27:50 <jdarcy> krishnan_p: Yeah, need to think about how glusterd needs to be involved in "thinking about" the relationships between super- and sub-volumes, especially with regard to failures etc.
11:27:58 <ndevos> shyam1: how do you mean?
11:28:10 <jdarcy> hagarth: Hm.
11:28:24 <JustinClift> hagarth: Interesting.  That could be spun positively, marketing-wise
11:28:45 <shyam1> ndevos: making gluster/nfs optional... ++ is what I meant
11:28:54 <ndevos> shyam1: ah, ok!
11:29:17 <hagarth> we aren't just an FS - remember this from jdarcy's presentation at Red Hat Summit a while back
11:29:20 <jdarcy> Isn't "FS" one thing that distinguishes us from every not-an-FS out there?
11:30:00 <jdarcy> Maybe keep "GlusterFS" as a part of the larger "GlusterDS" project, like CephFS kind of is for Ceph?
11:30:16 <hagarth> jdarcy: yeah, something like that would be cool.
11:30:58 <jdarcy> Maybe we should open up that discussion on the mailing list.
11:31:22 <hagarth> jdarcy: yes, we should. And maybe one of the topics that we can discuss in the offline summit being planned.
11:31:39 <jdarcy> #action jdarcy to start discussion of "GlusterDS" name change
11:31:57 <jdarcy> #topic Next meeting
11:32:14 <jdarcy> Thursday two weeks from now is the middle of Vault.
11:32:25 <jdarcy> Not a problem for me, but thought I'd mention it.
11:32:46 <jdarcy> Anyone object to same time/place on March 12?
11:32:55 <krishnan_p> None
11:32:59 * ndevos will be at vault, not sure if he would be able to join
11:33:12 * hagarth likewise as ndevos
11:33:18 <jdarcy> ndevos: Yeah, that's wicked early for you.
11:33:34 <ndevos> oh, right, timezones!
11:33:41 * shyam1 may be travelling to Vault at that time, to get to Boston on time
11:33:43 <hagarth> jdarcy: wouldn't you have sprung forward by an hour?
11:34:12 <jdarcy> I'm OK with that.  So 12:00 UTC?
11:34:36 <hagarth> sounds good
11:34:48 <jdarcy> Oh wait, you meant DST kicks in.  Hmm.
11:35:12 <jdarcy> So 11:00 UTC would be 07:00 EDT (instead of 06:00 EST right now).
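(Editor's note: a quick way to verify the DST arithmetic being discussed; the date and zone name are taken from the log, and the check uses the standard-library `zoneinfo` module, Python 3.9+.)

```python
# Verify the conversion above: once US DST starts on March 8, 2015,
# 11:00 UTC corresponds to 07:00 in US Eastern time (EDT, UTC-4).
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

meeting_utc = datetime(2015, 3, 19, 11, 0, tzinfo=timezone.utc)
eastern = meeting_utc.astimezone(ZoneInfo("America/New_York"))
print(eastern.strftime("%H:%M %Z"))  # 07:00 EDT
```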
11:35:28 <hagarth> jdarcy: right
11:35:36 <jdarcy> 08:00 EDT might be a little late for people at the conference.
11:36:03 <JustinClift> Push it back by a week?
11:36:08 <jdarcy> (Switch to EDT on March 8, for those who didn't already look it up)
11:36:13 <JustinClift> Or make it a better time?
11:36:24 <jdarcy> Since a lot of people are busy, maybe another week would be good.
11:36:30 <ndevos> the schedule of VAULT starts at 9:00
11:36:56 <jdarcy> So . . . March *19*, 11:00 UTC (07:00 EDT)?  Going once...
11:37:18 <hagarth> sounds good!
11:37:21 * krishnan_p won't make it on March 19, on a short vacation.
11:37:42 <ndevos> 19th works for me too
11:37:43 <krishnan_p> jdarcy, don't bother. I will catch up on what transpired once I am back.
11:37:49 <JustinClift> :)
11:37:52 <jdarcy> krishnan_p: So you get a free pass.  ;)
11:38:04 <jdarcy> Going twice...
11:38:08 <jdarcy> Done.
11:38:10 <jdarcy> (heh)
11:38:37 <jdarcy> Anything else before I end the meeting?
11:39:17 <ndevos> thanks for getting up early, jdarcy!
11:39:28 <jdarcy> ndevos: No problem.
11:39:33 <jdarcy> #endmeeting