rdo_meeting_(2016-06-08)
LOGS
15:00:36 <number80> #startmeeting RDO meeting (2016-06-08)
15:00:36 <zodbot> Meeting started Wed Jun  8 15:00:36 2016 UTC.  The chair is number80. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:36 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:00:36 <zodbot> The meeting name has been set to 'rdo_meeting_(2016-06-08)'
15:00:42 <openstack> Meeting started Wed Jun  8 15:00:36 2016 UTC and is due to finish in 60 minutes.  The chair is number80. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:43 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:46 <openstack> The meeting name has been set to 'rdo_meeting__2016_06_08_'
15:01:21 <leifmadsen> o/
15:01:31 <jpena> o/
15:01:43 * rbowen waves
15:01:57 <number80> agenda is here
15:02:02 <number80> https://etherpad.openstack.org/p/RDO-Meeting
15:02:10 <number80> #chair jpena leifmadsen rbowen
15:02:10 <zodbot> Current chairs: jpena leifmadsen number80 rbowen
15:02:11 <openstack> Current chairs: jpena leifmadsen number80 rbowen
15:02:23 <imcsk8> o/
15:02:27 <Duck> p/
15:02:37 <Duck> why two bots?
15:02:41 <jpena> dmsimard: we need you to do your magic to the openstack bot
15:02:54 <eggmaster> o/
15:03:31 <dmsimard> ok
15:03:33 <dmsimard> all good
15:03:37 <number80> #chair imcsk8 Duck dmsimard eggmaster
15:03:37 <zodbot> Current chairs: Duck dmsimard eggmaster imcsk8 jpena leifmadsen number80 rbowen
15:03:45 <Duck> do the bots handle people on tatamis instead of chairs?
15:03:53 * dmsimard thinks he should add a command to rdobot so rdobot could do it
15:04:29 <number80> we should be good now, apevec and chandankumar are excused
15:04:44 <number80> #topic DLRN instance migration to ci.centos infra (recurring)
15:04:49 <number80> jpena: ?
15:05:38 <jpena> so we were only waiting for the CI promotion jobs. I *think* it was almost finished, but that's where I need dmsimard's help
15:05:49 <jpena> or maybe trown? ^^
15:06:09 <dmsimard> yeah we were waiting for promotions
15:06:13 <dmsimard> not sure if it's happened yet
15:06:15 <dmsimard> I'll look
15:06:46 <dmsimard> I don't see a current-tripleo in centos-master
15:06:48 <dmsimard> so guess not
15:06:50 <dmsimard> trown: ^ ?
15:07:04 <trown> o/ whoops I'm here :)
15:07:24 <trown> dmsimard: there has been a promotion since switching to the new method
15:07:35 <dmsimard> trown: don't see it on the dlrn server
15:07:48 <number80> #chair trwon
15:07:48 <zodbot> Current chairs: Duck dmsimard eggmaster imcsk8 jpena leifmadsen number80 rbowen trwon
15:07:52 <trown> dmsimard: https://trunk.rdoproject.org/centos7-master/current-tripleo/
15:08:06 <dmsimard> trown: http://paste.openstack.org/show/508955/ :P
15:08:11 <trown> from 06/03
15:08:13 <dmsimard> that's not internal dlrn
15:08:39 <dmsimard> hm, for some reason I thought the tripleo promotion would've been done on internal dlrn
15:08:48 <dmsimard> so we could switch tripleo to either passive or buildlogs by itself
15:08:59 <dmsimard> misunderstanding on my part
15:09:28 <dmsimard> I can take that hash and symlink it manually
15:09:59 <trown> how do I promote on the internal dlrn... I am using trunk-primary in https://ci.centos.org/job/tripleo-dlrn-promote/configure
15:10:09 <number80> please log #action and #info :)
15:10:44 <dmsimard> jpena: looks like there might be a nfs issue on internal dlrn
15:10:53 <dmsimard> or nm, it's just slow
15:11:16 <jpena> dmsimard: you scared me :). I've just checked and it works for me
15:11:26 <dmsimard> #action dmsimard to symlink hashes on internal dlrn (current-passed-ci, current-tripleo)
15:11:53 <trown> dmsimard: that works once, but how do we solve promotion not happening there?
15:12:18 <dmsimard> trown: trunk-primary dns has to be moved
15:12:29 <trown> oh right
15:12:34 <dmsimard> I guess we want to make sure it's picked up by buildlogs sync cron and stuff first
15:12:45 <trown> k, thanks for taking that action
15:12:51 <trown> makes sense to me now
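(For reference, the manual promotion dmsimard took as an action boils down to repointing a symlink in the DLRN repo root at an already-built hash directory. A minimal sketch of that step, assuming the internal instance uses the same <aa>/<bb>/<fullhash> layout as the public repo; the data path and hash below are placeholders, not the real internal values:

    #!/usr/bin/env python
    # Hypothetical helper to point a DLRN promotion link (current-tripleo,
    # current-passed-ci) at an existing hash directory. Paths are assumptions.
    import os

    def promote(repo_root, full_hash, link_name):
        # DLRN repos live under <first 2 chars>/<next 2 chars>/<full hash>
        target = os.path.join(full_hash[:2], full_hash[2:4], full_hash)
        if not os.path.isdir(os.path.join(repo_root, target)):
            raise RuntimeError("hash directory %s does not exist" % target)
        link = os.path.join(repo_root, link_name)
        tmp = link + ".tmp"
        if os.path.lexists(tmp):
            os.remove(tmp)
        os.symlink(target, tmp)
        os.rename(tmp, link)  # atomic replace of the old promotion link

    if __name__ == "__main__":
        # Placeholder values only.
        promote("/var/lib/dlrn/data/repos",
                "c621c5cc82b0eb57f945f340a609b4daa7bc9cef_f20fcda9",
                "current-tripleo")
)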
15:13:00 <jpena> ok, so thinking about the migration itself
15:13:21 <jpena> the dns switch is a pre-requisite for the CI promotion, isn't it?
15:14:08 <jpena> can we switch the DNS for trunk-primary before doing the rest, so we can be sure promotion is ok? With that, we'd only have to plan the switchover and execute it
15:14:20 <dmsimard> yeah I think we can't do without having a bit of lag between the two
15:14:33 <dmsimard> as in, we can sync manually things over if need be
15:15:39 <dmsimard> but the problem is sometimes we have hash mismatches between public and internal dlrn
15:15:50 <dmsimard> for example right now public mitaka has current-passed-ci -> c6/21/c621c5cc82b0eb57f945f340a609b4daa7bc9cef_f20fcda9
15:15:55 <dmsimard> but that hash doesn't exist on internal dlrn
15:15:59 <jpena> that's why I want to switch the DNS for trunk-primary before anything else
15:16:16 <dmsimard> trown: the get hash job already uses primary, right ?
15:16:21 <trown> dmsimard: ya
15:16:32 <dmsimard> jpena: yeah, should change primary first
15:16:37 <dmsimard> jpena: but, after test days ? :)
15:16:49 <number80> *nods*
15:16:54 <dmsimard> not like anything's going to break, just because
15:16:55 <jpena> sure :)
15:17:09 <number80> are we good to move to the next topic?
15:17:13 <jpena> a sec
15:17:26 <jpena> #action jpena to switch DNS for trunk-primary to the ci.centos.org instance on Jun 13
15:17:38 <jpena> now :)
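(Once the record is flipped, the cutover is easy to confirm from any CI node. A minimal check, assuming the record in question is trunk-primary.rdoproject.org and the new ci.centos.org instance address is known; both values here are placeholders:

    # Hypothetical post-cutover check: compare what trunk-primary resolves to
    # against the expected new address. Hostname and IP are placeholders.
    import socket

    HOSTNAME = "trunk-primary.rdoproject.org"   # assumed FQDN
    EXPECTED_IP = "192.0.2.10"                  # new ci.centos.org instance

    resolved = socket.gethostbyname(HOSTNAME)
    if resolved == EXPECTED_IP:
        print("cutover done: %s -> %s" % (HOSTNAME, resolved))
    else:
        print("still on the old host: %s -> %s" % (HOSTNAME, resolved))
)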
15:17:45 <number80> thanks :)
15:17:52 <number80> #topic Test day readiness
15:18:13 <number80> so are we ready to go?
15:18:24 <dmsimard> we got fairly recent CI promotions so I think that's good
15:18:44 <number80> trown: what about 3o images?
15:18:46 <dmsimard> do we have the whole framework of things where people can read docs on how to participate, what we want to test, how to report bugs, etc etc ?
15:19:10 <trown> ya, master images are not going to buildlogs, but there is an unsolved issue with CDN images anyways
15:19:13 <number80> Yeah, on the website, rbowen also asked to help him to improve it
15:19:26 <trown> number80: but the artifacts url works just as well from what I have seen
15:19:46 <rbowen> We have the test day page, but it can *always* be improved.
15:19:48 <number80> https://www.rdoproject.org/testday/
15:20:05 <rbowen> In particular, https://www.rdoproject.org/testday/newton/milestone1/
15:20:19 <rbowen> We have test scenarios at https://www.rdoproject.org/testday/newton/testedsetups1/
15:20:31 <rbowen> but it would be really helpful, as always, for folks to look over that and add/remove what's needed.
15:20:59 <rbowen> And, where appropriate, write more detailed test instructions for stuff that's vague and handwavey.
15:21:43 <number80> #action everyone help rbowen to update test scenarios
15:22:36 <number80> so nothing preventing us from holding the test day tomorrow?
15:22:55 <number80> 3
15:22:57 <number80> 2
15:22:59 <number80> 1
15:23:02 <trown> 0
15:23:06 <number80> \o/
15:23:11 <number80> next topic then
15:23:19 <number80> #topic Packstack refactor
15:23:26 <number80> jpena, imcsk8: stage's yours
15:23:36 <rdogerrit> Leif Madsen proposed DLRN: Update documentation for new projects.ini options  http://review.rdoproject.org/r/1312
15:23:44 <imcsk8> i liked what jpena is proposing
15:24:12 <number80> one manifest is an improvement, and I like using ansible for pre-provisioning
15:24:18 <jpena> after some discussions with imcsk8 and EmilienM, I went and tried to refactor the way packstack runs manifests. The result is currently in https://github.com/javierpena/packstack/tree/feature/manifest_refactor
15:24:40 <jpena> it's not ready to merge yet, but I'd really appreciate feedback
15:24:49 <imcsk8> jpena: i'm about to test it
15:25:09 <EmilienM> jpena: ack, I'll review that
15:25:33 <jpena> and after that, we've started thinking beyond that. apevec suggested to keep all-in-one single node only, and add Ansible wrapper (in unsupported contrib/ subfolder) reading *_HOSTS parameters for backward compat
15:25:36 <EmilienM> https://github.com/javierpena/packstack/commit/affad262614a375ed48eac5964dd477d392cf4ca crashed my browser
15:26:17 <imcsk8> i'm not sure about keeping it single node
15:26:39 <jpena> I gave it a twist and came up with https://etherpad.openstack.org/p/packstack-refactor-take2
15:27:08 <jpena> one of apevec's concerns is that we are currently not testing packstack multinode at all
15:27:23 <jpena> dmsimard: do you know if it would be possible with WeiRDO?
15:27:32 <dmsimard> weirdo does whatever upstream does
15:27:34 <dmsimard> :)
15:27:40 <imcsk8> nice!
15:27:59 <imcsk8> dmsimard: so we could make it test packstack multinode ¿right?
15:28:07 <dmsimard> EmilienM tried to do multinode upstream and it was apparently a pita
15:28:13 <jpena> ok, let's put it in a different way: would there be a way to have a packstack multinode CI job? It looks like upstream has issues with multinode testing
15:28:22 <EmilienM> actually it's getting better now
15:28:27 <EmilienM> tripleo is trying it atm
15:28:33 <EmilienM> slagle has some WIP on it for it ^
15:28:39 <jpena> I was thinking of having a job in ci.centos.org and use it as an external CI
15:28:59 <EmilienM> https://review.openstack.org/#/q/status:open+topic:tripleo-multinode
15:29:21 <dmsimard> fwiw, though, I think packstack should stay single node *BUT* allow multi-node. For example, if I want to have multi-node, it's not magically done by packstack. The user runs packstack on one node but that node has just keystone, then another node nova and so on.
15:29:22 <imcsk8> i would like to create the multinode tests for packstack, it would be a good learning experience
15:29:42 <dmsimard> I don't think packstack should be handling the multinode setup by itself
15:30:00 <dmsimard> but that's just what I think.
15:30:04 <dmsimard> I think about a lot of things.
15:30:30 <dmsimard> like, if eventually we tear out all that ssh and copy puppet module madness out of packstack
15:30:37 <dmsimard> and packstack becomes ansible driven like we brainstormed around
15:30:46 <jpena> I still think multi-node support has a point. We see people using it every day
15:30:46 <dmsimard> someone could manage what is installed where through ansible inventory
15:30:53 <leifmadsen> wouldn't you just use tripleO for this? maybe I don't have enough understanding of what is attempting to be accomplished
15:30:54 <imcsk8> jpena: +1
15:31:08 <dmsimard> jpena: I'm not saying there's no point in multinode
15:31:44 <imcsk8> i was checking the ansible API, and had the crazy idea of making packstack use it
15:31:51 <dmsimard> I'm saying the implementation of multi node should really be a bunch of single nodes and the user decides what to put on it
15:32:08 <number80> leifmadsen: until 3o has lower requirements, I still prefer keeping packstack multinode capabilities
15:32:28 <leifmadsen> number80: you mean memory usage?
15:32:39 <number80> yes
15:32:43 <amoralej> sorry, i was so focused on testing new packstack that i forgot about the meeting
15:32:46 <jpena> dmsimard: the "use ansible+remove ssh/copy stuff" idea is where I'm trying to get at with phase 2 refactor
15:32:47 <amoralej> :)
15:32:54 <imcsk8> dmsimard: if I understand correctly, you're suggesting that for multinode we use multiple runs of packstack, disabling all stuff but nova on one node, neutron on another, etc...
15:33:13 <number80> #chair amoralej
15:33:13 <zodbot> Current chairs: Duck amoralej dmsimard eggmaster imcsk8 jpena leifmadsen number80 rbowen trwon
15:33:32 <jpena> mmm... my idea of multinode is way simpler: controller, network, compute. Going per-service is actually way more advanced than that
15:33:45 <dmsimard> imcsk8: no, I'm saying packstack by itself should have no concept of multi node. If someone wants to do "multi-node" with packstack, it essentially boils down to doing several single installations where the user chooses what to put on it.
15:33:50 <leifmadsen> sounds like composable services/roles :)
15:34:25 <dmsimard> I think discussing the roadmap of packstack is outside the scope of the meeting and we're going down a rabbit hole right now though
15:34:31 <number80> well, I kinda agree w/ jpena, we don't need full multinode in packstack
15:34:51 <jpena> dmsimard: agreed, let's get back on track
15:34:55 <amoralej> dmsimard, but then we need an orchestration layer on top...
15:35:21 <imcsk8> jpena: i will test your code after the meeting
15:35:30 <number80> let's continue the discussion later
15:35:32 <jpena> should we open a discussion about phase 2 in the mailing list?
15:35:40 <imcsk8> +1
15:35:48 <EmilienM> instead of manual testing why not submitting the patch in Gerrit and make it pass CI?
15:35:49 <number80> ideally yes
15:35:58 <dmsimard> EmilienM: we will, eventually
15:35:59 <EmilienM> so we can use Gerrit to review & use OpenStack Infra to check that it actually works
15:36:13 <EmilienM> (except multinode)
15:36:21 <number80> #action jpena put packstack phase 2 discussion on the list
15:36:37 <number80> anything else before we move on?
15:36:56 <jpena> EmilienM: I'll open the review once I have polished a couple things which I know are failing right now
15:37:07 <jpena> but wanted earlier feedback
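(To make the "multinode as a bunch of single nodes" idea dmsimard and imcsk8 discuss above concrete, here is a rough sketch that generates a per-role answer file and runs packstack locally on each node. The packstack flags shown (--gen-answer-file, --answer-file) are real; the role-to-service split, file paths and the exact CONFIG_*_INSTALL keys are assumptions to be checked against a freshly generated answer file:

    # Illustrative only: "multinode" as repeated single-node packstack runs,
    # one answer file per role, run separately on each host.
    import subprocess
    import sys

    ROLE_SERVICES = {
        # services each role keeps; everything else gets CONFIG_*_INSTALL=n
        "controller": {"GLANCE", "CINDER", "HORIZON"},
        "network":    {"NEUTRON"},
        "compute":    {"NOVA"},
    }
    ALL_SERVICES = {"GLANCE", "CINDER", "HORIZON", "NEUTRON", "NOVA",
                    "SWIFT", "HEAT", "CEILOMETER"}

    def answer_file_for(role):
        path = "/tmp/packstack-%s.txt" % role
        subprocess.check_call(["packstack", "--gen-answer-file", path])
        keep = ROLE_SERVICES[role]
        lines = open(path).read().splitlines()
        with open(path, "w") as out:
            for line in lines:
                key = line.split("=", 1)[0]
                for svc in ALL_SERVICES:
                    if key == "CONFIG_%s_INSTALL" % svc:
                        line = "%s=%s" % (key, "y" if svc in keep else "n")
                out.write(line + "\n")
        return path

    if __name__ == "__main__":
        # run on each node with its own role, e.g.: python per_node.py compute
        subprocess.check_call(["packstack", "--answer-file",
                               answer_file_for(sys.argv[1])])

Whatever decides which node gets which role (inventory, orchestration) stays outside packstack, which is the point being argued above.)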
15:37:34 <number80> #topic     away with the manifest refactor).
15:37:47 <number80> #undo
15:37:47 <zodbot> Removing item from minutes: <MeetBot.items.Topic object at 0x7f69deb97dd0>
15:38:08 <number80> #topic Demos needed for RDO booth @ Red Hat Summit
15:38:17 <number80> https://etherpad.openstack.org/p/rhsummit-rdo-booth
15:38:21 <ccamacho> Hey guys! quick question im deploying liberty with quickstart and after the process is finished without errors, when deploying the overcloud im getting stuck here "2016-06-08 15:27:07 [NetworkDeployment]: CREATE_IN_PROGRESS  state changed" anyone had this error before? I tried 3 different methods all with the same result..
15:38:22 <number80> link to the schedule
15:38:25 <rbowen> #link https://etherpad.openstack.org/p/rhsummit-rdo-booth
15:38:43 <rbowen> There are lots of available slots, full hour and half hour.
15:38:46 <number80> ccamacho: just give us a few minutes so we can wrap up the current meeting
15:38:57 <rbowen> Demos, or just stand around and answer questions and keep me company. :-)
15:39:06 <ccamacho> number80, sure.
15:39:15 <rbowen> Looks like Cumulus has already snagged a spot, which is awesome.
15:39:27 <number80> anyone of you going to RH Summit?
15:40:13 <number80> (*you* not being limited to the participants of the meeting ;-) )
15:40:13 <imcsk8> i would like to but i don't have money for the plane :P (18 pesos x dollar)
15:40:13 <rbowen> ok, that's all I had to say. :-)
15:40:23 <jpena> I used to go when I was a customer... not anymore :)
15:40:40 <amoralej> i was last year
15:40:43 <number80> #info if you have a cool demo to show @ RH Summit ping rbowen
15:41:04 <number80> well, maybe we'll have a stronger presence next year
15:41:14 <number80> #topic open floor
15:41:23 <number80> last call if you have anything to discuss here
15:41:53 <leifmadsen> what is the status of RDO having the ability to pull from locations other than from rdoinfo for building?
15:42:02 <leifmadsen> s/RDO/DLRN
15:42:15 <leifmadsen> is anyone working on an example driver for pulling info from other locations?
15:42:28 <jpena> leifmadsen: sure, let me find the review
15:42:28 <number80> leifmadsen: no example driver but the review was merged
15:42:34 <leifmadsen> right, i noticed that yesterday
15:42:42 <leifmadsen> I'm interested in building for OVS
15:42:46 <leifmadsen> that's my angle :)
15:42:46 <jpena> https://review.rdoproject.org/r/1100 adds a second plugin
15:42:50 <leifmadsen> ahh
15:42:59 <jpena> and it's merged already, actually :)
15:43:09 <leifmadsen> oh, that just merged a few hours ago I guess
15:43:38 <leifmadsen> ok cool, I'll take a look and ask follow-up questions. Thanks
15:43:38 <jpena> it follows the idea of the rpm-packaging project, although it lacks a little thing to work perfectly
15:43:48 <jpena> number80, that's something I have to chat about with you
15:43:54 <leifmadsen> which little thing? :)
15:44:55 <jpena> it needs a better way to find metadata. Right now, in https://github.com/openstack/rpm-packaging/tree/master/openstack we have one non-openstack project (openstack-macros)
15:45:24 <number80> jpena: I should be available tomorrow or friday
15:45:26 <jpena> with no source git to get info from. Since we have no rdoinfo or similar thing, we'd need to have something to tell dlrn what to do
15:45:58 <jpena> number80: cool, I'll ping you tomorrow
15:45:59 <leifmadsen> k thx
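(For anyone following the DLRN driver discussion above, here is a rough skeleton of what a third pkginfo driver could look like for the OVS use case leifmadsen mentions. This is assumption-heavy: the base class, method names and projects.ini wiring are modelled on how the rdoinfo driver appears to be structured; the merged review (https://review.rdoproject.org/r/1100) is the authoritative reference, not this sketch:

    # Hypothetical DLRN pkginfo driver sketch; the base class and hook names
    # are assumed from the existing rdoinfo driver's shape.
    from dlrn.drivers.pkginfo_driver import PkgInfoDriver

    class OVSInfoDriver(PkgInfoDriver):
        """Feed DLRN package metadata from somewhere other than rdoinfo,
        e.g. a hand-maintained list of OVS source/distgit repos."""

        def getpackages(self, **kwargs):
            # Return the same kind of per-package dicts rdoinfo provides;
            # the repo URLs below are placeholders, not real ones.
            return [{
                'name': 'openvswitch',
                'upstream': 'https://github.com/openvswitch/ovs',
                'master-distgit': 'https://example.org/ovs-distgit.git',
                'maintainers': ['someone@example.org'],
            }]

        def getinfo(self, **kwargs):
            # Would walk the source/distgit repos and return the commits DLRN
            # should build; omitted in this sketch.
            return []

Assuming the option keeps its current name, projects.ini would then point pkginfo_driver at this class instead of the rdoinfo one.)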
15:49:24 <jpena> I think that's all for now
15:49:47 <larsks> champson: I think the meeting has wound down, so come on back :)
15:49:54 <champson> Ah great :)
15:50:09 <imcsk8> jpena: for the refactor, are we talking about something like this? https://github.com/imcsk8/ansible-tools/blob/master/playbooks/packstack/allinone.yml
15:50:22 <jpena> number80: endmeeting?
15:50:28 <number80> #endmeeting