rdo_meeting_-_2016-12-14
LOGS
15:01:50 <jpena> #startmeeting RDO meeting - 2016-12-14
15:01:50 <zodbot> Meeting started Wed Dec 14 15:01:50 2016 UTC.  The chair is jpena. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:50 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:01:50 <zodbot> The meeting name has been set to 'rdo_meeting_-_2016-12-14'
15:02:01 <jpena> #topic roll call
15:02:03 <jschlueter> o/
15:02:04 <mengxd> o/
15:02:04 <dmsimard> \o
15:02:08 <rbowen> o/
15:02:08 <hrybacki|mtg> o/
15:02:09 <jruzicka> o7
15:02:30 <jpena> #chair jschlueter mengxd dmsimard rbowen hrybacki jruzicka
15:02:30 <zodbot> Current chairs: dmsimard hrybacki jpena jruzicka jschlueter mengxd rbowen
15:02:46 <jpena> if you have any topic to discuss, please add it to https://etherpad.openstack.org/p/RDO-Meeting
15:03:23 <EmilienM> dmsimard: lol
15:03:59 <rbowen> jpena: well, that was a fast meeting.
15:04:03 <trown> o/
15:04:04 <jpena> :)
15:04:08 <jpena> #chair trown
15:04:08 <zodbot> Current chairs: dmsimard hrybacki jpena jruzicka jschlueter mengxd rbowen trown
15:04:14 <rbowen> Oh, someone added something. :-)
15:04:27 <mengxd> btw, i want to get some update about altarch enablement
15:04:54 <jpena> #topic CentOS 7.3 + qemu-kvm-ev situation
15:05:01 <flepied> o/
15:05:29 <amoralej> o/
15:05:40 <jpena> #chair amoralej flepied
15:05:40 <zodbot> Current chairs: amoralej dmsimard flepied hrybacki jpena jruzicka jschlueter mengxd rbowen trown
15:05:42 <mengxd> will a fix be released for it soon?
15:05:45 <jpena> So for those unaware, CentOS 7.3 is now out. This has created some issues for us, especially around qemu-kvm-ev.
15:05:56 <EmilienM> jpena: I'll repeat (sorry) but adding #chair is useless. Anyone can already take #action etc
15:06:16 <jpena> EmilienM: it's mostly in case my connection drops
15:06:49 <jpena> https://www.redhat.com/archives/rdo-list/2016-December/msg00028.html includes some background information
15:07:27 <jpena> if I understood it correctly, qemu-kvm-ev 2.6.0 is in the process of being signed and distributed to the mirrors. That should fix one of our issues
15:07:31 <jpena> amoralej ^^ ?
15:07:41 <amoralej> yes
15:08:19 <dmsimard> The tl;dr is that there's some issues with qemu-kvm <=2.3 on 7.3 and then some other issues with qemu-kvm >=2.6 on 7.3
15:08:30 <amoralej> the build is already tagged and the centos team is working hard to get all builds signed and into the official repo
15:08:36 <jruzicka> EmilienM, old habits die hard
15:09:33 <jpena> that's mostly it, does anyone have something else to add to the topic?
15:09:56 <amoralej> iiuc, the issue when running in qemu mode will be fixed in puppet-nova, right?
15:10:17 <jpena> I've proposed a patch to puppet-nova, will update after the meeting to address some comments
15:10:35 <jpena> There's also a patch to puppet-openstack-integration (merged), and a wip patch for packstack
15:10:54 <dmsimard> EmilienM: the tripleo patches I sent failed CI
15:11:00 <dmsimard> I don't know if you can help with that
15:11:10 <EmilienM> dmsimard: I'll help for sure
15:11:13 <dmsimard> for libvirt cpu_mode
15:11:30 <mengxd> one question: if qemu-kvm 2.6.0 has issues with centos 7.3, then when can we expect a full fix ?
15:11:44 <leifmadsen> o/
15:11:49 <dmsimard> EmilienM: oh actually my patch succeeded, it's yours that didn't :p https://review.openstack.org/#/c/410358/
15:12:10 <dmsimard> mengxd: unknown at this time, there is a bugzilla to follow I guess: https://bugzilla.redhat.com/show_bug.cgi?id=1371617
15:12:27 <dmsimard> mengxd: they would like to make it so qemu ignores unknown flags or something like that, to prevent this from re-occurring
15:13:04 <mengxd> dmsimard: ok, thx for this info.
15:13:14 <dmsimard> mengxd: the gist of the issue is libvirt trying to pass cpu extensions that are unknown to qemu and when using qemu as the hypervisor it fails to spawn the VM
15:13:37 <dmsimard> but then again, I also heard similar stories when using host-model with kvm -- I haven't personally witnessed that one, though.
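For context on the cpu_mode issue discussed above: the puppet-nova fix essentially amounts to not forwarding a host CPU model when the libvirt virt_type is qemu. A minimal nova.conf sketch (illustrative values, not the exact patch under review):

    [libvirt]
    virt_type = qemu
    # Avoid passing host CPU flags that this qemu build does not recognize
    cpu_mode = none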
15:14:46 <jpena> ok, time for next topic
15:14:55 <jpena> #topic Ceph Hammer going EOL in the middle of Mitaka release: what to do
15:15:33 <dmsimard> EmilienM, gfidente ^
15:15:47 <dmsimard> It's going to be quite challenging, if at all possible, to update everything in Mitaka to use jewel instead
15:17:08 <dmsimard> This is due to hammer -> jewel upgrade being quite complex in itself, it's not just a yum update or anything -- just one example is that in one release Ceph runs under the root user and in the other it runs under the Ceph user -- there's also the switch from upstart/sysvinit to systemd
15:17:15 <trown> when exactly is Hammer EOL?
15:17:19 <dmsimard> trown: november 2016
15:17:29 <trown> umm... last month?
15:17:39 <dmsimard> http://docs.ceph.com/docs/master/releases/
15:17:42 <dmsimard> yeah, they're late
15:18:27 <dmsimard> we're probably going to hit the same issue when jewel goes EOL
15:18:35 <trown> what does EOL effectively mean... can we just keep it going for a couple months?
15:18:39 <dmsimard> although perhaps it won't be as big of a deal in terms of complexity
15:18:46 <trown> mitaka EOL is not that far off because of ocata short cycle
15:19:09 <dmsimard> trown: I actually asked the question in #openstack-dev yesterday but didn't get an answer
15:19:17 <dmsimard> typically releases have gone EOL ~1yr after release
15:19:30 <jpich> florianf: Do you know what is the status about the tripleo-ui-deps rpm? Now that the dependencies-update patch has merged upstream I'm a bit worried if we'll end up with a broken DLRN RPM again
15:19:51 <jpich> (I'm sorry I didn't realise there was a meeting on, I'll follow up after)
15:20:12 <dmsimard> trown: hm, says april 2017 https://releases.openstack.org/
15:20:35 <dmsimard> it was released in april 2016
15:20:53 <dmsimard> so the ocata short cycle will temporarily lead to a couple months more of rolling releases
15:20:56 <trown> oh weird... I assumed it would be after ocata release
15:20:56 <dmsimard> unless they change that
15:21:21 <dmsimard> back to ceph, though, like.. does hammer even work against 7.3 ? amoralej ?
15:21:50 <amoralej> yes
15:22:02 <amoralej> it can work, but it needs some tricks with the installed packages
15:22:20 <amoralej> the real problem is that we have two sets of ceph related libraries
15:22:31 <amoralej> in centos base and in storage-sig
15:22:34 <dmsimard> ktdreyer: what does EOL mean for a ceph release ? end of security/fixes ?
15:22:35 <amoralej> with same nvr
15:22:39 <amoralej> and not compatible
15:23:13 <amoralej> if we force to install everything from storage sig repo, it should work fine
15:23:28 <amoralej> i tested it with a minimal reproducer
15:23:37 <dmsimard> amoralej: so we need something like yum priority ?
15:23:39 <amoralej> not with full p-o-i, but i can
15:23:50 <amoralej> yes, that's the easiest solution
15:24:04 <amoralej> a bit fragile
15:24:12 <amoralej> but, from our side, there's not much to do
15:24:22 <amoralej> it has been reported to storage sig also
15:24:31 <dmsimard> reported? where ?
15:24:35 <amoralej> https://github.com/CentOS-Storage-SIG/centos-release-ceph-hammer/issues/2
15:25:05 <amoralej> at least they need to update repo rpm to set priorities
15:25:16 <dmsimard> ok, I pinged fcami in that issue
15:25:32 <amoralej> for p-o-i we can move repo configuration to yumrepo and add priorities
15:25:44 <amoralej> in puppet-ceph
15:26:08 <dmsimard> so in the end, we still don't know what we want to do I guess ? Keep hammer if we can ? Otherwise spend whatever time is required to move everything to jewel ? TripleO, oooq, p-o-i and all the involved CI jobs ?
15:26:40 <dmsimard> amoralej: downstream OSP doesn't have that issue because they're using RH Storage which does not go EOL I guess ?
15:26:40 <amoralej> according to a previous conversation with EmilienM, moving it to jewel is a no-go
15:27:12 <amoralej> yes, and libraries in base repo must work with RH storage (i guess)
15:27:33 <dmsimard> ok, let's wait to hear back from fcami before deciding on anything
15:27:44 <dmsimard> it's not officially EOL (yet)
15:28:25 <amoralej> i'll prepare the patch to puppet-ceph in the meanwhile so that we can test it
15:28:57 <dmsimard> why patch puppet-ceph ?
15:29:07 <dmsimard> wouldn't we just move puppet-ceph pin to master branch instead of stable/hammer ?
15:29:08 <amoralej> to enable priorities there
15:29:12 <dmsimard> oh
15:29:18 <amoralej> i mean, to keep it in hammer
15:29:23 <dmsimard> yeah
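A rough sketch of the yum priorities approach mentioned above, assuming the yum-plugin-priorities plugin is installed (the repo id and baseurl here are illustrative):

    # /etc/yum.repos.d/CentOS-Ceph-Hammer.repo (illustrative)
    [centos-ceph-hammer]
    name=CentOS-$releasever - Ceph Hammer
    baseurl=http://mirror.centos.org/centos/$releasever/storage/$basearch/ceph-hammer/
    gpgcheck=1
    enabled=1
    # Lower number wins; makes the Storage SIG packages beat the
    # same-NVR, incompatible libraries shipped in the base repo
    priority=1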
15:30:04 <chandankumar> amoralej: jpena apevec_ https://review.rdoproject.org/r/#/q/topic:tempest-plugin-entrypoint 5 patches still pending!
15:30:07 <dmsimard> jpena: I think we can move to next topic
15:30:15 <jpena> ok, let's move on
15:30:23 <jpena> #topic DLRN API patch, please review, this is important moving forward
15:30:42 <dmsimard> jpena: you want to give a tl;dr of the goal ?
15:30:42 <jpena> #link https://review.rdoproject.org/r/#/c/3838/
15:31:07 <chandankumar> \o/
15:31:14 <jjoyce> o/
15:31:19 <jpena> the main goal is to have a REST API associated with DLRN, initially to allow CI jobs to send their votes for each repo they test
15:31:58 <jpena> so for example, if we have 5 different CI jobs testing repo abc, 4 of them pass and 1 fails due to whatever, we can re-test the failing one against the same repo, and have a passing one
15:32:49 <jpena> after that, we can extend the API to provide additional facilities/information on the repos, e.g. the much-needed retry option without having to log on to the DLRN instance itself
15:34:37 <dmsimard> yeah basically I think we can eventually expose whatever CLI commands on the dynamic web interface through the API
15:34:51 <dmsimard> opens up quite a few opportunities
15:35:19 <dmsimard> so please review it if you have the chance -- if you're bad at python like me you can just ask questions or add comments/ideas
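To give an idea of the kind of call the DLRN API would enable, here is a hypothetical CI vote submission in Python; the endpoint, field names and credentials are assumptions based on the description above, not the final API:

    import requests

    # Hypothetical payload: report a CI job result against a specific DLRN repo
    vote = {
        "job_id": "weirdo-ovb-ha",                 # name of the CI job (example)
        "commit_hash": "abc123",                   # identifies the tested repo
        "distro_hash": "def456",
        "success": True,
        "url": "https://ci.example.org/job/1234",  # link to the job logs
    }

    response = requests.post("https://trunk.example.org/api/report_result",
                             json=vote, auth=("ci-user", "secret"))
    response.raise_for_status()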
15:38:05 <jpena> ok, let's move on
15:38:15 <jpena> #topic altarch enablement
15:38:36 <mengxd> yes, i want to get some update about ppc64le enablement status
15:38:51 <jpena> mengxd: I know number80 was working on this, but I'm not aware of the current details. He's on holidays this week
15:38:55 <mengxd> i noticed that number80 has submitted a few patches out
15:39:17 <mengxd> ok, then maybe we have to wait until he is back.
15:39:51 <mengxd> btw, there are two Power8 servers that will be shipped to the CentOS UK data center.
15:40:04 <dmsimard> mengxd: we've been hearing that for a long time :(
15:40:30 <mengxd> They are on their way to the UK now, i think they will arrive there by the end of this week.
15:40:40 <dmsimard> ah, well, there we go
15:42:14 <jpena> so that's it for the topic, I guess
15:42:24 <jpena> #topic chair for the next meeting
15:42:30 <jpena> any volunteer?
15:43:08 <mengxd> Do we still meet next week as it is approaching the holiday?
15:43:20 <dmsimard> Next week is the 21st, I think that's fine
15:43:30 <dmsimard> but the week after that should probably be skipped
15:43:38 <dmsimard> 28th
15:43:46 <jpena> yep, I'll still be around next week
15:44:02 <dmsimard> which reminds me I didn't request my PTO, thanks :P
15:44:38 <amoralej> i can chair
15:44:54 <jpena> and we have a winner! :)
15:44:57 <jpena> thx amoralej
15:45:08 <jpena> #action amoralej to chair next meeting
15:45:21 <jpena> #topic open floor
15:45:42 <dmsimard> That's all I've got: *explosion sounds*
15:45:48 <dmsimard> If you like explosions, this is a good week
15:46:29 <dmsimard> BTW rcip-dev cloud is still experiencing network-related issues
15:46:42 <dmsimard> They're still working on it
15:46:49 <jpena> I saw swift uploads failing, is it still being worked on?
15:47:21 <dmsimard> Latest status is that one of the controllers failed and they're having a hard time with pacemaker (lack of quorum) and getting the failed controller back online
15:47:24 <jpena> #info rcip-dev cloud having network-related issues, review.rdoproject.org can be affected
15:48:49 <jpena> if that's all, I'll close the meeting in 3
15:48:55 <jpena> 2
15:48:56 <jpena> 1
15:49:13 <dmsimard> rbowen: entirely unrelated but we should probably remove fedora from the topic
15:49:23 <dmsimard> "RDO: An OpenStack Distribution for CentOS/Fedora/RHEL"
15:50:46 <jpena> #endmeeting