fedora-qadevel
LOGS
15:00:22 <tflink> #startmeeting fedora-qadevel
15:00:22 <zodbot> Meeting started Mon Nov 20 15:00:22 2017 UTC.  The chair is tflink. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:22 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
15:00:22 <zodbot> The meeting name has been set to 'fedora-qadevel'
15:00:29 <tflink> #topic Roll Call
15:00:40 <tflink> #chair kparal lbrabec
15:00:40 <zodbot> Current chairs: kparal lbrabec tflink
15:00:53 <tflink> #chair jskladan
15:00:53 <zodbot> Current chairs: jskladan kparal lbrabec tflink
15:01:04 * jskladan rushes in
15:01:40 * kparal is here
15:01:47 * kparal pokes lbrabec frantisekz
15:01:54 * lbrabec is here
15:02:00 * frantisekz is here
15:02:54 <jskladan> dem meetings
15:05:16 <tflink> cool, let's get started
15:05:25 <tflink> #topic Announcements and Information
15:05:35 <tflink> #info got libtaskotron POC using openstack working - tflink
15:05:36 <tflink> #info more work on using diskimage-builder to build images for Taskotron - tflink
15:05:36 <tflink> #info added 'pull_request' type to libtaskotron -- kparal
15:05:36 <tflink> #info built scratch libtaskotron and taskotron-trigger and deployed to dev in order to get github pull-request triggered tasks working -- kparal
15:05:36 <tflink> #info updated yumrepoinfo in libtaskotron and on servers to reflect F27 being stable -- kparal
15:06:04 <tflink> anything to add?
15:06:59 <tflink> comments/questions?
15:07:21 <kparal> how well does the openstack integration work?
15:07:57 <tflink> I think it works well. it certainly makes some bits easier
15:08:07 <kparal> also, have you modified the standard or the ansiblized libtaskotron?
15:08:13 <tflink> ansiblized
15:08:37 <kparal> ok, good. another branch to merge would get complicated
15:09:27 <kparal> will we have a separate topic regarding openstack, or is the time to delve into details now?
15:09:39 <tflink> I have it set as another topic
15:09:41 <jskladan> yaay, moar branches!
15:09:45 <kparal> ok
15:09:50 <kparal> no further questions from me then
15:10:25 <tflink> I was going to go into the staging topic first as that sets the stage for "why bother with openstack?"
15:11:08 <jskladan> tflink: well, openstack was always kind of cool, but IIRC the "who will take care of the openstack instance" was always the bummer
15:11:08 <kparal> go ahead
15:11:32 <tflink> jskladan: infra - they're moving to a more supported openstack instance as I understand it
15:11:33 <kparal> jskladan: we might have an answer for that soon :)
15:11:40 <kparal> even better
15:11:57 <tflink> yeah, I'm not nearly crazy enough to take that on :)
15:12:09 <jskladan> tflink: just checking :)
15:12:26 <tflink> it's a valid concern :)
15:12:34 <tflink> anyhow, moving on
15:12:41 <tflink> #topic What to do about staging?
15:13:22 <kparal> iirc we wanted to replace it with a fake resultsdb that people can test against
15:13:37 <tflink> Parts of this conversation have been going on for a while, but now there's a deadline: the ancient qa boxes are being turned off and recycled in early December
15:14:08 <tflink> Both beaker and openqa will lose some of their worker boxes
15:14:20 <kparal> do they *have to* be turned off?
15:14:21 <tflink> I'm not so worried about beaker right now since it's not running much
15:14:23 <tflink> yes
15:14:33 <kparal> why is that?
15:14:35 <tflink> they need to go off to the big server farm in the sky
15:14:46 <jskladan> kparal: warranty, I guess?
15:14:49 <tflink> they're ancient, out of warranty and not worth moving to the new racks
15:14:52 <kparal> euthanasia is not legal yet in here
15:15:09 <tflink> we have new systems to replace them, too
15:15:14 <kparal> so, what exactly will we lose?
15:15:19 <tflink> qa01-qa08
15:15:23 <kparal> and where do we get the replacement boxes from?
15:15:31 <tflink> we got them back in May
15:15:43 <tflink> but you're getting ahead of me
15:16:10 <kparal> hmm, I don't see qa01 in the ansible inventory. what am I doing wrong...
15:16:15 <tflink> the concern is the loss of throughput for openqa
15:16:28 <tflink> kparal: qa01 and qa03 are kinda special cases
15:17:12 <tflink> earlier this year, we got a bladecenter to replace the old boxes that we're losing
15:17:40 <tflink> while we could just use that for openqa workers, I'm thinking it would be a good time to look into moving to the cloud
15:17:59 <tflink> that bladecenter is identical to the machines which will make up the new openstack instance
15:18:33 <tflink> my proposal is to start moving taskotron workers into the cloud, free up some client hosts and use those as openqa workers
15:18:44 <kparal> so some of these boxes are used for beaker, some for openqa. how is this related to stg?
15:18:56 <kparal> (I assumed stg means taskotron.stg)
15:19:11 <tflink> I probably should have said dev/stg
15:19:59 <tflink> if we decide that we want to move to the cloud, that can't really start en masse until after all the reracking is done in December
15:20:12 <tflink> which leaves us with a period of time with less openqa throughput
15:20:32 <tflink> stg comes into this with what kparal was talking about earlier
15:20:46 <kparal> this might need to be discussed with adamw. we still have F27 modular server to test in openqa for a month or so
15:20:54 <jskladan> tflink: is that such a big deal at the moment, though (the slower openqa testing)?
15:20:56 <tflink> I've talked to him about this
15:21:12 <tflink> jskladan: it's not fatal but it's not ideal
15:21:43 <tflink> about a month ago, kparal and I were talking to infra folk about putting our stg instance on the prod networks
15:22:04 * tflink apologizes for the winding-ness of this topic, hopes it'll be clearer in a moment
15:22:40 <tflink> what we came up with was to get rid of stg as we currently know it and just have a resultsdb instance since that's what others interface with
15:23:06 <tflink> possibly some scripts to put some results into that stg instance for the purposes of testing
15:23:26 <tflink> if we did that, we'd be freeing up one client host which could be used for openqa
15:24:28 <tflink> we'd be losing one of our testing grounds for taskotron until we started going all cloudy but I think that's a reasonable compromise
15:24:36 <tflink> is this making sense so far?
15:24:46 <jskladan> tflink: yup
15:24:49 <kparal> yes
15:25:38 <tflink> any objections to this plan, assuming that we do end up "embracing the cloud"?
15:25:59 <jskladan> none at the moment
15:26:12 <kparal> do you intend to set up the fake resultsdb infra in stg soon?
15:26:18 <tflink> yeah
15:26:28 <kparal> I'm not sure how trivial or not trivial it's gonna be
15:26:34 <tflink> it's just a resultsdb instance
15:26:41 <jskladan> kparal: nothing hard about it
15:27:00 * tflink was going to rebuild stg w/o master or other taskotron components
15:27:13 <tflink> the current dev becomes our stg/testing instance
15:27:14 <kparal> with a trigger attached that intercepts all new koji builds and bodhi updates etc and submits some random results
15:27:36 <tflink> I was thinking that could be manual
15:27:38 <kparal> it doesn't have to be difficult, right. it will just take some effort to keep it in sync with our production setup
15:27:40 <tflink> thinking/hoping
15:27:43 <kparal> hmm
15:27:51 <tflink> yeah, at least a plan of what will happen with it
15:27:55 <jskladan> kparal: would it not be better to intercept resultsdb fedmsgs, and just store the results added to prod?
15:28:23 <kparal> jskladan: we need to intercept stg fedmsgs
15:28:29 <jskladan> (if the goal is having same-ish data)
15:28:44 <kparal> the goal is to have any data (pass/fail doesn't have to match)
15:28:50 <jskladan> kparal: me no understand - why exactly? if it is just an instance with random data
15:28:57 <jskladan> what is the difference?
15:29:03 <kparal> e.g. pingou wants to see some results in stg bodhi as soon as a stg koji build is done
15:29:30 <tflink> I suspect that this would be a good conversation to have with folk who interface with resultsdb (greenwave, bodhi, etc.)
15:29:48 <kparal> manual submission is fine, as long as we abstract it enough - the people won't know the right fields to submit to resultsdb
15:30:01 <kparal> but at that point we might make it even automated
15:30:22 <jskladan> kparal: depends on what the goal is...
15:30:46 <tflink> yeah, I can see that making sense but I'm not sure I fully understand what the folks interfacing with resultsdb would need/want
15:30:46 <kparal> that's what I understood from our past conversation with pingou
15:31:01 <kparal> ok, anyway, ack in general
15:31:10 <jskladan> in general, yes
15:32:02 <tflink> #info no immediate objections to the idea of dismantling most of taskotron stg to free up a worker for openqa
15:32:06 <tflink> does that sound OK?
15:32:44 <kparal> ack
15:32:46 <jskladan> in practice, it really depends on the use cases, and why we are doing that. Especially as in _us_ directly - pingou could easily set up his own resultsdb instance for the staging bodhi, and fill it with "random" data directly
15:32:52 <jskladan> tflink: ack
15:33:19 <kparal> jskladan: there are interactions between multiple systems, like bodhi+greenwave
15:33:26 <kparal> sure, he can set up a custom resultsdb
15:33:30 <tflink> jskladan: yeah, it'll require some more discussions with the other folk involved
15:33:40 <kparal> but that's exactly why they want us to set up a proper stg service
15:33:55 <jskladan> kparal: sure, I'm just debating, whether our limited pool of HW should be used for "somebody else", and why
15:34:00 <kparal> so that they don't need to do it for each of their services. and that the workflows are tested in full
15:34:15 <tflink> jskladan: one resultsdb instance isn't a big deal in my mind
15:34:26 <jskladan> tflink: it is a precedent, though :)
15:34:26 <kparal> it's in our best interest to make sure bodhi and greenwave work
15:34:31 <tflink> and we can move it to other infra machines if it ends up being an issue
15:34:53 <kparal> also, the traffic in stg is extremely low
15:34:54 <jskladan> ok s/our HW/our resources/ to be more accurate
15:34:54 <tflink> if I'm remembering the conversation correctly, that was offered
15:34:55 <jskladan> but whatever
15:35:16 <jskladan> let's move on, we can tackle this once it is on the table
15:35:21 <tflink> sounds good
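(As an aside, a minimal sketch of what manually submitting a fake result to the stg ResultsDB could look like, assuming it exposes the usual v2.0 REST API with direct POST access; the hostname, testcase and item below are placeholders, not real values:

    curl -X POST https://resultsdb.stg.example.org/api/v2.0/results \
        -H 'Content-Type: application/json' \
        -d '{"testcase": "dist.depcheck",
             "outcome": "PASSED",
             "data": {"item": "foo-1.0-1.fc27", "type": "koji_build"},
             "note": "fake result for stg integration testing"}'

A small wrapper around a call like this would be enough to hide the ResultsDB field names from people who just want "some results" to show up in stg bodhi/greenwave.)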
15:35:45 <tflink> #topic Moving to the cloud?
15:37:11 <tflink> I think that we've covered a lot of this in other topics already but for the sake of being complete, I've been working on a libtaskotron POC which uses openstack instances in place of local VMs
15:37:23 <jskladan> tflink: love it!
15:37:37 <kparal> what are the drawbacks/pitfalls/not implemented parts?
15:37:48 <kparal> in this otherwise awesome project?
15:38:01 <tflink> I've not gotten into image discovery/choosing
15:38:10 <tflink> the code has hardcoded image names right now
15:38:22 <kparal> do you expect this to be a problem?
15:38:45 <tflink> no more than what we currently do, no
15:38:49 <kparal> how exactly do VM images work? you upload them into openstack with some ID and then specify this ID when requesting a new VM instance?
15:39:03 <tflink> pretty much, yeah
15:39:17 <kparal> sounds like it could fit our workflow
15:39:30 <tflink> the idea in the back of my head is to pretty much use what we have been using WRT naming convention
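(For illustration, that upload-then-boot workflow with the stock openstack CLI, assuming credentials are already sourced; the image, flavor, network and key names are placeholders:

    # register a prepared qcow2 image with the cloud and give it a predictable name
    openstack image create --disk-format qcow2 --container-format bare \
        --file taskotron-fedora-26-20171120.qcow2 taskotron-fedora-26-20171120

    # request a new VM instance from that image
    openstack server create --image taskotron-fedora-26-20171120 \
        --flavor m1.small --network taskotron-net --key-name taskotron \
        taskotron-client-disposable
)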
15:39:30 <jskladan> kparal: tflink: and now you broke the spell - I hoped we'd finally get out of the image-building business
15:39:33 * jskladan is sad panda
15:39:51 <tflink> jskladan: that's another thing I've been working on, actually
15:40:03 <kparal> jskladan: do you think openstack creates the VMs out of thin air? :)
15:40:22 <jskladan> kparal: it should solve all the problems - it's cloud!
15:40:31 <kparal> I forgot
15:40:33 <tflink> but long story short, I don't think we're going to be completely out of the image creation business anytime soon
15:40:49 <jskladan> kparal: honestly, I'd just hope the infra guys would finally be responsible for that :)
15:41:21 <tflink> the other complications that I suspect we'd hit would come with implementation
15:41:47 <tflink> since the openstack instance is more or less isolated from the rest of infra, we'd need to think about how the buildmaster/buildslave communication is set up
15:42:12 <kparal> btw, openstack requires cloud-init to be installed, right?
15:42:30 <jskladan> tflink: I have a mad idea - we could have a machine in cloud, that would create the base images for said cloud!
15:42:43 <tflink> my idea there is to move the buildslaves to a persistent cloud instance which ends up requesting more openstack instances and communicates with those instances in a private ip space
15:43:04 <tflink> jskladan: not sure if you're serious or not but that's kind of what I had in mind :)
15:43:12 * jskladan was serious
15:43:25 <tflink> but we'll get to that part in a minute
15:43:52 <kparal> tflink: will the VMs have access to all koji/bodhi/etc services? or in what sense is it isolated?
15:44:02 <tflink> the other pitfall that I can think of is that we'd be losing some of our "run it on your machine and it's as close to production as we can make it"
15:44:23 <tflink> kparal: we'd be accessing them using public hostnames as if we weren't in infra
15:44:41 <jskladan> tflink: well, that is a compromise I'm more than willing to do (that run on your machine stuff)
15:44:45 <kparal> that's much better than dealing with private hostnames honestly
15:45:12 <tflink> the cloud is in PHX2 but the networking is routed such that it's effectively in another networking space
15:45:37 <tflink> I figure that we'd probably end up keeping testcloud for local running and testing
15:46:03 <tflink> so the local use case would continue to be testcloud but the production use case would start using openstack
15:46:07 <kparal> if we can boot those VMs with testcloud, I think the promise is still kept the same way we spawn VMs now
15:46:20 <tflink> yeah, it's close but not quite the same thing
15:46:32 <kparal> I don't have a problem with that
15:46:33 <tflink> that being said, I'm not sure many people actually use the local execution bits :)
15:46:59 <tflink> other than the question about images, any other questions/concerns?
15:47:00 <kparal> but regarding networking, what about our heavy access to koji pkgs. it's still on local super fast network, right?
15:47:09 <tflink> it should be, yes
15:47:17 <jskladan> kparal: tflink: that S-word, though :)
15:47:25 <tflink> oh, another alternative would be to use a vpn
15:47:32 <kparal> no Swear words in here!
15:47:46 <tflink> jskladan: true but I think some of that is unavoidable at the moment
15:48:21 <tflink> from what I understand, newer openstack versions have a kind of vpn-as-a service that we could utilize
15:48:37 <jskladan> tflink: I'm just cheerfully poking, I absolutely know this is an S-word kind of territory
15:48:39 <tflink> but that's another of those "cross that bridge when we get there" kind of things
15:48:44 <tflink> no worries
15:48:45 <jskladan> tflink: agreed
15:49:09 <kparal> ok, so what's blocking us except for image selection?
15:49:15 <tflink> the way I see it, the problem could be solved by either a VPN or putting more of our stuff in the cloud
15:49:30 <tflink> the current cloud is not really production ready
15:49:31 * kparal brb
15:49:52 <tflink> there are plans in motion to build a new cloud that will be production ready but that won't be ready until mid-december at the earliest
15:50:26 * kparal is back
15:50:50 <kparal> ok
15:50:57 <tflink> the remaining bits would be image creation, image selection, sorting out openstack user/tenant/quota bits and a lot of testing
15:51:06 <kparal> at least we have time to iron out the details
15:51:19 <tflink> we should be able to use the current cloud for development
15:51:38 <tflink> more s-words but the python API shouldn't change significantly between openstack releases
15:51:39 <kparal> great
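(The user/tenant/quota sorting mentioned above would mostly be openstack CLI plumbing; a hedged example with a made-up project name and limits:

    openstack quota show taskotron
    openstack quota set --instances 20 --cores 40 --ram 81920 taskotron
)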
15:52:25 <tflink> wow, I didn't think it was this late already
15:52:50 <tflink> do we have a qa meeting today?
15:52:55 * tflink doesn't remember seeing an announcement
15:53:16 <tflink> ah, canceled
15:53:40 <tflink> are folks OK with going past the top of the hour?
15:53:46 <kparal> fine with me
15:54:03 <jskladan> yup
15:54:18 <tflink> cool
15:54:38 <tflink> any objections to what we've talked about so far?
15:54:48 <tflink> "building" images will be the next topic
15:55:22 <kparal> no objections. sudo make it happen
15:55:39 <jskladan> no objections
15:55:41 <tflink> sudo -u kparal make it happen
15:55:48 <tflink> rather
15:55:53 <tflink> sudo -u kparal do all the work
15:55:58 <tflink> :-D
15:56:00 <kparal> damn, I forgot to cast a protection from evil spell first
15:56:40 <tflink> #info no immediate objections to the idea of moving taskotron to use openstack instances instead of local VMs on client hosts for our deployments
15:57:02 <tflink> sound OK?
15:57:28 <jskladan> ack
15:57:46 <kparal> ack
15:58:07 <tflink> #topic image "building" monkey business
15:58:40 <tflink> the base of where I'd like to go with this is diskimage-builder
15:58:42 <tflink> https://docs.openstack.org/diskimage-builder/latest/
15:59:18 <tflink> the reason I'm proposing this instead of doing things the way that openqa does them is mostly because diskimage-builder is designed to customize and update released images
15:59:34 <tflink> so with one command, I can build an updated F26 image that has libtaskotron installed
15:59:57 <tflink> "DIB_RELEASE=26 disk-image-create -o taskotron-fedora-26-20171114 -p libtaskotron-fedora fedora vm"
16:00:06 <jskladan> tflink: can you run any command, or just something from a pre-defined set of commands?
16:00:20 <tflink> jskladan: not sure what you mean by command
16:00:21 <jskladan> asking mostly because of the dnf cache voodoo we do
16:00:23 <kparal> that rebuilds the image from scratch or modifies the existing image by mounting it and installing the package?
16:00:45 <tflink> it modifies the existing base image by mounting it, updating it and installing packages
16:00:46 <kparal> jskladan: we no longer do dnf cache voodoo, I believe
16:01:19 <kparal> I guess it uses libguestfs to do that
16:01:34 <kparal> so no arbitrary command, but some selected useful things
16:01:39 <tflink> I'm still trying to understand the mechanism through which you can change individual files
16:01:58 <tflink> it's there but I'm having trouble understanding exactly how it's invoked
16:02:39 <kparal> the first time you create the image, does it use anaconda or just install into a chroot?
16:02:47 <tflink> and if all else fails, we can write elements to do what we need
16:03:03 <tflink> it starts with the released image
16:03:50 <kparal> ok, then it doesn't really answer the question, but we might not care. we simply always update the released one
16:04:03 <tflink> I haven't tried it yet but we should be able to do rawhide with no real customization by downloading the image first and pointing diskimage-builder at the downloaded image instead of relying on known urls for released images
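(A hedged sketch of that, assuming the fedora element honors DIB_LOCAL_IMAGE for a pre-downloaded base image; the file name and release value are illustrative:

    # use a locally downloaded Rawhide cloud image instead of a released one
    DIB_LOCAL_IMAGE=Fedora-Cloud-Base-Rawhide-20171120.n.0.x86_64.qcow2 \
    DIB_RELEASE=rawhide \
    disk-image-create -o taskotron-fedora-rawhide-20171120 -p libtaskotron-fedora fedora vm
)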
16:04:09 <kparal> anyway, sounds interesting, and could make our image building process a lot more reliable
16:04:33 <tflink> kparal: I think it does answer the question. anaconda is not involved because it starts from a released base cloud image
16:04:37 <jskladan> kparal: well, since we would not be building them :)
16:04:41 <tflink> so there's no anaconda invoked
16:05:07 <kparal> tflink: oh, so it can't create an image from scratch, and always needs one as a base. ok, that answers the question, yes
16:05:26 <kparal> good enough for us, I think
16:05:37 <tflink> yeah, that's one of the things that appeals to me about this method - we use the images that infra produces instead of trying to replicate their build system
16:05:40 <jskladan> tflink: that is as close to "we are not building the images" as we could get, IMO
16:05:43 <kparal> unless we want to change the filesystem, or something
16:05:55 <jskladan> that was going to be my question
16:05:58 <tflink> the downside is that the diskimage-builder package in fedora is out of date
16:05:59 <kparal> hm, what about partitions sizes?
16:06:14 <tflink> kparal: that's something that can be customized
16:06:21 <kparal> ok
16:06:40 <tflink> https://docs.openstack.org/diskimage-builder/latest/user_guide/building_an_image.html#disk-image-layout
16:06:59 <tflink> I know that there was talk of updating diskimage-builder but I don't think that has happened yet
16:07:27 <tflink> I've been doing testing with diskimage-builder from pip
16:08:16 <tflink> we'd still need to have some code/automation around building new images, uploading them to openstack and purging old images
16:08:27 <tflink> but I don't see that as a deal breaker
16:08:33 <jskladan> tflink: neither do I
16:08:47 <tflink> as it would be a huge improvement over the current setup
16:09:37 <kparal> yep
16:09:40 <jskladan> it depends on the APIs, but the basic housekeeping (the algorithm) would IMO be done the same way we do it now
16:10:08 <jskladan> not that much of the code could be reused, but at least we have a known system in place :)
16:10:39 <tflink> I would expect that the openstack apis for image management are stable and relatively well documented
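(A rough sketch of the pruning half of that housekeeping, reusing the stock openstack CLI; the naming convention and "keep the newest two" policy are placeholders, not a decision:

    # delete all but the two most recent taskotron F26 images
    openstack image list -f value -c Name | grep '^taskotron-fedora-26-' | sort \
        | head -n -2 | xargs -r -n1 openstack image delete
)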
16:11:06 <tflink> honestly, I'd like to see that move to a task that's run in response to 'repo push complete' fedmsgs
16:11:56 <tflink> or compose complete in the case of branched or rawhide
16:12:34 <tflink> to circle back, the big TODOs in this that I see are:
16:13:05 <tflink> 1. make sure that the "specify a downloaded image" bit works the way I think it will WRT not-yet-released images (rawhide, branched)
16:13:38 <tflink> 2. figure out which, if any, image customization we still need to figure out
16:14:18 <tflink> 3. get the diskimage-builder package updated and figure out who's going to own that going forward.
16:14:19 <kparal> can we use openstack with our current image-factory built images? or is this a requirement for openstack?
16:14:45 <tflink> we could use the current images that we build but they'd still need to be uploaded to openstack
16:15:01 <kparal> sure. ok
16:15:05 <tflink> the code we have for syncing images and pruning old images would likely not work
16:15:23 <tflink> unless jskladan wants to delve into the bits of imagefactory that upload stuff to openstack :)
16:15:30 <jskladan> tflink: kparal: and possibly changed, no? Since we disable the cloud-init
16:15:42 <tflink> we do?
16:15:57 <kparal> we configure it to not connect to online servers
16:16:01 <tflink> ah
16:16:08 <jskladan> tflink: we did, IMO, make it "all but disabled"
16:16:09 <kparal> but that might be done by testcloud, not us
16:16:14 <jskladan> since it was causing all kinds of trouble
16:16:16 <tflink> I don't think that would be needed as much
16:16:30 <jskladan> kparal: nope, it is in the KS
16:16:31 <tflink> since cloud-init would be partially relying on openstack
16:16:56 <tflink> actually, I think that would have to go away
16:20:46 <jskladan> but that's a small change, just remember that Testcloud was naughty when images were trying to connect to the external server
16:20:46 <tflink> yeah, cloud-init is loved by all
16:20:46 <tflink> :)
16:20:46 <jskladan> (also, testcloud should finally die)
16:20:46 <tflink> jskladan: why's that?
16:20:46 <kparal> 💗
16:20:47 <kparal> (that was for cloud-init)
16:20:47 <jskladan> I just have an animosity towards it :) always seemed to cause more trouble than anything else for me
16:20:47 <tflink> jskladan: you're welcome to start using cloud images locally without it
16:20:47 <tflink> but I'm pretty sure we were talking about keeping it around to facilitate VM spawning in the local case for libtaskotron
16:20:48 <jskladan> tflink: well, that's the thing - the causality was IIRC "we are using cloud images, because we already have testcloud", not "we want to use cloud images for local vm execution, if only we had something like testcloud" :)
16:21:16 <jskladan> whatevs, as long as at least one of Imagefactory/Testcloud dies in fire, I'm celebrating :)
16:21:30 <tflink> I thought it was more "we want to use the cloud base images so we're going to use testcloud because that makes the most sense"
16:22:26 <kparal> if cloud-init was not required for openstack images, we might find easier ways to run the images
16:22:27 <tflink> as I'm not sure there are any alternatives to the cloud images unless we want to build our own stuff
16:23:07 <kparal> or we can finally patch up testcloud to not require initial user configuration, which is the most hassle here
16:23:09 <tflink> or use vagrant or something like that
16:23:24 <tflink> kparal: I'm not sure how that would work
16:23:36 <tflink> there is no user present in the base cloud image
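(For context, creating that user is the kind of thing cloud-init user-data handles on openstack and what would have to be replicated locally; a minimal #cloud-config sketch with a made-up user and key:

    #cloud-config
    users:
      - name: taskotron
        sudo: ALL=(ALL) NOPASSWD:ALL
        ssh_authorized_keys:
          - ssh-rsa AAAA...placeholder... taskotron@example
)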
16:24:16 <jskladan> tflink: I don't really care that much, if testcloud stays for users outside our deployment, but we don't use it, I consider it a win
16:24:29 <tflink> we wouldn't be using it in production, no
16:24:35 <kparal> let's leave this discussion for another time
16:24:47 <tflink> but we would still need to be testing it some to make sure it continues to work
16:25:02 <tflink> but I suspect that would happen as a consequence of development
16:25:39 <jskladan> tflink: yeah, but if it stops, it does not need to bother us as much as if we were dependent on it for our deployment
16:25:49 <tflink> true
16:25:59 <tflink> if there are small lags in boot time, it doesn't matter so much
16:26:09 <jskladan> and there are easy workarounds for the "local" user
16:26:16 <tflink> any other questions/comments/concerns here?
16:26:36 <jskladan> nope, loving the "lets get rid of imagefactory/testcloud" in production movement :)
16:27:34 <tflink> #info no immediate objections to exploring switching over to diskimage-builder to create images
16:27:38 <tflink> sound OK?
16:27:48 <kparal> 👍
16:28:01 <jskladan> ack
16:28:10 <tflink> cool
16:28:22 <tflink> yikes, already 1.5 hours
16:28:41 <tflink> #topic initial plan of action
16:29:16 <tflink> who all has time to work on the various parts of this?
16:29:30 <tflink> time/interest
16:29:32 <jskladan> I do
16:29:52 <kparal> what are our plans regarding ansiblize?
16:29:55 <jskladan> but, I'd also like to see the ansiblize branch deployed for testing :) so... I might have a clash of interests
16:29:58 <tflink> I see dismantling stg as being one of the more urgent bits
16:30:04 <tflink> fair enough
16:30:32 <kparal> btw, we might get some resources back if we finally kill phab
16:30:47 <tflink> does it make sense to get ansiblize deployed to dev at least before moving on the other parts of this?
16:31:06 <tflink> kparal: I'm not sure where you're getting that degree of resource scarcity from
16:31:13 <tflink> not that I don't need to kill off phab
16:31:24 <kparal> what is blocking deploying ansiblize?
16:31:32 <kparal> upgrading to F26+?
16:31:35 <jskladan> tflink: would make sense to me (deploying ansiblize to dev first)
16:31:53 <tflink> do we have any tasks left to port?
16:32:10 <kparal> sure, but that doesn't matter. we need some POC running
16:32:10 <tflink> but yeah, redeployment is one of those things
16:32:22 <tflink> oh, I was thinking of production
16:32:34 <kparal> I was talking about dev
16:32:45 <kparal> I was under the impression that the ball was in your court regarding ansiblize dev :)
16:33:00 <tflink> I'm not aware of anything that's really blocking deployment on dev other than the selinux-fixing home dir moving thing that jskladan was/is working on
16:33:29 <jskladan> tflink: for the lack of a better place for it, I sent a review request via email
16:33:30 <kparal> I'll be happy to continue polishing ansiblize and porting tests, but I'd really appreciate seeing something being run in dev
16:33:38 <tflink> jskladan: did I miss it, then?
16:33:51 <tflink> kparal: yeah, agreed that needs to happen
16:34:01 <jskladan> it was just today, so you might have (had it done since wednesday, but.. life)
16:34:55 <jskladan> https://lists.fedoraproject.org/archives/list/qa-devel@lists.fedoraproject.org/thread/ADQQOZ5XZJN7I3HBHAYXONVTQMZFWROK/
16:34:59 <tflink> oh
16:35:07 * tflink hasn't gotten to checking lists yet today
16:35:31 <kparal> so, for me, I'd really like to see ansiblize deployed to dev, before tackling stg
16:35:38 <jskladan> kparal: +1
16:35:58 <kparal> but note that dev is on F25 atm
16:36:00 <tflink> so long as we get both done before openqa loses its old workers, fine by me
16:36:16 <tflink> kparal: I got stuck on the client host and never got to the master or resultsdb
16:36:22 <tflink> the client-host is F26
16:36:52 <kparal> frantisekz was mentioning he would be interested in helping out with the upgrade/deployment stuff
16:37:06 <tflink> I'm certainly not going to turn down help
16:37:23 <kparal> he's going through the RH ansible course to learn ansible
16:37:38 <kparal> so, just a note
16:38:00 <kparal> he's not in sysadmin group yet, though
16:38:07 <tflink> that's pretty easy to fix
16:39:07 <tflink> it seems reasonable to me to shoot for dev redeployment this week
16:39:18 <tflink> well, it would if it wasn't a holiday week here
16:39:43 * jskladan is off on Friday, available on Thu, if needed
16:40:07 <tflink> thu and fri are US holidays
16:40:23 <tflink> and I expect a bunch of folk to be on PTO this week
16:40:49 <jskladan> how about next monday, then?
16:41:03 <tflink> as a backup, sure
16:41:13 <tflink> I still think that this week is possible if nothing goes wrong
16:41:21 <tflink> so ... next week :)
16:41:36 <jskladan> depends on you
16:41:46 <jskladan> I just won't be available over night on wed
16:42:00 <jskladan> frantisekz: might be free to help, though
16:42:28 <kparal> I have PTO on Wed and Fri
16:42:31 <frantisekz> yep, you can count on me throughout the entire week :)
16:42:46 <tflink> would tomorrow work for doing the deployment?
16:43:01 <jskladan> frantisekz: ^^
16:43:03 <tflink> after the team meeting
16:43:07 <frantisekz> yeah, sure
16:43:12 <kparal> he'll probably need to set up the infra tokens and everything
16:43:13 <tflink> assuming that doesn't get canceled
16:43:53 <tflink> frantisekz: do you have a few minutes after this meeting to go over how to get signed up?
16:44:22 <frantisekz> tflink: few minutes should be fine
16:44:25 <tflink> so, summarizing in the interest of not going past 2 hours:
16:45:10 <tflink> the two immediate priorities are getting dev ansible-ized and dismantling stg
16:45:38 <jskladan> tflink: ack
16:45:51 <tflink> ansible-izing mostly needs a redeployment of dev and will hopefully be done this week, or early next week at the latest if things go wrong
16:46:43 <tflink> dismantling stg is mostly needing a talk with folks who interface with resultsdb to figure out what, if any, fake data will need to be fed into the stg instance
16:47:38 <tflink> jskladan: are you wanting/planning to help with the redeployment?
16:48:05 <jskladan> not really wanting, but I could be present, if help is needed
16:48:25 <jskladan> but from what I remember, it is mostly waiting :)
16:48:48 <tflink> jskladan: what about finding the resultsdb-using folk and starting to figure out what will be needed there
16:48:54 <tflink> dev is less waiting, usually
16:49:14 <tflink> more poking at stuff and figuring out why something doesn't work anymore :)
16:50:06 <jskladan> I could be there, then, no problem
16:50:44 <tflink> jskladan: does that mean you're planning to help with the deployment or talking with the folks who use resultsdb?
16:51:14 <jskladan> ad resultsdb-wanting folk - do we have a list? If not, I could ping pingou to find out his side, and hopefully get the list of "others" from him
16:51:34 <tflink> pingou and threebean would be good to start with
16:51:39 <jskladan> tflink: I can stay at work, and help with the redeployment
16:51:42 <jskladan> tomorrow
16:52:07 <tflink> ok. hopefully it will be relatively smooth :)
16:52:59 <tflink> are there remaining questions about what to work on for the next week?
16:53:55 <kparal> none here
16:53:58 <tflink> I expect things will be more organized-looking by next week if dev is redeployed and working, so testing and polish work on tasks can keep moving
16:55:06 <tflink> since we're getting close to the 2 hour mark ...
16:55:10 <tflink> #topic Open Floor
16:55:18 <tflink> assuming there were no questions left about the previous topic
16:55:31 <kparal> nothing from me
16:55:32 <tflink> any additional topics that folks wanted to bring up?
16:55:55 * jskladan has nothing
16:56:17 <tflink> ok. thanks for coming and putting up with the long meeting today
16:56:23 * tflink will send out minutes shortly
16:56:29 <tflink> #endmeeting