ansible_windows_working_group
LOGS
20:00:00 <nitzmahone> #startmeeting Ansible Windows Working Group
20:00:00 <zodbot> Meeting started Tue May  3 20:00:00 2022 UTC.
20:00:00 <zodbot> This meeting is logged and archived in a public location.
20:00:00 <zodbot> The chair is nitzmahone. Information about MeetBot at https://fedoraproject.org/wiki/Zodbot#Meeting_Functions.
20:00:00 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
20:00:00 <zodbot> The meeting name has been set to 'ansible_windows_working_group'
20:00:04 <nitzmahone> bam
20:00:07 <briantist> hey-ooo
20:00:07 <nitzmahone> #chair jborean93
20:00:07 <zodbot> Current chairs: jborean93 nitzmahone
20:00:16 <jborean93> howdy
20:00:32 <nitzmahone> #info agenda https://github.com/ansible/community/issues/644
20:01:25 <nitzmahone> #topic Static docs builds on GHP for Windows collections : https://github.com/ansible/community/issues/644#issuecomment-1116309488
20:01:36 <nitzmahone> briantist around today?
20:01:39 <briantist> yup!
20:02:30 <nitzmahone> So long as the docs builds are reasonably stable (and/or version-pinned to ensure that) at first glance it seems like a good idea to me...
20:02:31 <briantist> I can speak to any of that of course but put it in the agenda so it was all together, can give a few minutes to digest if needed
20:03:32 <briantist> they are reasonably stable and fixed quickly if anything widespread breaks; in the worst case, workflows can be disabled (and re-enabled) from the Actions tab in the GitHub UI, no commit needed
20:04:01 <nitzmahone> How are branches/versions handled?
20:04:23 <jborean93> yea I'm all for removing another step and those docs in favour of something a bit more automatic if it's stable and usable
20:04:25 <nitzmahone> I see the branch name in the rendered output for one of the samples
20:04:56 <briantist> yup, as written it publishes `/branch/BRANCHNAME`, `/tag/TAGNAME`, and `/pr/PRNUMBER`
20:05:04 <briantist> the pr ones get deleted on PR close
20:05:48 <briantist> by limiting the triggers of the calling workflow (the one you put in the collection repo), you can limit which branches/tags, or not do them, etc.
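A minimal sketch of the idea, assuming a hypothetical reusable docs workflow (the `uses:` path and its input below are placeholders, not the real shared workflow); restricting the caller's `on:` triggers controls which refs get docs published:

```yaml
# Hypothetical caller workflow in the collection repo; only the listed
# branches/tags and open PRs trigger a docs build and publish.
name: Collection Docs
on:
  push:
    branches: [main, stable-1]   # published under /branch/<name>
    tags: ['*']                  # published under /tag/<name>
  pull_request:                  # published under /pr/<number>, removed on close
jobs:
  docs:
    # Placeholder path; the real reusable workflow lives elsewhere.
    uses: example-org/docs-build/.github/workflows/docs.yml@main
    with:
      publish-gh-pages: true     # hypothetical input
```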
20:06:34 <nitzmahone> Do we want to try it out on community.windows and if it's looking good do ansible.windows?
20:06:47 <briantist> I've been using this in some form for a long time, the main thing that's new is supporting GH pages as a publishing target.
20:07:38 <jborean93> I'm happy to try it out on c.w, hoping to push a new release for both soon
20:08:09 <nitzmahone> I can flip the GHP switch there right now if we want
20:08:38 <nitzmahone> I'm +1 for at least community.windows, and if it's going well, do ansible.windows not long after
20:08:49 <briantist> great! I can put up a PR today, I need someone with permissions to do the GHP part (I did post step by step instructions for that part)
20:10:02 <nitzmahone> OK, so it wants to own the `gh-pages` branch, and then just does subdirs under that for branch/tag/PR ?
20:10:22 <briantist> also re: the branch being part of it, I've been putting an `index.html` in the root of the site that redirects to `branch/main` (sample file is in the GHP instructions wiki)
20:10:38 <briantist> yes, the `gh-pages` branch is an orphan that holds only the static site content
20:11:43 <nitzmahone> Yeah, that was what I was wondering- static redirect works for now since most collections aren't supporting multiple active branches, but I suspect that'll change at some point, so figuring out a way to select versions would be Nice To Have ;)
20:12:19 <briantist> possibly, yeah. I might just stick a static link into the README for those maybe?
20:12:32 <briantist> `stable-1` README links to `branch/stable-1`, etc.?
20:13:19 <nitzmahone> That could work too- it's not really a problem right now, just something we've had to solve $too_many_times in other places, so just thinking ahead. It's nothing we need to worry about right now
20:13:28 <briantist> yup, certainly room for improvement
20:14:42 <nitzmahone> Unless jborean93 has any other concerns, I'll go ahead and push you an empty orphan `gh-pages` branch and light it up on c.w
20:14:57 <jborean93> I'm all good
20:15:21 <nitzmahone> #agreed nitzmahone will enable gh-pages for community.windows
20:15:38 <briantist> sgtm! if no index page, it'll need something committed to push, an empty `.nojekyll` file is a good candidate
20:18:57 <nitzmahone> OK, branch is there and GHP is pointed at it
20:19:24 <briantist> woohoo! I see the workflow ran as well
20:19:38 <briantist> (the GHP automatic one)
20:19:59 <jborean93> nice
20:20:26 <nitzmahone> Anything else you need from us to run with that for the moment?
20:20:33 <briantist> nope!
20:20:36 <nitzmahone> thanks!
20:20:38 <briantist> I'm actually pushing up the PR now
20:21:10 <briantist> so I'll link to that momentarily, in the meantime we can move on meeting-wise :)
20:21:13 <nitzmahone> sweet
20:21:46 <nitzmahone> I don't have anything burning- 2.13rc1 went up recently IIRC
20:22:18 <jborean93> yea I'm boring this week, just catching up on whatever I missed last week
20:22:33 <nitzmahone> If jborean93 doesn't have anything else, we can wrap up
20:22:47 <briantist> sounds good to me
20:22:52 <briantist> docs PR: https://github.com/ansible-collections/community.windows/pull/391
20:22:52 <jborean93> pyspnego was updated to include its own md4 hasher so NTLM still works in OpenSSL 3.x land, which is coming in new distros
20:23:19 <jborean93> pywinrm will probably have issues as it still uses an older library, need to think about how to solve that without too much friction
20:23:50 <jborean93> sigh I really got to fix that stupid Galaxy problem with c.w in CI
20:24:11 <briantist> what happened in OpenSSL 3.x? md4 was removed?
20:24:40 <jborean93> it's off by default and the way Python is compiled against it means it isn't accessible through hashlib anymore
20:25:11 <jborean93> unfortunately NTLM, being the annoying child it is, still uses md4 and will most likely never be updated
20:25:58 <briantist> oof, that's true
20:26:25 <jborean93> So far I know Ubuntu 22.04 and Fedora 36 (when released) ship with OpenSSL 3.x
20:26:51 <nitzmahone> ... and apparently even when it's not available, it still reports as available, just becomes a runtime error if you try to use it... That sure seems like a Python bug to me
20:27:04 <briantist> oh that's going to be fun.. my WSL is still 18.04 and I was just thinking about skipping 20.04 and going to 22.04
20:27:07 <jborean93> CredSSP is also starting to get harder out of the box for Server 2012 and 2012 R2. There are only a few cipher suites still allowed by default that those Windows versions support
20:27:35 <nitzmahone> Hey, adjusting the cipher suites is super easy with Powershell... oh wait... ;)
20:27:35 <briantist> I've been meaning to ask when Ansible will drop support for those
20:27:56 <jborean93> and potentially those common cipher suites are hardened enough that the auto-generated cert CredSSP uses isn't strong enough
20:28:12 <jborean93> https://github.com/jborean93/requests-credssp/issues/27 for that whole saga
20:28:13 <nitzmahone> We've thus far been supporting things until Microsoft drops them
20:28:27 <briantist> got it, so around Oct '23? https://endoflife.date/windowsserver
20:28:43 <jborean93> unfortunately yes
20:29:14 <briantist> I am kinda surprised 2012 and R2 end at the same time
20:29:22 <jborean93> 08 was the same
20:29:27 <briantist> sort of a 2.9 / 2.10 situation 😉
20:30:00 * nitzmahone plots napalm destruction for 2.9
20:30:19 <nitzmahone> It's only *mostly* dead :(
20:32:17 <briantist> jborean93: what's up with the c.w tests? you mentioned an issue with galaxy?
20:33:10 <jborean93> too many tests run at the same time, breaking the ansible-galaxy dependency install in some tests
20:33:30 <jborean93> https://dev.azure.com/ansible/community.windows/_build/results?buildId=41579&view=logs&jobId=a38e0e34-eddc-57dd-c1dd-ca2b9fd71fe3&j=a38e0e34-eddc-57dd-c1dd-ca2b9fd71fe3&t=7f437ab1-bba0-576d-460b-feb5306f804e
20:33:42 <briantist> ouch, I do kinda wish I had the problem of TOO MUCH concurrency 🤣
20:34:32 <jborean93> The best thing to do is probably to have a single pre-step that downloads the requirements once and then somehow installs them offline in the tests themselves
20:34:40 <jborean93> I just haven't gotten to it
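One possible shape for that pre-step, sketched as Azure Pipelines steps (the requirements path and directories are made-up examples); in practice the first step would run once up front and hand its directory to the test jobs, e.g. as a pipeline artifact:

```yaml
# Hypothetical sketch: fetch collection dependencies from Galaxy exactly once,
# then install from the local copy so the test jobs never talk to Galaxy.
steps:
  - script: ansible-galaxy collection download -r tests/requirements.yml -p downloaded/
    displayName: Download collection dependencies once
  - script: ansible-galaxy collection install downloaded/*.tar.gz -p "$(Pipeline.Workspace)/collections"
    displayName: Install dependencies from the local download
```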
20:35:11 <briantist> I don't know if AZP has it, but the caching support in GHA or something like it could do that. By using the run ID as part of the cache key it would prevent using stale caches between jobs
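For reference, that GHA pattern is roughly the following fragment (the path and key prefix are arbitrary examples):

```yaml
# Keying the cache on the run ID means jobs within one run share the entry,
# but a new run never restores a stale cache from an earlier run.
- uses: actions/cache@v3
  with:
    path: ~/.ansible/collections
    key: galaxy-deps-${{ github.run_id }}
```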
20:35:17 <nitzmahone> IIRC at one point we had an integration test that would actually bring down community galaxy
20:36:25 * nitzmahone has some battle scars from GHA cache - it works great except when it doesn't
20:36:56 <briantist> heh, yes, it's easy to cache-poison yourself with it
20:37:22 <briantist> the way they implemented it basically seems to have been done with nodeJS package lock files in mind
20:37:35 <briantist> and other uses can be harder to implement effectively and safely
20:37:40 <nitzmahone> Well, more the concurrency/flaky jobs problem - I haven't hit it in a while, but it used to have a thing where if a job crapped out or was canceled at just the right time, it'd lock the cache key out for 24-72h
20:38:12 <nitzmahone> So you had to go adjust anything else that would use the same key or the cache action would blow up on the way in
20:38:50 <briantist> ahh maybe that was a while ago, now it seems to know when another job has the key in use and doesn't try to stomp on the cache, but I haven't seen a cancelation break it (maybe I just got lucky)
20:38:52 <nitzmahone> (pyyaml CI tries to cache libyaml builds to speed things up)
20:39:38 <nitzmahone> It's probably been at least a year since I've seen it happen, but yeah, it was a huge PITA when it did
20:40:45 <nitzmahone> OTOH I love GHA's approach to artifacts over AZP's
20:40:57 <briantist> how does it differ?
20:41:56 <nitzmahone> AZP artifacts are per-job and "navigable", whereas GHA's are per-run and a blob zip, but you can transparently combine artifacts from the same run into a single archive without needing a post-run job to aggregate them
20:42:34 <nitzmahone> (the navigable part of AZP is nice, so if you have a directory artifact, you can access its contents directly through the API)
20:43:26 <nitzmahone> The transparent combination thing on GHA is super nice so long as you know your jobs aren't generating name collisions within the artifact archive
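For context, the pattern being described is just multiple jobs uploading to the same artifact name; with v3 of the upload action, same-named uploads from different jobs in one run land in a single combined archive (the artifact name and path here are arbitrary examples):

```yaml
# Each job in the run uploads into the same named artifact; the contents
# are merged, so per-job paths need to avoid filename collisions.
- uses: actions/upload-artifact@v3
  with:
    name: test-results
    path: results/
```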
20:43:39 <briantist> yeah that part is lacking in GHA; you can ONLY download a zip of the artifacts of a given name. And if you have a huge number, you hit 429s trying to upload them all, so you might archive them first, and then you have a double archive
20:44:49 <jborean93> AZP is also better for download URLs, I believe the GHA ones are blocked behind a sign-on whereas with AZP you can give the link to anyone to download
20:44:56 <nitzmahone> "a huge number" meaning multiple artifacts, or a single job artifact upload with lots of files in it? I've mostly done "lots of jobs each contributing to a single artifact", so the former's not been a problem for me
20:45:24 <briantist> mostly meaning many small files, since it does a separate request for each one
20:45:31 <briantist> it's documented in the action
20:45:47 <briantist> https://github.com/actions/upload-artifact#too-many-uploads-resulting-in-429-responses
20:45:55 <nitzmahone> Oh yeah, I never noticed that- you can see the artifacts anonymously, but you have to be logged in to download
20:46:45 <briantist> right, it makes cross-workflow artifact download a non-starter unless you're already using a PAT for some reason
20:47:51 <nitzmahone> Yeah, GHA definitely has some limitations that we've hit as well for scaling and cross-repo stuff. Dynamic matrices are also way too hard in GHA :(
20:48:14 <briantist> I've luckily not needed them 😬
20:48:39 <briantist> from what I understand you need to spin up a job to generate the JSON and reference it later to get something like that
20:49:18 <nitzmahone> Yeah, and even then it's very limited- I've tried it a couple times and always been disappointed
20:49:55 <nitzmahone> I just tried it for something a few weeks ago and ended up going a different way
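The mechanism briantist describes is roughly the following (the job names and matrix contents are made-up examples):

```yaml
# Hypothetical two-job layout: one job emits the matrix as JSON on its
# outputs, and the test job expands it with fromJSON().
jobs:
  generate_matrix:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.set.outputs.matrix }}
    steps:
      - id: set
        run: echo 'matrix={"windows":["2016","2019","2022"]}' >> "$GITHUB_OUTPUT"
  test:
    needs: generate_matrix
    runs-on: ubuntu-latest
    strategy:
      matrix: ${{ fromJSON(needs.generate_matrix.outputs.matrix) }}
    steps:
      - run: echo "Testing Windows ${{ matrix.windows }}"
```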
20:51:27 <nitzmahone> Welp, I'm gonna go find some lunch- thanks for running with the docs stuff!
20:51:31 <nitzmahone> Til next week...
20:51:35 <nitzmahone> #endmeeting