ansible_windows_working_group
LOGS
20:00:02 <nitzmahone> #startmeeting Ansible Windows Working Group
20:00:02 <zodbot> Meeting started Tue May 24 20:00:02 2022 UTC.
20:00:02 <zodbot> This meeting is logged and archived in a public location.
20:00:02 <zodbot> The chair is nitzmahone. Information about MeetBot at https://fedoraproject.org/wiki/Zodbot#Meeting_Functions.
20:00:02 <zodbot> Useful Commands: #action #agreed #halp #info #idea #link #topic.
20:00:02 <zodbot> The meeting name has been set to 'ansible_windows_working_group'
20:00:06 <nitzmahone> boo
20:00:09 <nitzmahone> #chair jborean93
20:00:09 <zodbot> Current chairs: jborean93 nitzmahone
20:00:15 <jborean93> that's unlike you nitz
20:00:25 <nitzmahone> I got squirreled
20:00:49 <briantist> hey
20:00:59 <nitzmahone> #info agenda https://github.com/ansible/community/issues/644
20:01:15 <nitzmahone> Nothing new there, though I did get around to poking at the docs build stuff
20:01:35 <briantist> oh nice, anything to share?
20:02:46 <nitzmahone> While I agree you've got it locked down as well as it can be, and it does look pretty safe in its current incarnation, I think the general consensus among us is that it still should probably not run on PRs, at least for Red Hat supported repos. I haven't invoked our prodsec folks, but I'm betting they'd agree.
20:03:42 <briantist> ok, it sounds like abundance of caution more than anything specific?
20:03:57 <nitzmahone> There are just enough moving parts that a write token to the target repo just shouldn't ever exist in a workflow IMO- if it were publishing those to a separate repo, I'd be all for it
20:04:57 <nitzmahone> Yeah, pretty much- your implementation looks very solid and keeps the write token limited to a single job where no user content is run, so I'd have no objections if you want to do that on non-RedHat Supported collections
20:05:43 <nitzmahone> (eg any of the community.* collections, just not the ansible.* and other supported stuff)
20:06:01 <briantist> ok, publishing to a separate repo is possible, to do that we'd only need someone to create the repo and enable GH pages, and we'd need a PAT that can write to that repo's contents, and then that PAT would be stored in a secret on the repo where the ansible content is
20:07:08 <briantist> in that scenario, the calling workflow needs a write token (because that's actually what gives read access to secrets), but technically the called workflow could drop to read in that scenario because the secret value has to be passed in explicitly (reusable workflows can't read the running repository's secrets even with a write token)
20:07:17 <briantist> so there's some steps to it, but it would be doable
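A minimal sketch of the caller/called split described above, with all org, repo, workflow, and secret names hypothetical: the caller job holds the write token and passes the PAT explicitly, while the called reusable workflow can be restricted to read.

```yaml
# Hypothetical caller workflow in the collection repo; names are illustrative.
name: publish-docs
on:
  push:
    branches: [main]

jobs:
  docs:
    # Reusable workflow living in a separate docs-build repo (hypothetical path)
    uses: example-org/github-docs-build/.github/workflows/publish.yml@main
    permissions:
      contents: read            # called workflow can drop to read-only
    secrets:
      # PAT with write access to the separate docs repo; must be passed
      # explicitly, since reusable workflows can't read the caller's secrets
      DOCS_PAT: ${{ secrets.DOCS_PAT }}
```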
20:07:28 <nitzmahone> Yeah- I guess I'd leave that discussion up to the community folks to hash out, but I'd be supportive of that- at worst case, leaking the PAT allows defacement of the docs site, which it technically already does via PRs anyway (just not the main directory)
20:08:11 <briantist> yeah exactly, the distinction makes sense. I appreciate the thorough look at it!
20:08:53 <briantist> I can modify the PR to not do the PR build, and just do the push workflows (which would publish `main` & tag sites), if that's acceptable
20:09:08 <briantist> that would at least let us get rid of the RST docs and the process of generating them manually
20:09:19 <nitzmahone> Works for me
20:10:01 <briantist> cool, thanks again
20:10:13 <nitzmahone> I do wish they'd get that `force_orphan` support working with `keep_files` though- that would alleviate my other primary concern with the overall approach (repo bloat), though that can be manually mitigated already as needed
20:10:48 <briantist> yeah agreed; it's planned, so just a matter of maintainer time, or a contributor doing it
20:11:11 <briantist> separate repo also mitigates that, since we wouldn't care about bloating a docs-only repo probably
20:11:13 <nitzmahone> Seems like that could probably be manually simulated by doing `keep_files` false and manually reassembling the dir or something with `force_orphan`
20:11:22 <nitzmahone> yeah, true
20:12:09 <nitzmahone> I really wish GHP could use a separate internal repo or submodule or something... Actually wait- can it?
20:12:20 <nitzmahone> I never really thought through a submodule for that
20:13:23 <briantist> right, I could make workflow steps that could save the docs content and restore it or something, at that point it's almost like, why use the action instead of just doing git commands. Anyway bloat will also be much less concern (for this repo) without the PR workflow which is where most of the churn would come from
20:13:53 <nitzmahone> yep
20:14:06 <briantist> hmm I don't think a submodule is supported, it requires a branch
20:14:47 <nitzmahone> Oh I just meant as far as if the pages engine would fetch a referenced submodule in the gh-pages branch content
20:15:34 <briantist> ahhh interesting.. I doubt it though. The jobs run in the background are now visible in the actions tab so you can at least see their output, might provide a clue
20:15:40 <briantist> can't see their source though
20:15:41 <nitzmahone> It doesn't directly solve the problem of needing to update stuff, other than it could potentially be done in a completely different repo and just update the submodule ref on a trigger while still allowing the docs pages to show up in the original repo
20:16:25 <nitzmahone> IIRC submodule refs have to be a SHA, not a branch/tag
20:16:33 <nitzmahone> Might be worth poking at though
20:17:00 <nitzmahone> a branch/tag would be perfect, but I don't know what you'd have to do to get it to re-fetch without a push
20:17:12 * nitzmahone makes note to play with that
20:17:42 <nitzmahone> Anyway, that's all I had on that for today
20:17:48 <nitzmahone> #topic open floor
20:18:03 <jborean93> I've got nothing thrilling to add
20:18:12 <nitzmahone> #agreed briantist to update c.w PR to only use `push` trigger on `main`
20:19:05 <briantist> I'm wondering if it would be worth it though if there's another repo anyway; pages showing up in the actual repo isn't a huge concern from my POV. The domain is based on the org (and in any case a custom domain could be used with pages), and the repo looks like a subdirectory
20:19:14 <nitzmahone> https://docs.github.com/en/pages/getting-started-with-github-pages/using-submodules-with-github-pages
20:19:18 <nitzmahone> Looks like you can...
20:19:33 <briantist> oh neat
20:20:26 <briantist> in some ways, a single central docs repo that all the included collections can use might be nice. More upfront setup but then maybe simpler per-repo setup... could be something worth looking into
20:21:00 <briantist> I guess that is a big attack surface for any one repo being compromised, able to deface the docs for all the collections
20:21:26 <briantist> though these are not supposed to be the official docs really.. but still something of a concern for blast radius
20:21:43 <nitzmahone> Yeah, could maybe also do that but use per-repo staging branches in the single shared repo, and submodules for docs in the leaf repos so the docs still appear in their respective places
20:22:33 <briantist> ahhh true true
20:23:13 <nitzmahone> (though doesn't solve the blast radius problem any better, other than still maybe not requiring pushes to the target repo)
20:23:47 <briantist> I haven't used PATs much but I assume they cannot be scoped to a single branch
20:24:27 <nitzmahone> At a glance it looks like the submodule ref still has to always be a SHA (ie, not a branch/tag), even if you use .gitmodules or something, so while submodules would very much solve the bloat problem, it doesn't alleviate the need to make *some* kind of update to the original repo to see updated rendered docs contents :(
20:24:40 <nitzmahone> Nope, repo-only granularity IIRC
20:25:08 <nitzmahone> (well, you can do branch protection that would apply to PATs, but IIRC the PAT itself is always write-all)
20:25:33 * nitzmahone double-checks that
20:26:00 <briantist> could have a job that updates the submodule SHA.. but seems like it's turning into a rube-goldberg machine with a lot more moving parts at that point
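A sketch of what such a submodule-bump job might look like, assuming the separate-docs-repo setup discussed above; repo names, paths, and the DOCS_PAT secret are all hypothetical.

```yaml
# Hypothetical job that advances the docs submodule SHA in the leaf repo.
jobs:
  bump-docs-submodule:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          repository: example-org/collection-repo   # leaf repo (illustrative)
          token: ${{ secrets.DOCS_PAT }}            # PAT with write access to it
          submodules: true
      - name: Point the docs submodule at the latest published commit
        run: |
          git -C docs fetch origin
          git -C docs checkout origin/main
          git add docs                              # stages the new gitlink SHA
          git -c user.name=docs-bot -c user.email=docs-bot@example.com \
            commit -m "Update docs submodule" && git push
```

This is exactly the extra moving part being flagged: the bump still requires a push to the original repo, which is why the single-job `contents: write` approach ends up simpler.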
20:26:45 <nitzmahone> exactly- I had a similar thought, but I think the `pull_request_target` + having the `contents: write` stuff in a dedicated job more or less achieves the same isolation with a lot less work :D
20:27:52 <nitzmahone> I was initially looking at attacking the runner agent until I realized you'd isolated that to a single job, so reusing an agent that was compromised by a previous read-only step shouldn't be possible
20:28:34 <briantist> yup, that's what I was explaining previously, but I think sometimes you have to go through the motions to make it click :)
20:29:16 <nitzmahone> Yeah, I had to really trace through the whole thing- I think the weakest link in the whole chain ATM is the github-docs-build repo since it's using a branch ref instead of a SHA
20:30:22 <briantist> yes, it's really tricky, I have ideas on how to address that, they are unfortunately far from simple: https://github.com/ansible-community/github-docs-build/issues/4
20:30:56 <nitzmahone> Looked like pretty much everything else is locked to a specific SHA, and the NPM stuff for the GHA publish action is all locked to specific versions/hashes
20:31:58 <nitzmahone> s/GHA/GHP
20:32:02 <briantist> right, I am trying to move in that direction for everything, in all the repos I work on; I usually give a pass to GitHub-owned actions (anything that starts with `actions/`) and use the major version tags, but for anything else, going SHA
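The pinning convention described above looks roughly like this in a workflow; the third-party action SHA below is a placeholder, not a real pin.

```yaml
# Pinning policy sketch: GitHub-owned actions get a major version tag,
# everything else gets a full commit SHA (placeholder shown here).
steps:
  - uses: actions/checkout@v3        # GitHub-owned: major tag is acceptable
  - uses: some-org/some-action@<full-commit-sha>   # third-party: pin the exact commit
```

Pinning to a SHA instead of a branch or tag means a compromised upstream repo can't silently swap the code the workflow runs, at the cost of manual (or dependabot-assisted) updates.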
20:32:38 <nitzmahone> Heh, yeah, if someone successfully attacks the GHA infra and core actions, we're all screwed
20:32:55 <briantist> I still need to tinker with dependabot for helping make that process a bit easier in terms of keeping the SHAs up to date but for now I'm ok with manually doing it here and there
20:32:59 <briantist> haha right
20:34:06 <nitzmahone> Welp, nothing else from me for today
20:34:49 <briantist> same
20:36:24 <nitzmahone> OK, til next week then- thanks all!
20:36:30 <briantist> thanks!
20:36:32 <nitzmahone> #endmeeting