fedora-ai-ml
LOGS
<@tflink:fedora.im>
16:30:57
!startmeeting fedora ai-ml
<@meetbot:fedora.im>
16:30:57
Meeting started at 2024-08-15 16:30:57 UTC
<@meetbot:fedora.im>
16:30:58
The Meeting name is 'fedora ai-ml'
<@man2dev:fedora.im>
16:31:10
Hello 😊
<@jsteffan:fedora.im>
16:31:15
!hi
<@zodbot:fedora.im>
16:31:16
Jonathan Steffan (jsteffan)
<@tflink:fedora.im>
16:31:38
!hi
<@zodbot:fedora.im>
16:31:38
Tim Flink (tflink)
<@man2dev:fedora.im>
16:32:03
!hi
<@zodbot:fedora.im>
16:32:04
Mohammadreza Hendiani (man2dev)
<@tflink:fedora.im>
16:32:55
let's get started, there's plenty on the agenda
<@tflink:fedora.im>
16:33:04
!topic previous meeting follow-up
<@tflink:fedora.im>
16:33:38
tflink to change meeting name, move calendar from google to fedora, clarify that future meetings will all be on matrix unless otherwise scheduled
<@tflink:fedora.im>
16:33:53
this is done as far as I know, if I missed something please let me know
<@man2dev:fedora.im>
16:34:05
Didn't we agree to this already?
<@tflink:fedora.im>
16:34:19
yes, the action item was on doing the changes
<@tflink:fedora.im>
16:34:35
tflink to create ai-ml-sig pagure repo
<@tflink:fedora.im>
16:34:51
this is also done
<@tflink:fedora.im>
16:34:59
<@tflink:fedora.im>
16:35:12
tflink to file ticket about clarifying whether the existing ai-ml FAS groups map to dist-git
<@man2dev:fedora.im>
16:35:30
I added one more
<@tflink:fedora.im>
16:35:31
this is also done and completed to the best of my knowledge
<@tflink:fedora.im>
16:35:44
<@tflink:fedora.im>
16:36:06
cool, if there are any others, please add them to the ticket
<@tflink:fedora.im>
16:36:23
I think that's all of the previous meeting followup. anything else?
<@man2dev:fedora.im>
16:36:38
I think that's it. We can close it; if something is found we can reopen it
<@jsteffan:fedora.im>
16:36:42
that list looks complete based on my recollection
<@man2dev:fedora.im>
16:36:55
That's it*
<@man2dev:fedora.im>
16:38:19
So are we closing the Issue?
<@ludiusvox:fedora.im>
16:38:28
What's this chatbot?
<@ludiusvox:fedora.im>
16:38:36
At least I can multitask right now haha
<@tflink:fedora.im>
16:38:57
yeah, we can close the issue.
<@tflink:fedora.im>
16:38:59
moving on
<@tflink:fedora.im>
16:39:07
!topic proposal for a chatbot project
<@man2dev:fedora.im>
16:39:16
You are doing two meetings? 😂
<@tflink:fedora.im>
16:39:17
daMaestro: this is your item, take it away
<@ludiusvox:fedora.im>
16:39:31
lol yes, It's a town down lecture from google atm
<@ludiusvox:fedora.im>
16:39:37
**top down
<@jsteffan:fedora.im>
16:39:54
okiedokie -- so this is half baked so don't jump in with questions too quickly because i don't have answers
<@trix:fedora.im>
16:40:16
what's the oven temp set to?? 350??
<@jsteffan:fedora.im>
16:41:10
back in the day i had created searchfedora that was based on datapark search and it only indexed fedora resources. it was nice because when i wanted to find something fedora related i could do it fairly easily. that is long since a thing of the past, as search engines got good. like, really good. well... now they suck again
<@jsteffan:fedora.im>
16:41:36
i can't find anything via goofoo about fedora anymore, so now i have bookmarks (fine, but grr)
<@ludiusvox:fedora.im>
16:42:20
yes, so I have a suggestion: if you index the up-to-date fedora documentation, you can use a technique called RAG (retrieval-augmented generation) to generate in-context responses from a chatbot that accesses embeddings of the fedora documentation — a specialized chatbot to give answers, right?
<@ludiusvox:fedora.im>
16:42:59
I have a demonstration and code base of a Wizard of Oz chatbot acting in character using RAG; I will give you a link so you can look up the code
<@ludiusvox:fedora.im>
16:43:13
it's a wizard of Oz dataset
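For illustration, a minimal retrieval sketch along the lines described above — a hedged example assuming the sentence-transformers library; the document snippets, model name, and question are placeholders, not anything the SIG has built:

```python
# Minimal RAG-style retrieval sketch (illustration only, not the SIG's implementation).
# Assumes the sentence-transformers package; snippets and model name are examples.
from sentence_transformers import SentenceTransformer, util

docs = [
    "To become a Fedora Ambassador, start by getting involved with Mindshare ...",
    "Package reviews are tracked in Bugzilla under the Package Review component ...",
    "Fedora documentation lives at docs.fedoraproject.org ...",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly embedding model
doc_embeddings = model.encode(docs, convert_to_tensor=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k snippets most similar to the question."""
    q_emb = model.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, doc_embeddings, top_k=k)[0]
    return [docs[h["corpus_id"]] for h in hits]

context = "\n".join(retrieve("how do I become an ambassador?"))
prompt = (
    "Answer using only this Fedora documentation:\n"
    f"{context}\n\nQuestion: how do I become an ambassador?"
)
# `prompt` would then be sent to whatever local LLM gets chosen.
print(prompt)
```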
<@jsteffan:fedora.im>
16:43:31
so i wondered... could we take the open source llm from instructlab and fine tune it so we could:
* talk to the fedora documentation and policies
* ask it questions like "how do i become an ambassador"
* ask it questions like "what steps do i need to take to review a package"
* ask it questions about programs on my computer (i.e. manpages, etc)
* consider putting a web interface in front of it for a new fedorasearch
<@jsteffan:fedora.im>
16:44:45
this was born of frustration that i couldn't find valid docs on how to become an ambassador... and now i'm living out of my bookmarks for common stuff i do
<@ludiusvox:fedora.im>
16:44:49
It's really difficult to determine what is better between fine tuning and RAG
<@ludiusvox:fedora.im>
16:45:06
Fine tuning requires the ludwig package
<@ludiusvox:fedora.im>
16:45:32
It's a good idea
<@tflink:fedora.im>
16:45:35
iirc, instructlab is designed to allow for fine tuning
<@tflink:fedora.im>
16:45:48
allow for/enable
<@jsteffan:fedora.im>
16:45:55
yeah, ignore my suggestion of the technique. i was suggesting using the open source model from ibm and seeing if we could turn it into a cool specialized agent that knows *all about* fedora and is prompt-engineered in a way that we get high quality hits about our stuff and not some hat someone is selling on the internet
<@man2dev:fedora.im>
16:46:02
I mean, the documentation, the wiki, and all of that information are not complete. It's one thing if it was the arch wiki, which is up to date. Maybe if we supplement that data with previous talks that have been given by Fedora members, but I'm not sure about the legality of any of this.
<@tflink:fedora.im>
16:46:51
anything submitted to the wiki is under the fpcl, I think
<@jsteffan:fedora.im>
16:47:05
yeah, a majority of the hard work is going to be getting the data, cleaning it, and validating it
<@tflink:fedora.im>
16:47:07
which is pretty permissive
<@jsteffan:fedora.im>
16:47:26
yes, FPCL or some other openly licensed content is all i'd be suggesting
<@ludiusvox:fedora.im>
16:47:36
the legality is okay — these models are open source and you are free to modify them, e.g. Mixtral 8x7B, but it costs money to host
<@tflink:fedora.im>
16:48:51
sure, we'd have to work out the details of what is legal and what we have the resources to do but I think the idea was to see if there was enough interest to start looking into the idea and work on it
<@jsteffan:fedora.im>
16:48:55
things like the wiki (with weighting on staleness), discourse (?), ask fedora (?, at least we have "answers" there), rpm package meta (the description and scrape the first page of the url for the package), all %doc data from packages, manpages, and heavily weight on the official docs
<@man2dev:fedora.im>
16:49:09
Yeah, I know that everything Fedora posts is under an open license; even the videos, I think, are under a CC license. But I don't know, some people have major ethical issues with these sorts of tools. Seems like the kind of thing that needs to be discussed outside of the AI SIG as well, perhaps on Fedora Discussion? And Mastodon? ...
<@ludiusvox:fedora.im>
16:49:28
yeah that's fine
<@jsteffan:fedora.im>
16:49:40
i'd propose we get a prototype working before wider discussion
<@jsteffan:fedora.im>
16:49:52
theory and product are different discussions
<@ludiusvox:fedora.im>
16:49:55
my biggest concern is the budget for the GPU server
<@ludiusvox:fedora.im>
16:50:05
fedora budget is only 250k a year
<@jsteffan:fedora.im>
16:50:05
theory will lead to bikeshedding, product will lead to problem solving
<@tflink:fedora.im>
16:50:16
we can deal with the budget if/when we get there
<@jsteffan:fedora.im>
16:50:29
i'm suggesting we create something that can be run locally
<@tflink:fedora.im>
16:50:32
it could end up being a "download this through podman desktop and run it locally"
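As a rough illustration of the "run it locally" idea — this uses the off-the-shelf ollama container image purely as a stand-in for whatever the SIG would eventually ship; the image name, port, and volume are ollama's defaults, not a Fedora deliverable:

```bash
# Example only: running a local LLM server in a container with podman.
podman run -d --name local-llm -p 11434:11434 -v ollama:/root/.ollama docker.io/ollama/ollama
```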
<@jsteffan:fedora.im>
16:50:53
yup, and if it's actually useful it could be considered to be hosted in the future
<@ludiusvox:fedora.im>
16:50:59
this is easy
<@jsteffan:fedora.im>
16:50:59
the hosting isn't the interesting part :-)
<@ludiusvox:fedora.im>
16:51:10
I have prototypes just sitting there on dagshub
<@man2dev:fedora.im>
16:51:21
The way I see it, the problem is shipping the data. What if the end user decides to use such a tool? For example, we don't train the model ourselves, we just have a tool that trains based on the man pages...
<@tflink:fedora.im>
16:52:05
yeah, that will be part of the question of "can we do this" but I agree that if we start the conversation with purely theory, it won't be terribly productive
<@ludiusvox:fedora.im>
16:52:14
my biggest problem is it requires a serious computer to make those embeddings
<@ludiusvox:fedora.im>
16:52:24
it took 95 GB of RAM to make embeddings for the last dataset
<@ludiusvox:fedora.im>
16:52:33
if I could figure out how to chunk it somehow
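One common way to keep memory bounded is to split documents into overlapping chunks and encode them in batches, persisting the vectors to disk instead of holding everything live. A sketch, assuming sentence-transformers and numpy; the chunk sizes, model, and paths are placeholders:

```python
# Sketch of chunked, batched embedding to bound memory (model/paths are placeholders).
import numpy as np
from sentence_transformers import SentenceTransformer

def chunk(text: str, size: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into overlapping character chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

model = SentenceTransformer("all-MiniLM-L6-v2")

def embed_corpus(paths, batch_size=64, out="embeddings.npy"):
    vectors = []
    for path in paths:
        with open(path, encoding="utf-8") as f:
            pieces = chunk(f.read())
        # encode() processes the chunks in batches; batch_size caps peak memory
        vectors.append(model.encode(pieces, batch_size=batch_size))
    np.save(out, np.vstack(vectors))  # persist to disk instead of keeping it all live

# embed_corpus(["/usr/share/doc/fedora-release-common/README"])  # example path
```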
<@tflink:fedora.im>
16:52:39
we're getting a bit off into the weeds here
<@man2dev:fedora.im>
16:52:59
So basically we ship the tokenized dataset as well as the tool to train it, such as llama.cpp or what have you, and the actual compilation and making of the AI is done on device.
<@ludiusvox:fedora.im>
16:53:16
yeah, agreed — the system needs precomputed embeddings, and it can be done on device
<@tflink:fedora.im>
16:53:31
we do have some access to hardware, especially for one-off or occasional tasks. I'm not worried about getting that taken care of
<@ludiusvox:fedora.im>
16:53:46
that's cool
<@jsteffan:fedora.im>
16:53:52
i *think* i'm proposing the sig has a pretrained fine tuned model and end-users just run it locally -- and of course we open source everything we did to train it
<@tflink:fedora.im>
16:54:18
that's more or less what I had understood
<@tflink:fedora.im>
16:55:20
are there folks interested in working on this? I'm interested but I don't have much time to offer ATM
<@jsteffan:fedora.im>
16:55:39
what is temp of an easybake oven? ;-)
<@ludiusvox:fedora.im>
16:56:16
I have a fine tuning notebook already, but I have to get a dataset. I haven't successfully gotten fine tuning working; RAG is much easier using langchain
<@trix:fedora.im>
16:56:25
easybake oven these days would have LED lights.. so 35C ?
<@ludiusvox:fedora.im>
16:56:27
for a Q:A dataset
<@tflink:fedora.im>
16:56:58
instructlab might be really helpful here - there is a format for adding data to the model
<@jsteffan:fedora.im>
16:57:28
yes, i'm trying not to suggest solutions yet but instructlab is what i'd use if i did it
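For context, instructlab pulls its training data from a taxonomy of qna.yaml files of seed question/answer examples. The sketch below shows only the general shape — the exact schema is versioned and defined by the upstream instructlab taxonomy docs, and the field values here are made up:

```yaml
# Illustrative qna.yaml in the general shape instructlab's taxonomy uses;
# the exact, versioned schema is defined by the upstream taxonomy docs.
created_by: fedora-ai-ml-sig        # hypothetical author value
seed_examples:
  - question: How do I become a Fedora Ambassador?
    answer: Get involved with the Fedora Mindshare and Ambassadors community, ...
  - question: What steps do I need to take to review a package?
    answer: Take a review request in Bugzilla and check it against the packaging guidelines, ...
```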
<@ludiusvox:fedora.im>
16:57:50
I have never used instructlab, but I checked it on github and it's definitely an open possibility
<@ludiusvox:fedora.im>
16:57:50
https://github.com/sarthakkaushik/LLM-Functional-Code/blob/main/Ludwig_%2B_DeepLearning_ai_Efficient_Fine_Tuning_for_Llama2_7b_on_a_Single_GPU.ipynb
<@tflink:fedora.im>
16:58:56
daMaestro: it sounds like there is some interest if you're willing to do the cat herding but that's up to you. do you want to discuss this more here or move to matrix/discourse to get into more details?
<@ludiusvox:fedora.im>
16:59:28
I will say that RAG is easier than fine tuning
<@jsteffan:fedora.im>
16:59:39
if there is interest beyond me thinking "that'd be cool and i'd use it", i could sharpen up a phase 1 test
<@ludiusvox:fedora.im>
16:59:40
I need to learn fine tuning
<@man2dev:fedora.im>
16:59:45
Guys, how about another way to address this issue? The actual problem is that the information isn't easily available, right?
<@jsteffan:fedora.im>
17:00:18
it's easily available, but you have to know where to find it and you have to know if it's still valid or not
<@man2dev:fedora.im>
17:00:57
If approved by the community and legal, why not just ship the dataset so that anyone can use any other chatbot they want to ask their questions?
<@ludiusvox:fedora.im>
17:01:20
well, the way that ludwig works, it's not a dataset
<@ludiusvox:fedora.im>
17:01:31
it's a model on hugging face — they pull it off hugging face for "fedora 40", let's say
<@ludiusvox:fedora.im>
17:01:38
and just Q-A the bot
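A minimal sketch of that pull step using the huggingface_hub library — the repo id here is hypothetical, only the API call is real:

```python
# Sketch of pulling a published model locally with huggingface_hub.
# The repo id "fedora/fedora-40-assistant" is hypothetical.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="fedora/fedora-40-assistant")
print(f"model files downloaded to {local_dir}")
```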
<@tflink:fedora.im>
17:01:52
I think that would be part of this, if it happens. daMaestro said that part of this would be making things available in an open source way
<@tflink:fedora.im>
17:02:07
but we're definitely getting off into the weeds and we have 3 topics left to cover
<@tflink:fedora.im>
17:02:25
any objection to moving further discussion to post-meeting matrix or discourse?
<@ludiusvox:fedora.im>
17:02:25
okay moving on
<@jsteffan:fedora.im>
17:02:33
i agree that a significant amount of the work would be packaging the data we have into a high quality resource, and that would be valuable for others too
<@tflink:fedora.im>
17:03:18
I take silence as OK with moving on
<@jsteffan:fedora.im>
17:03:25
!action @damaestro to scope a phase one for a chatbot to ask questions about fedora using fedora
<@tflink:fedora.im>
17:04:02
anything else on this topic before moving on?
<@jsteffan:fedora.im>
17:04:07
i'll see if i can put this idea on rails, we can move on
<@tflink:fedora.im>
17:04:22
!topic status check on F41 features
<@tflink:fedora.im>
17:04:42
I just wanted to do a quick check-in on the F41 ai-ml related features to see if anything was needed
<@tflink:fedora.im>
17:05:03
I know that there have been pytorch 2.4 builds in rawhide and F41 - I assume that is on track?
<@trix:fedora.im>
17:05:52
it was until rocm 6.2 update.
<@ludiusvox:fedora.im>
17:06:20
On F40 it's fully featured; I haven't seen anything that I haven't been able to do so far. Torch/TensorFlow works on NVIDIA GPUs. I don't have an AMD ROCm card to test a TensorFlow pipeline, and I am not sure how to do ROCm
<@tflink:fedora.im>
17:06:23
ah, do you foresee problems or need help with getting it done?
<@trix:fedora.im>
17:06:49
see my topic later on testing.
<@tflink:fedora.im>
17:07:01
ok, does that apply to the rocm 6.2 update as well?
<@trix:fedora.im>
17:07:09
yes.
<@tflink:fedora.im>
17:07:16
ok, let's just move on then
<@tflink:fedora.im>
17:07:39
just a note that the deadlines for feature completeness are coming up soon
<@tflink:fedora.im>
17:07:53
I think it's august 27 or so
<@trix:fedora.im>
17:07:58
llvm18 has not landed yet and that was called out as a risk.
<@tflink:fedora.im>
17:08:42
yeah, that's always an issue - the schedules don't line up well and llvm always drops late
<@trix:fedora.im>
17:08:46
and it seems like rawhide updates have not really happened this week.
<@trix:fedora.im>
17:09:15
i am _still_ waiting for rocsparse to land in the mirror and i built that 2-3 days ago.
<@tflink:fedora.im>
17:09:49
there are usually a few hiccups when branch happens but I wouldn't expect it to be that long
<@trix:fedora.im>
17:09:52
i will be on pto next week so.. maybe someone can finish the stack ?
<@tflink:fedora.im>
17:10:34
yeah, I imagine that jeremy or I can finish the builds if needed but we can sync up offline
<@ludiusvox:fedora.im>
17:10:44
is there a video? i am missing some of the conversation
<@tflink:fedora.im>
17:10:57
a video? for what?
<@ludiusvox:fedora.im>
17:11:26
I was misreading
<@tflink:fedora.im>
17:11:54
it sounds like work is still needed for the F41 features and some of it is waiting on llvm18 which hasn't landed in rawhide yet
<@tflink:fedora.im>
17:12:03
but we'll get it figured out
<@trix:fedora.im>
17:13:17
llvm17 was an issue in F40... even if we get the stack in, there are going to be 2 full rebuilds to do: one for rawhide, one for F41 now.
<@tflink:fedora.im>
17:13:59
both of which take a non-trivial amount of time
<@tflink:fedora.im>
17:14:03
fun
<@trix:fedora.im>
17:14:10
right.
<@tflink:fedora.im>
17:14:29
anyhow, shall we move on to the testing topic or is there more to discuss in meeting?
<@trix:fedora.im>
17:14:46
move on.
<@tflink:fedora.im>
17:14:56
!topic Improving Testing
<@tflink:fedora.im>
17:15:09
Tom Rix: this is your topic, take it away
<@trix:fedora.im>
17:16:03
i am looking for ideas on how to improve the testing of our packages. i _think_ right now it is some manual testing on my part, and when i get busy, there is no testing.
<@trix:fedora.im>
17:16:15
anyone else testing our packages ?
<@tflink:fedora.im>
17:16:26
I do as I have time
<@tflink:fedora.im>
17:16:37
but that has lessened as of late :(
<@jsteffan:fedora.im>
17:16:55
i only test cpu and nvidia when i'm asked or am actively participating in an update
<@ludiusvox:fedora.im>
17:17:00
Well we had a week of flock that slowed things down
<@man2dev:fedora.im>
17:18:07
Very rarely because most times it needs a python package that we don't have
<@tflink:fedora.im>
17:18:22
what are we missing?
<@man2dev:fedora.im>
17:19:09
Are you talking about AI Python packages?
<@jsteffan:fedora.im>
17:19:23
are we to the point where we need to create something like the kernel test tool for the test days? something that installs everything needed and then runs some suite of simple tests? that way it should be easier for us to recruit testers that have the right hardware
<@man2dev:fedora.im>
17:19:31
Or torch packages?
<@jsteffan:fedora.im>
17:19:49
https://pagure.io/kernel-tests
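A sketch of the kind of install-and-run smoke test such a tool could ship — CPU-only torch checks, purely illustrative and not an existing SIG tool:

```python
#!/usr/bin/env python3
# Minimal smoke-test runner in the spirit of kernel-tests: run a few cheap checks
# against the installed PyTorch stack and report pass/fail.
import sys
import torch

def main() -> int:
    failures = []

    x = torch.rand(8, 8)
    if (x @ x.T).shape != (8, 8):
        failures.append("matmul")

    if not torch.allclose(torch.ones(4).sum(), torch.tensor(4.0)):
        failures.append("reduction")

    # Accelerator availability is informational only; CPU-only hosts still pass.
    print(f"cuda/rocm device available: {torch.cuda.is_available()}")

    print("FAIL: " + ", ".join(failures) if failures else "PASS")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```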
<@tflink:fedora.im>
17:20:05
whatever might be missing to test ai-ml packages. I think that trix is mostly talking about torch and rocm ATM, though
<@trix:fedora.im>
17:20:22
yes.
<@man2dev:fedora.im>
17:21:08
There are a lot. I'll make an issue about it and add to it each time I find one
<@tflink:fedora.im>
17:21:26
yeah, I think that a big part of the problem is that we don't have test cases or much documentation. not sure we could do exactly what the kernel test stuff does but I also think that containers would be our friend
<@trix:fedora.im>
17:21:52
is there a fedora bot thing that could at least build cpu torch and its dependencies.. then run some cpu torch example/tests ?
<@tflink:fedora.im>
17:22:14
bot thing? I'm not sure I understand
<@jsteffan:fedora.im>
17:22:37
https://docs.fedoraproject.org/en-US/ci/
<@tflink:fedora.im>
17:22:38
there are package-specific testing mechanisms that run in EC2 cpu-only instances
<@trix:fedora.im>
17:22:48
a build bot... something that would just build-n-test..
<@man2dev:fedora.im>
17:23:11
I'm pretty sure packit does this
<@tflink:fedora.im>
17:23:13
nothing that does both that I'm aware of unless you count packit+testing farm
<@ludiusvox:fedora.im>
17:23:48
Is this CI usable as a jenkins build system where I can just pull code from a repo and run it?
<@jsteffan:fedora.im>
17:24:05
!idea this question should be answered correctly by a phase 1 chatbot
<@tflink:fedora.im>
17:24:29
not quite - it's a bit more specific than that. it needs to be attached to a fedora package for the fedora ci setup
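For reference, a minimal sketch of how a dist-git repo might wire a CPU-only check into Fedora CI with a tmt plan — the file path and package name are illustrative, not an existing setup:

```yaml
# plans/smoke.fmf -- illustrative tmt plan for a CPU-only torch check in Fedora CI
summary: CPU-only PyTorch smoke test
prepare:
    how: install
    package:
      - python3-torch
execute:
    how: tmt
    script: |
        python3 -c "import torch; x = torch.rand(4, 4); assert (x @ x.T).shape == (4, 4)"
```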
<@jsteffan:fedora.im>
17:24:35
!idea this question should be answered correctly by a phase 1 chatbot: is there a fedora bot thing that could at least build cpu torch and its dependencies.. then run some cpu torch example/tests ?
<@ludiusvox:fedora.im>
17:24:46
thank you
<@man2dev:fedora.im>
17:25:07
I tried making one a while back but saw the spec had changed and so I put it on the back burner.
<@man2dev:fedora.im>
17:25:48
https://github.com/Man2Dev/pytorch-vk
<@jsteffan:fedora.im>
17:26:26
!action investigate fedora CI for running automated pytorch suite CPU testing
<@man2dev:fedora.im>
17:26:34
I found a weird issue and haven't tried fixing it yet
<@tflink:fedora.im>
17:26:42
as much as I'd like to hash some of this out now, we are running out of time
<@trix:fedora.im>
17:26:54
yup.
<@trix:fedora.im>
17:27:26
if folks have ideas on testing, poke me on the ai/ml channel.
<@tflink:fedora.im>
17:27:32
Tom Rix: do you have at least enough information to look at for the time being? otherwise, we can talk about this more after the meeting
<@trix:fedora.im>
17:27:50
tflink: after meeting.
<@trix:fedora.im>
17:28:02
let's talk.
<@tflink:fedora.im>
17:28:06
ok, moving on
<@tflink:fedora.im>
17:28:22
!topic pings before meeting time
<@tflink:fedora.im>
17:28:29
<@ludiusvox:fedora.im>
17:28:35
Yes please
<@man2dev:fedora.im>
17:28:52
do an @room ping in matrix half an hour before the meeting, so as to encourage new members, or members who are interested in the sig, to join in
<@man2dev:fedora.im>
17:29:09
Warning, it might get annoying real fast.
<@ludiusvox:fedora.im>
17:29:23
if it's a once a week ping it's not bad
<@tflink:fedora.im>
17:29:24
I don't think it hurts so long as we're talking about one or two pings
<@jsteffan:fedora.im>
17:29:40
once every two weeks is fine
<@ludiusvox:fedora.im>
17:30:07
Also, I used the *.ics today to maintain consciousness of the meeting
<@tflink:fedora.im>
17:30:30
it sounds like folks are broadly OK with a ping before meetings
<@tflink:fedora.im>
17:30:57
I'd say we can move forward with it and re-consider it if people find it to be too much
<@tflink:fedora.im>
17:31:07
any objections?
<@man2dev:fedora.im>
17:31:18
OK then
<@man2dev:fedora.im>
17:31:36
Yap
<@tflink:fedora.im>
17:32:02
!info agreed that pings to the matrix room before meetings could help and at least wouldn't be a problem. we can re-assess in the future if needed
<@tflink:fedora.im>
17:32:11
which brings us to
<@tflink:fedora.im>
17:32:14
!topic open floor
<@tflink:fedora.im>
17:32:30
we're already 2 minutes over. are there any other topics which have to be discussed now?
<@ludiusvox:fedora.im>
17:34:40
Everyone in the AI/ML SIG has been really helpful. I think that the limited number of group members limits capabilities more than anything; getting more people into the AI/ML SIG would help a lot
<@jsteffan:fedora.im>
17:35:12
having clear project work will help with that
<@ludiusvox:fedora.im>
17:35:16
yes
<@tflink:fedora.im>
17:35:52
yeah, very true. I've never had good experience with "ok, go find something to do" when it comes to attracting new people. it works sometimes, but not all that frequently :)
<@ludiusvox:fedora.im>
17:36:02
Not everyone is a self starter
<@tflink:fedora.im>
17:36:27
yep, exactly
<@tflink:fedora.im>
17:36:41
if there's nothing else, I'll end the meeting in 2 minutes
<@ludiusvox:fedora.im>
17:36:47
sounds good!
<@tflink:fedora.im>
17:38:10
thanks for coming, everyone. sorry for going over a little bit
<@tflink:fedora.im>
17:38:18
I'll send out minutes shortly
<@tflink:fedora.im>
17:38:22
!endmeeting