Conversation

Fahim Farook

I'm setting up a machine learning (ML) model locally to help figure out my wife's autoimmune disease.

(More details here: https://www.reddit.com/r/LocalLLaMA/comments/1ij5yf2/how_i_built_an_open_source_ai_tool_to_find_my/)

The project is here: https://github.com/OpenHealthForAll/open-health

It ships set up to run in Docker.

Trouble is, you can't use a locally installed Ollama instance with it. You have to run Ollama in Docker too.
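(Side note for anyone hitting the same wall: on Docker Desktop for macOS, a container can normally reach services on the host via the special hostname host.docker.internal. A quick sanity check like the one below shows whether a host-side Ollama is even reachable from inside a container; whether the app lets you point at it is a separate question.)

# Should list the host Ollama's models if the container can reach it:
docker run --rm curlimages/curl -s http://host.docker.internal:11434/api/tags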

But ... Ollama in Docker on macOS does not use the GPU, because Docker Desktop runs containers inside a Linux VM that has no access to Apple's Metal GPU. So it's slow. Very, very slow.

So I finally ended up figuring out how to run the project itself locally, run the Postgres instance it needs via Docker, and point it at the local Ollama.
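For anyone trying the same thing, here's a rough sketch of that setup. The container name, credentials, and environment variable names (DATABASE_URL, OLLAMA_API_URL) are placeholders for illustration; check the project's README for the exact names the app expects.

# Postgres in Docker; the app runs on the host, so publish the port.
docker run -d --name openhealth-db \
  -e POSTGRES_USER=openhealth \
  -e POSTGRES_PASSWORD=openhealth \
  -e POSTGRES_DB=openhealth \
  -p 5432:5432 \
  postgres:16

# Ollama running natively on macOS (Metal-accelerated), default port 11434.
ollama serve

# The app itself on the host, pointed at both. Variable names are illustrative.
export DATABASE_URL="postgresql://openhealth:openhealth@localhost:5432/openhealth"
export OLLAMA_API_URL="http://localhost:11434"
npm install && npm run dev

That way the model gets the Mac's GPU through the native Ollama, and only Postgres stays in Docker.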

It all seems to work, but I won't know for sure till the wife wakes up and can try it. Yay!!

#ML