Enabling your GPU for a big speed-up when self-hosting on your own computer
Hello to all, I just wanted to share my experience. Over the last few days I tried several different setups. I am running Fedora 41 on an AMD Ryzen 9 7900 (12 cores / 24 threads) with 96 GB of RAM and an AMD Radeon RX 7600.
I tried running n8n with Docker Desktop and then without Docker using npm. Even with just the demo workflow, when I brought up the chat and typed in my question, it would literally take 10+ minutes for the LLM to spit something back at me. It was very frustrating.
Then I realized the Ollama Chat Model was only using my CPU. Once I figured out how to enable the ROCm drivers, Ollama responded to my prompt in about 2 seconds.
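In case it helps anyone, here is roughly what that looks like. This is a minimal sketch based on Ollama's Docker instructions, assuming you run Ollama in its own container: the :rocm image tag and the two --device flags are what expose the AMD GPU to the container.

    # Run Ollama's ROCm build with the AMD GPU devices passed through.
    # /dev/kfd is the ROCm compute interface, /dev/dri holds the render nodes.
    docker run -d \
      --device /dev/kfd \
      --device /dev/dri \
      -v ollama:/root/.ollama \
      -p 11434:11434 \
      --name ollama \
      ollama/ollama:rocm

One caveat: cards like the RX 7600 are not on ROCm's official support list, so you may also need to add something like -e HSA_OVERRIDE_GFX_VERSION=11.0.2 to the run command. Treat that value as an assumption to experiment with for this card, not a guarantee.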
So if you are self-hosting on your own equipment and have the supporting hardware, enable the GPU for your Docker images so n8n can use it.
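A quick way to confirm the GPU is actually doing the work (assuming the container is named ollama as in the sketch above): after sending a prompt, run

    # The PROCESSOR column should read "100% GPU" rather than "100% CPU".
    docker exec -it ollama ollama ps

You can also skim docker logs ollama for the ROCm/amdgpu detection lines at startup; the exact wording varies by Ollama version.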