Hey Zero2Launch crew!
If you've ever wanted to run AI chat models inside n8n without paying for OpenAI or burning through tokens, this one's for you.
In this new step-by-step video, I'll show you how to host LLMs locally using LM Studio and connect them directly to n8n using open-source models like DeepSeek or Llama.
What you'll learn:
• Install and run local LLMs with LM Studio
• Download DeepSeek or Llama models, totally free
• Connect n8n's Chat Model node to your local LLM
• Test everything with live prompts, no OpenAI required (quick test sketch below)
That means:
• No API keys
• No cloud costs
• 100% offline, with full control over your AI workflows
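If you want to confirm your local model is reachable before wiring it into n8n, a tiny script like the sketch below can help. It assumes LM Studio's local server is running on its default OpenAI-compatible endpoint (http://localhost:1234/v1); the model name is a placeholder for whichever model you've actually loaded.

```ts
// Minimal sanity check against LM Studio's local OpenAI-compatible server.
// Assumes the server was started in LM Studio on its default port (1234).
// Requires Node 18+ (global fetch).
const BASE_URL = "http://localhost:1234/v1";

async function testLocalLLM(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      // Placeholder model id: use the identifier shown for the model loaded in LM Studio.
      model: "deepseek-r1-distill-llama-8b",
      messages: [{ role: "user", content: prompt }],
      temperature: 0.7,
    }),
  });
  if (!res.ok) throw new Error(`LM Studio returned ${res.status}`);
  const data: any = await res.json();
  return data.choices[0].message.content;
}

testLocalLLM("Say hello from my local LLM!").then(console.log).catch(console.error);
```

In n8n, the same base URL goes into the Chat Model node's credentials; since the local server typically doesn't validate API keys, any placeholder string should do there.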
Got questions or want to share your setup? Drop it in the comments!