I've been more active in the AIS+ community lately, but I think this workflow could be useful here as well.
Most n8n chatbot tutorials use RAG: vector store, embeddings, retrieval chain, and so on. It works, but you've got lots of nodes, slower responses, higher API costs, and sometimes totally wrong answers.
I wanted to simplify this. The result is a fully functional 4-node chatbot: Chat Trigger → AI Agent → OpenRouter LLM + Redis memory. The entire knowledge base lives in the system prompt, so there's no vector store and no embeddings. Just clean, direct context.
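To make the idea concrete outside of n8n, here's a minimal Python sketch of the same pattern: the whole knowledge base goes into the system prompt, and a per-session history plays the role of the Redis memory node. The names (`KNOWLEDGE_BASE`, `build_messages`, the sample KB text) are mine for illustration, not part of the workflow.

```python
# Sketch of the "knowledge base in the system prompt" pattern.
# In the n8n workflow, the AI Agent node and Redis memory node do this for you.

KNOWLEDGE_BASE = """\
Opening hours: Mon-Fri 9-17.
Returns accepted within 30 days with receipt.
"""

SYSTEM_PROMPT = (
    "You are a support assistant. Answer ONLY from the knowledge base below. "
    "If the answer isn't there, say you don't know.\n\n"
    "--- KNOWLEDGE BASE ---\n" + KNOWLEDGE_BASE
)

# Stand-in for the Redis memory node: one message list per session key.
sessions: dict[str, list[dict]] = {}

def build_messages(session_id: str, user_msg: str) -> list[dict]:
    """Assemble the OpenAI-style message list for one chat turn."""
    history = sessions.setdefault(session_id, [])
    history.append({"role": "user", "content": user_msg})
    return [{"role": "system", "content": SYSTEM_PROMPT}] + history

messages = build_messages("session-1", "What are your opening hours?")
# `messages` is the payload you'd send to an OpenAI-compatible
# chat-completions endpoint such as OpenRouter's.
```

That's the entire retrieval story: there is none. Every turn ships the full knowledge base plus the conversation so far.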
For a small business use case (knowledge base with 5,000–10,000 words of content) it handles customer questions accurately, costs roughly $1 per 1,000 messages, and the whole thing takes about 10 minutes to build from scratch.
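The "$1 per 1,000 messages" figure is easy to sanity-check with back-of-envelope arithmetic. All numbers below are assumptions, not measurements: a ~10k-token prompt (full knowledge base plus some history), a short reply, and hypothetical per-million-token prices in the range of the cheaper OpenRouter models.

```python
# Back-of-envelope cost estimate; every constant here is an assumption.
PROMPT_TOKENS = 10_000    # knowledge base + history + question
COMPLETION_TOKENS = 300   # typical short support answer
PRICE_IN = 0.075          # $ per 1M input tokens (hypothetical cheap model)
PRICE_OUT = 0.30          # $ per 1M output tokens (hypothetical)

cost_per_message = (PROMPT_TOKENS * PRICE_IN + COMPLETION_TOKENS * PRICE_OUT) / 1_000_000
cost_per_1k = 1_000 * cost_per_message
print(f"${cost_per_1k:.2f} per 1,000 messages")
```

Under these assumptions it lands just under a dollar; the dominant cost is re-sending the knowledge base on every single turn, which is exactly the trade RAG avoids.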
For the advanced folks: what do you dislike about this approach? I know the obvious ceiling is context window size - that's why the knowledge base must stay under 20 pages/14,000 tokens. But are there other failure modes I'm not thinking about? Would love a real critique.
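If you want a quick guard against blowing that ceiling, a rough word-based estimate is enough. Exact counts depend on the model's tokenizer; the ~1.33 tokens-per-word ratio below is a common English rule of thumb, not a precise measurement.

```python
# Rough check that a knowledge base fits the assumed 14,000-token budget.
TOKEN_BUDGET = 14_000

def estimate_tokens(text: str) -> int:
    """Very rough heuristic: ~0.75 English words per token."""
    return round(len(text.split()) / 0.75)

kb = "word " * 9_000  # stand-in for a 9,000-word knowledge base
est = estimate_tokens(kb)
print(est, est <= TOKEN_BUDGET)
```

A 9,000-word knowledge base comes out around 12,000 estimated tokens, comfortably under the budget; a 12,000-word one would not be.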
For the beginners: if you've been wanting to build your first chatbot but the RAG tutorials felt overwhelming, this might be a good starting point. I put together a step-by-step video walkthrough covering everything from scratch: setting up OpenRouter and Redis, configuring credentials, building the workflow, etc. No coding required, and the workflow is included, though I'd still recommend building this simple chatbot yourself.