
Memberships

- Ghostcoded • 25 members • Free
- Business Builders Club • 479 members • Free
- Svenska AI-Akademin (Gratis) • 3k members • Free
- The Build Room+ • 2.6k members • $67/month
- Voice AI Accelerator • 6.7k members • Free
- AI Automation Agency Hub • 275.4k members • Free
- AI Automation Society • 211.1k members • Free

4 contributions to Ghostcoded
New Redis vector store node to reduce LLM cost and increase semantic search!
Ever wonder how you could save on LLM token usage when people ask the same or SEMANTICALLY similar questions? Enter the new Redis Vector Store node! This is from a template workflow on n8n's website:

"Stop Paying for the Same Answer Twice

Your LLM is answering the same questions over and over. 'What's the weather?' 'How's the weather today?' 'Tell me about the weather.' Same answer, three API calls, triple the cost. This workflow fixes that.

What Does It Do?

Semantic caching with superpowers. When someone asks a question, it checks if you've answered something similar before. Not exact matches, but semantic similarity. If it finds a match, boom, instant cached response. No LLM call, no cost, no waiting.

First time: "What's your refund policy?" → calls the LLM, caches the answer
Next time: "How do refunds work?" → instant cached response (it knows these are the same!)

Result: faster responses + way lower API bills"

This is HUGE! Cutting the cost of API usage AND speeding up responses! Here is a downloadable template to play with for now. I'll be releasing a video this next week showcasing how to set it up and use it!

https://n8n.io/workflows/10887-reduce-llm-costs-with-semantic-caching-using-redis-vector-store-and-huggingface/
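To make the idea concrete, the caching logic the template describes can be sketched in plain Python. This is a toy illustration, not the actual n8n node: `embed()` here is a stand-in character-trigram embedding (a real setup would use a proper embedding model, with vectors stored and searched in Redis), and the 0.6 similarity threshold is an arbitrary assumption.

```python
import math

def embed(text):
    # Toy embedding: character-trigram counts.
    # A real semantic cache would call an embedding model instead.
    text = text.lower()
    vec = {}
    for i in range(len(text) - 2):
        gram = text[i:i + 3]
        vec[gram] = vec.get(gram, 0) + 1
    return vec

def cosine(a, b):
    # Cosine similarity between two sparse vectors (dicts).
    dot = sum(v * b.get(k, 0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.6):
        self.entries = []          # list of (embedding, cached answer)
        self.threshold = threshold  # assumed cutoff for "same question"

    def lookup(self, question):
        # Return a cached answer if any stored question is similar enough.
        q = embed(question)
        best_answer, best_score = None, 0.0
        for vec, answer in self.entries:
            score = cosine(q, vec)
            if score > best_score:
                best_answer, best_score = answer, score
        return best_answer if best_score >= self.threshold else None

    def store(self, question, answer):
        self.entries.append((embed(question), answer))

cache = SemanticCache()
cache.store("What's your refund policy?", "Refunds within 30 days.")

# A rephrased question still hits the cache, so no second LLM call is made.
print(cache.lookup("What is your refund policy?"))  # prints "Refunds within 30 days."

# An unrelated question misses, and would fall through to the LLM.
print(cache.lookup("How is the weather today?"))    # prints "None"
```

The Redis Vector Store node plays the role of `self.entries` here: embeddings live in Redis and the similarity search happens server-side, so the cache survives restarts and scales past memory.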
New Redis vector store node to reduce LLM cost and increase semantic search!
2 likes • 3d
Hm, might be a dumb question before checking this out myself, but are all answers stored/cached? Where? Haha
Favorite prompt for Claude or ChatGPT
Not gonna spam here, just wanna start a new thread: what prompts are people using in Claude Desktop or the cloud / ChatGPT when they wanna throw ideas around, debug, etc.? And are you doing it in the cloud, or have you made a personal n8n instance with tools? I've always had a problem keeping an LLM up to date daily with the n8n documentation.
New MCP server
Has anyone found a way to use the new MCP server connection to debug locally hosted workflows with the cloud version of Claude or ChatGPT? How did you manage to solve it?
Small edit to my n8n v2 update video!!
They've officially released their full v2 Docker image, so it can now be accessed by using the :beta tag! Your docker-compose.yml file should have this as the n8n service:

n8n:
  image: docker.n8n.io/n8nio/n8n:beta
  restart: always
  ports:
    - 5678:5678
  environment:
    - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
    - N8N_PORT=5678
    - N8N_PROTOCOL=https
    - NODE_ENV=production
    - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
    - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
    - EXECUTIONS_TIMEOUT=3600
  volumes:
    - n8n_data:/home/node/.n8n
    - ${DATA_FOLDER}/local_files:/files
1 like • 6d
Thanks for this! 🎉
Tobias Rosén
Level 2 • 15 points to level up
@tobias-rosen-7314
Senior CX Leader with 15+ Years in Customer Excellence | Building High-Performing Teams | Passionate about Leveraging AI & Automation for operations.

Active 2h ago
Joined Dec 10, 2025