3 contributions to Ghostcoded
New Redis Vector Store node to cut LLM costs and speed up semantic search!
Ever wonder how you could save on LLM token usage when people ask the same or SEMANTICALLY similar questions? Enter the new Redis Vector Store node! This is from a template workflow on n8n's website:

"Stop Paying for the Same Answer Twice

Your LLM is answering the same questions over and over. 'What's the weather?' 'How's the weather today?' 'Tell me about the weather.' Same answer, three API calls, triple the cost. This workflow fixes that.

What Does It Do?

Semantic caching with superpowers. When someone asks a question, it checks if you've answered something similar before. Not exact matches, but semantic similarity. If it finds a match, boom, instant cached response. No LLM call, no cost, no waiting.

First time: 'What's your refund policy?' → Calls LLM, caches answer
Next time: 'How do refunds work?' → Instant cached response (it knows these are the same!)

Result: Faster responses + way lower API bills"

This is HUGE! Cutting the cost of API usage AND speeding up responses! Here is a downloadable template to play with for now. I'll be releasing a video this next week showcasing how to set it up and use it!

https://n8n.io/workflows/10887-reduce-llm-costs-with-semantic-caching-using-redis-vector-store-and-huggingface/
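To make the caching idea above concrete, here's a minimal Python sketch of semantic caching. Everything in it is a stand-in: a hand-written dict of toy vectors replaces a real embedding model (the n8n template uses HuggingFace), a plain list replaces the Redis vector index, and the "LLM call" is just a string. It only shows the cache-hit logic, not the actual workflow.

```python
import math

# Toy embeddings standing in for a real embedding model.
# In the template, HuggingFace produces these and Redis stores them.
EMBED = {
    "What's your refund policy?": [0.90, 0.10, 0.00],
    "How do refunds work?":       [0.88, 0.15, 0.02],
    "What's the weather?":        [0.00, 0.20, 0.95],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class SemanticCache:
    """A list stands in for the Redis vector index."""
    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.store = []  # (embedding, answer) pairs

    def get(self, query):
        vec = EMBED[query]
        # Linear scan here; Redis does this with a real vector index.
        for cached_vec, answer in self.store:
            if cosine(vec, cached_vec) >= self.threshold:
                return answer  # semantic cache hit: no LLM call
        return None

    def put(self, query, answer):
        self.store.append((EMBED[query], answer))

cache = SemanticCache()
llm_calls = 0

def ask(question):
    global llm_calls
    answer = cache.get(question)
    if answer is None:
        llm_calls += 1  # pretend this is a paid LLM API call
        answer = f"LLM answer to: {question}"
        cache.put(question, answer)
    return answer

first = ask("What's your refund policy?")  # cache miss: calls the "LLM"
second = ask("How do refunds work?")       # similar wording: cached answer
```

Both questions return the same answer, but only the first one costs an "LLM call"; the second is served from the cache because its toy embedding is nearly parallel to the first one.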
1 like • 3d
Haven't looked at this for myself yet, but it's very exciting! So it doesn't just do exact matches, it has built-in semantic matching? Because that would be SO USEFUL
1 like • 3d
@Tanner Woodrum SO SICK!
Feature Request: Health recharge in Control Points
It would be sick if your health recharged when you capture control points, and if droids could recapture the control points!
1 like • 3d
@Tanner Woodrum YESSSSSS
Concurrency in n8n
Let's say I have a workflow that I want to split and go 3 different ways AT THE SAME TIME. Imagine an LLM receives a request and produces 3 JSON objects that need to go down 3 different paths and can be processed concurrently. How would you do that? And, as a follow-up, is there a way to join those 3 paths back together at the end?
1 like • 3d
@Tanner Woodrum Bro thank you so much haha this is LIFE CHANGING
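For reference, the fan-out/fan-in pattern the question above describes can be sketched in plain Python with asyncio. This is only an illustration of the concurrency pattern, not n8n itself: the three handler names are hypothetical stand-ins for the three branches, and the gathered list plays the role a merge step would play at the end of the workflow.

```python
import asyncio

# Hypothetical handlers for the three paths the LLM output fans out to.
async def enrich(obj):
    await asyncio.sleep(0.01)  # stand-in for slow I/O (API call, DB, etc.)
    return {**obj, "enriched": True}

async def classify(obj):
    await asyncio.sleep(0.01)
    return {**obj, "label": "demo"}

async def summarize(obj):
    await asyncio.sleep(0.01)
    return {**obj, "summary": "..."}

async def main():
    # The 3 JSON objects produced by the LLM.
    items = [{"id": 1}, {"id": 2}, {"id": 3}]
    # Fan out: all three paths run concurrently...
    results = await asyncio.gather(
        enrich(items[0]),
        classify(items[1]),
        summarize(items[2]),
    )
    # ...fan in: gather joins them back into one ordered list.
    return results

results = asyncio.run(main())
```

The key property is that `asyncio.gather` both starts the three coroutines concurrently and blocks until all of them finish, returning their results in the order they were passed in, which is exactly the split-then-join shape the question asks about.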
Jackson Oaks (@jackson-oaks-5755) • Joined Dec 8, 2025