n8n v2.0 just dropped! Here's How to Update Your Self-Hosted Instance
Here's a step-by-step guide on how to update your self-hosted n8n so you can access the pre-release v2.0.0! Don't take the risk of guessing; follow along with me and I'll help make sure you do it right the first time! Worried about losing the workflows you've worked so hard to build? I know I was, so I went ahead and built a backup and restore system to give you peace of mind when you go to update your server. Stay tuned for more videos covering the update and other releases!
Bug reports and feature requests!
This game has taken off and I love it! It started out as a way for me to just veg and take a mental break, and now there are a handful of users already playing. Such a cool feeling to have people using something you've built. That said, I started this by accident, haha. So I need your help: I want the game to progress and get better, and for that I need your feedback on both bugs and features.

If something isn't working right, or if a mechanic is clunky, let me know. If you think there's something that would be amazing to add, let me know, and if it gets enough support from the community, we'll add it. I want the game to be what y'all want it to be, so let's improve it! Keep tabs on the Ghostcoded Clone Wars game category to follow the discussion for all of this!
New Redis vector store node to reduce LLM cost and increase semantic search!
Ever wonder how you could save on LLM token usage when people ask the same or SEMANTICALLY similar questions? Enter the new Redis Vector Store node!

This is from a template workflow on n8n's website:

"Stop Paying for the Same Answer Twice

Your LLM is answering the same questions over and over. 'What's the weather?' 'How's the weather today?' 'Tell me about the weather.' Same answer, three API calls, triple the cost. This workflow fixes that.

What Does It Do?

Semantic caching with superpowers. When someone asks a question, it checks if you've answered something similar before. Not exact matches: semantic similarity. If it finds a match, boom, instant cached response. No LLM call, no cost, no waiting.

First time: 'What's your refund policy?' → Calls LLM, caches answer
Next time: 'How do refunds work?' → Instant cached response (it knows these are the same!)

Result: Faster responses + way lower API bills"

This is HUGE! Cutting the cost of API usage AND speeding up responses! Here is a downloadable template to play with for now. I'll be releasing a video this next week showcasing how to set it up and use it!

https://n8n.io/workflows/10887-reduce-llm-costs-with-semantic-caching-using-redis-vector-store-and-huggingface/
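If you want to see the idea behind the workflow in plain code, here's a minimal in-memory sketch of semantic caching in Python. To be clear, this is NOT the n8n node or the Redis vector store itself: it swaps in a toy bag-of-words similarity where the real template uses HuggingFace embeddings, and a plain Python list where the template uses Redis, but the cache-hit logic is the same idea.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model. The actual n8n template
    # uses HuggingFace embeddings; this just counts words.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class SemanticCache:
    """In-memory semantic cache: answers come back for similar
    questions, not just exact repeats."""

    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def get(self, question):
        # Return a cached answer if any stored question is similar
        # enough; otherwise None (meaning: go call the LLM).
        q = embed(question)
        for emb, answer in self.entries:
            if cosine(q, emb) >= self.threshold:
                return answer  # cache hit: no LLM call, no cost
        return None

    def put(self, question, answer):
        # Store the question's embedding alongside the LLM's answer.
        self.entries.append((embed(question), answer))
```

In the real workflow, Redis stores the embedding vectors and runs the similarity search for you; the threshold here plays the same role as the similarity cutoff you'd tune in the template.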
Feature Request: Health recharge in Control Points
It would be sick if your health recharged when you capture a control point, and if droids could recapture control points!
Ray shields now just bind, but don’t do damage
The ray shields used to deal damage, and would even take your health into the negatives. Now, if you're caught in a ray shield, you'll only be bound; you won't take damage! More how it should be! Enjoy!
Ghostcoded
skool.com/ghostcoded-6351
A community built to master n8n and to think like a software engineer. Join to learn, discuss, and build tools that make a