
Memberships

- Real-World n8n Builders: 238 members • Free
- AI Automation Collective: 105 members • Free
- AI Automation Mastery: 20.9k members • Free
- Vibin' Coders: 250 members • $314/y
- Content Academy: 13.5k members • Free
- No-Code Architects: 519 members • $97/m

8 contributions to Real-World n8n Builders
Question: Best RAG strategy for 12 large textbooks?
Hi @Nadia Privalikhina & @Serop B, I have a client project using n8n and need some advice on the RAG setup. The client has 12 large textbooks (PDFs). I need to build an automation where the AI generates course lessons, but it must use only specific books or chapters as the source material (not the whole library at once). I know you'll cover this in the upcoming course, but I'd like to understand it quickly because I need it right now. My questions:
1. What is the best strategy to chunk or split these large files? Should I split them by chapter before uploading?
2. How do I set up the metadata so the AI knows which book is which?
3. Are there any specific tutorials or resources you recommend for building a "textbook RAG" system in n8n?
Thanks in advance!
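For readers wondering what "chunking with metadata" looks like in practice, here is a minimal sketch (plain Python, outside n8n; the function name and the `book`/`chapter` metadata keys are illustrative, not from any specific library): split each chapter's text into overlapping chunks and tag each chunk, so a vector store can later filter retrieval to one book or chapter.

```python
def chunk_chapter(text, book, chapter, size=1000, overlap=200):
    """Split one chapter's text into overlapping chunks, each tagged
    with metadata so retrieval can be filtered per book/chapter.
    (Illustrative sketch; chunk sizes are arbitrary defaults.)"""
    chunks = []
    step = size - overlap  # assumes overlap < size
    for start in range(0, len(text), step):
        chunks.append({
            "text": text[start:start + size],
            "metadata": {"book": book, "chapter": chapter},
        })
        if start + size >= len(text):
            break  # last chunk already reaches the end of the text
    return chunks
```

Most vector stores (and n8n's vector-store nodes) accept a metadata object alongside each chunk, which is what makes the "only this book/chapter" restriction possible at query time.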
1 like • 8d
Thanks @Nadia Privalikhina & @Serop B. Thank you both for the pivot in strategy; this saves me a lot of over-engineering! To clarify: no, it is not a chatbot. It is strictly a backend automation to bulk-generate hundreds of course lessons based on the textbooks. My follow-up question is about the "no-vector" approach: since I'm skipping the vector database, my main challenge becomes parsing. These are PDF textbooks. Do you have a recommended tool, n8n node, or external API that reliably splits PDFs based on the table of contents / chapters?
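The splitting logic itself is simple once you have the table of contents. A parser such as LlamaParse (recommended later in this thread) or pypdf's outline can give you the chapter start pages; the sketch below only shows the range arithmetic, assuming a hypothetical `toc` list of `(title, start_page)` pairs: each chapter runs to the page before the next one starts.

```python
def chapter_ranges(toc, total_pages):
    """Turn a table of contents [(title, start_page), ...] into
    (title, first_page, last_page) ranges, 1-based and inclusive.
    The last chapter runs to the final page of the book."""
    toc = sorted(toc, key=lambda entry: entry[1])
    ranges = []
    for i, (title, start) in enumerate(toc):
        # a chapter ends one page before the next chapter begins
        end = toc[i + 1][1] - 1 if i + 1 < len(toc) else total_pages
        ranges.append((title, start, end))
    return ranges
```

Each resulting page range can then be extracted as its own sub-document and parsed or summarized independently.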
1 like • 8d
@Nadia Privalikhina Got it, Nadia. I'm going to rewatch the Week 3 lesson and use LlamaParse. I was a bit busy due to some family reasons, but I'll get back to work soon. Thanks!
🎉 Week 2 Completed — Great Job, Everyone!
We've officially wrapped up Week 2 of the cohort, and what an amazing week it has been! 🚀 So far, we dove deep into n8n fundamentals and explored two real-world use cases Nadia and I have already sold to clients:
✨ Automated Proposal Generation
✨ Email Auto-Responder Systems
It was incredible to see the group applying these skills so quickly and creatively. You submitted your homework, tackled brainstorming tasks, and pushed yourselves to build practical automations that bring real value. 👏
Huge congratulations to everyone who submitted their work on time, and an equally big shoutout to those still working hard to learn n8n and integrate AI into their daily workflows. Your effort, curiosity, and consistency are what make this community special. Let's keep the momentum going. Week 3, here we come! 💡
2 likes • 18d
I’ve already built some automations, but there’s still a lot to learn. The calls and lessons have been incredibly helpful. Thanks, Nadia & Serop
1 like • 17d
@Nadia Privalikhina Thanks Nadia, Looking forward to it.
n8n Error Trigger node
Hi @Serop B, I have a quick question. In our first office hours session, you suggested using the Error Trigger node to get notified whenever a client's production workflow runs into an issue. I recorded a short video showing what I'm trying to do. Could you take a look and help me with this? Thanks!
Issue: https://www.veed.io/view/40ff38f5-8915-44e9-8ce7-2bb2ac0c0c6f?source=editor&panel=share
1 like • 25d
@Serop B Oh okay, got it now. Silly me 😅 Thanks✌️
n8n Crash Course (week 1) is now live
Hi guys, we just created a crash course in n8n and posted it in the cohort under Week 1. It covers some of the basic n8n concepts we noticed are a little challenging. Make sure to check it out; it's about 50 minutes long. You'll find it under: Classroom -> Week 1 -> Crash Course. Enjoy!
2 likes • Nov 5
Thanks @Serop B ✌️
Need Help: Handling Large File Transcriptions in My Client Workflow
Here's the context: I've designed an automation where, whenever a new file (audio or video) is dropped into a client's "Contact Inbox" folder in Google Drive, the system:
1. Downloads the file
2. Sends it to OpenAI for transcription
3. Analyzes the transcript
The issue: everything works perfectly for files under 25 MB, but OpenAI's size limit causes the workflow to fail for anything larger. What's the best approach to handle this efficiently, especially for large video or audio files that exceed the 25 MB OpenAI limit?
Video walkthrough: https://www.veed.io/view/fd07e9f3-6934-4f48-bd97-c1bb9916244a?source=editor&panel=share
Thanks in advance! – Ved
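One common workaround for the 25 MB cap is to split the audio into smaller segments with ffmpeg before transcribing each piece separately. As a sketch (the output pattern and segment length are illustrative assumptions; tune `segment_secs` to your audio bitrate so each piece lands under the limit), here is how the ffmpeg command could be assembled, e.g. from an n8n Execute Command node or a helper script:

```python
def ffmpeg_split_cmd(src, out_pattern="part_%03d.m4a", segment_secs=600):
    """Build an ffmpeg command that strips video and splits the audio
    stream into fixed-length segments without re-encoding. Each segment
    can then go to the transcription API as its own request."""
    return [
        "ffmpeg", "-i", src,
        "-vn",                      # drop any video stream
        "-acodec", "copy",          # copy audio as-is, no re-encode
        "-f", "segment",            # ffmpeg's segment muxer
        "-segment_time", str(segment_secs),
        out_pattern,
    ]
```

The resulting command list can be run with `subprocess.run(...)`; the per-segment transcripts are then concatenated in order before the analysis step.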
1 like • Oct 30
@Nadia Privalikhina Thanks! I'll introduce it to my client on today's call. Hopefully they'll like it, and then I'll integrate it into my system.
0 likes • Oct 30
@Nadia Privalikhina Thanks!!!
Ved Automation
@ved-leverage
Leveraging AI and automation to help community owners streamline operations, engage members, and scale their communities effortlessly.

Joined Oct 23, 2025