Day 50 Of Learning In Public! -> Instant RAG Retrieval!
Today I worked entirely on the RAG Platform and explored different ways to speed up retrieval!
I started with Supabase as my vector DB and a local embedding model (nomic-embed-text), and I also added a contextual ingestion step!
The idea: for each chunk, you pass the chunk along with the whole document to the LLM, and the LLM returns a short context that you prepend to the chunk before embedding it, which improves retrieval accuracy!
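Roughly what that can look like in code (a minimal sketch, not my exact pipeline: the Ollama client, the chat model name, the prompt, and the Supabase "documents" table/columns are all assumptions):

```python
# Minimal sketch of contextual ingestion: for each chunk, ask a local LLM to
# situate the chunk within the full document, prepend that context, embed the
# result with nomic-embed-text, and store it in Supabase (pgvector).
# Assumes Ollama is serving both the chat model and the embedding model.
import ollama
from supabase import create_client

supabase = create_client("https://<project>.supabase.co", "<service-role-key>")

CONTEXT_PROMPT = """<document>
{document}
</document>

Here is a chunk from the document above:
<chunk>
{chunk}
</chunk>

Write 1-2 sentences situating this chunk within the overall document,
to improve search retrieval. Answer with the context only."""

def ingest_chunk(document: str, chunk: str) -> None:
    # 1. Ask the LLM for a chunk-specific context string.
    response = ollama.chat(
        model="llama3.1",  # assumption: any local chat model works here
        messages=[{"role": "user",
                   "content": CONTEXT_PROMPT.format(document=document, chunk=chunk)}],
    )
    context = response["message"]["content"].strip()

    # 2. Prepend the context to the chunk before embedding it.
    contextualized = f"{context}\n\n{chunk}"
    embedding = ollama.embeddings(model="nomic-embed-text",
                                  prompt=contextualized)["embedding"]

    # 3. Store the contextualized chunk and its vector in Supabase.
    supabase.table("documents").insert(
        {"content": contextualized, "embedding": embedding}
    ).execute()
```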
Then my mentor suggested switching to a Redis Vector Store, and it worked really well! Retrieval got noticeably faster.
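For reference, here's a rough sketch of what the Redis retrieval side can look like with redis-py's search commands (the index name, key prefix, field names, and vector dimension are placeholders, not my actual setup):

```python
# Minimal sketch of KNN retrieval from a Redis vector index using redis-py.
# Assumes an index "rag_idx" over HASH keys prefixed "chunk:", with a "text"
# field and a 768-dim float32 "embedding" field (nomic-embed-text's size).
import numpy as np
import redis
from redis.commands.search.query import Query

r = redis.Redis(host="localhost", port=6379)

def retrieve(query_embedding: list[float], k: int = 5):
    # KNN query: find the k chunks whose vectors are closest to the query vector.
    q = (
        Query(f"*=>[KNN {k} @embedding $vec AS score]")
        .sort_by("score")
        .return_fields("text", "score")
        .dialect(2)
    )
    params = {"vec": np.asarray(query_embedding, dtype=np.float32).tobytes()}
    results = r.ft("rag_idx").search(q, query_params=params)
    return [(doc.text, float(doc.score)) for doc in results.docs]
```

Because the index lives in memory, the nearest-neighbour lookup itself is very fast, which is where most of the speed-up over my earlier setup came from.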
Now I'm just waiting for approval to start working on V2 of the project: connecting the platform to a database and exposing APIs for the frontend!
That's it for today! Let me know what you think!
#RAGPlatform #VectorDatabase #Supabase #RedisVectorStore #LocalEmbeddingModel #NomicEmbedText #ContextualIngestion #AIIngestion #RAG #LLMIntegration #LearningInPublic #Day50 #DatabaseIntegration #APIDevelopment #FrontendIntegration #AIWorkflow #TechJourney