
Owned by Holger

Discover AI and ChatGPT. Learn, understand, and apply it: your smart entry into the world of AI. We'll take you along for the journey.

Memberships

Early AI-dopters

770 members • $59/month

AI Bits and Pieces

275 members • Free

KI Agenten Campus

2.2k members • Free

Artificial Intelligence AI

54 members • $5/m

Die KI - Lounge ...

3.7k members • Free

Vibe Coders

86 members • Free

AI Automations by Jack

1.4k members • $77/m

AI Foundations

794 members • $97/m

AI Marketing Forum

1k members • Free

11 contributions to AI Bits and Pieces
🌟 Day 4 – Diving Into the First Pillar: Chunking
A few weeks ago, I wrote a post saying I had zero idea what “chunking” even was. Now, a few weeks later, I definitely understand more, but not enough. That’s why today is fully dedicated to Pillar 1: Chunking.

Back then I got great examples:
🍞 “Slice a loaf of bread into pieces.”
🍕 “Cut a pizza into slices.”
Perfect analogies, and still true. But now I understand why chunking is so important:

🔹 What Chunking Really Is
Chunking is the most critical preprocessing step in any RAG system. It means breaking large documents into smaller, meaningful segments (“chunks”), which are then embedded, indexed, and retrieved later. Chunks are the atomic information units your RAG system uses. If the chunks are bad, retrieval is bad, and the LLM can’t fix it.

🔹 The Core Dilemma
Chunking is always a balance between:
1️⃣ Precision – smaller chunks give cleaner embeddings
2️⃣ Context – bigger chunks give more meaning to the LLM
Too big → diluted meaning. Too small → missing context. And THAT is the hardest challenge in chunking.

🔹 Best Practices for Chunking
Here are the key strategies I’m learning:
📌 Recursive Character Chunking: respects natural text boundaries (paragraphs, sentences). Often the recommended default.
📌 Overlap (10–20%): ensures context isn’t lost at the edges. Example: 500-token chunk → 50–100-token overlap.
📌 Optimal Sizes: a strong starting point is 512–1024 tokens per chunk.
📌 Advanced Methods:
– Semantic Chunking: uses embeddings to detect topic changes
– Agentic Chunking: an LLM splits text into atomic, meaningful statements
These methods help avoid context loss and improve retrieval quality.

🔹 Why This Matters
Chunking literally determines what your RAG system can find. And if retrieval fails, the LLM fails; it can’t magically invent the missing context.

All resources, diagrams, and notes as always:
👉 Notebook: https://notebooklm.google.com/notebook/ea1c87b2-0eda-43f8-a389-ba1f57e758ce
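To make the overlap idea concrete, here is a minimal sketch of fixed-size chunking with overlap. It counts characters rather than tokens for simplicity (a real RAG pipeline would use a tokenizer); the function name and sizes are just illustrative.

```python
# Minimal fixed-size chunking with overlap, measured in characters.
# Production systems usually measure in tokens and respect sentence
# boundaries (recursive character chunking); this only shows the mechanics.

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 75) -> list[str]:
    """Split `text` into chunks of `chunk_size` characters, where each
    chunk shares `overlap` trailing characters with the next one (~15%)."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

doc = "word " * 300  # toy 1500-character "document"
chunks = chunk_text(doc, chunk_size=500, overlap=75)
print(len(chunks), len(chunks[0]))
```

Because the window advances by `chunk_size - overlap`, the last 75 characters of each chunk reappear at the start of the next, which is exactly how overlap prevents context from being cut off at chunk edges.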
0 likes • 16h
@Michael Wacht yes 😁
0 likes • 11h
@Roger Richards you are welcome 😊
Day 1 – My RAG Mastery Challenge Starts Now
What happens when you dive one hour a day into RAG with absolute intensity? I’m about to find out.

Starting today, I’m committing to a personal challenge: every single day, I will spend at least one hour digging deep into Retrieval-Augmented Generation, and I’ll share every step of my progress right here.

Why? Because I want to grow. Deeply. Consistently. With purpose. And because RAG is becoming one of the most important building blocks of future AI systems.

To make this challenge truly powerful, I need your support and everything this community has. 🔥 I genuinely need all of it:
– your RAG automations
– your RAG-enabled AI agents
– your workflows
– your best practices
– your mistakes and lessons
– your resources, tutorials, and websites
– every piece of knowledge that exists here
– every experience you’ve gained

To start this challenge the right way, I need your help with a few key questions:
🔍 Which RAG automations have you already built?
🔍 What RAG-related information, examples, or materials already exist in this community?
🔍 What was absolutely essential for you to truly understand RAG?
🔍 Which websites, videos, or tutorials helped you the most?
🔍 Which RAG systems have you built, and would you be open to sharing them with me?

I’m excited to dive deeper every single day and to build real RAG excellence together with all of you. Let’s go. Please put all the information in the comments; it will be very helpful.
1 like • 2d
@Michael Wacht thank you Michael. I Like it😊
1 like • 1d
@Michael Wacht Sounds very interesting 😊
Day 3 – The 5 Pillars of High-Quality RAG
Continuing with the theory: it honestly feels like studying for a new degree. But understanding the fundamentals is essential to make better decisions later.

Today I found an excellent video, and the key lesson is this:
👉 The quality of any RAG system depends on 5 core factors.

The LLM is the master chef; retrieval is the cook bringing the ingredients. If the ingredients are bad, the final dish will be bad, no matter how good the chef is.

Here are the 5 pillars, short and clear:
1️⃣ Chunk Size: chunks must be the right size. Too big overloads the context; too small loses meaning.
2️⃣ Query Construction: better queries = better retrieval. Multi-Query RAG helps cover synonyms and variations.
3️⃣ Embedding Choice: dense, sparse, or hybrid; your choice directly impacts search quality.
4️⃣ Retrieval Quality: the most critical point. If retrieval brings irrelevant content, the answer will be bad. Metadata and filters improve relevance dramatically.
5️⃣ Generation Layer: good prompting shapes the tone, structure, and quality of the final output.

➡️ Master these 5 basics, and your RAG accuracy improves dramatically.

As always, you can find all materials in my notebook:
👉 https://notebooklm.google.com/notebook/ea1c87b2-0eda-43f8-a389-ba1f57e758ce
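Pillar 2 (query construction) can be sketched without any LLM at all. The core idea of Multi-Query RAG is simply: retrieve with several rephrasings of the same question and merge the results. In the toy sketch below the query variants are hard-coded stand-ins for what an LLM would generate, and the keyword "retriever" is a deliberately simplistic placeholder for a real vector search.

```python
# Toy Multi-Query retrieval: run several variants of the same question
# against the retriever, then merge and deduplicate the hits.
# A real system would use an LLM to generate variants and a vector index
# for retrieval; this only demonstrates the merge logic.

def retrieve(query: str, index: dict[str, list[str]]) -> list[str]:
    """Toy keyword retriever: return chunks whose keyword appears in the query."""
    hits = []
    for keyword, chunks in index.items():
        if keyword in query.lower():
            hits.extend(chunks)
    return hits

def multi_query_retrieve(variants: list[str], index: dict[str, list[str]]) -> list[str]:
    """Merge results from all query variants, preserving order, dropping duplicates."""
    seen, merged = set(), []
    for query in variants:
        for chunk in retrieve(query, index):
            if chunk not in seen:
                seen.add(chunk)
                merged.append(chunk)
    return merged

index = {
    "chunking": ["Chunking splits documents into segments."],
    "splitting": ["Overlap preserves context at chunk edges."],
}
variants = ["What is chunking?", "How does document splitting work?"]
print(multi_query_retrieve(variants, index))
```

A single phrasing only matches one keyword here; the two variants together cover the synonym ("chunking" vs. "splitting"), which is exactly why multi-query helps with vocabulary mismatch.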
Day 2 – Building the Foundation Before the Framework
In the past, I often just jumped straight into things without thinking much about the technical foundation. It worked, but only up to a certain point.

For my RAG Mastery Challenge, I’ve completely changed my approach: this time, I want to understand the theory from the very beginning, so I don’t run into traps later and can progress much faster thanks to a solid knowledge base. In other words: no building the roof before laying the foundation.

So today was all about research:
– What exactly is RAG?
– What does Retrieval-Augmented Generation really mean?
– Why is it important? What problems does it solve?
– How does it fit into modern AI workflows?

Many of you already know this deeply; for me, the knowledge was only halfway complete. So today I:
📌 watched several YouTube videos
📌 compared fundamental explanations
📌 sketched the core concepts
📌 and created my own NotebookLM learning notebook

And because I don’t want to learn just for myself but for all of us, I’m sharing the notebook here:
👉 My RAG Learning Notebook (NotebookLM): https://notebooklm.google.com/notebook/ea1c87b2-0eda-43f8-a389-ba1f57e758ce

This is where I’m collecting all the learning materials that will support me along the way: videos, explanations, sources, examples, definitions, and diagrams.

If you have anything to add, please let me know! I’ll include everything so we can build a powerful shared RAG learning template.

Day 2 complete. The foundation is set; tomorrow we go deeper.
📒 AI Term Daily Dose – ChatGPT
Term: ChatGPT
Level: Beginner
Category: Core Concept

🪄 Simple Definition: ChatGPT is an AI chatbot built by OpenAI that can answer questions, explain ideas, and create text that sounds human.

🌟 Expanded Definition: ChatGPT runs on a GPT model and is designed for conversation. Instead of just generating text, it can follow instructions, hold context across turns, and adjust tone or style. People use it to brainstorm ideas, draft emails, explain concepts, or even role-play scenarios. Think of it as a “smart assistant” that communicates in natural language.

⚡ In Action: You ask: “Write a bedtime story about a robot who loves pizza.” ChatGPT replies with a fun, creative story in seconds.

💡 Pro Tip: The clearer your prompt, the better the result. Treat ChatGPT like a collaborator: the more context you give, the more useful the answer. For great prompt advice, refer to this post by @Roger Richards: https://www.skool.com/ai-bits-and-pieces/get-around-prompting-8-core-chatgpt-skills-to-always-get-the-result-you-want?p=d6453dc8
3 likes • 6d
@Michael Wacht I like it
Holger Peschke
Level 3 • 40 points to level up
@holger-peschke-6316
Expert in AI automation, AI agents, and ChatGPT. Focus on digital transformation, innovation, prompting, and generative AI.

Joined Oct 29, 2025
ESTJ
Bamberg