
Owned by Michael

AI Bits and Pieces

275 members • Free

Build real-world AI fluency while having fun with daily quips, pro tips, and insights on people + AI.

Memberships

GEO Lab - Rank in AI Answers

113 members • Free

Grow With Evelyn

2.4k members • $33/month

AI Automation Society Plus

2.9k members • $94/month

110 contributions to AI Bits and Pieces
📦 Out of The Box Series: HeyGen My Avatar in 60
Welcome to the Out of The Box Series — where I test how far curiosity and AI can take you in 30, 60, or 90 minutes, using today’s best no-code and low-code tools. No setup. No training. Just pure exploration — right out of the box.

🎬 This Episode: HeyGen
🕒 Time Limit: 60 Minutes
📂 Category: AI Avatar Video Creation

What is HeyGen? HeyGen is an AI video generation platform that turns a photo (or a default avatar) and a text script into lifelike talking videos.

💡 What I Built in 60 Minutes: Using a series of five close-up photos, a few short scripts, and HeyGen’s My Avatar, Voice Mirror, and Create Video features, I created My Avatar. Once the photos (shown below) were uploaded, I generated a full talking video just by entering a script.

Real Avatar Videos:
Side Photo (6 seconds): https://app.heygen.com/videos/4e059b101cbc4847bca60f9d8f8a326d
Front Photo Holiday Message (24 seconds): https://app.heygen.com/videos/baee6d942e4342dea949c84b75464bcf

This is what out of the box really looks like. Just a few photos, a few prompts, and 60 minutes.

Interested in HeyGen? Click here: https://www.heygen.com/invite/7FFDTETE

Have a blessed and creative AI day! @Michael Wacht
🌟 Day 4 – Diving Into the First Pillar: Chunking
A few weeks ago, I wrote a post saying I had zero idea what “chunking” even was. Now, a few weeks later, I definitely understand more, but not enough. That’s why today is fully dedicated to Pillar 1: Chunking.

Back then I got great examples:
🍞 “Slice a loaf of bread into pieces.”
🍕 “Cut a pizza into slices.”
Perfect analogies — and still true. But now I understand why chunking is so important:

🔹 What Chunking Really Is
Chunking is the most critical preprocessing step in any RAG system. It means breaking large documents into smaller, meaningful segments (“chunks”), which are then embedded, indexed, and retrieved later. Chunks are the atomic information units your RAG system uses. If the chunks are bad, retrieval is bad — and the LLM can’t fix it.

🔹 The Core Dilemma
Chunking is always a balance between:
1️⃣ Precision – smaller chunks give cleaner embeddings
2️⃣ Context – bigger chunks give more meaning to the LLM
Too big → diluted meaning. Too small → missing context. And THAT is the hardest challenge in chunking.

🔹 Best Practices for Chunking
Here are the key strategies I’m learning (see the small code sketch below):
📌 Recursive Character Chunking – respects natural text boundaries (paragraphs, sentences). Often the recommended default.
📌 Overlap (10–20%) – ensures context isn’t lost at the edges. Example: 500-token chunk → 50–100-token overlap.
📌 Optimal Sizes – a strong starting point is 512–1024 tokens per chunk.
📌 Advanced Methods:
– Semantic Chunking: uses embeddings to detect topic changes
– Agentic Chunking: an LLM splits text into atomic, meaningful statements
These methods help avoid context loss and improve retrieval quality.

🔹 Why This Matters
Chunking literally determines what your RAG system can find. And if retrieval fails, the LLM fails — it can’t magically invent the missing context.

All resources, diagrams, and notes as always:
👉 Notebook: https://notebooklm.google.com/notebook/ea1c87b2-0eda-43f8-a389-ba1f57e758ce
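To make the recursive-splitting and overlap ideas concrete, here is a minimal, self-contained sketch in Python. It is my own illustration, not code from the post or from any specific library (LangChain and others ship more polished versions of this), and it measures size in characters rather than tokens to keep the example simple.

```python
# Sketch of recursive character chunking with overlap (illustration only).
# Sizes are in characters here; the post's 512-1024 guidance is in tokens.

SEPARATORS = ["\n\n", "\n", ". ", " "]  # paragraph -> line -> sentence -> word

def recursive_chunk(text: str, chunk_size: int = 1000, overlap: int = 150,
                    separators=SEPARATORS) -> list[str]:
    """Split text at the coarsest natural boundary that keeps chunks under chunk_size."""
    if len(text) <= chunk_size:
        return [text]
    sep = next((s for s in separators if s in text), None)
    if sep is None:
        # No natural boundary left: fall back to a hard sliding window with overlap.
        step = chunk_size - overlap
        return [text[i:i + chunk_size] for i in range(0, len(text), step)]
    pieces, chunks, current = text.split(sep), [], ""
    for piece in pieces:
        candidate = (current + sep + piece) if current else piece
        if len(candidate) <= chunk_size:
            current = candidate
        else:
            if current:
                chunks.append(current)
            # Carry an overlap "tail" into the next chunk so edge context is not lost.
            current = (current[-overlap:] + sep + piece) if current else piece
    if current:
        chunks.append(current)
    # Any chunk that is still too long gets split again with finer separators.
    return [c2 for c in chunks for c2 in recursive_chunk(c, chunk_size, overlap, separators[1:])]

if __name__ == "__main__":
    doc = "Chunking determines what a RAG system can retrieve.\n\n" * 40
    for i, chunk in enumerate(recursive_chunk(doc, chunk_size=400, overlap=60)):
        print(i, len(chunk), repr(chunk[:40]))
```

The same balance the post describes shows up directly in the two parameters: a larger `chunk_size` buys context, a larger `overlap` protects boundary meaning at the cost of some duplicated text in the index.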
1 like • 18h
This is very informative, now you need to explain “embedding”! LOL. Very well done, thanks for sharing this journey.
📒 AI Term Daily Dose – Response
Term: Response
Level: Beginner
Category: Core Concept

🪄 Simple Definition: A response is the answer the AI gives back after you type a prompt.

🌟 Expanded Definition: The response is the AI’s output — the text it generates based on your prompt. Responses can be short (a single fact) or long (a story, plan, or explanation). Since the AI is predicting the most likely next words, its responses can sound natural and human-like, though sometimes they may be off or inaccurate.

⚡ In Action:
Prompt: “Write a haiku about the ocean.”
Response: The AI generates a three-line poem with 5-7-5 syllables.

💡 Pro Tip: If a response isn’t what you wanted, refine your prompt or ask the AI to try again. Think of it as a conversation where you guide the answer.
Day 1 – My RAG Mastery Challenge Starts Now
What happens when you dive one hour a day into RAG with absolute intensity? I’m about to find out.

Starting today, I’m committing to a personal challenge: every single day, I will spend at least one hour digging deep into Retrieval-Augmented Generation, and I’ll share every step of my progress right here.

Why? Because I want to grow. Deeply. Consistently. With purpose. And because RAG is becoming one of the most important building blocks of future AI systems.

To make this challenge truly powerful, I need your support and I need everything this community has. 🔥 I genuinely need all of it:
– your RAG automations
– your RAG-enabled AI agents
– your workflows
– your best practices
– your mistakes and lessons
– your resources, tutorials, and websites
– every piece of knowledge that exists here
– every experience you’ve had

To start this challenge in the right way, I need your help with a few key questions:
🔍 Which RAG automations have you already built?
🔍 What RAG-related information, examples, or materials already exist in this community?
🔍 What was absolutely essential for you to truly understand RAG?
🔍 Which websites, videos, or tutorials helped you the most?
🔍 Which RAG systems have you built — and would you be open to sharing them with me?

I’m excited to dive deeper every single day and to build real RAG excellence together with all of you. Let’s go.

Please put all the information in the comments; it will be helpful.
1 like • 2d
@Holger Peschke Thank you for sharing this journey. For those in the community who are interested in building automations and workflows, this will be essential!
2 likes • 1d
I built a product knowledge base using Lovable, Pinecone, Google Drive, and n8n.
Day 3 – The 5 Pillars of High-Quality RAG
Continuing with the theory — it honestly feels like studying for a new degree. But understanding the fundamentals is essential to make better decisions later.

Today I found an excellent video, and the key lesson is this:
👉 The quality of any RAG system depends on 5 core factors.
The LLM is the master chef. Retrieval is the cook bringing the ingredients. If the ingredients are bad, the final dish will be bad, no matter how good the chef is.

Here are the 5 pillars, short and clear (a minimal end-to-end sketch follows below):

1️⃣ Chunk Size – chunks must be the right size: too big overloads context, too small loses meaning.
2️⃣ Query Construction – better queries = better retrieval. Multi-Query RAG helps cover synonyms and variations.
3️⃣ Embedding Choice – dense, sparse, or hybrid; your choice directly impacts search quality.
4️⃣ Retrieval Quality – the most critical point. If retrieval brings irrelevant content, the answer will be bad. Metadata & filters improve relevance dramatically.
5️⃣ Generation Layer – good prompting shapes tone, structure, and quality of the final output.

➡️ Master these 5 basics, and your RAG accuracy improves dramatically.

As always, you can find all materials in my notebook:
👉 https://notebooklm.google.com/notebook/ea1c87b2-0eda-43f8-a389-ba1f57e758ce
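To show where the five pillars sit in an actual pipeline, here is a toy retrieve-then-generate sketch of my own (not from the video). The `embed` and `llm` functions are placeholders standing in for a real embedding model and a real chat model; the chunks and query are made-up examples.

```python
# Toy retrieve-then-generate loop annotated with the 5 pillars (illustration only).
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: map text to a vector. Swap in a real embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))  # deterministic fake vector
    return rng.standard_normal(384)

def llm(prompt: str) -> str:
    """Placeholder: swap in a real chat-completion call here."""
    return f"[LLM answer based on a prompt of {len(prompt)} characters]"

# Pillar 1: chunks, already split to a sensible size elsewhere.
chunks = [
    "Chunk size balances precision and context.",
    "Multi-query retrieval covers synonyms and phrasing variations.",
    "Metadata filters improve retrieval relevance.",
]
index = [(c, embed(c)) for c in chunks]               # Pillar 3: embedding choice

def retrieve(query: str, k: int = 2) -> list[str]:    # Pillar 4: retrieval quality
    q = embed(query)                                   # Pillar 2: query construction
    scored = [(float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))), c)
              for c, v in index]                       # cosine similarity
    return [c for _, c in sorted(scored, reverse=True)[:k]]

def answer(query: str) -> str:                         # Pillar 5: generation layer
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return llm(prompt)

print(answer("Why does chunk size matter?"))
```

The point of the sketch is the shape, not the code: each pillar the video names corresponds to one concrete stage, and a weakness at any stage limits everything downstream of it.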
1 like • 1d
Excellent breakdown
Michael Wacht
Level 6 • 1,204 points to level up
@michael-wacht-9754
Creator of AI Bits and Pieces | A Nate Herk AIS+ Ambassador | TrueHorizon AI Community Manager | AI & Data Strategies Founder

Active 4h ago
Joined Aug 23, 2025
Mid-West United States