Pinned
Welcome to Clief Notes. Here's where to start.
1. Watch the intro video and introduce yourself in the intro post here.
2. Start with The Foundation (free course): concepts, folder architecture, prompting framework. Everything else builds on this.
3. Check in at the bottom of each lesson. Polls, discussion posts, other members working through the same stuff. Use them.
4. When you're ready to build real things, move to Implementation Playbooks (Level 2). When you're ready to build your own tools, Building Your Stack (Level 3).
5. Post your work. Ask questions. Help others when you can.

What are you here to build?
Pinned
Premium and VIP: Questionnaires Are Live
Saturday Tea is coming, so get your questions in. If you want your questions answered live this Saturday, fill out the questionnaire for your tier below.

Premium (Afternoon Tea): https://forms.gle/k6oSAzeo6LY5pUqA7
VIP (High Tea): https://forms.gle/ngkMV1oSGDHWYHEf8

Drop your questions in early so we can work through as many as possible on the call. See you Saturday!
Pinned
I come asking for help! (NEW ROUND! VOTE ONCE A DAY PLS)
Because of the amazing support you all gave in the first round, Wylder (my stepdaughter) made it into the second round! You can vote once a day, and some days are 2x votes! I would love love love it if any of you could support her chance to work with some of the best animal rescues in the world by casting at least one free vote. You can vote here: Wylder | Junior Ranger. Not AI related, sorry about that!
Highest signal-to-noise MD file length
What has everyone found to be their sweet spot for the line length of their MD files? Wouldn't it be more efficient to have fewer lines in a markdown file but have them be higher quality? You would likely need to create more MD files, which takes more time. However, from my understanding it's easier for LLMs to process many high-quality MD files than one long, noisy file. I was wondering what everybody else thinks.
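One way to ground this question in numbers is to measure your own files. A minimal sketch (the `docs` folder name and the ~4-characters-per-token heuristic are my assumptions, not from the post; real tokenizers will differ):

```python
from pathlib import Path

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # A real tokenizer (e.g. tiktoken) will give different counts.
    return max(1, len(text) // 4)

def md_stats(folder: str) -> list[tuple[str, int, int]]:
    """Return (filename, line_count, estimated_tokens) for each .md file."""
    stats = []
    for path in sorted(Path(folder).glob("*.md")):
        text = path.read_text(encoding="utf-8")
        stats.append((path.name, len(text.splitlines()), estimate_tokens(text)))
    return stats

if __name__ == "__main__":
    for name, lines, tokens in md_stats("docs"):
        print(f"{name}: {lines} lines, ~{tokens} tokens")
```

Running this over your vault makes the trade-off concrete: you can see whether splitting a long file actually changes how much context each retrieval pulls in.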
Here is the current "Free-Tier AI Stack" for 2026
1. The Frontier Giants
• Gemini: Access 1.5B tokens/day on Gemini 1.5 Flash/Pro. That is an astronomical amount of context for RAG and long-document analysis.
• OpenAI: Their "Data Sharing" program offers 250k/2.5M tokens daily.
• xAI Grok: Spend just $5 and unlock $150/month in free credits.
• Amazon AWS: New users get $100 credit for 6 months, providing access to 200+ models including Opus 4.7 and GPT 5.1.

2. Speed & Open-Source Powerhouses
• Groq: The king of inference speed. Access Llama 3.3-70b and Qwen3-32b at speeds that feel like magic, completely free.
• Mistral: Their Experimental Program offers a massive 1B free tokens per month.
• Nvidia: Use the Nemotron suite via their developer playground for high-performance base models.

3. The Aggregators & Community Hubs
• Hugging Face: The "GitHub of AI" provides a Free Serverless Inference API for thousands of models (Llama, Stable Diffusion, Whisper). No credit card required.
• OpenRouter: Access 50+ models with unlimited usage tiers for experimentation.
• Deepinfra: Get 1M tokens/day on Llama/Mistral models just for signing up with an email.

4. Specialized & Niche Access
• Cohere: Their Trial API gives 1,000 calls/month for the best-in-class Rerank v3 and multilingual Aya models.
• Lepton AI: $10 free credit on signup to test Llama and Gemma models in a streamlined playground.

So, what are you building today?
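Several of the providers above (Groq, OpenRouter, Deepinfra) expose OpenAI-compatible chat-completions endpoints, so one small helper covers all of them. A minimal sketch, assuming the standard endpoint shape; the model name and base URLs in the comments are illustrative, so check each provider's docs for current values:

```python
import json
import urllib.request

def build_chat_payload(model: str, prompt: str) -> dict:
    """Standard OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(base_url: str, api_key: str, model: str, prompt: str) -> str:
    """POST to an OpenAI-compatible /chat/completions endpoint
    and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example usage (requires a real key from the provider):
#   reply = chat("https://openrouter.ai/api/v1", my_key,
#                "meta-llama/llama-3.3-70b-instruct", "Say hello.")
# Swapping the base URL (e.g. to Groq's OpenAI-compatible endpoint)
# is usually all it takes to move between these free tiers.
```

Because the payload shape is shared, you can benchmark the same prompt across several free tiers before committing to one.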
Clief Notes
skool.com/cliefnotes
Jake Van Clief, giving you the Cliff notes on the new AI age.