Pinned
Second brain - what is and why
Here's what we actually built, why it matters for you — and **please join this SKOOL group!**

If you came here from the second brain post, welcome. Let me show you the actual work. The "second brain that works across AIs, agents, and claws" isn't a concept. It's live infrastructure. Here's what we've built and what you can use right now:

- openbrainsystem.com — The deep dive on what an open brain system actually is, how to architect one, and why the tools most people are using (Obsidian, Notion, Tiago Forte's PARA) aren't built for the agent-first world we're operating in now.
- secondbrain.us.com — The technical layer. pgvector, MCP integration, end-to-end build guides. If you want to build rather than just read about it, start here.
- aiknowledgestack.com — The commercial angle. How AI knowledge infrastructure maps to real business leverage — content operations, client work, agency scale.

All of it points toward the same thing: novcog.dev — the actual NovCog Brain implementation. A vectorized, cross-agent memory system that runs across Claude, local models, OpenClaw agents, and anything else in the stack. Not a product pitch. A working system you can replicate.

++++++++++++++++++++++++++++++++++++++++++++++++
In other words, you can build this in an afternoon. This afternoon. The hardened, battle-tested brain that Triston Goodwin has been selling to clients.
++++++++++++++++++++++++++++++++++++++++++++++++

This is what we do here. We build the thing, document it, and hand you the blueprint. The resources above are free. But if you want to be in the room where this gets built in real time — where the sessions, office hours, peer accountability, and live agent builds happen — that's the Hidden State Drift Mastermind. $105/month. Small group by design. Practitioners only.

👉 Upgrade to HSD Mastermind in the membership tab above. If you're not ready for that yet, you're still in the right place. Drop a post and tell us what you're building.
Pinned
welcome to the Burstiness and Perplexity community
Our mission is to create a true learning community where an exploration of AI, tools, agents, and use cases can merge with thoughtful conversations about implications and fundamental ideas.

To get a deeper overview of this Skool, click on the Classroom tab above and enter the Welcome Classroom.

If you are joining, please consider engaging, not just lurking. Tell us about yourself, where you are in your life journey, and how tech and AI intersect it.

For updates on research, models, and use cases, click on the Classroom tab and then find the Bleeding Edge Classroom.
Pinned
a learning content automation system
I built an automated content generation system that runs 24/7 on a Mac Mini in my house. No n8n. No Make. No Docker. No external orchestration dependencies. Pure Python, stdlib, launchd. It publishes across 18 sites daily. Every article is quality-scored against AP Style rubrics before it goes live.

Here's the part most automation builders skip: the scoring model had a bias problem. GPT-4.1-mini's safety training bleeds into quality scoring. Political content — elections, protests, international conflict — gets reflexively penalized 3-4 out of 10 regardless of actual writing quality. The fix was chain-of-thought scoring: force the model to reason about specific criteria (headline accuracy, factual coherence, structure, tone) before outputting a score. That eliminated the topic-sensitivity reflex entirely. (A minimal sketch of this gate follows the post.)

The quality gate rejects anything below 5.0/10. What passes gets a hero image generated via Fal.ai, publishes through the WordPress REST API, and distributes to Bluesky, Telegram, and Tumblr — all with viral scoring that tiers articles into boost, standard, or skip. Cost: $0.92/day. Budget-capped at $2/day, $10/week.

But the content generation is only half the system. Every article embeds a 1x1 tracking pixel served from a Cloudflare Worker. That pixel tells me exactly which AI crawlers are ingesting the content and when. Within hours of publishing, I can see GPTBot, ClaudeBot, ByteSpider, Meta's external agent — all hitting the content. Not guessing. Measuring.

Last week we deployed a 10-article interlinked content series across the network. 500 pixel hits in the first window. Breakdown: 14% GPTBot, 10% Meta, 4% ByteSpider, 2% ClaudeBot, 42% human readers. The content entered at least four major AI training pipelines within hours of publishing.

The system improves daily without intervention. Quality scores trend upward because the rubric catches what the model misses. Publishing cadence stays natural with randomized 13-23 minute intervals — no fixed pattern for crawlers to fingerprint. Every run logs to a SQLite database. A daily email report hits my inbox at 7:03am with per-site metrics, quality trends, cost tracking, and pixel data.
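For anyone who wants to replicate the core idea, here's a minimal sketch of chain-of-thought scoring plus the 5.0 gate and the randomized publish cadence, using only the stdlib. The rubric wording, model choice, and every function/variable name below are my own illustration, not the author's production code.

```python
# Sketch only: chain-of-thought scoring + quality gate + randomized cadence.
# Assumptions (not from the original post): the exact rubric wording, the
# OpenAI chat completions endpoint, and all names here are illustrative.
import json
import os
import random
import re
import time
import urllib.request

RUBRIC = (
    "Score this article 0-10 for publication quality. Reason step by step "
    "about each criterion BEFORE giving a number: (1) headline accuracy, "
    "(2) factual coherence, (3) structure, (4) tone. Judge writing quality "
    "only; the topic itself is never a reason to lower the score. "
    "End with a line formatted exactly as: SCORE: <number>"
)

def score_article(text: str) -> float:
    """Ask the model to reason against the rubric, then parse the final score."""
    payload = {
        "model": "gpt-4.1-mini",
        "messages": [
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": text},
        ],
    }
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
    match = re.search(r"SCORE:\s*([0-9]+(?:\.[0-9]+)?)", reply)
    return float(match.group(1)) if match else 0.0

def publish_if_good(article: str) -> bool:
    score = score_article(article)
    if score < 5.0:  # the quality gate from the post
        return False
    # ... hero image, WordPress REST publish, syndication would go here ...
    time.sleep(random.uniform(13 * 60, 23 * 60))  # randomized 13-23 min cadence
    return True
```

The ordering is the whole trick: the number only appears after the criterion-by-criterion reasoning, which is what suppresses the reflexive topic penalty.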
looking at DeepSeek V4 more deeply
DeepSeek V4 isn't just a new model iteration. It's a masterclass in low-level engineering that fundamentally rewrites the rules for how we build trillion-parameter AI 🛠️.

Scaling to 1.6 trillion parameters across 61 Transformer layers usually causes massive mathematical instability and "signal explosions" that crash training runs. Instead of just hitting restart or throwing more hardware at the problem, the DeepSeek team ripped out the industry-standard components. Here is how V4 is built differently under the hood:

🧠 Manifold-Constrained Hyper-Connections (mHC): Standard bypass lanes (residual connections) fail at the trillion-parameter scale. To fix this, DeepSeek forced the network's residuals to behave like a "doubly stochastic matrix" — meaning the mathematical signal is always conserved and physically cannot explode, no matter how deep the network gets.

⚡ The Muon Optimizer: They ditched the industry-standard AdamW optimizer for a custom algorithm called Muon, which uses hybrid Newton-Schulz iterations to orthogonalize the weight-update matrices and ensure optimal gradient flow (a textbook version of the Newton-Schulz step is sketched after this list).

🔄 Anticipatory Routing: To prevent catastrophic loss spikes during training, the model monitors itself. If it detects a spike, it temporarily looks at slightly older snapshots of its own parameters — ignoring the immediate chaotic noise to lock onto the underlying learning trend and self-stabilize.

🎓 On-Policy Distillation (OPD): Traditional weight-merging degrades performance. Instead, DeepSeek trained distinct expert models for specific domains (like math and code), then fused them into one unified student model by having it learn from the full-vocabulary distributions of the teachers.

This isn't just one lucky breakthrough; it's dozens of cleverly engineered, mathematically beautiful solutions — including custom fused GPU kernels that perfectly overlap computation and network communication — all working together. You don't always need the biggest data center. Sometimes, you just need the most cracked engineering team.
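For intuition on the Muon bullet: Newton-Schulz is an iterative scheme that pushes a matrix toward the nearest orthogonal matrix using only matrix multiplies (GPU-friendly, no SVD). Below is the classic cubic textbook variant in plain NumPy, a minimal sketch of the idea, not DeepSeek's hybrid iteration (whose coefficients and scheduling are not given in the post).

```python
# Minimal sketch: the classic cubic Newton-Schulz iteration behind
# Muon-style orthogonalization. Textbook variant, NOT DeepSeek's exact
# hybrid scheme; real implementations tune coefficients and step counts.
import numpy as np

def newton_schulz_orthogonalize(G: np.ndarray, steps: int = 25) -> np.ndarray:
    """Approximate the orthogonal factor of G's polar decomposition.

    Uses only matmuls, which is why it maps well onto GPUs. Convergence
    needs the starting spectral norm < sqrt(3), so we first divide by the
    Frobenius norm (an upper bound on the spectral norm).
    """
    X = G / (np.linalg.norm(G) + 1e-7)  # now ||X||_2 <= ||X||_F = 1 < sqrt(3)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * (X @ X.T) @ X  # cubic Newton-Schulz update
    return X

# Quick check: the result should be (near-)orthogonal.
G = np.random.randn(64, 64)
Q = newton_schulz_orthogonalize(G)
print(np.allclose(Q @ Q.T, np.eye(64), atol=1e-2))  # ~identity
```

Each update pushes every singular value of X toward 1 while leaving the singular vectors alone, which is exactly the "conserve the signal, kill the scale" behavior the post describes.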
Quick answer: what's a monkeypatch
A monkeypatch is when you replace a function (or method, or attribute) on a module at runtime, after it's already been imported, without touching the original source code. The replacement only lives in your process — it doesn't modify the file on disk.

In our case:

```python
import torch

_orig_load = torch.load  # save the original

def _trusting_load(*args, **kwargs):
    kwargs["weights_only"] = False  # force this argument
    return _orig_load(*args, **kwargs)  # then call the real one

torch.load = _trusting_load  # ← the monkeypatch
```

After that last line runs, every call to torch.load(...) anywhere in the process — ours, pyannote's, pytorch_lightning's, anyone's — silently gets routed through _trusting_load, which forces weights_only=False. The torch library itself is unchanged on disk; we just swapped the function pointer in this Python session.

Why "monkey": the term goes back to "guerrilla patch" → "gorilla patch" → "monkey patch." It's mildly pejorative — the idea is it's a sneaky/dirty fix, because callers don't see it coming.

When you should use one:

- A library you depend on calls torch.load(weights_only=True) explicitly and you can't change its code, but you trust the source enough to bypass the check.
- A test needs to replace time.time() to make output deterministic.
- A vendor lib has a known bug and you want to patch it without forking.

When NOT to:

- If you can change the caller's code instead, do that — monkeypatches are invisible to anyone reading the codebase later.
- Long-lived monkeypatches in production code accumulate as "magic" no one understands. Each one needs a comment explaining why the underlying library can't just be fixed. (A scoped, self-reverting pattern is sketched below.)
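One way to keep a monkeypatch from becoming long-lived magic is to scope it with a context manager so it reverts itself. Here's a minimal stdlib-only sketch using the time.time() example from the list above; the helper name patched_attr is mine, not a standard API.

```python
# Minimal sketch of a self-reverting monkeypatch using only the stdlib.
# `patched_attr` is an illustrative helper, not a standard API; in real
# test code, pytest's monkeypatch fixture or unittest.mock.patch do the
# same save-and-restore bookkeeping for you.
import time
from contextlib import contextmanager

@contextmanager
def patched_attr(obj, name, replacement):
    """Swap obj.<name> for `replacement`, then restore it no matter what."""
    original = getattr(obj, name)
    setattr(obj, name, replacement)
    try:
        yield original  # hand back the saved original, in case the patch needs it
    finally:
        setattr(obj, name, original)  # the patch cannot outlive this block

# Usage: make time.time() deterministic for a test.
with patched_attr(time, "time", lambda: 1_700_000_000.0):
    assert time.time() == 1_700_000_000.0  # every caller sees the frozen clock

assert time.time() != 1_700_000_000.0  # real clock restored after the block
```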
⚡Burstiness and Perplexity⚡
skool.com/burstiness-and-perplexity
AI-native SEO, autonomous agents, and automation pipelines. Built for practitioners who build, not collect. Home of the Hidden State Drift Mastermind.