Owned by Michael

AI Bits and Pieces

702 members • Free

Build real-world AI fluency to confidently learn & apply Artificial Intelligence while navigating the common quirks and growing pains of people + AI.

Lone Wolf AI League

1 member • Free

Memberships

AI Automation Society Plus

3.5k members • $99/month

AI for Life

28 members • $297

57 contributions to AI for Life
APIs, explained the way I explain them to clients.
Most automation problems I see trace back to a fuzzy mental model of what an API actually is. So here's the frame I use with clients.

An API is a remote control for software. Your app presses a button (sends a request). Another app does something and sends back a result (a response). You don't see how the other app works inside. You just follow the rules printed on the buttons (the docs). That's it. That's the whole concept.

Two analogies that work in client calls:

- Restaurant menu. The menu lists what you can order and how to ask for it. Kitchen is hidden. Meal is the response.
- Light switch. Flip the switch (request). Wiring, grid, power plant are hidden. Light turns on (response).

Same idea either way: clear inputs, clear outputs, hidden complexity.

The actual call pattern:

1. Client asks (your app, browser, script)
2. Request goes out with a URL, a method (GET, POST, etc.), and any data the server needs
3. Server does the thing
4. Response comes back, usually JSON

Break any of those rules and you get an error, not data.

Why this matters for builders:

- Reuse beats rebuild. Use Stripe's API instead of building payments from scratch.
- Complexity stays hidden. You don't need to know how Twitter stores tweets to pull the last 20.
- Access is controlled. APIs decide what's exposed, who can call it, and how often. Security still depends on the implementation, but the boundary exists by design.
- Apps mix APIs like ingredients. Maps, payments, email, auth, all stitched together.

When two pieces of software talk in a structured, agreed way, they're using an API. Every n8n node, every Claude Code tool call, every trigger. All APIs under the hood.

What analogy do you use when a non-technical client asks what an API is? Curious what lands for other builders.

Highly recommended related information: Check out @Michael Wacht 's Daily Dose: https://www.skool.com/ai-automation-society-plus/ai-terms-daily-dose-api-use?p=5c08d0bf
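The four-step call pattern can be sketched in a few lines of Python. This is a minimal toy, not a real service: `fake_server`, the `/menu` path, and the response fields are all made up to play the role of the hidden kitchen.

```python
import json

def fake_server(method, url, body=None):
    """The hidden 'kitchen'. In real life this runs on someone else's machine."""
    if method == "GET" and url == "/menu":
        # Follow the rules printed on the buttons and you get data back.
        return 200, json.dumps({"specials": ["soup", "salad"]})
    # Break the rules (wrong method, wrong path) and you get an error, not data.
    return 404, json.dumps({"error": "not found"})

# 1. Client asks  2. Request goes out with a URL + method  3. Server does the thing
status, raw = fake_server("GET", "/menu")

# 4. Response comes back, usually JSON
data = json.loads(raw)
print(status, data["specials"])

# Press the wrong button: same remote control, different result.
bad_status, _ = fake_server("DELETE", "/menu")
print(bad_status)
```

Swap `fake_server` for a real HTTP client and a documented endpoint and the shape of the code barely changes: clear inputs, clear outputs, hidden complexity.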
APIs, explained the way I explain them to clients.
1 like • 19h
Great analogies. APIs are one of the technical elements that non-coders need to understand to take things to the next level.
Opus 4.7: 10 things that actually matter
A practitioner read on the April 16, 2026 release. Numbers cited are from Anthropic’s system card or named partner benchmarks.

## 1. Coding is the real jump

SWE-bench Verified 80.8% → 87.6%. SWE-bench Pro 53.4% → 64.3%. CursorBench 58% → 70%. Anthropic’s internal 93-task benchmark reports a 13% lift across the suite. Rakuten’s partner eval claims 3x more production tasks resolved vs 4.6. On multi-file work, fewer back-and-forth loops and more one-shot fixes.

## 2. Agents run shorter and cleaner

Long-running loops reason more before acting. Notion AI reports ~14% improvement on multi-step workflows at one-third the tool errors. Box’s figure: average calls per workflow dropped from 16.3 (4.6) to 7.1 (4.7). Fewer decisive steps instead of noisy chatter.

## 3. Vision is finally usable for screenshots

Resolution 1,568px (1.15MP) → 2,576px (3.75MP) on the long edge, roughly 3x. XBOW visual-acuity 54.5% → 98.5%. OSWorld-Verified computer use 72.7% → 78.0%. This is the change that actually unlocks dense-UI automation, diagram parsing, and screenshot-based QA.

## 4. Still 1M context

Context window and output limits match 4.6. Pipelines built around long documents or extended chains don’t need architectural changes. Self-verification is better, so coherence over long multi-step runs holds up longer.

## 5. Honesty and safety moved in the right direction

Reduced hallucinations and sycophancy, tougher against prompt injection. Good for client-facing systems. Note: 4.7 is also more conservative around offensive security work. Anthropic launched a Cyber Verification Program for approved red-team use cases.

## 6. Sharper codebase understanding

CodeRabbit reports more real bugs found, more actionable reviews, and better cross-file reasoning than any model they’ve evaluated. The model builds a more persistent internal map of a repo instead of brute-forcing every file. Claude Code also shipped a new `/ultrareview` command for dedicated review passes.

## 7. New xhigh effort tier
Opus 4.7: 10 things that actually matter
1 like • 2d
Do they need to open up the context window, or do they need to do more with the same?
Claude Code just shipped /ultrareview. Here is the practitioner breakdown.
Anthropic dropped a new slash command called /ultrareview in Claude Code v2.1.111, and it quietly changes how I review my own code before I ship it. Here is what it does, when to use it, when to hold back, and the catch most people are glossing over.

What it actually is

/ultrareview runs a full code review in the cloud using parallel reviewer agents while you keep working locally.

- Type /ultrareview with no arguments. It reviews your current branch.
- Type /ultrareview 123. It pulls PR #123 from GitHub and reviews that.

By default it fires up 5 reviewer agents in parallel. Configurable up to 20. Each agent independently scans your diff for real bugs, and the command only surfaces a finding after it has been reproduced and verified. No "you might want to use const" noise. No lint-style nagging. Verified findings only.

When to pull the trigger

Spend a run when the cost of a missed bug is real:

- Payment code
- Auth changes
- Database migrations
- Large refactors touching many files
- Any pre-merge review on a business-critical branch

Do not burn a run on a one-line typo fix. The value lives in wide, high-stakes diffs where a human reviewer would take an hour and still miss something.

The catch

Users are reporting three free runs total on Pro and Max plans. Not three per month. Three, period. After that it meters against your plan. Treat them like good steakhouse reservations. You do not book one to show up and order a side salad.

How I am using it

1. Finish a feature branch.
2. Run my own tests locally.
3. Fire /ultrareview before I open the PR.
4. Read the findings. Fix what matters. Push.
5. Only then ask a human to review.

It does not replace a human reviewer. It does catch the things your eyes stopped seeing three hours ago.

Try it

Update Claude Code to 2.1.113 or later. Inside a git repo with real changes, type /ultrareview. Watch the fleet spin up. Come back in a few minutes. Feel free to share your initial result in the comments.
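Anthropic has not published the internals, but the "parallel reviewers, verified findings only" pattern is easy to picture. Here is a toy Python sketch of that idea under loose assumptions: the diff, the bug checks, and the verification rule are all invented for illustration, and real reviewer agents would be LLM calls rather than string matching.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy diff: line number -> code. Two real bugs plus one stylistic nit.
DIFF = {
    10: "total = price * qty",
    11: "discount = total / 0",        # real bug: division by zero
    12: "if user == None: pass",       # stylistic nit, not a verified bug
    13: "items[len(items)]",           # real bug: off-by-one index
}

def reviewer(agent_id):
    """Each agent independently scans the diff and reports candidate findings."""
    findings = set()
    for line, code in DIFF.items():
        if "/ 0" in code:
            findings.add((line, "division by zero"))
        if "[len(" in code:
            findings.add((line, "off-by-one index"))
        if agent_id % 2 and "== None" in code:
            # Some agents produce lint-style noise; verification filters it out.
            findings.add((line, "style: use 'is None'"))
    return findings

def verify(finding):
    """Only surface findings that reproduce as real bugs, not style nags."""
    return "style:" not in finding[1]

# Fan out 5 reviewers in parallel, merge and deduplicate their candidates,
# then keep only the verified findings.
with ThreadPoolExecutor(max_workers=5) as pool:
    candidates = set().union(*pool.map(reviewer, range(5)))

verified = sorted(f for f in candidates if verify(f))
print(verified)
```

The design point the sketch illustrates: independent reviewers overlap on real bugs and disagree on noise, so a verification pass over the merged set is what keeps the output clean.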
I’m curious to see what it revealed about the code you deemed clean.
Claude Code just shipped /ultrareview. Here is the practitioner breakdown.
1 like • 4d
How do this and Red Team work relate: do they conflict, complement each other, or live in entirely separate universes?
/Claude-Design Research Preview by Anthropic Labs
I spent a total of 2 hours 30 minutes setting this up and then correcting a lot of mistakes and assumptions, so I would say the time invested was well worth the output I got at the end. Some mistakes were my fault, some came from outdated files, and a couple were critical cases where Claude decided to make things up. However, if you do not have a brand marketing page or style, this is a great place to design and develop one.
/Claude-Design Research Preview by Anthropic Labs
2 likes • 5d
Public service announcement πŸ“£
2 likes • 5d
@Matthew Sutherland πŸ‘Š
The web just got a new audience. AI agents.
Cloudflare rolled out a set of tools this month aimed at one thing: making websites readable, usable, and controllable by AI agents.

Why this matters for the rest of us: Your website was built for humans and search engines. That worked fine for 25 years. Now a third visitor is showing up, and most sites have no idea how to talk to it.

Here’s what Cloudflare shipped:

- Is It Agent Ready (isitagentready.com). A scanner that grades your site on whether AI agents can actually work with it.
- Agent Readiness Score. Lives inside Cloudflare Radar. Measures how well you’ve adopted AI-specific standards like agent-aware robots.txt, Markdown content negotiation, and emerging agent protocols.
- AI Crawl Control. Shows which AI platforms are crawling your site. Gives you the choice to block, allow, or monetize that access.
- AI Index and Pub/Sub. Lets your site push structured updates to AI models in real time, so agents skip the repetitive re-crawling.
- Managed OAuth. Lets agents authenticate on behalf of users, which is how internal tools become agent-compatible.

The practical takeaway: Run your site through isitagentready.com. See the score. You’ll learn more in five minutes than any article can teach you, because it grades your actual site.

Two things worth noticing:

- 78 percent of sites have a robots.txt, but most are written for search engines. Different audience, different rules.
- The standards are moving toward actual interaction: MCP Server Cards (the same MCP you use with Claude Code), API catalogs, OAuth for agents. Your website is becoming an interface.

Act on what feels urgent. Just know this shift is happening. Early adopters will start showing up in AI agent results before the broader web catches on.

Scan your site. Drop your score in the comments if you want a second pair of eyes on what to fix first.
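One concrete way to see the "different audience, different rules" point is to check which user-agents your robots.txt actually addresses. A small Python sketch, assuming an inline sample file: the crawler tokens shown (GPTBot, ClaudeBot, PerplexityBot, Google-Extended) are published AI crawler names, but the list is illustrative, not exhaustive, and a real check would fetch your live robots.txt.

```python
# Known AI crawler tokens (a partial, illustrative list).
AI_AGENTS = {"GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"}

# Sample robots.txt written with search engines in mind.
SAMPLE_ROBOTS = """\
User-agent: Googlebot
Disallow: /private/

User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

def addressed_agents(robots_txt):
    """Collect the User-agent tokens a robots.txt file explicitly talks to."""
    agents = set()
    for line in robots_txt.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "user-agent":
            agents.add(value.strip())
    return agents

agents = addressed_agents(SAMPLE_ROBOTS)
covered = agents & AI_AGENTS   # AI crawlers this file addresses explicitly
missing = AI_AGENTS - agents   # AI crawlers that only get the generic * rules
print(sorted(covered))
print(sorted(missing))
```

In this sample, only GPTBot gets an explicit rule; the other AI crawlers fall through to the wildcard group, which is exactly the gap the Agent Readiness Score is meant to surface.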
The web just got a new audience. AI agents.
2 likes • 5d
It’s as if a new species came to town. Damn!
Michael Wacht
@michael-wacht-9754
AI Bits and Pieces | Learn to Close Deals | Become an AI Standout

Joined Feb 19, 2026
Mid-West United States