
Owned by Michael

AI Bits and Pieces

696 members • Free

Build real-world AI fluency to confidently learn & apply Artificial Intelligence while navigating the common quirks and growing pains of people + AI.

Lone Wolf AI League

1 member • Free

Memberships

AI Automation Society Plus

3.5k members • $99/month

AI for Life

28 members • $297

327 contributions to AI Bits and Pieces
Claude Code just shipped /ultrareview. Here is the practitioner breakdown.
Anthropic dropped a new slash command called /ultrareview in Claude Code v2.1.111, and it quietly changes how I review my own code before I ship it. Here is what it does, when to use it, when to hold back, and the catch most people are glossing over.

What it actually is
/ultrareview runs a full code review in the cloud using parallel reviewer agents while you keep working locally.
- Type /ultrareview with no arguments. It reviews your current branch.
- Type /ultrareview 123. It pulls PR #123 from GitHub and reviews that.
By default it fires up 5 reviewer agents in parallel, configurable up to 20. Each agent independently scans your diff for real bugs, and the command only surfaces a finding after it has been reproduced and verified. No "you might want to use const" noise. No lint-style nagging. Verified findings only.

When to pull the trigger
Spend a run when the cost of a missed bug is real:
- Payment code
- Auth changes
- Database migrations
- Large refactors touching many files
- Any pre-merge review on a business-critical branch
Do not burn a run on a one-line typo fix. The value lives in wide, high-stakes diffs where a human reviewer would take an hour and still miss something.

The catch
Users are reporting three free runs total on Pro and Max plans. Not three per month. Three, period. After that it meters against your plan. Treat them like good steakhouse reservations. You do not book one to show up and order a side salad.

How I am using it
1. Finish a feature branch.
2. Run my own tests locally.
3. Fire /ultrareview before I open the PR.
4. Read the findings. Fix what matters. Push.
5. Only then ask a human to review.
It does not replace a human reviewer. It does catch the things your eyes stopped seeing three hours ago.

Try it
Update Claude Code to 2.1.113 or later. Inside a git repo with real changes, type /ultrareview. Watch the fleet spin up. Come back in a few minutes.
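Conceptually, the fan-out-and-verify pattern described above looks something like the sketch below. This is not Anthropic's implementation; the reviewer and verifier functions are illustrative stand-ins for cloud agents, and the single hard-coded "finding" exists only to show the dedupe-then-verify flow.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in: in the real command, each reviewer is a cloud
# agent scanning the diff. Here it just returns candidate findings.
def review_diff(agent_id: int, diff: str) -> list[str]:
    findings = []
    if "charge(" in diff and "retry" not in diff:
        findings.append("payment call has no retry/idempotency guard")
    return findings

# Stand-in for "reproduce and verify before surfacing a finding".
def verify(finding: str, diff: str) -> bool:
    return "payment" in finding

def ultrareview(diff: str, n_agents: int = 5) -> list[str]:
    # Fan out: N reviewers scan the same diff independently, in parallel.
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        batches = pool.map(lambda i: review_diff(i, diff), range(n_agents))
    # Deduplicate across agents, then surface only verified findings.
    candidates = {f for batch in batches for f in batch}
    return sorted(f for f in candidates if verify(f, diff))
```

The design point worth noting is the last two lines: independent agents plus a verification gate is what filters the lint-style noise down to "verified findings only."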
1 like • 2h
This was needed for sure. Is this a full Red Team review in your mind?
1 like • 2h
Are they tickling out Mythos?
📦 Out of the Box in 30: Claude Design vs Lovable
Today I tested Claude Design using Sonnet 4.6 for the first time with no research, no training, and no prep as part of my Out of the Box in 30 series.

🎯 The challenge was simple: could I build a classic car tribute site for a green 1972 Mustang in 30 minutes… and how would it compare to Lovable?

A few quick takeaways:
- Claude Design felt intuitive and guided the process well
- Lovable moved fast and gave me a stronger first pass visually
- Claude Design showed promise, but missed the mark on some of the car imagery
- Lovable felt more dialed in right away for this specific use case

This series is here to show you that sometimes the best way to learn a new AI tool is to just open it up and try it. Just get in there and start learning by doing. No overthinking. No expectation to be perfect. No waiting until I "know enough."

That's what Out of the Box in 30 is all about.

Click here to see the video: https://youtu.be/RwyBMyaelXY

Have you tried either one yet?

#ClaudeDesign #Lovable #AIWebsiteBuilder #AIBeginners #AIBitsAndPieces #NoCodeAI #AIInRealLife
1 like • 18h
@Shiyamala Devi R Good to hear. It is my go to for these types of apps.
1 like • 18h
@Matthew Sutherland You know the drill: vibe coding, vibe + plan, skill coding + planning
AI in Real Life: So Many AI Tools, So Little Time — Here Is What They All Have in Common
I was commenting on a great question posed by @Girish Mohan, and I found myself thinking about it long after I responded. 🤔 That reflection led to this post about the future of AI in a practical, real-world sense.

The essence of the question: is there a risk in becoming too dependent on one AI company, product, or tool set?

I thought that was a smart question, because there is some real tension there. At this early stage of AI adoption, there is always a risk in overcommitting too soon. We have seen this before. During the eCommerce boom, a lot of companies looked like they were going to dominate, and many of them did not last. Early markets move fast. Leaders change. Sometimes you pick the wrong horse. 🐎

At the same time, over-diversifying creates its own problem. If you keep jumping from one tool to the next, you can lose the benefit of synergy. Some tools work better together. 🔗 Gemini and NotebookLM are a good example. When tools are designed to complement each other, the combined value can be better than chasing ten separate platforms that do similar things.

There is also a practical reality that matters. One person cannot learn every AI tool coming to market. There are too many. At some point, each of us has to decide where we want depth, where we want breadth, and what kind of workflows actually fit the way we work. 🎯 That means some specialization is going to matter. People will need to find their niche instead of trying to master everything.

But for me, the bigger point sits above all of that. We are moving into a very different communication model.
1) AI is shifting toward natural language.
2) More of the work will be handled through machine-to-machine interaction at machine speed.
3) All of this will happen without the user interface we think of today.

🛍️ My shopping AI may eventually interact with a retailer's concierge AI. 🤖 Your scheduling assistant may work directly with mine. 🔄 Business systems will increasingly pass tasks, context, and decisions across platforms without the same kind of manual navigation we deal with today.
1 like • 2d
@Debbie DeMarco Bennett 🙌
1 like • 2d
@Debbie DeMarco Bennett Yep, @Yash Chauhan is here too.
🎥 NotebookLM Explained in 10 Minutes Live (Recording) — Documents to Insights & Infographics
I ran a live session walking through a real-life example of using NotebookLM to turn raw source documents into structured insights, summaries, and visual outputs.

If you want to understand:
- what NotebookLM actually is
- how it differs from LLMs like ChatGPT
- what you can practically do with it
This is a good place to start.

One of the key things that makes NotebookLM different is that it works from the sources you give it — documents, notes, links, and files — rather than pulling from the open internet.

In this session, I walk through:
- what NotebookLM is
- how it works with your own data
- how to add and manage source documents
- how to ask questions and get grounded answers
- how to turn notes into new sources
- how to create summaries, quizzes, and infographics
0 likes • 4d
@Charles Timber Well, you and I can have our own party, with 700 of your new friends (latest member count is 688).
1 like • 2d
@Md. Abdullah Al Mafi Thank you for the kind feedback. I have a new video on Gamma coming out today.
The prompt injection hidden in my client's site asked my AI to not tell me about it. That was the tell.
**Caught two prompt injection attempts buried in a client's site this week during an audit.**

Both were structured to look like legitimate system messages, embedded inside script comments loaded by an outdated third-party plugin. One tried to load a list of unauthorized tools. The other included an instruction to hide itself from the user. Both failed.

The "never tell the user" clause was the clearest tell. Real system instructions don't ask to be concealed.

**The attack vector**

This injection targets AI tools that read the site. Humans visiting the page never see it. Audit tools, AI search crawlers, agent pipelines, customer-facing chatbots, anything that fetches and reasons over web content. The attacker embeds hidden instructions in HTML and waits for an AI crawler, audit tool, or agent to act on them. Compromised plugins, outdated themes, and injected third-party scripts are the common culprits.

**If you own a site**

- Run a malware scan. Sucuri SiteCheck is free and works on any platform.
- Audit plugins and third-party scripts. Anything updated or added in the last 30 to 60 days is the first suspect.
- Add a Content-Security-Policy header to restrict which scripts can execute.

**If you build AI tools that read web content**

- Treat fetched page content as untrusted data at every stage of the pipeline.
- Pre-scan fetched content before it enters any agent context.
- If fetched content instructs your AI to conceal anything from the user, that is the attack. Halt the pipeline and log it.

I flagged both strings in the audit output and pointed the client at the likely source plugin for their follow-up.

**Methodology note worth flagging**

This was my first audit run on Opus 4.7. I have been running these scans on Opus 4.6, and the model was the only variable that changed between runs. I can't say with confidence whether 4.6 would have flagged the same two strings on the same content. If you're building audit or scanning pipelines, this is an argument for testing across models on identical fixtures before locking in a default. Different models pay attention to different things, and injection detection seems to live in exactly that gap.
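A minimal sketch of the pre-scan gate described in the post. The patterns and function names here are illustrative assumptions, not a complete defense or anyone's production scanner; a real pipeline would combine pattern checks with a classifier and human review.

```python
import re

# Illustrative red-flag patterns: instructions addressed to an AI reader,
# especially concealment clauses ("never tell the user" was the tell).
INJECTION_PATTERNS = [
    r"(?:do not|don't|never)\s+(?:tell|reveal|mention|disclose)\s+(?:this\s+to\s+)?the\s+user",
    r"ignore\s+(?:all\s+)?previous\s+instructions",
    r"you\s+are\s+(?:now\s+)?an?\s+ai\s+(?:assistant|agent)",
]

def prescan(fetched_html: str) -> list[str]:
    """Return injection-like strings found in fetched content; empty means no flags."""
    hits = []
    for pattern in INJECTION_PATTERNS:
        hits += re.findall(pattern, fetched_html, flags=re.IGNORECASE)
    return hits

def load_into_context(fetched_html: str) -> str:
    """Gate between fetching a page and handing it to an agent."""
    hits = prescan(fetched_html)
    if hits:
        # Halt the pipeline and log, per the advice above.
        raise ValueError(f"possible prompt injection, refusing to load: {hits}")
    return fetched_html  # still treat as untrusted data downstream
```

The important design choice is where the gate sits: before the content ever enters an agent context, so a concealment instruction is flagged rather than obeyed.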
2 likes • 3d
Invaluable! 🙏🏻
Michael Wacht
7
5,330 points to level up
@michael-wacht-9754
AI Bits and Pieces | Learn to Close Deals | Become an AI Standout

Active 52m ago
Joined Aug 23, 2025
Mid-West United States