
Owned by Ward

AI for DBAs

25 members • Free

AI for DBAs is AI for Everyone. We empower everyone to master AI with databases, making data management accessible to all.

Memberships

AI Video Hub

124 members • Free

Septic Pro Academy

83 members • Free

Skoolers

195.4k members • Free

WotAI

741 members • Free

Vibe Coders Club

843 members • Free

Vibe Code Blueprint

128 members • Free

Code-Free Creators

187 members • Free

Ai Titus

911 members • Free

kev's AI OS Academy

193 members • $99/m

19 contributions to Vibe Coders
Part 1 of a 2-part article: building a SQL Server Monitor for AI
In this article, I describe building a local, AI-powered system using Ollama to analyze massive, automated SQL Server health reports that were previously unmanageable for human review. I explain how this approach, which keeps all sensitive data inside the private network, solves the common industry problem of unread reports by using a local LLM to instantly correlate data, prioritize critical issues like log file growth, and provide actionable fixes. By transforming dense HTML reports into clear, intelligent summaries, I demonstrate how AI acts as a tireless mentor for junior DBAs and a sophisticated trend detector for seniors, ultimately evolving the DBA's role from reactive troubleshooting to proactive, data-comprehending management. https://www.linkedin.com/pulse/part-1-2-i-built-ai-powered-sql-server-health-monitor-ward-minson-6cdvc/
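The core loop from the article can be sketched with nothing but the standard library, assuming Ollama's default local endpoint at `localhost:11434`; the model name `llama3` and the prompt wording are illustrative assumptions, not the exact implementation:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(report_text: str, model: str = "llama3") -> dict:
    """Wrap a raw health-report dump in an analysis prompt for the local model."""
    prompt = (
        "You are a senior SQL Server DBA. Summarize the health report below, "
        "prioritize critical issues (e.g. log file growth), and suggest fixes:\n\n"
        + report_text
    )
    return {"model": model, "prompt": prompt, "stream": False}

def analyze_report(report_text: str) -> str:
    """POST the report to the local Ollama instance; nothing leaves the network."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(report_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the request goes to localhost, the report contents never leave the private network, which is the whole point of the approach.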
Part 1 of a 2-part article: building a SQL Server Monitor for AI
0 likes • 5d
@Atul Pathria, you hit the nail on the head regarding the "causation chain." Most DBAs are currently playing a game of whack-a-mole because their monitoring tools treat events as isolated incidents. You're absolutely right: a drop in page life expectancy is rarely a "day one" problem; it's usually the final domino in a chain that started with a specific query plan change or a maintenance job overlap hours prior.

Why the On-Prem/Local LLM Approach Wins Here:
- Temporal Context as a Feature: By using local vector embeddings, we can feed the model "Time-Series Context." Instead of just embedding a single error log, we embed windows of telemetry. This allows the AI to recognize that Event B almost always follows Event A within a 30-minute window.
- The "Three-Hop" Logic: General-purpose LLMs struggle with SQL Server's specific internal dependencies. A local model, fine-tuned or heavily prompted with your specific environment's topology, can traverse that dependency graph (Memory, Log, Job) because it isn't just looking at text; it's looking at your infrastructure's "digital twin."
- Signal over Noise: The goal isn't more dashboards; it's a "Reasoning Engine" that sits on top of the telemetry and says: "Ignore the log growth alert; it's a symptom. Fix the memory pressure caused by the newly deployed ad-hoc reporting service."

That shift from statistical correlation to causal inference is exactly where the "Automated DBA" needs to live ❤️‍🔥
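The "Temporal Context" bullet can be made concrete with a small sketch: before embedding, telemetry is grouped into fixed 30-minute windows so sequences like "plan change, then memory pressure" land in the same chunk. The event format here is invented for illustration:

```python
from datetime import datetime, timedelta

def window_events(events, minutes=30):
    """Group (timestamp, message) telemetry into fixed windows so sequences
    like 'memory pressure -> log growth' are embedded together, not in isolation."""
    if not events:
        return []
    events = sorted(events, key=lambda e: e[0])
    windows, current, start = [], [], events[0][0]
    span = timedelta(minutes=minutes)
    for ts, msg in events:
        if ts - start >= span:
            # Close the current window and start a new one at this event.
            windows.append(current)
            current, start = [], ts
        current.append(f"{ts.isoformat()} {msg}")
    windows.append(current)
    # Each window becomes one text chunk, ready for the embedding model.
    return ["\n".join(w) for w in windows]
```

Each returned chunk is what you would hand to the local embedding model, so retrieval surfaces whole causal sequences rather than single isolated alerts.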
0 likes • 2d
@Beatrice Edward wow Hijacking 101
The Vibe Coding Volatility: Surviving the Claude 500 Outage
It started with a few failed prompts and ended with a complete lockout. If you've been hitting Internal Server Error 500 or getting bounced from the login screen this morning, you aren't alone. As of April 15, 2026, Anthropic is officially grappling with a major outage affecting Claude.ai, the API, and the Claude Code CLI. For those of us deep in the world of "vibe coding," where the flow depends on a tight feedback loop between our natural language and the machine, these service interruptions are more than just a nuisance: they are a complete work stoppage.

What's Happening?
- Widespread Login Failures: Users are being logged out and unable to return to their sessions.
- The "500" Wall: Claude Code and API requests are dropping mid-stream, returning "Internal Server Error" instead of that sweet, functional code.
- Systemic Instability: This follows a week of intermittent degraded performance, leading many to wonder if the infrastructure is struggling to keep up with the latest Sonnet and Opus 4.6 deployments.

The Home Lab Advantage
If there was ever a day to celebrate data sovereignty, today is it. While the cloud-reliant masses are stuck staring at status pages, this is where a robust home lab pays for itself.
1. Failover to Ollama: By pointing your development agents to local Ollama endpoints, you keep your logic in-house and your throughput steady.
2. Modular Resilience: The best "vibe coding" workflows aren't tied to a single model. Use this downtime to test your current PRDs against local LLMs like Llama 3 or DeepSeek. If your prompts are truly modular, they should perform regardless of the backend.
3. Triple-Pass Validation (TPV): Even when the API returns, use the TPV protocol to ensure the "post-outage" code hasn't suffered from the "lazy output" issues that often plague models when servers are under extreme load.

Staying Operational
Check the official status page for updates, but don't wait for a green light to stay productive.
Shift your builds to your local hardware, keep your Docker containers humming, and remember: the best AI infrastructure is the one you control.
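The failover idea in point 1 might look like this in practice: probe each backend and route work to the first healthy one. The URLs, ordering, and health-check criterion below are illustrative assumptions, not a prescribed setup:

```python
import urllib.error
import urllib.request

BACKENDS = [
    # (name, health-check URL), ordered by preference; URLs are illustrative
    ("claude", "https://api.anthropic.com/"),
    ("ollama", "http://localhost:11434/"),
]

def is_up(url: str, timeout: float = 3.0) -> bool:
    """A backend counts as up if it answers anything below HTTP 500."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as err:
        return err.code < 500
    except OSError:
        return False

def pick_backend(statuses: dict) -> str:
    """Choose the first healthy backend from a pre-collected status map."""
    for name, _ in BACKENDS:
        if statuses.get(name):
            return name
    raise RuntimeError("no backend available")
```

Separating the status check (`is_up`) from the decision (`pick_backend`) keeps the routing logic testable without network access, which matters on exactly the kind of day this post describes.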
Part 2 - Real-World Use of Local AI
In Part 2 of the series, I demonstrate the practical power of a local LLM-driven "AI DBA Analyst" that processed a massive 67,000-character SQL Server health report in just 12 seconds to identify three critical, interconnected performance issues. Utilizing a three-layer architecture (collection, storage, and a Python-based intelligence pipeline), the system successfully correlated memory pressure with log file growth and job slowdowns, providing immediate, actionable T-SQL fixes. Beyond simple analysis, I highlight the AI's ability to modernize legacy database code by auditing and fixing 34 stored procedures, ultimately arguing that while AI lacks business context, it serves as an invaluable, tireless partner that allows DBAs to bypass manual data parsing and move straight to strategic resolution. ❤️‍🔥 This is a real-world solution for a real-world problem, solved by AI integration with legacy tools. 🔥 👾👾👾💥👾👾 https://www.linkedin.com/pulse/part-2-my-ai-dba-analyst-found-3-critical-issues-12-ward-minson-6uz6c
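The correlation step, reduced to its simplest form, is a mapping from symptoms to likely root causes so that root causes surface first. The rule table and finding format below are invented for illustration; they are not the actual pipeline:

```python
# Known symptom -> likely root-cause chains (illustrative, not exhaustive)
CAUSAL_RULES = {
    "log_file_growth": "memory_pressure",
    "job_slowdown": "memory_pressure",
}

def triage(findings):
    """Tag each finding as a root cause or a downstream symptom, then sort so
    root causes come first: the 'fix the memory pressure, the log growth is
    a symptom' ordering the analyst produces."""
    names = {f["name"] for f in findings}
    for f in findings:
        cause = CAUSAL_RULES.get(f["name"])
        # Only link a symptom to a cause actually present in this report.
        f["root_cause"] = cause if cause in names else None
    # Root causes (root_cause is None) sort ahead of their symptoms.
    return sorted(findings, key=lambda f: f["root_cause"] is not None)
```

Even a toy rule table like this shows why the output reads differently from a dashboard: the DBA sees one cause with two symptoms attached, not three unrelated alerts.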
Prompts to Commands
The bridge between a "product idea" and a "deployed application" is usually built with hundreds of hours of manual labor. But what if you could automate the entire architectural lifecycle? By converting high-level engineering prompts into Claude Code custom commands, you can transform a standard AI chat into an autonomous development squad. This collection of commands creates a linear, high-precision pipeline that moves from vision to verified code with military discipline. I have included my build prompts here as Claude Code commands; you could likely reuse them in other CLI coders.
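For readers who haven't used them: Claude Code custom commands are plain Markdown files placed in the project's `.claude/commands/` directory and invoked by filename as a slash command. A minimal, hypothetical example of the shape (not one of the commands from the class):

```markdown
<!-- .claude/commands/prd-to-plan.md — invoked in the CLI as /prd-to-plan -->
Read the PRD at $ARGUMENTS.

1. Extract every functional requirement into a numbered checklist.
2. Propose a module layout that satisfies the checklist.
3. Stop and wait for approval before writing any code.
```

Chaining several such files, one per pipeline stage, is what turns a chat session into the linear pipeline described above.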
1 like • 15d
💥 Here is the updated TPV Command that addresses what you are asking for. 🚀 I am going to update the class on https://www.skool.com/ai-for-dbas-7678 to ensure it is up to date. This updated command acts as a "Black Box Flight Recorder" for your autonomous build process. It ensures that if Claude hits a token limit or crashes mid-task, it doesn't just forget what it was doing. By maintaining a .claude_state.json file, it saves its "In-Flight Context" (essentially its train of thought) alongside its progress (e.g., "Step 3 of 7" or "2 of 3 test passes"). When you restart the pipeline, it reads this file first to pick up exactly where it left off, preventing the agent from getting stuck in a loop or losing track of the specific logic it was trying to fix. I certainly hope this helps answer your question. 🔥
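The checkpoint idea can be sketched in a few lines. The JSON schema below is an assumption for illustration; the actual TPV command's format may differ:

```python
import json
from pathlib import Path

STATE_FILE = Path(".claude_state.json")  # filename used in the post

def save_checkpoint(step: int, total_steps: int, context_note: str) -> None:
    """Persist the 'in-flight context' after every completed step."""
    STATE_FILE.write_text(json.dumps({
        "step": step,                # e.g. 3 -> "Step 3 of 7"
        "total_steps": total_steps,
        "context": context_note,     # the train-of-thought summary
    }))

def resume() -> dict:
    """On restart, read the checkpoint first and pick up where we left off."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"step": 0, "total_steps": None, "context": ""}
```

The flight-recorder behavior comes entirely from calling `resume()` before doing any work: a fresh session starts at step 0, a crashed one starts where the last `save_checkpoint` left it.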
1 like • 15d
@Atul Pathria so this is what is now happening with the latest TPV-gate.md file:
1. Namespacing & Collision Avoidance: Instead of a generic .claude_state.json, the command now dynamically generates a filename based on the environment, using the pattern ${PROJECT_NAME}_${SESSION_ID}.json. This ensures that a session in one tmux window doesn't overwrite a checkpoint from another.
2. State Pruning (Keeping It Lean): To address the file size concern, the "In-Flight Context" is treated as a sliding window.
- Pruning Rule: Only the current task's logic and the last failed test output are kept.
- Function Bodies: Instead of saving full code blocks, we save line references or function names. This keeps the state file under a few kilobytes, ensuring fast reloads regardless of project size.
Update will be available in the class folder.
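The two changes can be sketched as follows; the state keys are invented for illustration:

```python
import os

def state_filename() -> str:
    """Build ${PROJECT_NAME}_${SESSION_ID}.json so parallel tmux sessions
    never clobber each other's checkpoints."""
    project = os.environ.get("PROJECT_NAME", "project")
    session = os.environ.get("SESSION_ID", "0")
    return f"{project}_{session}.json"

def prune_context(state: dict, max_failures: int = 1) -> dict:
    """Sliding-window pruning: keep only the current task's logic and the
    last failed test output; refer to functions by name, not full bodies."""
    return {
        "task": state.get("task"),
        "failed_tests": state.get("failed_tests", [])[-max_failures:],
        "touched_functions": [f["name"] for f in state.get("functions", [])],
    }
```

Dropping function bodies in favor of names is what keeps the file to a few kilobytes: the agent can always re-read the source, so the checkpoint only needs pointers, not copies.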
🔥BOILER!💥
I really wanted to try my hand at creating my own agent builder 🤖 with my own nodes, so here is what I got. It was a kick. I used the prompting method for the full build from my first class on "AI for DBAs", so this is the second class. Remember, it starts with a great PRD.md file, so here is my file; the rest of the files can be found in the free class. Give it a try, it was fun for me! 🔥 This was vibe-coded with Claude Code, but Gemini CLI or GitHub Copilot would do the same if done right! The PROMPT files are in the free class on "AI for DBAs".
🔥BOILER!💥
0 likes • 16d
@Atul Pathria I have found that you either do the auto process or you use the TPV. I prefer the TPV; it takes longer and burns more tokens, but it ensures that the code comes out clean in most cases. I can usually get through a base build, from start to human test, in about an hour on average, though there have been a few times I kicked it off and let it do its thing.
0 likes • 16d
@Atul Pathria Every email gets queued up in my Thunderbird for me to review; after that, I process the approval for an email response. This is still a very early version, and I am working on it a couple of times a week to create additional nodes.
Ward Minson
Level 4 • 89 points to level up
@ward-minson-4112
SQL DBA by profession, AI integrator by choice.

Active 15h ago
Joined Jun 11, 2025
USA