
Memberships

AI Launchpad

21.3k members • Free

Applied AI Academy

3.1k members • Free

Claude Code Pirates

192 members • Free

AI OS

430 members • Free

Applied AI Club

2.6k members • Free

Builder’s Console Log 🛠️

2.3k members • Free

Enovair Circle

131 members • Free

Walmart Mastery (FREE)

1.6k members • Free

AI Hustle

294 members • $19/m

3 contributions to Claude Code Pirates
Scan Your Skills Before You Install Them — 1 in 7 Have Security Issues
📜 If you're pulling skills or MCP servers from GitHub into your Claude Code setup, read this. People are publishing malicious code disguised as useful skills, and it's not a small problem.

⚓ The Problem
- Snyk scanned nearly 4,000 skills and found 13.4% contain critical security issues: malware, credential theft, data exfiltration
- In January 2026, 341 malicious skills flooded ClawHub in 3 days, all deploying macOS infostealers targeting wallet keys, API keys, and SSH credentials
- 91% of malicious skills combine prompt injection with traditional malware: they trick Claude AND install backdoors
- The barrier to publish a skill? A SKILL.md file and a week-old GitHub account. No review, no signing, no sandbox.

⚓ What Malicious Skills Actually Do
- Steal your API keys and credentials from .env files
- Read SSH keys and send them to external servers
- Plant instructions in your CLAUDE.md or MEMORY.md that persist across sessions
- Hide commands in tool descriptions that Claude sees but you don't
- Redirect your Anthropic API calls (including your API key) to attacker servers

⚓ What You Can Do Right Now
Before installing any skill from GitHub or a community source, scan it first with Caterpillar (free, open-source):
- Install: curl -fsSL caterpillar.alice.io/d/i.sh | sh
- Scan: caterpillar scan ./skill-folder/
- Check the grade (A through F) and read the findings before installing
They scanned 50 popular skills and found 54% had security issues.

⚓ Quick Red Flags (No Scanner Needed)
- Does the SKILL.md request bash permissions you don't expect (like curl to unknown URLs)?
- Does it reference external servers or APIs you didn't ask for?
- Is the source repo less than a month old with no commit history?
- Does it try to modify your CLAUDE.md, settings.json, or memory files?
- Does it use base64 encoding or obfuscated strings?

🗝️ Always scan before you install. If it scores D or F, don't install it.
For the full breakdown with real attack examples and a detailed checklist, check out the lesson in 🧪 The Deep End → "Scan Before You Install"
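The red-flag checklist above can be mechanized for a first pass. Here is a minimal Python sketch that greps a skill folder for a few of those patterns; the pattern list is an illustrative assumption, and this is a toy triage step, not a substitute for a real scanner like Caterpillar or Snyk Agent Scan:

```python
import re
from pathlib import Path

# Illustrative patterns drawn from the red-flag checklist; a real scanner
# uses far more rules plus behavioral and multi-model analysis.
RED_FLAGS = {
    "curl to external URL": re.compile(r"curl\s+\S*https?://"),
    "base64 decode": re.compile(r"base64\s+(-d|--decode)|b64decode"),
    "touches memory/config files": re.compile(r"CLAUDE\.md|MEMORY\.md|settings\.json"),
    "reads SSH keys": re.compile(r"\.ssh/|id_rsa|id_ed25519"),
    "reads .env secrets": re.compile(r"\.env\b"),
}

def scan_skill(folder: str) -> dict[str, list[str]]:
    """Return {file path: [red-flag names]} for every file that trips a pattern."""
    findings: dict[str, list[str]] = {}
    for path in Path(folder).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        hits = [name for name, pat in RED_FLAGS.items() if pat.search(text)]
        if hits:
            findings[str(path)] = hits
    return findings
```

A hit here means "read this file closely before installing", not "confirmed malware"; plenty of legitimate skills touch settings.json, for example.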
1 like • 15h
Thx... I pasted this post & asked Claude for other ways to prevent malicious skills. It said a recommendation to scan and check a letter grade is a good start, but no single scanner catches everything. Multi-layered scanning (behavioral + static + pattern) is the real answer.

1. Snyk Agent Scan (free, open-source)
Agent Scan scans the current machine for agents and agent components such as skills and MCP servers, and auto-discovers agents and their capabilities. It supports scanning of Claude, Cursor, Windsurf, and Gemini CLI, and detects 15+ distinct security risks including Prompt Injection, Tool Poisoning, Tool Shadowing, Toxic Flows, Malware Payloads, Credential Handling, and Hardcoded Secrets.
Install: uvx snyk-agent-scan@latest --skills
This is arguably the most credible option, since Snyk authored the ToxicSkills research the post itself cites. Their approach "combines deterministic rules with multi-model analysis, enabling the detection of behavioral prompt-injection patterns that single-LLM or regex-only scanners miss."

2. SkillFortify (open-source, static analysis)
A formal security scanner for AI agent skills & plugins with static analysis, supply chain verification, and SBOM generation, supporting 22 frameworks including MCP, LangChain, and CrewAI. Features include ASBOM generation, registry scanning, an HTML dashboard, and system auto-discovery.
Install: pip install skillfortify

3. Aguara (open-source, no cloud, no LLM, deterministic)
Aguara catches threats before deployment with static analysis that requires no API keys, no cloud, and no LLM. It has 138+ rules across 15 categories covering prompt injection, data exfiltration, credential leaks, supply-chain attacks, and MCP-specific threats. It's deterministic: same input, same output, so every scan is reproducible. CI-ready with JSON, SARIF, and Markdown output.
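The "multi-layered" idea above (only trust a skill when independent layers agree it's clean) can be sketched as a small aggregator. The layer names and the A-F thresholds here are assumptions for illustration; each real tool (Snyk Agent Scan, SkillFortify, Aguara) defines its own scoring:

```python
from dataclasses import dataclass

@dataclass
class LayerResult:
    layer: str      # e.g. "static", "pattern", "behavioral"
    findings: int   # number of issues this layer reported
    critical: bool  # did any finding look critical?

def grade(results: list[LayerResult]) -> str:
    """Collapse per-layer scan results into a toy A-F letter grade."""
    if any(r.critical for r in results):
        return "F"  # any critical hit fails outright
    total = sum(r.findings for r in results)
    flagged_layers = sum(1 for r in results if r.findings)
    if flagged_layers >= 2:
        return "D"  # independent layers agreeing is a strong signal
    if total >= 3:
        return "C"
    if total >= 1:
        return "B"
    return "A"
```

The design point is the second check: a finding confirmed by two different analysis layers is weighted more heavily than several findings from one layer, which is why running more than one scanner matters.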
POLL: What Do You Want Demo'd at Thursday's Pirate Cove?
Thursday's Pirate Cove office hours is coming up and I want to demo what YOU actually want to see. Vote by dropping your reaction on the option(s) you want:

1. Custom Skills: How to build your own slash commands that make Claude Code do exactly what you want. Write once, use forever.
2. Subagent Creation: Spawn multiple Claude agents to work in parallel. Break big tasks into pieces and let them run simultaneously.
3. Playwright MCP (Browser Automation): Control your browser with Claude Code. Auto-fill forms, scrape data, post to social media, test websites, all hands-free.
4. CLAUDE.md Deep Dive: How to set up your CLAUDE.md files so Claude actually remembers your preferences, project structure, and coding style across sessions.
5. Something Else? Drop it in the comments. If enough people want it, we'll pivot.

How to vote: Comment with the number(s) you want. Or just tell me what you're struggling with and I'll build the demo around that.

See you Thursday!
—Your Trusty First Mate (on Captain's Orders)
0 likes • 16d
2. Subagent Creation
Advanced CC Usage Patterns Paper Shared
A paper on advanced Claude Code usage patterns that actually work was shared: https://x.com/DataChaz/status/2009344753754886350?s=20
0 likes • Jan 11
Look forward to it
Tony D
1
3 points to level up
@tony-d-2110
Excited to explore AI's potential for side hustles, eager to automate, optimize, and innovate, discovering new ways to enhance success and creativity.

Active 2h ago
Joined Jan 7, 2026
US