New Video: How I Use Claude Code To Find Viral Topics First
I built a system that scans Twitter, news websites, and YouTube every morning: it filters out the noise, clusters duplicate coverage into stories, and scores everything by relevance. No more doomscrolling 10 tabs to stay informed. In this video I show the full system live and walk you through building your own.

Here's the quick setup guide:

1. Connect your data sources

Claude Code supports external tools through MCP and API integrations. For this system you need:

- Apify: scrapes Twitter/X accounts and YouTube channels. Get an API key at apify.com and add it to your .env as APIFY_API_KEY. Claude Code calls Apify actors via REST API to pull tweets plus video metadata and transcripts.
- Firecrawl: scrapes any website (blogs, news sites). Get an API key at firecrawl.dev and add it as FIRECRAWL_API_KEY. Claude Code uses it to grab article content from pages like techcrunch.com, openai.com/news, etc.
- Kie AI (optional): for generating visual assets from your trend data. Add it as `KIE_AI_API_KEY`.

2. Define your sources

Create a list of Twitter accounts and websites relevant to YOUR niche. Mine are 10 Twitter accounts (@OpenAI, @AnthropicAI, @karpathy, @sama, etc.) and 5 news sites. You can track any industry: crypto, biotech, marketing, whatever.

3. Write a Claude Code slash command

Create a file at `.claude/commands/trends.md` that tells Claude to:

- Fetch the latest posts from your Twitter accounts (via Apify)
- Scrape your news websites (via Firecrawl)
- Filter to news only (drop opinions and commentary)
- Cluster items about the same event into stories
- Score each story 1-10 for relevance to your niche

Then just run /trends in Claude Code and it does everything.

4. Generate a web interface

Ask Claude Code to build you a simple Next.js page that displays your stories grouped by day and sorted by score. Mine has color-coded badges (green = high relevance, gray = low), expandable source lists, and voting buttons so the system learns what I care about over time.
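For step 3, here is a rough sketch of what a `.claude/commands/trends.md` file could look like. The specific handles, URLs, and scoring rubric below are illustrative placeholders, not the exact file from the video:

```markdown
<!-- .claude/commands/trends.md (illustrative sketch) -->
Fetch the latest posts from these Twitter accounts via the Apify API
(key in APIFY_API_KEY): @OpenAI, @AnthropicAI, @karpathy, @sama.

Scrape these news pages via the Firecrawl API (key in FIRECRAWL_API_KEY):
https://techcrunch.com, https://openai.com/news.

Then:
1. Keep only news items; drop opinion pieces and commentary.
2. Cluster items covering the same event into a single story.
3. Score each story 1-10 for relevance to my niche.
4. Output stories sorted by score, with all source links.
```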
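If you want to prototype the cluster-and-score step yourself before handing it to Claude, here's a minimal Python sketch. The keyword-overlap clustering and the scoring heuristic are my own stand-ins for illustration; in the actual system, Claude does this reasoning inside the /trends command:

```python
# Hypothetical stand-in for the "cluster duplicates, then score" step.
# In the real workflow Claude Code handles this; this sketch just shows
# the shape of the logic using crude keyword overlap.

def keywords(title: str) -> set[str]:
    """Lowercased words longer than 3 chars: a crude similarity signal."""
    return {w.strip(".,!?").lower() for w in title.split() if len(w) > 3}

def cluster(items: list[str], min_overlap: int = 2) -> list[list[str]]:
    """Greedily group headlines sharing >= min_overlap keywords."""
    clusters: list[list[str]] = []
    for item in items:
        kw = keywords(item)
        for group in clusters:
            if len(kw & keywords(group[0])) >= min_overlap:
                group.append(item)
                break
        else:
            clusters.append([item])
    return clusters

def score(story: list[str], niche_terms: set[str]) -> int:
    """1-10 relevance score from niche-term hits across the story, capped."""
    hits = sum(1 for item in story for t in niche_terms if t in item.lower())
    return max(1, min(10, hits * 3))
```

A real version would compare embeddings rather than raw keywords, but the greedy-grouping shape is the same.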