160 job leads before my first cup of coffee
I got back on the job market last week. By day 2 I was already annoyed at the process, so I did what any of us would do and automated it. Sharing the full stack here because I think a few of you will appreciate the build.

🔧 WHAT I BUILT

A fully automated job-hunting pipeline that runs every morning and feeds me a curated, scored, deduplicated list of fresh job leads — before I touch my keyboard.

⚙️ THE STACK
• n8n — orchestrates the entire workflow (scheduled trigger, plus a manual trigger for testing)
• Apify — custom LinkedIn scraper actor I built myself (more on this below)
• Airtable — job database with scoring, dedup logic, and status tracking
• Claude (Anthropic) — reviews listings, scores them, builds application packages
• Slack — notifies me by category when new jobs land for me to review

🔄 THE FLOW
1. n8n fires on schedule across 4 search categories (engineer, architect, leadership, executive)
2. Hits my custom Apify actor to scrape LinkedIn job results
3. Flattens + normalizes the dataset, deduplicates, scores relevance
4. Checks Airtable — new jobs get inserted, existing ones get skipped
5. A Slack alert fires per category: "New architect jobs added (58)"
6. A Claude agent queues up, reviews each listing, scores it, and builds a tailored application package
7. The Airtable field Application_Package_Created flips to true when done

🎭 THE ACTOR

This is the part I'm most excited to share. I built my own LinkedIn job scraper actor on Apify — not a wrapper around an existing one. It's tuned specifically for job-search queries, with clean output that maps directly into the n8n normalization layer. It lands on the Apify marketplace tomorrow, alongside the GitHub repo.

📦 WHAT'S DROPPING TOMORROW
• n8n workflow JSON (import and go)
• Custom Apify actor (public on the marketplace)
• Airtable base schema
• Claude prompt chain
• Full setup docs

⚠️ Note on Airtable: the free tier caps at 1,000 records per base. Fine to start, but you'll hit it fast if you're running multiple search categories daily.
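For anyone who wants the gist of step 3 before the repo drops: here's a minimal sketch of the flatten → dedupe → score pass. This is illustrative, not the actual workflow code — the field names (`title`, `company`, `location`, `description`) and the keyword lists are assumptions on my part; in the real build this logic lives inside n8n.

```python
import hashlib

# Hypothetical per-category keywords; the real workflow has its own search configs.
CATEGORY_KEYWORDS = {
    "architect": ["architect", "platform", "distributed", "aws"],
    "leadership": ["director", "head of", "vp"],
}

def job_key(job: dict) -> str:
    """Stable dedup key: same title + company + location = same job."""
    raw = "|".join(job.get(f, "").strip().lower()
                   for f in ("title", "company", "location"))
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

def score(job: dict, keywords: list[str]) -> int:
    """Naive relevance score: count keyword hits in title + description."""
    text = f"{job.get('title', '')} {job.get('description', '')}".lower()
    return sum(1 for kw in keywords if kw in text)

def normalize(jobs: list[dict], category: str) -> list[dict]:
    """Dedupe a scraped batch and attach a dedup key and relevance score."""
    seen, out = set(), []
    for job in jobs:
        key = job_key(job)
        if key in seen:
            continue  # duplicate within this scrape batch
        seen.add(key)
        out.append({**job, "dedup_key": key,
                    "score": score(job, CATEGORY_KEYWORDS.get(category, []))})
    return out
```

The dedup key doubles as the match field for step 4: n8n looks it up in Airtable, inserts on miss, skips on hit.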
I'm building a MongoDB version this weekend — dropping Monday for anyone who wants to run this at scale without the limit.
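The nice part of the MongoDB swap is that the whole "insert if new, skip if existing" Airtable check collapses into one upsert per job. A sketch of what I mean — collection and field names here are my placeholders, not the repo's actual schema:

```python
# Build the (filter, update) pair for a PyMongo upsert keyed on the dedup hash.
# $setOnInsert only fires when the job is new, so a re-scraped listing is
# skipped without clobbering fields the Claude agent has already filled in.

def job_upsert(job: dict) -> tuple[dict, dict]:
    return (
        {"dedup_key": job["dedup_key"]},
        {"$setOnInsert": {**job, "application_package_created": False}},
    )
```

With PyMongo you'd run it as `db.jobs.update_one(*job_upsert(job), upsert=True)`, and a unique index on `dedup_key` keeps concurrent category runs from racing — no record cap to worry about.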