What I scraped:
Extracted the main navigation links from vistaprint.com: 12 top-level nav items (name + URL), saved to a clean CSV in one shot.
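The CSV step itself is simple; here is a minimal sketch of what "name + URL rows to a clean CSV" looks like. The sample items and the `vistaprint_nav.csv` filename are illustrative, not the actual scraped output.

```python
import csv

# Illustrative rows standing in for the real scraped nav items.
nav_items = [
    {"name": "Business Cards", "url": "https://www.vistaprint.com/business-cards"},
    {"name": "Signs & Banners", "url": "https://www.vistaprint.com/signs-posters"},
]

def save_nav_csv(items, path):
    """Write name/url dicts to a CSV with a header row."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "url"])
        writer.writeheader()
        writer.writerows(items)

save_nav_csv(nav_items, "vistaprint_nav.csv")
```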
One thing I learned:
Claude Code didn't just call a scraper blindly. It reached for firecrawl_scrape with a json format and a structured schema instead of pulling raw HTML and parsing it manually, and when it caught mid-task that the /v1/extract endpoint was deprecated, it switched approaches on its own. That kind of adaptive tool selection is what makes the WAT framework click: the agent handles the how, you just define the what.
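To make the "structured schema instead of raw HTML" point concrete, here is a sketch of the kind of request that approach implies. The payload shape (`formats: ["json"]` plus a `jsonOptions` schema) follows my understanding of Firecrawl's v1 scrape API, so treat the field names and the prompt as assumptions rather than the exact call the agent made.

```python
import json

# JSON Schema describing the structure we want back: a list of
# name + URL nav items, rather than a blob of HTML to parse.
NAV_SCHEMA = {
    "type": "object",
    "properties": {
        "nav_items": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "url": {"type": "string"},
                },
                "required": ["name", "url"],
            },
        }
    },
    "required": ["nav_items"],
}

def build_scrape_payload(url, schema):
    """Assemble a scrape request asking for structured JSON output."""
    return {
        "url": url,
        "formats": ["json"],
        "jsonOptions": {
            "schema": schema,
            "prompt": "Extract the top-level main navigation links.",
        },
    }

payload = build_scrape_payload("https://www.vistaprint.com", NAV_SCHEMA)
print(json.dumps(payload, indent=2))
```

The win is that parsing moves out of your code entirely: you state the shape you want and the tool returns data already matching it.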
One use case idea:
Competitive nav monitoring: scrape the top-level navigation of 10–20 competitor sites on a schedule to track when they add new product categories or restructure their offerings. It's an easy signal for market moves that would otherwise take manual checking.
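The monitoring half of that idea is just a set diff between yesterday's snapshot and today's. A minimal sketch, with hypothetical nav items standing in for real competitor data:

```python
def diff_nav(previous, current):
    """Compare two snapshots of (name, url) nav items; report changes."""
    prev, curr = set(previous), set(current)
    return {"added": sorted(curr - prev), "removed": sorted(prev - curr)}

# Hypothetical snapshots from two scheduled runs.
yesterday = [("Business Cards", "/business-cards"), ("Signs", "/signs")]
today = [
    ("Business Cards", "/business-cards"),
    ("Signs", "/signs"),
    ("Packaging", "/packaging"),
]

changes = diff_nav(yesterday, today)
print(changes)
```

Run this after each scheduled scrape and alert only when `added` or `removed` is non-empty; a new entry like "Packaging" showing up is exactly the category-launch signal worth flagging.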