Spin Up Better n8n Workflows with Cursor + MCP (Even If AI Assist Feels Clunky)
Even if n8n’s built-in AI Assist feels clunky, you’re not stuck. Pair Cursor with an n8n MCP server and you’ll spin up workflows way faster. Here’s one prompt I built as an example: try it yourself in Cursor.

## The Workflow You’ll Build

1. The user uploads a product image and a product description text (use Cloudinary for image/video uploads).
2. Call fal.ai with Kontext Max to generate a 9:16 segmented human photo.
3. Call fal-ai/nano-banana/edit on fal.ai to merge the person and the product into a styled photo, using one option from a predefined style list.
4. Generate a voiceover script from the product description text, supporting a custom accent and voice tone.
5. Use fal.ai’s Google Veo 3 or Veo 3 Fast to create a video from the generated content.
6. Provide the user with a downloadable link.
7. When the generation is complete, send me a Slack DM with the result URL.

For any AI calls, use OpenRouter instead of OpenAI or Claude. Create an n8n workflow with these nodes and connections, and fill in credentials via env vars. Use only OpenRouter for LLM calls.

## Ready-to-Paste Prompt for Cursor

Paste the workflow spec above into Cursor and let it orchestrate via your n8n MCP server.

## MCP Server Install Snippet

Use this in your MCP settings (Cursor `mcp.json` or your host’s config):

```json
"n8n-mcp": {
  "command": "npx",
  "args": ["n8n-mcp"],
  "env": {
    "MCP_MODE": "stdio",
    "LOG_LEVEL": "error",
    "DISABLE_CONSOLE_OUTPUT": "true",
    "N8N_API_URL": "https://xxxx.app.n8n.cloud",
    "N8N_API_KEY": "xxxx"
  }
}
```

## Why This Beats Built-in AI Assist

- Deterministic control over nodes and credentials
- Faster iteration in Cursor with a single, reusable prompt

If you want the full exported workflow or a prebuilt template, reply and I’ll share one you can import into your n8n instance.
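One gotcha with the install snippet: it is only the server entry itself, not a complete config file. In Cursor’s `mcp.json`, server entries nest under a top-level `mcpServers` key, so a full file looks roughly like this (the URL and API key are placeholders you replace with your own n8n instance values):

```json
{
  "mcpServers": {
    "n8n-mcp": {
      "command": "npx",
      "args": ["n8n-mcp"],
      "env": {
        "MCP_MODE": "stdio",
        "LOG_LEVEL": "error",
        "DISABLE_CONSOLE_OUTPUT": "true",
        "N8N_API_URL": "https://xxxx.app.n8n.cloud",
        "N8N_API_KEY": "xxxx"
      }
    }
  }
}
```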
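To make the “predefined style list” part of the nano-banana/edit merge step concrete, here is a minimal Python sketch of a payload builder you could mirror in an n8n Code or HTTP Request node. The field names (`image_urls`, `prompt`, `aspect_ratio`) and the style presets are illustrative assumptions, not fal.ai’s documented schema — check the model page on fal.ai before wiring this in:

```python
# Sketch: build a request body for the person + product merge step.
# Field names and presets are assumptions for illustration only.

STYLE_PRESETS = {
    "studio": "clean studio lighting, seamless white backdrop",
    "street": "urban street style, golden-hour light",
    "cozy": "warm indoor scene, soft natural light",
}

def build_merge_payload(person_url: str, product_url: str, style: str) -> dict:
    """Combine the segmented person photo and product image into one edit request."""
    if style not in STYLE_PRESETS:
        raise ValueError(f"unknown style {style!r}; pick one of {sorted(STYLE_PRESETS)}")
    return {
        "image_urls": [person_url, product_url],
        "prompt": f"Merge the person and the product into one photo, {STYLE_PRESETS[style]}",
        "aspect_ratio": "9:16",  # match the 9:16 portrait output from the earlier step
    }

payload = build_merge_payload(
    "https://res.cloudinary.com/demo/person.png",
    "https://res.cloudinary.com/demo/product.png",
    "studio",
)
print(payload["aspect_ratio"])  # → 9:16
```

Keeping the style list in one place like this is also what makes the “one option from a predefined style list” requirement easy to validate before you spend credits on a generation call.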