Gurus are going to hate me for this, but it goes back to 's principle of building products that last more than 10 years. Applying AI to every step of an application's pipeline (a) exposes you to API and model changes, (b) burns tokens needlessly, (c) introduces non-determinism that makes it hard to scale, and (d) on the backend it's always grep + ETL (extract, transform, load) anyway. LLMs and AI models are needed when human reasoning or judgement needs to be automated.
If you wouldn't ask a human to search through 100K lines in a text file manually, you shouldn't be asking AI to do the same.
And the way most "architectures" are touted these days, there's a growing sentiment of "AI everything", when it should be AI for judgement and deterministic code for everything else. Architecturally, AI occupies a small but highly specialised role in the pipeline, not the pipeline itself.
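To make the "AI for judgement, deterministic code for everything else" split concrete, here's a minimal sketch. Everything in it is hypothetical: the log format is invented, and `judge_with_llm` is a stub standing in for whatever model call you'd actually make, so the example runs offline. The point is the shape: the grep-style filtering and extraction are plain deterministic code, and only the final judgement step would ever touch a model.

```python
import re

# Hypothetical application log; imagine 100K lines of this.
LOG = """\
2024-05-01 12:00:01 INFO  user=alice action=login
2024-05-01 12:00:09 ERROR user=bob   action=payment msg=card declined
2024-05-01 12:01:22 INFO  user=carol action=logout
2024-05-01 12:02:45 ERROR user=dave  action=payment msg=gateway timeout
"""

def extract_errors(log: str) -> list[dict]:
    """Deterministic step: grep + parse. Cheap, repeatable, no model involved."""
    pattern = re.compile(r"ERROR user=(\w+)\s+action=(\w+) msg=(.+)")
    return [
        {"user": m[1], "action": m[2], "msg": m[3]}
        for line in log.splitlines()
        if (m := pattern.search(line))
    ]

def judge_with_llm(record: dict) -> str:
    """Judgement step: the one place a model belongs.
    Stubbed here -- a real implementation would call an LLM."""
    return "retryable" if "timeout" in record["msg"] else "user-facing"

errors = extract_errors(LOG)                # deterministic code does the searching
verdicts = {e["user"]: judge_with_llm(e) for e in errors}  # model sees 2 records, not 100K lines
print(verdicts)
```

Notice the token economics: the deterministic pass reduces the input from the whole log to a handful of records, so even if the judgement call is a paid model, you're paying for judgement, not for search.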