Understanding LLM Workflows, LangGraph Execution Model & Hands-on Practice
Day 4 of my AI Agent Development journey took me deeper into how LLM-based systems are structured — and how real AI workflows are designed using LangGraph.
Along with the theory, I also created a hands-on notebook where I built simple conditional, sequential, and parallel workflows to strengthen my understanding.
🔹 LLM Workflow Types I Explored Today
1️⃣ Prompt Chaining — Sequential prompting where each step's output feeds the next (a minimal sketch follows this list).
2️⃣ Routing Chaining — A router directs each input to the LLM or chain best suited to handle it.
3️⃣ Parallelization Chaining — Breaking a task into sub-tasks, running them concurrently, then combining the results.
4️⃣ Orchestration Chaining — An orchestrator LLM decides which sub-model should handle each step.
5️⃣ Evaluation Chaining — One model generates an answer and another evaluates it; if the evaluation fails, the answer is regenerated.
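To make prompt chaining concrete, here is a minimal LangGraph sketch of the pattern. Everything specific in it is my own invention for illustration: the `fake_llm` helper stands in for a real model call, and the state keys and node names are assumptions rather than anything canonical.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

def fake_llm(prompt: str) -> str:
    """Placeholder for a real LLM call -- swap in your model client."""
    return f"LLM response to: {prompt}"

class ChainState(TypedDict):
    topic: str
    outline: str
    article: str

def make_outline(state: ChainState) -> dict:
    # Step 1: the first prompt uses the raw topic.
    return {"outline": fake_llm(f"Write an outline about {state['topic']}")}

def write_article(state: ChainState) -> dict:
    # Step 2: the second prompt consumes the first step's output.
    return {"article": fake_llm(f"Expand this outline: {state['outline']}")}

builder = StateGraph(ChainState)
builder.add_node("make_outline", make_outline)
builder.add_node("write_article", write_article)
builder.add_edge(START, "make_outline")
builder.add_edge("make_outline", "write_article")  # step 1 feeds step 2
builder.add_edge("write_article", END)

graph = builder.compile()
print(graph.invoke({"topic": "LangGraph"}))
```

The chain stays linear on purpose: each node writes one key that the next node reads, which is the whole idea behind the pattern.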
🔹 Core Graph Concepts in LangGraph
🔹 Nodes — Define what work is performed.
🔹 Edges — Define how the workflow transitions.
🔹 State — A shared memory (typed dict / Pydantic model) flowing through the graph.
🔹 Reducers — Define how updates to the shared state are merged (see the sketch after this list).
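Here is a small sketch of how these four pieces fit together. The `notes` key and node names are invented for illustration; the point is the reducer: annotating the key with `operator.add` tells LangGraph to append each node's update instead of overwriting the previous value.

```python
import operator
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    # Reducer: updates to `notes` are concatenated, not overwritten.
    notes: Annotated[list[str], operator.add]

def step_a(state: State) -> dict:
    return {"notes": ["note from A"]}

def step_b(state: State) -> dict:
    return {"notes": ["note from B"]}

builder = StateGraph(State)
builder.add_node("step_a", step_a)   # nodes: the work performed
builder.add_node("step_b", step_b)
builder.add_edge(START, "step_a")    # edges: the transitions
builder.add_edge("step_a", "step_b")
builder.add_edge("step_b", END)

graph = builder.compile()
print(graph.invoke({"notes": []}))
# -> {'notes': ['note from A', 'note from B']}
```

Without the reducer, `step_b`'s write would simply replace `step_a`'s, so the choice of reducer is really a design decision about how the graph accumulates information.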
🔹 LangGraph Execution Model
Today I learned how LangGraph manages:
- Input handling
- Node execution
- State updates
- Conditional routing (see the sketch after this list)
- Parallel flows
- Cyclic workflows
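As one example from this list, conditional routing in LangGraph goes through `add_conditional_edges`. The sketch below is illustrative only: the routing rule (question length) and the node names are assumptions of mine, not anything from the notebook.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    answer: str

def short_answer(state: State) -> dict:
    return {"answer": "short path"}

def long_answer(state: State) -> dict:
    return {"answer": "long path"}

def route(state: State) -> str:
    # Return a label; LangGraph maps it to the next node.
    return "short" if len(state["question"].split()) < 10 else "long"

builder = StateGraph(State)
builder.add_node("short_answer", short_answer)
builder.add_node("long_answer", long_answer)
builder.add_conditional_edges(
    START, route, {"short": "short_answer", "long": "long_answer"}
)
builder.add_edge("short_answer", END)
builder.add_edge("long_answer", END)

graph = builder.compile()
print(graph.invoke({"question": "What is LangGraph?"}))
```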
And I practiced implementing all of this by writing a custom notebook that includes:
✔ Simple conditional workflow
✔ Sequential workflow
✔ Parallel workflow (see the sketch below)
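For instance, a parallel workflow fans out from one node into several branches and joins the results downstream. The sketch below mirrors that shape under my own assumptions: the branch names and payloads are made up, and a reducer ensures the concurrent writes merge instead of colliding.

```python
import operator
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    results: Annotated[list[str], operator.add]

def branch_a(state: State) -> dict:
    return {"results": ["A done"]}

def branch_b(state: State) -> dict:
    return {"results": ["B done"]}

def combine(state: State) -> dict:
    # Fan-in: both branch results are already merged into `results`.
    return {"results": [f"combined {len(state['results'])} branches"]}

builder = StateGraph(State)
builder.add_node("branch_a", branch_a)
builder.add_node("branch_b", branch_b)
builder.add_node("combine", combine)
# Fan out: two edges from START run both branches in the same superstep.
builder.add_edge(START, "branch_a")
builder.add_edge(START, "branch_b")
# Fan in: `combine` waits until both branches have finished.
builder.add_edge("branch_a", "combine")
builder.add_edge("branch_b", "combine")
builder.add_edge("combine", END)

graph = builder.compile()
print(graph.invoke({"results": []}))
```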
This practical coding session helped me understand how real AI agents operate under the hood.
👉 Here is the notebook — check out the code and feel free to share your feedback: