Hey skoolers, a bit of an unrelated question. Has anyone here ever recovered from a huge financial loss? (I lost all my savings in crypto recently due to a phishing attack — everything I had saved over the past three years, which was a huge amount for me.) Now I’m trying to focus on studying and building with AI, but I’m also afraid of failing…
Sorry to hear about your loss, that’s really rough. But it’s awesome you’re focusing on AI now! Don’t worry about failing, it’s all part of the journey. You’ve got this! 🙌
I’m currently learning how to build AI agents, and while I already have a gist of the main frameworks and even tried them myself, I still feel a bit overwhelmed about which one is best suited for production. When building a production-level AI agent, which approach do you prefer — LangGraph, AutoGen, CrewAI, OpenAI’s Agents SDK, or just raw LLM calls? I’d love to hear your perspective on why you’d choose one over the others.
@Cloud Bagtas Great question! You’re right that at a fundamental level, raw LLM calls can be orchestrated to mimic multi-agent collaboration or workflow control by managing contexts and calls yourself. But frameworks like LangGraph and AutoGen add a lot of value by providing structure, reliability, and scalability out of the box.

LangGraph shines when you need explicit workflow control: think complex pipelines with conditional logic, retries, or long-running processes. It helps manage state and dependencies and orchestrates steps cleanly without reinventing the wheel each time.

AutoGen is tailored for multi-agent setups, where different agents with specialized roles need to communicate and collaborate in real time. It abstracts a lot of the messaging and context-sharing complexity, making multi-agent coordination smoother and less error-prone.

Raw LLM calls definitely work well for simpler, stateless tasks or prototypes where overhead matters and you want full control. But as use cases grow in complexity, these frameworks reduce development time and improve maintainability. So yes, raw LLM calls are the foundation, but these frameworks layer on powerful abstractions to tackle specific challenges at scale.
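To make the "raw LLM calls as the foundation" point concrete, here’s a tiny hand-rolled sketch of the plumbing (state passed between steps, a validation gate, manual retries) that frameworks like LangGraph model for you as graph state and edges. Everything here is hypothetical illustration: `fake_llm` is a deterministic stand-in for a real model client, and none of these names come from any framework’s actual API.

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; deterministic so the sketch runs offline."""
    if prompt.startswith("summarize:"):
        return "summary: " + prompt.split(":", 1)[1].strip()[:40]
    return "unknown task"

def run_step(task: str, text: str, max_retries: int = 2) -> str:
    """One workflow step with a crude validation gate and manual retry —
    exactly the kind of plumbing an orchestration framework gives you out of the box."""
    for _ in range(max_retries + 1):
        result = fake_llm(f"{task}: {text}")
        if result != "unknown task":  # validation gate passed
            return result
        task = "summarize"  # on failure, repair the prompt and try again
    raise RuntimeError("step failed after retries")

def pipeline(text: str) -> dict:
    """Hand-managed state dict threaded between steps — what LangGraph
    would represent explicitly as shared graph state."""
    state = {"input": text}
    state["summary"] = run_step("summarize", state["input"])
    return state

print(pipeline("LangGraph vs raw calls"))
```

This is fine at this scale, but once you add branching, parallel steps, persistence, or human-in-the-loop pauses, you end up rebuilding the framework yourself, which is usually the tipping point for adopting one.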
@Cloud Bagtas I really appreciate you sharing your experience. It’s super relatable! I’m also in the learning phase, so I totally get how overwhelming it can get, especially with tools like LangGraph.
NVIDIA’s data factory team creates the foundation for AI models like Cosmos Reason, which today topped the physical reasoning leaderboard on Hugging Face. https://blogs.nvidia.com/blog/ai-reasoning-cosmos/