
Memberships

AI Developer Accelerator

10.8k members • Free

5 contributions to AI Developer Accelerator
Help: adk api_server Not Persisting Sessions to SQLite Database from Front End
## Problem

(Tried so many things, I feel like I am missing something...)

I've set up an ADK agent with a `DatabaseSessionService` pointing to a SQLite database. My React frontend successfully creates sessions and communicates with the backend. While the API server is running, everything works fine, but sessions aren't persisting when the server restarts.

## Code Setup

**backend/agent_module/agent.py**:

```python
from google.adk.sessions import DatabaseSessionService

# Database setup
db_url = "sqlite:///./my-sqlite-database.db"
session_service = DatabaseSessionService(db_url=db_url)
```

**backend/agent_module/__init__.py**:

```python
from . import agent, session_service
# Also tried just the usual:
# from . import agent
# Both give the same result, no difference.
```

**frontend/src/components/Chat.tsx** (relevant parts):

```typescript
// API endpoints for the ADK API server
const api = {
  // Create a new session with optional initial state
  session: (userId: string, initialState?: Record<string, unknown>) =>
    fetch(`/apps/agent_module/users/${userId}/sessions`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ state: initialState })
    }).then(r => r.json() as Promise<{ id: string }>),

  // Other API endpoints...
}

// Creating a session
const createNewSession = async () => {
  // Load initial state from files...

  // Create the session with initial state
  const { id } = await api.session(userId, initialState);
  console.log("Created session with ID:", id);
  setSessionId(id);
}
```

## Observations

1. When running `adk api_server` from the parent directory, the frontend successfully connects
2. When I send a message, the SQLite file is created successfully
3. Sessions can be created and used while the server is running
4. Debug code shows the `DatabaseSessionService` is initialized correctly:

```
Created DatabaseSessionService with URL: sqlite:///./my-sqlite-database.db
Session service type: <class 'google.adk.sessions.database_session_service.DatabaseSessionService'>
```
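One thing worth checking with a setup like this (a sketch of a possible cause, not a confirmed diagnosis): a relative SQLite URL such as `sqlite:///./my-sqlite-database.db` resolves against the process's current working directory, so launching `adk api_server` from a different directory creates or opens a *different* database file, which can look like lost sessions after a restart. A hypothetical fix is to anchor the path to the module's own directory:

```python
from pathlib import Path

# A relative SQLite URL resolves against the current working directory,
# which depends on where `adk api_server` was launched from:
relative_url = "sqlite:///./my-sqlite-database.db"

# Anchor the database file to this module's directory instead, so the
# same file is reused regardless of the launch directory (hypothetical fix):
db_path = Path(__file__).resolve().parent / "my-sqlite-database.db"
db_url = f"sqlite:///{db_path}"
```

With `db_url` built this way, `DatabaseSessionService(db_url=db_url)` would always point at the same file no matter where the server is started from.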
Code question: Google ADK
I've had success for the most part in Google ADK, but oddly I'm having an issue with an `output_schema` call that uses a list of objects. I'm just doing a basic agent that returns 5 random U.S. Presidents from an OpenAI call. I created a class for `President` with both the name and the year they took office, and then another class that has one list field of the five returned Presidents. I keep getting this error:

```
Input should be an object [type=model_type, input_value=[{'name': 'James Madison'...ear_took_office': 1909}], input_type=list]
For further information visit https://errors.pydantic.dev/2.11/v/model_type
```

Real simple code, but `output_schema` oddly does not work. Anyone who can assist would be greatly appreciated. Code below.

**agent.py**:

```python
from google.adk.agents import LlmAgent
from google.adk.models.lite_llm import LiteLlm
from pydantic import BaseModel, Field


class President(BaseModel):
    name: str = Field(description="The name of the U.S President")
    year_took_office: int = Field(description="The year the president took office")


class TopPresidents(BaseModel):
    presidents: list[President] = "List of Presidents and the year they took office."


llm = LiteLlm(model="openrouter/gpt-4.1")

root_agent = LlmAgent(
    name="root_agent",
    # model="gemini-2.0-flash-lite",
    model=llm,
    description="You are an expert U.S Historian with knowledge of all former US Presidents",
    instruction="""
    Return the name of 5 random US Presidents and the year they took office.
    - The returned output must have 5 Random United State Presidents.
    DO NOT include any explanations or additional text outside the JSON response.
    """,
    output_schema=TopPresidents,
    output_key="mypresidents",
)
```
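For what it's worth, the pydantic error text itself describes the mismatch: the model emitted a bare JSON array (`[{...}, ...]`) while `TopPresidents` expects a top-level *object* with a `presidents` key. A minimal stdlib sketch of a defensive wrapper (a hypothetical helper, not part of ADK) that normalizes such output before validation:

```python
import json

def wrap_bare_list(raw: str) -> dict:
    """If the LLM returned a bare JSON array instead of the
    {"presidents": [...]} object that TopPresidents expects, wrap it."""
    data = json.loads(raw)
    if isinstance(data, list):
        data = {"presidents": data}
    return data

# A bare array (what the error says was received) gets wrapped...
bare = '[{"name": "James Madison", "year_took_office": 1809}]'
# ...while an already-correct object passes through unchanged.
wrapped = '{"presidents": [{"name": "James Madison", "year_took_office": 1809}]}'
```

The wrapped dict can then be validated with `TopPresidents.model_validate(...)` without tripping the `model_type` error.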
1 like • May 13
No, you can use it, and I also use `from google.adk.models.lite_llm import LiteLlm`; it does work. Here is my full, exact single-script `agent.py` (with the usual `__init__.py` containing `from . import agent`):

```python
from google.adk.agents import LlmAgent
from google.adk.models.lite_llm import LiteLlm
from pydantic import BaseModel, Field
import os

llm = LiteLlm(
    model="gpt-4.1",
    api_key=os.environ["OPENAI_API_KEY"],
    response_format={"type": "json_object"}
)


class President(BaseModel):
    name: str = Field(description="The name of the U.S President")
    year_took_office: int = Field(description="The year the president took office")


class TopPresidents(BaseModel):
    presidents: list[President] = "List of Presidents and the year they took office."


root_agent = LlmAgent(
    name="root_agent",
    # model="gemini-2.0-flash-lite",
    model=llm,
    description="You are an expert U.S Historian with knowledge of all former US Presidents",
    instruction="""
    Return the name of 5 random US Presidents and the year they took office.
    - The returned output must have 5 Random United State Presidents.
    DO NOT include any explanations or additional text outside the JSON response.
    """,
    output_schema=TopPresidents,
    output_key="mypresidents",
)
```

And here is the output, exactly matching the schema with no errors:

```json
{
  "presidents": [
    {"name": "Chester A. Arthur", "year_took_office": 1881},
    {"name": "James K. Polk", "year_took_office": 1845},
    {"name": "Jimmy Carter", "year_took_office": 1977},
    {"name": "Zachary Taylor", "year_took_office": 1849},
    {"name": "Barack Obama", "year_took_office": 2009}
  ]
}
```
1 like • May 13
Also note: if you use a Gemini model directly, you DO NOT need this workaround of adding `response_format`; it is only needed when using the LiteLlm wrapper with OpenAI. If you use a Gemini model directly via `model='gemini-etc-etc'`, it will work out of the box and respect the `output_schema`.
[New Video] How to Build Your First RAG Agent with Agent Development Kit (ADK + Vertex AI RAG Service)
You can now build your own Retrieval-Augmented Generation (RAG) agent that answers questions using your documents from Google Drive—and it only takes a few minutes with Google’s Agent Development Kit (ADK). In this brand-new tutorial, I’ll walk you through a complete ADK workflow that: ✅ Connects your agent to Google Cloud and sets up a knowledge base ✅ Uploads and manages your documents for smarter responses ✅ Instantly answers questions with citations from your actual files ✅ Expands easily with new docs to make your agent even more powerful This is the perfect crash course if you're looking to break into real-world AI development and learn RAG from the ground up. And yes—I'm giving away the entire source code for free. Ready to build your own document-aware AI assistant? Click the link below to watch the full tutorial—and make sure to subscribe so you don’t miss future drops. Cheers, Brandon Hancock 🧑‍💻 P.S. Here is the source code: 👉 https://github.com/bhancockio/adk-rag-agent
1 like • May 12
@Brandon Hancock Hey, so I know this is a big question, but now that you have seen them ALL: do you truly believe you can pick and match different frameworks for the JOB, or can you really see yourself working ONLY with ADK and it just does it all? Is there any need that a non-ADK framework fills for you, or is there still a place for frameworks like CrewAI and LangGraph?
Anybody else invested heavily in a framework just to find a better one appear right at the end?
Hello everybody! So I basically spent about a month on the DEEP DEEP end of LangChain's entire LangGraph framework. I did all 6 official modules from their LangChain Academy, and each module has like 5 sections! I learned everything I possibly could about LangGraph. I even created my own sort of shortcut-snippet modular version of it where, within a couple of keystrokes, I could spin up all kinds of complex combinations at will, and even host it and run it in a UI, with tracing and everything.

THEN ADK drops! I felt that overwhelm of "dear lord, is everything now going to be like ADK?" And even though I was in denial, it was very obvious that ADK would soon be the superior, more integrated option; it just seemed more powerful! So I started going through the documentation in bits and pieces, and then I quickly realized something: WOW, learning LangGraph actually made learning ADK soooooo much easier and FASTER. Because I had already learned all the main agent patterns, like workflows and extremely complex state management, with ADK I could see it was all kind of the same concept, just a different flavor, a different way of going about it. Within 2 days I felt extremely confident with ADK, and Brandon's tutorial definitely helped.

So I guess I am writing this post to share my experience and how I am now 100% sure I will be sticking with ADK, because it has literally everything you could ask for, and if you want to go SUPER weirdo mode, Custom Agent is there for you! My point is that anything you learn really helps your brain understand the concepts at a much deeper level, and any new framework becomes so much easier to learn; it almost gives you different perspectives and creativity to skin the same problem. Anyway, this is my first post ever. I don't leave my coding computer much these days, so I thought I'd be social and show some love to Brandon's group, since his tutorial has really helped me!
1 like • May 12
@Tom Welsh Hey Tom, thanks for your reply; not sure when I last spoke to a human outside ChatGPT, Claude, Gemini, and Grok lol. I started as a vibe coder 2 years ago, before it was a thing. It's actually a really good and easy way to get through the door if you have literally ZERO background in tech. THEN, once you are in, you can really see if this is for you, and that is when I found out that asking the models to teach me what the hell they were writing made such a huge difference. It also became so much more fun when I actually understood the syntax, and it makes it much easier to innovate, because now you can leverage the logic and get creative. But I am almost grateful that when I started, the models were relatively so bad that I was FORCED to learn the fundamentals just to get things done, and that was the best choice ever. I can't imagine how I could be programming right now without the actual fundamentals; even as the models got smarter, they are still nowhere close to getting a full program from A to Z done without you!
The LLM Overload: How "AI ADHD" is Draining Developer Productivity
Remember the early days of large language models (LLMs)? It felt like a single, powerful oracle at our fingertips, ready to answer coding questions and debug tricky problems. Now, we're bombarded with a dizzying array of models, each with its own strengths, weaknesses, and quirky personalities. While choice is generally good, this explosion of LLMs is starting to feel less like a helpful toolkit and more like… well, a digital form of ADHD for developers.

We're calling it "AI ADHD": the constant distraction and context switching caused by the sheer number of LLMs available and the pressure to know which one is "best" for any given task. Here's how this overload is quietly hurting the programming experience:

**1. Decision Fatigue Sets In (Before You Even Write Code):**

Before you even type your first line, you're faced with a choice: Which LLM should I ask? Do I need something specifically tuned for Python? Should I use the one known for creative code generation or the one better at factual explanations? This initial decision-making process, repeated multiple times throughout the day, is surprisingly draining. It's like having to choose from a hundred different screwdrivers for a single screw: most of them will *kind of* work, but you're wasting time trying to figure out the *optimal* one.

**2. Context Switching Becomes a Constant Headache:**

Each LLM has its own prompt engineering nuances, its own preferred input formats, and its own unique ways of interpreting requests. Switching between models for different tasks means constantly shifting your mental gears. You might have just gotten used to crafting prompts for Model A when you realize Model B is better for your current problem, forcing you to relearn how to effectively communicate with it. This constant context switching breaks flow and hinders deep work.

**3. The Fear of Missing Out (FOMO) is Real:**

There's a nagging feeling that you're not using the "right" tool. Did that other LLM have a more up-to-date knowledge base? Would it have generated cleaner code? This FOMO can lead to second-guessing, re-running requests in different models, and ultimately, more wasted time chasing an elusive "perfect" answer.
0 likes • May 12
This is very true! But you know how in Cursor and Windsurf you basically have a shortcut to quickly switch models? That is now a mainstay of my workflow: if it's SUPER COMPLEX, I instantly go for o3; if it's super long context, I go for Gemini 2.5; if it's quick n dirty, I go for Claude 3.7. BUT if I want the real power of Claude 3.7, then I opt for Claude Code in the terminal, which is orders of magnitude stronger than 3.7 in an IDE, because there it gets super nerfed. And what's crazy is that ALL of them became a super important part of my main workflow!
Basha Kodes
@basha-kodes-4162
Software Engineer - Python/Machine Learning

Active 213d ago
Joined May 12, 2025
San Jose, California