Yesterday I watched a video from the AI Engineer Summit (if you haven't seen it, the playlist has about three dozen top AI talks). In one of the presentations, the speaker showed an interesting approach to working with AI agents. The agent gets access to the necessary tools (a database, API services, etc.) and takes a user request as input.
Using these tools, the agent builds an interface in real time that matches the specific task. The example shown on screen was a simple email interface.
The user interacts with this interface - for example, clicks an email, and the agent receives data about that action. It then predicts what the user likely wants to see next and updates the interface to fit that step - in this case, displaying the contents of the selected email.
Thus, with access to the data and the context of the interaction, the agent is able to shape interfaces on the fly to the user's current need.
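To make the loop concrete, here is a minimal sketch of the idea in Python. All names here (`agent_step`, the event shapes, the sample emails) are my own assumptions for illustration; in a real system the hard-coded rules below would be replaced by an LLM call that decides which view to generate next.

```python
# Toy sketch of the adaptive-UI feedback loop: each user action is sent
# to the agent, which returns the next UI spec tailored to that action.
# A real agent would consult an LLM and its tools instead of these rules.

EMAILS = {
    "msg-1": {"subject": "Quarterly report", "body": "Numbers attached."},
    "msg-2": {"subject": "Lunch?", "body": "Noon at the usual place?"},
}

def agent_step(event):
    """Given the latest user action, return the next UI spec as a dict."""
    if event["type"] == "open_inbox":
        # Agent decides a list view fits the task and builds it from data.
        return {
            "view": "list",
            "items": [{"id": mid, "label": m["subject"]}
                      for mid, m in EMAILS.items()],
        }
    if event["type"] == "click_email":
        # The click is fed back to the agent, which generates a detail view.
        msg = EMAILS[event["id"]]
        return {"view": "detail",
                "subject": msg["subject"],
                "body": msg["body"]}
    return {"view": "error", "message": "unhandled event"}

# The loop in action: open the inbox, then click the first email.
ui = agent_step({"type": "open_inbox"})
ui = agent_step({"type": "click_email", "id": "msg-1"})
```

The key design point is that the interface is not a fixed screen flow: the UI spec is regenerated after every interaction, so each action can reshape what the user sees next.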
Of course, this approach has limitations: for now everything runs quite slowly, the interfaces are simple, and the result can vary from request to request.
But if we introduce a system of rules and speed up the AI's responses, in the future we could get truly adaptive interfaces that automatically adjust to the user's tasks without unnecessary clutter.
Watch the presentation here