📰 AI News: Tokyo Startup Claims It Built A Brain-Inspired AGI That Teaches Itself
📝 TL;DR

A little-known startup led by a former Google AI veteran says it has built the first AGI-capable system, one that can learn new skills on its own without human data or hand-holding. The model is said to mirror how the brain's neocortex works, but outside experts are extremely skeptical and there is no public proof yet.

🧠 Overview

A company called Integral AI, founded by ex-Google researcher Jad Tarifi, has announced what it calls the first AGI-capable model. The system is designed to learn new skills autonomously, both in digital environments and with robots in the physical world, using an architecture explicitly modeled on the layered structure of the human neocortex. The claims are bold, and they land at a moment when the biggest players openly say AGI is still ahead of us, which is why the announcement is being met with a mix of curiosity, side-eye, and memes.

📜 The Announcement

On December 8, 2025, Integral AI publicly claimed it has successfully tested a model that meets its own definition of AGI-capable. The startup says its system can teach itself entirely new tasks in unfamiliar domains, without pre-existing datasets or human intervention, while remaining safe and energy-efficient. The founders frame this as a foundational step toward embodied superintelligence and position their architecture as a fundamental leap beyond current large language models. At the same time, there is no peer-reviewed paper, no open benchmark, and no independent verification yet, so for now this is a marketing claim rather than an accepted scientific milestone.

⚙️ How It Works

• Brain-inspired architecture - Integral says its model grows, abstracts, plans, and acts in a layered way that mirrors the human neocortex, with higher levels building increasingly abstract world models on top of raw sensory data.
• Universal simulators - The first piece is a simulator that learns a unified internal model of different environments from vision, language, audio, and sensor data, then uses that model to reason and predict across many domains.
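Since Integral AI has published no code or paper, the layered-abstraction idea above can only be illustrated with a toy sketch. The sketch below is purely hypothetical (the `Layer` and `Hierarchy` names are invented here, not Integral AI's): each layer summarizes fixed-size windows of the level below it, so representations get shorter and more abstract as they move up, loosely echoing the "higher levels build abstract world models on raw sensory data" description.

```python
# Toy sketch of a layered abstraction hierarchy (illustrative only;
# NOT Integral AI's actual architecture, which is unpublished).

class Layer:
    def __init__(self, window: int):
        # Number of lower-level items summarized into one higher-level item.
        self.window = window

    def abstract(self, signal: list[float]) -> list[float]:
        # Replace each window of the input with its mean, yielding a
        # shorter, coarser ("more abstract") representation.
        return [
            sum(chunk) / len(chunk)
            for chunk in (
                signal[i : i + self.window]
                for i in range(0, len(signal), self.window)
            )
        ]


class Hierarchy:
    def __init__(self, layers: list[Layer]):
        self.layers = layers

    def run(self, raw: list[float]) -> list[list[float]]:
        # Feed raw "sensory" data upward and keep every level's representation.
        levels = [raw]
        for layer in self.layers:
            levels.append(layer.abstract(levels[-1]))
        return levels


# Eight raw values collapse to four mid-level summaries, then two high-level ones.
levels = Hierarchy([Layer(2), Layer(2)]).run(
    [1.0, 3.0, 2.0, 4.0, 6.0, 8.0, 5.0, 7.0]
)
```

A real system in this vein would of course learn its abstractions from multimodal data rather than averaging windows; the sketch only conveys the shape of a bottom-up hierarchy, not any learning mechanism.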