⚠️WHY CHATGPT WON'T SUFFICE⚠️
WARNING: a bit nerdy
A lot of people in here are using ChatGPT as their base AI — here's why that won't be enough in the future.
a 27M parameter ai model just beat gpt-5 on reasoning benchmarks!
yes, 27M parameters.
that's 4x smaller than gpt-1 (117M).
why?
because singapore startup sapient intelligence built a brain-inspired architecture that thinks like humans do: two modules working together - a slow strategic planner and a fast tactical worker.
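to make the two-module idea concrete, here's a minimal sketch of that hierarchical recurrence — this is NOT sapient's actual code; the names, dimensions, and random weights are all illustrative assumptions. the point is just the loop structure: a slow high-level state that updates once per cycle, and a fast low-level state that iterates many steps per cycle conditioned on the high-level "plan".

```python
# Minimal sketch (assumed structure, not Sapient's implementation) of a
# hierarchical recurrence: a slow "planner" module and a fast "worker"
# module. Random fixed weights stand in for trained parameters.
import numpy as np

rng = np.random.default_rng(0)

DIM = 16          # hidden size (illustrative)
N_CYCLES = 4      # slow, high-level updates
T_STEPS = 8       # fast, low-level steps per high-level cycle

W_low = rng.normal(scale=0.1, size=(DIM, DIM))
W_high = rng.normal(scale=0.1, size=(DIM, DIM))

def low_step(z_low, z_high, x):
    # fast worker: refines its state, conditioned on the planner's state
    return np.tanh(z_low @ W_low + z_high + x)

def high_step(z_high, z_low):
    # slow planner: absorbs the worker's result once per cycle
    return np.tanh(z_high @ W_high + z_low)

def hrm_forward(x):
    z_low = np.zeros(DIM)
    z_high = np.zeros(DIM)
    for _ in range(N_CYCLES):      # slow outer loop (strategy)
        for _ in range(T_STEPS):   # fast inner loop (tactics)
            z_low = low_step(z_low, z_high, x)
        z_high = high_step(z_high, z_low)
    return z_high                  # final state feeds an output head

out = hrm_forward(rng.normal(size=DIM))
print(out.shape)  # (16,)
```

the design point: the outer loop runs few iterations with a big effective receptive field over the computation, while the inner loop does cheap iterative refinement — which is why it can reason in latent space instead of "thinking out loud" in tokens.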
what hrm (the hierarchical reasoning model) achieved:
– 40.3% on arc-agi (beats claude 3.7's 21.2%)
– near-perfect on extreme sudoku (gpt-4: 0%)
– optimal 30x30 maze solving (state-of-the-art: 0%)
– trained in 2 gpu hours with just 1,000 examples
gpt-5 today:
– billions of parameters
– $17-20 per complex reasoning task
– requires massive pre-training
– still uses chain-of-thought "thinking out loud"
translation: it's not which model you use that matters — it's whether the model's architecture actually fits what you're trying to do.
funny thing is gpt-5 just launched with all the hype, yet a model 100x smaller solves puzzles gpt-5 can't even attempt.
Jonas Alfonso