Data Alchemy
Anaxareian Aia
Oct '24 (edited)
💬 General
Move over Groq, Cerebras Fastest at Inference
https://beebom.com/cerebras-worlds-fastest-ai-inference-released/
"""
Cerebras' Wafer-Scale Engine has outperformed Groq in delivering the fastest AI inference.
Cerebras Inference clocks up to 1,800 tokens per second while running the 8B model and 450 tokens per second on the 70B model.
In comparison, Groq reaches up to 750 T/s and 250 T/s while running 8B and 70B models, respectively.
"""
Cerebras now reports Llama 70B at over 2,000 tokens/sec:
https://cerebras.ai/blog/cerebras-inference-3x-faster
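If you want to sanity-check these tokens/sec numbers yourself, here's a rough sketch of measuring end-to-end throughput against an OpenAI-compatible endpoint. The base URL, model name, and API key variable below are my assumptions, not something from the linked articles, so swap in whatever the provider actually documents.

```python
# Rough throughput check against an OpenAI-compatible endpoint.
# base_url, model name, and env var are assumptions -- replace with
# the provider's documented values.
import os
import time
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",   # assumed OpenAI-compatible endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],  # hypothetical env var name
)

prompt = "Explain wafer-scale integration in two paragraphs."
start = time.perf_counter()

resp = client.chat.completions.create(
    model="llama3.1-70b",                    # assumed model identifier
    messages=[{"role": "user", "content": prompt}],
    max_tokens=512,
)

elapsed = time.perf_counter() - start
completion_tokens = resp.usage.completion_tokens

# Wall-clock time includes network and queueing overhead, so this will
# understate the engine's raw decode speed compared to vendor benchmarks.
print(f"{completion_tokens} tokens in {elapsed:.2f}s "
      f"-> {completion_tokens / elapsed:.0f} tokens/sec")
```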