“The wafer-scale approach is unique and clearly better for big models than much smaller GPUs. ... Cerebras has created what should be the industry’s best solution for training very large neural networks.”
Linley Gwennap, President and Principal Analyst, The Linley Group
“Cerebras is the company whose architecture is skating to where the puck is going: huge AI.”
Karl Freund, Principal, Cambrian AI Research
“Cerebras’ ability to bring large language models to the masses with cost-efficient, easy access opens up an exciting new era in AI. It gives organizations that can’t spend tens of millions an easy and inexpensive on-ramp to major league NLP.”
Dan Olds, Chief Research Officer, Intersect360 Research
“Years later, [Cerebras] is still perhaps the most differentiated competitor to NVIDIA’s AI platform. It takes a lot to go head-to-head with NVIDIA on AI training, but Cerebras has a differentiated approach that may end up being a winner.”
Patrick Kennedy, ServeTheHome
What our customers are saying
GlaxoSmithKline
"The Cerebras CS-2 is a critical component that allows GSK to train language models using biological datasets at a scale and size previously unattainable. These foundational models form the basis of many of our AI systems and play a vital role in the discovery of transformational medicines."
Kim Branson
SVP Global Head of AI and ML
GlaxoSmithKline
AstraZeneca
"Training which historically took over 2 weeks to run on a large cluster of GPUs was accomplished in just over 2 days — 52hrs to be exact — on a single CS-1. This could allow us to iterate more frequently and get much more accurate answers, orders of magnitude faster."
Nick Brown
Head of AI & Data Science
AstraZeneca
TotalEnergies
"TotalEnergies’ roadmap is crystal clear: more energy, less emissions. To achieve this, we need to combine our strengths with those who enable us to go faster, higher, and stronger… We count on the CS-2 system to boost our multi-energy research and give our research ‘athletes’ that extra competitive advantage."
Vincent Saubestre
CEO & President
TotalEnergies Research & Technology USA
Argonne National Laboratory
"Cerebras allowed us to reduce the experiment turnaround time on our cancer prediction models by 300x, ultimately enabling us to explore questions that previously would have taken years, in mere months."
Dr. Rick Stevens
Associate Laboratory Director of Computing, Environment and Life Sciences
Argonne National Laboratory
National Energy Technology Laboratory
"We used the original CS-1 system, which features the WSE, to successfully perform a key computational fluid dynamics workload more than 200 times faster and at a fraction of the power consumption than the same workload on the Lab’s supercomputer JOULE 2.0.”
Dr. Dirk Van Essendelft
ML and Data Science Engineer
National Energy Technology Laboratory
Our unique technology
Wafer-Scale Cluster
The Cerebras Wafer-Scale Cluster delivers unprecedented near-linear scaling and a remarkably simple programming model.
CS-2 System
Purpose built for AI and HPC, the field-proven CS-2 replaces racks of GPUs. Gone are the challenges of parallel programming and distributed training.
Wafer-Scale Engine
The revolutionary central processor for our deep learning computer system is the largest computer chip ever built and the fastest AI processor on Earth.
Software Platform
The Cerebras Software Platform integrates with TensorFlow and PyTorch, so researchers can effortlessly bring their models to CS-2 systems and clusters (see the sketch at the end of this list).
Flexible Deployment
On- or off-premises, Cerebras Cloud meshes with your current cloud-based workflow to create a secure, multi-cloud solution.
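As a rough illustration of the "bring your models" claim in the Software Platform item above, the sketch below is an ordinary, self-contained PyTorch model and training step of the kind such a platform would ingest. It is a minimal sketch only: the model name TinyClassifier, the helper train_step, and the hyperparameters are illustrative, and no Cerebras-specific compile or launch calls are shown because that API is not described on this page.

```python
# Minimal sketch: standard PyTorch only. Any Cerebras-specific submission or
# compilation step is intentionally omitted (assumption: the platform accepts
# ordinary PyTorch model and training code like this).
import torch
import torch.nn as nn


class TinyClassifier(nn.Module):
    """A small feed-forward model; stands in for a research model."""

    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.net(x)


def train_step(model, optimizer, loss_fn, inputs, targets):
    """One ordinary PyTorch training step, written with no
    distributed-training or device-placement logic."""
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = TinyClassifier()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    # Synthetic batch, just to show the step runs end to end.
    x = torch.randn(32, 784)
    y = torch.randint(0, 10, (32,))
    print("loss:", train_step(model, optimizer, loss_fn, x, y))
```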