January 14, 2026
OpenAI partners with Cerebras
OpenAI is partnering with Cerebras to add 750 MW of ultra-low-latency AI compute to our platform.

TL;DR
- OpenAI is partnering with Cerebras to add 750 MW of ultra-low-latency AI compute.
- Cerebras builds AI systems that put massive compute, memory, and bandwidth on a single chip to accelerate inference.
- The integration aims to make AI respond faster, driving deeper user engagement and enabling higher-value workloads.
- This low-latency capacity will be phased into OpenAI's inference stack, with deployments arriving in tranches through 2028.