February 5, 2026
ElevenLabs CEO: Voice is the next interface for AI
The ElevenLabs CEO argued at Web Summit Qatar that voice is the next interface for AI, as OpenAI, Google, and Apple push conversational systems into wearables, new hardware, and everyday interactions.

TL;DR
- Voice is becoming the next major interface for AI, moving beyond text and screens.
- Voice models now capture emotion and intonation, complementing the reasoning capabilities of large language models.
- In this vision, phones become less essential, and voice becomes the primary way people control technology.
- Major companies including OpenAI, Google, and Apple are investing in voice, making it a key battleground for the next phase of AI as it moves into wearables and other hardware.
- Traditional input methods such as keyboards are portrayed as outdated next to voice interaction.
- Agentic AI systems will need less explicit prompting, relying instead on persistent memory and context.
- ElevenLabs is developing a hybrid cloud and on-device processing approach for voice models to support new hardware.
- ElevenLabs is partnering with Meta to integrate its voice technology into products like Instagram and Horizon Worlds.
- The increasing persistence and embedded nature of voice AI raise significant privacy and surveillance concerns.