LiveKit's Series A: Infra for the AI computing era
LiveKit has raised $22.5M in additional funding to build infrastructure for realtime voice- and video-driven AI applications.
AI is accelerating a massive paradigm shift in computing. The more humanlike computers get, the less we adapt to them, and the more they adapt to us. We use our eyes, ears, and mouths to communicate. A humanlike computer uses cameras, microphones, and speakers. While some input processing happens on device, large AI models responsible for planning, reasoning, or performing actions run in the cloud.
Getting that voice or video input from your device to an AI model, and the generated output back to your device, as quickly as possible is a challenge. The brain needs a nervous system to carry signals back and forth, and existing internet infrastructure wasn’t designed for transporting high-bandwidth data in realtime.
We started LiveKit to build that nervous system. Back in 2021, realtime platforms were designed specifically for humans to have online meetings. We envisioned a future where realtime voice and video were as critical to an application as a database. In that future, there would need to be a flexible, open source stack that any developer could use to build any type of realtime voice- or video-based application.
What we didn’t foresee was how quickly the future would arrive. In just 18 months, LiveKit has grown to serve over 20,000 developers, including:
- Four of the top 10 most valuable private companies
- Six of the Fortune 50
- 911 emergency dispatch
- AI assistants, support and service agents, and NPCs in video games
- Police and military drones for teleoperation
- Autonomous cars, trucks, tractors, and spaceships for telemetry and observation
- Security cameras and door locks
- Surgical robots and human surgeons in operating theaters
We also had the extraordinary privilege of working with the teams at OpenAI and Character.ai to give their AI models the ability to see, hear, and speak. Along the way, we learned about the infrastructure challenges that needed to be overcome in order to build and deliver AI applications at scale. While AI is talked about as a “space” right now, in the fullness of time AI will be every space, and these challenges will become undifferentiated heavy lifting.
Earlier this year, we incorporated the lessons from our work on voice-driven AI assistants into LiveKit Agents, a framework that any developer can use to build realtime, multimodal AI applications. While the Agents framework includes core features like programmatic access to streaming audio and video, stateful orchestration, and a plugin system, AI use cases are evolving so rapidly that there’s already a handful of new problems to solve for developers.
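To make that concrete, here is a minimal sketch of the Agents programming model in Python. It is an illustration rather than the canonical API: the names used (`JobContext`, `WorkerOptions`, `cli.run_app`, `rtc.AudioStream`) follow the SDK’s entrypoint-and-worker pattern, but the framework is evolving quickly, so details may differ between versions.

```python
# A minimal sketch of a LiveKit Agents worker, not the canonical API:
# names here (JobContext, WorkerOptions, cli.run_app, rtc.AudioStream)
# follow the SDK's pattern but may differ across versions -- check the docs.
from livekit import rtc
from livekit.agents import JobContext, WorkerOptions, cli


async def entrypoint(ctx: JobContext):
    # The framework dispatches this agent into a LiveKit room; once
    # connected, it has programmatic access to each participant's
    # streaming audio and video tracks.
    await ctx.connect()

    @ctx.room.on("track_subscribed")
    def on_track_subscribed(track: rtc.Track, *_):
        if track.kind == rtc.TrackKind.KIND_AUDIO:
            # Async-iterate raw audio frames from this stream and hand
            # them to an STT -> LLM -> TTS pipeline; plugins supply the
            # individual model integrations.
            stream = rtc.AudioStream(track)


if __name__ == "__main__":
    # The worker registers with LiveKit's orchestration layer, which
    # assigns it jobs (rooms) to join and manages session state.
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
```

The design mirrors a web server: developers write an entrypoint, and the framework handles dispatch, scaling, and the transport of media between users and the agent.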
With that in mind, I’m honored to share that we’ve raised a $22.5M Series A to scale up our efforts to build the delivery network for AI. This round was led by Altimeter with support from Redpoint and incredible folks focused on AI, including Jeff Dean, Elad Gil, Aravind Srinivas, Amjad Masad, Guillermo Rauch, Logan Kilpatrick, Mati Staniszewski, Erik Bernhardsson, Rohan Anil, Mckay Wrigley, Demi Guo, and Matt Shumer.
This investment will be used to cement our position as the default stack for building realtime voice and video applications, expand our AI infrastructure offerings, and develop new products to serve novel use cases.
I want to express my gratitude to the thousands of developers who have built things with LiveKit, given us feedback in our community, and supported us through growing pains. I wouldn’t be writing this post without you. We’ll continue working hard to build muck, so you don’t have to.