Build a multilingual voice agent that automatically switches languages
Create a voice agent using LiveKit Agents, Deepgram STT, OpenAI, and Rime TTS that detects language changes mid-conversation and responds with native-sounding voices.
One of the most common questions developers ask when building voice AI applications is: "How do I detect what language the user is speaking and respond in that same language?" This tutorial walks you through building a voice agent that does exactly that.
You'll create a multilingual voice assistant using LiveKit Agents, Deepgram STT, OpenAI, and Rime TTS. The agent listens for the user's language, detects when they switch languages mid-conversation, and dynamically updates the TTS configuration to respond with a native-sounding voice in that language.
Try the demo live. For the full source code including the Next.js frontend, see the rime-multilingual-demo repository on GitHub. You can also watch a video demo of the multilingual agent in action.
What you'll build
By the end of this tutorial, you'll have a voice agent that:
- Supports English, Hindi, Spanish, Arabic, French, Portuguese, German, Japanese, Hebrew, and Tamil
- Automatically detects the language the user is speaking
- Switches TTS language settings on the fly using a single Rime voice
- Responds naturally in the detected language
- Optionally syncs the current language to the frontend via participant attributes
The key technique involves overriding the STT node in your agent to intercept speech events, extract the detected language, and update the TTS configuration before the agent responds.
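Condensed, the pattern looks like the sketch below. This is a simplified preview, not the final code: the names mirror what you'll build in Step 4, and `_to_rime_code` is a placeholder for the mapping logic that ends up inside `_update_tts_for_language`.

```python
# Sketch of the core pattern (simplified; the full agent class is built in Step 4).
# This is a method on your Agent subclass.
async def stt_node(self, audio, model_settings):
    # Stream events from the default STT pipeline, inspecting each one as it passes through.
    async for event in super().stt_node(audio, model_settings):
        if event.type == stt.SpeechEventType.FINAL_TRANSCRIPT and event.alternatives:
            detected = event.alternatives[0].language  # e.g. "es" or "en-US"
            if detected:
                # Retarget TTS before the agent's reply is synthesized.
                # _to_rime_code is a stand-in for the mapping helper written in Step 4.
                self.session.tts.update_options(language=self._to_rime_code(detected))
        yield event
```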
Prerequisites
Before you start, make sure you have:
- Python 3.11 or later installed
- uv package manager installed
- A LiveKit Cloud account (free tier works)
- API keys for the providers used in this tutorial
Step 1: Set up the project
Create a new directory and initialize the project:
```bash
mkdir rime-multilingual-agent
cd rime-multilingual-agent
uv init --bare
```
Step 2: Install dependencies
Install the LiveKit Agents framework and the packages you need:
```bash
uv add \
  "livekit>=1.0.23" \
  "livekit-agents[silero,turn-detector]>=1.3.12" \
  "livekit-plugins-noise-cancellation>=0.2.5" \
  "python-dotenv>=1.2.1"
```
This installs:
- livekit-agents: The core agents framework with unified inference (STT, LLM, TTS)
- silero: Voice Activity Detection (VAD)
- turn-detector: Contextually aware turn detection for natural conversations
- livekit-plugins-noise-cancellation: LiveKit's noise cancellation plugin (installed here, but not used by the code in this tutorial)
- python-dotenv: Loads environment variables from the .env file you'll create in the next step
STT, LLM, and TTS are configured via the framework's inference API using provider-prefixed models (e.g. deepgram/nova-3-general, openai/gpt-4o, rime/arcana). You supply the corresponding API keys in your environment.
Step 3: Configure environment variables
Create a .env file in your project directory:
```
LIVEKIT_API_KEY=<your_api_key>
LIVEKIT_API_SECRET=<your_api_secret>
LIVEKIT_URL=wss://<project-subdomain>.livekit.cloud
```
You can get your LiveKit credentials from the LiveKit Cloud dashboard under Settings > API Keys.
Step 4: Create the agent
Create a file named main.py and add the following code. I'll break down each section to explain what it does.
Import dependencies and configure logging
```python
import logging
from typing import AsyncIterable
from dataclasses import dataclass

from dotenv import load_dotenv
from livekit.agents import (
    Agent,
    AgentServer,
    AgentSession,
    JobContext,
    JobProcess,
    MetricsCollectedEvent,
    ModelSettings,
    RoomOutputOptions,
    cli,
    metrics,
    stt,
    inference,
)
from livekit.plugins import silero
from livekit.plugins.turn_detector.multilingual import MultilingualModel
from livekit import rtc

logger = logging.getLogger("multilingual-agent")

load_dotenv()
```
Define language configurations
Next, create a dataclass to store TTS settings for each supported language. The current backend uses a single Rime voice (seraphina) and switches only the language code:
```python
# Default configuration constants
DEFAULT_LANGUAGE = "eng"
DEFAULT_TTS_MODEL = "arcana"
DEFAULT_VOICE = "seraphina"


@dataclass
class LanguageConfig:
    """Configuration for TTS settings per language."""

    lang: str
    model: str = DEFAULT_TTS_MODEL
```
The LanguageConfig dataclass holds the Rime language code and model name. The demo uses a single voice (seraphina) across all languages; Rime handles pronunciation for each language.
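For instance, the config for Spanish keeps the default model and only carries the language code. The snippet below is not part of main.py; it just shows how an entry resolves and how the agent later hands it to TTS:

```python
config = LanguageConfig(lang="spa")
print(config.lang, config.model)  # spa arcana  (model falls back to DEFAULT_TTS_MODEL)

# Later, the agent applies it like this (see _update_tts_for_language below):
# self.session.tts.update_options(model=f"rime/{config.model}", language=config.lang)
```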
Create the multilingual agent class
Now create the agent class that handles language detection and TTS switching:
```python
class MultilingualAgent(Agent):
    """A multilingual voice agent that detects user language and responds accordingly."""

    # TTS config per language. Keys are Rime 3-letter codes. Voice is always seraphina.
    LANGUAGE_CONFIGS = {
        "eng": LanguageConfig(lang="eng"),
        "hin": LanguageConfig(lang="hin"),
        "spa": LanguageConfig(lang="spa"),
        "ara": LanguageConfig(lang="ara"),
        "fra": LanguageConfig(lang="fra"),
        "por": LanguageConfig(lang="por"),
        "ger": LanguageConfig(lang="ger"),
        "jpn": LanguageConfig(lang="jpn"),
        "heb": LanguageConfig(lang="heb"),
        "tam": LanguageConfig(lang="tam"),
    }

    # Display names for instructions. Keys match LANGUAGE_CONFIGS.
    LANGUAGE_DISPLAY_NAMES = {
        "eng": "English",
        "hin": "Hindi",
        "spa": "Spanish",
        "ara": "Arabic",
        "fra": "French",
        "por": "Portuguese",
        "ger": "German",
        "jpn": "Japanese",
        "heb": "Hebrew",
        "tam": "Tamil",
    }

    # STT returns ISO 639-1 (e.g. "en", "es") or locale (e.g. "en-US"). Map to Rime codes.
    STT_TO_RIME = {
        "en": "eng",
        "hi": "hin",
        "es": "spa",
        "ar": "ara",
        "fr": "fra",
        "pt": "por",
        "de": "ger",
        "ja": "jpn",
        "he": "heb",
        "ta": "tam",
    }

    SUPPORTED_LANGUAGES = list(LANGUAGE_CONFIGS.keys())

    def __init__(self) -> None:
        super().__init__(instructions=self._get_instructions())
        self._current_language = DEFAULT_LANGUAGE
        self._room: rtc.Room | None = None

    def _get_instructions(self) -> str:
        """Get agent instructions in a clean, maintainable format."""
        supported_languages = ", ".join(
            self.LANGUAGE_DISPLAY_NAMES[lang] for lang in self.SUPPORTED_LANGUAGES
        )
        return (
            "You are a voice assistant powered by Rime's text-to-speech technology. "
            "You are here to showcase Rime's natural, expressive, and multilingual voice capabilities. "
            "You respond in the same language the user speaks in. "
            f"You support {supported_languages}. "
            "If the user speaks in any other language, respond in English and politely let them know: "
            f"'I only support {supported_languages}. Please speak in one of these languages.' "
            "Keep your responses concise and to the point since this is a voice conversation. "
            "Do not use emojis, asterisks, markdown, or other special characters in your responses. "
            "You are curious, friendly, and have a sense of humor."
        )
```
The LANGUAGE_CONFIGS dictionary maps Rime 3-letter language codes to TTS config. STT_TO_RIME maps the ISO codes returned by Deepgram to those Rime codes. The instructions are built from LANGUAGE_DISPLAY_NAMES so the list of supported languages stays in sync.
Override the STT node
This is the core technique for detecting language changes. Override the stt_node method to intercept speech-to-text events and check for language changes:
```python
    async def stt_node(
        self, audio: AsyncIterable[rtc.AudioFrame], model_settings: ModelSettings
    ) -> AsyncIterable[stt.SpeechEvent]:
        """
        Override STT node to detect language and update TTS configuration dynamically.

        This method intercepts speech events to detect language changes and updates
        the TTS settings to match the detected language for natural voice output.
        """
        default_stt = super().stt_node(audio, model_settings)

        async for event in default_stt:
            if self._is_transcript_event(event):
                await self._handle_language_detection(event)
            yield event

    def _is_transcript_event(self, event: stt.SpeechEvent) -> bool:
        """Check if event is a transcript event with language information."""
        return (
            event.type
            in [
                stt.SpeechEventType.INTERIM_TRANSCRIPT,
                stt.SpeechEventType.FINAL_TRANSCRIPT,
            ]
            and event.alternatives
        )

    async def _handle_language_detection(self, event: stt.SpeechEvent) -> None:
        """Update TTS from STT-detected language and sync to frontend via participant attributes."""
        detected_language = event.alternatives[0].language
        if not detected_language:
            return
        effective_language = self._update_tts_for_language(detected_language)
        if effective_language != self._current_language:
            self._current_language = effective_language
            await self._publish_language_update(effective_language)

    def _update_tts_for_language(self, language: str) -> str:
        """Update TTS configuration based on detected language.

        Returns the effective Rime language code (the one actually used for TTS).
        """
        base = language.split("-")[0].lower() if language else ""
        rime_lang = self.STT_TO_RIME.get(base, base) if base else DEFAULT_LANGUAGE
        effective_lang = rime_lang if rime_lang in self.LANGUAGE_CONFIGS else DEFAULT_LANGUAGE
        config = self.LANGUAGE_CONFIGS.get(effective_lang, self.LANGUAGE_CONFIGS[DEFAULT_LANGUAGE])
        logger.info(f"Updating TTS: detected={language} -> rime={effective_lang}")
        self.session.tts.update_options(
            model=f"rime/{config.model}",
            language=config.lang,
        )
        return effective_lang

    async def _publish_language_update(self, language_code: str) -> None:
        """Sync current language to the frontend via participant attributes (see LiveKit docs: participant attributes)."""
        if not self._room:
            return
        try:
            display_name = self.LANGUAGE_DISPLAY_NAMES.get(language_code, "English")
            await self._room.local_participant.set_attributes({"current_language": display_name})
        except Exception as e:
            logger.warning("Failed to publish language update: %s", e)
```
The stt_node method receives audio frames and yields speech events. By iterating through the default STT output and checking each event, you get the detected language from transcript events. When the language changes, _update_tts_for_language maps the STT language (e.g. en or en-US) to a Rime code, updates TTS with update_options(), and returns the effective language. _publish_language_update writes the current language to the room participant's attributes so a frontend can show it (see the full demo repo for an example UI).
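If you want to sanity-check the mapping step on its own, here is a small standalone sketch that mirrors the normalization in _update_tts_for_language (same data, no LiveKit dependencies). It is not part of main.py:

```python
STT_TO_RIME = {"en": "eng", "hi": "hin", "es": "spa", "ar": "ara", "fr": "fra",
               "pt": "por", "de": "ger", "ja": "jpn", "he": "heb", "ta": "tam"}
SUPPORTED = {"eng", "hin", "spa", "ara", "fra", "por", "ger", "jpn", "heb", "tam"}


def to_rime_code(stt_language: str, default: str = "eng") -> str:
    """Normalize an STT language tag (e.g. "en", "en-US") to a supported Rime code."""
    base = stt_language.split("-")[0].lower() if stt_language else ""
    rime = STT_TO_RIME.get(base, base) if base else default
    return rime if rime in SUPPORTED else default


assert to_rime_code("en-US") == "eng"
assert to_rime_code("es") == "spa"
assert to_rime_code("zh") == "eng"  # unsupported language falls back to English
```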
Add the greeting
Override on_enter to publish the initial language and greet the user when they connect:
```python
    async def on_enter(self) -> None:
        """Called when the agent session starts. Generate initial greeting."""
        await self._publish_language_update(self._current_language)
        self.session.generate_reply(
            instructions="Greet the user and introduce yourself as a voice assistant powered by Rime's text-to-speech technology. Ask how you can help them."
        )
```
Set up the server and entrypoint
The agent uses the AgentServer API: register a prewarm function and an RTC session entrypoint that configures the agent session:
```python
def prewarm(proc: JobProcess) -> None:
    """Preload VAD model for faster startup."""
    proc.userdata["vad"] = silero.VAD.load()


server = AgentServer()
server.setup_fnc = prewarm


@server.rtc_session(agent_name="rime-multilingual-agent")
async def entrypoint(ctx: JobContext) -> None:
    """Main entry point for the multilingual agent worker."""
    ctx.log_context_fields = {"room": ctx.room.name}

    session = AgentSession(
        vad=ctx.proc.userdata["vad"],
        stt=inference.STT(model="deepgram/nova-3-general", language="multi"),
        llm=inference.LLM(model="openai/gpt-4o"),
        tts=inference.TTS(
            model=f"rime/{DEFAULT_TTS_MODEL}", voice=DEFAULT_VOICE, language=DEFAULT_LANGUAGE
        ),
        turn_detection=MultilingualModel(),
    )

    usage_collector = metrics.UsageCollector()

    @session.on("metrics_collected")
    def _on_metrics_collected(ev: MetricsCollectedEvent) -> None:
        metrics.log_metrics(ev.metrics)
        usage_collector.collect(ev.metrics)

    async def log_usage() -> None:
        summary = usage_collector.get_summary()
        logger.info(f"Usage summary: {summary}")

    ctx.add_shutdown_callback(log_usage)

    agent = MultilingualAgent()
    agent._room = ctx.room
    await session.start(
        agent=agent,
        room=ctx.room,
        room_output_options=RoomOutputOptions(transcription_enabled=True),
    )


if __name__ == "__main__":
    cli.run_app(server)
```
Configuration notes:
- inference.STT with model="deepgram/nova-3-general" and language="multi" enables automatic language detection.
- inference.LLM and inference.TTS use provider-prefixed models (openai/gpt-4o, rime/arcana).
- MultilingualModel for turn detection works with multilingual STT for natural turn-taking.
- The agent is given a reference to the room (agent._room = ctx.room) so it can publish language updates to participant attributes.
Step 5: Download model files
Before running the agent for the first time, download the required model files for the turn detector and Silero VAD:
```bash
uv run main.py download-files
```
Step 6: Run the agent
Start by running the agent in console mode so you can test the voice pipeline locally with your microphone and speakers:
```bash
uv run main.py console
```
Want a visual interface? Run the agent in dev mode (uv run main.py dev), then use the LiveKit Agents Playground. Open agents-playground.livekit.io, sign in with your LiveKit Cloud project, and create or join a room. Your agent will attach when dispatched (e.g. via LiveKit Cloud agent configuration). Use the playground's microphone and speaker to have a voice conversation and confirm language switching.
Development mode
Connect to LiveKit Cloud for internet-accessible testing:
```bash
uv run main.py dev
```
Production mode
Run in production:
```bash
uv run main.py start
```
How it works
The language detection flow works like this:
- User speaks in any supported language.
- Deepgram STT (with language="multi") transcribes the speech and detects the language.
- The overridden stt_node intercepts the speech event and reads the detected language.
- If the language changed, _update_tts_for_language maps the STT code to a Rime code and updates TTS via update_options().
- Optionally, _publish_language_update writes the current language to the participant's attributes for the frontend.
- The LLM receives the transcript and generates a response in context.
- Rime TTS synthesizes the response using the updated language setting.
The instructions tell the LLM to respond in the same language as the user; the TTS update makes the spoken output use the correct Rime language.
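When a switch happens, the logger.info call in _update_tts_for_language records the transition. For example, if a user switches from English to Spanish and Deepgram reports "es", you should see a log message along these lines (the exact detected code and the surrounding log prefix depend on what Deepgram returns and on your logging setup):

```
Updating TTS: detected=es -> rime=spa
```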
Summary
This tutorial covered how to build a multilingual voice agent that automatically detects and responds in the user's language. The key techniques include:
- Overriding the stt_node to intercept speech events and detect language changes
- Mapping STT language codes to Rime (or your TTS provider) and using update_options() to change TTS settings mid-conversation
- Configuring Deepgram STT with multilingual mode for automatic language detection
- Using the MultilingualModel turn detector for natural conversation flow
- Optionally syncing the current language to a frontend via participant attributes
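Because the instructions and SUPPORTED_LANGUAGES are derived from the class-level dictionaries, adding a language is mostly a data change. As a hypothetical example, supporting Italian would mean adding one entry to each mapping in MultilingualAgent; the "ita" and "it" codes below are assumptions, so verify them against Rime's and Deepgram's documented language codes before relying on them:

```python
# Hypothetical additions inside MultilingualAgent's class body.
# Confirm "ita" (Rime) and "it" (STT) against the providers' documentation.
LANGUAGE_CONFIGS = {
    # ...existing entries...
    "ita": LanguageConfig(lang="ita"),
}
LANGUAGE_DISPLAY_NAMES = {
    # ...existing entries...
    "ita": "Italian",
}
STT_TO_RIME = {
    # ...existing entries...
    "it": "ita",
}
```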
For more information, check out:
- Pipeline nodes and hooks for customizing agent behavior
- Deepgram STT plugin for STT configuration options
- Rime TTS plugin for TTS voice and language options
- LiveKit turn detector for multilingual turn detection
- Full source code (backend + Next.js frontend) for the complete demo
Complete code
Here is the complete main.py file.
```python
import logging
from typing import AsyncIterable
from dataclasses import dataclass

from dotenv import load_dotenv
from livekit.agents import (
    Agent,
    AgentServer,
    AgentSession,
    JobContext,
    JobProcess,
    MetricsCollectedEvent,
    ModelSettings,
    RoomOutputOptions,
    cli,
    metrics,
    stt,
    inference,
)
from livekit.plugins import silero
from livekit.plugins.turn_detector.multilingual import MultilingualModel
from livekit import rtc

logger = logging.getLogger("multilingual-agent")

load_dotenv()

# Default configuration constants
DEFAULT_LANGUAGE = "eng"
DEFAULT_TTS_MODEL = "arcana"
DEFAULT_VOICE = "seraphina"


@dataclass
class LanguageConfig:
    """Configuration for TTS settings per language."""

    lang: str
    model: str = DEFAULT_TTS_MODEL


class MultilingualAgent(Agent):
    """A multilingual voice agent that detects user language and responds accordingly."""

    # TTS config per language. Keys are Rime 3-letter codes. Voice is always seraphina.
    LANGUAGE_CONFIGS = {
        "eng": LanguageConfig(lang="eng"),
        "hin": LanguageConfig(lang="hin"),
        "spa": LanguageConfig(lang="spa"),
        "ara": LanguageConfig(lang="ara"),
        "fra": LanguageConfig(lang="fra"),
        "por": LanguageConfig(lang="por"),
        "ger": LanguageConfig(lang="ger"),
        "jpn": LanguageConfig(lang="jpn"),
        "heb": LanguageConfig(lang="heb"),
        "tam": LanguageConfig(lang="tam"),
    }

    LANGUAGE_DISPLAY_NAMES = {
        "eng": "English",
        "hin": "Hindi",
        "spa": "Spanish",
        "ara": "Arabic",
        "fra": "French",
        "por": "Portuguese",
        "ger": "German",
        "jpn": "Japanese",
        "heb": "Hebrew",
        "tam": "Tamil",
    }

    STT_TO_RIME = {
        "en": "eng",
        "hi": "hin",
        "es": "spa",
        "ar": "ara",
        "fr": "fra",
        "pt": "por",
        "de": "ger",
        "ja": "jpn",
        "he": "heb",
        "ta": "tam",
    }

    SUPPORTED_LANGUAGES = list(LANGUAGE_CONFIGS.keys())

    def __init__(self) -> None:
        super().__init__(instructions=self._get_instructions())
        self._current_language = DEFAULT_LANGUAGE
        self._room: rtc.Room | None = None

    def _get_instructions(self) -> str:
        """Get agent instructions in a clean, maintainable format."""
        supported_languages = ", ".join(
            self.LANGUAGE_DISPLAY_NAMES[lang] for lang in self.SUPPORTED_LANGUAGES
        )
        return (
            "You are a voice assistant powered by Rime's text-to-speech technology. "
            "You are here to showcase Rime's natural, expressive, and multilingual voice capabilities. "
            "You respond in the same language the user speaks in. "
            f"You support {supported_languages}. "
            "If the user speaks in any other language, respond in English and politely let them know: "
            f"'I only support {supported_languages}. Please speak in one of these languages.' "
            "Keep your responses concise and to the point since this is a voice conversation. "
            "Do not use emojis, asterisks, markdown, or other special characters in your responses. "
            "You are curious, friendly, and have a sense of humor."
        )

    async def stt_node(
        self, audio: AsyncIterable[rtc.AudioFrame], model_settings: ModelSettings
    ) -> AsyncIterable[stt.SpeechEvent]:
        """
        Override STT node to detect language and update TTS configuration dynamically.

        This method intercepts speech events to detect language changes and updates
        the TTS settings to match the detected language for natural voice output.
        """
        default_stt = super().stt_node(audio, model_settings)

        async for event in default_stt:
            if self._is_transcript_event(event):
                await self._handle_language_detection(event)
            yield event

    def _is_transcript_event(self, event: stt.SpeechEvent) -> bool:
        """Check if event is a transcript event with language information."""
        return (
            event.type
            in [
                stt.SpeechEventType.INTERIM_TRANSCRIPT,
                stt.SpeechEventType.FINAL_TRANSCRIPT,
            ]
            and event.alternatives
        )

    async def _handle_language_detection(self, event: stt.SpeechEvent) -> None:
        """Update TTS from STT-detected language and sync to frontend via participant attributes."""
        detected_language = event.alternatives[0].language
        if not detected_language:
            return
        effective_language = self._update_tts_for_language(detected_language)
        if effective_language != self._current_language:
            self._current_language = effective_language
            await self._publish_language_update(effective_language)

    def _update_tts_for_language(self, language: str) -> str:
        """Update TTS configuration based on detected language.

        Returns the effective Rime language code (the one actually used for TTS).
        """
        base = language.split("-")[0].lower() if language else ""
        rime_lang = self.STT_TO_RIME.get(base, base) if base else DEFAULT_LANGUAGE
        effective_lang = rime_lang if rime_lang in self.LANGUAGE_CONFIGS else DEFAULT_LANGUAGE
        config = self.LANGUAGE_CONFIGS.get(effective_lang, self.LANGUAGE_CONFIGS[DEFAULT_LANGUAGE])
        logger.info(f"Updating TTS: detected={language} -> rime={effective_lang}")
        self.session.tts.update_options(
            model=f"rime/{config.model}",
            language=config.lang,
        )
        return effective_lang

    async def _publish_language_update(self, language_code: str) -> None:
        """Sync current language to the frontend via participant attributes (see LiveKit docs: participant attributes)."""
        if not self._room:
            return
        try:
            display_name = self.LANGUAGE_DISPLAY_NAMES.get(language_code, "English")
            await self._room.local_participant.set_attributes({"current_language": display_name})
        except Exception as e:
            logger.warning("Failed to publish language update: %s", e)

    async def on_enter(self) -> None:
        """Called when the agent session starts. Generate initial greeting."""
        await self._publish_language_update(self._current_language)
        self.session.generate_reply(
            instructions="Greet the user and introduce yourself as a voice assistant powered by Rime's text-to-speech technology. Ask how you can help them."
        )


def prewarm(proc: JobProcess) -> None:
    """Preload VAD model for faster startup."""
    proc.userdata["vad"] = silero.VAD.load()


server = AgentServer()
server.setup_fnc = prewarm


@server.rtc_session(agent_name="rime-multilingual-agent")
async def entrypoint(ctx: JobContext) -> None:
    """Main entry point for the multilingual agent worker."""
    ctx.log_context_fields = {"room": ctx.room.name}

    session = AgentSession(
        vad=ctx.proc.userdata["vad"],
        stt=inference.STT(model="deepgram/nova-3-general", language="multi"),
        llm=inference.LLM(model="openai/gpt-4o"),
        tts=inference.TTS(
            model=f"rime/{DEFAULT_TTS_MODEL}", voice=DEFAULT_VOICE, language=DEFAULT_LANGUAGE
        ),
        turn_detection=MultilingualModel(),
    )

    usage_collector = metrics.UsageCollector()

    @session.on("metrics_collected")
    def _on_metrics_collected(ev: MetricsCollectedEvent) -> None:
        metrics.log_metrics(ev.metrics)
        usage_collector.collect(ev.metrics)

    async def log_usage() -> None:
        """Log usage summary on shutdown."""
        summary = usage_collector.get_summary()
        logger.info(f"Usage summary: {summary}")

    ctx.add_shutdown_callback(log_usage)

    agent = MultilingualAgent()
    agent._room = ctx.room
    await session.start(
        agent=agent,
        room=ctx.room,
        room_output_options=RoomOutputOptions(transcription_enabled=True),
    )


if __name__ == "__main__":
    cli.run_app(server)
```