Personality and Fallbacks
Designing resilient assistant behavior
Now that turn-taking feels natural, we can make the agent sound on-brand. We will:
- Give the assistant a distinct personality
- Change the default voice
- Configure fallback adapters so the agent survives provider outages
Voice–prompt synergy
Voices aren’t neutral speakers. How the LLM writes affects how the TTS sounds:
- Match phrasing to voice: shorter clauses and fewer parentheticals often sound more natural.
- Dialect awareness: tailor spelling and idioms (e.g., UK vs US) for authenticity.
- Emotion control: add subtle stage directions in system messages. Too many will create a caricature.
- Keep replies brief (≤3 sentences) to reduce latency and prevent the agent from monologuing. A prompt sketch applying these points follows this list.
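To make these points concrete, here is a minimal sketch of an instructions string that applies them. The wording is only an example, not part of the tutorial code; adapt the dialect, tone, and length cap to your own agent.
# Example only: an instructions string applying the guidance above.
# Short clauses, one dialect, one subtle stage direction, and a hard length cap.
INSTRUCTIONS = (
    "You are a friendly UK-English support agent. "
    "Speak in short, plain sentences with no parentheticals. "
    "Sound warm but understated; never exaggerate your enthusiasm. "
    "Keep every reply to three sentences or fewer."
)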
Crafting a system persona
Your system prompt sets expectations for tone and boundaries. Update the Assistant class:
class Assistant(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions=(
                "You are an upbeat, slightly sarcastic voice AI for tech support. "
                "Help the caller fix issues without rambling, and keep replies under 3 sentences."
            ),
        )
You don't need to use these exact instructions. Take some time to write a set of instructions that makes sense for your own use case.
Trying different voices
Next, let's modify the TTS configuration to match our new personality:
session = AgentSession(
    # ...
    tts="cartesia/sonic-2:a167e0f3-df7e-4d52-a9c3-f949145efdab",
    # ...
)
If you want to try other voice styles, you can find a list of recommended voices in the LiveKit Inference docs.
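If you plan to audition several voices, one convenience (an assumption on my part, not something the tutorial requires) is to read the voice ID from an environment variable so you can switch without editing code. AGENT_TTS_VOICE is a hypothetical variable name:
import os

# Hypothetical convenience: pick the TTS voice from an environment variable,
# falling back to the Cartesia voice used above.
TTS_VOICE = os.environ.get(
    "AGENT_TTS_VOICE",
    "cartesia/sonic-2:a167e0f3-df7e-4d52-a9c3-f949145efdab",
)

session = AgentSession(
    # ...
    tts=TTS_VOICE,
    # ...
)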
Adding fallback adapters
Every provider has hiccups. When a model provider has an outage, we want your agent to keep working, so production agents need secondary providers.
LiveKit’s fallback adapters automatically retry each component in priority order. Let's add them now.
- Import the adapters near the top of the file:
from livekit.agents import llm, stt, tts
- Extract the VAD so we can reuse it:
vad = silero.VAD.load()

session = AgentSession(
    vad=vad,
    # ...
)
- Wrap each pipeline component:
session = AgentSession(
    llm=llm.FallbackAdapter(
        [
            "openai/gpt-4.1-mini",
            "google/gemini-2.5-flash",
        ]
    ),
    stt=stt.FallbackAdapter(
        [
            "deepgram/nova-3",
            "assemblyai/universal-streaming",
        ]
    ),
    tts=tts.FallbackAdapter(
        [
            "cartesia/sonic-2:a167e0f3-df7e-4d52-a9c3-f949145efdab",
            "inworld/inworld-tts-1",
        ]
    ),
    vad=vad,
    turn_detection=MultilingualModel(),
)
The adapters degrade gracefully: if the first provider fails, the session retries the next one without dropping the call.
Production agents should always have fallbacks.
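If it helps to picture what "retry in priority order" means, here is a conceptual sketch of the pattern. This is not LiveKit's actual implementation, just the general shape of a fallback adapter:
# Conceptual sketch only, not LiveKit's implementation: try each provider in
# priority order and return the first successful result.
def call_with_fallback(providers, request):
    last_error = None
    for provider in providers:
        try:
            return provider(request)
        except Exception as err:  # a real adapter narrows this to provider errors
            last_error = err
    raise RuntimeError("all providers failed") from last_error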
Exercise: chaos resilience drills
Run uv run agent.py console and talk to your agent again. While interacting with the agent:
- Ask the agent to reintroduce itself multiple times—does it stay on persona?
- Interrupt the agent mid-sentence and confirm the tone still matches the prompt.
With persona and fallbacks configured, the agent now feels distinct and reliable. Next, we’ll capture metrics so you can measure these improvements.