Quickstart
Overview
In this guide, you’ll initialize a voice agent, interact with it locally, and then deploy it to production.
Requirements
- Python 3.11+
- Jay API Key
- Ngrok
- Docker
- AI Provider API Keys:
  - Speech-to-Text (STT), e.g. Deepgram
  - Large Language Model (LLM), e.g. OpenAI
  - Text-to-Speech (TTS), e.g. ElevenLabs
1: Select Your Package Manager
2: Select Your LLM Provider
3: Set up a virtual environment
Skip this step if you already have a virtual environment.
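For example, with Python’s built-in `venv` module:

```bash
python3 -m venv .venv
source .venv/bin/activate  # on Windows: .venv\Scripts\activate
```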
4: Install Jay
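Assuming pip from step 1 and `jay` as the published package name (an assumption; use the exact name from the Jay docs):

```bash
pip install jay  # package name assumed; substitute the official distribution name
```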
5: Install OpenAI
Skip this step if it’s already installed.
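If you selected OpenAI in step 2, the official Python SDK installs with pip:

```bash
pip install openai
```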
6: Initialize Your Project
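A sketch, assuming the installed CLI is named `jay` and provides an `init` subcommand (not a confirmed command name):

```bash
# "init" is an assumed subcommand name; it would scaffold the agent/ project files
jay init
```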
7: Run Locally
The following command launches a local playground where you can interact with your agent as you develop it:
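A sketch, assuming the CLI is named `jay` and exposes a local development subcommand (the actual name may differ):

```bash
# "dev" is an assumed subcommand name; run `jay --help` to see the real one
jay dev
```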
You can close the process when you’re done interacting with your agent locally.
8: Deploy to Production
First, generate a `requirements.txt` that contains your latest dependencies:
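With pip, the standard way is:

```bash
pip freeze > requirements.txt
```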
Then, deploy to production:
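A sketch, again assuming the CLI is named `jay` and that deployment is a single subcommand:

```bash
# "deploy" is an assumed subcommand name
jay deploy
```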
This step will take a few minutes to complete.
9: Connect to the Deployed Agent
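A sketch, assuming a subcommand that opens a session against the deployed agent (the actual name may differ):

```bash
# "connect" is an assumed subcommand name
jay connect
```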
10 (optional): Modify Your Agent
You can change your LLM’s responses by opening `agent/main.py`, then navigating to the `llm_response_handler` function.
You can update your LLM to respond with a joke about the moon by making the following modification:
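A sketch of that modification, assuming the handler receives the conversation history as `request.messages` and calls OpenAI’s chat completions API; the handler signature, model name, and message format in your generated `agent/main.py` may differ:

```python
from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

async def llm_response_handler(request):
    # Hypothetical handler shape; match the parameters in your generated agent/main.py.
    # Prepend a system message so every reply is a joke about the moon.
    messages = [
        {"role": "system", "content": "Respond to the user with a joke about the moon."},
        *request.messages,  # assumed: the incoming conversation history
    ]
    response = await client.chat.completions.create(
        model="gpt-4o-mini",  # model name assumed; use whichever model you configured
        messages=messages,
    )
    return response.choices[0].message.content
```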
Test this change locally by running:
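Using the same assumed command from step 7:

```bash
jay dev  # assumed subcommand; see step 7
```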
Then, you can redeploy to production by running:
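As in step 8 (deploy subcommand name assumed):

```bash
pip freeze > requirements.txt
jay deploy  # assumed subcommand; see step 8
```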
And finally, you can connect to the deployed agent by running:
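As in step 9 (subcommand name assumed):

```bash
jay connect  # assumed subcommand; see step 9
```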
Next Steps
Next, we recommend learning about the core components of the agent in the Agent Overview.