Quickstart
Overview
In this guide, you’ll initialize a voice agent, interact with it locally, and then deploy it in production.
Requirements
- Python 3.11+
- Jay API Key
- Ngrok
- Docker
- AI Provider API Keys:
  - Speech-to-Text (STT), e.g. Deepgram
  - Large Language Model (LLM), e.g. OpenAI
  - Text-to-Speech (TTS), e.g. ElevenLabs
1: Select Your Package Manager
2: Select Your LLM Provider
The steps below follow the pip and OpenAI choices; a note for Poetry users appears after step 10.
3: Set up a virtual environment
Skip this step if you already have a virtual environment.
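On macOS or Linux, a standard setup looks like this (a sketch; adjust for your shell and Python installation):

```shell
# Create a project-local virtual environment and activate it.
python3 -m venv .venv
source .venv/bin/activate
# On Windows (PowerShell), activate with: .venv\Scripts\Activate.ps1
```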
4: Install Jay
5: Install OpenAI
Skip this step if it’s already installed.
6: Initialize Your Project
What does this command do?
This command will:
- Prompt you to enter a few values:
  - Jay API Key
  - Jay Developer ID
  - AI provider API keys (STT, LLM, TTS)
- Create a file containing your agent (agent/main.py).
- Either create a .env file or append new values to your existing one.
- Request your permission to store your AI provider API keys as encrypted environment variables in our database, which is necessary to run the agent.
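For reference, the resulting .env will contain entries along these lines (the variable names shown here are assumptions for illustration; the init command writes the actual names):

```
# Hypothetical variable names for illustration only.
JAY_API_KEY=...
JAY_DEVELOPER_ID=...
DEEPGRAM_API_KEY=...
OPENAI_API_KEY=...
ELEVENLABS_API_KEY=...
```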
7: Run Locally
This command will launch a local playground where you can interact with your agent as you develop it:
You can close the process when you’re done interacting with your agent locally.
8: Deploy to Production
First, generate a requirements.txt that contains your latest dependencies:
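If you are using pip, one common way to produce this file is pip freeze (a sketch; run it inside the project's virtual environment so only your agent's dependencies are captured):

```shell
# Write the current environment's packages, with pinned versions,
# into requirements.txt.
python3 -m pip freeze > requirements.txt
```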
Then, deploy to production:
This step will take a few minutes to complete.
9: Connect to the Deployed Agent
10 (optional): Modify Your Agent
You can change your LLM’s responses by opening agent/main.py, then navigating to the llm_response_handler function.
You can update your LLM to respond with a joke about the moon by making the following modification:
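For illustration, the change might look like the following sketch. The handler signature here is an assumption; the actual one is whatever the init command generated in agent/main.py. The idea is to prepend a system instruction that steers every reply toward a moon joke:

```python
# Sketch only: adapt to the handler that was generated in agent/main.py.
def llm_response_handler(messages: list[dict]) -> list[dict]:
    """Prepend a system message before the conversation is sent to the LLM."""
    system_message = {
        "role": "system",
        "content": "Respond to every request with a short joke about the moon.",
    }
    return [system_message, *messages]
```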
Test this change locally by running:
Then, you can redeploy in production by running:
And finally, you can connect to the deployed agent by running:
Note for Poetry users
If you selected Poetry as your package manager, step 3 is instead: Install the Poetry Export Plugin (typically installed with poetry self add poetry-plugin-export). You’ll need this plugin later when deploying the agent, since it lets Poetry generate the requirements.txt file from your lock file. The remaining steps are the same.
Next Steps
Next, we recommend learning about the core components of the agent in the Agent Overview.