Overview

In this guide, you’ll initialize a voice agent, interact with it locally, and then deploy it to production.

Requirements

1: Select Your Package Manager

  • Pip
  • Poetry

2: Select Your LLM Provider

  • OpenAI (or OpenAI Compatible)

3: Set up a virtual environment

Skip this step if you already have a virtual environment.
python -m venv .venv && source .venv/bin/activate
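If you’re on Windows, the equivalent in a command prompt is:
python -m venv .venv && .venv\Scripts\activate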

4: Install Jay

pip install jay_ai

5: Install OpenAI

Skip this step if it’s already installed.
pip install "openai>=1.0,<2.0"

6: Initialize Your Project

jay init
This command will:
  1. Prompt you to enter a few values:
    • Jay API Key
    • Jay Developer ID
    • AI provider API keys (STT, LLM, and TTS)
  2. Create a file containing your agent (agent/main.py).
  3. Either create a .env file or append new values to your existing one.
  4. Request your permission to store your AI provider API keys as encrypted environment variables in our database, which is necessary to run the agent.
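For reference, the resulting .env holds the values you entered. A minimal sketch might look like the following, where OPENAI_API_KEY is the variable read in the llm_response_handler shown later in this guide and the other names are illustrative placeholders:
JAY_API_KEY=<your Jay API key>
JAY_DEVELOPER_ID=<your Jay developer ID>
OPENAI_API_KEY=<your OpenAI API key>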

7: Run Locally

This command will launch a local playground where you can interact with your agent as you develop it:
jay run --agent agent/main.py --connect
You can close the process when you’re done interacting with your agent locally.

8: Deploy to Production

First, generate a requirements.txt that contains your latest dependencies:
pip freeze > requirements.txt
Then, deploy to production:
jay deploy --requirement requirements.txt --agent agent/main.py
This step will take a few minutes to complete.
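Because pip freeze pins the exact versions installed in your virtual environment, the generated requirements.txt will contain entries along these lines (versions shown as placeholders):
jay_ai==<pinned version>
openai==<pinned version>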

9: Connect to the Deployed Agent

jay connect --agent agent/main.py

10 (optional): Modify Your Agent

You can change your LLM’s responses by opening agent/main.py and navigating to the llm_response_handler function. For example, you can make your LLM respond with a joke about the moon by applying the following modification:
async def llm_response_handler(input: LLMResponseHandlerInput):
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    # Append an extra user message so every response includes a joke about the moon
    messages = input["messages"] + [{"role": "user", "content": "Tell me a joke about the moon."}]
    # Request a streaming chat completion from OpenAI
    completion = await client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        stream=True,
    )
    return completion
Test this change locally by running:
jay run --agent agent/main.py --connect
Then, you can redeploy in production by running:
jay deploy --requirement requirements.txt --agent agent/main.py
And finally, you can connect to the deployed agent by running:
jay connect --agent agent/main.py
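If you want to keep experimenting, the same edit, test, and deploy loop applies to any change you make in agent/main.py. As a rough sketch (assuming the same imports and handler signature that jay init generates), you could steer the agent with a system message instead of appending a user turn, then repeat the run, deploy, and connect commands above:
async def llm_response_handler(input: LLMResponseHandlerInput):
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    # Prepend a system message so every response follows the same persona
    messages = [{"role": "system", "content": "You are a cheerful assistant who loves astronomy."}] + input["messages"]
    # Request a streaming chat completion, as in the example above
    completion = await client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        stream=True,
    )
    return completion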

Next Steps

Next, we recommend learning about the core components of the agent in the Agent Overview.