MCP Server

How LuvVoice works with MCP

MCP gives your AI client a clean tool surface for speech generation. The client stays focused on orchestration, the LuvVoice server exposes a small set of voice tools, and the result comes back as usable audio.

Client receives intent

Claude, Cursor, VS Code, and other MCP-aware clients decide when speech generation should be invoked.

Server exposes tools

LuvVoice MCP keeps the surface intentionally small: synthesize audio or browse voices.

Audio returns cleanly

The response comes back as audio output your client can present immediately.

MCP keeps the integration structured

Compared with a skill, MCP gives the client a more explicit contract. Instead of interpreting instructions and then making API calls itself, the client talks to a server that exposes named tools and resources.

1. Client identifies the task

The user asks for speech output and the MCP-aware client decides a tool call is the best path.

2. Server handles voice logic

LuvVoice MCP exposes the voice tools and returns either audio data or an audio URL.

3. User gets the result

The client hands the response back in a form the user can listen to immediately.
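Under the hood, steps 1 and 2 meet at MCP's JSON-RPC wire format: the client calls a named tool with structured arguments. A minimal sketch of the tools/call message a client emits — you never write this by hand, and the argument values here are illustrative, but it makes the contract concrete:

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# What the client sends over stdio (or HTTP) when the user asks for speech:
msg = make_tool_call(1, "text_to_speech", {
    "text": "Hello from MCP",
    "voice_id": "voice-001",
})
```

This explicit request/response shape is exactly what makes MCP more structured than a skill: the tool name and argument schema are part of the server's advertised contract, not something the client infers from prose.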

You can get a local setup working in minutes

The shortest path is still the best one for most people: create a token, paste a stdio config block, then ask your assistant to speak.

Step 1

Get your API token

Subscribe to a Pro or Enterprise plan, then create a token from Dashboard → API Tokens.

Step 2

Add the stdio config

Point your client at npx -y luvvoice-mcp and pass the token through LUVVOICE_API_TOKEN.

Step 3

Ask naturally

Request a voice; the client resolves the right tool call and returns the audio in the same conversation.

JSON - Claude Desktop / Cursor / VS Code
{
  "mcpServers": {
    "luvvoice": {
      "command": "npx",
      "args": ["-y", "luvvoice-mcp"],
      "env": {
        "LUVVOICE_API_TOKEN": "YOUR_API_TOKEN"
      }
    }
  }
}
Example conversation
User:  "Read this paragraph aloud in a friendly female voice"

AI:    1. Calls list_voices -> finds Jenny (voice-001, English Female)
       2. Calls text_to_speech with voice_id="voice-001"
       3. Returns audio URL -> user clicks to listen

The tool surface stays intentionally small

Most voice workflows only need one synthesis tool and one discovery tool. The rest of the complexity should stay out of the user's way.

Tool

text_to_speech

Convert text into natural-sounding speech with configurable voice, rate, pitch, and volume.

text (required)
voice_id (required)
rate (-50 to 50)
pitch (-50 to 50)
volume (-50 to 50)
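The parameter ranges above are easy to sanity-check before a call. A small illustrative helper — clamp and build_tts_args are names invented here, not part of the server — that assembles text_to_speech arguments and keeps rate, pitch, and volume inside the documented -50 to 50 range:

```python
def clamp(value: int, lo: int = -50, hi: int = 50) -> int:
    """Keep rate/pitch/volume inside the documented -50..50 range."""
    return max(lo, min(hi, value))

def build_tts_args(text: str, voice_id: str,
                   rate: int = 0, pitch: int = 0, volume: int = 0) -> dict:
    """Assemble text_to_speech arguments; text and voice_id are required."""
    if not text or not voice_id:
        raise ValueError("text and voice_id are required")
    return {
        "text": text,
        "voice_id": voice_id,
        "rate": clamp(rate),
        "pitch": clamp(pitch),
        "volume": clamp(volume),
    }
```

In practice the client fills these arguments for you; the sketch only shows which values are required and how the bounded ones behave.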
Tool

list_voices

Browse the full catalog and filter by language code or gender before synthesis.

language (optional)
gender (optional)
200+ available voices
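The optional filters behave like a simple predicate over the catalog. A sketch using a made-up three-voice slice — only Jenny (voice-001) appears in the example conversation above; the other entries are placeholders for illustration:

```python
# A tiny, hypothetical slice of the catalog -- real metadata comes
# from the list_voices tool (or the luvvoice://voices resource).
VOICES = [
    {"id": "voice-001", "name": "Jenny", "language": "en-US", "gender": "Female"},
    {"id": "voice-002", "name": "Guy", "language": "en-US", "gender": "Male"},
    {"id": "voice-003", "name": "Xiaoxiao", "language": "zh-CN", "gender": "Female"},
]

def list_voices(language=None, gender=None):
    """Filter voices the way the optional language/gender params do."""
    return [
        v for v in VOICES
        if (language is None or v["language"].startswith(language))
        and (gender is None or v["gender"] == gender)
    ]
```

An agent typically calls this once with a language or gender filter, picks an id, and passes it straight into text_to_speech.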

Resources and compatible clients

Alongside callable tools, the server can expose reference material that helps an agent inspect the service contract before using it.

stdio and HTTP supported

luvvoice://api-docs

Markdown API notes an agent can inspect before it synthesizes speech.

luvvoice://voices

Voice metadata in JSON, including names, genders, languages, and IDs.
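An agent that reads luvvoice://voices gets back an MCP resources/read result whose contents entries carry the JSON catalog as text. A hedged sketch of unpacking such a result — the payload below is a one-voice stand-in, not the real catalog:

```python
import json

# Hypothetical resources/read result for luvvoice://voices; the shape
# follows MCP's {"contents": [{uri, mimeType, text}]} convention.
result = {
    "contents": [{
        "uri": "luvvoice://voices",
        "mimeType": "application/json",
        "text": '[{"id": "voice-001", "name": "Jenny", '
                '"language": "en-US", "gender": "Female"}]',
    }]
}

voices = json.loads(result["contents"][0]["text"])
female_ids = [v["id"] for v in voices if v["gender"] == "Female"]
```

Reading the resource first lets an agent pick a voice id without burning a tool call on discovery.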

Claude Desktop
Cursor
VS Code / Copilot
Windsurf
Factory Droid
Cline
OpenAI Agents
Any MCP Client

Choose transport based on where the server should live

The main architectural decision is whether the MCP server runs locally beside the client or remotely behind an endpoint.

Local stdio

Best for single-user desktop setups and the default choice for Claude Desktop, Cursor, VS Code, Windsurf, and similar local clients.

npx -y luvvoice-mcp

Streamable HTTP

Best when the MCP server should live behind a remote endpoint that multiple clients or environments can reach.

LUVVOICE_API_TOKEN=your_token npx luvvoice-mcp --http
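With streamable HTTP, the client POSTs JSON-RPC messages to the endpoint and accepts either a JSON body or an SSE stream in reply. A sketch of preparing such a request with Python's standard library — shown with the standard tools/list method; nothing is sent over the network here:

```python
import json
import urllib.request

def build_mcp_http_request(url: str, payload: dict) -> urllib.request.Request:
    """Prepare a POST for an MCP streamable-HTTP endpoint.

    Clients send JSON-RPC in the body and advertise that they accept
    either a plain JSON response or a text/event-stream reply.
    """
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
        },
        method="POST",
    )

req = build_mcp_http_request(
    "https://luvvoice.com/mcp",
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
)
```

Your MCP client does all of this for you once the streamable-http config below is in place; the sketch only shows why the remote transport is a plain HTTP endpoint rather than a local process.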
JSON - stdio client config
{
  "mcpServers": {
    "luvvoice": {
      "command": "npx",
      "args": ["-y", "luvvoice-mcp"],
      "env": {
        "LUVVOICE_API_TOKEN": "YOUR_API_TOKEN"
      }
    }
  }
}
JSON - streamable-http client config
{
  "mcpServers": {
    "luvvoice": {
      "type": "streamable-http",
      "url": "https://luvvoice.com/mcp"
    }
  }
}

Ready to wire voice output into your client?

If you already have an MCP-capable assistant, the next move is straightforward: create a token, paste the config, and let the client call the server for you.

Start with local stdio, then scale out only if needed

That keeps the first integration simple while leaving room to move into remote HTTP once your deployment model actually calls for it.