Get started quickly with these practical examples for each API capability.

Text Generation

Use chat completions for conversational AI, content generation, and analysis.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.applerouter.ai/v1"
)

# Basic chat
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
)
print(response.choices[0].message.content)
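The same endpoint can also stream tokens as they arrive, which is useful for conversational UIs. A minimal sketch wrapping it in a helper (`stream_chat` is an illustrative name, not part of the SDK):

```python
def stream_chat(client, prompt, model="gpt-4o"):
    """Print a chat reply token-by-token and return the full text."""
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,  # yields incremental chunks instead of one response
    )
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # some chunks (e.g. role headers) carry no text
            print(delta, end="", flush=True)
            parts.append(delta)
    print()
    return "".join(parts)

# Usage with the client configured above:
# stream_chat(client, "Explain quantum computing in simple terms")
```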

Chat API Reference

View full documentation

Image Generation

Generate images from text descriptions using DALL-E, Gemini Imagen, or other models.
response = client.images.generate(
    model="dall-e-3",
    prompt="A futuristic city with flying cars at sunset",
    size="1024x1024",
    quality="hd"
)
print(response.data[0].url)
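Generated image URLs are typically short-lived signed links, so download the file promptly. A small helper using only the standard library (`save_image` is an illustrative name, not part of the SDK):

```python
from urllib.request import urlretrieve

def save_image(url, path):
    """Download a generated image to disk before its URL expires."""
    urlretrieve(url, path)
    return path

# Usage with the response above:
# save_image(response.data[0].url, "futuristic_city.png")
```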

Image API Reference

View full documentation

Video Generation

Create videos from text or images using Sora, Kling, or Veo.
Python
# Create video task by calling the REST endpoint directly (the SDK
# client's low-level post helper requires extra arguments, so plain
# requests is simpler here)
import requests

headers = {"Authorization": "Bearer YOUR_API_KEY"}

response = requests.post(
    "https://api.applerouter.ai/v1/video/generations",
    headers=headers,
    json={
        "model": "sora",
        "prompt": "A cat playing piano in a jazz bar"
    },
)
task_id = response.json()["id"]

# Check status
status = requests.get(
    f"https://api.applerouter.ai/v1/video/generations/{task_id}",
    headers=headers,
).json()
print(status["status"])  # "processing" or "completed"
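Rather than checking once, production code usually polls until the task leaves the processing state. A sketch with the fetch call injected so it works with any HTTP client (`wait_for_video` is an illustrative helper; the status values are assumptions based on the example above):

```python
import time

def wait_for_video(fetch_status, task_id, interval=5.0, timeout=600.0):
    """Poll a video generation task until it finishes or the timeout elapses.

    fetch_status is any callable returning the task's JSON dict for an id,
    e.g. a lambda wrapping a GET on /v1/video/generations/{task_id}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = fetch_status(task_id)
        if task["status"] != "processing":
            return task  # "completed" or an error state
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} still processing after {timeout}s")
```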

Video API Reference

View full documentation

Audio: Text-to-Speech

Convert text to natural-sounding speech.
# Newer SDK versions prefer the streaming-response form; calling
# stream_to_file on a plain response still works but is deprecated
with client.audio.speech.with_streaming_response.create(
    model="tts-1-hd",
    voice="alloy",
    input="Hello! Welcome to AppleRouter."
) as response:
    response.stream_to_file("output.mp3")

TTS API Reference

View full documentation

Audio: Speech-to-Text

Transcribe audio files to text.
# Use a context manager so the file handle is closed after upload
with open("speech.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file
    )
print(transcript.text)

STT API Reference

View full documentation

Embeddings

Convert text to vector representations for semantic search and RAG.
response = client.embeddings.create(
    model="text-embedding-3-large",
    input="AppleRouter is an AI API gateway"
)
embedding = response.data[0].embedding
print(f"Vector dimension: {len(embedding)}")
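For semantic search, embeddings are compared with cosine similarity. A self-contained sketch using only the standard library, with toy three-dimensional vectors standing in for real embedding output:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Rank candidate documents against a query by embedding similarity
query_vec = [0.1, 0.9, 0.2]  # stand-in for an embeddings.create result
doc_vecs = {
    "doc_a": [0.1, 0.8, 0.3],
    "doc_b": [0.9, 0.1, 0.0],
}
ranked = sorted(
    doc_vecs,
    key=lambda d: cosine_similarity(query_vec, doc_vecs[d]),
    reverse=True,
)
print(ranked[0])  # doc_a is closest to the query
```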

Embeddings API Reference

View full documentation

Realtime Voice

Build real-time voice conversations using WebSocket.
Node.js
import WebSocket from "ws";

const ws = new WebSocket(
  "wss://api.applerouter.ai/v1/realtime?model=gpt-4o-realtime-preview",
  {
    headers: {
      Authorization: "Bearer YOUR_API_KEY",
      "OpenAI-Beta": "realtime=v1",
    },
  }
);

ws.on("open", () => {
  // Send audio data
  ws.send(JSON.stringify({
    type: "input_audio_buffer.append",
    audio: "base64_encoded_audio_data"
  }));
});

ws.on("message", (data) => {
  const event = JSON.parse(data.toString());
  console.log("Received:", event.type);
});

Realtime API Reference

View full documentation

Next Steps