API Reference
Demeterics provides OpenAI-compatible reverse proxy endpoints for Groq, OpenAI, Anthropic, and Google Gemini. Use your Demeterics API key as the Bearer token, and we'll automatically track usage, bill credits, and store interactions in BigQuery.
All endpoints require Authorization: Bearer <DEMETERICS_API_KEY>. Replace https://api.demeterics.com with your deployment URL if self-hosting.
Important
- Use your Demeterics API key (from /api-keys), NOT your vendor API key
- Demeterics handles vendor authentication, credit billing, and usage tracking automatically
- For BYOK (Bring Your Own Key), store your vendor keys in Settings → API Keys
- Model identifiers vary by provider—use the model name from your provider's documentation
LLM Reverse Proxy Endpoints
Groq Proxy
Base URL: https://api.demeterics.com/groq
Supported endpoints:
- POST /v1/chat/completions - Chat completions (streaming supported)
- POST /v1/responses - Groq responses endpoint
- GET /v1/models - List available models
- GET /groq/health - Health check
Example: Chat completion with Groq
curl -X POST https://api.demeterics.com/groq/v1/chat/completions \
-H "Authorization: Bearer dmt_your_demeterics_api_key" \
-H "Content-Type: application/json" \
-d '{
"model": "llama-3.3-70b-versatile",
"messages": [
{"role": "user", "content": "What is the capital of France?"}
]
}'
Response
{
"id": "chatcmpl-...",
"object": "chat.completion",
"created": 1234567890,
"model": "llama-3.3-70b-versatile",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The capital of France is Paris."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 15,
"completion_tokens": 8,
"total_tokens": 23
}
}
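The chat completions endpoint also supports streaming (pass "stream": true), in which case the proxy relays OpenAI-style server-sent events. A minimal sketch of extracting the text deltas from such a stream, assuming the standard "data: {...}" SSE framing; the helper name and sample payloads are illustrative, not part of the Demeterics API:

```python
import json

def iter_stream_content(lines):
    """Yield text deltas from OpenAI-style SSE lines ("data: {...}")."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta

# Example with canned SSE lines as they would arrive over the wire:
sample = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"Hello"}}]}',
    'data: {"choices":[{"delta":{"content":" world"}}]}',
    "data: [DONE]",
]
print("".join(iter_stream_content(sample)))  # Hello world
```

In practice the OpenAI SDKs handle this framing for you (pass stream=True); the sketch is only meant to show what arrives on the wire.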
OpenAI Proxy
Base URL: https://api.demeterics.com/openai
Supported endpoints:
- POST /v1/chat/completions - Chat completions
- POST /v1/responses - OpenAI Responses API (agentic workflows)
- GET /v1/models - List models
- GET /openai/health - Health check
Note: For image generation, use the dedicated Image Generation API at /imagen/v1/generate. For text-to-speech, use the Speech API at /tts/v1/generate.
Example: Chat completion with OpenAI
curl -X POST https://api.demeterics.com/openai/v1/chat/completions \
-H "Authorization: Bearer dmt_your_demeterics_api_key" \
-H "Content-Type: application/json" \
-d '{
"model": "openai/gpt-5",
"messages": [
{"role": "user", "content": "Explain quantum computing in simple terms"}
]
}'
Anthropic Proxy
Base URL: https://api.demeterics.com/anthropic
Supported endpoints:
- POST /v1/messages - Claude Messages API
- GET /v1/models - List models
- GET /anthropic/health - Health check
Example: Messages API with Claude
curl -X POST https://api.demeterics.com/anthropic/v1/messages \
-H "Authorization: Bearer dmt_your_demeterics_api_key" \
-H "Content-Type: application/json" \
-d '{
"model": "anthropic/claude-sonnet-4.5",
"max_tokens": 1024,
"messages": [
{"role": "user", "content": "Write a haiku about coding"}
]
}'
Gemini Proxy (Native API)
Base URL: https://api.demeterics.com/gemini
Supported endpoints:
- POST /v1/models/{model}:generateContent - Generate content
- POST /v1/models/{model}:streamGenerateContent - Streaming generation
- GET /v1/models - List models
- GET /gemini/health - Health check
Example: Generate content with Gemini (Native API)
curl -X POST "https://api.demeterics.com/gemini/v1/models/gemini-2.0-flash-exp:generateContent" \
-H "Authorization: Bearer dmt_your_demeterics_api_key" \
-H "Content-Type: application/json" \
-d '{
"contents": [
{
"parts": [
{"text": "Explain the theory of relativity"}
]
}
]
}'
Google Proxy (OpenAI-Compatible)
Base URL: https://api.demeterics.com/google
The Google proxy provides an OpenAI-compatible interface to Google Gemini models. This lets you use Gemini with OpenAI SDKs and tools without changing your code: just switch the base URL.
Supported endpoints:
- POST /v1/chat/completions - Chat completions (OpenAI format)
- POST /v1/responses - OpenAI Responses API (agentic workflows)
- GET /v1/models - List models
- GET /google/health - Health check
Example: Chat completion with Gemini (OpenAI format)
curl -X POST https://api.demeterics.com/google/v1/chat/completions \
-H "Authorization: Bearer dmt_your_demeterics_api_key" \
-H "Content-Type: application/json" \
-d '{
"model": "gemini-2.0-flash-exp",
"messages": [
{"role": "user", "content": "Explain the theory of relativity"}
]
}'
Using with OpenAI SDKs
Python:
from openai import OpenAI
client = OpenAI(
base_url="https://api.demeterics.com/google/v1",
api_key="dmt_your_demeterics_api_key"
)
response = client.chat.completions.create(
model="gemini-2.0-flash-exp",
messages=[{"role": "user", "content": "Hello from Gemini!"}]
)
Node.js:
import OpenAI from 'openai';
const client = new OpenAI({
baseURL: 'https://api.demeterics.com/google/v1',
apiKey: 'dmt_your_demeterics_api_key'
});
const response = await client.chat.completions.create({
model: 'gemini-2.0-flash-exp',
messages: [{ role: 'user', content: 'Hello from Gemini!' }]
});
Note: The Google proxy automatically translates between OpenAI's chat completion format and Gemini's native generateContent format. You get the same OpenAI-style response structure while using Google's models.
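To illustrate the translation the proxy performs, here is a rough sketch of mapping OpenAI-style chat messages to Gemini's native generateContent request shape. This is a simplification for illustration only (the real proxy also handles system prompts, tools, and multimodal parts):

```python
def to_gemini_request(messages):
    """Map OpenAI chat messages to a Gemini generateContent body (simplified)."""
    contents = []
    for msg in messages:
        # Gemini's native API uses role "model" where OpenAI uses "assistant"
        role = "model" if msg["role"] == "assistant" else "user"
        contents.append({"role": role, "parts": [{"text": msg["content"]}]})
    return {"contents": contents}

body = to_gemini_request([
    {"role": "user", "content": "Explain the theory of relativity"},
])
print(body)
```

This is why the two cURL examples above (Native API vs. OpenAI format) carry the same question in two different envelopes.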
OpenRouter Proxy
Base URL: https://api.demeterics.com/openrouter
OpenRouter provides access to 300+ models from various providers including Grok, DeepSeek, Meta, Mistral, and more. All models use OpenAI-compatible API format.
Supported endpoints:
- POST /v1/chat/completions - Chat completions (streaming supported)
- GET /v1/models - List available models
- GET /openrouter/health - Health check
Example: Grok (xAI)
curl -X POST https://api.demeterics.com/openrouter/v1/chat/completions \
-H "Authorization: Bearer dmt_your_demeterics_api_key" \
-H "Content-Type: application/json" \
-d '{
"model": "x-ai/grok-4.1-fast:free",
"messages": [
{"role": "user", "content": "What makes you different from other AI assistants?"}
]
}'
Example: DeepSeek
curl -X POST https://api.demeterics.com/openrouter/v1/chat/completions \
-H "Authorization: Bearer dmt_your_demeterics_api_key" \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek/deepseek-chat-v3-0324",
"messages": [
{"role": "user", "content": "Explain the difference between transformers and RNNs"}
]
}'
Using with OpenAI SDKs
Python:
from openai import OpenAI
client = OpenAI(
base_url="https://api.demeterics.com/openrouter/v1",
api_key="dmt_your_demeterics_api_key"
)
# Grok example
response = client.chat.completions.create(
model="x-ai/grok-4.1-fast:free",
messages=[{"role": "user", "content": "Hello from Grok!"}]
)
# DeepSeek example
response = client.chat.completions.create(
model="deepseek/deepseek-chat-v3-0324",
messages=[{"role": "user", "content": "Hello from DeepSeek!"}]
)
Node.js:
import OpenAI from 'openai';
const client = new OpenAI({
baseURL: 'https://api.demeterics.com/openrouter/v1',
apiKey: 'dmt_your_demeterics_api_key'
});
// Grok example
const grokResponse = await client.chat.completions.create({
model: 'x-ai/grok-4.1-fast:free',
messages: [{ role: 'user', content: 'Hello from Grok!' }]
});
// DeepSeek example
const deepseekResponse = await client.chat.completions.create({
model: 'deepseek/deepseek-chat-v3-0324',
messages: [{ role: 'user', content: 'Hello from DeepSeek!' }]
});
Popular OpenRouter Models:
- x-ai/grok-4.1-fast:free - Grok 4.1 Fast (free tier)
- deepseek/deepseek-chat-v3-0324 - DeepSeek Chat v3
- meta-llama/llama-4-scout-17b-16e-instruct - Llama 4 Scout
- mistralai/mistral-large-2 - Mistral Large 2
- google/gemini-2.0-flash-001 - Gemini 2.0 Flash via OpenRouter
Note: Model names follow OpenRouter's naming convention. Check OpenRouter's model list for the full catalog of available models.
Data Management Endpoints
GET /api/v1/status
Verifies API key validity and returns service information.
Request
curl https://api.demeterics.com/api/v1/status \
-H "Authorization: Bearer dmt_your_demeterics_api_key"
Response
{
"status": "ok",
"project": "demeterics-api"
}
POST /api/v1/exports
Exports interaction data to JSON or CSV format. Supports streaming or GCS bucket delivery.
Request fields
- format: "json" or "csv"
- range: {"start": "YYYY-MM-DD", "end": "YYYY-MM-DD"}
- filters: Optional filtering (e.g., by model, user_id)
Example
curl -X POST https://api.demeterics.com/api/v1/exports \
-H "Authorization: Bearer dmt_your_demeterics_api_key" \
-H "Content-Type: application/json" \
-d '{
"format": "json",
"range": {
"start": "2025-01-01",
"end": "2025-01-31"
}
}'
Response
- Streams JSON/CSV data directly, OR
- Returns {"export_url": "gs://..."} if GCS export is configured
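Because the response can be either the exported data itself or a small JSON pointer to a GCS object, client code has to branch on the response shape. A hypothetical sketch (the helper name and the branching rule are illustrative; only the export_url field comes from the response format above):

```python
import json

def handle_export_response(content_type, body):
    """Return ("gcs", url) for a GCS pointer, or ("inline", body) for streamed data."""
    if content_type == "application/json":
        parsed = json.loads(body)
        # A dict carrying export_url is a pointer to a configured GCS export
        if isinstance(parsed, dict) and "export_url" in parsed:
            return ("gcs", parsed["export_url"])
    # Otherwise the body is the exported JSON/CSV data itself
    return ("inline", body)

kind, value = handle_export_response(
    "application/json", '{"export_url": "gs://bucket/export.json"}'
)
print(kind, value)  # gcs gs://bucket/export.json
```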
POST /api/v1/data
Requests data deletion for GDPR/privacy compliance. Supports deletion by user_id or transaction_ids.
Request fields
- user_id: Delete all data for this user (optional)
- transaction_ids: Array of transaction IDs to delete (optional)
- reason: Explanation for deletion (required)
Example
curl -X POST https://api.demeterics.com/api/v1/data \
-H "Authorization: Bearer dmt_your_demeterics_api_key" \
-H "Content-Type: application/json" \
-d '{
"user_id": "user@example.com",
"reason": "GDPR deletion request"
}'
Response
{
"status": "ok",
"message": "Deletion request queued"
}
Authentication Modes
Demeterics supports three authentication modes:
1. Demeter-Managed Keys (Default)
   - Use only your Demeterics API key
   - Demeterics provides vendor API keys automatically
   - Billed per-token via Stripe credits
2. BYOK (Bring Your Own Key)
   - Store your vendor API keys in Settings → API Keys
   - Demeterics uses your keys for API calls
   - Still tracks usage and analytics (no billing)
3. Dual-Key Mode
   - Format: Authorization: Bearer dmt_YOUR_KEY;vendor_VENDOR_KEY
   - Combines Demeterics tracking with your vendor key
   - Useful for migration or hybrid deployments
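A small helper for composing the dual-key header, assuming the semicolon-joined format shown above (the function name and sample keys are illustrative):

```python
def dual_key_auth(demeterics_key, vendor_key):
    """Build a dual-key Authorization value: Demeterics key, then vendor key."""
    return f"Bearer {demeterics_key};{vendor_key}"

headers = {"Authorization": dual_key_auth("dmt_abc123", "sk-vendor456")}
print(headers["Authorization"])  # Bearer dmt_abc123;sk-vendor456
```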
Error Responses
All errors follow this format:
{
"error": {
"message": "Human-readable error description",
"type": "error_type",
"code": "error_code"
}
}
Common error codes:
- 400 - Invalid request (malformed JSON, missing fields)
- 401 - Missing or invalid Authorization header
- 402 - Insufficient credits (Stripe billing required)
- 403 - API key valid but lacks permissions
- 404 - Endpoint not found or model unavailable
- 429 - Rate limited (too many requests)
- 5xx - Server error (retry with exponential backoff)
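One way to implement the recommended retry with exponential backoff for 429 and 5xx responses; a sketch under the assumption that the caller supplies a function returning an HTTP status code and body (the wrapper itself is illustrative, not part of any SDK):

```python
import random
import time

def with_backoff(call, max_attempts=5, base_delay=0.5):
    """Retry `call` on 429/5xx with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        status, body = call()
        if status == 429 or status >= 500:
            if attempt == max_attempts - 1:
                raise RuntimeError(f"giving up after {max_attempts} attempts (HTTP {status})")
            # Sleep base_delay, 2x, 4x, ... plus up to 100 ms of jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
            continue
        return status, body

# Simulate a call that fails twice with 503, then succeeds:
attempts = iter([(503, ""), (503, ""), (200, "ok")])
status, body = with_backoff(lambda: next(attempts), base_delay=0.01)
print(status, body)  # 200 ok
```

Note that 400/401/402/403 errors are not retried: they indicate a problem with the request or account that a retry cannot fix.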
Best Practices
- Idempotency: Include an X-Request-ID header for retries
- Streaming: Use streaming endpoints for real-time responses
- Error Handling: Implement exponential backoff for 5xx errors
- Model Validation: Don't hardcode model names; use your provider's documentation
- Credit Monitoring: Check your credit balance at /credits to avoid service interruption
- BYOK Setup: Store vendor keys in Settings → API Keys for zero-billing mode
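For the idempotency recommendation, the key point is to generate one request ID up front and reuse it across every retry of the same logical request; a sketch (the X-Request-ID header name comes from the list above, the rest is illustrative):

```python
import uuid

def build_headers(api_key, request_id=None):
    """Build request headers; pass the same request_id when retrying."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "X-Request-ID": request_id or str(uuid.uuid4()),
    }

# Generate the ID once, then reuse it on every retry of this request:
rid = str(uuid.uuid4())
first = build_headers("dmt_abc123", rid)
retry = build_headers("dmt_abc123", rid)
print(first["X-Request-ID"] == retry["X-Request-ID"])  # True
```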
SDKs
Demeterics is compatible with all OpenAI SDKs. Just change the base URL:
Python (OpenAI SDK)
from openai import OpenAI
client = OpenAI(
base_url="https://api.demeterics.com/groq/v1",
api_key="dmt_your_demeterics_api_key"
)
response = client.chat.completions.create(
model="llama-3.3-70b-versatile",
messages=[{"role": "user", "content": "Hello!"}]
)
Node.js (OpenAI SDK)
import OpenAI from 'openai';
const client = new OpenAI({
baseURL: 'https://api.demeterics.com/groq/v1',
apiKey: 'dmt_your_demeterics_api_key'
});
const response = await client.chat.completions.create({
model: 'llama-3.3-70b-versatile',
messages: [{ role: 'user', content: 'Hello!' }]
});
cURL
curl -X POST https://api.demeterics.com/groq/v1/chat/completions \
-H "Authorization: Bearer dmt_your_demeterics_api_key" \
-H "Content-Type: application/json" \
-d '{"model": "llama-3.3-70b-versatile", "messages": [{"role": "user", "content": "Hello!"}]}'
For more examples and integration guides, visit the Quick Start page.