Palantyra

Quick Start

Get started with Palantyra in minutes

Automatic Tracing (Zero-Code)

The simplest way to get started is automatic tracing: once the SDK is initialized, every LLM call is traced with no further code changes:

import palantyra
from openai import OpenAI

# Initialize once at application startup
palantyra.initialize(
    project_api_key="your-api-key-here"
)

# All LLM calls are now automatically traced!
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Traces include: cost, latency, tokens, model, status, etc.
print(response.choices[0].message.content)
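Hardcoding the API key, as above, is fine for a first run, but in production you will usually read it from the environment instead. A minimal helper for that could look like the following (the helper name and error message are illustrative, not part of the SDK):

```python
import os

def load_api_key(var: str = "LLM_OBSERVABILITY_API_KEY") -> str:
    # Read the key from the environment; fail fast with a clear message if unset.
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it before initializing Palantyra")
    return key

# Then pass it to the SDK at startup:
# palantyra.initialize(project_api_key=load_api_key())
```

Failing fast at startup is preferable to discovering a missing key later, when the first trace export silently drops.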

Configuration Options

import palantyra

palantyra.initialize(
    project_api_key="your-api-key",
    endpoint="http://localhost:8080",  # Palantyra server
    service_name="my-ai-app",
    service_version="1.0.0",
    use_batching=True  # Better for high-volume apps
)

Configuration Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| project_api_key | str | Required | Your Palantyra API key |
| endpoint | str | http://localhost:8080 | Palantyra server URL |
| service_name | str | llm-application | Service identifier |
| service_version | str | 1.0.0 | Service version |
| auto_instrument | bool | True | Auto-trace LLM calls |
| use_batching | bool | True | Batch span exports |

Environment Variables

You can also configure using environment variables:

export LLM_OBSERVABILITY_API_KEY="your-api-key"
export LLM_OBSERVABILITY_ENDPOINT="http://localhost:8080"
export LLM_OBSERVABILITY_SERVICE_NAME="my-ai-app"

Then load them in Python:

import palantyra
from palantyra.utils import SDKConfig

# Build the config from the environment and pass its fields to initialize()
config = SDKConfig.from_env()
palantyra.initialize(**config.__dict__)
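For reference, here is a plausible sketch of what SDKConfig.from_env does, with field defaults mirroring the configuration table above. This is an illustration of the pattern, not the SDK's actual source:

```python
import os
from dataclasses import dataclass

@dataclass
class SDKConfig:
    # Illustrative reconstruction; defaults mirror the configuration table.
    project_api_key: str = ""
    endpoint: str = "http://localhost:8080"
    service_name: str = "llm-application"
    service_version: str = "1.0.0"

    @classmethod
    def from_env(cls) -> "SDKConfig":
        # Each field falls back to its default when the variable is unset.
        return cls(
            project_api_key=os.environ.get("LLM_OBSERVABILITY_API_KEY", ""),
            endpoint=os.environ.get("LLM_OBSERVABILITY_ENDPOINT", "http://localhost:8080"),
            service_name=os.environ.get("LLM_OBSERVABILITY_SERVICE_NAME", "my-ai-app"),
        )
```

Because the result is a plain dataclass, `**config.__dict__` unpacks its fields directly into the keyword arguments of initialize().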

💡 Pro Tip: Call initialize() once at your application's entry point. All subsequent LLM calls will be automatically traced without any code changes!
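If initialize() might be reached from more than one entry path (tests, worker processes, notebooks), a small guard keeps it to one call per process. The guard below is a generic Python pattern, not a Palantyra API:

```python
_initialized = False

def initialize_once(initialize, **kwargs):
    # Wrap any SDK initialize() so repeated calls become no-ops.
    global _initialized
    if not _initialized:
        initialize(**kwargs)
        _initialized = True

# Usage (hypothetical): initialize_once(palantyra.initialize, project_api_key="your-api-key")
```

This keeps re-imports and repeated entry points from re-initializing tracing with conflicting settings.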