# Quick Start

Get started with Palantyra in minutes.
## Automatic Tracing (Zero-Code)
The simplest way to get started is zero-code tracing: after a one-time initialization, all LLM calls are traced automatically.
```python
import palantyra
from openai import OpenAI

# Initialize once at application startup
palantyra.initialize(
    project_api_key="your-api-key-here"
)

# All LLM calls are now automatically traced!
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Traces include: cost, latency, tokens, model, status, etc.
print(response.choices[0].message.content)
```
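In real applications you would typically read the key from the environment rather than hardcoding it. A minimal sketch, using the `LLM_OBSERVABILITY_API_KEY` variable documented under Environment Variables below:

```python
import os

import palantyra

# Read the API key from the environment instead of hardcoding it.
# LLM_OBSERVABILITY_API_KEY is the variable listed in the
# "Environment Variables" section of this page.
palantyra.initialize(
    project_api_key=os.environ["LLM_OBSERVABILITY_API_KEY"]
)
```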
## Configuration Options
```python
import palantyra

palantyra.initialize(
    project_api_key="your-api-key",
    endpoint="http://localhost:8080",  # Palantyra server
    service_name="my-ai-app",
    service_version="1.0.0",
    use_batching=True,  # Better for high-volume apps
)
```
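Batching amortizes export overhead, which suits long-running, high-volume services. For short-lived scripts it may be safer to disable it so spans are not lost at process exit; that unbatched spans export immediately is an assumption about the exporter, not something this page states. A sketch:

```python
import palantyra

# Assumption: with use_batching=False, each span is exported as it
# completes, so a short-lived process can exit without dropping traces.
palantyra.initialize(
    project_api_key="your-api-key",
    use_batching=False,  # trade throughput for immediate export
)
```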
### Configuration Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `project_api_key` | `str` | Required | Your Palantyra API key |
| `endpoint` | `str` | `http://localhost:8080` | Palantyra server URL |
| `service_name` | `str` | `llm-application` | Service identifier |
| `service_version` | `str` | `1.0.0` | Service version |
| `auto_instrument` | `bool` | `True` | Auto-trace LLM calls |
| `use_batching` | `bool` | `True` | Batch span exports |
## Environment Variables
You can also configure using environment variables:
```bash
export LLM_OBSERVABILITY_API_KEY="your-api-key"
export LLM_OBSERVABILITY_ENDPOINT="http://localhost:8080"
export LLM_OBSERVABILITY_SERVICE_NAME="my-ai-app"
```
```python
import palantyra
from palantyra.utils import SDKConfig

# Build the SDK configuration from the LLM_OBSERVABILITY_* variables.
config = SDKConfig.from_env()
palantyra.initialize(**config.__dict__)
```
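If you need to override a single env-derived value per deployment, you can merge an override into the settings before passing them on. A sketch; the staging service name is purely illustrative:

```python
import palantyra
from palantyra.utils import SDKConfig

config = SDKConfig.from_env()

# Override one field in code; "my-ai-app-staging" is an
# illustrative value, not a required name.
params = {**config.__dict__, "service_name": "my-ai-app-staging"}
palantyra.initialize(**params)
```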
> 💡 **Pro Tip:** Call `initialize()` once at your application's entry point. All subsequent LLM calls will be automatically traced without any code changes!
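Concretely, an entry point might look like the sketch below; the `main()` structure is just one common pattern, not a requirement of the SDK:

```python
import os

import palantyra
from openai import OpenAI

def main() -> None:
    # One-time setup, before any LLM clients are created or used.
    palantyra.initialize(
        project_api_key=os.environ["LLM_OBSERVABILITY_API_KEY"],
        service_name="my-ai-app",
    )

    # From here on, calls through the OpenAI client are traced
    # without any extra instrumentation code.
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)

if __name__ == "__main__":
    main()
```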