TrueFoundry is an AI Gateway that provides a unified interface for accessing multiple AI providers with observability, caching, and rate limiting. TrueFoundry can export LLM traces to Braintrust using OpenTelemetry, providing comprehensive observability for all your AI Gateway interactions. The integration captures:
- LLM calls (chat completions, agent responses, embeddings)
- Token usage and cost metrics
- Request and response data
- Latency and performance data
- Hierarchical trace trees showing relationships between calls
## Prerequisites

Before configuring the integration, ensure you have:

- A TrueFoundry account (sign up at truefoundry.com)
- A Braintrust account (sign up at braintrust.dev)
- Your Braintrust API key from Settings → API Keys
- Your Braintrust project ID from your project’s configuration page
## Trace with TrueFoundry

TrueFoundry exports traces to Braintrust using OpenTelemetry. Configure the integration through the TrueFoundry dashboard.

### Enable OpenTelemetry export

In the TrueFoundry dashboard, navigate to AI Gateway → Controls → OTEL Config and enable the “Otel Traces Exporter Configuration” toggle.
### Configure the Braintrust endpoint

Select the HTTP Configuration tab and configure these settings:

- Traces endpoint: https://api.braintrust.dev/otel/v1/traces
- Encoding: Proto
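For reference, these two dashboard values correspond to the standard OpenTelemetry exporter settings for OTLP over HTTP with protobuf encoding. This is only a sketch of the equivalent configuration; TrueFoundry is configured through its dashboard, not through these environment variables.

```shell
# Equivalent standard OTel exporter settings (illustrative only; configure
# TrueFoundry via its dashboard rather than these variables).
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://api.braintrust.dev/otel/v1/traces"
# The "Proto" encoding corresponds to OTLP over HTTP with protobuf payloads:
export OTEL_EXPORTER_OTLP_TRACES_PROTOCOL="http/protobuf"
```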
### Add authentication headers

Add two HTTP headers to authenticate and route traces to your Braintrust project. Replace <YOUR_BRAINTRUST_API_KEY> with your API key from Braintrust settings, and <YOUR_PROJECT_ID> with your project ID.

The x-bt-parent header supports multiple prefixes for organizing traces: project_id:, project_name:, or experiment_id:.

## View traces in Braintrust
After configuration, all LLM interactions through the TrueFoundry AI Gateway appear in the Logs page of your Braintrust project. Each trace includes:

- Request parameters and prompts
- Response data and completions
- Token usage and estimated costs
- Latency measurements
- Hierarchical span relationships
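The authentication headers described above can be sketched in code. The x-bt-parent header name and its valid prefixes come from this guide; the Bearer authorization scheme and the helper function itself are assumptions for illustration, not a TrueFoundry or Braintrust API.

```python
# Sketch: build the two HTTP headers that route TrueFoundry's OTEL traces
# to Braintrust. Header names follow this guide; the "Bearer" scheme is an
# assumption, and this helper is illustrative only.
VALID_PARENT_PREFIXES = ("project_id:", "project_name:", "experiment_id:")

def braintrust_otel_headers(api_key: str, parent: str) -> dict:
    """Return authentication and routing headers for the Braintrust OTEL endpoint."""
    if not parent.startswith(VALID_PARENT_PREFIXES):
        raise ValueError(
            f"x-bt-parent must start with one of {VALID_PARENT_PREFIXES}"
        )
    return {
        "Authorization": f"Bearer {api_key}",
        "x-bt-parent": parent,
    }

headers = braintrust_otel_headers(
    "<YOUR_BRAINTRUST_API_KEY>", "project_id:<YOUR_PROJECT_ID>"
)
print(headers)
```

Using an experiment_id: prefix instead of project_id: routes the same traces to a specific experiment rather than the project's logs.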
## Self-hosted Braintrust
For self-hosted Braintrust deployments, point the traces endpoint at your self-hosted Braintrust URL instead of the standard api.braintrust.dev endpoint, keep the encoding set to Proto, and use the same authentication headers with your self-hosted instance’s API key.
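For example, with a hypothetical self-hosted domain standing in for api.braintrust.dev (the <your-braintrust-domain> placeholder is illustrative, and the /otel/v1/traces path is assumed to match the hosted endpoint), the HTTP Configuration values would look like:

```
Traces endpoint: https://<your-braintrust-domain>/otel/v1/traces
Encoding: Proto
```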