Documentation Index
Fetch the complete documentation index at: https://braintrust.dev/docs/llms.txt
Use this file to discover all available pages before exploring further.
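The discovery step above can be sketched in Python: fetch the index once, then scan its non-empty lines for guide titles before drilling into individual pages. This is a minimal sketch, not an official client; the exact layout of the real index file is an assumption, so the parsing below runs on a small inline sample.

```python
from urllib.request import urlopen

INDEX_URL = "https://braintrust.dev/docs/llms.txt"

def fetch_index(url: str = INDEX_URL) -> str:
    """Download the full documentation index as plain text."""
    with urlopen(url) as resp:
        return resp.read().decode("utf-8")

def list_titles(index_text: str) -> list[str]:
    """Return the non-empty lines, i.e. candidate headings and page titles."""
    return [line.strip() for line in index_text.splitlines() if line.strip()]

# Offline sample standing in for the fetched index (layout is an assumption):
sample = """All config guides
Configure custom model costs for estimation
Duplicate a prompt in Braintrust
"""
titles = list_titles(sample)
print(titles[1])  # prints: Configure custom model costs for estimation
```

In practice you would call `fetch_index()` once and pass the result to `list_titles()` to decide which guide pages are worth fetching next.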
Learn how to configure Braintrust features, set up integrations, and enable advanced capabilities. Each guide provides step-by-step configuration instructions.
All config guides
Access UI parameters in remote eval tasks
Accessing log attachments in Python scorers
Add custom measures to monitoring charts
Apply log retention policies across all projects via API
Attaching debuggers to Braintrust eval CLI
AWS Bedrock IAM permissions for Braintrust integration
BTQL POST endpoint payload and response schema
Configure custom model costs for estimation
Configure dataset schemas via API
Configure default groups for Google SSO users
Configure human review visibility in dataset views
Configure MCP server with self-hosted data plane
Configure root span filter for online scoring
Configure scorer pass threshold via API
Create monitoring dashboards with organization permissions
Creating prompts with encoded custom provider model names
Customizing experiment names in the SDK
Customizing monitoring views: removing charts and managing view layouts
Dataset format for system and user prompts
Defining response schema in prompts.create()
Detecting remote eval and CLI execution context: SDK hooks and environment variables
Duplicate a prompt in Braintrust
Edit Topics automation sampling rate after setup
Emit traces to multiple projects simultaneously
Experiment fetch limited to 1000 events: export all
Export all experiment events using BTQL pagination
Export missing child spans
Fetching human review comments via API and BTQL
File attachment support across models
Filter logs by tags and scores with ANY_SPAN
Flagging logs and users for review via API
Flagging users for review via API
How to configure OpenAI for EU organizations
How to duplicate views across projects
How to find and use self-hosted data plane API endpoints
How to save custom instructions for Loop using project
Human review visibility in experiment row view
Human review with multiple scores in expected field
Java SDK proxy configuration with authentication
Log token metrics to @traced spans manually
Log user feedback selectively with init_logger
Loop security model and user permissions
Managing prompt updates in production: environment tags
Navigate and filter metadata arrays by index
Open trace URLs in a specific view mode
Optimize large trace logging with attachments
Override custom providers at project level
Permission group not working: Project-level scorer access
Project permission levels and user access control
Project-scoped access with service tokens
Promoting prompts across environments via API
Python SDK Docker containerization setup
Query multiple projects for combined metrics
Removing charts from the default "All Data" view
Restrict AI provider access using project permissions
Restrict user access to specific projects only
Review specific rows in Logs and Experiments
Route requests to specific AI providers
Running evaluations per Git commit: SHA-based experiment
Scorer template variable reference
Setting up SSO integration in Braintrust
Slack alert permissions in project settings
Stitch multi-turn chat into one trace with wrapAISDK
Thread view span extraction and filtering
Topics automation idle timeout and trace update behavior
Tracing OpenAI Realtime API with WebSockets
Troubleshooting 500 errors in self-hosted data planes
UI experiment timeout configuration for self-hosted k8s
Understanding experiment score aggregation: simple vs
Understanding Update Project ACLs
Update AWS Bedrock credentials using API
Use project IDs to prevent duplicate projects after rename
Using OpenAI Responses API with Braintrust SDKs
Need more help?
Can’t find what you’re looking for? Search across all documentation using the search bar above, or contact support at support@braintrust.dev.