

Use the Braintrust CLI to automatically instrument your project with your preferred coding agent.

Auto-instrumentation

Auto-instrumentation patches supported AI libraries at startup so every LLM call is captured without wrapping individual clients. This is the recommended way to set up tracing. The examples on this page use OpenAI, but Braintrust supports many providers and frameworks.
Auto-instrumentation in TypeScript uses a startup hook that patches supported AI libraries automatically.
1. Install the dependencies

npm install braintrust openai
2. Set your environment variables

export BRAINTRUST_API_KEY="your-api-key"
export OPENAI_API_KEY="your-api-key"
3. Trace your LLM calls

This example traces a single OpenAI call:
import { initLogger } from "braintrust";
import OpenAI from "openai";

// Call once at startup — all LLM calls are traced automatically
initLogger({
  apiKey: process.env.BRAINTRUST_API_KEY,
  projectName: "My Project (TypeScript)",
});

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const response = await client.responses.create({
  model: "gpt-5-mini",
  input: "What is the capital of France?",
});
4. Run your app

Run with the --import flag to enable auto-instrumentation:
node --import braintrust/hook.mjs app.js
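When you can't edit the start command itself (for example, a platform or wrapper script invokes Node for you), the standard NODE_OPTIONS environment variable is a common way to inject the same hook. A minimal sketch, assuming your entry point is app.js:

```shell
# Equivalent to passing --import on the command line;
# Node reads NODE_OPTIONS before executing the entry point.
NODE_OPTIONS="--import braintrust/hook.mjs" node app.js
```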
If you’re using a bundler or a framework that uses one, use the appropriate bundler plugin instead of the --import flag. The plugins are included in the Braintrust SDK:
Bundler / Framework    Import path
Vite / SvelteKit       braintrust/vite
Nuxt                   braintrust/vite (client) + braintrust/rollup (Nitro server)
Webpack / Next.js      braintrust/webpack
esbuild                braintrust/esbuild
Rollup                 braintrust/rollup
Vite (vite.config.ts):
import { defineConfig } from "vite";
import { vitePlugin } from "braintrust/vite";

export default defineConfig({
  plugins: [vitePlugin()],
});
Next.js with Turbopack (next.config.ts) — default in Next.js 16+:
import type { NextConfig } from "next";
import { createRequire } from "module";

const require = createRequire(import.meta.url);

const nextConfig: NextConfig = {
  turbopack: {
    rules: {
      "*.{js,mjs,cjs}": {
        condition: "foreign",
        loaders: [{ loader: require.resolve("braintrust/webpack-loader") }],
      },
    },
  },
};

export default nextConfig;
Next.js with Webpack (next.config.ts) — default in Next.js 15 and earlier:
import type { NextConfig } from "next";
import { webpackPlugin } from "braintrust/webpack";

const nextConfig: NextConfig = {
  webpack(config) {
    config.plugins.push(webpackPlugin());
    return config;
  },
};

export default nextConfig;
Nuxt (nuxt.config.ts) — Nuxt uses Vite for the client build and Rollup (via Nitro) for the server build, so both plugins are needed:
import { vitePlugin } from "braintrust/vite";
import { rollupPlugin } from "braintrust/rollup";

export default defineNuxtConfig({
  vite: {
    plugins: [vitePlugin()],
  },
  nitro: {
    rollupConfig: {
      plugins: [rollupPlugin()],
    },
  },
});
Requires Node.js 18.19.0+ or 20.6.0+ for --import flag support. Check with node --version.
Run your app and check Braintrust — your LLM calls will appear in the project logs.
Streaming responses are fully supported — Braintrust automatically collects streamed chunks and logs the complete response as a single span.

Manual instrumentation

Manual instrumentation lets you explicitly instrument individual client instances. This is an alternative to auto-instrumentation, useful if you prefer explicit control or if auto-instrumentation isn’t supported by the libraries you’re using. Unlike auto-instrumentation, you need to wrap each client instance in your application.
import { initLogger, wrapOpenAI } from "braintrust";
import OpenAI from "openai";

initLogger({
  apiKey: process.env.BRAINTRUST_API_KEY,
  projectName: "My Project (TypeScript)",
});

// Wrap the OpenAI client to trace all calls
const client = wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));
const response = await client.responses.create({
  model: "gpt-5-mini",
  input: "What is the capital of France?",
});

Braintrust gateway

The Braintrust gateway provides a unified OpenAI-compatible API for accessing models from many providers. When you call a model through the gateway, your requests are automatically traced — no SDK instrumentation needed. The gateway also provides automatic caching and observability across providers.

Supported libraries

To help you log traces, Braintrust's SDKs support automatic and manual instrumentation for many common libraries in both TypeScript and Python.
Braintrust also integrates with frameworks like LangChain, LangGraph, AgentScope, CrewAI, LlamaIndex, Mastra, and OpenTelemetry. Many Python frameworks can be auto-instrumented directly with braintrust.auto_instrument(), while others still require framework-specific setup. See Integrations.

Next steps