

@openrouter/agent is OpenRouter’s TypeScript toolkit for building agent loops.

Setup

1. Install packages

# pnpm
pnpm add braintrust @openrouter/agent
# npm
npm install braintrust @openrouter/agent
2. Set environment variables

.env
OPENROUTER_API_KEY=<your-openrouter-api-key>
BRAINTRUST_API_KEY=<your-braintrust-api-key>

# If you are self-hosting Braintrust, set the URL of your hosted dataplane
# BRAINTRUST_API_URL=<your-braintrust-api-url>
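Before making any calls, it can help to fail fast when a required key is missing. A minimal sketch, using the variable names from the .env example above (the `requireEnv` helper is illustrative, not part of either SDK):

```typescript
// Sketch: throw early if required environment variables are unset.
// Variable names match the .env example above; the helper itself is
// not part of braintrust or @openrouter/agent.
function requireEnv(names: string[]): Record<string, string> {
  const missing = names.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
  return Object.fromEntries(
    names.map((name) => [name, process.env[name] as string]),
  );
}

// Usage: call once at startup.
// const keys = requireEnv(["OPENROUTER_API_KEY", "BRAINTRUST_API_KEY"]);
```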
Braintrust supports @openrouter/agent v0.1.2 and later.

Auto-instrumentation

Braintrust can auto-instrument OpenRouter.callModel() calls. This is the recommended setup for most projects.

import { initLogger } from "braintrust";
import { OpenRouter } from "@openrouter/agent";

initLogger({
  projectName: "My Project",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const client = new OpenRouter({ apiKey: process.env.OPENROUTER_API_KEY });
const result = client.callModel({
  model: "openai/gpt-5-mini",
  input: "What is observability?",
});

const text = await result.getText();

Run with the import hook:

node --import braintrust/hook.mjs app.js
If you’re using a bundler, see Trace LLM calls for plugin and loader setup.

Manual instrumentation

Trace an OpenRouter Agent client explicitly by wrapping it with wrapOpenRouterAgent.

import { initLogger, wrapOpenRouterAgent } from "braintrust";
import { OpenRouter } from "@openrouter/agent";

initLogger({
  projectName: "My Project",
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const client = wrapOpenRouterAgent(
  new OpenRouter({ apiKey: process.env.OPENROUTER_API_KEY }),
);

const result = client.callModel({
  model: "openai/gpt-5-mini",
  input: "Reply with exactly: traced",
});

const text = await result.getText();
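wrapOpenRouterAgent returns a client whose calls behave identically but are recorded as spans. Its internals aren't shown in this guide; purely as an illustration of the general wrapping pattern, a Proxy can intercept method calls and report them to a callback (the `wrapWithTracing` helper and `onSpan` callback below are hypothetical, not Braintrust APIs):

```typescript
// Illustrative only: a generic tracing wrapper in the spirit of
// wrapOpenRouterAgent. The real wrapper records Braintrust spans; this
// sketch just reports each method name and its duration to a callback.
function wrapWithTracing<T extends object>(
  target: T,
  onSpan: (name: string, ms: number) => void,
): T {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const value = Reflect.get(obj, prop, receiver);
      if (typeof value !== "function") return value;
      return (...args: unknown[]) => {
        const start = Date.now();
        const result = value.apply(obj, args);
        onSpan(String(prop), Date.now() - start);
        return result;
      };
    },
  });
}
```

The wrapped object keeps its original behavior, which is why swapping `new OpenRouter(...)` for `wrapOpenRouterAgent(new OpenRouter(...))` requires no other code changes.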

What Braintrust traces

Braintrust captures the agent run as a trace tree:
  • A top-level LLM span for the call, with the final output and aggregated token usage across all turns.
  • One nested LLM span per agent loop turn, with per-turn input, output, and usage, plus step and step_type (initial or continue) metadata.
  • One tool span per tool invocation the agent makes, with the tool name, input, and output.
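The usage roll-up described above can be sketched as a simple reduction: the top-level span's usage is the sum of the per-turn usage on the nested spans. The `Usage` shape below is an assumption for illustration, not the SDK's actual type:

```typescript
// Sketch of the aggregation described above: top-level token usage is
// the sum of per-turn usage across the agent loop.
// This Usage shape is assumed for illustration.
interface Usage {
  inputTokens: number;
  outputTokens: number;
}

function aggregateUsage(turns: Usage[]): Usage {
  return turns.reduce(
    (total, turn) => ({
      inputTokens: total.inputTokens + turn.inputTokens,
      outputTokens: total.outputTokens + turn.outputTokens,
    }),
    { inputTokens: 0, outputTokens: 0 },
  );
}
```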