Model drift
/'mah.duhl drihft/ (noun)

A change in model behavior or performance over time, often caused by shifts in input distribution or provider updates. It is typically detected via score trends, topic shifts, or regressions on a fixed eval suite.
“After the provider update, we saw model drift in support tickets about billing.”
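One way to catch this kind of regression, as a minimal sketch: re-run a fixed eval suite after each provider update and flag a drop in the mean score beyond a threshold. The scores, threshold, and function name below are illustrative assumptions, not a specific product API.

```python
from statistics import mean

def detect_drift(baseline_scores, current_scores, max_drop=0.05):
    """Flag drift when the mean score on a fixed eval suite drops
    more than max_drop relative to the baseline run.
    (Hypothetical helper; threshold is an assumption.)"""
    drop = mean(baseline_scores) - mean(current_scores)
    return drop > max_drop

# Baseline run vs. a run after a provider update (illustrative scores)
baseline = [0.92, 0.88, 0.95, 0.90]
current = [0.81, 0.78, 0.85, 0.80]
detect_drift(baseline, current)  # True: mean dropped by about 0.10
```

In practice a single mean hides topic-level shifts, so teams often track score trends per topic or per segment as well, which is what the topic-shift detection mentioned above refers to.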
Customer example
Replit's CTO described how the team moved from manually investigating individual sessions to surfacing patterns of model drift across many sessions in Braintrust, catching quality degradation at scale rather than waiting for one-off bug reports.
Related Observability terms
- AI observability
- Alert / threshold
- Dashboard
- Data flywheel
- Deep search
- Drift
- Error rate
- Feedback loop
- Logs
- Online evaluation (production scoring)
- P50 / P95 / P99 (Percentiles)
- Sampling rate
- Service Level Indicator (SLI)
- Service Level Objective (SLO)
- Time-to-first-token (TTFT)
- Token usage / cost tracking
- Topics
From the docs
Braintrust is the AI observability and eval platform for production AI. By connecting evals and observability in one workflow, teams at Notion, Stripe, Zapier, Vercel, and Ramp ship quality AI products at scale.