Evaluation and Observability
A/B Testing Agents
A/B testing for agents means running two or more configurations, such as different prompts, models, or tool sets, against live production traffic simultaneously and measuring which one performs better on metrics that actually matter in your system. Offline evaluations on curated datasets tell you what a configuration can do in controlled conditions, but A/B tests reveal how it behaves against real users, real edge cases, and real environmental factors that no test suite anticipates. The core difficulty is that agent outputs are multidimensional: one configuration might be faster but less accurate, or cheaper but more prone to hallucination, so you need a weighted scoring model rather than a single conversion metric to declare a winner.
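The weighted scoring idea above can be sketched in a few lines. This is a hypothetical illustration, not any particular library's API: the variant names, metric values, and weights are all assumptions, and a real deployment would add significance testing before declaring a winner.

```python
import hashlib

VARIANTS = ["control", "candidate"]

def assign_variant(user_id: str) -> str:
    """Hash the user id so each user consistently sees the same variant."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return VARIANTS[0] if bucket < 50 else VARIANTS[1]

# Per-variant metrics aggregated from production traffic (illustrative numbers).
metrics = {
    "control":   {"accuracy": 0.91, "latency_s": 2.4, "cost_usd": 0.012, "hallucination_rate": 0.04},
    "candidate": {"accuracy": 0.88, "latency_s": 1.1, "cost_usd": 0.007, "hallucination_rate": 0.06},
}

# Weights encode what "better" means for this system; negative weights
# penalize metrics where lower is better (latency, cost, hallucinations).
WEIGHTS = {"accuracy": 1.0, "latency_s": -0.05, "cost_usd": -10.0, "hallucination_rate": -2.0}

def score(variant_metrics: dict) -> float:
    """Collapse multidimensional metrics into one comparable number."""
    return sum(WEIGHTS[name] * value for name, value in variant_metrics.items())

winner = max(metrics, key=lambda v: score(metrics[v]))
```

With these example weights, the candidate's latency and cost savings outweigh its accuracy and hallucination regressions; shifting the weights flips the outcome, which is exactly why the weighting must reflect what matters in your system.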
resources
Braintrust: Experiments (braintrust.dev) - Running controlled experiments to compare agent configurations
Promptfoo: Comparison (promptfoo.dev) - Side-by-side comparison tooling for different prompt and model configurations
LaunchDarkly: AI Feature Flags (launchdarkly.com) - Feature flag infrastructure for progressively rolling out agent changes
OpenAI: Production Best Practices (platform.openai.com) - Guidelines for testing in production with LLM-based systems