Observability
Tracing is the cornerstone of debugging and improving your AI app. It gives you visibility into every execution step while collecting valuable data for evaluations and fine-tuning. With Laminar, you can start tracing with a single line of code.
import { Laminar, observe } from '@lmnr-ai/lmnr';
// automatically traces common LLM frameworks and SDKs
Laminar.initialize({ projectApiKey: "..." });
// you can also manually trace any function
const myFunction = observe({ name: 'myFunc' }, async () => {
  // function body
});
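To make the wrapping pattern concrete, here is a minimal, self-contained sketch of what an observe-style wrapper does in general: it returns a new async function that records a named, timed span around each call. This is an illustration of the pattern only, not Laminar's actual implementation; the `observeSketch` function and `Span` type below are hypothetical names invented for this example.

```typescript
// Illustrative sketch only -- NOT Laminar's implementation. It shows the
// general pattern of wrapping an async function so each call records a
// named, timed span, even when the wrapped function throws.

type Span = { name: string; durationMs: number };
const spans: Span[] = [];

function observeSketch<Args extends unknown[], R>(
  opts: { name: string },
  fn: (...args: Args) => Promise<R>,
): (...args: Args) => Promise<R> {
  return async (...args: Args) => {
    const start = Date.now();
    try {
      return await fn(...args);
    } finally {
      // Record the span whether the call succeeded or threw.
      spans.push({ name: opts.name, durationMs: Date.now() - start });
    }
  };
}

// Usage: wrap any async function; calls are traced transparently.
const myFunction = observeSketch({ name: 'myFunc' }, async (x: number) => x * 2);
```

Because the wrapper preserves the original function's signature, callers don't change: they invoke `myFunction(21)` as before, and a span is recorded as a side effect.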

Tommy He
CTO, Clarum

I can attest to it being the only reliable and performant LLM monitoring platform I've tried. Founding team is great to talk to and super responsive.
Hashim Rehman
CTO, Remo

Laminar's evals help us maintain high accuracy while moving fast, and their team is incredibly responsive. We now use them for every LLM-based feature we build.
Michael Ettlinger
CTO, Saturn

Laminar's tracing is genuinely great. So much better than the others I've tried.
LLM observability with 1 line of code
Once Laminar is initialized, calls through popular LLM frameworks and SDKs are traced automatically.
Real-time traces
Laminar's tracing engine allows you to debug your AI app faster with real-time traces.
Browser agent observability
Laminar pioneered a new kind of observability for browser agents. We automatically record browser sessions and sync them with agent traces, so you can see exactly what the agent saw. This makes debugging and improving your browser agent dramatically easier.

LLM playground
Open LLM spans in the playground to experiment with prompts and models.
Datasets
Build datasets from span data for evals, fine-tuning and prompt engineering.
Labels
Label your spans with custom tags to make them more informative.
Open-source and easy to self-host
Laminar is fully open-source and easy to self-host.