What is Vercel AI SDK tracing?
This guide shows how to set up Respan tracing with Next.js and the Vercel AI SDK so you can monitor and trace your AI-powered applications.
Steps to use
If you already have a Next.js + Vercel AI SDK app, start from Step 1 below.

Optional: start with the pre-built example

- Add your API keys to `.env.local` (see Step 3)
- Run `yarn dev` (or `pnpm dev`) to start the dev server
- Start chatting and check your Respan dashboard
Set up OpenTelemetry instrumentation
Next.js supports OpenTelemetry instrumentation out of the box. Install Vercel's OpenTelemetry package, then create `instrumentation.ts` in your project root (where `package.json` lives) and configure the Respan exporter, as sketched below.
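Install the instrumentation package:

```bash
npm install @vercel/otel
# or: yarn add @vercel/otel / pnpm add @vercel/otel
```

A minimal `instrumentation.ts` sketch follows. The Respan endpoint and the `RESPAN_OTLP_ENDPOINT` / `RESPAN_API_KEY` variable names are assumptions for illustration; copy the real values from your Respan dashboard.

```ts
// instrumentation.ts
import { registerOTel, OTLPHttpJsonTraceExporter } from '@vercel/otel';

export function register() {
  registerOTel({
    serviceName: 'my-next-app',
    traceExporter: new OTLPHttpJsonTraceExporter({
      // Assumption: replace with the OTLP traces endpoint shown in your Respan dashboard.
      url: process.env.RESPAN_OTLP_ENDPOINT,
      headers: {
        // Assumption: Respan authenticates OTLP exports with a bearer token.
        Authorization: `Bearer ${process.env.RESPAN_API_KEY}`,
      },
    }),
  });
}
```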
Configure environment variables
Add your Respan credentials and the API key for your model provider (OpenAI, Anthropic, or Google Gemini) to `.env.local`:
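A sketch of the resulting file; the `RESPAN_*` variable names match the `instrumentation.ts` sketch above and are assumptions, and only one provider key is needed:

```bash
# .env.local
RESPAN_API_KEY=your-respan-api-key   # from your Respan dashboard
RESPAN_OTLP_ENDPOINT=https://...     # OTLP traces endpoint (assumption: see dashboard)

# Set the key for whichever provider you use:
OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
# GOOGLE_GENERATIVE_AI_API_KEY=...
```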
Enable telemetry in your route
In your API route file (e.g. `app/api/chat/route.ts`), enable telemetry by adding the `experimental_telemetry` option. The sketch below uses the OpenAI provider; the Anthropic and Google Gemini providers take the same option.
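A minimal sketch assuming the `@ai-sdk/openai` provider and AI SDK v4-style streaming; your model choice and response helper may differ:

```ts
// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'), // or anthropic(...) / google(...) with the same telemetry option
    messages,
    // Opt this call into OpenTelemetry so its spans reach the Respan exporter.
    experimental_telemetry: {
      isEnabled: true,
      functionId: 'chat', // optional label for grouping traces
    },
  });

  return result.toDataStreamResponse();
}
```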
Run locally and verify traces
- Start your dev server (see the command below)
- Make some chat requests through your application
- Verify traces in Respan:
  - Go to Logs → Traces
  - Confirm requests are being traced
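The dev server command, assuming the standard Next.js `dev` script:

```bash
npm run dev
# or: yarn dev / pnpm dev
```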
If the dev server fails with a missing-module error for `@vercel/otel`, install the missing packages and retry.

What gets traced
With this setup, Respan will capture:

- AI model calls: requests made via the Vercel AI SDK
- Token usage: input and output token counts
- Performance metrics: latency and throughput
- Errors: failed requests and error details
- Custom metadata: additional context you attach via telemetry metadata/headers
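For custom metadata, the AI SDK records a `metadata` object on `experimental_telemetry` as attributes on the emitted spans. A sketch of the route above with metadata attached; the key names here are illustrative, not a Respan requirement:

```ts
// Same route handler as above, now tagging each trace with extra context.
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    experimental_telemetry: {
      isEnabled: true,
      // Arbitrary key/value pairs recorded on the trace as span attributes;
      // the keys below are hypothetical examples.
      metadata: { userId: 'user_123', environment: 'staging' },
    },
  });

  return result.toDataStreamResponse();
}
```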