What is Vercel AI SDK tracing?

This guide shows how to set up Respan tracing with Next.js and the Vercel AI SDK so you can monitor and trace your AI-powered applications.

Steps to use

If you already have a Next.js + Vercel AI SDK app, start from Step 1 below.

Optional: start with the pre-built example

npx create-next-app --example https://github.com/Respan/respan-example-projects/tree/main/vercel_ai_next_openai my-respan-app
Then:
  1. Add your API keys to .env.local (see Step 3)
  2. Run yarn dev (or pnpm dev) to start the dev server
  3. Start chatting and check your Respan dashboard
Step 1: Install Respan exporter

Install the Respan exporter package:
npm install @respan/exporter-vercel
Step 2: Set up OpenTelemetry instrumentation

Next.js supports OpenTelemetry instrumentation out of the box. Create instrumentation.ts in your project root (alongside package.json). First, install Vercel's OpenTelemetry package:
yarn add @vercel/otel
Then configure the Respan exporter:
instrumentation.ts
import { registerOTel } from "@vercel/otel";
import { RespanExporter } from "@respan/exporter-vercel";

export function register() {
  registerOTel({
    serviceName: "next-app",
    traceExporter: new RespanExporter({
      apiKey: process.env.RESPAN_API_KEY,
      baseUrl: process.env.RESPAN_BASE_URL,
      debug: true,
    }),
  });
}
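On Next.js versions where the instrumentation hook is still experimental (versions before 15; check the docs for your version), instrumentation.ts is only picked up after enabling it in next.config.js:

```javascript
// next.config.js — only needed when the instrumentation hook
// is still experimental in your Next.js version (pre-15).
// On newer versions, instrumentation.ts is loaded automatically.
module.exports = {
  experimental: {
    instrumentationHook: true,
  },
};
```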
Step 3: Configure environment variables

Add your Respan credentials (and your provider key) to .env.local:
.env.local
OPENAI_API_KEY=your_openai_api_key_here

RESPAN_API_KEY=your_respan_api_key_here
RESPAN_BASE_URL=https://api.respan.ai
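A missing or misspelled key often fails silently (traces simply never arrive), so a quick startup check can save debugging time. This is an illustrative sketch — the missingEnvVars helper is not part of Respan or the AI SDK:

```typescript
// Returns the names of any environment variables that are unset or empty.
// Illustrative helper; call it early, e.g. from register() in instrumentation.ts.
function missingEnvVars(names: string[]): string[] {
  return names.filter((name) => !process.env[name]);
}

// Warn at startup if any of the keys from this guide are missing.
const missing = missingEnvVars([
  "OPENAI_API_KEY",
  "RESPAN_API_KEY",
  "RESPAN_BASE_URL",
]);
if (missing.length > 0) {
  console.warn(`Missing environment variables: ${missing.join(", ")}`);
}
```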
Step 4: Enable telemetry in your route

In your API route file (e.g. app/api/chat/route.ts), enable tracing by passing the experimental_telemetry option to the AI SDK call.
app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages, id } = await req.json();
  console.log("chat id", id); // optional: log the chat id for debugging

  // Encode Respan-specific params as a base64-encoded JSON header
  const createHeader = () => {
    return {
      "X-Data-Respan-Params": Buffer.from(
        JSON.stringify({
          prompt_unit_price: 100000,
        })
      ).toString("base64"),
    };
  };

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    experimental_telemetry: {
      isEnabled: true,
      metadata: {
        customer_identifier: "customer_from_metadata",
        prompt_unit_price: 100000,
      },
      headers: createHeader(),
    },
  });

  return result.toDataStreamResponse();
}
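The X-Data-Respan-Params header above is just a base64-encoded JSON object. If you want to sanity-check what a header value will carry, a small encode/decode pair mirroring createHeader looks like this (the helper names are illustrative, not part of any SDK):

```typescript
// Encode an object the same way the route's createHeader does:
// JSON string -> base64, carried in the X-Data-Respan-Params header.
function encodeRespanParams(params: Record<string, unknown>): string {
  return Buffer.from(JSON.stringify(params)).toString("base64");
}

// Decode a header value back into an object, e.g. to inspect it while debugging.
function decodeRespanParams(encoded: string): Record<string, unknown> {
  return JSON.parse(Buffer.from(encoded, "base64").toString("utf8"));
}
```

Round-tripping `{ prompt_unit_price: 100000 }` through these two functions returns the original object.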
Step 5: Run locally and verify traces

  1. Start your dev server:
pnpm dev
  2. Make some chat requests through your application
  3. Check your Respan dashboard to confirm the traces appear
If you hit missing-dependency errors from @vercel/otel, install the packages named in the error message and retry.

What gets traced

With this setup, Respan will capture:
  • AI model calls: requests made via the Vercel AI SDK
  • Token usage: input and output token counts
  • Performance metrics: latency and throughput
  • Errors: failed requests and error details
  • Custom metadata: additional context you attach via telemetry metadata/headers