Trace with the Vercel AI SDK (JS/TS only)
You can use LangSmith to trace runs from the Vercel AI SDK with our built-in AISDKExporter
OpenTelemetry trace exporter. This guide will walk through an example.
The AISDKExporter class is only available in langsmith JS SDK version >=0.2.1.
0. Installation
Install the Vercel AI SDK. We use their OpenAI integration for the code snippets below, but you can use any of their other options as well.
yarn add ai @ai-sdk/openai zod
npm install ai @ai-sdk/openai zod
pnpm add ai @ai-sdk/openai zod
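You will also need the langsmith package itself; as noted above, AISDKExporter requires version 0.2.1 or newer. For example, with npm:
npm install langsmith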
1. Configure your environment
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export OPENAI_API_KEY=<your-openai-api-key>
2. Log a trace
Next.js
First, create an instrumentation.js file in your project root. You can learn more about how to set up OpenTelemetry instrumentation within your Next.js app here.
import { registerOTel } from "@vercel/otel";
import { AISDKExporter } from "langsmith/vercel";
export function register() {
registerOTel({
serviceName: "langsmith-vercel-ai-sdk-example",
traceExporter: new AISDKExporter(),
});
}
Afterwards, add the experimental_telemetry argument to the AI SDK calls that you want to trace. For convenience, we've included the AISDKExporter.getSettings() method, which appends additional metadata for LangSmith.
import { AISDKExporter } from "langsmith/vercel";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
await streamText({
model: openai("gpt-4o-mini"),
prompt: "Write a vegetarian lasagna recipe for 4 people.",
experimental_telemetry: AISDKExporter.getSettings(),
});
You should see a trace in your LangSmith dashboard like this one.
You can also trace runs with tool calls:
import { AISDKExporter } from "langsmith/vercel";
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
await generateText({
model: openai("gpt-4o-mini"),
messages: [
{
role: "user",
content: "What are my orders and where are they? My user ID is 123",
},
],
tools: {
listOrders: tool({
description: "list all orders",
parameters: z.object({ userId: z.string() }),
execute: async ({ userId }) =>
`User ${userId} has the following orders: 1`,
}),
viewTrackingInformation: tool({
description: "view tracking information for a specific order",
parameters: z.object({ orderId: z.string() }),
execute: async ({ orderId }) =>
`Here is the tracking information for ${orderId}`,
}),
},
experimental_telemetry: AISDKExporter.getSettings(),
maxSteps: 10,
});
This results in a trace like this one.
Node.js
First, learn more about how to set up OpenTelemetry instrumentation within your Node.js app here.
In particular, you will need to ensure that the OTEL setup and configuration run before your application logic. Node's --require and --import flags are commonly used for this.
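For example, assuming your OTEL setup lives in instrumentation.mjs and your app entrypoint is index.mjs (both file names are placeholders), you could preload it like this:
node --import ./instrumentation.mjs ./index.mjs
# or, for a CommonJS build:
node --require ./instrumentation.cjs ./index.cjs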
Add the AISDKExporter as the trace exporter in your OpenTelemetry setup.
import { AISDKExporter } from "langsmith/vercel";
import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
const sdk = new NodeSDK({
traceExporter: new AISDKExporter(),
instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();
Afterwards, add the experimental_telemetry
argument to your AI SDK calls that you want to trace.
Do not forget to call await sdk.shutdown()
before your application shuts down in order to flush any remaining traces to LangSmith.
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { AISDKExporter } from "langsmith/vercel";
const result = await generateText({
model: openai("gpt-4o-mini"),
prompt: "Write a vegetarian lasagna recipe for 4 people.",
experimental_telemetry: AISDKExporter.getSettings(),
});
await sdk.shutdown();
Sentry
If you're using Sentry, you can attach the LangSmith trace exporter to Sentry's default OpenTelemetry instrumentation as follows:
import * as Sentry from "@sentry/node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { AISDKExporter } from "langsmith/vercel";
const client = Sentry.init({
dsn: "[Sentry DSN]",
tracesSampleRate: 1.0,
});
client?.traceProvider?.addSpanProcessor(
new BatchSpanProcessor(new AISDKExporter())
);
Alternatively, you can use your existing OpenTelemetry setup by setting skipOpenTelemetrySetup: true
in your Sentry.init()
call. In this case, we recommend following the official Sentry OpenTelemetry Setup documentation.
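A minimal sketch of that approach, assuming you manage your own NodeSDK instance (the Sentry-specific span processor and propagator described in Sentry's OpenTelemetry Setup docs are omitted for brevity):
import * as Sentry from "@sentry/node";
import { NodeSDK } from "@opentelemetry/sdk-node";
import { AISDKExporter } from "langsmith/vercel";
// Tell Sentry not to set up OpenTelemetry itself...
Sentry.init({
  dsn: "[Sentry DSN]",
  tracesSampleRate: 1.0,
  skipOpenTelemetrySetup: true,
});
// ...and register the LangSmith exporter on the OTEL SDK you manage yourself.
const sdk = new NodeSDK({
  traceExporter: new AISDKExporter(),
});
sdk.start();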
Cloudflare Workers
To instrument AI SDK calls within Cloudflare Workers, you can use the AISDKExporter with @microlabs/otel-cf-workers. See the documentation for otel-cf-workers here.
import { Client } from "langsmith";
import { instrument } from "@microlabs/otel-cf-workers";
import { AISDKExporter } from "langsmith/vercel";
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";
interface Env {
OPENAI_API_KEY: string;
LANGSMITH_TRACING_V2: string;
LANGSMITH_ENDPOINT: string;
LANGSMITH_API_KEY: string;
}
const handler = {
async fetch(request, env): Promise<Response> {
const openai = createOpenAI({ apiKey: env.OPENAI_API_KEY });
const model = openai("gpt-4o-mini");
const response = await generateText({
model,
prompt: "Tell me a joke",
experimental_telemetry: AISDKExporter.getSettings({
// As `process.env.LANGSMITH_TRACING_V2` is undefined in Cloudflare Workers,
// we need to check the environment variable directly.
isEnabled: env.LANGSMITH_TRACING_V2 === "true",
}),
});
return new Response(response.text);
},
} satisfies ExportedHandler<Env>;
export default instrument<Env, unknown, unknown>(handler, (env) => ({
exporter: new AISDKExporter({
client: new Client({
// Batching is handled by OTEL by default, we need to
// disable LangSmith batch tracing to avoid losing traces
autoBatchTracing: false,
apiKey: env.LANGSMITH_API_KEY,
apiUrl: env.LANGSMITH_ENDPOINT,
}),
}),
service: { name: "ai-sdk-service" },
}));
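The variables declared in the Env interface above come from your Worker configuration rather than process.env. If you deploy with Wrangler (an assumption; adjust to your own tooling), you can store the sensitive values as secrets and put non-secret values such as LANGSMITH_ENDPOINT and LANGSMITH_TRACING_V2 in the [vars] section of your wrangler.toml:
npx wrangler secret put LANGSMITH_API_KEY
npx wrangler secret put OPENAI_API_KEY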
You should see a trace in your LangSmith dashboard like this one.
Customize run name
You can customize the run name by passing the runName
argument to the AISDKExporter.getSettings()
method.
import { AISDKExporter } from "langsmith/vercel";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
await generateText({
model: openai("gpt-4o-mini"),
prompt: "Write a vegetarian lasagna recipe for 4 people.",
experimental_telemetry: AISDKExporter.getSettings({
runName: "my-custom-run-name",
}),
});
Customize run ID
You can customize the run ID by passing the runId
argument to the AISDKExporter.getSettings()
method. This is especially useful if you want to know the run ID before the run has been completed.
import { AISDKExporter } from "langsmith/vercel";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
await generateText({
model: openai("gpt-4o-mini"),
prompt: "Write a vegetarian lasagna recipe for 4 people.",
experimental_telemetry: AISDKExporter.getSettings({
runId: "my-custom-run-id",
}),
});
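Because you supply the ID yourself, you can generate it ahead of time and keep a reference to it, for example to look the run up in LangSmith while the call is still in flight. A short sketch, assuming LangSmith run IDs are valid UUIDs and using the uuid package to generate one:
import { v4 as uuidv4 } from "uuid";
import { AISDKExporter } from "langsmith/vercel";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
// Assumption: run IDs are UUIDs, so we generate one up front and reuse it.
const runId = uuidv4();
console.log(`LangSmith run ID: ${runId}`);
await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Write a vegetarian lasagna recipe for 4 people.",
  experimental_telemetry: AISDKExporter.getSettings({ runId }),
});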
Nesting runs
You can also nest runs within other traced functions to create a hierarchy of associated runs.
Here's an example using the traceable
method:
import { AISDKExporter } from "langsmith/vercel";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
import { traceable } from "langsmith/traceable";
const wrappedGenerateText = traceable(
async (content: string) => {
const { text } = await generateText({
model: openai("gpt-4o-mini"),
messages: [{ role: "user", content }],
experimental_telemetry: AISDKExporter.getSettings(),
});
const reverseText = traceable(
async (text: string) => {
return text.split("").reverse().join("");
},
{ name: "reverseText" }
);
const reversedText = await reverseText(text);
return { text, reversedText };
},
{ name: "parentTraceable" }
);
const result = await wrappedGenerateText(
"What color is the sky? Respond with one word."
);
The resulting trace will look like this one.
Custom LangSmith client
You can also pass a LangSmith client instance into the AISDKExporter
constructor:
import { AISDKExporter } from "langsmith/vercel";
import { Client } from "langsmith";
import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
const langsmithClient = new Client({});
const sdk = new NodeSDK({
traceExporter: new AISDKExporter({ client: langsmithClient }),
instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();
await generateText({
model: openai("gpt-4o-mini"),
prompt: "Write a vegetarian lasagna recipe for 4 people.",
experimental_telemetry: AISDKExporter.getSettings(),
});
Debugging Exporter
You can enable debug logs for the AISDKExporter
by passing the debug
argument to the constructor.
import { AISDKExporter } from "langsmith/vercel";
const traceExporter = new AISDKExporter({ debug: true });
Alternatively, you can set the OTEL_LOG_LEVEL=DEBUG
environment variable to enable debug logs for the exporter as well as the rest of the OpenTelemetry stack.
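For example, in your shell:
export OTEL_LOG_LEVEL=DEBUG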
Adding metadata
You can add metadata to your traces to help organize and filter them in the LangSmith UI:
import { AISDKExporter } from "langsmith/vercel";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
await generateText({
model: openai("gpt-4o-mini"),
prompt: "Write a vegetarian lasagna recipe for 4 people.",
experimental_telemetry: AISDKExporter.getSettings({
metadata: { userId: "123", language: "english" },
}),
});
Metadata will be visible in your LangSmith dashboard and can be used to filter and search for specific traces.
wrapAISDKModel (deprecated)
The wrapAISDKModel
method is deprecated and will be removed in a future release.
The wrapAISDKModel method wraps a Vercel AI SDK model and intercepts model invocations to send traces to LangSmith. This method is useful if you are using an older version of LangSmith or if you are using streamUI / Vercel AI RSC, which currently does not support experimental_telemetry.
import { wrapAISDKModel } from "langsmith/wrappers/vercel";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";
const vercelModel = openai("gpt-4o-mini");
const modelWithTracing = wrapAISDKModel(vercelModel);
await generateText({
model: modelWithTracing,
prompt: "Write a vegetarian lasagna recipe for 4 people.",
});