TypeScript Integrations

LangChain.js

To log traces with LangChain.js, you'll need to set a few environment variables.

export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_TRACING_V2=true

And optionally:

export LANGCHAIN_PROJECT=<some-custom-session-name>

After that, you can use LangChain as you normally would and all traces will get logged to LangSmith!
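
For example, here's a minimal sketch of what "using LangChain as you normally would" might look like (the @langchain/openai package and model name below are assumptions; any chain or model invocation is traced the same way):

import { ChatOpenAI } from "@langchain/openai";

// With the environment variables above set, this call is traced to LangSmith automatically.
const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo" });

await model.invoke("Hello, world!");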

OpenAI SDK

The easiest way to trace calls from the OpenAI SDK with LangSmith is to use the wrapOpenAI wrapper function, available in LangSmith 0.1.3 and up.

To use it, you first need to set your LangSmith API key:

export LANGCHAIN_API_KEY=<your-api-key>

Next, you will need to install the LangSmith SDK and the OpenAI SDK:

npm install langsmith openai

After that, initialize your OpenAI client and wrap it with the wrapOpenAI method to enable tracing for the completions and chat completions methods:

import { OpenAI } from "openai";
import { wrapOpenAI } from "langsmith/wrappers";

const openai = wrapOpenAI(new OpenAI());

This new client takes the exact same arguments and has the same return types as the original client, but will log everything to LangSmith.

await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ content: "Hi there!", role: "user" }],
});

{
  id: 'chatcmpl-8sOWEOYVyehDlyPcBiaDtTxWvr9v6',
  object: 'chat.completion',
  created: 1707974654,
  model: 'gpt-3.5-turbo-0613',
  choices: [
    {
      index: 0,
      message: { role: 'assistant', content: 'Hello! How can I help you today?' },
      logprobs: null,
      finish_reason: 'stop'
    }
  ],
  usage: { prompt_tokens: 10, completion_tokens: 9, total_tokens: 19 },
  system_fingerprint: null
}

Alternatively, you can use the traceable function to wrap the client methods you want to use:

import { OpenAI } from "openai";
import { traceable } from "langsmith/traceable";

const openai = new OpenAI();

const createCompletion = traceable(
  openai.chat.completions.create.bind(openai.chat.completions),
  { name: "OpenAI Chat Completion", run_type: "llm" }
);

await createCompletion({
  model: "gpt-3.5-turbo",
  messages: [{ content: "Hi there!", role: "user" }],
});

Note the use of .bind to preserve the function's context. The run_type field in the config object passed as the second argument marks the function as an LLM call and enables token usage tracking for OpenAI.

This also works for streaming:

const stream = await createCompletion({
  model: "gpt-3.5-turbo",
  stream: true,
  messages: [{ content: "Hi there!", role: "user" }],
});

for await (const chunk of stream) {
  console.log(chunk);
}

Nested tracing

Often, you will use the OpenAI client inside other functions or as part of a longer sequence. You can automatically get nested traces by using this wrapped method within other functions wrapped with traceable.

const nestedTrace = traceable(async (text: string) => {
  const completion = await createCompletion({
    model: "gpt-3.5-turbo",
    messages: [{ content: text, role: "user" }],
  });
  return completion;
});

await nestedTrace("Why is the sky blue?");

{
  "id": "chatcmpl-8sPToJQLLVepJvyeTfzZMOMVIKjMo",
  "object": "chat.completion",
  "created": 1707978348,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The sky appears blue because of a phenomenon known as Rayleigh scattering. The Earth's atmosphere is composed of tiny molecules, such as nitrogen and oxygen, which are much smaller than the wavelength of visible light. When sunlight interacts with these molecules, it gets scattered in all directions. However, shorter wavelengths of light (blue and violet) are scattered more compared to longer wavelengths (red, orange, and yellow). \n\nAs a result, when sunlight passes through the Earth's atmosphere, the blue and violet wavelengths are scattered in all directions, making the sky appear blue. This scattering of shorter wavelengths is also responsible for the vibrant colors observed during sunrise and sunset, when the sunlight has to pass through a thicker portion of the atmosphere, causing the longer wavelengths to dominate the scattered light."
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 154,
    "total_tokens": 167
  },
  "system_fingerprint": null
}

tip

Click here to see an example LangSmith trace of the above.

Next.js

You can use the traceable wrapper function in Next.js apps to wrap arbitrary functions much like in the example above.

One neat trick you can use for Next.js and other similar server frameworks is to wrap the entire exported handler for a route to group traces for any sub-runs. Here's an example:

import { NextRequest, NextResponse } from "next/server";

import { OpenAI } from "openai";
import { traceable } from "langsmith/traceable";

export const runtime = "edge";

const handler = traceable(
  async function () {
    const openai = new OpenAI();
    const createCompletion = traceable(
      openai.chat.completions.create.bind(openai.chat.completions),
      { name: "OpenAI Chat Completion", run_type: "llm" }
    );

    const completion = await createCompletion({
      model: "gpt-3.5-turbo",
      messages: [{ content: "Why is the sky blue?", role: "user" }],
    });

    const response1 = completion.choices[0].message.content;

    const completion2 = await createCompletion({
      model: "gpt-3.5-turbo",
      messages: [
        { content: "Why is the sky blue?", role: "user" },
        { content: response1, role: "assistant" },
        { content: "Cool thank you!", role: "user" },
      ],
    });

    const response2 = completion2.choices[0].message.content;

    return {
      text: response2,
    };
  },
  {
    name: "Simple Next.js handler",
  }
);

export async function POST(req: NextRequest) {
  const result = await handler();
  return NextResponse.json(result);
}

The two OpenAI calls within the handler will be traced with appropriate inputs, outputs, and token usage information.
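
For example, assuming the handler above lives at app/api/chat/route.ts (the path is an assumption; it depends on your project), you could invoke it from client code like this:

// Hypothetical client-side call to the route defined above.
const res = await fetch("/api/chat", { method: "POST" });
const { text } = await res.json();
console.log(text);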

tip

Click here to see an example LangSmith trace of the above.

Vercel AI SDK

The Vercel AI SDK contains integrations with a variety of model providers. Here's an example of how you can trace outputs in a Next.js handler:

import { traceable } from "langsmith/traceable";
import { OpenAIStream, StreamingTextResponse } from "ai";

// Note: There are no types for the Mistral API client yet.
import MistralClient from "@mistralai/mistralai";

const client = new MistralClient(process.env.MISTRAL_API_KEY || "");

export async function POST(req: Request) {
  // Extract the `messages` from the body of the request
  const { messages } = await req.json();

  const mistralChatStream = traceable(client.chatStream.bind(client), {
    name: "Mistral Stream",
    run_type: "llm",
  });

  const response = await mistralChatStream({
    model: "mistral-tiny",
    maxTokens: 1000,
    messages,
  });

  // Convert the response into a friendly text-stream. The Mistral client responses are
  // compatible with the Vercel AI SDK OpenAIStream adapter.
  const stream = OpenAIStream(response as any);

  // Respond with the stream
  return new StreamingTextResponse(stream);
}
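
For reference, here's a minimal sketch of a client component that could consume the streaming handler above (the useChat hook from ai/react posts to /api/chat by default; the component name and route path are assumptions):

"use client";

import { useChat } from "ai/react";

export default function Chat() {
  // useChat streams the assistant's reply from the POST handler into `messages`.
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((m) => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <input value={input} onChange={handleInputChange} />
    </form>
  );
}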

See the AI SDK docs for more examples.

Arbitrary SDKs

You can use the generic wrapSDK method to add tracing for arbitrary SDKs.

Note that this will trace ALL methods of the SDK, not just chat completion endpoints. If the SDK you are wrapping has methods unrelated to LLM calls, we recommend using the wrapped client only for the LLM calls.

Here's an example using the Anthropic SDK:

import { wrapSDK } from "langsmith/wrappers";
import { Anthropic } from "@anthropic-ai/sdk";

const originalSDK = new Anthropic();
const sdkWithTracing = wrapSDK(originalSDK);

const response = await sdkWithTracing.messages.create({
  messages: [
    {
      role: "user",
      content: `What is 1 + 1? Respond only with "2" and nothing else.`,
    },
  ],
  model: "claude-3-sonnet-20240229",
  max_tokens: 1024,
});

tip

Click here to see an example LangSmith trace of the above.

