traceable

langsmith.run_helpers.traceable(func: Callable[[P], R]) → SupportsLangsmithExtra[P, R]
langsmith.run_helpers.traceable(run_type: Literal['tool', 'chain', 'llm', 'retriever', 'embedding', 'prompt', 'parser'] = 'chain', *, name: str | None = None, metadata: Mapping[str, Any] | None = None, tags: List[str] | None = None, client: Client | None = None, reduce_fn: Callable[[Sequence], dict] | None = None, project_name: str | None = None, process_inputs: Callable[[dict], dict] | None = None, process_outputs: Callable[[...], dict] | None = None, _invocation_params_fn: Callable[[dict], dict] | None = None) → Callable[[Callable[[P], R]], SupportsLangsmithExtra[P, R]]

Trace a function with LangSmith.

Parameters:
  • run_type – The type of run (span) to create. Examples: llm, chain, tool, prompt, retriever, etc. Defaults to "chain".

  • name – The name of the run. Defaults to the function name.

  • metadata – The metadata to add to the run. Defaults to None.

  • tags – The tags to add to the run. Defaults to None.

  • client – The client to use for logging the run to LangSmith. Defaults to None, which will use the default client.

  • reduce_fn – A function used to reduce the output of the wrapped function if it returns a generator. Defaults to None, which means the yielded values are logged as a list. Note: if the iterator is never exhausted (e.g., the function returns an infinite generator), this is never called, and the run remains stuck in a pending state. See the reduce_fn example below.

  • project_name – The name of the project to log the run to. Defaults to None, which will use the default project.

  • process_inputs – Custom serialization / processing function applied to the traced inputs before they are logged. Defaults to None. See the redaction example at the end of this section.

  • process_outputs – Custom serialization / processing function applied to the traced outputs before they are logged. Defaults to None. See the redaction example at the end of this section.

Returns:

The decorated function, or a decorator that produces one when traceable is called with configuration arguments.

Return type:

Union[Callable, Callable[[Callable], Callable]]

Note

  • Requires that LANGSMITH_TRACING_V2 be set to 'true' in the environment.
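
For example, tracing can be enabled for the current process before any traced function runs (a minimal sketch; in most deployments you would set this variable in your shell or deployment configuration instead):

import os

# Assumption: set before the first traced call is made in this process.
os.environ["LANGSMITH_TRACING_V2"] = "true"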

Examples

Basic usage:

from langsmith import traceable


@traceable
def my_function(x: float, y: float) -> float:
    return x + y


my_function(5, 6)


import asyncio

import httpx


@traceable
async def my_async_function(query_params: dict) -> dict:
    async with httpx.AsyncClient() as http_client:
        response = await http_client.get(
            "https://api.example.com/data",
            params=query_params,
        )
        return response.json()


asyncio.run(my_async_function({"param": "value"}))

Streaming data with a generator:

from typing import Iterable


@traceable
def my_generator(n: int) -> Iterable:
    for i in range(n):
        yield i


for item in my_generator(5):
    print(item)
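
Aggregating streamed output with reduce_fn (a minimal sketch: once the generator is exhausted, reduce_fn receives the sequence of yielded values and should return a dict to log as the run's outputs; the function and names here are illustrative):

@traceable(reduce_fn=lambda chunks: {"joined": "".join(str(c) for c in chunks)})
def my_reduced_generator(n: int):
    for i in range(n):
        yield i


for item in my_reduced_generator(3):
    print(item)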

Async streaming data:

import asyncio
from typing import AsyncIterable

import httpx


@traceable
async def my_async_generator(query_params: dict) -> AsyncIterable:
    async with httpx.AsyncClient() as http_client:
        response = await http_client.get(
            "https://api.example.com/data",
            params=query_params,
        )
        for item in response.json():
            yield item


async def async_code():
    async for item in my_async_generator({"param": "value"}):
        print(item)


asyncio.run(async_code())

Specifying a run type and name:

@traceable(name="CustomName", run_type="tool")
def another_function(a: float, b: float) -> float:
    return a * b


another_function(5, 6)

Logging with custom metadata and tags:

@traceable(
    metadata={"version": "1.0", "author": "John Doe"}, tags=["beta", "test"]
)
def tagged_function(x):
    return x**2


tagged_function(5)

Specifying a custom client and project name:

from langsmith import Client

custom_client = Client(api_key="your_api_key")


@traceable(client=custom_client, project_name="My Special Project")
def project_specific_function(data):
    return data


project_specific_function({"data": "to process"})

Manually passing langsmith_extra:

@traceable
def manual_extra_function(x):
    return x**2


manual_extra_function(5, langsmith_extra={"metadata": {"version": "1.0"}})
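
Redacting sensitive inputs and trimming large outputs with process_inputs and process_outputs (a minimal sketch; the helper and function names are illustrative, and each hook takes a dict and returns the dict that is actually logged, so only the traced data changes, not what the wrapped function receives or returns):

def redact_inputs(inputs: dict) -> dict:
    # Hide secrets before the inputs are sent to LangSmith.
    return {**inputs, "api_key": "REDACTED"}


def summarize_outputs(outputs: dict) -> dict:
    # Log a compact summary instead of the full payload.
    return {"num_items": len(outputs.get("items", []))}


@traceable(process_inputs=redact_inputs, process_outputs=summarize_outputs)
def fetch_data(api_key: str, query: str) -> dict:
    return {"items": [query, query.upper()]}


fetch_data("secret-key", "hello")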