AsyncClient#
- class langsmith.async_client.AsyncClient(api_url: str | None = None, api_key: str | None = None, timeout_ms: int | Tuple[int | None, int | None, int | None, int | None] | None = None, retry_config: Mapping[str, Any] | None = None, web_url: str | None = None)[source]#
Async Client for interacting with the LangSmith API.
Initialize the async client.
Methods
__init__([api_url, api_key, timeout_ms, ...]) – Initialize the async client.
aclose() – Close the async client.
create_dataset(dataset_name, **kwargs) – Create a dataset.
create_example(inputs[, outputs, ...]) – Create an example.
create_feedback(run_id, key[, score, value, ...]) – Create feedback for a run.
create_feedback_from_token(token_or_url[, ...]) – Create feedback from a presigned token or URL.
create_presigned_feedback_token(run_id, ...) – Create a pre-signed URL to send feedback data to.
create_project(project_name, **kwargs) – Create a project.
create_run(name, inputs, run_type, *[, ...]) – Create a run.
delete_dataset(dataset_id) – Delete a dataset.
delete_project(*[, project_name, project_id]) – Delete a project from LangSmith.
index_dataset(*, dataset_id[, tag]) – Enable dataset indexing.
list_datasets(**kwargs) – List datasets.
list_examples(*[, dataset_id, dataset_name]) – List examples.
list_feedback(*[, run_ids, feedback_key, ...]) – List feedback.
list_runs(*[, project_id, project_name, ...]) – List runs from the LangSmith API.
read_dataset([dataset_name, dataset_id]) – Read a dataset.
read_example(example_id) – Read an example.
read_feedback(feedback_id) – Read feedback.
read_project([project_name, project_id]) – Read a project.
read_run(run_id) – Read a run.
read_run_shared_link(run_id) – Retrieve the shared link for a specific run asynchronously.
run_is_shared(run_id) – Get share state for a run asynchronously.
share_run(run_id, *[, share_id]) – Get a share link for a run asynchronously.
similar_examples(inputs, /, *, limit, dataset_id) – Retrieve the dataset examples whose inputs best match the current inputs.
update_run(run_id, **kwargs) – Update a run.
- Parameters:
api_url (Optional[str]) –
api_key (Optional[str]) –
timeout_ms (Optional[Union[int, Tuple[Optional[int], Optional[int], Optional[int], Optional[int]]]]) –
retry_config (Optional[Mapping[str, Any]]) –
web_url (Optional[str]) –
- __init__(api_url: str | None = None, api_key: str | None = None, timeout_ms: int | Tuple[int | None, int | None, int | None, int | None] | None = None, retry_config: Mapping[str, Any] | None = None, web_url: str | None = None)[source]#
Initialize the async client.
- Parameters:
api_url (str | None) –
api_key (str | None) –
timeout_ms (int | Tuple[int | None, int | None, int | None, int | None] | None) –
retry_config (Mapping[str, Any] | None) –
web_url (str | None) –
- async create_dataset(dataset_name: str, **kwargs: Any) Dataset [source]#
Create a dataset.
- Parameters:
dataset_name (str) –
kwargs (Any) –
- Return type:
Dataset
- async create_example(inputs: Dict[str, Any], outputs: Dict[str, Any] | None = None, dataset_id: UUID | str | None = None, dataset_name: str | None = None, **kwargs: Any) Example [source]#
Create an example.
- Parameters:
inputs (Dict[str, Any]) –
outputs (Dict[str, Any] | None) –
dataset_id (UUID | str | None) –
dataset_name (str | None) –
kwargs (Any) –
- Return type:
Example
- async create_feedback(run_id: UUID | str | None, key: str, score: float | None = None, value: Any | None = None, comment: str | None = None, **kwargs: Any) Feedback [source]#
Create feedback for a run.
- Parameters:
run_id (Optional[ls_client.ID_TYPE]) – The ID of the run to provide feedback for. Can be None for project-level feedback.
key (str) – The name of the metric or aspect this feedback is about.
score (Optional[float]) – The score to rate this run on the metric or aspect.
value (Optional[Any]) – The display value or non-numeric value for this feedback.
comment (Optional[str]) – A comment about this feedback.
**kwargs – Additional keyword arguments to include in the feedback data.
- Returns:
The created feedback object.
- Return type:
ls_schemas.Feedback
- Raises:
httpx.HTTPStatusError – If the API request fails.
- async create_feedback_from_token(token_or_url: str | UUID, score: float | int | bool | None = None, *, value: float | int | bool | str | dict | None = None, correction: dict | None = None, comment: str | None = None, metadata: dict | None = None) None [source]#
Create feedback from a presigned token or URL.
- Parameters:
token_or_url (Union[str, uuid.UUID]) – The token or URL from which to create feedback.
score (Union[float, int, bool, None], optional) – The score of the feedback. Defaults to None.
value (Union[float, int, bool, str, dict, None], optional) – The value of the feedback. Defaults to None.
correction (Union[dict, None], optional) – The correction of the feedback. Defaults to None.
comment (Union[str, None], optional) – The comment of the feedback. Defaults to None.
metadata (Optional[dict], optional) – Additional metadata for the feedback. Defaults to None.
- Raises:
ValueError – If the source API URL is invalid.
- Returns:
This method does not return anything.
- Return type:
None
- async create_presigned_feedback_token(run_id: UUID | str, feedback_key: str, *, expiration: datetime | timedelta | None = None, feedback_config: FeedbackConfig | None = None, feedback_id: UUID | str | None = None) FeedbackIngestToken [source]#
Create a pre-signed URL to send feedback data to.
This is useful for giving browser-based clients a way to upload feedback data directly to LangSmith without accessing the API key.
- Parameters:
run_id (UUID | str) –
feedback_key (str) –
expiration (datetime | timedelta | None) – The expiration time of the pre-signed URL. Either a datetime or a timedelta offset from now. Default to 3 hours.
feedback_config (FeedbackConfig | None) – FeedbackConfig or None. If creating a feedback_key for the first time, this defines how the metric should be interpreted, such as a continuous score (w/ optional bounds), or distribution over categorical values.
feedback_id (UUID | str | None) – The ID of the feedback to create. If not provided, a new feedback will be created.
- Returns:
The pre-signed URL for uploading feedback data.
- Return type:
FeedbackIngestToken
- async create_project(project_name: str, **kwargs: Any) TracerSession [source]#
Create a project.
- Parameters:
project_name (str) –
kwargs (Any) –
- Return type:
TracerSession
- async create_run(name: str, inputs: Dict[str, Any], run_type: str, *, project_name: str | None = None, revision_id: UUID | str | None = None, **kwargs: Any) None [source]#
Create a run.
- Parameters:
name (str) –
inputs (Dict[str, Any]) –
run_type (str) –
project_name (str | None) –
revision_id (UUID | str | None) –
kwargs (Any) –
- Return type:
None
- async delete_dataset(dataset_id: UUID | str) None [source]#
Delete a dataset.
- Parameters:
dataset_id (UUID | str) –
- Return type:
None
- async delete_project(*, project_name: str | None = None, project_id: str | None = None) None [source]#
Delete a project from LangSmith.
- Parameters:
project_name (str or None, default=None) – The name of the project to delete.
project_id (str or None, default=None) – The ID of the project to delete.
- Return type:
None
- async index_dataset(*, dataset_id: UUID | str, tag: str = 'latest', **kwargs: Any) None [source]#
Enable dataset indexing. Examples are indexed by their inputs.
This enables searching for similar examples by inputs with client.similar_examples().
- Parameters:
dataset_id (UUID) – The ID of the dataset to index.
tag (str, optional) – The version of the dataset to index. If ‘latest’ then any updates to the dataset (additions, updates, deletions of examples) will be reflected in the index.
kwargs (Any) –
- Returns:
None
- Raises:
requests.HTTPError –
- Return type:
None
- async list_datasets(**kwargs: Any) AsyncIterator[Dataset] [source]#
List datasets.
- Parameters:
kwargs (Any) –
- Return type:
AsyncIterator[Dataset]
- async list_examples(*, dataset_id: UUID | str | None = None, dataset_name: str | None = None, **kwargs: Any) AsyncIterator[Example] [source]#
List examples.
- Parameters:
dataset_id (UUID | str | None) –
dataset_name (str | None) –
kwargs (Any) –
- Return type:
AsyncIterator[Example]
- async list_feedback(*, run_ids: Sequence[UUID | str] | None = None, feedback_key: Sequence[str] | None = None, feedback_source_type: Sequence[FeedbackSourceType] | None = None, limit: int | None = None, **kwargs: Any) AsyncIterator[Feedback] [source]#
List feedback.
- Parameters:
run_ids (Sequence[UUID | str] | None) –
feedback_key (Sequence[str] | None) –
feedback_source_type (Sequence[FeedbackSourceType] | None) –
limit (int | None) –
kwargs (Any) –
- Return type:
AsyncIterator[Feedback]
- async list_runs(*, project_id: UUID | str | Sequence[UUID | str] | None = None, project_name: Sequence[str] | str | None = None, run_type: str | None = None, trace_id: UUID | str | None = None, reference_example_id: UUID | str | None = None, query: str | None = None, filter: str | None = None, trace_filter: str | None = None, tree_filter: str | None = None, is_root: bool | None = None, parent_run_id: UUID | str | None = None, start_time: datetime | None = None, error: bool | None = None, run_ids: Sequence[UUID | str] | None = None, select: Sequence[str] | None = None, limit: int | None = None, **kwargs: Any) AsyncIterator[Run] [source]#
List runs from the LangSmith API.
- Parameters:
project_id (UUID or None, default=None) – The ID(s) of the project to filter by.
project_name (str or None, default=None) – The name(s) of the project to filter by.
run_type (str or None, default=None) – The type of the runs to filter by.
trace_id (UUID or None, default=None) – The ID of the trace to filter by.
reference_example_id (UUID or None, default=None) – The ID of the reference example to filter by.
query (str or None, default=None) – The query string to filter by.
filter (str or None, default=None) – The filter string to filter by.
trace_filter (str or None, default=None) – Filter to apply to the ROOT run in the trace tree. This is meant to be used in conjunction with the regular filter parameter to let you filter runs by attributes of the root run within a trace.
tree_filter (str or None, default=None) – Filter to apply to OTHER runs in the trace tree, including sibling and child runs. This is meant to be used in conjunction with the regular filter parameter to let you filter runs by attributes of any run within a trace.
is_root (bool or None, default=None) – Whether to filter by root runs.
parent_run_id (UUID or None, default=None) – The ID of the parent run to filter by.
start_time (datetime or None, default=None) – The start time to filter by.
error (bool or None, default=None) – Whether to filter by error status.
run_ids (List[str or UUID] or None, default=None) – The IDs of the runs to filter by.
limit (int or None, default=None) – The maximum number of runs to return.
**kwargs (Any) – Additional keyword arguments.
select (Sequence[str] | None) –
- Yields:
Run – The runs.
Examples:
List root traces in a project:

```python
project_runs = client.list_runs(project_name="<your_project>")
```

List LLM and Chat runs in the last 24 hours:

```python
todays_llm_runs = client.list_runs(
    project_name="<your_project>",
    start_time=datetime.now() - timedelta(days=1),
    run_type="llm",
)
```

List root runs in a project:

```python
root_runs = client.list_runs(project_name="<your_project>", is_root=1)
```

List runs without errors:

```python
correct_runs = client.list_runs(project_name="<your_project>", error=False)
```

List runs and only return their inputs/outputs (to speed up the query):

```python
input_output_runs = client.list_runs(
    project_name="<your_project>", select=["inputs", "outputs"]
)
```

List runs by run ID:

```python
run_ids = [
    "a36092d2-4ad5-4fb4-9c0d-0dba9a2ed836",
    "9398e6be-964f-4aa4-8ae9-ad78cd4b7074",
]
selected_runs = client.list_runs(id=run_ids)
```

List all "chain" type runs that took more than 10 seconds and had total_tokens greater than 5000:

```python
chain_runs = client.list_runs(
    project_name="<your_project>",
    filter='and(eq(run_type, "chain"), gt(latency, 10), gt(total_tokens, 5000))',
)
```

List all runs called "extractor" whose root of the trace was assigned feedback "user_score" score of 1:

```python
good_extractor_runs = client.list_runs(
    project_name="<your_project>",
    filter='eq(name, "extractor")',
    trace_filter='and(eq(feedback_key, "user_score"), eq(feedback_score, 1))',
)
```

List all runs that started after a specific timestamp and either have "error" not equal to null or a "Correctness" feedback score equal to 0:

```python
complex_runs = client.list_runs(
    project_name="<your_project>",
    filter='and(gt(start_time, "2023-07-15T12:34:56Z"), or(neq(error, null), and(eq(feedback_key, "Correctness"), eq(feedback_score, 0.0))))',
)
```

List all runs where tags include "experimental" or "beta" and latency is greater than 2 seconds:

```python
tagged_runs = client.list_runs(
    project_name="<your_project>",
    filter='and(or(has(tags, "experimental"), has(tags, "beta")), gt(latency, 2))',
)
```
- Return type:
AsyncIterator[Run]
- async read_dataset(dataset_name: str | None = None, dataset_id: UUID | str | None = None) Dataset [source]#
Read a dataset.
- Parameters:
dataset_name (str | None) –
dataset_id (UUID | str | None) –
- Return type:
Dataset
- async read_example(example_id: UUID | str) Example [source]#
Read an example.
- Parameters:
example_id (UUID | str) –
- Return type:
Example
- async read_feedback(feedback_id: UUID | str) Feedback [source]#
Read feedback.
- Parameters:
feedback_id (UUID | str) –
- Return type:
Feedback
- async read_project(project_name: str | None = None, project_id: UUID | str | None = None) TracerSession [source]#
Read a project.
- Parameters:
project_name (str | None) –
project_id (UUID | str | None) –
- Return type:
TracerSession
- async read_run(run_id: UUID | str) Run [source]#
Read a run.
- Parameters:
run_id (UUID | str) –
- Return type:
Run
- async read_run_shared_link(run_id: UUID | str) str | None [source]#
Retrieve the shared link for a specific run asynchronously.
- Parameters:
run_id (ID_TYPE) – The ID of the run.
- Returns:
The shared link for the run, or None if the link is not available.
- Return type:
Optional[str]
- Raises:
httpx.HTTPStatusError – If the API request fails.
- async run_is_shared(run_id: UUID | str) bool [source]#
Get share state for a run asynchronously.
- Parameters:
run_id (UUID | str) –
- Return type:
bool
- async share_run(run_id: UUID | str, *, share_id: UUID | str | None = None) str [source]#
Get a share link for a run asynchronously.
- Parameters:
run_id (ID_TYPE) – The ID of the run to share.
share_id (Optional[ID_TYPE], optional) – Custom share ID. If not provided, a random UUID will be generated.
- Returns:
The URL of the shared run.
- Return type:
str
- Raises:
httpx.HTTPStatusError – If the API request fails.
- async similar_examples(inputs: dict, /, *, limit: int, dataset_id: UUID | str, filter: str | None = None, **kwargs: Any) List[ExampleSearch] [source]#
Retrieve the dataset examples whose inputs best match the current inputs.
Note: Must have few-shot indexing enabled for the dataset. See client.index_dataset().
- Parameters:
inputs (dict) – The inputs to use as a search query. Must match the dataset input schema. Must be JSON serializable.
limit (int) – The maximum number of examples to return.
dataset_id (str or UUID) – The ID of the dataset to search over.
filter (str, optional) – A filter string to apply to the search results. Uses the same syntax as the filter parameter in list_runs(). Only a subset of operations are supported. Defaults to None.
kwargs (Any) – Additional keyword args to pass as part of request body.
- Returns:
List of ExampleSearch objects.
- Return type:
List[ExampleSearch]
Example
```python
from langsmith import Client

client = Client()
await client.similar_examples(
    {"question": "When would i use the runnable generator"},
    limit=3,
    dataset_id="...",
)
```

[
    ExampleSearch(
        inputs={'question': 'How do I cache a Chat model? What caches can I use?'},
        outputs={'answer': 'You can use LangChain\'s caching layer for Chat Models. This can save you money by reducing the number of API calls you make to the LLM provider, if you\'re often requesting the same completion multiple times, and speed up your application.\n\n```python\n\nfrom langchain.cache import InMemoryCache\nlangchain.llm_cache = InMemoryCache()\n\n# The first time, it is not yet in cache, so it should take longer\nllm.predict(\'Tell me a joke\')\n\n```\n\nYou can also use SQLite Cache which uses a SQLite database:\n\n```python\n rm .langchain.db\n\nfrom langchain.cache import SQLiteCache\nlangchain.llm_cache = SQLiteCache(database_path=".langchain.db")\n\n# The first time, it is not yet in cache, so it should take longer\nllm.predict(\'Tell me a joke\') \n```\n'},
        metadata=None,
        id=UUID('b2ddd1c4-dff6-49ae-8544-f48e39053398'),
        dataset_id=UUID('01b6ce0f-bfb6-4f48-bbb8-f19272135d40')
    ),
    ExampleSearch(
        inputs={'question': "What's a runnable lambda?"},
        outputs={'answer': "A runnable lambda is an object that implements LangChain's `Runnable` interface and runs a callable (i.e., a function). Note the function must accept a single argument."},
        metadata=None,
        id=UUID('f94104a7-2434-4ba7-8293-6a283f4860b4'),
        dataset_id=UUID('01b6ce0f-bfb6-4f48-bbb8-f19272135d40')
    ),
    ExampleSearch(
        inputs={'question': 'Show me how to use RecursiveURLLoader'},
        outputs={'answer': 'The RecursiveURLLoader comes from the langchain.document_loaders.recursive_url_loader module. Here\'s an example of how to use it:\n\n```python\nfrom langchain.document_loaders.recursive_url_loader import RecursiveUrlLoader\n\n# Create an instance of RecursiveUrlLoader with the URL you want to load\nloader = RecursiveUrlLoader(url="https://example.com")\n\n# Load all child links from the URL page\nchild_links = loader.load()\n\n# Print the child links\nfor link in child_links:\n print(link)\n```\n\nMake sure to replace "https://example.com" with the actual URL you want to load. The load() method returns a list of child links found on the URL page. You can iterate over this list to access each child link.'},
        metadata=None,
        id=UUID('0308ea70-a803-4181-a37d-39e95f138f8c'),
        dataset_id=UUID('01b6ce0f-bfb6-4f48-bbb8-f19272135d40')
    ),
]