
Reference

client.analyze_stream(...)

πŸ“ Description

This method synchronously analyzes your videos and generates fully customizable text based on your prompts.

Your video files must meet the following requirements:

  • Minimum duration: 4 seconds
  • Maximum duration: 1 hour
  • Formats: [FFmpeg supported formats](https://ffmpeg.org/ffmpeg-formats.html)
  • Resolution: 360x360 to 5184x2160 pixels
  • Aspect ratio: Between 1:1 and 1:2.4, or between 2.4:1 and 1:1
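As a convenience, the requirements above can be pre-checked locally before calling the API. This helper is illustrative only; `meets_requirements` is not part of the SDK, and the bounds are transcribed directly from the list above:

```python
def meets_requirements(duration_sec: float, width: int, height: int) -> bool:
    """Check a video against the documented analyze limits.

    Duration: 4 seconds to 1 hour; resolution: 360x360 to 5184x2160 pixels;
    aspect ratio (width:height): between 1:2.4 and 2.4:1.
    """
    if not 4 <= duration_sec <= 3600:
        return False
    if not (360 <= width <= 5184 and 360 <= height <= 2160):
        return False
    ratio = width / height
    return 1 / 2.4 <= ratio <= 2.4


print(meets_requirements(120, 1920, 1080))  # a typical 2-minute 1080p clip
print(meets_requirements(2, 1920, 1080))    # rejected: shorter than 4 seconds
```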

When to use this method:

  • Analyze videos up to 1 hour
  • Retrieve immediate results without waiting for asynchronous processing
  • Stream text fragments in real-time for immediate processing and feedback

Do not use this method for:

  • Videos longer than 1 hour. Use the POST method of the /analyze/tasks endpoint instead.
This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.

πŸ”Œ Usage

from twelvelabs import ResponseFormat, TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.analyze_stream(
    video_id="6298d673f1090f1100476d4c",
    prompt="I want to generate a description for my video with the following format - Title of the video, followed by a summary in 2-3 sentences, highlighting the main topic, key events, and concluding remarks.",
    temperature=0.2,
    response_format=ResponseFormat(
        type="json_schema",
        json_schema={
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "summary": {"type": "string"},
                "keywords": {"type": "array", "items": {"type": "string"}},
            },
        },
    ),
    max_tokens=2000,
)
for chunk in response.data:
    print(chunk)
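Because the example above requests a JSON schema, the streamed text fragments can be concatenated and parsed once the stream ends. A minimal sketch (the fragment strings below are made-up placeholders, and the exact chunk type yielded by `response.data` may differ):

```python
import json


def assemble(fragments):
    """Join streamed text fragments and parse the completed JSON document."""
    return json.loads("".join(fragments))


# Hypothetical fragments, as a stream might deliver them:
doc = assemble(['{"title": "My vi', 'deo", "summary": ', '"A short demo."}'])
print(doc["title"])
```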

βš™οΈ Parameters

prompt: AnalyzeTextPrompt

video_id: typing.Optional[str]

The unique identifier of the video to analyze.

This parameter will be deprecated and removed in a future version. Use the video parameter instead.

video: typing.Optional[VideoContext]

temperature: typing.Optional[AnalyzeTemperature]

response_format: typing.Optional[ResponseFormat]

max_tokens: typing.Optional[AnalyzeMaxTokens]

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.analyze(...)

πŸ“ Description

This method synchronously analyzes your videos and generates fully customizable text based on your prompts.

Your video files must meet the following requirements:

  • Minimum duration: 4 seconds
  • Maximum duration: 1 hour
  • Formats: [FFmpeg supported formats](https://ffmpeg.org/ffmpeg-formats.html)
  • Resolution: 360x360 to 5184x2160 pixels
  • Aspect ratio: Between 1:1 and 1:2.4, or between 2.4:1 and 1:1

When to use this method:

  • Analyze videos up to 1 hour
  • Retrieve immediate results without waiting for asynchronous processing
  • Stream text fragments in real-time for immediate processing and feedback

Do not use this method for:

  • Videos longer than 1 hour. Use the POST method of the /analyze/tasks endpoint instead.
This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.

πŸ”Œ Usage

from twelvelabs import ResponseFormat, TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.analyze(
    video_id="6298d673f1090f1100476d4c",
    prompt="I want to generate a description for my video with the following format - Title of the video, followed by a summary in 2-3 sentences, highlighting the main topic, key events, and concluding remarks.",
    temperature=0.2,
    response_format=ResponseFormat(
        type="json_schema",
        json_schema={
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "summary": {"type": "string"},
                "keywords": {"type": "array", "items": {"type": "string"}},
            },
        },
    ),
    max_tokens=2000,
)

βš™οΈ Parameters

prompt: AnalyzeTextPrompt

video_id: typing.Optional[str]

The unique identifier of the video to analyze.

This parameter will be deprecated and removed in a future version. Use the video parameter instead.

video: typing.Optional[VideoContext]

temperature: typing.Optional[AnalyzeTemperature]

response_format: typing.Optional[ResponseFormat]

max_tokens: typing.Optional[AnalyzeMaxTokens]

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Tasks

client.tasks.list(...)

πŸ“ Description

This method returns a list of the video indexing tasks in your account. The platform returns your video indexing tasks sorted by creation date, with the newest at the top of the list.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.tasks.list(
    page=1,
    page_limit=10,
    sort_by="created_at",
    sort_option="desc",
    index_id="630aff993fcee0532cb809d0",
    filename="01.mp4",
    duration=531.998133,
    width=640,
    height=360,
    created_at="2024-03-01T00:00:00Z",
    updated_at="2024-03-01T00:00:00Z",
)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)

βš™οΈ Parameters

page: typing.Optional[int]

A number that identifies the page to retrieve.

Default: 1.

page_limit: typing.Optional[int]

The number of items to return on each page.

Default: 10. Max: 50.

sort_by: typing.Optional[str]

The field to sort on. The following options are available:

  • updated_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was updated.
  • created_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was created.

Default: created_at.

sort_option: typing.Optional[str]

The sorting direction. The following options are available:

  • asc
  • desc

Default: desc.

index_id: typing.Optional[str] β€” Filter by the unique identifier of an index.

status: typing.Optional[typing.Union[TasksListRequestStatusItem, typing.Sequence[TasksListRequestStatusItem]]]

Filter by one or more video indexing task statuses. The following options are available:

  • ready: The video has been successfully uploaded and indexed.
  • uploading: The video is being uploaded.
  • validating: The video is being validated against the prerequisites.
  • pending: The video is pending.
  • queued: The video is queued.
  • indexing: The video is being indexed.
  • failed: The video indexing task failed.

To filter by multiple statuses, specify the status parameter for each value:

status=ready&status=validating
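For reference, the repeated-parameter form above is what `urllib.parse.urlencode` produces with `doseq=True` when building the query string by hand (with the SDK you can simply pass a sequence of statuses):

```python
from urllib.parse import urlencode

# A sequence value expands into one status= pair per element.
query = urlencode({"status": ["ready", "validating"]}, doseq=True)
print(query)  # status=ready&status=validating
```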

filename: typing.Optional[str] β€” Filter by filename.

duration: typing.Optional[float] β€” Filter by duration. Expressed in seconds.

width: typing.Optional[int] β€” Filter by width.

height: typing.Optional[int] β€” Filter by height.

created_at: typing.Optional[str] β€” Filter video indexing tasks by the creation date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the video indexing tasks that were created on the specified date at or after the given time.

updated_at: typing.Optional[str] β€” Filter video indexing tasks by the last update date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the video indexing tasks that were updated on the specified date at or after the given time.
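The RFC 3339 timestamps accepted by created_at and updated_at can be produced with the standard library; a small sketch:

```python
from datetime import datetime, timezone


def to_rfc3339(dt: datetime) -> str:
    """Format an aware datetime as 'YYYY-MM-DDTHH:mm:ssZ' (UTC)."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")


print(to_rfc3339(datetime(2024, 3, 1, tzinfo=timezone.utc)))  # 2024-03-01T00:00:00Z
```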

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tasks.create(...)

πŸ“ Description

This method creates a video indexing task that uploads and indexes a video in a single operation.

This endpoint bundles two operations (upload and indexing) together. In the next major API release, this endpoint will be removed in favor of a separated workflow:

  1. Upload your video using the [`POST /assets`](/v1.3/api-reference/upload-content/direct-uploads/create) endpoint.
  2. Index the uploaded video using the [`POST /indexes/{index-id}/indexed-assets`](/v1.3/api-reference/index-content/create) endpoint.

This separation provides better control, reusability of assets, and improved error handling. New implementations should use the new workflow.

Upload options:

  • Local file: Use the video_file parameter.
  • Publicly accessible URL: Use the video_url parameter.

Your video files must meet requirements based on your workflow:

This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.create(
    index_id="index_id",
)
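Since this call only starts the task, you typically poll client.tasks.retrieve until the task leaves its in-progress states. A generic polling helper, sketched here with a caller-supplied fetch function (wait_for_status is not an SDK method; the SDK may ship its own convenience for this, so check the package before rolling your own):

```python
import time


def wait_for_status(fetch, terminal=("ready", "failed"), interval_sec=5.0, max_checks=720):
    """Call fetch() until it returns a terminal status; return that status."""
    for _ in range(max_checks):
        status = fetch()
        if status in terminal:
            return status
        time.sleep(interval_sec)
    raise TimeoutError("task did not reach a terminal status")


# With the SDK this would be, for example:
#   wait_for_status(lambda: client.tasks.retrieve(task_id=task.id).status)
statuses = iter(["indexing", "indexing", "ready"])
print(wait_for_status(lambda: next(statuses), interval_sec=0.0))  # ready
```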

βš™οΈ Parameters

index_id: str β€” The unique identifier of the index to which the video is being uploaded.

video_file: typing.Optional[core.File] β€” See core.File for more documentation.

video_url: typing.Optional[str] β€” Specify this parameter to upload a video from a publicly accessible URL.

enable_video_stream: typing.Optional[bool] β€” This parameter indicates if the platform stores the video for streaming. When set to true, the platform stores the video, and you can retrieve its URL by calling the GET method of the /indexes/{index-id}/videos/{video-id} endpoint. You can then use this URL to access the stream over the HLS protocol.

user_metadata: typing.Optional[str] β€” Metadata that helps you categorize your videos. You can specify a list of keys and values. Keys must be of type string, and values can be of the following types: string, integer, float or boolean.
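Since user_metadata is passed as a string, a common pattern is to serialize a flat dictionary with json.dumps; the keys below are made-up examples:

```python
import json

# Values stay restricted to strings, integers, floats, and booleans,
# matching the types the parameter accepts.
user_metadata = json.dumps({"category": "nature", "views": 1200, "hd": True})
print(user_metadata)
```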

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tasks.retrieve(...)

πŸ“ Description

This method retrieves a video indexing task.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.retrieve(
    task_id="6298d673f1090f1100476d4c",
)

βš™οΈ Parameters

task_id: str β€” The unique identifier of the video indexing task to retrieve.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tasks.delete(...)

πŸ“ Description

This method deletes the specified video indexing task. This action cannot be undone. Note the following:

  • You can only delete video indexing tasks for which the status is ready or failed.
  • If the status of your video indexing task is ready, you must first delete the video vector associated with your video indexing task by calling the DELETE method of the /indexes/videos endpoint.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.delete(
    task_id="6298d673f1090f1100476d4c",
)

βš™οΈ Parameters

task_id: str β€” The unique identifier of the video indexing task you want to delete.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Indexes

client.indexes.list(...)

πŸ“ Description

This method returns a list of the indexes in your account. The platform returns indexes sorted by creation date, with the oldest indexes at the top of the list.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.indexes.list(
    page=1,
    page_limit=10,
    sort_by="created_at",
    sort_option="desc",
    index_name="myIndex",
    model_options="visual,audio",
    model_family="marengo",
    created_at="2024-08-16T16:53:59Z",
    updated_at="2024-08-16T16:55:59Z",
)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)

βš™οΈ Parameters

page: typing.Optional[int]

A number that identifies the page to retrieve.

Default: 1.

page_limit: typing.Optional[int]

The number of items to return on each page.

Default: 10. Max: 50.

sort_by: typing.Optional[str]

The field to sort on. The following options are available:

  • updated_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was updated.
  • created_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was created.

Default: created_at.

sort_option: typing.Optional[str]

The sorting direction. The following options are available:

  • asc
  • desc

Default: desc.

index_name: typing.Optional[str] β€” Filter by the name of an index.

model_options: typing.Optional[str] β€” Filter by the model options. When filtering by multiple model options, the values must be comma-separated.
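When filtering on several model options programmatically, joining a list keeps the comma-separated form this parameter expects:

```python
# Build the comma-separated filter value from a list of options.
model_options = ",".join(["visual", "audio"])
print(model_options)  # visual,audio
```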

model_family: typing.Optional[str] β€” Filter by the model family. This parameter can take one of the following values: marengo or pegasus. You can specify a single value.

created_at: typing.Optional[str] β€” Filter indexes by the creation date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the indexes that were created on the specified date at or after the given time.

updated_at: typing.Optional[str] β€” Filter indexes by the last update date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the indexes that were last updated on the specified date at or after the given time.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.indexes.create(...)

πŸ“ Description

This method creates an index.

πŸ”Œ Usage

from twelvelabs import TwelveLabs
from twelvelabs.indexes import IndexesCreateRequestModelsItem

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.create(
    index_name="myIndex",
    models=[
        IndexesCreateRequestModelsItem(
            model_name="marengo3.0",
            model_options=["visual", "audio"],
        ),
        IndexesCreateRequestModelsItem(
            model_name="pegasus1.2",
            model_options=["visual", "audio"],
        ),
    ],
    addons=["thumbnail"],
)

βš™οΈ Parameters

index_name: str β€” The name of the index. Make sure you use a succinct and descriptive name.

models: typing.Sequence[IndexesCreateRequestModelsItem] β€” An array that specifies the video understanding models and the model options to be enabled for this index. Models determine what tasks you can perform with your videos. Model options determine which modalities the platform analyzes.

addons: typing.Optional[typing.Sequence[str]]

An array specifying which add-ons should be enabled. Each entry in the array is an add-on, and the following values are supported:

  • thumbnail: Enables thumbnail generation.

If you don't provide this parameter, no add-ons will be enabled.

Note the following:

  • You can only enable add-ons when using the Marengo video understanding model.
  • You cannot disable an add-on once the index has been created.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.indexes.retrieve(...)

πŸ“ Description

This method retrieves details about the specified index.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.retrieve(
    index_id="6298d673f1090f1100476d4c",
)

βš™οΈ Parameters

index_id: str β€” Unique identifier of the index to retrieve.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.indexes.update(...)

πŸ“ Description

This method updates the name of the specified index.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.update(
    index_id="6298d673f1090f1100476d4c",
    index_name="myIndex",
)

βš™οΈ Parameters

index_id: str β€” Unique identifier of the index to update.

index_name: str β€” The name of the index.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.indexes.delete(...)

πŸ“ Description

This method deletes the specified index and all the videos within it. This action cannot be undone.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.delete(
    index_id="6298d673f1090f1100476d4c",
)

βš™οΈ Parameters

index_id: str β€” Unique identifier of the index to delete.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Assets

client.assets.list(...)

πŸ“ Description

This method returns a list of assets in your account.

The platform returns your assets sorted by creation date, with the newest at the top of the list.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.assets.list(
    page=1,
    page_limit=10,
)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)

βš™οΈ Parameters

page: typing.Optional[int]

A number that identifies the page to retrieve.

Default: 1.

page_limit: typing.Optional[int]

The number of items to return on each page.

Default: 10. Max: 50.

asset_ids: typing.Optional[typing.Union[str, typing.Sequence[str]]] β€” Filters the response to include only assets with the specified IDs. Provide one or more asset IDs. When you specify multiple IDs, the platform returns all matching assets.

asset_types: typing.Optional[typing.Union[AssetsListRequestAssetTypesItem, typing.Sequence[AssetsListRequestAssetTypesItem]]] β€” Filters the response to include only assets of the specified types. Provide one or more asset types. When you specify multiple types, the platform returns all matching assets.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.assets.create(...)

πŸ“ Description

This method creates an asset by uploading a file to the platform. Assets are media files that you can use in downstream workflows, including indexing, analyzing video content, and creating entities.

Supported content: Video, audio, and images.

Upload methods:

  • Local file: Set the method parameter to direct and use the file parameter to specify the file.
  • Publicly accessible URL: Set the method parameter to url and use the url parameter to specify the URL of your file.

File size: Up to 4 GB.

Additional requirements depend on your workflow:

This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.assets.create(
    method="direct",
)

βš™οΈ Parameters

method: AssetsCreateRequestMethod β€” Specifies the upload method for the asset. Use direct to upload a local file or url for a publicly accessible URL.

file: typing.Optional[core.File] β€” See core.File for more documentation.

url: typing.Optional[str]

Specify this parameter to upload a file from a publicly accessible URL. This parameter is required when method is set to url.

URL uploads have a maximum limit of 4 GB.

filename: typing.Optional[str] β€” The optional filename of the asset. If not provided, the platform will determine the filename from the file or URL.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.assets.retrieve(...)

πŸ“ Description

This method retrieves details about the specified asset.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.assets.retrieve(
    asset_id="6298d673f1090f1100476d4c",
)

βš™οΈ Parameters

asset_id: str β€” The unique identifier of the asset to retrieve.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.assets.delete(...)

πŸ“ Description

This method deletes the specified asset. This action cannot be undone.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.assets.delete(
    asset_id="6298d673f1090f1100476d4c",
)

βš™οΈ Parameters

asset_id: str β€” The unique identifier of the asset to delete.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

MultipartUpload

client.multipart_upload.list_incomplete_uploads(...)

πŸ“ Description

This method returns a list of all incomplete multipart upload sessions in your account.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.multipart_upload.list_incomplete_uploads(
    page=1,
    page_limit=10,
)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)

βš™οΈ Parameters

page: typing.Optional[int]

A number that identifies the page to retrieve.

Default: 1.

page_limit: typing.Optional[int]

The number of items to return on each page.

Default: 10. Max: 50.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.multipart_upload.create(...)

πŸ“ Description

This method creates a multipart upload session.

Supported content: Video and audio

File size: 4 GB maximum.

Additional requirements depend on your workflow:

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.multipart_upload.create(
    filename="my-video.mp4",
    type="video",
    total_size=104857600,
)

βš™οΈ Parameters

filename: str β€” The original file name of the asset.

type: CreateAssetUploadRequestType β€” The type of asset you want to upload.

total_size: int

The total size of the file in bytes. The platform uses this value to:

  • Calculate the optimal chunk size.
  • Determine the total number of chunks required.
  • Generate the initial set of presigned URLs.
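The chunk arithmetic the platform performs from total_size can be sketched as follows. The 5 MiB chunk size here is an assumption taken from the chunk_size value in the report_chunk_batch example below; the platform chooses the actual value:

```python
import math

CHUNK_SIZE = 5 * 1024 * 1024  # assumed 5 MiB chunk size


def chunk_count(total_size: int, chunk_size: int = CHUNK_SIZE) -> int:
    """Number of chunks needed to cover total_size bytes."""
    return math.ceil(total_size / chunk_size)


print(chunk_count(104857600))  # the 100 MiB file from the usage example: 20 chunks
```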

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.multipart_upload.get_status(...)

πŸ“ Description

This method provides information about an upload session, including its current status, chunk-level progress, and completion state.

Use this method to:

  • Verify upload completion (status = completed)
  • Identify any failed chunks that require a retry
  • Monitor the upload progress by comparing uploaded_size with total_size
  • Determine if the session has expired
  • Retrieve the status information for each chunk

You must call this method after reporting chunk completion to confirm the upload has transitioned to the completed status before using the asset.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.multipart_upload.get_status(
    upload_id="507f1f77bcf86cd799439011",
    page=1,
    page_limit=10,
)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)

βš™οΈ Parameters

upload_id: str β€” The unique identifier of the upload session.

page: typing.Optional[int]

A number that identifies the page to retrieve.

Default: 1.

page_limit: typing.Optional[int]

The number of items to return on each page.

Default: 10. Max: 50.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.multipart_upload.report_chunk_batch(...)

πŸ“ Description

This method reports successfully uploaded chunks to the platform. The platform finalizes the upload after you report all chunks.

For optimal performance, report chunks in batches and in any order.

πŸ”Œ Usage

from twelvelabs import CompletedChunk, TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.multipart_upload.report_chunk_batch(
    upload_id="507f1f77bcf86cd799439011",
    completed_chunks=[
        CompletedChunk(
            chunk_index=1,
            proof="d41d8cd98f00b204e9800998ecf8427e",
            proof_type="etag",
            chunk_size=5242880,
        )
    ],
)

βš™οΈ Parameters

upload_id: str β€” The unique identifier of the upload session.

completed_chunks: typing.Sequence[CompletedChunk] β€” The list of chunks successfully uploaded that you're reporting to the platform. Report only after receiving an ETag.
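The proof value in the example above looks like an MD5-style ETag; assuming your storage backend returns such ETags, you can also compute the digest of a chunk locally for verification. (The empty-input digest below happens to match the example value, which is the MD5 of zero bytes.)

```python
import hashlib


def chunk_md5(data: bytes) -> str:
    """Hex MD5 digest of a chunk, the usual form of a single-part ETag."""
    return hashlib.md5(data).hexdigest()


print(chunk_md5(b""))  # d41d8cd98f00b204e9800998ecf8427e
```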

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.multipart_upload.get_additional_presigned_urls(...)

πŸ“ Description

This method generates new presigned URLs for specific chunks that require uploading. Use this endpoint in the following situations:

  • Your initial URLs have expired (URLs expire after one hour).
  • The initial set of presigned URLs does not include URLs for all chunks.
  • You need to retry failed chunk uploads with new URLs.

To specify which chunks need URLs, use the start and count parameters. For example, to generate URLs for chunks 21 to 30, use start=21 and count=10. The response will provide new URLs, each with a fresh expiration time of one hour.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.multipart_upload.get_additional_presigned_urls(
    upload_id="507f1f77bcf86cd799439011",
    start=1,
    count=10,
)

βš™οΈ Parameters

upload_id: str β€” The unique identifier of the upload session.

start: int β€” The index of the first chunk number to generate URLs for. Chunks are numbered from 1.

count: int β€” The number of presigned URLs to generate starting from the index. You can request a maximum of 50 URLs in a single API call.
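Because a single call returns at most 50 URLs, requesting URLs for a large range means batching the start/count pairs. A small sketch (presigned_url_batches is not an SDK helper):

```python
def presigned_url_batches(first_chunk: int, total_chunks: int, max_per_call: int = 50):
    """Yield (start, count) pairs covering chunks first_chunk..total_chunks."""
    start = first_chunk
    while start <= total_chunks:
        count = min(max_per_call, total_chunks - start + 1)
        yield start, count
        start += count


# URLs for chunks 21-140 take three calls:
print(list(presigned_url_batches(21, 140)))  # [(21, 50), (71, 50), (121, 20)]
```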

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

EntityCollections

client.entity_collections.list(...)

πŸ“ Description

This method returns a list of the entity collections in your account.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.entity_collections.list(
    page=1,
    page_limit=10,
    name="My entity collection",
    sort_by="created_at",
    sort_option="desc",
)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)

βš™οΈ Parameters

page: typing.Optional[int]

A number that identifies the page to retrieve.

Default: 1.

page_limit: typing.Optional[int]

The number of items to return on each page.

Default: 10. Max: 50.

name: typing.Optional[str] β€” Filter entity collections by name.

sort_by: typing.Optional[EntityCollectionsListRequestSortBy]

The field to sort on. The following options are available:

  • created_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the entity collection was created.
  • updated_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the entity collection was updated.
  • name: Sorts by the name.

sort_option: typing.Optional[str]

The sorting direction. The following options are available:

  • asc
  • desc

Default: desc.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.entity_collections.create(...)

πŸ“ Description

This method creates an entity collection.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.create(
    name="My entity collection",
)

βš™οΈ Parameters

name: str β€” The name of the entity collection. Make sure you use a succinct and descriptive name.

description: typing.Optional[str] β€” Optional description of the entity collection.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.entity_collections.retrieve(...)

πŸ“ Description

This method retrieves details about the specified entity collection.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.retrieve(
    entity_collection_id="6298d673f1090f1100476d4c",
)

βš™οΈ Parameters

entity_collection_id: str β€” The unique identifier of the entity collection to retrieve.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.entity_collections.delete(...)

πŸ“ Description

This method deletes the specified entity collection. This action cannot be undone.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.delete(
    entity_collection_id="6298d673f1090f1100476d4c",
)

βš™οΈ Parameters

entity_collection_id: str β€” The unique identifier of the entity collection to delete.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.entity_collections.update(...)

πŸ“ Description

This method updates the specified entity collection.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.update(
    entity_collection_id="6298d673f1090f1100476d4c",
)

βš™οΈ Parameters

entity_collection_id: str β€” The unique identifier of the entity collection to update.

name: typing.Optional[str] β€” The updated name of the entity collection.

description: typing.Optional[str] β€” The updated description of the entity collection.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Embed

client.embed.create(...)

πŸ“ Description

This endpoint will be deprecated in a future version. Migrate to the [Embed API v2](/v1.3/api-reference/create-embeddings-v2) for continued support and access to new features.

This method creates embeddings for text, image, and audio content.

Ensure your media files meet the following requirements:

Parameters for embeddings:

  • Common parameters:
    • model_name: The video understanding model you want to use. Example: "marengo3.0".
  • Text embeddings:
    • text: Text for which to create an embedding.
  • Image embeddings: Provide one of the following:
    • image_url: Publicly accessible URL of your image file.
    • image_file: Local image file.
  • Audio embeddings: Provide one of the following:
    • audio_url: Publicly accessible URL of your audio file.
    • audio_file: Local audio file.
Note the following:

  • The Marengo video understanding model generates embeddings for all modalities in the same latent space. This shared space enables any-to-any searches across different types of content.
  • You can create multiple types of embeddings in a single API call.
  • Audio embeddings combine generic sound and human speech in a single embedding. For videos with transcriptions, you can retrieve transcriptions and then [create text embeddings](/v1.3/api-reference/create-embeddings-v1/text-image-audio-embeddings/create-text-image-audio-embeddings) from these transcriptions.
  • This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.
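Because all modalities share one latent space, comparing embeddings across content types reduces to a vector similarity. A toy sketch in plain Python (the three-dimensional vectors are made up; real embeddings are much longer):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


text_emb = [0.1, 0.9, 0.2]   # hypothetical text embedding
audio_emb = [0.1, 0.8, 0.3]  # hypothetical audio embedding
print(cosine_similarity(text_emb, audio_emb))
```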

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.create(
    model_name="model_name",
)

βš™οΈ Parameters

model_name: str

The name of the model you want to use. The following models are available:

  • marengo3.0: Enhanced model with sports intelligence and extended content support.

text: typing.Optional[str]

The text for which you wish to create an embedding.

Example: "Man with a dog crossing the street"

image_url: typing.Optional[str] β€” The publicly accessible URL of the image for which you wish to create an embedding. This parameter is required for image embeddings if image_file is not provided.

image_file: typing.Optional[core.File] β€” See core.File for more documentation.

audio_url: typing.Optional[str] β€” The publicly accessible URL of the audio file for which you wish to create an embedding. This parameter is required for audio embeddings if audio_file is not provided.

audio_file: typing.Optional[core.File] β€” See core.File for more documentation.

audio_start_offset_sec: typing.Optional[float]

Specifies the start time, in seconds, from which the platform generates the audio embeddings. This parameter allows you to skip the initial portion of the audio during processing. Default: 0.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Search

client.search.create(...)

πŸ“ Description

Use this endpoint to search for relevant matches in an index using text, media, or a combination of both as your query.

Text queries:

  • Use the query_text parameter to specify your query.

Media queries:

  • Set the query_media_type parameter to the corresponding media type (example: image).
  • Provide up to 10 images by specifying the following parameters multiple times:
    • query_media_url: Publicly accessible URL of your media file.
    • query_media_file: Local media file.

Composed text and media queries:
  • Use the query_text parameter for your text query.
  • Set query_media_type to image.
  • Provide up to 10 images by specifying the query_media_url and query_media_file parameters multiple times.

Entity search (beta):

  • To find a specific person in your videos, enclose the unique identifier of the entity you want to find in the query_text parameter.
- When using images in your search queries (either as media queries or in composed searches), ensure your image files meet the [requirements](/v1.3/docs/concepts/models/marengo#image-file-requirements).
- This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.search.create(
    index_id="index_id",
    search_options=["visual"],
)

βš™οΈ Parameters

index_id: str β€” The unique identifier of the index to search.

search_options: typing.List[SearchCreateRequestSearchOptionsItem]

Specifies the modalities the video understanding model uses to find relevant information.

Available options:

  • visual: Searches visual content.
  • audio: Searches non-speech audio.
  • transcription: Searches spoken words.
- You can specify multiple search options in conjunction with the [`operator`](/v1.3/api-reference/any-to-video-search/make-search-request#request.body.operator.operator) parameter described below to broaden or narrow your search. For example, to search using visual, non-speech audio, and transcription content, include this parameter three times in the request as shown below:

```JSON
--form search_options=visual \
--form search_options=audio \
--form search_options=transcription
```

For guidance, see the Search options section.

query_media_type: typing.Optional[SearchCreateRequestQueryMediaType] β€” The type of media you wish to use. This parameter is required for media queries. For example, to perform an image-based search, set this parameter to image. Use query_text together with this parameter when you want to perform a composed image+text search.

query_media_url: typing.Optional[str]

The publicly accessible URL of a media file to use as a query. This parameter is required for media queries if query_media_file is not provided.

You can provide up to 10 images by specifying this parameter multiple times (Marengo 3.0 only):

--form query_media_url=https://example.com/image1.jpg \
--form query_media_url=https://example.com/image2.jpg

query_media_file: `from future import annotations

typing.Optional[core.File]` β€” See core.File for more documentation

query_text: typing.Optional[str]

The text query to search for. This parameter is required for text queries. Note that the platform supports full natural language-based search. You can use this parameter together with query_media_type and query_media_url or query_media_file to perform a composed image+text search.

If you're using the Entity Search feature to search for specific persons in your video content, you must enclose the unique identifier of your entity between the <@ and > markers. For example, to search for an entity with the ID entity123, use <@entity123> is walking as your query.

Marengo supports up to 500 tokens per query.
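Building the query string for an entity search is plain string formatting. A minimal sketch; `entity123` is a hypothetical entity ID:

```python
# Hypothetical entity ID; replace with the ID of the entity you created.
entity_id = "entity123"

# Enclose the ID between the <@ and > markers, then embed it in the query.
query_text = f"<@{entity_id}> is walking"
```

Pass the resulting string as the query_text parameter.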

transcription_options: typing.Optional[typing.List[SearchCreateRequestTranscriptionOptionsItem]]

Specifies how the platform matches your text query with the words spoken in the video. This parameter applies only when the search_options parameter contains the transcription value.

Available options:

  • lexical: Exact word matching
  • semantic: Meaning-based matching

For details on when to use each option, see the Transcription options section.

Default: ["lexical", "semantic"].

group_by: typing.Optional[SearchCreateRequestGroupBy]

Use this parameter to group or ungroup items in a response. It can take one of the following values:

  • video: The platform will group the matching video clips in the response by video.
  • clip: The matching video clips in the response will not be grouped.

Default: clip

operator: typing.Optional[SearchCreateRequestOperator]

Combines multiple search options using or or and. Use and to find segments matching all search options. Use or to find segments matching any search option. For detailed guidance on using this parameter, see the Combine multiple modalities section.

Default: or.

page_limit: typing.Optional[int]

The number of items to return on each page. When grouping by video, this parameter represents the number of videos per page. Otherwise, it represents the maximum number of video clips per page.

Max: 50.

filter: typing.Optional[str]

Specifies a stringified JSON object to filter your search results. Supports both system-generated metadata (example: video ID, duration) and user-defined metadata.

Syntax for filtering

The following table describes the supported data types, operators, and filter syntax:

Data type Operator Description Syntax
String = Matches results equal to the specified value. {"field": "value"}
Array of strings = Matches results with any value in the specified array. Supported only for id. {"id": ["value1", "value2"]}
Numeric (integer, float) =, lte, gte Matches results equal to or within a range of the specified value. {"field": number} or {"field": { "gte": number, "lte": number }}
Boolean = Matches results equal to the specified boolean value. {"field": true} or {"field": false}.

**System-generated metadata**

The table below describes the system-generated metadata available for filtering your search results:

Field name Description Type Example
id Filters by specific video IDs. Array of strings {"id": ["67cec9caf45d9b64a58340fc", "67cec9baf45d9b64a58340fa"]}.
duration Filters based on the duration of the video containing the segment that matches your query. Number or object with gte and lte {"duration": 600} or {"duration": { "gte": 600, "lte": 800 }}
width Filters by video width (in pixels). Number or object with gte and lte {"width": 1920} or {"width": { "gte": 1280, "lte": 1920}}
height Filters by video height (in pixels). Number or object with gte and lte. {"height": 1080} or {"height": { "gte": 720, "lte": 1080 }}.
size Filters by video size (in bytes) Number or object with gte and lte. {"size": 1048576} or {"size": { "gte": 1048576, "lte": 5242880}}
filename Filters by the exact file name. String {"filename": "Animal Encounters part 1"}

**User-defined metadata**

To filter by user-defined metadata:

  1. Add metadata to your video by calling the PUT method of the /indexes/:index-id/videos/:video-id endpoint
  2. Reference the custom field in your filter object. For example, to filter videos where a custom field named needsReview of type boolean is true, use {"needs_review": true}.

For more details and examples, see the Filter search results page.
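Because the filter parameter takes a stringified JSON object, it is easiest to build the filter as a Python dict and serialize it. A minimal sketch; the field values are illustrative:

```python
import json

# Combine system-generated fields with a user-defined metadata field.
filter_obj = {
    "id": ["67cec9caf45d9b64a58340fc", "67cec9baf45d9b64a58340fa"],
    "duration": {"gte": 600, "lte": 800},
    "needs_review": True,
}

# Pass this string as the `filter` parameter of client.search.create().
filter_str = json.dumps(filter_obj)
```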

include_user_metadata: typing.Optional[bool] β€” Specifies whether to include user-defined metadata in the search results.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.search.retrieve(...)

πŸ“ Description

Use this endpoint to retrieve a specific page of search results.

When you use pagination, you will not be charged for retrieving subsequent pages of results.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.search.retrieve(
    page_token="1234567890",
    include_user_metadata=True,
)

βš™οΈ Parameters

page_token: str β€” A token that identifies the page to retrieve.

include_user_metadata: typing.Optional[bool] β€” Specifies whether to include user-defined metadata in the search results.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

AnalyzeAsync Tasks

client.analyze_async.tasks.list(...)

πŸ“ Description

This method returns a list of the analysis tasks in your account. The platform returns your analysis tasks sorted by creation date, with the newest at the top of the list.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.analyze_async.tasks.list(
    page=1,
    page_limit=10,
    status="queued",
)

βš™οΈ Parameters

page: typing.Optional[int]

A number that identifies the page to retrieve.

Default: 1.

page_limit: typing.Optional[int]

The number of items to return on each page.

Default: 10. Max: 50.

status: typing.Optional[AnalyzeTaskStatus]

Filter analysis tasks by status. Possible values: queued, pending, processing, ready, failed.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.analyze_async.tasks.create(...)

πŸ“ Description

This method asynchronously analyzes your videos and generates fully customizable text based on your prompts.

- Minimum duration: 4 seconds
- Maximum duration: 2 hours
- Formats: [FFmpeg supported formats](https://ffmpeg.org/ffmpeg-formats.html)
- Resolution: 360x360 to 5184x2160 pixels
- Aspect ratio: Between 1:1 and 1:2.4, or between 2.4:1 and 1:1.

When to use this method:

  • Analyze videos longer than 1 hour
  • Process videos asynchronously without blocking your application

Do not use this method for:

  • Videos for which you need immediate results or real-time streaming. Use the POST method of the /analyze endpoint instead.

Analyzing videos asynchronously requires three steps:

  1. Create an analysis task using this method. The platform returns a task ID.
  2. Poll the status of the task using the GET method of the /analyze/tasks/{task_id} endpoint. Wait until the status is ready.
  3. Retrieve the results from the response when the status is ready using the GET method of the /analyze/tasks/{task_id} endpoint.
This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.
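The three numbered steps above reduce to a small polling loop. A minimal sketch, assuming the retrieved task exposes a `status` attribute as documented:

```python
import time

def wait_until_done(client, task_id, poll_interval_sec=5.0):
    """Poll an analysis task until it reaches a terminal status.

    Returns the final task object; check `status` for "ready" or "failed"
    before reading the results.
    """
    while True:
        task = client.analyze_async.tasks.retrieve(task_id=task_id)
        if task.status in ("ready", "failed"):
            return task
        time.sleep(poll_interval_sec)
```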

πŸ”Œ Usage

from twelvelabs import TwelveLabs, VideoContext_Url

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.analyze_async.tasks.create(
    video=VideoContext_Url(
        url="https://example.com/video.mp4",
    ),
    prompt="Generate a detailed summary of this video in 3-4 sentences",
    temperature=0.2,
    max_tokens=1000,
)

βš™οΈ Parameters

video: VideoContext

prompt: AnalyzeTextPrompt

temperature: typing.Optional[AnalyzeTemperature]

max_tokens: typing.Optional[AnalyzeMaxTokens]

response_format: typing.Optional[ResponseFormat]

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.analyze_async.tasks.retrieve(...)

πŸ“ Description

This method retrieves the status and results of an analysis task.

Task statuses:

  • queued: The task is waiting to be processed.
  • pending: The task is queued and waiting to start.
  • processing: The platform is analyzing the video.
  • ready: Processing is complete. Results are available in the response.
  • failed: The task failed. No results were generated.

Poll this method until status is ready or failed. When status is ready, use the results from the response.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.analyze_async.tasks.retrieve(
    task_id="64f8d2c7e4a1b37f8a9c5d12",
)

βš™οΈ Parameters

task_id: str β€” The unique identifier of the analysis task.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.analyze_async.tasks.delete(...)

πŸ“ Description

This method deletes an analysis task. You can only delete tasks that are not currently being processed.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.analyze_async.tasks.delete(
    task_id="64f8d2c7e4a1b37f8a9c5d12",
)

βš™οΈ Parameters

task_id: str β€” The unique identifier of the analysis task.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Embed Tasks

client.embed.tasks.list(...)

πŸ“ Description

This method will be deprecated in a future version. Migrate to the [Embed API v2](/v1.3/api-reference/create-embeddings-v2) for continued support and access to new features.

This method returns a list of the video embedding tasks in your account. The platform returns your video embedding tasks sorted by creation date, with the newest at the top of the list.

- Video embeddings are stored for seven days.
- When you invoke this method without specifying the `started_at` and `ended_at` parameters, the platform returns all the video embedding tasks created within the last seven days.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.embed.tasks.list(
    started_at="2024-03-01T00:00:00Z",
    ended_at="2024-03-01T00:00:00Z",
    status="processing",
    page=1,
    page_limit=10,
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page

βš™οΈ Parameters

started_at: typing.Optional[str] β€” Retrieve the embedding tasks that were created after the given date and time, expressed in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ").

ended_at: typing.Optional[str] β€” Retrieve the embedding tasks that were created before the given date and time, expressed in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ").
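A minimal sketch of producing timestamps in the expected RFC 3339 format with the standard library:

```python
from datetime import datetime, timezone

# Format a UTC datetime as "YYYY-MM-DDTHH:mm:ssZ" for started_at / ended_at.
started_at = datetime(2024, 3, 1, tzinfo=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
```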

status: typing.Optional[str]

Filter the embedding tasks by their current status.

Values: processing, ready, or failed.

page: typing.Optional[int]

A number that identifies the page to retrieve.

Default: 1.

page_limit: typing.Optional[int]

The number of items to return on each page.

Default: 10. Max: 50.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.embed.tasks.create(...)

πŸ“ Description

This endpoint will be deprecated in a future version. Migrate to the [Embed API v2](/v1.3/api-reference/create-embeddings-v2) for continued support and access to new features.

This method creates a new video embedding task that uploads a video to the platform and creates one or multiple video embeddings.

This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.

Upload options:

  • Local file: Use the video_file parameter
  • Publicly accessible URL: Use the video_url parameter.

Specify at least one option. If both are provided, video_url takes precedence.

Your video files must meet the requirements. This endpoint allows you to upload files up to 2 GB in size. To upload larger files, use the Multipart Upload API.

- The Marengo video understanding model generates embeddings for all modalities in the same latent space. This shared space enables any-to-any searches across different types of content.
- Video embeddings are stored for seven days.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.tasks.create(
    model_name="model_name",
)

βš™οΈ Parameters

model_name: str

The name of the model you want to use. The following models are available:

  • marengo3.0: Enhanced model with sports intelligence and extended content support.

video_file: `from __future__ import annotations

typing.Optional[core.File]` β€” See core.File for more documentation

video_url: typing.Optional[str] β€” Specify this parameter to upload a video from a publicly accessible URL.

video_start_offset_sec: typing.Optional[float]

The start offset in seconds from the beginning of the video where processing should begin. Specifying 0 means starting from the beginning of the video.

Default: 0. Min: 0. Max: Duration of the video minus video_clip_length.

video_end_offset_sec: typing.Optional[float]

The end offset in seconds from the beginning of the video where processing should stop.

Ensure the following when you specify this parameter:

  • The end offset does not exceed the total duration of the video file.
  • The end offset is greater than the start offset.
  • You must set both the start and end offsets; setting only one results in an error.

Min: video_start_offset_sec + video_clip_length. Max: Duration of the video file.

video_clip_length: typing.Optional[float]

The desired duration in seconds for each clip for which the platform generates an embedding. Ensure that the clip length does not exceed the interval between the start and end offsets.

Default: 6. Min: 2. Max: 10.
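The offset and clip-length constraints above can be checked locally before creating a task. A minimal sketch of the documented rules; the platform still performs its own server-side validation:

```python
def clip_window_is_valid(start_sec, end_sec, clip_length_sec, video_duration_sec):
    """Return True when the offsets satisfy the documented constraints."""
    # Clip length must be between 2 and 10 seconds.
    if not 2 <= clip_length_sec <= 10:
        return False
    # Start offset ranges from 0 to the video duration minus the clip length.
    if start_sec < 0 or start_sec > video_duration_sec - clip_length_sec:
        return False
    # End offset must leave room for at least one clip after the start offset
    # and must not exceed the video duration.
    return start_sec + clip_length_sec <= end_sec <= video_duration_sec
```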

video_embedding_scope: typing.Optional[typing.List[TasksCreateRequestVideoEmbeddingScopeItem]]

Defines the scope of video embedding generation. Valid values are the following:

  • clip: Creates embeddings for each video segment of video_clip_length seconds, from video_start_offset_sec to video_end_offset_sec.
  • clip and video: Creates embeddings for video segments and the entire video. Use the video scope for videos up to 10-30 seconds to maintain optimal performance.

To create embeddings for segments and the entire video in the same request, include this parameter twice as shown below:

--form video_embedding_scope=clip \
--form video_embedding_scope=video

Default: clip

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.embed.tasks.status(...)

πŸ“ Description

This endpoint will be deprecated in a future version. Migrate to the [Embed API v2](/v1.3/api-reference/create-embeddings-v2) for continued support and access to new features.

This method retrieves the status of a video embedding task. Check the status to determine when you can retrieve the embeddings.

A task can have one of the following statuses:

  • processing: The platform is creating the embeddings.
  • ready: Processing is complete. Retrieve the embeddings by invoking the GET method of the /embed/tasks/{task_id} endpoint.
  • failed: The task could not be completed, and the embeddings haven't been created.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.tasks.status(
    task_id="663da73b31cdd0c1f638a8e6",
)

βš™οΈ Parameters

task_id: str β€” The unique identifier of your video embedding task.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.embed.tasks.retrieve(...)

πŸ“ Description

This method retrieves embeddings for a specific video embedding task. Ensure the task status is ready before invoking this method. Refer to the Retrieve the status of a video embedding task page for instructions on checking the task status.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.tasks.retrieve(
    task_id="663da73b31cdd0c1f638a8e6",
)

βš™οΈ Parameters

task_id: str β€” The unique identifier of your video embedding task.

embedding_option: typing.Optional[typing.Union[TasksRetrieveRequestEmbeddingOptionItem, typing.Sequence[TasksRetrieveRequestEmbeddingOptionItem]]]

Specifies which types of embeddings to retrieve. Values: visual, audio, transcription. For details, see the Embedding options section.

The platform returns all available embeddings when you omit this parameter.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Embed V2

client.embed.v_2.create(...)

πŸ“ Description

This endpoint synchronously creates embeddings for multimodal content and returns the results immediately in the response.

When to use this endpoint:

  • Create embeddings for text, images, audio, or video content
  • Retrieve immediate results without waiting for background processing
  • Process audio or video content up to 10 minutes in duration

Do not use this endpoint for:

  • Audio or video content longer than 10 minutes. Use the POST method of the /embed-v2/tasks endpoint instead.
**Text**:

- Maximum length: 500 tokens

Images:

  • Formats: JPEG, PNG
  • Minimum size: 128x128 pixels
  • Maximum file size: 5 MB

Audio and video:

  • Maximum duration: 10 minutes
  • Maximum file size for base64 encoded strings: 36 MB
  • Audio formats: WAV (uncompressed), MP3 (lossy), FLAC (lossless)
  • Video formats: FFmpeg supported formats
  • Video resolution: 360x360 to 5184x2160 pixels
  • Aspect ratio: Between 1:1 and 1:2.4, or between 2.4:1 and 1:1
This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.

πŸ”Œ Usage

from twelvelabs import TextInputRequest, TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.v_2.create(
    input_type="text",
    model_name="marengo3.0",
    text=TextInputRequest(
        input_text="man walking a dog",
    ),
)

βš™οΈ Parameters

input_type: CreateEmbeddingsRequestInputType

The type of content for the embeddings.

Values:

  • audio: Creates embeddings for an audio file
  • video: Creates embeddings for a video file
  • image: Creates embeddings for an image file
  • text: Creates embeddings for text input
  • text_image: Creates embeddings for text and an image
  • multi_input: Creates a single embedding from up to 10 images. You can optionally include text to provide context. To reference specific images in your text, use placeholders in the following format: <@name>, where name matches the name field of a media source
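A minimal sketch of extracting the <@name> placeholders from multi_input context text, so you can verify each one matches the name field of a supplied media source (the names below are hypothetical):

```python
import re

def placeholder_names(text):
    """Return the names referenced by <@name> placeholders, in order."""
    return re.findall(r"<@([^>]+)>", text)

# Hypothetical media source names and context text.
source_names = {"jersey_front", "jersey_back"}
text = "Compare <@jersey_front> with <@jersey_back>."
unknown = [name for name in placeholder_names(text) if name not in source_names]
```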

model_name: CreateEmbeddingsRequestModelName β€” The video understanding model to use. Value: "marengo3.0".

text: typing.Optional[TextInputRequest]

image: typing.Optional[ImageInputRequest]

text_image: typing.Optional[TextImageInputRequest]

audio: typing.Optional[AudioInputRequest]

video: typing.Optional[VideoInputRequest]

multi_input: typing.Optional[MultiInputRequest]

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Embed V2 Tasks

client.embed.v_2.tasks.list(...)

πŸ“ Description

This method returns a list of the async embedding tasks in your account. The platform returns your async embedding tasks sorted by creation date, with the newest at the top of the list.

- Embeddings are stored for seven days.
- When you invoke this method without specifying the `started_at` and `ended_at` parameters, the platform returns all the async embedding tasks created within the last seven days.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.embed.v_2.tasks.list(
    started_at="2024-03-01T00:00:00Z",
    ended_at="2024-03-01T00:00:00Z",
    status="processing",
    page=1,
    page_limit=10,
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page

βš™οΈ Parameters

started_at: typing.Optional[str] β€” Retrieve the embedding tasks that were created after the given date and time, expressed in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ").

ended_at: typing.Optional[str] β€” Retrieve the embedding tasks that were created before the given date and time, expressed in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ").

status: typing.Optional[str]

Filter the embedding tasks by their current status.

Values: processing, ready, or failed.

page: typing.Optional[int]

A number that identifies the page to retrieve.

Default: 1.

page_limit: typing.Optional[int]

The number of items to return on each page.

Default: 10. Max: 50.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.embed.v_2.tasks.create(...)

πŸ“ Description

This endpoint creates embeddings for audio and video content asynchronously.

When to use this endpoint:

  • Process audio or video files longer than 10 minutes
  • Process files up to 4 hours in duration
**Video**:

- Minimum duration: 4 seconds
- Maximum duration: 4 hours
- Maximum file size: 4 GB
- Formats: [FFmpeg supported formats](https://ffmpeg.org/ffmpeg-formats.html)
- Resolution: 360x360 to 5184x2160 pixels
- Aspect ratio: Between 1:1 and 1:2.4, or between 2.4:1 and 1:1

Audio:

  • Minimum duration: 4 seconds
  • Maximum duration: 4 hours
  • Maximum file size: 2 GB
  • Formats: WAV (uncompressed), MP3 (lossy), FLAC (lossless)

Creating embeddings asynchronously requires three steps:

  1. Create a task using this endpoint. The platform returns a task ID.
  2. Poll for the status of the task using the GET method of the /embed-v2/tasks/{task_id} endpoint. Wait until the status is ready.
  3. Retrieve the embeddings from the response when the status is ready using the GET method of the /embed-v2/tasks/{task_id} endpoint.
- This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.
- Embeddings are stored for seven days.

πŸ”Œ Usage

from twelvelabs import MediaSource, TwelveLabs, VideoInputRequest

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.v_2.tasks.create(
    input_type="video",
    model_name="marengo3.0",
    video=VideoInputRequest(
        media_source=MediaSource(
            url="https://user-bucket.com/video/long-video.mp4",
        ),
        embedding_option=["visual", "audio"],
        embedding_scope=["clip", "asset"],
        embedding_type=["separate_embedding", "fused_embedding"],
    ),
)

βš™οΈ Parameters

input_type: CreateAsyncEmbeddingRequestInputType

The type of content for the embeddings.

Values:

  • audio: Audio files
  • video: Video content

model_name: CreateAsyncEmbeddingRequestModelName β€” The model you wish to use. Value: "marengo3.0".

audio: typing.Optional[AudioInputRequest]

video: typing.Optional[VideoInputRequest]

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.embed.v_2.tasks.retrieve(...)

πŸ“ Description

This method retrieves the status and the results of an async embedding task.

Task statuses:

  • processing: The platform is creating the embeddings.
  • ready: Processing is complete. Embeddings are available in the response.
  • failed: The task failed. Embeddings were not created.

Invoke this method repeatedly until the status field is ready or failed. When the status is ready, use the embeddings from the response.

Embeddings are stored for seven days.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.v_2.tasks.retrieve(
    task_id="64f8d2c7e4a1b37f8a9c5d12",
)

βš™οΈ Parameters

task_id: str β€” The unique identifier of the embedding task.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

EntityCollections Entities

client.entity_collections.entities.list(...)

πŸ“ Description

This method returns a list of the entities in the specified entity collection.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.entity_collections.entities.list(
    entity_collection_id="6298d673f1090f1100476d4c",
    page=1,
    page_limit=10,
    name="My entity",
    status="processing",
    sort_by="created_at",
    sort_option="desc",
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page

βš™οΈ Parameters

entity_collection_id: str β€” The unique identifier of the entity collection for which the platform will retrieve the entities.

page: typing.Optional[int]

A number that identifies the page to retrieve.

Default: 1.

page_limit: typing.Optional[int]

The number of items to return on each page.

Default: 10. Max: 50.

name: typing.Optional[str] β€” Filter entities by name.

status: typing.Optional[EntitiesListRequestStatus] β€” Filter entities by status.

sort_by: typing.Optional[EntitiesListRequestSortBy]

The field to sort on. The following options are available:

  • created_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the entity was created.
  • updated_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the entity was updated.
  • name: Sorts by the name.

sort_option: typing.Optional[str]

The sorting direction. The following options are available:

  • asc
  • desc

Default: desc.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.entity_collections.entities.create(...)

πŸ“ Description

This method creates an entity within a specified entity collection. Each entity must be associated with at least one asset.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.create(
    entity_collection_id="6298d673f1090f1100476d4c",
    name="My entity",
    asset_ids=["6298d673f1090f1100476d4c", "6298d673f1090f1100476d4d"],
)

βš™οΈ Parameters

entity_collection_id: str β€” The unique identifier of the entity collection in which to create the entity.

name: str β€” The name of the entity. Make sure you use a succinct and descriptive name.

asset_ids: typing.Sequence[str] β€” An array of asset IDs to associate with the entity. You must provide at least one value.

description: typing.Optional[str] β€” An optional description of the entity.

metadata: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]

Optional metadata for the entity, provided as key-value pairs to store additional context or attributes. Use metadata to categorize or describe the entity for easier management and search. Keys must be of type string, and values can be of type string, integer, float, or boolean.

Example:

{
  "sport": "soccer",
  "teamId": 42,
  "performanceScore": 8.7,
  "isActive": true
}
To store complex data types such as objects or arrays, convert them to string values before including them in the metadata.
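A minimal sketch of serializing a complex value to a string before storing it in metadata (the `aliases` field is illustrative):

```python
import json

metadata = {
    "sport": "soccer",
    "teamId": 42,
    "performanceScore": 8.7,
    "isActive": True,
    # Arrays and objects are not supported directly; store them as JSON strings.
    "aliases": json.dumps(["Striker", "Number 9"]),
}
```

Deserialize with json.loads when you read the value back.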

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.entity_collections.entities.create_bulk(...)

πŸ“ Description

This method creates multiple entities within a specified entity collection in a single request. Each entity must be associated with at least one asset. This endpoint is useful for efficiently adding multiple entities, such as a roster of players or a group of characters.

πŸ”Œ Usage

from twelvelabs import TwelveLabs
from twelvelabs.entity_collections.entities import (
    EntitiesCreateBulkRequestEntitiesItem,
)

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.create_bulk(
    entity_collection_id="6298d673f1090f1100476d4c",
    entities=[
        EntitiesCreateBulkRequestEntitiesItem(
            name="My entity",
            asset_ids=["6298d673f1090f1100476d4c", "6298d673f1090f1100476d4d"],
        )
    ],
)

βš™οΈ Parameters

entity_collection_id: str β€” The unique identifier of the entity collection in which to create the entities.

entities: typing.Sequence[EntitiesCreateBulkRequestEntitiesItem]

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.entity_collections.entities.retrieve(...)

πŸ“ Description

This method retrieves details about the specified entity.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.retrieve(
    entity_collection_id="6298d673f1090f1100476d4c",
    entity_id="6298d673f1090f1100476d4c",
)

βš™οΈ Parameters

entity_collection_id: str β€” The unique identifier of the entity collection.

entity_id: str β€” The unique identifier of the entity to retrieve.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.entity_collections.entities.delete(...)

πŸ“ Description

This method deletes a specific entity from an entity collection. It permanently removes the entity and its associated data, but does not affect the assets associated with this entity.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.delete(
    entity_collection_id="6298d673f1090f1100476d4c",
    entity_id="6298d673f1090f1100476d4c",
)

βš™οΈ Parameters

entity_collection_id: str β€” The unique identifier of the entity collection containing the entity to be deleted.

entity_id: str β€” The unique identifier of the entity to delete.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.entity_collections.entities.update(...)

πŸ“ Description

This method updates the specified entity within an entity collection. This operation allows modification of the entity's name, description, or metadata. Note that this endpoint does not affect the assets associated with the entity.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.update(
    entity_collection_id="6298d673f1090f1100476d4c",
    entity_id="6298d673f1090f1100476d4c",
)

βš™οΈ Parameters

entity_collection_id: str β€” The unique identifier of the entity collection containing the entity to be updated.

entity_id: str β€” The unique identifier of the entity to update.

name: typing.Optional[str] β€” The new name for the entity.

description: typing.Optional[str] β€” An updated description for the entity.

metadata: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Updated metadata for the entity. If provided, this completely replaces the existing metadata. Use this to store custom key-value pairs related to the entity.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.entity_collections.entities.create_assets(...)

πŸ“ Description

This method adds assets to the specified entity within an entity collection. Assets are used to identify the entity in media content, and adding multiple assets can improve the accuracy of entity recognition in searches.

When assets are added, the entity may temporarily enter the "processing" state while the platform updates the necessary data. Once processing is complete, the entity status will return to "ready."

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.create_assets(
    entity_collection_id="6298d673f1090f1100476d4c",
    entity_id="6298d673f1090f1100476d4c",
    asset_ids=["6298d673f1090f1100476d4c", "6298d673f1090f1100476d4d"],
)

βš™οΈ Parameters

entity_collection_id: str β€” The unique identifier of the entity collection that contains the entity to which assets will be added.

entity_id: str β€” The unique identifier of the entity within the specified entity collection to which the assets will be added.

asset_ids: typing.Sequence[str] β€” An array of asset IDs to add to the entity.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.entity_collections.entities.delete_assets(...)

πŸ“ Description

This method removes assets from the specified entity. Assets are used to identify the entity in media content, and removing assets may impact the accuracy of entity recognition in searches if too few assets remain.

When assets are removed, the entity may temporarily enter the "processing" state while the platform updates the necessary data. Once processing is complete, the entity status will return to "ready."

  • This operation only removes the association between the entity and the specified assets; it does not delete the assets themselves.
  • An entity must always have at least one asset associated with it. You can't remove the last asset from an entity.
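The constraint above (an entity must always retain at least one asset) can be checked locally before calling delete_assets. The helper below is an illustrative sketch, not part of the SDK:

```python
def validate_asset_removal(current_asset_ids, remove_ids):
    """Model the documented constraint: an entity must keep at least one
    associated asset, so removing the last asset is rejected.
    Returns the asset IDs that would remain after removal."""
    remove = set(remove_ids)
    remaining = [a for a in current_asset_ids if a not in remove]
    if not remaining:
        raise ValueError("cannot remove the last asset from an entity")
    return remaining
```

Running this pre-flight check client-side avoids a failed request when a removal would leave the entity with no assets.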

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.delete_assets(
    entity_collection_id="6298d673f1090f1100476d4c",
    entity_id="6298d673f1090f1100476d4c",
    asset_ids=["6298d673f1090f1100476d4e", "6298d673f1090f1100476d4f"],
)

βš™οΈ Parameters

entity_collection_id: str β€” The unique identifier of the entity collection that contains the entity from which assets will be removed.

entity_id: str β€” The unique identifier of the entity within the specified entity collection from which the assets will be removed.

asset_ids: typing.Sequence[str] β€” An array of asset IDs to remove from the entity.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Indexes IndexedAssets

client.indexes.indexed_assets.list(...)

πŸ“ Description

This method returns a list of the indexed assets in the specified index. By default, the platform returns your indexed assets sorted by creation date, with the newest at the top of the list.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.indexes.indexed_assets.list(
    index_id="6298d673f1090f1100476d4c",
    page=1,
    page_limit=10,
    sort_by="created_at",
    sort_option="desc",
    filename="01.mp4",
    created_at="2024-08-16T16:53:59Z",
    updated_at="2024-08-16T16:53:59Z",
)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)

βš™οΈ Parameters

index_id: str β€” The unique identifier of the index for which the platform will retrieve the indexed assets.

page: typing.Optional[int]

A number that identifies the page to retrieve.

Default: 1.

page_limit: typing.Optional[int]

The number of items to return on each page.

Default: 10. Max: 50.

sort_by: typing.Optional[str]

The field to sort on. The following options are available:

  • updated_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was updated.
  • created_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was created.

Default: created_at.

sort_option: typing.Optional[str]

The sorting direction. The following options are available:

  • asc
  • desc

Default: desc.

status: typing.Optional[ typing.Union[ IndexedAssetsListRequestStatusItem, typing.Sequence[IndexedAssetsListRequestStatusItem], ] ]

Filter by one or more indexing task statuses. The following options are available:

  • ready: The indexed asset has been successfully uploaded and indexed.
  • pending: The indexed asset is pending.
  • queued: The indexed asset is queued.
  • indexing: The indexed asset is being indexed.
  • failed: The indexed asset indexing task failed.

To filter by multiple statuses, specify the status parameter for each value:

status=ready&status=indexing
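As a sketch of how that repeated query parameter is formed, a single status or a sequence of statuses expands into repeated `status=` pairs. The helper name is illustrative; the SDK performs this encoding for you when you pass a value or a sequence:

```python
from urllib.parse import urlencode

def encode_status_filter(status):
    """Expand one status or a sequence of statuses into repeated
    `status=` query parameters, matching the form shown above."""
    values = [status] if isinstance(status, str) else list(status)
    return urlencode([("status", v) for v in values])
```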

filename: typing.Optional[str] β€” Filter by filename.

duration: typing.Optional[IndexedAssetsListRequestDuration]

Filter by duration in seconds. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.

fps: typing.Optional[IndexedAssetsListRequestFps]

Filter by frames per second. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.

width: typing.Optional[IndexedAssetsListRequestWidth]

Filter by width in pixels. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.

height: typing.Optional[IndexedAssetsListRequestHeight]

Filter by height in pixels. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.

size: typing.Optional[IndexedAssetsListRequestSize]

Filter by size in bytes. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.
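The gte/lte range objects used by the duration, fps, width, height, and size filters above can be sketched as plain dicts. The SDK exposes typed request objects for these (e.g. IndexedAssetsListRequestDuration), so the dict form here is an assumption for illustration only:

```python
def range_filter(gte=None, lte=None, exact=None):
    """Build a gte/lte range object as described above.
    For an exact match, both bounds are set to the same value."""
    if exact is not None:
        return {"gte": exact, "lte": exact}
    out = {}
    if gte is not None:
        out["gte"] = gte
    if lte is not None:
        out["lte"] = lte
    return out
```

For example, `range_filter(gte=60, lte=600)` expresses "between one and ten minutes", and `range_filter(exact=30)` expresses "exactly 30".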

created_at: typing.Optional[str] β€” Filter indexed assets by the creation date and time of their associated indexing tasks, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns indexed assets created on or after the specified date and time.

updated_at: typing.Optional[str] β€” This filter applies only to indexed assets updated using the PUT method of the /indexes/{index-id}/indexed-assets/{indexed-asset-id} endpoint. It filters indexed assets by the last update date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the indexed assets that were last updated on the specified date at or after the given time.

user_metadata: typing.Optional[ typing.Dict[str, typing.Optional[IndexedAssetsListRequestUserMetadataValue]] ]

To enable filtering by custom fields, you must first add user-defined metadata to your indexed asset by calling the PUT method of the /indexes/{index-id}/indexed-assets/{indexed-asset-id} endpoint.

Examples:

  • To filter on a string: ?category=recentlyAdded
  • To filter on an integer: ?batchNumber=5
  • To filter on a float: ?rating=9.3
  • To filter on a boolean: ?needsReview=true
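The query-string forms above can be reproduced with a small helper; note that booleans are rendered lowercase (true/false), matching the examples. This is an illustrative sketch, not part of the SDK:

```python
from urllib.parse import urlencode

def metadata_query(filters):
    """Render user-defined metadata filters in the query-string forms
    shown above, e.g. ?category=recentlyAdded or ?needsReview=true.
    Booleans become lowercase true/false; numbers keep their repr."""
    pairs = []
    for key, value in filters.items():
        if isinstance(value, bool):
            value = "true" if value else "false"
        pairs.append((key, str(value)))
    return "?" + urlencode(pairs)
```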

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.indexes.indexed_assets.create(...)

πŸ“ Description

This method indexes an uploaded asset to make it searchable and analyzable. Indexing processes your content and extracts information that enables the platform to search and analyze your videos.

This operation is asynchronous. The platform returns an indexed asset ID immediately and processes your content in the background. Monitor the indexing status to know when your content is ready to use.
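The monitoring loop described above can be sketched as follows. The fetcher is injected as a callable so the sketch stays self-contained; in practice it would wrap `client.indexes.indexed_assets.retrieve(...)`, and the `status` attribute name on the retrieved object is an assumption here:

```python
import time

TERMINAL_STATUSES = {"ready", "failed"}

def wait_for_indexing(fetch_status, interval_sec=5.0, timeout_sec=3600.0):
    """Poll an indexing task until it reaches a terminal status.

    fetch_status is a callable returning the current status string,
    one of: pending, queued, indexing, ready, failed (per the list
    endpoint's documented values).
    """
    deadline = time.monotonic() + timeout_sec
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(interval_sec)
    raise TimeoutError("indexing did not finish in time")
```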

Your asset must meet the requirements for your workflow. If you want to both search and analyze your videos, the most restrictive requirements apply.

This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.indexed_assets.create(
    index_id="6298d673f1090f1100476d4c",
    asset_id="6298d673f1090f1100476d4c",
)

βš™οΈ Parameters

index_id: str β€” The unique identifier of the index to which the asset will be indexed.

asset_id: str β€” The unique identifier of the asset to index. The asset status must be ready. Use the Retrieve an asset method to check the status.

enable_video_stream: typing.Optional[bool] β€” This parameter indicates if the platform stores the video for streaming. When set to true, the platform stores the video, and you can retrieve its URL by calling the GET method of the /indexes/{index-id}/indexed-assets/{indexed-asset-id} endpoint. You can then use this URL to access the stream over the HLS protocol.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.indexes.indexed_assets.retrieve(...)

πŸ“ Description

This method retrieves information about an indexed asset, including its status, metadata, and optional embeddings or transcription.

Use this method to:

  • Monitor the indexing progress:

    • Call this endpoint after creating an indexed asset
    • Check the status field until it shows ready
    • Once ready, your content is available for search and analysis
  • Retrieve the asset metadata:

    • Retrieve system metadata (duration, resolution, filename)
    • Access user-defined metadata
  • Retrieve the embeddings:

    • Include the embeddingOption parameter to retrieve video embeddings
    • Requires the Marengo video understanding model to be enabled in your index
  • Retrieve transcriptions:

    • Set the transcription parameter to true to retrieve spoken words from your video
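The retrieval options above can be assembled defensively before calling retrieve. The helper below validates embedding options against the documented values (visual, audio, transcription); it is an illustrative sketch, not part of the SDK:

```python
VALID_EMBEDDING_OPTIONS = {"visual", "audio", "transcription"}

def retrieve_kwargs(index_id, indexed_asset_id, embeddings=(), transcription=False):
    """Assemble keyword arguments for indexes.indexed_assets.retrieve,
    rejecting embedding options outside the documented set."""
    unknown = set(embeddings) - VALID_EMBEDDING_OPTIONS
    if unknown:
        raise ValueError(f"unknown embedding options: {sorted(unknown)}")
    kwargs = {"index_id": index_id, "indexed_asset_id": indexed_asset_id}
    if embeddings:
        kwargs["embedding_option"] = list(embeddings)
    if transcription:
        kwargs["transcription"] = True
    return kwargs
```

The resulting dict can then be splatted into the call, e.g. `client.indexes.indexed_assets.retrieve(**kwargs)`.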

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.indexed_assets.retrieve(
    index_id="6298d673f1090f1100476d4c",
    indexed_asset_id="6298d673f1090f1100476d4c",
    transcription=True,
)

βš™οΈ Parameters

index_id: str β€” The unique identifier of the index to which the indexed asset has been uploaded.

indexed_asset_id: str β€” The unique identifier of the indexed asset to retrieve.

embedding_option: typing.Optional[ typing.Union[ IndexedAssetsRetrieveRequestEmbeddingOptionItem, typing.Sequence[IndexedAssetsRetrieveRequestEmbeddingOptionItem], ] ]

Specifies which types of embeddings to retrieve. Values: visual, audio, transcription. For details, see the Embedding options section.

To retrieve embeddings for a video, it must be indexed using the Marengo video understanding model. For details on enabling this model for an index, see the [Create an index](/reference/create-index) page.

transcription: typing.Optional[bool] β€” Specifies whether to retrieve a transcription of the spoken words.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.indexes.indexed_assets.delete(...)

πŸ“ Description

This method deletes all the information about the specified indexed asset. This action cannot be undone.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.indexed_assets.delete(
    index_id="6298d673f1090f1100476d4c",
    indexed_asset_id="6298d673f1090f1100476d4c",
)

βš™οΈ Parameters

index_id: str β€” The unique identifier of the index to which the indexed asset has been uploaded.

indexed_asset_id: str β€” The unique identifier of the indexed asset to delete.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.indexes.indexed_assets.update(...)

πŸ“ Description

This method updates one or more fields of the metadata of an indexed asset. You can also delete a field by setting it to null.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.indexed_assets.update(
    index_id="6298d673f1090f1100476d4c",
    indexed_asset_id="6298d673f1090f1100476d4c",
    user_metadata={
        "category": "recentlyAdded",
        "batchNumber": 5,
        "rating": 9.3,
        "needsReview": True,
    },
)

βš™οΈ Parameters

index_id: str β€” The unique identifier of the index to which the indexed asset has been uploaded.

indexed_asset_id: str β€” The unique identifier of the indexed asset to update.

user_metadata: typing.Optional[UserMetadata]

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.
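The null-deletion semantics can be modeled locally: a field set to None (serialized as JSON null) is removed, while other fields are created or overwritten. This is a sketch of the documented behavior, not SDK code:

```python
def apply_metadata_update(existing, update):
    """Local model of the PUT semantics above: fields set to None
    (JSON null) are deleted; other fields are created or overwritten.
    Returns the merged metadata without mutating the input."""
    merged = dict(existing)
    for key, value in update.items():
        if value is None:
            merged.pop(key, None)
        else:
            merged[key] = value
    return merged
```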

Indexes Videos

client.indexes.videos.list(...)

πŸ“ Description

This method will be deprecated in a future version. New implementations should use the List indexed assets method.

This method returns a list of the videos in the specified index. By default, the platform returns your videos sorted by creation date, with the newest at the top of the list.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.indexes.videos.list(
    index_id="6298d673f1090f1100476d4c",
    page=1,
    page_limit=10,
    sort_by="created_at",
    sort_option="desc",
    filename="01.mp4",
    created_at="2024-08-16T16:53:59Z",
    updated_at="2024-08-16T16:53:59Z",
)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)

βš™οΈ Parameters

index_id: str β€” The unique identifier of the index for which the platform will retrieve the videos.

page: typing.Optional[int]

A number that identifies the page to retrieve.

Default: 1.

page_limit: typing.Optional[int]

The number of items to return on each page.

Default: 10. Max: 50.

sort_by: typing.Optional[str]

The field to sort on. The following options are available:

  • updated_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was updated.
  • created_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was created.

Default: created_at.

sort_option: typing.Optional[str]

The sorting direction. The following options are available:

  • asc
  • desc

Default: desc.

filename: typing.Optional[str] β€” Filter by filename.

duration: typing.Optional[VideosListRequestDuration]

Filter by duration in seconds. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.

fps: typing.Optional[VideosListRequestFps]

Filter by frames per second. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.

width: typing.Optional[VideosListRequestWidth]

Filter by width in pixels. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.

height: typing.Optional[VideosListRequestHeight]

Filter by height in pixels. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.

size: typing.Optional[VideosListRequestSize]

Filter by size in bytes. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.

created_at: typing.Optional[str] β€” Filter videos by the creation date and time of their associated indexing tasks, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the videos whose indexing tasks were created on the specified date at or after the given time.

updated_at: typing.Optional[str] β€” This filter applies only to videos updated using the PUT method of the /indexes/{index-id}/videos/{video-id} endpoint. It filters videos by the last update date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the video indexing tasks that were last updated on the specified date at or after the given time.

user_metadata: typing.Optional[ typing.Dict[str, typing.Optional[VideosListRequestUserMetadataValue]] ]

To enable filtering by custom fields, you must first add user-defined metadata to your video by calling the PUT method of the /indexes/{index-id}/videos/{video-id} endpoint.

Examples:

  • To filter on a string: ?category=recentlyAdded
  • To filter on an integer: ?batchNumber=5
  • To filter on a float: ?rating=9.3
  • To filter on a boolean: ?needsReview=true

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.indexes.videos.retrieve(...)

πŸ“ Description

This method will be deprecated in a future version. New implementations should use the Retrieve an indexed asset method.

This method retrieves information about the specified video.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.videos.retrieve(
    index_id="6298d673f1090f1100476d4c",
    video_id="6298d673f1090f1100476d4c",
    transcription=True,
)

βš™οΈ Parameters

index_id: str β€” The unique identifier of the index to which the video has been uploaded.

video_id: str β€” The unique identifier of the video to retrieve.

embedding_option: typing.Optional[ typing.Union[ VideosRetrieveRequestEmbeddingOptionItem, typing.Sequence[VideosRetrieveRequestEmbeddingOptionItem], ] ]

Specifies which types of embeddings to retrieve. Values: visual, audio, transcription. For details, see the Embedding options section.

To retrieve embeddings for a video, it must be indexed using the Marengo video understanding model. For details on enabling this model for an index, see the [Create an index](/reference/create-index) page.

transcription: typing.Optional[bool] β€” Specifies whether to retrieve a transcription of the spoken words for the indexed video.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.indexes.videos.delete(...)

πŸ“ Description

This method will be deprecated in a future version. New implementations should use the Delete an indexed asset method.

This method deletes all the information about the specified indexed video. This action cannot be undone.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.videos.delete(
    index_id="6298d673f1090f1100476d4c",
    video_id="6298d673f1090f1100476d4c",
)

βš™οΈ Parameters

index_id: str β€” The unique identifier of the index to which the video has been uploaded.

video_id: str β€” The unique identifier of the video to delete.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.indexes.videos.update(...)

πŸ“ Description

This method will be deprecated in a future version. New implementations should use the Partial update indexed asset method.

This method updates one or more fields of the metadata of a video. You can also delete a field by setting it to null.

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.videos.update(
    index_id="6298d673f1090f1100476d4c",
    video_id="6298d673f1090f1100476d4c",
    user_metadata={
        "category": "recentlyAdded",
        "batchNumber": 5,
        "rating": 9.3,
        "needsReview": True,
    },
)

βš™οΈ Parameters

index_id: str β€” The unique identifier of the index to which the video has been uploaded.

video_id: str β€” The unique identifier of the video to update.

user_metadata: typing.Optional[UserMetadata]

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Tasks Transfers

client.tasks.transfers.create(...)

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.transfers.create(
    integration_id="integration-id",
)

βš™οΈ Parameters

integration_id: str

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tasks.transfers.get_status(...)

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.transfers.get_status(
    integration_id="integration-id",
)

βš™οΈ Parameters

integration_id: str

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tasks.transfers.get_logs(...)

πŸ”Œ Usage

from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.transfers.get_logs(
    integration_id="integration-id",
)

βš™οΈ Parameters

integration_id: str

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.