client.analyze_stream(...)
This method synchronously analyzes your videos and generates fully customizable text based on your prompts.

Your videos must meet the following requirements:
- Minimum duration: 4 seconds
- Maximum duration: 1 hour
- Formats: [FFmpeg supported formats](https://ffmpeg.org/ffmpeg-formats.html)
- Resolution: 360x360 to 5184x2160 pixels
- Aspect ratio: between 1:1 and 1:2.4, or between 2.4:1 and 1:1

When to use this method:
- Analyze videos up to 1 hour long
- Retrieve immediate results without waiting for asynchronous processing
- Stream text fragments in real time for immediate processing and feedback

Do not use this method for:
- Videos longer than 1 hour. Use the `POST` method of the `/analyze/tasks` endpoint instead.
```python
from twelvelabs import ResponseFormat, TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.analyze_stream(
    video_id="6298d673f1090f1100476d4c",
    prompt="I want to generate a description for my video with the following format - Title of the video, followed by a summary in 2-3 sentences, highlighting the main topic, key events, and concluding remarks.",
    temperature=0.2,
    response_format=ResponseFormat(
        type="json_schema",
        json_schema={
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "summary": {"type": "string"},
                "keywords": {"type": "array", "items": {"type": "string"}},
            },
        },
    ),
    max_tokens=2000,
)
for chunk in response.data:
    print(chunk)
```
prompt: AnalyzeTextPrompt

video_id: typing.Optional[str]
The unique identifier of the video to analyze. This parameter will be deprecated and removed in a future version. Use the `video` parameter instead.

video: typing.Optional[VideoContext]

temperature: typing.Optional[AnalyzeTemperature]

response_format: typing.Optional[ResponseFormat]

max_tokens: typing.Optional[AnalyzeMaxTokens]

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.analyze(...)
This method synchronously analyzes your videos and generates fully customizable text based on your prompts.

Your videos must meet the following requirements:
- Minimum duration: 4 seconds
- Maximum duration: 1 hour
- Formats: [FFmpeg supported formats](https://ffmpeg.org/ffmpeg-formats.html)
- Resolution: 360x360 to 5184x2160 pixels
- Aspect ratio: between 1:1 and 1:2.4, or between 2.4:1 and 1:1

When to use this method:
- Analyze videos up to 1 hour long
- Retrieve immediate results without waiting for asynchronous processing

Do not use this method for:
- Videos longer than 1 hour. Use the `POST` method of the `/analyze/tasks` endpoint instead.
```python
from twelvelabs import ResponseFormat, TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.analyze(
    video_id="6298d673f1090f1100476d4c",
    prompt="I want to generate a description for my video with the following format - Title of the video, followed by a summary in 2-3 sentences, highlighting the main topic, key events, and concluding remarks.",
    temperature=0.2,
    response_format=ResponseFormat(
        type="json_schema",
        json_schema={
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "summary": {"type": "string"},
                "keywords": {"type": "array", "items": {"type": "string"}},
            },
        },
    ),
    max_tokens=2000,
)
```
prompt: AnalyzeTextPrompt

video_id: typing.Optional[str]
The unique identifier of the video to analyze. This parameter will be deprecated and removed in a future version. Use the `video` parameter instead.

video: typing.Optional[VideoContext]

temperature: typing.Optional[AnalyzeTemperature]

response_format: typing.Optional[ResponseFormat]

max_tokens: typing.Optional[AnalyzeMaxTokens]

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.tasks.list(...)
This method returns a list of the video indexing tasks in your account. The platform returns your video indexing tasks sorted by creation date, with the newest at the top of the list.
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.tasks.list(
    page=1,
    page_limit=10,
    sort_by="created_at",
    sort_option="desc",
    index_id="630aff993fcee0532cb809d0",
    filename="01.mp4",
    duration=531.998133,
    width=640,
    height=360,
    created_at="2024-03-01T00:00:00Z",
    updated_at="2024-03-01T00:00:00Z",
)
for item in response:
    print(item)

# Alternatively, you can paginate page-by-page:
for page in response.iter_pages():
    print(page)
```
page: typing.Optional[int]
A number that identifies the page to retrieve. Default: `1`.

page_limit: typing.Optional[int]
The number of items to return on each page. Default: `10`. Max: `50`.

sort_by: typing.Optional[str]
The field to sort on. The following options are available:
- `created_at`: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was created.
- `updated_at`: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was updated.
Default: `created_at`.

sort_option: typing.Optional[str]
The sorting direction. The following options are available: `asc`, `desc`. Default: `desc`.

index_id: typing.Optional[str]
Filter by the unique identifier of an index.

status: typing.Optional[typing.Union[TasksListRequestStatusItem, typing.Sequence[TasksListRequestStatusItem]]]
Filter by one or more video indexing task statuses. The following options are available:
- `ready`: The video has been successfully uploaded and indexed.
- `uploading`: The video is being uploaded.
- `validating`: The video is being validated against the prerequisites.
- `pending`: The video is pending.
- `queued`: The video is queued.
- `indexing`: The video is being indexed.
- `failed`: The video indexing task failed.
To filter by multiple statuses, specify the `status` parameter once for each value: `status=ready&status=validating`.

filename: typing.Optional[str]
Filter by filename.

duration: typing.Optional[float]
Filter by duration, expressed in seconds.

width: typing.Optional[int]
Filter by width.

height: typing.Optional[int]
Filter by height.

created_at: typing.Optional[str]
Filter video indexing tasks by the creation date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the video indexing tasks that were created on the specified date at or after the given time.

updated_at: typing.Optional[str]
Filter video indexing tasks by the last update date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the video indexing tasks that were updated on the specified date at or after the given time.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

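The `status` filter accepts one value or a sequence of values, mirroring the repeated `status=ready&status=validating` query parameters. A minimal sketch, assuming the SDK accepts a plain list of status strings; the `status_filter` helper is hypothetical and only validates values against the documented set:

```python
ALLOWED_STATUSES = {
    "ready", "uploading", "validating", "pending", "queued", "indexing", "failed",
}

def status_filter(*statuses: str) -> list[str]:
    """Validate statuses against the documented set before sending them."""
    unknown = set(statuses) - ALLOWED_STATUSES
    if unknown:
        raise ValueError(f"unknown statuses: {sorted(unknown)}")
    return list(statuses)

# Hypothetical usage with a configured client (equivalent to
# status=ready&status=validating on the wire):
# client.tasks.list(index_id="...", status=status_filter("ready", "validating"))
```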
client.tasks.create(...)
This method creates a video indexing task that uploads and indexes a video in a single operation.

This endpoint bundles two operations (upload and indexing) together. In the next major API release, it will be removed in favor of a separated workflow:
1. Upload your video using the [`POST /assets`](/v1.3/api-reference/upload-content/direct-uploads/create) endpoint.
2. Index the uploaded video using the [`POST /indexes/{index-id}/indexed-assets`](/v1.3/api-reference/index-content/create) endpoint.
This separation provides better control, reusability of assets, and improved error handling. New implementations should use the new workflow.

Upload options:
- Local file: Use the `video_file` parameter.
- Publicly accessible URL: Use the `video_url` parameter.

Your video files must meet requirements based on your workflow:
- Search: Marengo requirements.
- Video analysis: Pegasus requirements.
- If you want to both search and analyze your videos, the most restrictive requirements apply.

This method allows you to upload files up to 2 GB in size. To upload larger files, use the Multipart Upload API.
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.create(
    index_id="index_id",
)
```
index_id: str
The unique identifier of the index to which the video is being uploaded.

video_file: typing.Optional[core.File]
See core.File for more documentation.

video_url: typing.Optional[str]
Specify this parameter to upload a video from a publicly accessible URL.

enable_video_stream: typing.Optional[bool]
Indicates whether the platform stores the video for streaming. When set to `true`, the platform stores the video, and you can retrieve its URL by calling the `GET` method of the `/indexes/{index-id}/videos/{video-id}` endpoint. You can then use this URL to access the stream over the HLS protocol.

user_metadata: typing.Optional[str]
Metadata that helps you categorize your videos. You can specify a list of keys and values. Keys must be of type `string`, and values can be of the following types: `string`, `integer`, `float`, or `boolean`.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

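Since `user_metadata` is a string that carries typed keys and values, a common approach is to JSON-encode a dictionary. A minimal sketch; the JSON encoding and the `encode_user_metadata` helper are assumptions for illustration, enforcing only the value types the parameter documents:

```python
import json

ALLOWED_VALUE_TYPES = (str, int, float, bool)

def encode_user_metadata(metadata: dict) -> str:
    """Serialize metadata to a string, enforcing the documented key/value types."""
    for key, value in metadata.items():
        if not isinstance(key, str):
            raise TypeError(f"metadata keys must be strings, got {key!r}")
        if not isinstance(value, ALLOWED_VALUE_TYPES):
            raise TypeError(f"unsupported value type for {key!r}: {type(value).__name__}")
    return json.dumps(metadata)

# Hypothetical usage with a configured client:
# client.tasks.create(
#     index_id="...",
#     video_url="https://example.com/01.mp4",
#     user_metadata=encode_user_metadata({"category": "sports", "views": 1000, "hd": True}),
# )
```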
client.tasks.retrieve(...)
This method retrieves a video indexing task.
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.retrieve(
    task_id="6298d673f1090f1100476d4c",
)
```
task_id: str
The unique identifier of the video indexing task to retrieve.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.tasks.delete(...)
This method deletes a video indexing task. This action cannot be undone. Note the following:
- You can only delete video indexing tasks whose status is `ready` or `failed`.
- If the status of your video indexing task is `ready`, you must first delete the video vector associated with it by calling the `DELETE` method of the `/indexes/videos` endpoint.
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.delete(
    task_id="6298d673f1090f1100476d4c",
)
```
task_id: str
The unique identifier of the video indexing task you want to delete.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.indexes.list(...)
This method returns a list of the indexes in your account. The platform returns indexes sorted by creation date, with the oldest indexes at the top of the list.
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.indexes.list(
    page=1,
    page_limit=10,
    sort_by="created_at",
    sort_option="desc",
    index_name="myIndex",
    model_options="visual,audio",
    model_family="marengo",
    created_at="2024-08-16T16:53:59Z",
    updated_at="2024-08-16T16:55:59Z",
)
for item in response:
    print(item)

# Alternatively, you can paginate page-by-page:
for page in response.iter_pages():
    print(page)
```
page: typing.Optional[int]
A number that identifies the page to retrieve. Default: `1`.

page_limit: typing.Optional[int]
The number of items to return on each page. Default: `10`. Max: `50`.

sort_by: typing.Optional[str]
The field to sort on. The following options are available:
- `created_at`: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was created.
- `updated_at`: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was updated.
Default: `created_at`.

sort_option: typing.Optional[str]
The sorting direction. The following options are available: `asc`, `desc`. Default: `desc`.

index_name: typing.Optional[str]
Filter by the name of an index.

model_options: typing.Optional[str]
Filter by the model options. When filtering by multiple model options, the values must be comma-separated.

model_family: typing.Optional[str]
Filter by the model family. This parameter can take one of the following values: `marengo` or `pegasus`. You can specify a single value.

created_at: typing.Optional[str]
Filter indexes by the creation date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the indexes that were created on the specified date at or after the given time.

updated_at: typing.Optional[str]
Filter indexes by the last update date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the indexes that were last updated on the specified date at or after the given time.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.indexes.create(...)
This method creates an index.

```python
from twelvelabs import TwelveLabs
from twelvelabs.indexes import IndexesCreateRequestModelsItem

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.create(
    index_name="myIndex",
    models=[
        IndexesCreateRequestModelsItem(
            model_name="marengo3.0",
            model_options=["visual", "audio"],
        ),
        IndexesCreateRequestModelsItem(
            model_name="pegasus1.2",
            model_options=["visual", "audio"],
        ),
    ],
    addons=["thumbnail"],
)
```
index_name: str
The name of the index. Make sure you use a succinct and descriptive name.

models: typing.Sequence[IndexesCreateRequestModelsItem]
An array that specifies the video understanding models and the model options to be enabled for this index. Models determine what tasks you can perform with your videos. Model options determine which modalities the platform analyzes.

addons: typing.Optional[typing.Sequence[str]]
An array specifying which add-ons should be enabled. The following values are supported:
- `thumbnail`: Enables thumbnail generation.
If you don't provide this parameter, no add-ons will be enabled. Note the following:
- You can only enable add-ons when using the Marengo video understanding model.
- You cannot disable an add-on once the index has been created.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.indexes.retrieve(...)
This method retrieves details about the specified index.

```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.retrieve(
    index_id="6298d673f1090f1100476d4c",
)
```
index_id: str
The unique identifier of the index to retrieve.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.indexes.update(...)
This method updates the name of the specified index.

```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.update(
    index_id="6298d673f1090f1100476d4c",
    index_name="myIndex",
)
```
index_id: str
The unique identifier of the index to update.

index_name: str
The new name of the index.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.indexes.delete(...)
This method deletes the specified index and all the videos within it. This action cannot be undone.

```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.delete(
    index_id="6298d673f1090f1100476d4c",
)
```
index_id: str
The unique identifier of the index to delete.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.assets.list(...)
This method returns a list of the assets in your account. The platform returns your assets sorted by creation date, with the newest at the top of the list.

```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.assets.list(
    page=1,
    page_limit=10,
)
for item in response:
    print(item)

# Alternatively, you can paginate page-by-page:
for page in response.iter_pages():
    print(page)
```
page: typing.Optional[int]
A number that identifies the page to retrieve. Default: `1`.

page_limit: typing.Optional[int]
The number of items to return on each page. Default: `10`. Max: `50`.

asset_ids: typing.Optional[typing.Union[str, typing.Sequence[str]]]
Filters the response to include only assets with the specified IDs. Provide one or more asset IDs. When you specify multiple IDs, the platform returns all matching assets.

asset_types: typing.Optional[typing.Union[AssetsListRequestAssetTypesItem, typing.Sequence[AssetsListRequestAssetTypesItem]]]
Filters the response to include only assets of the specified types. Provide one or more asset types. When you specify multiple types, the platform returns all matching assets.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.assets.create(...)
This method creates an asset by uploading a file to the platform. Assets are media files that you can use in downstream workflows, including indexing, analyzing video content, and creating entities.

Supported content: Video, audio, and images.

Upload methods:
- Local file: Set the `method` parameter to `direct` and use the `file` parameter to specify the file.
- Publicly accessible URL: Set the `method` parameter to `url` and use the `url` parameter to specify the URL of your file.

File size: Up to 4 GB.

Additional requirements depend on your workflow:
- Search: Marengo requirements
- Video analysis: Pegasus requirements
- Entity search: Marengo image requirements
- Create embeddings: Marengo requirements
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.assets.create(
    method="direct",
)
```
method: AssetsCreateRequestMethod
Specifies the upload method for the asset. Use `direct` to upload a local file or `url` for a publicly accessible URL.

file: typing.Optional[core.File]
See core.File for more documentation.

url: typing.Optional[str]
Specify this parameter to upload a file from a publicly accessible URL. This parameter is required when `method` is set to `url`. URL uploads have a maximum limit of 4 GB.

filename: typing.Optional[str]
The optional filename of the asset. If not provided, the platform determines the filename from the file or URL.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.assets.retrieve(...)
This method retrieves details about the specified asset.

```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.assets.retrieve(
    asset_id="6298d673f1090f1100476d4c",
)
```
asset_id: str
The unique identifier of the asset to retrieve.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.assets.delete(...)
This method deletes the specified asset. This action cannot be undone.

```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.assets.delete(
    asset_id="6298d673f1090f1100476d4c",
)
```
asset_id: str
The unique identifier of the asset to delete.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.multipart_upload.list_incomplete_uploads(...)
This method returns a list of all incomplete multipart upload sessions in your account.
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.multipart_upload.list_incomplete_uploads(
    page=1,
    page_limit=10,
)
for item in response:
    print(item)

# Alternatively, you can paginate page-by-page:
for page in response.iter_pages():
    print(page)
```
page: typing.Optional[int]
A number that identifies the page to retrieve. Default: `1`.

page_limit: typing.Optional[int]
The number of items to return on each page. Default: `10`. Max: `50`.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.multipart_upload.create(...)
This method creates a multipart upload session.
Supported content: Video and audio
File size: 4 GB maximum.
Additional requirements depend on your workflow:
- Search: Marengo requirements
- Video analysis: Pegasus requirements
- Create embeddings: Marengo requirements
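The platform derives the chunk size and chunk count from `total_size`, so the only arithmetic on the client side is reading the file size. A minimal sketch; the `chunk_count` helper only illustrates the ceiling division the chunking implies, and the 5 MB chunk size in the usage note is an assumption (the platform chooses the actual value):

```python
import math
import os

def chunk_count(total_size: int, chunk_size: int) -> int:
    """Number of chunks a file of total_size bytes needs at a given chunk size."""
    if total_size <= 0 or chunk_size <= 0:
        raise ValueError("sizes must be positive")
    return math.ceil(total_size / chunk_size)

# Hypothetical usage: read the size from disk, then open the session.
# total_size = os.path.getsize("my-video.mp4")
# session = client.multipart_upload.create(
#     filename="my-video.mp4", type="video", total_size=total_size,
# )
```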
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.multipart_upload.create(
    filename="my-video.mp4",
    type="video",
    total_size=104857600,
)
```
filename: str
The original file name of the asset.

type: CreateAssetUploadRequestType
The type of asset you want to upload.

total_size: int
The total size of the file in bytes. The platform uses this value to:
- Calculate the optimal chunk size.
- Determine the total number of chunks required.
- Generate the initial set of presigned URLs.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.multipart_upload.get_status(...)
This method provides information about an upload session, including its current status, chunk-level progress, and completion state.

Use this method to:
- Verify upload completion (`status=completed`)
- Identify any failed chunks that require a retry
- Monitor the upload progress by comparing `uploaded_size` with `total_size`
- Determine if the session has expired
- Retrieve the status information for each chunk

You must call this method after reporting chunk completion to confirm the upload has transitioned to the `completed` status before using the asset.
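Confirming the transition to `completed` usually means polling this method until a terminal status appears. A minimal sketch, assuming the session object exposes a `status` attribute and that `completed`, `failed`, and `expired` are the terminal values (both are assumptions for illustration):

```python
import time

TERMINAL_STATUSES = {"completed", "failed", "expired"}

def poll_upload(get_status, upload_id: str, interval: float = 5.0, timeout: float = 600.0):
    """Poll an upload session until it reaches a terminal status.

    `get_status` is any callable returning an object with a `status` attribute,
    e.g. client.multipart_upload.get_status.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        session = get_status(upload_id=upload_id)
        if session.status in TERMINAL_STATUSES:
            return session
        time.sleep(interval)
    raise TimeoutError(f"upload {upload_id} did not finish within {timeout}s")

# Hypothetical usage:
# session = poll_upload(client.multipart_upload.get_status, "507f1f77bcf86cd799439011")
```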
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.multipart_upload.get_status(
    upload_id="507f1f77bcf86cd799439011",
    page=1,
    page_limit=10,
)
for item in response:
    print(item)

# Alternatively, you can paginate page-by-page:
for page in response.iter_pages():
    print(page)
```
upload_id: str
The unique identifier of the upload session.

page: typing.Optional[int]
A number that identifies the page to retrieve. Default: `1`.

page_limit: typing.Optional[int]
The number of items to return on each page. Default: `10`. Max: `50`.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.multipart_upload.report_chunk_batch(...)
This method reports successfully uploaded chunks to the platform. The platform finalizes the upload after you report all chunks.
For optimal performance, report chunks in batches and in any order.
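The `proof` for each chunk is the ETag returned by the presigned-URL upload. A minimal sketch of uploading one chunk and normalizing the ETag, assuming S3-style presigned URLs that accept a `PUT` and echo a quoted `ETag` header (standard behavior, but an assumption here):

```python
from urllib.request import Request, urlopen

def normalize_etag(raw: str) -> str:
    """Presigned-URL PUTs typically return a quoted ETag, e.g. '"d41d..."';
    strip the quotes to get the bare hash used as the proof."""
    return raw.strip('"')

def upload_chunk(url: str, data: bytes) -> str:
    """PUT one chunk to its presigned URL and return the normalized ETag."""
    with urlopen(Request(url, data=data, method="PUT")) as response:
        return normalize_etag(response.headers["ETag"])

# Hypothetical usage: upload chunk 1, then report it (see the example below).
# proof = upload_chunk(presigned_url, chunk_bytes)
# client.multipart_upload.report_chunk_batch(
#     upload_id="507f1f77bcf86cd799439011",
#     completed_chunks=[CompletedChunk(
#         chunk_index=1, proof=proof, proof_type="etag", chunk_size=len(chunk_bytes),
#     )],
# )
```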
```python
from twelvelabs import CompletedChunk, TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.multipart_upload.report_chunk_batch(
    upload_id="507f1f77bcf86cd799439011",
    completed_chunks=[
        CompletedChunk(
            chunk_index=1,
            proof="d41d8cd98f00b204e9800998ecf8427e",
            proof_type="etag",
            chunk_size=5242880,
        )
    ],
)
```
upload_id: str
The unique identifier of the upload session.

completed_chunks: typing.Sequence[CompletedChunk]
The list of successfully uploaded chunks that you're reporting to the platform. Report a chunk only after receiving an ETag for it.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.multipart_upload.get_additional_presigned_urls(...)
This method generates new presigned URLs for specific chunks that require uploading. Use this endpoint in the following situations:
- Your initial URLs have expired (URLs expire after one hour).
- The initial set of presigned URLs does not include URLs for all chunks.
- You need to retry failed chunk uploads with new URLs.

To specify which chunks need URLs, use the `start` and `count` parameters. For example, to generate URLs for chunks 21 to 30, use `start=21` and `count=10`. The response provides new URLs, each with a fresh expiration time of one hour.
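Because the method takes a contiguous range, retrying an arbitrary set of failed chunks means grouping their indices into `(start, count)` pairs. A minimal sketch; the `url_request_batches` helper is hypothetical and simply applies the documented constraints (1-based indices, at most 50 URLs per call):

```python
def url_request_batches(failed_chunks, max_count: int = 50):
    """Group 1-based chunk indices into contiguous (start, count) pairs,
    one pair per get_additional_presigned_urls call (max 50 URLs each)."""
    batches: list[tuple[int, int]] = []
    for index in sorted(set(failed_chunks)):
        if batches:
            start, count = batches[-1]
            if index == start + count and count < max_count:
                batches[-1] = (start, count + 1)
                continue
        batches.append((index, 1))
    return batches

# Hypothetical usage: chunks 21-30 failed, so one call with start=21, count=10.
# for start, count in url_request_batches(range(21, 31)):
#     client.multipart_upload.get_additional_presigned_urls(
#         upload_id="507f1f77bcf86cd799439011", start=start, count=count,
#     )
```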
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.multipart_upload.get_additional_presigned_urls(
    upload_id="507f1f77bcf86cd799439011",
    start=1,
    count=10,
)
```
upload_id: str
The unique identifier of the upload session.

start: int
The index of the first chunk to generate URLs for. Chunks are numbered from 1.

count: int
The number of presigned URLs to generate, starting from the `start` index. You can request a maximum of 50 URLs in a single API call.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.entity_collections.list(...)
This method returns a list of the entity collections in your account.
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.entity_collections.list(
    page=1,
    page_limit=10,
    name="My entity collection",
    sort_by="created_at",
    sort_option="desc",
)
for item in response:
    print(item)

# Alternatively, you can paginate page-by-page:
for page in response.iter_pages():
    print(page)
```
page: typing.Optional[int]
A number that identifies the page to retrieve. Default: `1`.

page_limit: typing.Optional[int]
The number of items to return on each page. Default: `10`. Max: `50`.

name: typing.Optional[str]
Filter entity collections by name.

sort_by: typing.Optional[EntityCollectionsListRequestSortBy]
The field to sort on. The following options are available:
- `created_at`: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the entity collection was created.
- `updated_at`: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the entity collection was updated.
- `name`: Sorts by the name.

sort_option: typing.Optional[str]
The sorting direction. The following options are available: `asc`, `desc`. Default: `desc`.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.entity_collections.create(...)
This method creates an entity collection.

```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.create(
    name="My entity collection",
)
```
name: str
The name of the entity collection. Make sure you use a succinct and descriptive name.

description: typing.Optional[str]
Optional description of the entity collection.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.entity_collections.retrieve(...)
This method retrieves details about the specified entity collection.

```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.retrieve(
    entity_collection_id="6298d673f1090f1100476d4c",
)
```
entity_collection_id: str
The unique identifier of the entity collection to retrieve.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.entity_collections.delete(...)
This method deletes the specified entity collection. This action cannot be undone.

```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.delete(
    entity_collection_id="6298d673f1090f1100476d4c",
)
```
entity_collection_id: str
The unique identifier of the entity collection to delete.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.entity_collections.update(...)
This method updates the specified entity collection.

```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.update(
    entity_collection_id="6298d673f1090f1100476d4c",
)
```
entity_collection_id: str
The unique identifier of the entity collection to update.

name: typing.Optional[str]
The updated name of the entity collection.

description: typing.Optional[str]
The updated description of the entity collection.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.embed.create(...)
This endpoint will be deprecated in a future version. Migrate to the [Embed API v2](/v1.3/api-reference/create-embeddings-v2) for continued support and access to new features.

This method creates embeddings for text, image, and audio content. Ensure your media files meet the platform's requirements.

Parameters for embeddings:
- Common parameters:
  - `model_name`: The video understanding model you want to use. Example: "marengo3.0".
- Text embeddings:
  - `text`: Text for which to create an embedding.
- Image embeddings. Provide one of the following:
  - `image_url`: Publicly accessible URL of your image file.
  - `image_file`: Local image file.
- Audio embeddings. Provide one of the following:
  - `audio_url`: Publicly accessible URL of your audio file.
  - `audio_file`: Local audio file.
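The parameter groups above can be sketched as a small argument builder. This is a hypothetical helper, not part of the SDK; it assumes each request carries exactly one input source (text, or one image/audio URL or file), which is how the groups above read:

```python
def embed_kwargs(model_name: str, **sources) -> dict:
    """Build keyword arguments for an embedding request, checking that exactly
    one input (text, image_url/image_file, audio_url/audio_file) is provided."""
    allowed = {"text", "image_url", "image_file", "audio_url", "audio_file"}
    unknown = set(sources) - allowed
    if unknown:
        raise ValueError(f"unknown inputs: {sorted(unknown)}")
    provided = {k: v for k, v in sources.items() if v is not None}
    if len(provided) != 1:
        raise ValueError("provide exactly one input source")
    return {"model_name": model_name, **provided}

# Hypothetical usage with a configured client:
# client.embed.create(**embed_kwargs("marengo3.0", text="Man with a dog crossing the street"))
```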
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.create(
    model_name="model_name",
)
```
model_name: str
The name of the model you want to use. The following models are available:
- `marengo3.0`: Enhanced model with sports intelligence and extended content support.

text: typing.Optional[str]
The text for which you wish to create an embedding. Example: "Man with a dog crossing the street".

image_url: typing.Optional[str]
The publicly accessible URL of the image for which you wish to create an embedding. This parameter is required for image embeddings if `image_file` is not provided.

image_file: typing.Optional[core.File]
See core.File for more documentation.

audio_url: typing.Optional[str]
The publicly accessible URL of the audio file for which you wish to create an embedding. This parameter is required for audio embeddings if `audio_file` is not provided.

audio_file: typing.Optional[core.File]
See core.File for more documentation.

audio_start_offset_sec: typing.Optional[float]
Specifies the start time, in seconds, from which the platform generates the audio embeddings. This parameter allows you to skip the initial portion of the audio during processing. Default: `0`.

request_options: typing.Optional[RequestOptions]
Request-specific configuration.

client.search.create(...)
Use this endpoint to search for relevant matches in an index using text, media, or a combination of both as your query.

Text queries:
- Use the `query_text` parameter to specify your query.

Media queries:
- Set the `query_media_type` parameter to the corresponding media type (example: `image`).
- Provide up to 10 images by specifying the following parameters multiple times:
  - `query_media_url`: Publicly accessible URL of your media file.
  - `query_media_file`: Local media file.

Composed text and media queries:
- Use the `query_text` parameter for your text query.
- Set `query_media_type` to `image`.
- Provide up to 10 images by specifying the `query_media_url` and `query_media_file` parameters multiple times.

Entity search (beta):
- To find a specific person in your videos, enclose the unique identifier of the entity you want to find in the `query_text` parameter.
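The entity search marker syntax is easy to get wrong, so wrapping it in a tiny formatter helps. A minimal sketch; the `entity_query` helper is hypothetical and only applies the `<@...>` convention documented for the `query_text` parameter:

```python
def entity_query(entity_id: str, description: str) -> str:
    """Wrap an entity ID in the <@...> markers that query_text expects
    for entity search (beta), e.g. "<@entity123> is walking"."""
    return f"<@{entity_id}> {description}"

# Hypothetical usage: find entity123 walking, searching visual content only.
# client.search.create(
#     index_id="...",
#     search_options=["visual"],
#     query_text=entity_query("entity123", "is walking"),
# )
```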
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.search.create(
    index_id="index_id",
    search_options=["visual"],
)
```
index_id:
strβ The unique identifier of the index to search.
-
search_options:
typing.List[SearchCreateRequestSearchOptionsItem]Specifies the modalities the video understanding model uses to find relevant information.
Available options:
visual: Searches visual content.audio: Searches non-speech audio.transcription: Spoken words
For guidance, see the Search options section.
-
query_media_type:
typing.Optional[SearchCreateRequestQueryMediaType]β The type of media you wish to use. This parameter is required for media queries. For example, to perform an image-based search, set this parameter toimage. Usequery_texttogether with this parameter when you want to perform a composed image+text search.
-
query_media_url:
typing.Optional[str]The publicly accessible URL of a media file to use as a query. This parameter is required for media queries if
query_media_fileis not provided.You can provide up to 10 images by specifying this parameter multiple times (Marengo 3.0 only):
--form query_media_url=https://example.com/image1.jpg \ --form query_media_url=https://example.com/image2.jpg
-
query_media_file: `from future import annotations
typing.Optional[core.File]` β See core.File for more documentation
-
query_text:
`typing.Optional[str]` — The text query to search for. This parameter is required for text queries. Note that the platform supports full natural language-based search. You can use this parameter together with `query_media_type` and `query_media_url` or `query_media_file` to perform a composed image+text search.
If you're using the Entity Search feature to search for specific persons in your video content, you must enclose the unique identifier of your entity between the `<@` and `>` markers. For example, to search for an entity with the ID `entity123`, use `<@entity123> is walking` as your query.
Marengo supports up to 500 tokens per query.
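As an illustration of the marker format above, a tiny helper (hypothetical, not part of the SDK) can assemble an entity query string:

```python
# Hypothetical helper, not part of the twelvelabs SDK: wraps an entity ID
# in the <@ and > markers so it can be used inside query_text.
def entity_query(entity_id: str, rest_of_query: str) -> str:
    return f"<@{entity_id}> {rest_of_query}"

query = entity_query("entity123", "is walking")
# query == "<@entity123> is walking"
```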
-
transcription_options:
`typing.Optional[typing.List[SearchCreateRequestTranscriptionOptionsItem]]` — Specifies how the platform matches your text query with the words spoken in the video. This parameter applies only when the `search_options` parameter contains the `transcription` value.
Available options:
- `lexical`: Exact word matching
- `semantic`: Meaning-based matching
For details on when to use each option, see the Transcription options section.
Default: `["lexical", "semantic"]`.
-
group_by:
`typing.Optional[SearchCreateRequestGroupBy]` — Use this parameter to group or ungroup items in a response. It can take one of the following values:
- `video`: The platform will group the matching video clips in the response by video.
- `clip`: The matching video clips in the response will not be grouped.
Default: `clip`.
-
operator:
`typing.Optional[SearchCreateRequestOperator]` — Combines multiple search options using `or` or `and`. Use `and` to find segments matching all search options. Use `or` to find segments matching any search option. For detailed guidance on using this parameter, see the Combine multiple modalities section.
Default: `or`.
-
page_limit:
`typing.Optional[int]` — The number of items to return on each page. When grouping by video, this parameter represents the number of videos per page. Otherwise, it represents the maximum number of video clips per page.
Max: `50`.
-
filter:
`typing.Optional[str]` — Specifies a stringified JSON object to filter your search results. Supports both system-generated metadata (example: video ID, duration) and user-defined metadata.
Syntax for filtering
The following table describes the supported data types, operators, and filter syntax:

| Data type | Operator | Description | Syntax |
| --- | --- | --- | --- |
| String | `=` | Matches results equal to the specified value. | `{"field": "value"}` |
| Array of strings | `=` | Matches results with any value in the specified array. Supported only for `id`. | `{"id": ["value1", "value2"]}` |
| Numeric (integer, float) | `=`, `lte`, `gte` | Matches results equal to or within a range of the specified value. | `{"field": number}` or `{"field": {"gte": number, "lte": number}}` |
| Boolean | `=` | Matches results equal to the specified boolean value. | `{"field": true}` or `{"field": false}` |

**System-generated metadata**
The table below describes the system-generated metadata available for filtering your search results:

| Field name | Description | Type | Example |
| --- | --- | --- | --- |
| `id` | Filters by specific video IDs. | Array of strings | `{"id": ["67cec9caf45d9b64a58340fc", "67cec9baf45d9b64a58340fa"]}` |
| `duration` | Filters based on the duration of the video containing the segment that matches your query. | Number or object with `gte` and `lte` | `{"duration": 600}` or `{"duration": {"gte": 600, "lte": 800}}` |
| `width` | Filters by video width (in pixels). | Number or object with `gte` and `lte` | `{"width": 1920}` or `{"width": {"gte": 1280, "lte": 1920}}` |
| `height` | Filters by video height (in pixels). | Number or object with `gte` and `lte` | `{"height": 1080}` or `{"height": {"gte": 720, "lte": 1080}}` |
| `size` | Filters by video size (in bytes). | Number or object with `gte` and `lte` | `{"size": 1048576}` or `{"size": {"gte": 1048576, "lte": 5242880}}` |
| `filename` | Filters by the exact file name. | String | `{"filename": "Animal Encounters part 1"}` |

**User-defined metadata**
To filter by user-defined metadata:
- Add metadata to your video by calling the `PUT` method of the `/indexes/:index-id/videos/:video-id` endpoint.
- Reference the custom field in your filter object. For example, to filter videos where a custom field named `needsReview` of type boolean is `true`, use `{"needs_review": true}`.
For more details and examples, see the Filter search results page.
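Since `filter` expects a stringified JSON object, one way to build it is with `json.dumps`. The field names below follow the tables above; the values are illustrative:

```python
import json

# Build the stringified JSON object expected by the `filter` parameter.
# Field names follow the filtering tables above; values are illustrative.
filter_obj = {
    "id": ["67cec9caf45d9b64a58340fc", "67cec9baf45d9b64a58340fa"],
    "duration": {"gte": 600, "lte": 800},
    "needs_review": True,  # user-defined boolean field
}
filter_param = json.dumps(filter_obj)  # pass this string as `filter`
```

Serializing with `json.dumps` also ensures booleans are emitted in JSON form (`true`/`false`) rather than Python's `True`/`False`.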
-
include_user_metadata:
`typing.Optional[bool]` — Specifies whether to include user-defined metadata in the search results.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.search.retrieve(...)
-
-
-
Use this endpoint to retrieve a specific page of search results.
When you use pagination, you will not be charged for retrieving subsequent pages of results.
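The token-based pagination flow can be sketched as a loop. Here `fetch_page` is a stub standing in for a retrieve call that takes a `page_token`; the token names and payload shape are made up for illustration:

```python
# Stub standing in for a page-retrieval call such as
# client.search.retrieve(page_token=...). Tokens and payloads are invented.
def fetch_page(token):
    pages = {
        "t1": {"data": [1, 2], "next_page_token": "t2"},
        "t2": {"data": [3], "next_page_token": None},
    }
    return pages[token]

def collect_all(first_token):
    # Follow next-page tokens until none remain.
    results, token = [], first_token
    while token:
        page = fetch_page(token)
        results.extend(page["data"])
        token = page.get("next_page_token")
    return results

collect_all("t1")  # [1, 2, 3]
```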
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.search.retrieve(
    page_token="1234567890",
    include_user_metadata=True,
)
```
-
-
-
page_token:
`str` — A token that identifies the page to retrieve.
-
include_user_metadata:
`typing.Optional[bool]` — Specifies whether to include user-defined metadata in the search results.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.analyze_async.tasks.list(...)
-
-
-
This method returns a list of the analysis tasks in your account. The platform returns your analysis tasks sorted by creation date, with the newest at the top of the list.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.analyze_async.tasks.list(
    page=1,
    page_limit=10,
    status="queued",
)
```
-
-
-
page:
`typing.Optional[int]` — A number that identifies the page to retrieve.
Default: `1`.
-
page_limit:
`typing.Optional[int]` — The number of items to return on each page.
Default: `10`. Max: `50`.
-
status:
`typing.Optional[AnalyzeTaskStatus]` — Filter analysis tasks by status. Possible values: `queued`, `pending`, `processing`, `ready`, `failed`.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.analyze_async.tasks.create(...)
-
-
-
This method asynchronously analyzes your videos and generates fully customizable text based on your prompts.
- Minimum duration: 4 seconds
- Maximum duration: 2 hours
- Formats: [FFmpeg supported formats](https://ffmpeg.org/ffmpeg-formats.html)
- Resolution: 360x360 to 5184x2160 pixels
- Aspect ratio: Between 1:1 and 1:2.4, or between 2.4:1 and 1:1.

When to use this method:
- Analyze videos longer than 1 hour
- Process videos asynchronously without blocking your application
Do not use this method for:
- Videos for which you need immediate results or real-time streaming. Use the `POST` method of the `/analyze` endpoint instead.
Analyzing videos asynchronously requires three steps:
- Create an analysis task using this method. The platform returns a task ID.
- Poll the status of the task using the `GET` method of the `/analyze/tasks/{task_id}` endpoint. Wait until the status is `ready`.
- Retrieve the results from the response when the status is `ready` using the `GET` method of the `/analyze/tasks/{task_id}` endpoint.
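The polling step above can be sketched as a loop. Here `get_status` is a stub standing in for retrieving the task's status; the stubbed status sequence is made up for illustration:

```python
import time

# Stubbed status sequence standing in for repeated task-status retrievals.
_statuses = iter(["queued", "processing", "ready"])

def get_status(task_id: str) -> str:
    return next(_statuses)

def wait_until_done(task_id: str, interval: float = 0.0) -> str:
    # Poll until the task reaches a terminal status (`ready` or `failed`).
    while True:
        status = get_status(task_id)
        if status in ("ready", "failed"):
            return status
        time.sleep(interval)

final_status = wait_until_done("64f8d2c7e4a1b37f8a9c5d12")
```

In a real client the interval would be non-zero (for example, a few seconds with backoff) to respect rate limits.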
-
-
-
```python
from twelvelabs import TwelveLabs, VideoContext_Url

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.analyze_async.tasks.create(
    video=VideoContext_Url(
        url="https://example.com/video.mp4",
    ),
    prompt="Generate a detailed summary of this video in 3-4 sentences",
    temperature=0.2,
    max_tokens=1000,
)
```
-
-
-
video:
VideoContext
-
prompt:
AnalyzeTextPrompt
-
temperature:
typing.Optional[AnalyzeTemperature]
-
max_tokens:
typing.Optional[AnalyzeMaxTokens]
-
response_format:
typing.Optional[ResponseFormat]
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.analyze_async.tasks.retrieve(...)
-
-
-
This method retrieves the status and results of an analysis task.
Task statuses:
- `queued`: The task is waiting to be processed.
- `pending`: The task is queued and waiting to start.
- `processing`: The platform is analyzing the video.
- `ready`: Processing is complete. Results are available in the response.
- `failed`: The task failed. No results were generated.
Poll this method until `status` is `ready` or `failed`. When `status` is `ready`, use the results from the response.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.analyze_async.tasks.retrieve(
    task_id="64f8d2c7e4a1b37f8a9c5d12",
)
```
-
-
-
task_id:
`str` — The unique identifier of the analysis task.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.analyze_async.tasks.delete(...)
-
-
-
This method deletes an analysis task. You can only delete tasks that are not currently being processed.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.analyze_async.tasks.delete(
    task_id="64f8d2c7e4a1b37f8a9c5d12",
)
```
-
-
-
task_id:
`str` — The unique identifier of the analysis task.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.embed.tasks.list(...)
-
-
This method will be deprecated in a future version. Migrate to the [Embed API v2](/v1.3/api-reference/create-embeddings-v2) for continued support and access to new features.
This method returns a list of the video embedding tasks in your account. The platform returns your video embedding tasks sorted by creation date, with the newest at the top of the list.
- Video embeddings are stored for seven days.
- When you invoke this method without specifying the `started_at` and `ended_at` parameters, the platform returns all the video embedding tasks created within the last seven days.
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.embed.tasks.list(
    started_at="2024-03-01T00:00:00Z",
    ended_at="2024-03-01T00:00:00Z",
    status="processing",
    page=1,
    page_limit=10,
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page
```
-
-
-
started_at:
`typing.Optional[str]` — Retrieve the embedding tasks that were created after the given date and time, expressed in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ").
-
ended_at:
`typing.Optional[str]` — Retrieve the embedding tasks that were created before the given date and time, expressed in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ").
-
status:
`typing.Optional[str]` — Filter the embedding tasks by their current status.
Values: `processing`, `ready`, or `failed`.
-
page:
`typing.Optional[int]` — A number that identifies the page to retrieve.
Default: `1`.
-
page_limit:
`typing.Optional[int]` — The number of items to return on each page.
Default: `10`. Max: `50`.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.embed.tasks.create(...)
-
-
-
This endpoint will be deprecated in a future version. Migrate to the [Embed API v2](/v1.3/api-reference/create-embeddings-v2) for continued support and access to new features.
This method creates a new video embedding task that uploads a video to the platform and creates one or multiple video embeddings.
This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.

Upload options:
- Local file: Use the `video_file` parameter.
- Publicly accessible URL: Use the `video_url` parameter.

Specify at least one option. If both are provided, `video_url` takes precedence.
Your video files must meet the requirements. This endpoint allows you to upload files up to 2 GB in size. To upload larger files, use the Multipart Upload API.
- The Marengo video understanding model generates embeddings for all modalities in the same latent space. This shared space enables any-to-any searches across different types of content.
- Video embeddings are stored for seven days.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.tasks.create(
    model_name="model_name",
)
```
-
-
-
model_name:
`str` — The name of the model you want to use. The following models are available:
- `marengo3.0`: Enhanced model with sports intelligence and extended content support.
-
video_file:
`typing.Optional[core.File]` — See core.File for more documentation.
-
video_url:
`typing.Optional[str]` — Specify this parameter to upload a video from a publicly accessible URL.
-
video_start_offset_sec:
`typing.Optional[float]` — The start offset in seconds from the beginning of the video where processing should begin. Specifying 0 means starting from the beginning of the video.
Default: 0. Min: 0. Max: Duration of the video minus `video_clip_length`.
-
video_end_offset_sec:
`typing.Optional[float]` — The end offset in seconds from the beginning of the video where processing should stop.
Ensure the following when you specify this parameter:
- The end offset does not exceed the total duration of the video file.
- The end offset is greater than the start offset.
- You must set both the start and end offsets. Setting only one of them is not permitted and results in an error.
Min: `video_start_offset_sec` + `video_clip_length`. Max: Duration of the video file.
-
video_clip_length:
`typing.Optional[float]` — The desired duration in seconds for each clip for which the platform generates an embedding. Ensure that the clip length does not exceed the interval between the start and end offsets.
Default: 6. Min: 2. Max: 10.
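The offset and clip-length constraints above can be checked client-side before submitting a task. This is a hypothetical helper, not part of the SDK, encoding the documented rules:

```python
# Hypothetical client-side check, not part of the twelvelabs SDK.
# Rules encoded: both offsets set together; 0 <= start <= duration - clip_length;
# start + clip_length <= end <= duration; 2 <= clip_length <= 10.
def validate_offsets(duration, start=None, end=None, clip_length=6.0):
    if (start is None) != (end is None):
        return False  # setting only one offset is not permitted
    if not 2.0 <= clip_length <= 10.0:
        return False
    if start is None:
        return True  # no offsets: process the whole video
    return (
        0 <= start <= duration - clip_length
        and start + clip_length <= end <= duration
    )

validate_offsets(60, start=10, end=30)  # True
validate_offsets(60, start=10)          # False: end offset missing
```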
-
video_embedding_scope:
`typing.Optional[typing.List[TasksCreateRequestVideoEmbeddingScopeItem]]` — Defines the scope of video embedding generation. Valid values are the following:
- `clip`: Creates embeddings for each video segment of `video_clip_length` seconds, from `video_start_offset_sec` to `video_end_offset_sec`.
- `clip` and `video`: Creates embeddings for video segments and the entire video. Use the `video` scope for videos up to 10-30 seconds to maintain optimal performance.
To create embeddings for segments and the entire video in the same request, include this parameter twice, as shown below:
```
--form video_embedding_scope=clip \
--form video_embedding_scope=video
```
Default: `clip`.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.embed.tasks.status(...)
-
-
-
This endpoint will be deprecated in a future version. Migrate to the [Embed API v2](/v1.3/api-reference/create-embeddings-v2) for continued support and access to new features.
This method retrieves the status of a video embedding task. Check the task status of a video embedding task to determine when you can retrieve the embedding.
A task can have one of the following statuses:
- `processing`: The platform is creating the embeddings.
- `ready`: Processing is complete. Retrieve the embeddings by invoking the `GET` method of the `/embed/tasks/{task_id}` endpoint.
- `failed`: The task could not be completed, and the embeddings haven't been created.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.tasks.status(
    task_id="663da73b31cdd0c1f638a8e6",
)
```
-
-
-
task_id:
`str` — The unique identifier of your video embedding task.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.embed.tasks.retrieve(...)
-
-
-
This method retrieves embeddings for a specific video embedding task. Ensure the task status is `ready` before invoking this method. Refer to the Retrieve the status of a video embedding task page for instructions on checking the task status.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.tasks.retrieve(
    task_id="663da73b31cdd0c1f638a8e6",
)
```
-
-
-
task_id:
`str` — The unique identifier of your video embedding task.
-
embedding_option:
`typing.Optional[typing.Union[TasksRetrieveRequestEmbeddingOptionItem, typing.Sequence[TasksRetrieveRequestEmbeddingOptionItem]]]` — Specifies which types of embeddings to retrieve. Values: `visual`, `audio`, `transcription`.
The platform returns all available embeddings when you omit this parameter. For details, see the Embedding options section.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.embed.v_2.create(...)
-
-
-
This endpoint synchronously creates embeddings for multimodal content and returns the results immediately in the response.
When to use this endpoint:
- Create embeddings for text, images, audio, or video content
- Retrieve immediate results without waiting for background processing
- Process audio or video content up to 10 minutes in duration
Do not use this endpoint for:
- Audio or video content longer than 10 minutes. Use the `POST` method of the `/embed-v2/tasks` endpoint instead.
Images:
- Formats: JPEG, PNG
- Minimum size: 128x128 pixels
- Maximum file size: 5 MB
Audio and video:
- Maximum duration: 10 minutes
- Maximum file size for base64 encoded strings: 36 MB
- Audio formats: WAV (uncompressed), MP3 (lossy), FLAC (lossless)
- Video formats: FFmpeg supported formats
- Video resolution: 360x360 to 5184x2160 pixels
- Aspect ratio: Between 1:1 and 1:2.4, or between 2.4:1 and 1:1
-
-
-
```python
from twelvelabs import TextInputRequest, TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.v_2.create(
    input_type="text",
    model_name="marengo3.0",
    text=TextInputRequest(
        input_text="man walking a dog",
    ),
)
```
-
-
-
input_type:
`CreateEmbeddingsRequestInputType` — The type of content for the embeddings.
Values:
- `audio`: Creates embeddings for an audio file
- `video`: Creates embeddings for a video file
- `image`: Creates embeddings for an image file
- `text`: Creates embeddings for text input
- `text_image`: Creates embeddings for text and an image
- `multi_input`: Creates a single embedding from up to 10 images. You can optionally include text to provide context. To reference specific images in your text, use placeholders in the following format: `<@name>`, where `name` matches the `name` field of a media source.
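For `multi_input`, a small check (hypothetical, not part of the SDK) can verify that every `<@name>` placeholder in the text matches the `name` field of one of the media sources before sending the request:

```python
import re

# Hypothetical helper, not part of the twelvelabs SDK: returns placeholder
# names in the text that do not match any provided media-source name.
def unresolved_placeholders(text, source_names):
    found = re.findall(r"<@([^>]+)>", text)
    return [name for name in found if name not in set(source_names)]

unresolved_placeholders(
    "Compare <@front> with <@side>.", ["front", "back"]
)  # ["side"]
```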
-
model_name:
`CreateEmbeddingsRequestModelName` — The video understanding model to use. Value: `"marengo3.0"`.
-
text:
typing.Optional[TextInputRequest]
-
image:
typing.Optional[ImageInputRequest]
-
text_image:
typing.Optional[TextImageInputRequest]
-
audio:
typing.Optional[AudioInputRequest]
-
video:
typing.Optional[VideoInputRequest]
-
multi_input:
typing.Optional[MultiInputRequest]
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.embed.v_2.tasks.list(...)
-
-
-
This method returns a list of the async embedding tasks in your account. The platform returns your async embedding tasks sorted by creation date, with the newest at the top of the list.
- Embeddings are stored for seven days.
- When you invoke this method without specifying the `started_at` and `ended_at` parameters, the platform returns all the async embedding tasks created within the last seven days.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.embed.v_2.tasks.list(
    started_at="2024-03-01T00:00:00Z",
    ended_at="2024-03-01T00:00:00Z",
    status="processing",
    page=1,
    page_limit=10,
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page
```
-
-
-
started_at:
`typing.Optional[str]` — Retrieve the embedding tasks that were created after the given date and time, expressed in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ").
-
ended_at:
`typing.Optional[str]` — Retrieve the embedding tasks that were created before the given date and time, expressed in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ").
-
status:
`typing.Optional[str]` — Filter the embedding tasks by their current status.
Values: `processing`, `ready`, or `failed`.
-
page:
`typing.Optional[int]` — A number that identifies the page to retrieve.
Default: `1`.
-
page_limit:
`typing.Optional[int]` — The number of items to return on each page.
Default: `10`. Max: `50`.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.embed.v_2.tasks.create(...)
-
-
-
This endpoint creates embeddings for audio and video content asynchronously.
When to use this endpoint:
- Process audio or video files longer than 10 minutes
- Process files up to 4 hours in duration
Audio:
- Minimum duration: 4 seconds
- Maximum duration: 4 hours
- Maximum file size: 2 GB
- Formats: WAV (uncompressed), MP3 (lossy), FLAC (lossless)
Creating embeddings asynchronously requires three steps:
- Create a task using this endpoint. The platform returns a task ID.
- Poll for the status of the task using the `GET` method of the `/embed-v2/tasks/{task_id}` endpoint. Wait until the status is `ready`.
- Retrieve the embeddings from the response when the status is `ready` using the `GET` method of the `/embed-v2/tasks/{task_id}` endpoint.
-
-
-
```python
from twelvelabs import MediaSource, TwelveLabs, VideoInputRequest

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.v_2.tasks.create(
    input_type="video",
    model_name="marengo3.0",
    video=VideoInputRequest(
        media_source=MediaSource(
            url="https://user-bucket.com/video/long-video.mp4",
        ),
        embedding_option=["visual", "audio"],
        embedding_scope=["clip", "asset"],
        embedding_type=["separate_embedding", "fused_embedding"],
    ),
)
```
-
-
-
input_type:
`CreateAsyncEmbeddingRequestInputType` — The type of content for the embeddings.
Values:
- `audio`: Audio files
- `video`: Video content
-
model_name:
`CreateAsyncEmbeddingRequestModelName` — The model you wish to use. Value: `"marengo3.0"`.
-
audio:
typing.Optional[AudioInputRequest]
-
video:
typing.Optional[VideoInputRequest]
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.embed.v_2.tasks.retrieve(...)
-
-
-
This method retrieves the status and the results of an async embedding task.
Task statuses:
- `processing`: The platform is creating the embeddings.
- `ready`: Processing is complete. Embeddings are available in the response.
- `failed`: The task failed. Embeddings were not created.
Invoke this method repeatedly until the `status` field is `ready`. When `status` is `ready`, use the embeddings from the response.
Embeddings are stored for seven days.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.embed.v_2.tasks.retrieve(
    task_id="64f8d2c7e4a1b37f8a9c5d12",
)
```
-
-
-
task_id:
`str` — The unique identifier of the embedding task.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.entity_collections.entities.list(...)
-
-
-
This method returns a list of the entities in the specified entity collection.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.entity_collections.entities.list(
    entity_collection_id="6298d673f1090f1100476d4c",
    page=1,
    page_limit=10,
    name="My entity",
    status="processing",
    sort_by="created_at",
    sort_option="desc",
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page
```
-
-
-
entity_collection_id:
`str` — The unique identifier of the entity collection for which the platform will retrieve the entities.
-
page:
`typing.Optional[int]` — A number that identifies the page to retrieve.
Default: `1`.
-
page_limit:
`typing.Optional[int]` — The number of items to return on each page.
Default: `10`. Max: `50`.
-
name:
`typing.Optional[str]` — Filter entities by name.
-
status:
`typing.Optional[EntitiesListRequestStatus]` — Filter entities by status.
-
sort_by:
`typing.Optional[EntitiesListRequestSortBy]` — The field to sort on. The following options are available:
- `created_at`: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the entity was created.
- `updated_at`: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the entity was updated.
- `name`: Sorts by the name.
-
sort_option:
`typing.Optional[str]` — The sorting direction. The following options are available:
- `asc`
- `desc`
Default: `desc`.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.entity_collections.entities.create(...)
-
-
-
This method creates an entity within a specified entity collection. Each entity must be associated with at least one asset.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.create(
    entity_collection_id="6298d673f1090f1100476d4c",
    name="My entity",
    asset_ids=["6298d673f1090f1100476d4c", "6298d673f1090f1100476d4d"],
)
```
-
-
-
entity_collection_id:
`str` — The unique identifier of the entity collection in which to create the entity.
-
name:
`str` — The name of the entity. Make sure you use a succinct and descriptive name.
-
asset_ids:
`typing.Sequence[str]` — An array of asset IDs to associate with the entity. You must provide at least one value.
-
description:
`typing.Optional[str]` — An optional description of the entity.
-
metadata:
`typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]` — Optional metadata for the entity, provided as key-value pairs to store additional context or attributes. Use metadata to categorize or describe the entity for easier management and search. Keys must be of type `string`, and values can be of type `string`, `integer`, `float`, or `boolean`.
Example:
```
{ "sport": "soccer", "teamId": 42, "performanceScore": 8.7, "isActive": true }
```
To store complex data types such as objects or arrays, convert them to string values before including them in the metadata.
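The conversion of complex values to strings can be done as a small pre-processing step. This helper is hypothetical, not part of the SDK; it keeps scalar types as-is and JSON-encodes everything else:

```python
import json

# Hypothetical pre-processing helper, not part of the twelvelabs SDK:
# keep str/int/float/bool metadata values as-is and JSON-encode any
# complex value (object, array) into a string, per the note above.
def prepare_metadata(metadata):
    out = {}
    for key, value in metadata.items():
        if isinstance(value, (str, int, float, bool)):
            out[key] = value
        else:
            out[key] = json.dumps(value)
    return out

prepare_metadata({"sport": "soccer", "teamId": 42, "stats": {"goals": 3}})
# {"sport": "soccer", "teamId": 42, "stats": '{"goals": 3}'}
```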
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.entity_collections.entities.create_bulk(...)
-
-
-
This method creates multiple entities within a specified entity collection in a single request. Each entity must be associated with at least one asset. This endpoint is useful for efficiently adding multiple entities, such as a roster of players or a group of characters.
-
-
-
```python
from twelvelabs import TwelveLabs
from twelvelabs.entity_collections.entities import (
    EntitiesCreateBulkRequestEntitiesItem,
)

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.create_bulk(
    entity_collection_id="6298d673f1090f1100476d4c",
    entities=[
        EntitiesCreateBulkRequestEntitiesItem(
            name="My entity",
            asset_ids=["6298d673f1090f1100476d4c", "6298d673f1090f1100476d4d"],
        )
    ],
)
```
-
-
-
entity_collection_id:
`str` — The unique identifier of the entity collection in which to create the entities.
-
entities:
typing.Sequence[EntitiesCreateBulkRequestEntitiesItem]
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.entity_collections.entities.retrieve(...)
-
-
-
This method retrieves details about the specified entity.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.retrieve(
    entity_collection_id="6298d673f1090f1100476d4c",
    entity_id="6298d673f1090f1100476d4c",
)
```
-
-
-
entity_collection_id:
`str` — The unique identifier of the entity collection.
-
entity_id:
`str` — The unique identifier of the entity to retrieve.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.entity_collections.entities.delete(...)
-
-
-
This method deletes a specific entity from an entity collection. It permanently removes the entity and its associated data, but does not affect the assets associated with this entity.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.delete(
    entity_collection_id="6298d673f1090f1100476d4c",
    entity_id="6298d673f1090f1100476d4c",
)
```
-
-
-
entity_collection_id:
`str` — The unique identifier of the entity collection containing the entity to be deleted.
-
entity_id:
`str` — The unique identifier of the entity to delete.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.entity_collections.entities.update(...)
-
-
-
This method updates the specified entity within an entity collection. This operation allows modification of the entity's name, description, or metadata. Note that this endpoint does not affect the assets associated with the entity.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.update(
    entity_collection_id="6298d673f1090f1100476d4c",
    entity_id="6298d673f1090f1100476d4c",
)
```
-
-
-
entity_collection_id:
`str` — The unique identifier of the entity collection containing the entity to be updated.
-
entity_id:
`str` — The unique identifier of the entity to update.
-
name:
`typing.Optional[str]` — The new name for the entity.
-
description:
`typing.Optional[str]` — An updated description for the entity.
-
metadata:
`typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]` — Updated metadata for the entity. If provided, this completely replaces the existing metadata. Use this to store custom key-value pairs related to the entity.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.entity_collections.entities.create_assets(...)
-
-
-
This method adds assets to the specified entity within an entity collection. Assets are used to identify the entity in media content, and adding multiple assets can improve the accuracy of entity recognition in searches.
When assets are added, the entity may temporarily enter the "processing" state while the platform updates the necessary data. Once processing is complete, the entity status will return to "ready."
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.create_assets(
    entity_collection_id="6298d673f1090f1100476d4c",
    entity_id="6298d673f1090f1100476d4c",
    asset_ids=["6298d673f1090f1100476d4c", "6298d673f1090f1100476d4d"],
)
```
-
-
-
entity_collection_id:
`str` — The unique identifier of the entity collection that contains the entity to which assets will be added.
-
entity_id:
`str` — The unique identifier of the entity within the specified entity collection to which the assets will be added.
-
asset_ids:
`typing.Sequence[str]` — An array of asset IDs to add to the entity.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.entity_collections.entities.delete_assets(...)
-
-
-
This method removes assets from the specified entity. Assets are used to identify the entity in media content, and removing assets may impact the accuracy of entity recognition in searches if too few assets remain.
When assets are removed, the entity may temporarily enter a "processing" state while the system updates the necessary data. Once processing is complete, the entity status will return to "ready."
- This operation only removes the association between the entity and the specified assets; it does not delete the assets themselves. - An entity must always have at least one asset associated with it. You can't remove the last asset from an entity.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.entity_collections.entities.delete_assets(
    entity_collection_id="6298d673f1090f1100476d4c",
    entity_id="6298d673f1090f1100476d4c",
    asset_ids=["6298d673f1090f1100476d4e", "6298d673f1090f1100476d4f"],
)
```
-
-
-
entity_collection_id:
`str` — The unique identifier of the entity collection that contains the entity from which assets will be removed.
-
entity_id:
`str` — The unique identifier of the entity within the specified entity collection from which the assets will be removed.
-
asset_ids:
`typing.Sequence[str]` — An array of asset IDs to remove from the entity.
-
request_options:
`typing.Optional[RequestOptions]` — Request-specific configuration.
-
-
client.indexes.indexed_assets.list(...)
-
-
-
This method returns a list of the indexed assets in the specified index. By default, the platform returns your indexed assets sorted by creation date, with the newest at the top of the list.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.indexes.indexed_assets.list(
    index_id="6298d673f1090f1100476d4c",
    page=1,
    page_limit=10,
    sort_by="created_at",
    sort_option="desc",
    filename="01.mp4",
    created_at="2024-08-16T16:53:59Z",
    updated_at="2024-08-16T16:53:59Z",
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page
```
-
-
-
index_id:
`str` — The unique identifier of the index for which the platform will retrieve the indexed assets.
-
page:
typing.Optional[int] - A number that identifies the page to retrieve. Default: 1.
-
page_limit:
typing.Optional[int] - The number of items to return on each page. Default: 10. Max: 50.
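The `page`/`page_limit` pair is plain offset pagination. As a rough sketch of the slicing the platform applies (a hypothetical helper for illustration, not part of the SDK), assuming `page` is 1-based and `page_limit` is capped at 50:

```python
def page_slice(items, page=1, page_limit=10):
    """Return the items for a 1-based page, capping page_limit at 50.

    Hypothetical illustration of offset pagination; not part of the SDK.
    """
    page_limit = min(page_limit, 50)  # the API caps page_limit at 50
    start = (page - 1) * page_limit
    return items[start:start + page_limit]

print(page_slice(list(range(25)), page=3, page_limit=10))  # [20, 21, 22, 23, 24]
```

Requesting a page past the end simply yields an empty list under this model.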
-
sort_by:
typing.Optional[str] - The field to sort on. The following options are available:
- created_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was created.
- updated_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was updated.
Default: created_at.
-
sort_option:
typing.Optional[str] - The sorting direction. The following options are available:
- asc
- desc
Default: desc.
-
status:
typing.Optional[ typing.Union[ IndexedAssetsListRequestStatusItem, typing.Sequence[IndexedAssetsListRequestStatusItem], ] ] - Filter by one or more indexing task statuses. The following options are available:
- ready: The indexed asset has been successfully uploaded and indexed.
- pending: The indexed asset is pending.
- queued: The indexed asset is queued.
- indexing: The indexed asset is being indexed.
- failed: The indexed asset indexing task failed.
To filter by multiple statuses, specify the status parameter once for each value: status=ready&status=failed
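On the wire, passing multiple statuses is standard repeated-key query encoding, which Python's standard library produces with `urlencode(..., doseq=True)`. A minimal sketch, independent of the SDK:

```python
from urllib.parse import urlencode

# doseq=True emits one `status=` pair per value in the sequence.
query = urlencode({"status": ["ready", "failed"]}, doseq=True)
print(query)  # status=ready&status=failed
```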
-
filename:
typing.Optional[str] - Filter by filename.
-
duration:
typing.Optional[IndexedAssetsListRequestDuration] - Filter by duration in seconds. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.
-
fps:
typing.Optional[IndexedAssetsListRequestFps] - Filter by frames per second. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.
-
width:
typing.Optional[IndexedAssetsListRequestWidth] - Filter by width in pixels. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.
-
height:
typing.Optional[IndexedAssetsListRequestHeight] - Filter by height in pixels. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.
-
size:
typing.Optional[IndexedAssetsListRequestSize] - Filter by size in bytes. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.
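The gte/lte range objects used by duration, fps, width, height, and size all follow the same rule, and an exact match is simply gte == lte. A small sketch of that matching rule (illustrative only, not the platform's implementation):

```python
def matches_range(value, range_filter):
    """Return True if value satisfies a {gte, lte} range filter."""
    if "gte" in range_filter and value < range_filter["gte"]:
        return False
    if "lte" in range_filter and value > range_filter["lte"]:
        return False
    return True

print(matches_range(120, {"gte": 60, "lte": 300}))  # True
print(matches_range(30, {"gte": 60}))               # False
print(matches_range(42, {"gte": 42, "lte": 42}))    # exact match: True
```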
-
created_at:
typing.Optional[str] - Filter indexed assets by the creation date and time of their associated indexing tasks, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns indexed assets created on or after the specified date and time.
-
updated_at:
typing.Optional[str] - This filter applies only to indexed assets updated using the PUT method of the /indexes/{index-id}/indexed-assets/{indexed-asset-id} endpoint. It filters indexed assets by the last update date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the indexed assets that were last updated on or after the specified date and time.
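The created_at and updated_at filters expect the RFC 3339 "YYYY-MM-DDTHH:mm:ssZ" shape, which the standard library can produce from a UTC datetime. A minimal sketch:

```python
from datetime import datetime, timezone

def to_rfc3339(dt: datetime) -> str:
    """Format a timezone-aware datetime as RFC 3339 with a trailing Z."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

stamp = to_rfc3339(datetime(2024, 8, 16, 16, 53, 59, tzinfo=timezone.utc))
print(stamp)  # 2024-08-16T16:53:59Z
```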
-
user_metadata:
typing.Optional[ typing.Dict[str, typing.Optional[IndexedAssetsListRequestUserMetadataValue]] ] - To enable filtering by custom fields, you must first add user-defined metadata to your video by calling the PUT method of the /indexes/:index-id/indexed-assets/:indexed-asset-id endpoint.
Examples:
- To filter on a string: ?category=recentlyAdded
- To filter on an integer: ?batchNumber=5
- To filter on a float: ?rating=9.3
- To filter on a boolean: ?needsReview=true
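The query strings in the examples above use JSON-style scalars (lowercase booleans, bare numbers). A hypothetical helper, not part of the SDK, that serializes a metadata dict the same way:

```python
from urllib.parse import quote

def metadata_query(metadata: dict) -> str:
    """Encode user-defined metadata filters as a query string.

    Booleans become lowercase true/false; numbers keep their plain form.
    Hypothetical illustration; not part of the SDK.
    """
    parts = []
    for key, value in metadata.items():
        if isinstance(value, bool):  # check bool before numbers: bool subclasses int
            text = "true" if value else "false"
        else:
            text = str(value)
        parts.append(f"{quote(key)}={quote(text)}")
    return "?" + "&".join(parts)

print(metadata_query({"category": "recentlyAdded", "needsReview": True}))
# ?category=recentlyAdded&needsReview=true
```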
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.indexed_assets.create(...)
-
-
-
This method indexes an uploaded asset to make it searchable and analyzable. Indexing processes your content and extracts information that enables the platform to search and analyze your videos.
This operation is asynchronous. The platform returns an indexed asset ID immediately and processes your content in the background. Monitor the indexing status to know when your content is ready to use.
Your asset must meet the requirements based on your workflow:
- Search: Marengo requirements
- Video analysis: Pegasus requirements.
If you want to both search and analyze your videos, the most restrictive requirements apply.
This endpoint is rate-limited. For details, see the [Rate limits](/v1.3/docs/get-started/rate-limits) page.
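Since indexing runs in the background, a common pattern is to poll the retrieve endpoint until the status leaves the in-flight states. A sketch with the fetch step injected so the loop stays SDK-agnostic; the helper name and the assumption that the retrieved object exposes a status string are illustrative:

```python
import time

IN_FLIGHT = {"pending", "queued", "indexing"}

def wait_until_indexed(fetch_status, poll_seconds=5.0, max_attempts=120):
    """Poll fetch_status() until it returns a terminal status.

    fetch_status is any zero-argument callable returning the current
    status string, e.g. one wrapping the retrieve-an-indexed-asset call.
    """
    for attempt in range(max_attempts):
        status = fetch_status()
        if status not in IN_FLIGHT:
            return status  # "ready" or "failed"
        if attempt + 1 < max_attempts:
            time.sleep(poll_seconds)
    raise TimeoutError("indexing did not finish in time")
```

Usage might look like `wait_until_indexed(lambda: client.indexes.indexed_assets.retrieve(index_id=..., indexed_asset_id=...).status)`, assuming the retrieved object exposes the status that way.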
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.indexed_assets.create(
    index_id="6298d673f1090f1100476d4c",
    asset_id="6298d673f1090f1100476d4c",
)
```
-
-
-
index_id:
str - The unique identifier of the index to which the asset will be indexed.
-
asset_id:
str - The unique identifier of the asset to index. The asset status must be ready. Use the Retrieve an asset method to check the status.
-
enable_video_stream:
typing.Optional[bool] - This parameter indicates whether the platform stores the video for streaming. When set to true, the platform stores the video, and you can retrieve its URL by calling the GET method of the /indexes/{index-id}/indexed-assets/{indexed-asset-id} endpoint. You can then use this URL to access the stream over the HLS protocol.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.indexed_assets.retrieve(...)
-
-
-
This method retrieves information about an indexed asset, including its status, metadata, and optional embeddings or transcription.
Use this method to:
-
Monitor the indexing progress:
- Call this endpoint after creating an indexed asset
- Check the status field until it shows ready
- Once ready, your content is available for search and analysis
-
Retrieve the asset metadata:
- Retrieve system metadata (duration, resolution, filename)
- Access user-defined metadata
-
Retrieve the embeddings:
- Include the embeddingOption parameter to retrieve video embeddings
- Requires the Marengo video understanding model to be enabled in your index
-
Retrieve transcriptions:
- Set the transcription parameter to true to retrieve spoken words from your video
-
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.indexed_assets.retrieve(
    index_id="6298d673f1090f1100476d4c",
    indexed_asset_id="6298d673f1090f1100476d4c",
    transcription=True,
)
```
-
-
-
index_id:
str - The unique identifier of the index to which the indexed asset has been uploaded.
-
indexed_asset_id:
str - The unique identifier of the indexed asset to retrieve.
-
embedding_option:
typing.Optional[ typing.Union[ IndexedAssetsRetrieveRequestEmbeddingOptionItem, typing.Sequence[IndexedAssetsRetrieveRequestEmbeddingOptionItem], ] ] - Specifies which types of embeddings to retrieve. Values: visual, audio, transcription. For details, see the Embedding options section.
To retrieve embeddings for a video, it must be indexed using the Marengo video understanding model. For details on enabling this model for an index, see the [Create an index](/reference/create-index) page.
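Because the parameter accepts either a single item or a sequence of items, callers often normalize the value before validating it. A sketch of that normalization, with plain strings standing in for the typed enum items (illustrative, not SDK code):

```python
ALLOWED = {"visual", "audio", "transcription"}

def normalize_embedding_option(option):
    """Accept a single option or a sequence of options; return a validated list."""
    options = [option] if isinstance(option, str) else list(option)
    invalid = [o for o in options if o not in ALLOWED]
    if invalid:
        raise ValueError(f"unsupported embedding option(s): {invalid}")
    return options

print(normalize_embedding_option("visual"))             # ['visual']
print(normalize_embedding_option(["visual", "audio"]))  # ['visual', 'audio']
```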
-
transcription:
typing.Optional[bool] - Specifies whether to retrieve a transcription of the spoken words.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.indexed_assets.delete(...)
-
-
-
This method deletes all the information about the specified indexed asset. This action cannot be undone.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.indexed_assets.delete(
    index_id="6298d673f1090f1100476d4c",
    indexed_asset_id="6298d673f1090f1100476d4c",
)
```
-
-
-
index_id:
str - The unique identifier of the index to which the indexed asset has been uploaded.
-
indexed_asset_id:
str - The unique identifier of the indexed asset to delete.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.indexed_assets.update(...)
-
-
-
This method updates one or more fields of the metadata of an indexed asset. You can also delete a field by setting it to null.
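Setting a field to null to delete it gives the update JSON-merge-patch-like semantics (in the spirit of RFC 7386). A sketch of how such a merge resolves, using Python's None as the JSON null (illustrative, not the platform's actual implementation):

```python
def merge_metadata(current: dict, patch: dict) -> dict:
    """Apply a partial update where a None value deletes the field."""
    merged = dict(current)
    for key, value in patch.items():
        if value is None:
            merged.pop(key, None)  # null removes the field
        else:
            merged[key] = value
    return merged

print(merge_metadata(
    {"category": "recentlyAdded", "rating": 9.3},
    {"rating": None, "needsReview": True},
))  # {'category': 'recentlyAdded', 'needsReview': True}
```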
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.indexed_assets.update(
    index_id="6298d673f1090f1100476d4c",
    indexed_asset_id="6298d673f1090f1100476d4c",
    user_metadata={
        "category": "recentlyAdded",
        "batchNumber": 5,
        "rating": 9.3,
        "needsReview": True,
    },
)
```
-
-
-
index_id:
str - The unique identifier of the index to which the indexed asset has been uploaded.
-
indexed_asset_id:
str - The unique identifier of the indexed asset to update.
-
user_metadata:
typing.Optional[UserMetadata]
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.videos.list(...)
-
-
-
This method will be deprecated in a future version. New implementations should use the List indexed assets method.
This method returns a list of the videos in the specified index. By default, the platform returns your videos sorted by creation date, with the newest at the top of the list.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
response = client.indexes.videos.list(
    index_id="6298d673f1090f1100476d4c",
    page=1,
    page_limit=10,
    sort_by="created_at",
    sort_option="desc",
    filename="01.mp4",
    created_at="2024-08-16T16:53:59Z",
    updated_at="2024-08-16T16:53:59Z",
)
for item in response:
    print(item)

# Alternatively, you can paginate page by page
for page in response.iter_pages():
    print(page)
```
-
-
-
index_id:
str - The unique identifier of the index for which the platform will retrieve the videos.
-
page:
typing.Optional[int] - A number that identifies the page to retrieve. Default: 1.
-
page_limit:
typing.Optional[int] - The number of items to return on each page. Default: 10. Max: 50.
-
sort_by:
typing.Optional[str] - The field to sort on. The following options are available:
- created_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was created.
- updated_at: Sorts by the time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"), when the item was updated.
Default: created_at.
-
sort_option:
typing.Optional[str] - The sorting direction. The following options are available:
- asc
- desc
Default: desc.
-
filename:
typing.Optional[str] - Filter by filename.
-
duration:
typing.Optional[VideosListRequestDuration] - Filter by duration in seconds. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.
-
fps:
typing.Optional[VideosListRequestFps] - Filter by frames per second. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.
-
width:
typing.Optional[VideosListRequestWidth] - Filter by width in pixels. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.
-
height:
typing.Optional[VideosListRequestHeight] - Filter by height in pixels. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.
-
size:
typing.Optional[VideosListRequestSize] - Filter by size in bytes. Pass an object with gte and/or lte for range filtering. For exact match, set both to the same value.
-
created_at:
typing.Optional[str] - Filter videos by the creation date and time of their associated indexing tasks, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the videos whose indexing tasks were created on or after the specified date and time.
-
updated_at:
typing.Optional[str] - This filter applies only to videos updated using the PUT method of the /indexes/{index-id}/videos/{video-id} endpoint. It filters videos by the last update date and time, in the RFC 3339 format ("YYYY-MM-DDTHH:mm:ssZ"). The platform returns the videos that were last updated on or after the specified date and time.
-
user_metadata:
typing.Optional[ typing.Dict[str, typing.Optional[VideosListRequestUserMetadataValue]] ] - To enable filtering by custom fields, you must first add user-defined metadata to your video by calling the PUT method of the /indexes/:index-id/videos/:video-id endpoint.
Examples:
- To filter on a string: ?category=recentlyAdded
- To filter on an integer: ?batchNumber=5
- To filter on a float: ?rating=9.3
- To filter on a boolean: ?needsReview=true
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.videos.retrieve(...)
-
-
-
This method will be deprecated in a future version. New implementations should use the Retrieve an indexed asset method.
This method retrieves information about the specified video.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.videos.retrieve(
    index_id="6298d673f1090f1100476d4c",
    video_id="6298d673f1090f1100476d4c",
    transcription=True,
)
```
-
-
-
index_id:
str - The unique identifier of the index to which the video has been uploaded.
-
video_id:
str - The unique identifier of the video to retrieve.
-
embedding_option:
typing.Optional[ typing.Union[ VideosRetrieveRequestEmbeddingOptionItem, typing.Sequence[VideosRetrieveRequestEmbeddingOptionItem], ] ] - Specifies which types of embeddings to retrieve. Values: visual, audio, transcription. For details, see the Embedding options section.
To retrieve embeddings for a video, it must be indexed using the Marengo video understanding model. For details on enabling this model for an index, see the [Create an index](/reference/create-index) page.
-
transcription:
typing.Optional[bool] - Specifies whether to retrieve a transcription of the spoken words for the indexed video.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.videos.delete(...)
-
-
-
This method will be deprecated in a future version. New implementations should use the Delete an indexed asset method.
This method deletes all the information about the specified indexed video. This action cannot be undone.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.videos.delete(
    index_id="6298d673f1090f1100476d4c",
    video_id="6298d673f1090f1100476d4c",
)
```
-
-
-
index_id:
str - The unique identifier of the index to which the video has been uploaded.
-
video_id:
str - The unique identifier of the video to delete.
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.indexes.videos.update(...)
-
-
-
This method will be deprecated in a future version. New implementations should use the Partial update indexed asset method.
This method updates one or more fields of the metadata of a video. You can also delete a field by setting it to null.
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.indexes.videos.update(
    index_id="6298d673f1090f1100476d4c",
    video_id="6298d673f1090f1100476d4c",
    user_metadata={
        "category": "recentlyAdded",
        "batchNumber": 5,
        "rating": 9.3,
        "needsReview": True,
    },
)
```
-
-
-
index_id:
str - The unique identifier of the index to which the video has been uploaded.
-
video_id:
str - The unique identifier of the video to update.
-
user_metadata:
typing.Optional[UserMetadata]
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.tasks.transfers.create(...)
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.transfers.create(
    integration_id="integration-id",
)
```
-
-
-
integration_id:
str
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.tasks.transfers.get_status(...)
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.transfers.get_status(
    integration_id="integration-id",
)
```
-
-
-
integration_id:
str
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-
client.tasks.transfers.get_logs(...)
-
-
-
```python
from twelvelabs import TwelveLabs

client = TwelveLabs(
    api_key="YOUR_API_KEY",
)
client.tasks.transfers.get_logs(
    integration_id="integration-id",
)
```
-
-
-
integration_id:
str
-
request_options:
typing.Optional[RequestOptions] - Request-specific configuration.
-
-