This document provides comprehensive API documentation for the SVECTOR SDK.
- Client Configuration
- Chat Completions
- Models
- Files
- Knowledge
- Error Handling
- Utilities
- Advanced Usage
## Client Configuration

```typescript
interface SVECTOROptions {
apiKey?: string; // Your SVECTOR API key
baseURL?: string; // API base URL (default: https://api.svector.co.in)
maxRetries?: number; // Maximum retry attempts (default: 2)
timeout?: number; // Request timeout in milliseconds (default: 600000)
fetch?: typeof fetch; // Custom fetch implementation
dangerouslyAllowBrowser?: boolean; // Allow browser usage (default: false)
}
```

```typescript
import { SVECTOR } from 'svector';
const client = new SVECTOR({
apiKey: 'your-api-key',
maxRetries: 3,
timeout: 30000,
});
```

## Chat Completions

### chat.create()

Creates a chat completion using SVECTOR's Spec-Chat models.
```typescript
interface ChatCompletionRequest {
model: string; // Model name (e.g., 'spec-3-turbo')
messages: ChatMessage[]; // Array of conversation messages
max_tokens?: number; // Maximum tokens to generate
temperature?: number; // Randomness (0.0 to 2.0)
stream?: boolean; // Enable streaming (use createStream instead)
files?: FileReference[]; // Files for RAG
}
interface ChatMessage {
role: 'system' | 'user' | 'assistant' | 'developer';
content: string;
}
interface FileReference {
type: 'file' | 'collection';
id: string;
}
```

```typescript
interface ChatCompletionResponse {
choices: ChatCompletionChoice[];
usage?: {
prompt_tokens: number;
completion_tokens: number;
total_tokens: number;
};
_request_id?: string;
}
interface ChatCompletionChoice {
message: {
role: string;
content: string;
};
index: number;
finish_reason?: string;
}
```

```typescript
const response = await client.chat.create({
model: 'spec-3-turbo',
messages: [
{
role: 'system',
content: 'You are a helpful assistant that provides accurate and concise answers.'
},
{
role: 'user',
content: 'Hello, how are you?'
}
],
max_tokens: 150,
temperature: 0.7,
});
console.log(response.choices[0].message.content);
```

### chat.createStream()

Creates a streaming chat completion.
Takes the same parameters as `create()`, but with `stream: true` required. Returns `AsyncIterable<StreamEvent>`.

```typescript
const stream = await client.chat.createStream({
model: 'spec-3-turbo',
messages: [{ role: 'user', content: 'Tell me a story' }],
stream: true,
});
for await (const event of stream) {
if (event.choices?.[0]?.delta?.content) {
process.stdout.write(event.choices[0].delta.content);
}
}
```

### chat.createWithResponse()

Creates a chat completion and returns both the parsed data and the raw HTTP response.
Returns:

```typescript
{
data: ChatCompletionResponse;
response: Response;
}
```

```typescript
const { data, response } = await client.chat.createWithResponse({
model: 'spec-3-turbo',
messages: [{ role: 'user', content: 'Hello' }],
});
console.log('Status:', response.status);
console.log('Message:', data.choices[0].message.content);
```

## Models

### models.list()

Retrieves all available models.
```typescript
interface ModelListResponse {
models: string[];
_request_id?: string;
}
```

```typescript
const models = await client.models.list();
console.log('Available models:', models.models);
```

## Files

### files.create()

Uploads a file for RAG functionality.
Parameters:

- `file`: File, Buffer, Uint8Array, string, or ReadableStream
- `purpose`: File purpose (default: `'default'`)
- `filename`: Optional filename
- `options`: Request options
```typescript
interface FileUploadResponse {
file_id: string;
_request_id?: string;
}
```

```typescript
// From File (browser)
const fileInput = document.getElementById('file') as HTMLInputElement;
const file = fileInput.files[0];
const response = await client.files.create(file);
```

```typescript
// From Buffer (Node.js)
import fs from 'fs';
const buffer = fs.readFileSync('document.pdf');
const response = await client.files.create(buffer, 'default', 'document.pdf');
```

```typescript
// From stream (Node.js)
import fs from 'fs';
const stream = fs.createReadStream('document.pdf');
const response = await client.files.create(stream, 'default');
```

```typescript
// From string
const text = 'This is sample text content';
const response = await client.files.create(text, 'default', 'sample.txt');
```

### files.createFromPath()

Uploads a file from a file path (Node.js only).
```typescript
const response = await client.files.createFromPath(
'/path/to/document.pdf',
'default'
);
```

## Knowledge

### knowledge.addFile()

Adds a file to a knowledge collection.
Parameters:

- `knowledgeId`: Collection ID
- `fileId`: File ID from upload
- `options`: Request options
```typescript
interface KnowledgeAddFileResponse {
status: string;
message?: string;
_request_id?: string;
}
```

```typescript
const result = await client.knowledge.addFile(
'collection-123',
'file-456'
);
console.log('Status:', result.status);
```

## Error Handling

All SDK errors extend `SVECTORError`:

```typescript
class SVECTORError extends Error {
status?: number;
request_id?: string;
headers?: Record<string, string>;
}
class APIError extends SVECTORError {}
class AuthenticationError extends SVECTORError {}
class PermissionDeniedError extends SVECTORError {}
class NotFoundError extends SVECTORError {}
class UnprocessableEntityError extends SVECTORError {}
class RateLimitError extends SVECTORError {}
class InternalServerError extends SVECTORError {}
class APIConnectionError extends SVECTORError {}
class APIConnectionTimeoutError extends APIConnectionError {}
```

Handling specific error types:

```typescript
import {
AuthenticationError,
RateLimitError,
APIError
} from 'svector';
try {
const response = await client.chat.create({
model: 'spec-3-turbo',
messages: [{ role: 'user', content: 'Hello' }],
});
} catch (error) {
if (error instanceof AuthenticationError) {
console.error('Invalid API key');
} else if (error instanceof RateLimitError) {
console.error('Rate limit exceeded');
} else if (error instanceof APIError) {
console.error(`API error: ${error.message} (${error.status})`);
}
}
```

## Utilities

### toFile()

Converts various input types to `File` objects.
Parameters:

- `value`: Buffer, Uint8Array, string, ReadableStream, or Response
- `filename`: Optional filename
- `options`: Optional file options
```typescript
import { toFile } from 'svector';
const file = await toFile('Hello world', 'hello.txt', { type: 'text/plain' });
const response = await client.files.create(file);
```

### Request Options

All API methods accept optional request options:
```typescript
interface RequestOptions {
headers?: Record<string, string>;
query?: Record<string, string>;
maxRetries?: number;
timeout?: number;
}
```

```typescript
const response = await client.chat.create(
{
model: 'spec-3-turbo',
messages: [{ role: 'user', content: 'Hello' }],
},
{
timeout: 30000,
maxRetries: 1,
headers: { 'X-Custom-Header': 'value' },
}
);
```

## Advanced Usage

### Raw HTTP Requests

For undocumented endpoints or custom requests:
```typescript
// GET request
const getData = await client.get<ResponseType>('/api/custom');

// POST request
const postData = await client.post<ResponseType>('/api/custom', { key: 'value' });

// PUT request
const putData = await client.put<ResponseType>('/api/custom', { key: 'value' });

// DELETE request
const deleteData = await client.delete<ResponseType>('/api/custom');
```

### Custom Fetch Implementation

```typescript
import fetch from 'node-fetch';
const client = new SVECTOR({
apiKey: 'your-key',
fetch: fetch as any,
});
```

```typescript
import { SVECTOR } from 'svector';
// API key from environment variable
const client = new SVECTOR();
```

Browser usage:

```typescript
import { SVECTOR } from 'svector';
const client = new SVECTOR({
apiKey: 'your-key',
dangerouslyAllowBrowser: true,
});
```

Deno:

```typescript
import { SVECTOR } from 'npm:svector';
const client = new SVECTOR({
apiKey: Deno.env.get('SVECTOR_API_KEY'),
});
```

Node.js:

```typescript
import { SVECTOR } from 'svector';
const client = new SVECTOR({
apiKey: process.env.SVECTOR_API_KEY,
});
```

Using an uploaded file in chat (RAG):

```typescript
// Upload file
const fileResponse = await client.files.create(fileData, 'default');
// Use in chat
const chatResponse = await client.chat.create({
model: 'spec-3-turbo',
messages: [{ role: 'user', content: 'Summarize this document' }],
files: [{ type: 'file', id: fileResponse.file_id }],
});
```

Using a knowledge collection in chat:

```typescript
// Add files to collection
await client.knowledge.addFile('collection-id', 'file-id-1');
await client.knowledge.addFile('collection-id', 'file-id-2');
// Use collection in chat
const chatResponse = await client.chat.create({
model: 'spec-3-turbo',
messages: [{ role: 'user', content: 'What insights can you provide?' }],
files: [{ type: 'collection', id: 'collection-id' }],
});
```

The SVECTOR API has rate limits that vary by plan:
- Free tier: 10 requests/minute
- Pro plan: 100 requests/minute
- Enterprise: 1000+ requests/minute
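To stay within these limits you can rely on the client's built-in retries (`maxRetries`), or wrap calls in your own backoff loop. A minimal sketch (the `withBackoff` helper and its delay schedule are illustrative assumptions, not part of the SDK):

```typescript
// Retry an async operation with exponential backoff.
// `isRetryable` decides which errors are worth retrying —
// with the SVECTOR SDK this could be `e instanceof RateLimitError`.
async function withBackoff<T>(
  fn: () => Promise<T>,
  isRetryable: (error: unknown) => boolean,
  maxAttempts = 4,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      // Give up after the last attempt or on non-retryable errors.
      if (attempt + 1 >= maxAttempts || !isRetryable(error)) throw error;
      // Exponential delay: 1s, 2s, 4s, ... capped at 30s.
      const delay = Math.min(baseDelayMs * 2 ** attempt, 30_000);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

With the SVECTOR client this might look like `withBackoff(() => client.chat.create({ ... }), (e) => e instanceof RateLimitError)`.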
Best practices:

- **Handle Rate Limits**: Use the built-in retry logic or implement your own backoff strategy.
- **Set Appropriate Timeouts**: Adjust timeouts based on your use case.
- **Monitor Token Usage**: Track token consumption for cost optimization.
- **Implement Error Handling**: Always handle potential errors gracefully.
- **Use Streaming for Long Responses**: For lengthy responses, use streaming to improve the user experience.
- **Secure API Keys**: Never expose API keys in client-side code in production.
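For monitoring token usage, the optional `usage` field on each chat response can be aggregated across calls. A minimal sketch (the `UsageTracker` helper is illustrative, not part of the SDK; the `Usage` interface mirrors the `usage` field documented above):

```typescript
// Shape of the optional `usage` field on ChatCompletionResponse.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

// Accumulates token usage across responses for cost tracking.
class UsageTracker {
  private totals: Usage = { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 };

  record(usage?: Usage): void {
    if (!usage) return; // `usage` is optional on the response
    this.totals.prompt_tokens += usage.prompt_tokens;
    this.totals.completion_tokens += usage.completion_tokens;
    this.totals.total_tokens += usage.total_tokens;
  }

  summary(): Usage {
    return { ...this.totals };
  }
}
```

After each call, pass the response through the tracker: `tracker.record(response.usage)`.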
For additional help: