Tekimax SDK

Core Concepts

The Tekimax SDK is built around two main primitives: the Tekimax client and the AIProvider interface.

The Client

The Tekimax client is the unified entry point. It organizes capabilities into Namespaces — one per modality — so that auto-complete guides you to the right method without memorizing the API surface.

Code
import { Tekimax } from 'tekimax-ts';

const client = new Tekimax({ provider });

// Namespaces
client.text   // Chat, Completions, Embeddings
client.images // Generation, Editing, Analysis (Vision)
client.audio  // Text-to-Speech (TTS), Transcription (STT)
client.videos // Generation, Analysis

Text (Chat)

The client.text namespace provides two equivalent interfaces. Use whichever reads better in your codebase.

Code
// OpenAI-style dot-chain (aliases to the same method internally)
const response = await client.text.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }]
});

// Direct method (the same call under a shorter name)
const direct = await client.text.generate({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }]
});

// Both return ChatResult — no choices array.
console.log(response.message.content);

Providers

Providers act as translation layers. They convert strict Tekimax types into the specific format required by the upstream API (e.g., OpenAI, Anthropic) and normalize the response back.

All providers implement the AIProvider interface. The base AIProvider is strictly scoped to text chat. Instead of having messy optional methods for other modalities, Tekimax uses Capability Interfaces (like VisionCapability or ImageGenerationCapability). Providers explicitly implement these interfaces to opt into specific modalities.

Code
// The base provider is scoped strictly to text chat.
export interface AIProvider {
  name: string;
  chat: (options: ChatOptions) => Promise<ChatResult>;
  chatStream: (options: ChatOptions) => AsyncIterable<StreamChunk>;
}

// Providers selectively implement capability interfaces.
export interface VisionCapability {
  analyzeImage: (options: ImageAnalysisOptions) => Promise<ImageAnalysisResult>;
}

export interface ImageGenerationCapability {
  generateImage: (options: ImageGenerationOptions) => Promise<ImageResult>;
}

// Example: OpenAIProvider implements almost everything.
export class OpenAIProvider
  implements AIProvider, VisionCapability, ImageGenerationCapability {
  // ...
}
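
To make the translation-layer role concrete, here is a minimal sketch of a custom provider. The upstream endpoint, its request and response shapes, and the exact ChatResult message fields are assumptions for illustration; only the AIProvider contract above comes from the SDK.

Code
// Assumed export path for the SDK's types.
import type { AIProvider, ChatOptions, ChatResult, StreamChunk } from 'tekimax-ts';

// Minimal custom provider sketch. The endpoint and payload shapes
// below are hypothetical; only the AIProvider contract is real.
export class MyHttpProvider implements AIProvider {
  name = 'my-http';

  async chat(options: ChatOptions): Promise<ChatResult> {
    // 1. Translate strict Tekimax types into the upstream format.
    const res = await fetch('https://api.example.com/v1/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model: options.model, messages: options.messages }),
    });
    const data = await res.json();

    // 2. Normalize the upstream response back into a ChatResult.
    return { message: { role: 'assistant', content: data.output } };
  }

  async *chatStream(options: ChatOptions): AsyncIterable<StreamChunk> {
    // Streaming sketched as a single chunk for brevity.
    const result = await this.chat(options);
    yield { delta: result.message.content };
  }
}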

Because the Tekimax client namespaces are typed with TypeScript generics, calling client.images.generate() on a provider that lacks the ImageGenerationCapability produces a compile-time error, giving you instant feedback in your IDE.
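
As an illustration, assuming for the sake of the example a provider that implements only the base AIProvider (TextOnlyProvider and the images.generate option shape are illustrative assumptions):

Code
// Hypothetical text-only provider: implements AIProvider, nothing else.
const client = new Tekimax({ provider: new TextOnlyProvider() });

await client.text.generate({
  model: 'my-model',
  messages: [{ role: 'user', content: 'Hi' }],
}); // OK: chat is part of the base interface

// @ts-expect-error: the provider lacks ImageGenerationCapability,
// so this fails at compile time, not at runtime.
await client.images.generate({ prompt: 'A sunset' });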

Streaming

Streaming uses a separate method (createStream / generateStream) rather than a stream: true flag. This lets TypeScript infer the correct return type at the call site — ChatResult for non-streaming, AsyncGenerator<StreamChunk> for streaming — without union-type gymnastics.

Code
// createStream returns an AsyncGenerator directly — no await needed.
const stream = client.text.chat.completions.createStream({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Tell me a story' }],
});

for await (const chunk of stream) {
  process.stdout.write(chunk.delta);
}
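
The direct-method form works the same way; assuming generateStream sits on client.text the way generate does, the dot-chain example above is equivalent to:

Code
// Direct method, equivalent to the dot-chain above.
const stream = client.text.generateStream({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Tell me a story' }],
});

for await (const chunk of stream) {
  process.stdout.write(chunk.delta);
}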

React Hooks

If you are using React, we provide built-in hooks for instant integration.

Code
import { useChat } from 'tekimax-ts/react';
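
As a rough sketch of what the hook enables (the return fields shown here are illustrative assumptions, not the hook's documented API; see the guide linked below for the real shape):

Code
import { useChat } from 'tekimax-ts/react';

// Sketch only: the field names returned by useChat are assumed.
function ChatBox() {
  const { messages, input, setInput, sendMessage } = useChat();

  return (
    <div>
      {messages.map((m, i) => (
        <p key={i}>
          <strong>{m.role}:</strong> {m.content}
        </p>
      ))}
      <input value={input} onChange={(e) => setInput(e.target.value)} />
      <button onClick={() => sendMessage(input)}>Send</button>
    </div>
  );
}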

See the React Integration guide for more details.

Universal Features

The SDK normalizes advanced features across all providers (OpenAI, Anthropic, Gemini, Ollama, Grok, OpenRouter), ensuring a consistent API experience.
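
For example, swapping one provider for another is a one-line change at construction time, while every call site keeps the same shape. (The import path and the OpenAIProvider constructor options here are assumptions, mirroring the GeminiProvider example shown later.)

Code
import { Tekimax, OpenAIProvider } from 'tekimax-ts'; // assumed export path

// Same request shape against any provider; only the constructor changes.
const client = new Tekimax({
  provider: new OpenAIProvider({ apiKey: process.env.OPENAI_API_KEY! }),
  // provider: new GeminiProvider({ apiKey: process.env.GOOGLE_API_KEY! }),
});

const response = await client.text.generate({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.message.content);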

Reasoning (Thinking)

Support for "Thinking" models (like DeepSeek R1) is built-in. Use the think: true parameter to capture reasoning traces.

Code
const response = await client.text.chat.completions.create({
  model: 'deepseek-r1',
  messages: [{ role: 'user', content: 'Solve this logic puzzle...' }],
  think: true // Enable reasoning capture
});

// The thinking field captures the model's chain-of-thought separately
// from the final answer, so you can display reasoning in a collapsible UI.
console.log(response.message.thinking); // Reasoning trace
console.log(response.message.content);  // Final answer

See the Reasoning Models guide for more details.

Tool Calling

Tool calling is standardized. You define tools once using the OpenAI function-calling schema, and the SDK handles the specific format for each provider (e.g., mapping to Gemini's functionDeclarations or Anthropic's tool_use content blocks).

Code
const client = new Tekimax({
  provider: new GeminiProvider({ apiKey: process.env.GOOGLE_API_KEY! })
});

const response = await client.text.chat.completions.create({
  model: 'gemini-1.5-pro',
  messages: [{ role: 'user', content: 'What is the weather in Tokyo?' }],
  tools: [{
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get current weather',
      parameters: {
        type: 'object',
        properties: { location: { type: 'string' } },
        required: ['location']
      }
    }
  }]
});

// Uniform access — same shape regardless of provider.
// Anthropic returns tool_use blocks, Gemini returns functionCall parts,
// but Tekimax normalizes them all to this structure.
const toolCalls = response.message.toolCalls;
if (toolCalls) {
  console.log(toolCalls[0].function.name);      // "get_weather"
  console.log(toolCalls[0].function.arguments); // '{"location":"Tokyo"}'
}
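
Closing the loop is then up to you: run the tool, append the result to the conversation, and call the model again. The sketch below assumes a 'tool'-role result message with a toolCallId field (the exact message shape may differ), and fetchWeather is a hypothetical helper; the generateText utility in the next section automates this round-trip for you.

Code
// Sketch of the manual round-trip; fields marked below are assumed.
if (toolCalls) {
  const args = JSON.parse(toolCalls[0].function.arguments);
  const weather = await fetchWeather(args.location); // hypothetical helper

  const followUp = await client.text.chat.completions.create({
    model: 'gemini-1.5-pro',
    messages: [
      { role: 'user', content: 'What is the weather in Tokyo?' },
      response.message, // assistant turn containing the tool call
      {
        role: 'tool',                // assumed tool-result role
        toolCallId: toolCalls[0].id, // assumed field name
        content: JSON.stringify(weather),
      },
    ],
  });
  console.log(followUp.message.content);
}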

Agentic Loops with generateText

For multi-step tool-calling workflows, the SDK provides a generateText utility that handles the tool execution loop for you.

Code
import { generateText } from 'tekimax-ts';

const result = await generateText({
  adapter: provider,
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is the weather in Tokyo?' }],
  tools: {
    get_weather: {
      type: 'function',
      function: {
        name: 'get_weather',
        description: 'Get current weather',
        parameters: {
          type: 'object',
          properties: { location: { type: 'string' } }
        }
      },
      execute: async ({ location }) => ({ temp: 22, unit: 'C', location })
    }
  },
  // maxSteps controls how many chat round-trips the agent can take.
  // This prevents runaway loops — the default is 1 (no looping).
  maxSteps: 5
});

console.log(result.text); // Final answer after tool execution
