The buildApiClient function creates a type-safe HTTP client for interacting with the AgentStack platform API. This client enables you to manage contexts, generate tokens with permissions, match model providers, and list connectors.

Initialization

Create an API client by calling buildApiClient with configuration options:
import { buildApiClient } from 'agentstack-sdk';

const api = buildApiClient({
  baseUrl: 'https://your-agentstack-instance.com',
  fetch: customFetch, // Optional: provide custom fetch implementation
});

Configuration Options

  • baseUrl (required): The base URL of your AgentStack server instance
  • fetch (optional): Custom fetch implementation. Required in Node.js < 18 or environments without global fetch support. Useful for adding authentication headers or custom request handling.

Example: Authenticated Client

In many applications, you’ll want to add authentication headers to requests:
const authenticatedFetch: typeof fetch = async (url, init) => {
  const request = new Request(url, init);
  
  // Add your authentication token
  const token = await getAuthToken();
  request.headers.set('Authorization', `Bearer ${token}`);
  
  return fetch(request);
};

const api = buildApiClient({
  baseUrl: 'https://your-agentstack-instance.com',
  fetch: authenticatedFetch,
});

API Methods

createContext(providerId: string)

Creates a new agent context. Contexts are isolated workspaces where agents can store files, vector stores, and other context-specific data. Parameters:
  • providerId: The ID of the provider to associate with this context
Returns: Promise<CreateContextResponse>
const context = await api.createContext('my-provider-id');
console.log(context.id); // Use this ID for subsequent operations
Response Type:
interface CreateContextResponse {
  id: string;
  created_at: string;
  updated_at: string;
  last_active_at: string;
  created_by: string;
  provider_id: string | null;
  metadata: Record<string, unknown> | null;
}

createContextToken(params: CreateContextTokenParams)

Generates a context token with specific permissions. Context tokens are used to grant agents access to platform resources through the Platform API extension. Parameters:
  • contextId: The ID of the context to create a token for
  • globalPermissions: Permissions that apply across all contexts
  • contextPermissions: Permissions specific to this context
Returns: Promise<{ token: ContextToken; contextId: string }>
const { token, contextId } = await api.createContextToken({
  contextId: context.id,
  globalPermissions: {
    llm: ['*'], // Grant access to all LLM providers
    embeddings: ['*'], // Grant access to all embedding providers
    a2a_proxy: ['*'], // Allow A2A proxy access
  },
  contextPermissions: {
    files: ['*'], // Full file access in this context
    vector_stores: ['*'], // Full vector store access
    context_data: ['*'], // Full context data access
  },
});

// Use token.token to pass to agents via Platform API extension
Token Type:
interface ContextToken {
  token: string;
  expires_at: string | null;
}
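Since `expires_at` may be `null` (non-expiring) or an ISO timestamp, it can be worth checking a stored token before handing it to an agent. A minimal sketch, assuming only the `ContextToken` shape shown above (the helper name `isTokenExpired` is illustrative, not part of the SDK):

```typescript
// Shape from the ContextToken interface above.
interface ContextToken {
  token: string;
  expires_at: string | null;
}

// Returns true when the token's expiry timestamp is in the past.
// A null expires_at is treated as non-expiring.
function isTokenExpired(token: ContextToken, now: Date = new Date()): boolean {
  if (token.expires_at === null) return false;
  return new Date(token.expires_at).getTime() <= now.getTime();
}
```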

Permissions

Permissions control what resources agents can access. There are two types:

Global Permissions

Apply across all contexts and control access to platform-wide resources:
  • llm: Access to LLM providers. Can be ['*'] for all providers or an array of specific provider IDs
  • embeddings: Access to embedding providers. Same format as llm
  • model_providers: Read/write access to model provider configurations
  • a2a_proxy: Access to A2A proxy services
  • providers: Access to provider management
  • provider_variables: Access to provider environment variables
  • contexts: Access to context management
  • mcp_providers: Access to MCP providers
  • mcp_tools: Access to MCP tools
  • mcp_proxy: Access to MCP proxy
  • connectors: Access to connector management
  • feedback: Ability to submit feedback

Context Permissions

Apply only to the specific context:
  • files: File operations (read, write, extract, or * for all)
  • vector_stores: Vector store operations (read, write, or * for all)
  • context_data: Context metadata operations (read, write, or * for all)
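Rather than granting `['*']` everywhere, you can construct least-privilege permission maps from the operation names listed above. A small illustrative helper (not part of the SDK; the `PermissionMap` type and `readOnlyPermissions` name are assumptions for this sketch):

```typescript
// Permission maps as described above: resource name -> list of operations.
type PermissionMap = Record<string, string[]>;

// Illustrative helper: build a least-privilege permission set granting
// only read access to the listed resources.
function readOnlyPermissions(resources: string[]): PermissionMap {
  const perms: PermissionMap = {};
  for (const resource of resources) {
    perms[resource] = ['read'];
  }
  return perms;
}

// Example: an agent that may read files and vector stores, but not write.
const contextPermissions = readOnlyPermissions(['files', 'vector_stores']);
// => { files: ['read'], vector_stores: ['read'] }
```

The resulting object can be passed as `contextPermissions` to `createContextToken`.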

matchProviders(params: MatchProvidersParams)

Finds model providers that match specified criteria. This is typically used when fulfilling LLM or embedding service extension demands. Parameters:
  • suggestedModels: Array of preferred model IDs, or null to match any model
  • capability: Either ModelCapability.Llm or ModelCapability.Embedding
  • scoreCutoff: Minimum match score (0.0 to 1.0). Higher values require better matches.
Returns: Promise<ModelProviderMatch>
import { ModelCapability } from 'agentstack-sdk';

const matches = await api.matchProviders({
  suggestedModels: ['gpt-4', 'gpt-3.5-turbo'],
  capability: ModelCapability.Llm,
  scoreCutoff: 0.4,
});

if (matches.items.length > 0) {
  const bestMatch = matches.items[0];
  console.log(`Best match: ${bestMatch.model_id} (score: ${bestMatch.score})`);
}
Response Type:
interface ModelProviderMatch {
  items: Array<{
    model_id: string;
    score: number;
  }>;
  total_count: number;
  has_more: boolean;
  next_page_token: string | null;
}
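The example above takes `items[0]` as the best match. If you would rather not rely on the server's ordering, a defensive local pick costs little. A sketch using only the item shape from `ModelProviderMatch` (the `bestMatch` helper is illustrative, not part of the SDK):

```typescript
// Item shape from ModelProviderMatch above.
interface ProviderMatchItem {
  model_id: string;
  score: number;
}

// Returns the highest-scoring item, or null when there are no matches.
// The API appears to rank items by score already; scanning locally just
// makes that assumption explicit.
function bestMatch(items: ProviderMatchItem[]): ProviderMatchItem | null {
  if (items.length === 0) return null;
  return items.reduce((best, item) => (item.score > best.score ? item : best));
}
```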

listConnectors()

Lists all available connectors in the platform. Connectors enable agents to integrate with external services and APIs.
Returns: Promise<ListConnectorsResponse>
import { ConnectorState } from 'agentstack-sdk';

const connectors = await api.listConnectors();

for (const connector of connectors.items) {
  console.log(`${connector.id}: ${connector.state}`);
  
  if (connector.state === ConnectorState.AuthRequired) {
    // Handle OAuth flow using connector.auth_request
  }
}
Response Type:
interface ListConnectorsResponse {
  items: Connector[];
  total_count: number;
  has_more: boolean;
  next_page_token: string | null;
}

interface Connector {
  id: string;
  url: string;
  state: ConnectorState;
  auth_request: {
    type: 'code';
    authorization_endpoint: string;
  } | null;
  disconnect_reason: string | null;
  metadata: Record<string, string> | null;
}

enum ConnectorState {
  Created = 'created',
  AuthRequired = 'auth_required',
  Connected = 'connected',
  Disconnected = 'disconnected',
}
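When managing many connectors, it can help to group them by state, for example to surface which ones still need an OAuth flow before agents can use them. A self-contained sketch (the `groupByState` helper and the `ConnectorLike` type are illustrative; the state values mirror the `ConnectorState` enum above):

```typescript
// String values mirroring the ConnectorState enum above.
type ConnectorState = 'created' | 'auth_required' | 'connected' | 'disconnected';

// Minimal slice of the Connector interface needed for grouping.
interface ConnectorLike {
  id: string;
  state: ConnectorState;
}

// Group a connector list by state.
function groupByState(
  connectors: ConnectorLike[],
): Map<ConnectorState, ConnectorLike[]> {
  const groups = new Map<ConnectorState, ConnectorLike[]>();
  for (const connector of connectors) {
    const bucket = groups.get(connector.state) ?? [];
    bucket.push(connector);
    groups.set(connector.state, bucket);
  }
  return groups;
}
```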

Common Usage Patterns

Creating a Context and Token for an Agent

The most common pattern is creating a context and generating a token with appropriate permissions:
// 1. Create a context
const context = await api.createContext('my-provider-id');

// 2. Generate a token with permissions
const { token } = await api.createContextToken({
  contextId: context.id,
  globalPermissions: {
    llm: ['*'],
    embeddings: ['*'],
    a2a_proxy: ['*'],
  },
  contextPermissions: {
    files: ['*'],
    vector_stores: ['*'],
    context_data: ['*'],
  },
});

// 3. Use the token in your Platform API extension fulfillment
// The token.token value is passed as the api_key in the fulfillment

Fulfilling LLM Demands

When an agent requests LLM access, use matchProviders to find suitable models:
import { buildLLMExtensionFulfillmentResolver } from 'agentstack-sdk';

// Create context and token first
const context = await api.createContext('my-provider-id');
const { token } = await api.createContextToken({
  contextId: context.id,
  globalPermissions: { llm: ['*'], a2a_proxy: ['*'] },
  contextPermissions: { files: ['*'] },
});

// Build the LLM fulfillment resolver
const llmResolver = buildLLMExtensionFulfillmentResolver(api, token);

// Use in your fulfillments when calling resolveMetadata
const { resolveMetadata } = handleAgentCard(agentCard);
const metadata = resolveMetadata({
  llm: llmResolver,
  // ... other fulfillments
});

Type Safety

All API methods are fully typed with TypeScript and use Zod schemas for runtime validation. Response data is automatically validated against these schemas, ensuring type safety and catching API contract changes at runtime.
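The SDK performs this validation internally with Zod; you do not need to write validators yourself. Purely as a conceptual illustration of what runtime validation buys you, a hand-rolled check for the `ContextToken` shape might look like this (the `parseContextToken` function is not part of the SDK):

```typescript
// Conceptual illustration only -- the SDK does this internally with Zod.
// Throws when the data does not match the ContextToken shape.
function parseContextToken(data: unknown): { token: string; expires_at: string | null } {
  if (typeof data !== 'object' || data === null) {
    throw new Error('Expected an object');
  }
  const record = data as Record<string, unknown>;
  if (typeof record.token !== 'string') {
    throw new Error('token must be a string');
  }
  if (record.expires_at !== null && typeof record.expires_at !== 'string') {
    throw new Error('expires_at must be a string or null');
  }
  return { token: record.token, expires_at: record.expires_at as string | null };
}
```

A check like this is what turns a silent API contract change into an immediate, descriptive error at the call site.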

Next Steps

  • Extensions - Learn how to use the API client with extension fulfillments
  • Examples - See complete integration examples