To communicate with agents, you use the A2A protocol client to send messages and receive streaming events. The Agent Stack SDK provides helpers that turn those events into UI updates. This guide shows how to integrate @a2a-js/sdk with the Agent Stack SDK helpers, mirroring the same flow used in agentstack-ui. If you only need the fast path, start with Getting Started.

Prereqs

  • Packages installed: agentstack-sdk and @a2a-js/sdk
  • Platform base URL and provider ID
  • User access token (for platform API calls)
  • Context token (for A2A requests)

Quick Start recap

  • Create a platform API client with the user access token.
  • List providers, pick a providerId, then create a context and contextToken.
  • Create the A2A client with the contextToken and start a message stream.

The advanced sections assume you already have client, context, and contextToken from the quick start. If not, see Getting Started.
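Because the guide uses two credentials, it helps to pin down which one authenticates which call. A minimal sketch with placeholder token values (these are illustrative, not real SDK calls):

```typescript
// Placeholder credentials for illustration; in practice these come from
// your auth flow and the createContext step in the quick start.
const userAccessToken = "user-token-123";
const contextToken = { token: "context-token-456" };

// Platform API calls (listing providers, creating contexts) use the
// user access token.
const platformHeaders = { Authorization: `Bearer ${userAccessToken}` };

// A2A requests use the context token instead.
const a2aHeaders = { Authorization: `Bearer ${contextToken.token}` };
```

Mixing these up is the most common integration bug; see the pitfalls list below.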

Advanced guide

In the next steps, you’ll wire up a full A2A message flow: resolve initial agent demands, start a streaming task, handle UI‑driven updates (like forms), and submit follow‑ups on the same task. The snippets are intentionally minimal but map directly to a real client implementation.

1. Read the agent card and resolve demands

Fetch the agent card, inspect demands, and build the initial fulfillments. To keep the rest of this guide working without extra complexity, focus on the most common requirement: LLM access via the platform's OpenAI-compatible proxy. Map each LLM demand key to the model the user picked in your UI, then resolve the metadata that gets attached to your first message.
import { handleAgentCard, type Fulfillments } from "agentstack-sdk";

const card = await client.getAgentCard();
const { resolveMetadata, demands } = handleAgentCard(card);

// Example: model picker state keyed by demand id.
const selectedLlmModels: Record<string, string> = {
  default: "gpt-4o",
};

const fulfillments: Fulfillments = {
  llm: demands.llmDemands
    ? async ({ llm_demands }) => ({
        llm_fulfillments: Object.fromEntries(
          Object.keys(llm_demands).map((key) => [
            key,
            {
              identifier: "llm_proxy",
              api_base: "{platform_url}/api/v1/openai/",
              api_key: contextToken.token,
              api_model: selectedLlmModels[key],
            },
          ]),
        ),
      })
    : undefined,
};

const agentMetadata = await resolveMetadata(fulfillments);
See Agent Requirements for the available service and UI extension helpers.
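Stripped of the SDK wiring, the demand-to-fulfillment mapping above is a plain object transform. A standalone sketch with placeholder values for the platform URL, demand keys, and token:

```typescript
// Placeholder inputs; in the real flow these come from the agent card,
// your model picker state, and the context token.
const platformUrl = "https://platform.example.com";
const llmDemandKeys = ["default"];
const selectedModels: Record<string, string> = { default: "gpt-4o" };

// One fulfillment entry per demand key, pointing the agent at the
// OpenAI-compatible proxy with the user's chosen model.
const llmFulfillments = Object.fromEntries(
  llmDemandKeys.map((key) => [
    key,
    {
      identifier: "llm_proxy",
      api_base: `${platformUrl}/api/v1/openai/`,
      api_key: "<context-token>",
      api_model: selectedModels[key],
    },
  ]),
);
```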

2. Send the initial message stream

Start a task by sending the user prompt that triggers the project brief flow.
const stream = client.sendMessageStream({
  message: {
    kind: "message",
    role: "user",
    messageId: crypto.randomUUID(),
    contextId: context.id,
    parts: [
      {
        kind: "text",
        text: "Draft a short project brief for a new onboarding flow. Ask me for any details you need.",
      },
    ],
    metadata: agentMetadata,
  },
});

3. Handle streaming updates and show the form

Use handleTaskStatusUpdate to detect form requests and capture the task ID so you can continue the same task.
import { handleTaskStatusUpdate, type TaskStatusUpdateType } from "agentstack-sdk";

let taskId: string | undefined;

for await (const event of stream) {
  if (event.kind === "task") {
    taskId = event.id;
  }

  if (event.kind === "status-update") {
    taskId = event.taskId;

    for (const update of handleTaskStatusUpdate(event)) {
      if (update.type === TaskStatusUpdateType.FormRequired) {
        renderForm(update.form);
      }
    }
  }

  if (event.kind === "message") {
    renderMessage(event.parts, event.metadata);
  }
}
Why task status updates for chat streaming
  1. A2A uses tasks to represent long-running work, so you can track progress and cancel it.
  2. Status updates provide the incremental channel for UI events and streaming output.
  3. Each streamed token arrives as a TaskStatusUpdateEvent, so you can append text as it arrives.
For rendering message parts and metadata (such as citations), see Agent Responses.
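The append-as-it-arrives behavior from point 3 can be sketched as a plain reducer, independent of the SDK:

```typescript
// Each status update may carry a partial text chunk; the UI appends
// rather than replaces.
function appendChunk(current: string, chunk: string): string {
  return current + chunk;
}

// Simulated chunks standing in for streamed status updates.
let transcript = "";
for (const chunk of ["Drafting", " the", " brief."]) {
  transcript = appendChunk(transcript, chunk);
}
// transcript: "Drafting the brief."
```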

4. Submit the form and continue the task

Convert the user responses into A2A metadata and send a follow-up message for the same task.
import { resolveUserMetadata } from "agentstack-sdk";

// Form values follow the SDK schemas: each field is { type, value }.
const formValues = {
  project_name: { type: "text", value: "New user onboarding" },
  channels: { type: "multiselect", value: ["web", "mobile"] },
  deadline: { type: "date", value: "2026-03-01" },
  legal_review: { type: "checkbox", value: true },
};

const userMetadata = await resolveUserMetadata({
  form: formValues,
});

const responseStream = client.sendMessageStream({
  message: {
    kind: "message",
    role: "user",
    messageId: crypto.randomUUID(),
    contextId: context.id,
    taskId,
    parts: [{ kind: "text", text: "Here are the project details." }],
    metadata: { ...agentMetadata, ...userMetadata },
  },
});
User metadata is the structured response payload that you attach to a message. It is separate from text parts and lets the agent consume form fields, approvals, and canvas actions in a typed way.
Include taskId when responding to an in-progress task. Omit it when starting a new task.
For a focused look at composing messages, see User Messages.
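The spread order in the follow-up message matters: user metadata is merged after agent metadata, so on a key collision the user values win. A standalone sketch with placeholder keys:

```typescript
// Placeholder metadata objects; real ones come from resolveMetadata and
// resolveUserMetadata.
const agentMetadata = { fulfillment: "llm_proxy", shared: "agent" };
const userMetadata = { form: { project_name: "New user onboarding" }, shared: "user" };

// Later spreads overwrite earlier ones, so userMetadata takes precedence.
const merged = { ...agentMetadata, ...userMetadata };
```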

5. Render the final output and artifacts

Stream the response and render the final message and any artifacts.
for await (const event of responseStream) {
  if (event.kind === "artifact-update") {
    renderArtifact(event.artifact);
  }

  if (event.kind === "message") {
    renderMessage(event.parts, event.metadata);
  }
}

6. Cancel a running task

If a user clicks “Stop”, cancel the current task by ID. You’ll still receive a final status update with state canceled, so update your UI and stop streaming when you see it.
const cancel = async () => {
  if (!taskId) return;

  await client.cancelTask({ id: taskId });
};

for await (const event of responseStream) {
  if (event.kind === "status-update" && event.status.state === "canceled") {
    renderCanceledState();
    break;
  }

  // ...handle other events
}

Common pitfalls

  • Wrong token in A2A requests: use the context token for A2A fetches, not the user access token.
  • Missing metadata merge: merge agent card fulfillments with user metadata when you send responses.
  • Streaming text handling: status updates can include partial text; append incrementally.
  • Node fetch missing: Node < 18 requires a fetch polyfill or custom fetch passed to the API client.

Error handling

When a status update fails, read the error metadata from the error extension to show a user-friendly message.
import { errorExtension, extractUiExtensionData } from "agentstack-sdk";

const readError = extractUiExtensionData(errorExtension);
// `event` is the status-update event from your streaming loop.
const errorMetadata = readError(event.status.message?.metadata);

if (errorMetadata) {
  console.error(errorMetadata.message ?? "Agent error");
}
For more about error handling, see Error Handling.
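The fallback logic above can be isolated into a small helper. This is a hypothetical convenience, not an SDK export:

```typescript
// Shape of the extracted error metadata assumed for this sketch.
type ErrorMetadata = { message?: string } | undefined;

// Prefer the extension-provided message, falling back to a generic one.
function displayError(meta: ErrorMetadata): string {
  return meta?.message ?? "Agent error";
}
```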