To communicate with agents, you use the A2A protocol client to send messages and receive streaming events. The Agent Stack SDK provides helpers that turn those events into UI updates. This guide shows how to integrate @a2a-js/sdk with the Agent Stack SDK helpers, mirroring the same flow used in agentstack-ui. If you only need the fast path, start with Getting Started.

Prerequisites

  • Packages installed: agentstack-sdk and @a2a-js/sdk
  • Platform base URL and provider ID
  • User access token (for platform API calls)
  • Context token (for A2A requests)

Quick Start recap

  • Create a platform API client with the user access token.
  • List providers, pick a providerId, then create a context and contextToken.
  • Create the A2A client with the contextToken and start a message stream.
The advanced sections assume you already have client, context, and contextToken from the quick start. If not, see Getting Started.
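As a reference point, the recap maps to roughly the shape below. The platform-side helper names (createPlatformApiClient, the providers/contexts accessors, and createA2AClient) are illustrative placeholders, not the SDK's actual exports; use the factories shown in Getting Started.
// Illustrative sketch of the quick-start flow. All platform helper names are
// placeholders -- substitute the real factories from Getting Started.
import { createPlatformApiClient, createA2AClient } from "agentstack-sdk"; // hypothetical exports

// 1. Platform API client, authenticated with the *user* access token.
const platform = createPlatformApiClient({
  baseUrl: platformUrl,
  accessToken: userAccessToken,
});

// 2. Pick a provider, then create a context and a context token for it.
const providers = await platform.providers.list();
const providerId = providers[0].id;
const context = await platform.contexts.create({ providerId });
const contextToken = await platform.contexts.createToken(context.id);

// 3. A2A client, authenticated with the *context* token (not the user token).
const client = createA2AClient({ providerId, contextToken });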

Advanced guide

In the next steps, you’ll wire up a full A2A message flow: resolve initial agent demands, start a streaming task, handle UI‑driven updates (like forms), and submit follow‑ups on the same task. The snippets are intentionally minimal but map directly to a real client implementation.

1. Read the agent card and resolve demands

Fetch the agent card, inspect its demands, and build the initial fulfillments. To keep this guide simple, focus on the most common requirement: LLM access via the platform's OpenAI‑compatible proxy. Map each LLM demand key to the model the user picked in your UI, then resolve the metadata that gets attached to your first message.
import { handleAgentCard, type Fulfillments } from "agentstack-sdk";

const card = await client.getAgentCard();
const { resolveMetadata, demands } = handleAgentCard(card);

// Example: model picker state keyed by demand id.
const selectedLlmModels: Record<string, string> = {
  default: "gpt-4o",
};

const fulfillments: Fulfillments = {
  llm: demands.llmDemands
    ? async ({ llm_demands }) => ({
        llm_fulfillments: Object.fromEntries(
          Object.keys(llm_demands).map((key) => [
            key,
            {
              identifier: "llm_proxy",
              api_base: "{platform_url}/api/v1/openai/",
              api_key: contextToken.token,
              api_model: selectedLlmModels[key],
            },
          ]),
        ),
      })
    : undefined,
};

const agentMetadata = await resolveMetadata(fulfillments);
See Agent Requirements for the available service and UI extension helpers.

2. Send the initial message stream

Start a task by sending the user prompt that triggers the project brief flow.
const stream = client.sendMessageStream({
  message: {
    kind: "message",
    role: "user",
    messageId: crypto.randomUUID(),
    contextId: context.id,
    parts: [
      {
        kind: "text",
        text: "Draft a short project brief for a new onboarding flow. Ask me for any details you need.",
      },
    ],
    metadata: agentMetadata,
  },
});

3. Handle streaming updates and show the form

Use handleTaskStatusUpdate to detect form requests and capture the task ID so you can continue the same task. Render streamed output from status-update events, and keep message as a fallback.
import { handleTaskStatusUpdate, type TaskStatusUpdateType } from "agentstack-sdk";

let taskId: string | undefined;

for await (const event of stream) {
  if (event.kind === "task") {
    taskId = event.id;
  }

  if (event.kind === "status-update") {
    taskId = event.taskId;

    // UI actions like forms, approvals, secrets, OAuth
    for (const update of handleTaskStatusUpdate(event)) {
      if (update.type === TaskStatusUpdateType.FormRequired) {
        renderForm(update.form);
      }
    }

    // Streaming agent output usually arrives here
    if (event.status.message) {
      renderMessage(event.status.message.parts, event.status.message.metadata);
    }
  }

  // Fallback for non-streaming agents or final messages
  if (event.kind === "message") {
    renderMessage(event.parts, event.metadata);
  }
}
Why task status updates for chat streaming
  1. A2A uses tasks to represent long-running work, so you can track progress and cancel it.
  2. Status updates provide the incremental channel for UI events and streaming output.
  3. Each streamed token typically arrives as a TaskStatusUpdateEvent in Agent Stack, so you can append text as it arrives.
For rendering message parts and metadata (such as citations), see Agent Responses.
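Because streamed tokens arrive on status updates, a chat view typically keeps a running buffer and appends each text part as it comes in. A minimal sketch (renderTranscript is an illustrative UI hook of your own, not an SDK call):
// Append streamed text chunks from status updates into a single transcript.
let transcript = "";

for await (const event of stream) {
  if (event.kind === "status-update" && event.status.message) {
    for (const part of event.status.message.parts) {
      if (part.kind === "text") {
        transcript += part.text;      // each update carries the next partial chunk
        renderTranscript(transcript); // illustrative: swap in your own renderer
      }
    }
  }
}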

4. Submit the form and continue the task

Convert the user responses into A2A metadata and send a follow-up message for the same task.
import { resolveUserMetadata } from "agentstack-sdk";

// Form values follow the SDK schemas: each field is { type, value }.
const formValues = {
  project_name: { type: "text", value: "New user onboarding" },
  channels: { type: "multiselect", value: ["web", "mobile"] },
  deadline: { type: "date", value: "2026-03-01" },
  legal_review: { type: "checkbox", value: true },
};

const userMetadata = await resolveUserMetadata({
  form: formValues,
});

const responseStream = client.sendMessageStream({
  message: {
    kind: "message",
    role: "user",
    messageId: crypto.randomUUID(),
    contextId: context.id,
    taskId,
    parts: [{ kind: "text", text: "Here are the project details." }],
    metadata: { ...agentMetadata, ...userMetadata },
  },
});
User metadata is the structured response payload that you attach to a message. It is separate from text parts and lets the agent consume form fields, approvals, and canvas actions in a typed way.
Include taskId when responding to an in-progress task. Omit it when starting a new task.
For a focused look at composing messages, see User Messages.

5. Render the final output and artifacts

Stream the response and render the final message and any artifacts.
for await (const event of responseStream) {
  if (event.kind === "artifact-update") {
    renderArtifact(event.artifact);
  }

  if (event.kind === "message") {
    renderMessage(event.parts, event.metadata);
  }
}

6. Cancel a running task

If a user clicks “Stop”, cancel the current task by ID. You’ll still receive a final status update with state canceled, so update your UI and stop streaming when you see it.
const cancel = async () => {
  if (!taskId) return;

  await client.cancelTask({ id: taskId });
};

for await (const event of responseStream) {
  if (event.kind === "status-update" && event.status.state === "canceled") {
    renderCanceledState();
    break;
  }

  // ...handle other events
}

Basic concepts: tasks, status updates, and streaming output

Agent Stack’s A2A streaming is task-based. A single sendMessageStream call can yield several event shapes, and you should handle all of them:
  • task: emits the initial Task object. Capture task.id, and use task.status (and optional history/artifacts) to seed your UI state.
  • status-update: emits a task status transition. When the agent is streaming, incremental output is typically delivered via event.status.message (a Message payload attached to the status).
  • artifact-update: emits artifacts as they are generated or updated (useful for streamed files, canvases, or structured outputs).
  • message: emits a standalone Message. This is common for non‑streaming agents and may also appear as a final response.
Key detail for Agent Stack streaming: incremental output usually arrives inside status-update events (event.status.message). If you only render message events, you may miss streamed output.
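Put together, a client usually branches on event.kind in a single loop; the condensed skeleton below reuses the render helpers from the earlier steps:
// Condensed dispatch over the event shapes yielded by sendMessageStream.
let taskId: string | undefined;

for await (const event of stream) {
  switch (event.kind) {
    case "task":
      taskId = event.id; // seed UI state from the initial Task
      break;
    case "status-update":
      taskId = event.taskId;
      if (event.status.message) {
        // incremental streamed output is attached to the status
        renderMessage(event.status.message.parts, event.status.message.metadata);
      }
      break;
    case "artifact-update":
      renderArtifact(event.artifact); // streamed files, canvases, structured outputs
      break;
    case "message":
      renderMessage(event.parts, event.metadata); // non-streaming or final response
      break;
  }
}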

Handling failed states

Tasks can fail at any point during streaming. Failed or rejected updates arrive as status-update events with event.status.state set to failed or rejected. In those cases:
  • Read structured error metadata from the status message (if present).
  • Fall back to a generic error message if no metadata is provided.
  • Update the UI to a terminal error state and stop streaming.
import { errorExtension, extractUiExtensionData } from "agentstack-sdk";

const readError = extractUiExtensionData(errorExtension);

for await (const event of responseStream) {
  if (event.kind === "status-update") {
    const state = event.status.state;

    if (state === "failed" || state === "rejected") {
      const errorMetadata = readError(event.status.message?.metadata);

      renderError(errorMetadata?.message ?? "Agent error");

      break;
    }
  }
}
For more about error handling, see Error Handling.

Common pitfalls

  • Wrong token in A2A requests: use the context token for A2A fetches, not the user access token.
  • Missing metadata merge: merge agent card fulfillments with user metadata when you send responses.
  • Streaming text handling: status updates can include partial text; append incrementally.
  • Ignoring status updates: streamed agent output typically arrives via status-update events (in event.status.message), not message.
  • Node fetch missing: Node < 18 requires a fetch polyfill or a custom fetch passed to the API client; see the sketch below.
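For the last point, the simplest option on older Node versions is to install a global fetch before constructing any clients. A minimal sketch using undici (pick a package version that supports your Node runtime; node-fetch works as well):
// Node < 18 only: provide a global fetch before creating the API and A2A clients.
// Node >= 18 ships fetch natively, so skip this there.
import { fetch, Headers, Request, Response } from "undici";

if (typeof globalThis.fetch === "undefined") {
  Object.assign(globalThis, { fetch, Headers, Request, Response });
}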