Agent Stack provides built-in observability through OpenTelemetry (OTLP). This allows you to monitor agent health, performance metrics, and execution traces in real time. By default, the stack integrates with Arize Phoenix for local visualization, but telemetry can be routed to any OTLP-compliant backend. For telemetry to flow successfully, it must be enabled at three levels:
1. The Agent Logic: instrument your agent code so that LLM calls and tool usage are captured.
2. The Python SDK: configure the server to export that data by setting configure_telemetry=True.
3. The Agent Stack Platform: the infrastructure must run a collector and a backend (such as Arize Phoenix) to receive and display the data.

Agent SDK Configuration

Before configuring an observability platform, your agent code must be “telemetry-aware.” This configuration applies to all implementations, whether you are using local Phoenix or a cloud provider like Langfuse. You must initialize instrumentation at the agent logic level and enable the export flag in the Agent Stack SDK:
from agentstack_sdk.server import Server
# 1. Agent Level: Import your specific instrumentor
# from openinference.instrumentation.beeai import BeeAIInstrumentor

# Initialize the instrumentor to capture framework events
# BeeAIInstrumentor().instrument()

server = Server()

@server.agent()
async def my_agent(): 
    ...

# 2. SDK Level: Enable the bridge to the platform collector
server.run(configure_telemetry=True)
For richer trace detail and a better user experience, consider adding custom instrumentation that follows the OpenInference standard.

Simple Monitoring: Agent Logs

The quickest way to see what your agent is doing is by streaming its logs directly to your terminal. This is ideal for debugging container lifecycle events and immediate request errors.
agentstack logs
What you’ll see:
  • Agent startup and initialization
  • Request processing steps
  • Error messages and stack traces
  • Container lifecycle events
Note: Logs are only available for managed (containerized) agents that are currently running on Agent Stack.

Advanced Observability: Traces & Metrics

By default, Agent Stack integrates with Arize Phoenix for local visualization of agent traces. For cloud-based observability and production monitoring, you can easily integrate Langfuse. Telemetry details include:
  • Platform version and runtime details
  • Agent execution traces

Enable Phoenix Observability

License Notice: Phoenix is licensed under the Elastic License v2 (ELv2). It is disabled by default in Agent Stack. By enabling it, you acknowledge responsibility for ensuring compliance with these terms in your specific use case.
1. Install and Enable

Install and start Phoenix using the agentstack platform start command:
agentstack platform start --set phoenix.enabled=true
You can run this even if your platform is already running; it will update the configuration without losing existing data.
2. Verify Initialization

Spinning up the Phoenix container can take a moment, even after the CLI reports success. Open http://localhost:6006 to check whether it is up; if not, wait a few moments and retry.
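If you are scripting around the platform, this wait can be automated. The helper below (the function name is hypothetical, stdlib only) polls the Phoenix UI until it responds or a timeout elapses:

```python
import time
import urllib.request
from urllib.error import URLError

def wait_for_phoenix(url: str = "http://localhost:6006",
                     timeout_s: float = 60.0,
                     interval_s: float = 2.0) -> bool:
    """Poll `url` until it answers; return False if timeout_s elapses first."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2):
                return True  # Phoenix answered; UI is reachable
        except (URLError, OSError):
            pass  # container still starting; retry after a short sleep
        time.sleep(interval_s)
    return False
```

A CI script might call wait_for_phoenix() right after `agentstack platform start` and fail fast if it returns False.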
3. Run an Agent

Execute an agent to generate data:
agentstack run chat "Hello"
4. View Traces in Phoenix

Open http://localhost:6006 in your browser and navigate to the default project to explore the collected traces.

Enable Langfuse Observability

To route traces to Langfuse, provide a custom OTLP configuration:
1. Get Langfuse credentials

  1. Sign up at cloud.langfuse.com
  2. Create a project and generate API keys
  3. Encode your keys: echo -n "public_key:secret_key" | base64
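If you prefer to build the header value in code, this stdlib-only sketch (the function name is hypothetical) produces the same Basic auth string as the echo | base64 pipeline above:

```python
import base64

def langfuse_basic_auth(public_key: str, secret_key: str) -> str:
    """Return the Authorization header value for a Langfuse key pair."""
    # Equivalent to: echo -n "public_key:secret_key" | base64
    token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
    return f"Basic {token}"

# Example with placeholder keys:
# langfuse_basic_auth("pk-lf-...", "sk-lf-...")
```

Paste the returned value into the Authorization header of the collector configuration in the next step.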
2. Create a configuration file (config.yaml):

collector:
  exporters:
    otlphttp/langfuse:
      endpoint: "https://cloud.langfuse.com/api/public/otel" # EU data region
      headers:
        Authorization: "Basic <auth-string>"
  pipelines:
    traces:
      receivers: [ otlp ]
      processors: [ memory_limiter, filter/phoenix, batch ]
      exporters: [ otlphttp/langfuse ]
3. Start the platform with the configuration

agentstack platform start -f config.yaml
4. Run and View

Execute an agent to generate data:
agentstack run chat "Hello"
Check your Langfuse project dashboard for incoming traces and metrics.

Additional Resources