Monitoring Claude Code/Cowork at scale with OTel in Elastic

How Elastic's InfoSec team built a monitoring pipeline for Claude Code and Claude Cowork using their native OTel export capabilities and Elastic's OTel ingestion infrastructure.

As AI coding assistants become standard tools in engineering workflows, security teams face a new challenge: how do you maintain visibility into what an AI agent is doing (and why) across your organization? When those agents can execute shell commands, read files, call APIs, and interact with internal systems via MCP connectors, you need real-time observability to support threat detection, incident response, and compliance.

This post walks through how Elastic's InfoSec team built a monitoring pipeline for Claude Code and Claude Cowork using their native OpenTelemetry (OTel) export capabilities and Elastic's own OTel ingestion infrastructure. We cover the telemetry schema, the gateway deployment, custom Elasticsearch mappings and ingest pipelines, managed configuration delivery, and the security use cases enabled by this data.

Why Elastic's InfoSec team monitors AI agents

At Elastic, we practice what we call "Customer Zero." The InfoSec team is the first and most demanding user of Elastic's products, always running the newest versions in production. Our goal is to use our own products to improve our security posture whenever we can.

Claude Code and Cowork are now in active use across Elastic's engineering organization. Claude Code runs locally on developer machines as a CLI-based AI coding assistant. Cowork is part of the Claude Desktop app and also runs locally. It can read files, execute code in a sandbox, search the web, and interact with connected services like Slack, GitHub, Jira, and Google Calendar through MCP connectors. Both tools support connecting to internal systems, which means they operate in a trust boundary that security teams need to monitor.

What Claude Code and Cowork export via OpenTelemetry

Both products export telemetry through standard OpenTelemetry protocols, emitting the same five event types:

  • api_request — model, cost, token counts, latency
  • tool_result — tool name (plus MCP server and tool name for MCP calls), success/failure, duration
  • tool_decision — auto-approved vs user-approved
  • user_prompt — what the user asked the agent to do
  • api_error — error message, status code

Every event includes user identity (user.email, organization.id), session context (session.id, prompt.id, event.sequence), and cost/token fields on API request events. Claude Code telemetry is opt-in and redacts prompts and tool arguments by default; enable them with OTEL_LOG_USER_PROMPTS=1 and OTEL_LOG_TOOL_DETAILS=1. Cowork is configured centrally in the Anthropic admin portal and includes full details automatically.

For the full telemetry schema, see the Claude Code Monitoring documentation and the Claude Cowork Monitoring documentation.
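To make the schema concrete, here is an illustrative api_request log record as it might land in Elasticsearch. The field names follow the documented schema, but the values, the model name, and the exact attribute nesting shown here are assumptions, not verbatim output:

```json
{
  "@timestamp": "2025-06-01T12:34:56.789Z",
  "attributes": {
    "event.name": "api_request",
    "user.email": "jane.doe@example.com",
    "organization.id": "org-1234",
    "session.id": "0f8a6c2e-1b3d-4e5f-8a9b-0c1d2e3f4a5b",
    "prompt.id": "p-42",
    "event.sequence": 17,
    "model": "claude-sonnet-4-5",
    "cost_usd": 0.042,
    "input_tokens": 1850,
    "output_tokens": 410,
    "duration_ms": 2300
  },
  "resource": {
    "attributes": { "service.name": "claude-code" }
  }
}
```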

Architecture: Getting the data to Elasticsearch

There are two ways to get Claude Code and Cowork OTel data into Elasticsearch. We deployed the self-managed gateway approach first, but Elastic Cloud users have a simpler option.

Option 1: EDOT OTel Gateway (self-managed)

This is the approach we used internally. Since Elastic's InfoSec team runs self-managed ECK (Elastic Cloud on Kubernetes) clusters, we deployed the Elastic Distribution of OpenTelemetry (EDOT) Collector as a gateway. Both Claude Code and Cowork run locally on user machines and send OTLP/HTTP to the gateway, which authenticates the request and writes to Elasticsearch.

We used the opentelemetry-collector Helm chart with the EDOT collector image (docker.elastic.co/elastic-agent/elastic-otel-collector). The EDOT image provides native Elastic data stream routing, which is important for getting logs into the right data streams without extra configuration.

The gateway runs in deployment mode and uses bearer token authentication via the bearertokenauth extension.

Here is the core collector configuration:

config:
  extensions:
    bearertokenauth:
      scheme: "Bearer"
      token: "${env:OTEL_GATEWAY_TOKEN}"
  receivers:
    otlp:
      protocols:
        http:
          endpoint: "0.0.0.0:4318"
          auth:
            authenticator: bearertokenauth
  processors:
    transform/route:
      log_statements:
        - context: log
          conditions:
            - resource.attributes["service.name"] == "claude-code"
          statements:
            - set(resource.attributes["data_stream.dataset"], "claude_code")
        - context: log
          conditions:
            - resource.attributes["service.name"] == "cowork"
          statements:
            - set(resource.attributes["data_stream.dataset"], "claude_cowork")
  exporters:
    elasticsearch:
      endpoints: ["https://your-elasticsearch:9200"]
      user: "${env:ES_USERNAME}"
      password: "${env:ES_PASSWORD}"
  service:
    extensions: [bearertokenauth]
    pipelines:
      logs:
        receivers: [otlp]
        processors: [transform/route]
        exporters: [elasticsearch]

Option 2: Elastic Cloud Managed OTLP Endpoint (no gateway needed)

If you are running Elastic Cloud (Serverless or Hosted), you can skip the gateway entirely. Elastic's Managed OTLP (mOTLP) endpoint provides a resilient, auto-scaling ingestion layer that accepts OTLP data directly — no collector infrastructure to deploy or maintain.

To use it, point Claude Code's OTLP exporter directly at your Elastic Cloud mOTLP endpoint:

export CLAUDE_CODE_ENABLE_TELEMETRY=1
export OTEL_LOGS_EXPORTER=otlp
export OTEL_METRICS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<your-motlp-endpoint>"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=ApiKey <your-api-key>"
export OTEL_RESOURCE_ATTRIBUTES="data_stream.dataset=claude_code"

The data_stream.dataset resource attribute is important here; it controls which data stream receives the logs. Without it, data lands in a generic OTel data stream where your custom index templates and ingest pipelines will not apply. Set it to claude_code or claude_cowork so the data routes to the dedicated logs-claude_code.otel-* or logs-claude_cowork.otel-* streams with the correct field mappings.
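Once events start flowing, you can confirm the routing worked by inspecting the data stream directly (this sketch assumes the default namespace suffix):

```
GET _data_stream/logs-claude_code.otel-default
```

If the stream exists and its backing indices list your index template, the data_stream.dataset attribute is being honored.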

With mOTLP, you get native OTLP ingestion with automatic data stream routing, a built-in failure store to protect data during indexing issues, and no APM Server requirement.

The custom index templates and ingest pipelines described below work the same way with the managed endpoint; you just don't need to operate the gateway.

For full setup details, see the Elastic Cloud Managed OTLP documentation.

Custom Elasticsearch mappings and ingest pipelines

By default, OTel attributes are indexed as keywords in Elasticsearch. That works for filtering and grouping, but it breaks numeric aggregations. You cannot SUM or AVG a keyword field. We created custom mappings to fix the field types and an ingest pipeline to parse JSON string fields into structured objects.

Component template

The component template overrides the default keyword mappings for numeric and boolean fields, and adds flattened type mappings for the JSON-encoded tool parameters:

PUT _component_template/logs-claude_code.otel@custom
{
  "template": {
    "mappings": {
      "properties": {
        "cost_usd":                  { "type": "float" },
        "duration_ms":               { "type": "long" },
        "input_tokens":              { "type": "long" },
        "output_tokens":             { "type": "long" },
        "cache_creation_tokens":     { "type": "long" },
        "cache_read_tokens":         { "type": "long" },
        "prompt_length":             { "type": "long" },
        "tool_result_size_bytes":    { "type": "long" },
        "success":                   { "type": "boolean" },
        "tool_parameters_flattened": { "type": "flattened" },
        "tool_input_flattened":      { "type": "flattened" }
      }
    }
  }
}

The flattened type is important here. tool_parameters and tool_input arrive as JSON strings containing nested keys like mcp_server_name, mcp_tool_name, bash_command, or command. By parsing them into flattened fields, you can query individual keys without creating an unbounded number of mapped fields.

A future enhancement will be to extract high-value fields from these JSON payloads into dedicated mapped fields — things like MCP server names, tool names, and bash commands — to drive richer analytics, aggregations, and detection rules directly on those values.

Index template

The index template composes in all the standard OTel component templates plus our custom one. It matches both logs-claude_code.otel-* and logs-claude_cowork.otel-* so both data streams share the same field mappings:

PUT _index_template/logs-claude_code.otel
{
  "index_patterns": [
    "logs-claude_code.otel-*",
    "logs-claude_cowork.otel-*"
  ],
  "composed_of": [
    "logs@mappings",
    "logs@settings",
    "otel@mappings",
    "otel@settings",
    "logs-otel@mappings",
    "semconv-resource-to-ecs@mappings",
    "logs@custom",
    "logs-otel@custom",
    "logs-claude_code.otel@custom",
    "ecs@mappings"
  ],
  "priority": 150,
  "data_stream": {},
  "allow_auto_create": true,
  "ignore_missing_component_templates": [
    "logs@custom",
    "logs-otel@custom"
  ]
}

Ingest pipeline

The ingest pipeline parses tool_parameters and tool_input from JSON strings into objects, writing to separate *_flattened target fields to avoid conflicts with the original keyword-mapped attributes:

PUT _ingest/pipeline/logs-claude_code.otel@custom
{
  "description": "Parse JSON string fields in Claude Code/Cowork OTel telemetry",
  "processors": [
    {
      "json": {
        "field": "attributes.tool_parameters",
        "target_field": "tool_parameters_flattened",
        "if": "ctx.attributes?.tool_parameters != null && ctx.attributes.tool_parameters.startsWith('{')",
        "ignore_failure": true
      }
    },
    {
      "json": {
        "field": "attributes.tool_input",
        "target_field": "tool_input_flattened",
        "if": "ctx.attributes?.tool_input != null && ctx.attributes.tool_input.startsWith('{')",
        "ignore_failure": true
      }
    }
  ]
}

After creating all three resources, new data flowing into the logs-claude_code.otel-* and logs-claude_cowork.otel-* data streams will have correct numeric field types and searchable structured tool parameters.
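A quick way to verify the ingest pipeline is working is to search for documents where a flattened target field was actually populated, for example:

```
GET logs-claude_code.otel-*/_search
{
  "size": 1,
  "query": {
    "exists": { "field": "tool_input_flattened" }
  }
}
```

If this returns no hits after tool activity, check that the pipeline name matches the data stream's default pipeline and that tool details are enabled in the telemetry configuration.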

Configuring telemetry export

Claude Code and Cowork are configured differently: Claude Code uses standard OpenTelemetry environment variables, while Cowork is configured centrally by administrators.

Claude Code

Claude Code supports managed settings that are deployed by IT and cannot be overridden by users. The configuration is a JSON file containing an env block:

{
  "env": {
    "CLAUDE_CODE_ENABLE_TELEMETRY": "1",
    "OTEL_METRICS_EXPORTER": "otlp",
    "OTEL_LOGS_EXPORTER": "otlp",
    "OTEL_LOG_TOOL_DETAILS": "1",
    "OTEL_LOG_USER_PROMPTS": "1",
    "OTEL_EXPORTER_OTLP_PROTOCOL": "http/protobuf",
    "OTEL_EXPORTER_OTLP_ENDPOINT": "https://your-otel-gateway:443",
    "OTEL_EXPORTER_OTLP_HEADERS": "Authorization=Bearer your-token"
  }
}

This managed settings file can be delivered via MDM (Jamf, Intune), server-managed settings through the Claude.ai Admin Console, or file-based deployment. See the Claude Code managed settings documentation for the full list of delivery mechanisms and their security properties.

For local testing, you can put the same configuration in ~/.claude/settings.json on your own machine before rolling it out organization-wide.

Cowork

Cowork OTel export is configured centrally by administrators in the Anthropic admin portal. Administrators set the OTLP endpoint and authentication headers in the admin console, and Cowork instances automatically pick up the configuration. Prompt content and tool details are included by default without requiring additional flags.

Because Cowork runs in a sandbox, the OTel gateway endpoint must be allowlisted for outbound network access from the sandbox environment. Without this, telemetry export will fail silently.

Security use cases

The combination of event types, identity fields, and tool parameters creates a rich dataset for security operations. Here are the use cases we are building detection and investigation capabilities around.

Tool invocation auditing. Every tool call is logged with the tool name and input parameters. For MCP tools, this includes the MCP server name and tool name (e.g., slack_send_message, github/search_issues). You can detect unauthorized data access, unusual shell commands, or unexpected MCP server interactions. Use the attributes.tool_name + attributes.tool_parameters fields.
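As a sketch of what a detection query can look like, here is a search for Bash tool invocations whose command contains curl. The "Bash" tool name and the command key under the flattened field are assumptions based on the payload keys described above; adjust them to match what you see in your own data:

```
GET logs-claude_code.otel-*/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "attributes.tool_name": "Bash" } },
        { "wildcard": { "tool_input_flattened.command": "*curl*" } }
      ]
    }
  }
}
```

The flattened field type supports term and wildcard queries on individual keys, which is what makes this pattern work without mapping every key explicitly.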

Session reconstruction. The session.id field combined with event.sequence provides a monotonically increasing counter within each session. You can reconstruct the complete sequence of a Claude session: what the user asked, what tools ran, what data was accessed, and what APIs were called. This is valuable for incident response — if you detect a suspicious tool call, you can pull the full session context.
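A session can be pulled back in order with a search like the following. The attributes.* field paths and the session ID value are illustrative assumptions; note also that sorting numerically on event.sequence requires it to be mapped as a number (you can add it to the custom component template alongside the other numeric fields):

```
GET logs-claude_*.otel-*/_search
{
  "size": 500,
  "query": {
    "term": { "attributes.session.id": "0f8a6c2e-1b3d-4e5f-8a9b-0c1d2e3f4a5b" }
  },
  "sort": [
    { "attributes.event.sequence": { "order": "asc" } }
  ]
}
```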

Permission decision analysis. The attributes.event.name: tool_decision events provide insight into how each tool use was approved. This lets you detect users auto-approving risky tool categories, or identify unusual permission patterns across the fleet.

  • config — Auto-allowed by settings or policy
  • hook — Decided by a configured hook script
  • user_temporary — User clicked accept for this invocation
  • user_permanent — User clicked "always allow" for this tool
  • user_abort — User aborted the session
  • user_reject — User explicitly rejected the tool use

Cost anomaly detection. The cost_usd field on every api_request event enables per-request, per-session, and per-user cost tracking. You can alert on unusually expensive sessions or identify users with outsized consumption patterns.
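For example, a terms aggregation over user identity with a sum of cost_usd surfaces the heaviest consumers. This sketch assumes user identity lives at attributes.user.email and relies on cost_usd being remapped to a numeric type by the custom component template above:

```
GET logs-claude_code.otel-*/_search
{
  "size": 0,
  "aggs": {
    "cost_by_user": {
      "terms": {
        "field": "attributes.user.email",
        "size": 20,
        "order": { "total_cost": "desc" }
      },
      "aggs": {
        "total_cost": { "sum": { "field": "cost_usd" } }
      }
    }
  }
}
```

The same shape works per session by swapping the terms field to the session ID, which is useful for alerting on single runaway sessions rather than cumulative user spend.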

Correlating with EDR data. If you are running Elastic Defend on your endpoints, you can correlate Claude's OTel telemetry with EDR process and file events to understand the full picture. When Claude Code executes a Bash command, the OTel tool_result event tells you what the agent decided to run and why (via the preceding user_prompt). The corresponding Elastic Defend process event tells you exactly what happened on the host — child processes spawned, files written, network connections made. Joining these two data sources by timestamp and host gives you both the intent (from the AI agent telemetry) and the impact (from endpoint telemetry) in a single investigation.

MCP server access monitoring. As organizations connect AI agents to internal systems through MCP, monitoring which servers are accessed and with what tools becomes critical. The tool_parameters_flattened.mcp_server_name and tool_parameters_flattened.mcp_tool_name fields provide this visibility.

For example, to see tool invocations for Slack, you could query tool_name: "mcp_tool" AND tool_parameters_flattened.mcp_tool_name:slack*.

Beyond OTel: Claude enterprise audit logs

Telemetry from Claude Code and Cowork covers agent activity on endpoints, but it doesn't capture everything. For full visibility, organizations should also collect Claude enterprise audit logs from the Compliance API. This is the only source of activity on the web interface (claude.ai) and of traditional security audit events, such as login activity, permission changes, and organization-level administration. Combining both data sources gives security teams a complete picture across all Claude products.

Conclusion

AI coding assistants and autonomous agents are becoming part of the standard enterprise toolkit. If your security team doesn't have visibility into what these tools are doing, you have a gap. Claude Code and Cowork ship with OpenTelemetry support that provides exactly the kind of telemetry security teams need: identity, session context, tool invocation details, cost data, and permission decisions. Elastic's native OTel ingestion capabilities, whether through the Managed OTLP endpoint on Elastic Cloud or the EDOT Collector in a self-managed environment, make it straightforward to get this data into Elasticsearch, where you can search it, build dashboards, and write detection rules.

If you want to get started, sign up for a free trial of Elastic Cloud and try the Managed OTLP endpoint, or install the EDOT OTel Collector in your existing environment.
