Java Management Extensions (JMX) is the JVM's built-in management interface, exposing runtime and component metrics such as memory, threads, and request pools. It is useful for collecting operational telemetry from Java services without changing application code.
Collecting JMX metrics with OpenTelemetry can be done in two main ways depending on your environment, requirements and constraints:
- from inside the JVM with the OpenTelemetry Instrumentation Java agent (or EDOT Java)
- from outside the JVM with the jmx-scraper.
Throughout this article, we use the term "Java agent" to refer to the OpenTelemetry Java instrumentation agent. Everything described here also applies to Elastic's own distribution (EDOT Java), which is based on it and provides the same features.
This walkthrough uses a Tomcat server as the target and shows how to validate which metrics are emitted with the logging exporter.
The configuration examples in this article use Java system properties, which must be passed as -D flags in the JVM startup command. Equivalent environment variables can be used for configuration instead.
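The mapping between the two forms is mechanical: uppercase the property name and replace dots with underscores. A quick illustration:

```shell
# -Dotel.service.name=tomcat-demo has the environment-variable equivalent:
export OTEL_SERVICE_NAME=tomcat-demo

# The naming rule (uppercase, dots to underscores) applied mechanically:
echo "otel.jmx.target.system" | tr 'a-z.' 'A-Z_'   # prints OTEL_JMX_TARGET_SYSTEM
```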
Prerequisites
- A local Tomcat install (or any JVM app you can start with custom JVM flags)
- Java 8 or later on the host (recent Tomcat versions may require a newer Java version)
- An OpenTelemetry Collector endpoint if you want to ship metrics beyond local logging
Choosing between the Java agent and jmx-scraper
Use the Java agent (or EDOT Java) when you can modify JVM startup flags and want in-process collection with full context from the running application: this lets you capture traces, logs, and metrics with a single tool deployment.
Use jmx-scraper when you cannot install an agent on the JVM or when you prefer out-of-process collection from a separate host. This requires configuring the JVM and the network for remote JMX access, as well as managing authentication and credentials.
Both approaches rely on the same JMX metric mappings: you can validate the emitted metrics with the logging exporter, then switch to OTLP to send them to a Collector or any other OTLP endpoint.
Option 1: Collect JMX metrics inside the JVM with the Java agent
OpenTelemetry Java instrumentation ships with a curated set of JMX metric mappings. For Tomcat, you just need to enable the Java agent and set otel.jmx.target.system=tomcat.
Step 1 - Download the OpenTelemetry Java agent
In this walkthrough the agent is downloaded to /opt/otel, but you can choose any location on the host.
Make sure the path is consistent with the -javaagent flag in the next step.
mkdir -p /opt/otel
curl -L -o /opt/otel/opentelemetry-javaagent.jar \
https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar
Step 2 - Configure Tomcat with bin/setenv.sh
Create or update bin/setenv.sh so Tomcat launches with the agent and JMX target system enabled.
#!/bin/bash
export CATALINA_OPTS="$CATALINA_OPTS \
-javaagent:/opt/otel/opentelemetry-javaagent.jar \
-Dotel.service.name=tomcat-demo \
-Dotel.metrics.exporter=otlp,logging \
-Dotel.jmx.target.system=tomcat"
This will configure the agent to log metrics (using the logging exporter) in addition to sending them to the Collector.
Step 3 - Validate the emitted metrics
Start Tomcat and watch stdout.
./bin/catalina.sh run
By default, metrics are collected and exported every 60 seconds, so you might have to wait a bit for the first metrics to be logged.
If needed, you can use the otel.metric.export.interval option (a value in milliseconds) to adjust the export frequency.
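For example, to get faster feedback during validation, the interval could be shortened in bin/setenv.sh (the 10-second value here is illustrative):

```shell
# Shorten the metric export interval to 10 seconds (value in milliseconds).
# Revert to the default (60000) for production use.
export CATALINA_OPTS="$CATALINA_OPTS -Dotel.metric.export.interval=10000"
```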
You should see logging exporter output with JVM and Tomcat metrics. Look for lines containing the LoggingMetricExporter class name.
INFO io.opentelemetry.exporter.logging.LoggingMetricExporter - MetricData{name=tomcat.threadpool.currentThreadsBusy, ...}
INFO io.opentelemetry.exporter.logging.LoggingMetricExporter - MetricData{name=jvm.memory.used, ...}
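To get a quick inventory of the metric names being emitted, you can extract them from the log output with grep; a small sketch run against sample lines like the ones above:

```shell
# Extract metric names from logging-exporter output lines.
grep -o 'name=[A-Za-z._]*' <<'EOF'
INFO io.opentelemetry.exporter.logging.LoggingMetricExporter - MetricData{name=tomcat.threadpool.currentThreadsBusy, ...}
INFO io.opentelemetry.exporter.logging.LoggingMetricExporter - MetricData{name=jvm.memory.used, ...}
EOF
```

This prints name=tomcat.threadpool.currentThreadsBusy and name=jvm.memory.used; in practice you would pipe Tomcat's stdout (e.g. ./bin/catalina.sh run 2>&1) into the same grep.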
Step 4 - Send metrics to a Collector
Once metric capture is validated, you are ready to send metrics to a Collector. You will have to:
- remove the logging exporter, as it's no longer necessary for production
- configure the OTLP endpoint (otel.exporter.otlp.endpoint) and headers (otel.exporter.otlp.headers) if needed
The bin/setenv.sh file should be modified to look like this:
#!/bin/bash
export CATALINA_OPTS="$CATALINA_OPTS \
-javaagent:/opt/otel/opentelemetry-javaagent.jar \
-Dotel.service.name=tomcat-demo \
-Dotel.jmx.target.system=tomcat \
-Dotel.exporter.otlp.endpoint=https://your-collector:4317 \
-Dotel.exporter.otlp.headers=Authorization=Bearer <your-token>"
When using the Java agent, JVM metrics are automatically captured by the runtime-telemetry module, so it is not necessary to include jvm in the otel.jmx.target.system option.
Option 2: Collect JMX metrics from outside the JVM with jmx-scraper
When you cannot install an agent in the JVM or if only metrics are required, jmx-scraper lets you query JMX remotely and export metrics to an OTLP endpoint.
Step 1 - Enable remote JMX on Tomcat
Add JMX remote options to bin/setenv.sh and create access/password files.
Warning: This uses trivial credentials and disables SSL. Do not use this configuration in production. Note that the JVM refuses to enable remote JMX if the password file is readable by other users, hence the chmod 600 below.
mkdir -p /opt/jmx
cat <<EOF > ${CATALINA_HOME}/jmxremote.access
monitorRole readonly
EOF
cat <<EOF > ${CATALINA_HOME}/jmxremote.password
monitorRole monitorPass
EOF
chmod 600 ${CATALINA_HOME}/jmxremote.password
export CATALINA_OPTS="$CATALINA_OPTS \
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=9010 \
-Dcom.sun.management.jmxremote.rmi.port=9010 \
-Dcom.sun.management.jmxremote.authenticate=true \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.access.file=${CATALINA_HOME}/jmxremote.access \
-Dcom.sun.management.jmxremote.password.file=${CATALINA_HOME}/jmxremote.password \
-Djava.rmi.server.hostname=127.0.0.1"
Step 2 - Download jmx-scraper
The jmx-scraper jar is downloaded to /opt/otel here, but you can choose any location on the host.
mkdir -p /opt/otel
curl -L -o /opt/otel/opentelemetry-jmx-scraper.jar \
https://github.com/open-telemetry/opentelemetry-java-contrib/releases/latest/download/opentelemetry-jmx-scraper.jar
Step 3 - Check the JMX connection
Run jmx-scraper with the credentials from the previous step to confirm it can reach Tomcat. If the credentials are wrong, you will see authentication errors.
java \
-Dotel.jmx.service.url=service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi \
-Dotel.jmx.username=monitorRole \
-Dotel.jmx.password=monitorPass \
-Dotel.jmx.target.system=tomcat \
-jar /opt/otel/opentelemetry-jmx-scraper.jar -test
You should get one of the following messages in the standard output:
- JMX connection test OK if the connection and authentication are successful
- JMX connection test ERROR otherwise
Step 4 - Validate the emitted metrics
Using the logging exporter lets you inspect metrics and attributes before sending them to a collector.
To capture both Tomcat and JVM metrics, you must set otel.jmx.target.system to tomcat,jvm.
java \
-Dotel.jmx.service.url=service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi \
-Dotel.jmx.username=monitorRole \
-Dotel.jmx.password=monitorPass \
-Dotel.jmx.target.system=tomcat,jvm \
-Dotel.metrics.exporter=logging \
-jar /opt/otel/opentelemetry-jmx-scraper.jar
Step 5 - Send metrics to a Collector
After validation, to send metrics to an OTLP endpoint, you will have to:
- remove -Dotel.metrics.exporter to restore the default otlp value
- configure the OTLP endpoint (otel.exporter.otlp.endpoint) and headers (otel.exporter.otlp.headers) if needed
java \
-Dotel.jmx.service.url=service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi \
-Dotel.jmx.username=monitorRole \
-Dotel.jmx.password=monitorPass \
-Dotel.jmx.target.system=tomcat,jvm \
-Dotel.exporter.otlp.endpoint=https://your-collector:4317 \
-Dotel.exporter.otlp.headers="Authorization=Bearer <your-token>" \
-jar /opt/otel/opentelemetry-jmx-scraper.jar
Customizing the JMX Metrics Collection
Once the built-in Tomcat and JVM mappings are flowing, you can add custom rules with otel.jmx.config. Create a YAML file and pass its path alongside otel.jmx.target.system.
For example, the following custom.yaml file captures the custom.jvm.thread.count metric from the java.lang:type=Threading MBean:
---
rules:
- bean: "java.lang:type=Threading"
mapping:
ThreadCount:
metric: custom.jvm.thread.count
type: gauge
unit: "{thread}"
desc: Current number of live threads.
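The YAML above can be provisioned with a simple heredoc; this sketch saves it to /opt/otel/jmx/custom.yaml, the path used in the example invocation at the end of this section (any path readable by the scraper works):

```shell
# Save the custom rules where the jmx-scraper invocation will look for them.
mkdir -p /opt/otel/jmx
cat <<'EOF' > /opt/otel/jmx/custom.yaml
---
rules:
  - bean: "java.lang:type=Threading"
    mapping:
      ThreadCount:
        metric: custom.jvm.thread.count
        type: gauge
        unit: "{thread}"
        desc: Current number of live threads.
EOF
```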
For a complete reference on the configuration format and syntax, refer to the jmx-metrics module in OpenTelemetry Java instrumentation.
This custom configuration can be used with both jmx-scraper and the Java agent, as both support the otel.jmx.config option. For example, with jmx-scraper:
java \
-Dotel.jmx.service.url=service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi \
-Dotel.jmx.username=monitorRole \
-Dotel.jmx.password=monitorPass \
-Dotel.jmx.target.system=tomcat,jvm \
-Dotel.jmx.config=/opt/otel/jmx/custom.yaml \
-jar /opt/otel/opentelemetry-jmx-scraper.jar
You can pass multiple custom files as a comma-separated list to otel.jmx.config when you need to organize metrics by team or component.
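For instance, with hypothetical per-component files (the file names below are illustrative, not part of any distribution):

```shell
# Rules split across two files, merged at startup (illustrative paths).
-Dotel.jmx.config=/opt/otel/jmx/tomcat-pools.yaml,/opt/otel/jmx/app-caches.yaml
```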
Using the JMX Metrics in Kibana
Once you have collected JMX metrics using one of the approaches described in this article, you can start using them in Kibana: build custom dashboards and visualizations to explore and analyze the metrics, create alerts on top of them, or build MCP tools and AI agents that use them in your agentic workflows.
Here is an example of how you can use the JMX metrics in Kibana through ES|QL:
TS metrics*
| WHERE telemetry.sdk.language == "java"
| WHERE service.name == ?instance
| STATS
request_rate = SUM(RATE(tomcat.request.count))
BY Time = BUCKET(@timestamp, 100, ?_tstart, ?_tend)
You can use the native metric and dimension names of the JMX metrics to build your queries.
With the TS command you get first-class support for time series aggregation functions and dimensions on your metrics.
Queries like this are the building blocks for your dashboards, alerts, workflows, and AI agent tools.
Here is an example of a dashboard that visualizes the typical JMX metrics for Apache Tomcat:
Conclusion
In this article, we have seen how to collect JMX metrics with OpenTelemetry using either the Java agent or the jmx-scraper, and how to use those metrics in Kibana through ES|QL to build custom dashboards, alerts, workflows, and AI agent tools.
This is just the beginning of what you can do with the JMX metrics and Elastic Observability. Try it out yourself and explore the full potential of your JMX metrics when combined with powerful features provided by the Elastic Observability platform.