Monitoring and metrics
Configure ForgeRock® Identity Management server logs and monitoring metrics.
ForgeRock Identity Platform™ serves as the basis for our simple and comprehensive Identity and Access Management solution. We help our customers deepen their relationships with their customers, and improve the productivity and connectivity of their employees and partners. For more information about ForgeRock and about the platform, refer to https://www.forgerock.com.
The ForgeRock Common REST API works across the platform to provide common ways to access web resources and collections of resources.
Server logs
Server logging is not the same as auditing. Auditing logs activity on the IDM system, such as access and synchronization events. Server logging records information about the internal workings of IDM, like system messages, error reporting, service loading, or startup and shutdown messaging.
Configure server logging in your project’s conf/logging.properties file. Changes to logging settings require a server restart before they take effect. Alternatively, use JMX via jconsole to change the logging settings; in this case, changes take effect without restarting the server.
Log message handlers
The way IDM logs messages is set in the handlers property in the logging.properties file. This property has the following value by default:
handlers=java.util.logging.FileHandler, java.util.logging.ConsoleHandler
The default handlers are:
- FileHandler writes formatted log records to a single file or to a set of rotating log files. By default, log files are written to logs/openidm*.log files.
- ConsoleHandler writes formatted logs to System.err.
Additional log message handlers are listed in the logging.properties file.
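For example, a hedged logging.properties sketch that adjusts the default FileHandler; the pattern, limit, and count values shown are illustrative, not required, and use standard java.util.logging property names:

```properties
# Write rotating log files to a custom location (illustrative pattern):
java.util.logging.FileHandler.pattern = logs/openidm-%u.log
# Rotate when a file reaches 5 MB, keeping 5 files:
java.util.logging.FileHandler.limit = 5242880
java.util.logging.FileHandler.count = 5
java.util.logging.FileHandler.formatter = org.forgerock.openidm.logger.SanitizedThreadIdLogFormatter
```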
Log message format
IDM supports the two default log formatters included with Java. These are set in the conf/logging.properties file:
- java.util.logging.SimpleFormatter.format outputs a human-readable text log file. This is the default formatter.
- java.util.logging.XMLFormatter outputs logs as XML, for use in logging software that can read XML logs.
IDM extends the Java SimpleFormatter with the following formatting options:

org.forgerock.openidm.logger.SanitizedThreadIdLogFormatter
- This is the default formatter for console and file logging. It extends the SimpleFormatter to include the ID of the thread that generated each message, which helps with debugging when reviewing the logs. In the following example log excerpt, the thread ID is [19]:

[19] May 23, 2018 10:30:26.959 AM org.forgerock.openidm.repo.opendj.impl.Activator start
INFO: Registered bootstrap repository service
[19] May 23, 2018 10:30:26.960 AM org.forgerock.openidm.repo.opendj.impl.Activator start
INFO: DS bundle started

The SanitizedThreadIdLogFormatter also encodes all control characters (such as newline characters) using URL-encoding, to protect against log forgery. Control characters in stack traces are not encoded.

org.forgerock.openidm.logger.ThreadIdLogFormatter
- Similar to the SanitizedThreadIdLogFormatter, but does not encode control characters. If you do not want to encode control characters in file and console log messages, edit the file and console handlers in conf/logging.properties as follows:

java.util.logging.FileHandler.formatter = org.forgerock.openidm.logger.ThreadIdLogFormatter
java.util.logging.ConsoleHandler.formatter = org.forgerock.openidm.logger.ThreadIdLogFormatter
The SimpleFormatter (and, by extension, the SanitizedThreadIdLogFormatter and ThreadIdLogFormatter) lets you customize what information to include in log messages, and how this information is laid out. By default, log messages include the date, time (down to the millisecond), log level, source of the message, and the message sent (including exceptions). To change the defaults, adjust the value of java.util.logging.SimpleFormatter.format in your conf/logging.properties file. For more information on how to customize the log message format, refer to the related Java documentation.
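As a hedged illustration, the following format string uses standard java.util.Formatter syntax (argument 1 is the timestamp, 2 the source, 4 the level, 5 the message, and 6 any throwable) to produce single-line, ISO-style log entries; the exact layout shown is an example, not a required value:

```properties
# Example: "2018-05-23 10:30:26.959 INFO [source] message"
java.util.logging.SimpleFormatter.format = %1$tF %1$tT.%1$tL %4$s [%2$s] %5$s%6$s%n
```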
Logging level
By default, IDM logs messages at the INFO level. This logging level is specified with the following global property in conf/logging.properties:
.level=INFO
You can specify separate logging levels for individual server features, which override the global logging level. Set the log level, per package, to one of the following:
SEVERE (highest value)
WARNING
INFO
CONFIG
FINE
FINER
FINEST (lowest value)
For example, the following setting decreases the messages logged by the embedded PostgreSQL database:
# reduce the logging of embedded postgres since it is very verbose
ru.yandex.qatools.embed.postgresql.level = SEVERE
Set the log level to OFF to disable logging completely (refer to Disable logs), or to ALL to capture all possible log messages.
If you use logger functions in your JavaScript scripts, set the log level for the scripts as follows:
org.forgerock.openidm.script.javascript.JavaScript.level=level
You can override the log level settings, per script, with the following setting:
org.forgerock.openidm.script.javascript.JavaScript.script-name.level=level
For more information about using logger functions in scripts, refer to Log Functions.
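For example, the level properties above can be combined to keep most script logging at INFO while raising one script to FINEST for debugging ("mapping-transform" is a hypothetical script name used for illustration):

```properties
# Default level for all JavaScript logger calls:
org.forgerock.openidm.script.javascript.JavaScript.level=INFO
# Override for a single script (hypothetical script name):
org.forgerock.openidm.script.javascript.JavaScript.mapping-transform.level=FINEST
```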
It is strongly recommended that you do not log messages at the FINEST level in a production environment, as this level of logging can affect performance.
Log file rotation
By default, IDM rotates log files when the size reaches 5 MB, and retains up to 5 files. All system and custom log messages are also written to these files. You can modify these limits in the following properties in the logging.properties file for your project:
# Limiting size of output file in bytes:
java.util.logging.FileHandler.limit = 5242880
# Number of output files to cycle through, by appending an
# integer to the base file name:
java.util.logging.FileHandler.count = 5
There is currently no logging.properties setting for time-based rotation of the server log files.
Disable logs
If necessary, you can disable logs. For example, to disable ConsoleHandler logging, make the following changes in your project’s conf/logging.properties file before you start IDM. Set java.util.logging.ConsoleHandler.level = OFF, and comment out other references to ConsoleHandler, as shown in the following excerpt:
# ConsoleHandler: A simple handler for writing formatted records to System.err
#handlers=java.util.logging.FileHandler, java.util.logging.ConsoleHandler
handlers=java.util.logging.FileHandler
...
# --- ConsoleHandler ---
# Default: java.util.logging.ConsoleHandler.level = INFO
java.util.logging.ConsoleHandler.level = OFF
#java.util.logging.ConsoleHandler.formatter = ...
#java.util.logging.ConsoleHandler.filter=...
Monitoring
IDM includes the following tools for monitoring metrics:
- A Dropwizard dashboard widget, for viewing metrics within IDM. Widgets are deprecated and will be removed in a future release of IDM. For more information, refer to Deprecation.
- A Prometheus endpoint, for viewing metrics through external resources such as Prometheus and Grafana.
Enable metrics
IDM does not collect metrics by default. To enable metrics collection, open conf/metrics.json and set the enabled property to true:
{
"enabled" : true
}
After you have enabled metrics, the following command returns all collected metrics:
curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--header "Accept-API-Version: resource=1.0" \
--request GET \
'http://localhost:8080/openidm/metrics/api?_queryFilter=true'
Example response:
{
"result": [
{
"_id": "jvm.memory-usage.pools.Metaspace.used",
"value": 101709640,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.non-heap.used",
"value": 159728512,
"_type": "gauge"
},
{
"_id": "repo.ds.get-connection",
"count": 25,
"max": 13.407542,
"mean": 7.016551422258608,
"min": 2.274208,
"p50": 7.038666999999999,
"p75": 8.653042,
"p95": 12.613916999999999,
"p98": 13.407542,
"p99": 13.407542,
"p999": 13.407542,
"stddev": 3.0043480716919446,
"m15_rate": 1.00220378348439,
"m1_rate": 1.0294250758954837,
"m5_rate": 1.0065021413358448,
"mean_rate": 1.173715776010422,
"duration_units": "milliseconds",
"rate_units": "calls/second",
"total": 174.284168,
"_type": "timer"
},
{
"_id": "jvm.memory-usage.pools.G1-Old-Gen.committed",
"value": 794820608,
"_type": "gauge"
},
{
"_id": "user.session.static-user",
"m15_rate": 0.19780232116334415,
"m1_rate": 0.17175127368841633,
"m5_rate": 0.1935515941358193,
"mean_rate": 0.09993098620692964,
"units": "events/second",
"total": 2,
"count": 2,
"_type": "summary"
},
{
"_id": "jvm.max-memory",
"value": 2147483648,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.Compressed-Class-Space.usage",
"value": 0.015285782516002655,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.CodeHeap-'non-profiled-nmethods'.init",
"value": 2555904,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.non-heap.usage",
"value": -233855696,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.G1-Old-Gen.init",
"value": 2034237440,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.total.max",
"value": 2147483647,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.total.committed",
"value": 2399019008,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.heap.init",
"value": 2147483648,
"_type": "gauge"
},
{
"_id": "repo.ds.update.cluster",
"count": 5,
"max": 13.490832999999999,
"mean": 11.40983226004801,
"min": 8.795417,
"p50": 10.932459,
"p75": 12.708499999999999,
"p95": 13.490832999999999,
"p98": 13.490832999999999,
"p99": 13.490832999999999,
"p999": 13.490832999999999,
"stddev": 1.594812363576534,
"m15_rate": 0.2011018917421949,
"m1_rate": 0.21471253794774184,
"m5_rate": 0.2032510706679223,
"mean_rate": 0.23483436767444082,
"duration_units": "milliseconds",
"rate_units": "calls/second",
"total": 56.608459,
"_type": "timer"
},
{
"_id": "repo.ds.read.cluster",
"count": 5,
"max": 13.253,
"mean": 9.663193140378318,
"min": 6.366667,
"p50": 10.924292,
"p75": 11.00375,
"p95": 13.253,
"p98": 13.253,
"p99": 13.253,
"p999": 13.253,
"stddev": 2.480672375020272,
"m15_rate": 0.19999386134317423,
"m1_rate": 0.1987214208736065,
"m5_rate": 0.19994536143224584,
"mean_rate": 0.23467002606408544,
"duration_units": "milliseconds",
"rate_units": "calls/second",
"total": 49.324167,
"_type": "timer"
},
{
"_id": "jvm.memory-usage.pools.CodeHeap-'profiled-nmethods'.init",
"value": 2555904,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.CodeHeap-'non-nmethods'.usage",
"value": 0.42355263157894735,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.Compressed-Class-Space.init",
"value": 0,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.G1-Old-Gen.used",
"value": 137279336,
"_type": "gauge"
},
{
"_id": "jvm.thread-state.timed_waiting.count",
"value": 84,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.G1-Old-Gen.usage",
"value": 0.08353511989116669,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.Metaspace.init",
"value": 0,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.G1-Survivor-Space.committed",
"value": 52428800,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.CodeHeap-'non-profiled-nmethods'.usage",
"value": 0.12785444714742736,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.heap.usage",
"value": 0.5991601198911667,
"_type": "gauge"
},
{
"_id": "jvm.garbage-collector.G1-Old-Generation.count",
"value": 4,
"_type": "gauge"
},
{
"_id": "jvm.garbage-collector.G1-Young-Generation.count",
"value": 18,
"_type": "gauge"
},
{
"_id": "jvm.thread-state.waiting.count",
"value": 50,
"_type": "gauge"
},
{
"_id": "jvm.class-loading.loaded",
"value": 22747,
"_type": "gauge"
},
{
"_id": "jvm.thread-state.terminated.count",
"value": 0,
"_type": "gauge"
},
{
"_id": "jvm.available-cpus",
"value": 10,
"_type": "gauge"
},
{
"_id": "jvm.garbage-collector.G1-Old-Generation.time",
"value": 360,
"_type": "gauge"
},
{
"_id": "filter.scripted.on-request.d6fc81179beaca37094a23c2fcd00aaf54bb3ef9:router:onRequest",
"count": 2,
"max": 21.174791,
"mean": 16.456464351980753,
"min": 12.961041999999999,
"p50": 12.961041999999999,
"p75": 21.174791,
"p95": 21.174791,
"p98": 21.174791,
"p99": 21.174791,
"p999": 21.174791,
"stddev": 4.061101381329072,
"m15_rate": 0.19780232116334415,
"m1_rate": 0.17175127368841633,
"m5_rate": 0.1935515941358193,
"mean_rate": 0.09992547412748008,
"duration_units": "milliseconds",
"rate_units": "calls/second",
"total": 34.135833,
"_type": "timer"
},
{
"_id": "jvm.memory-usage.heap.committed",
"value": 2147483648,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.Metaspace.committed",
"value": 110043136,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.CodeHeap-'non-profiled-nmethods'.committed",
"value": 10813440,
"_type": "gauge"
},
{
"_id": "jvm.used-memory",
"value": 2147483648,
"_type": "gauge"
},
{
"_id": "scheduler.job-store.repo.query-list.triggers",
"count": 5,
"max": 21.151916999999997,
"mean": 15.297513466089498,
"min": 8.745917,
"p50": 15.716375,
"p75": 16.422957999999998,
"p95": 21.151916999999997,
"p98": 21.151916999999997,
"p99": 21.151916999999997,
"p999": 21.151916999999997,
"stddev": 3.80884629646711,
"m15_rate": 0.39669429076432344,
"m1_rate": 0.355760156614281,
"m5_rate": 0.3902458849001428,
"mean_rate": 0.2410821468791895,
"duration_units": "milliseconds",
"rate_units": "calls/second",
"total": 76.092959,
"_type": "timer"
},
{
"_id": "jvm.memory-usage.pools.CodeHeap-'non-nmethods'.committed",
"value": 2555904,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.total.init",
"value": 2155151360,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.CodeHeap-'non-nmethods'.used",
"value": 2432384,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.non-heap.committed",
"value": 171778048,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.G1-Survivor-Space.usage",
"value": 1,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.G1-Eden-Space.init",
"value": 113246208,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.Metaspace.usage",
"value": 0.9206230255320343,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.G1-Eden-Space.max",
"value": -1,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.G1-Old-Gen.max",
"value": 2147483648,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.total.used",
"value": 1520570400,
"_type": "gauge"
},
{
"_id": "jvm.thread-state.blocked.count",
"value": 0,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.G1-Survivor-Space.used-after-gc",
"value": 52428800,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.G1-Eden-Space.usage",
"value": 0.8114423851732474,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.CodeHeap-'non-profiled-nmethods'.used",
"value": 10729600,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.CodeHeap-'profiled-nmethods'.used",
"value": 33729792,
"_type": "gauge"
},
{
"_id": "repo.ds.query._adhoc-filter.scheduler",
"count": 5,
"max": 9.139959,
"mean": 7.781217638351263,
"min": 6.122667,
"p50": 7.9247499999999995,
"p75": 8.001249999999999,
"p95": 9.139959,
"p98": 9.139959,
"p99": 9.139959,
"p999": 9.139959,
"stddev": 0.9531334102258491,
"m15_rate": 0.39669429076432344,
"m1_rate": 0.355760156614281,
"m5_rate": 0.3902458849001428,
"mean_rate": 0.2411032736278605,
"duration_units": "milliseconds",
"rate_units": "calls/second",
"total": 38.649876,
"_type": "timer"
},
{
"_id": "jvm.memory-usage.pools.G1-Survivor-Space.init",
"value": 0,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.non-heap.max",
"value": -1,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.G1-Survivor-Space.max",
"value": -1,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.G1-Survivor-Space.used",
"value": 52428800,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.CodeHeap-'profiled-nmethods'.max",
"value": 122908672,
"_type": "gauge"
},
{
"_id": "jvm.thread-state.daemon.count",
"value": 98,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.G1-Eden-Space.used-after-gc",
"value": 0,
"_type": "gauge"
},
{
"_id": "jvm.thread-state.new.count",
"value": 0,
"_type": "gauge"
},
{
"_id": "repo.ds.query._adhoc-filter.cluster",
"count": 10,
"max": 7.115333,
"mean": 4.415241990632845,
"min": 2.32275,
"p50": 4.271917,
"p75": 5.5420419999999995,
"p95": 7.115333,
"p98": 7.115333,
"p99": 7.115333,
"p999": 7.115333,
"stddev": 1.57203480094502,
"m15_rate": 0.5967004294211492,
"m1_rate": 0.5570387357406746,
"m5_rate": 0.590300523467897,
"mean_rate": 0.4695941183571473,
"duration_units": "milliseconds",
"rate_units": "calls/second",
"total": 43.476667,
"_type": "timer"
},
{
"_id": "jvm.memory-usage.pools.G1-Eden-Space.used",
"value": 317718528,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.Compressed-Class-Space.committed",
"value": 14024704,
"_type": "gauge"
},
{
"_id": "jvm.garbage-collector.G1-Young-Generation.time",
"value": 465,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.CodeHeap-'non-nmethods'.init",
"value": 2555904,
"_type": "gauge"
},
{
"_id": "jvm.thread-state.count",
"value": 180,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.CodeHeap-'non-profiled-nmethods'.max",
"value": 122912768,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.non-heap.init",
"value": 7667712,
"_type": "gauge"
},
{
"_id": "audit.authentication",
"m15_rate": 0.19780232116334415,
"m1_rate": 0.17175127368841633,
"m5_rate": 0.1935515941358193,
"mean_rate": 0.09988653077391328,
"units": "events/second",
"total": 2,
"count": 2,
"_type": "summary"
},
{
"_id": "jvm.memory-usage.heap.used",
"value": 507426664,
"_type": "gauge"
},
{
"_id": "jvm.class-loading.unloaded",
"value": 16,
"_type": "gauge"
},
{
"_id": "jvm.thread-state.runnable.count",
"value": 46,
"_type": "gauge"
},
{
"_id": "audit.access",
"m15_rate": 0.19779007785878447,
"m1_rate": 0.16929634497812282,
"m5_rate": 0.1934432200964012,
"mean_rate": 0.05002186361867778,
"units": "events/second",
"total": 1,
"count": 1,
"_type": "summary"
},
{
"_id": "jvm.memory-usage.pools.CodeHeap-'profiled-nmethods'.committed",
"value": 34340864,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.G1-Eden-Space.committed",
"value": 1300234240,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.Metaspace.max",
"value": -1,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.G1-Old-Gen.used-after-gc",
"value": 121026408,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.Compressed-Class-Space.max",
"value": 1073741824,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.heap.max",
"value": 2147483648,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.CodeHeap-'non-nmethods'.max",
"value": 5836800,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.CodeHeap-'profiled-nmethods'.usage",
"value": 0.39190126470490216,
"_type": "gauge"
},
{
"_id": "jvm.memory-usage.pools.Compressed-Class-Space.used",
"value": 11149728,
"_type": "gauge"
},
{
"_id": "jvm.free-used-memory",
"value": 860110576,
"_type": "gauge"
}
],
"resultCount": 85,
"pagedResultsCookie": null,
"totalPagedResultsPolicy": "EXACT",
"totalPagedResults": 85,
"remainingPagedResults": -1
}
Metrics are only collected after they have been triggered by activity in IDM, such as a reconciliation.
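Because the response is a flat list of typed entries, it is straightforward to post-process. The following Python sketch groups metric IDs by their _type field; it parses a trimmed copy of the example response rather than making a live call:

```python
import json

# Trimmed sample of an /openidm/metrics/api response (values illustrative).
response = json.loads("""
{
  "result": [
    {"_id": "jvm.available-cpus", "value": 10, "_type": "gauge"},
    {"_id": "repo.ds.get-connection", "count": 25,
     "mean": 7.016551422258608, "total": 174.284168,
     "duration_units": "milliseconds", "_type": "timer"}
  ],
  "resultCount": 2
}
""")

# Group metric IDs by type for a quick overview.
by_type = {}
for metric in response["result"]:
    by_type.setdefault(metric["_type"], []).append(metric["_id"])

print(by_type)  # {'gauge': ['jvm.available-cpus'], 'timer': ['repo.ds.get-connection']}
```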
Learn more:
Dropwizard widget
Widgets are deprecated and will be removed in a future release of IDM. For more information, refer to Deprecation.
The Dropwizard widget creates a graph of metrics based on server activity and is useful for lightweight, live monitoring of IDM. The widget has the following limitations:
- The graph created by the widget does not persist. If you reload or navigate away from the page, the graph restarts.
- The widget only works with time-based metrics.
To add the Dropwizard widget:
- From the navigation bar, click Dashboards > Dashboard Name.
- On the Dashboard Name page, click Add Widget.
- In the Add Widget window, from the Select a Widget drop-down list, select Dropwizard Table with Graph.
- To preview any metric on the graph, click Add to Graph adjacent to that metric.
- Click Add.
The Dropwizard widget now displays on the dashboard.
Prometheus endpoint
This topic describes how to configure Prometheus and Grafana to collect IDM metrics. These third-party tools are not supported by ForgeRock. Refer to the Prometheus documentation.
Prometheus is a third-party tool used for gathering and processing monitoring data. Prometheus uses the openidm/metrics/prometheus endpoint to gather information. This endpoint is protected by a basic authentication filter, using the following credentials, set in the resolver/boot.properties file:
openidm.prometheus.username=username
openidm.prometheus.password=password
The Prometheus endpoint also supports secret resolution. Refer to Secret stores.
Disable Prometheus
To disable IDM’s Prometheus handler, comment out or remove openidm.prometheus.username and openidm.prometheus.password from the resolver/boot.properties file. If these properties are not set, IDM does not enable the Prometheus handler.
Configure Prometheus
- Download Prometheus.
- Create a prometheus.yml configuration file. For more information, refer to the Prometheus configuration documentation. An example prometheus.yml file:

global:
  scrape_interval: 15s
  external_labels:
    monitor: 'my_prometheus'

# https://prometheus.io/docs/operating/configuration/#scrape_config
scrape_configs:
  - job_name: 'openidm'
    scrape_interval: 15s
    scrape_timeout: 5s
    metrics_path: 'openidm/metrics/prometheus'
    scheme: http
    basic_auth:
      username: 'prometheus'
      password: 'prometheus'
    static_configs:
      - targets: ['localhost:8080']

This example configures Prometheus to scrape the openidm/metrics/prometheus endpoint every 15 seconds (scrape_interval: 15s), with a 5-second timeout (scrape_timeout: 5s), receiving metrics in the plain text exposition format. For more information about reporting formats, refer to the Prometheus documentation on Exposition Formats.
- Verify the configuration returns metric results:
curl \
--user prometheus:prometheus \
--header "Accept-API-Version: resource=1.0" \
--request GET \
'http://localhost:8080/openidm/metrics/prometheus'
Example response:
# HELP idm_jvm_available_cpus Automatically generated
# TYPE idm_jvm_available_cpus gauge
idm_jvm_available_cpus 10.0
# HELP idm_jvm_class_loading_loaded Automatically generated
# TYPE idm_jvm_class_loading_loaded gauge
idm_jvm_class_loading_loaded 24876.0
# HELP idm_jvm_class_loading_unloaded Automatically generated
# TYPE idm_jvm_class_loading_unloaded gauge
idm_jvm_class_loading_unloaded 1.0
# HELP idm_jvm_free_used_memory_bytes Automatically generated
# TYPE idm_jvm_free_used_memory_bytes gauge
idm_jvm_free_used_memory_bytes 9.77543264E8
# HELP idm_jvm_garbage_collector_g1_old_generation_count Automatically generated
# TYPE idm_jvm_garbage_collector_g1_old_generation_count gauge
idm_jvm_garbage_collector_g1_old_generation_count 0.0
# HELP idm_jvm_garbage_collector_g1_old_generation_time Automatically generated
# TYPE idm_jvm_garbage_collector_g1_old_generation_time gauge
idm_jvm_garbage_collector_g1_old_generation_time 0.0
# HELP idm_jvm_garbage_collector_g1_young_generation_count Automatically generated
# TYPE idm_jvm_garbage_collector_g1_young_generation_count gauge
idm_jvm_garbage_collector_g1_young_generation_count 82.0
# HELP idm_jvm_garbage_collector_g1_young_generation_time Automatically generated
# TYPE idm_jvm_garbage_collector_g1_young_generation_time gauge
idm_jvm_garbage_collector_g1_young_generation_time 2127.0
# HELP idm_jvm_max_memory_bytes Automatically generated
# TYPE idm_jvm_max_memory_bytes gauge
idm_jvm_max_memory_bytes 2.147483648E9
...
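The exposition format above is line-oriented: comment lines carry metadata (# HELP, # TYPE), and the remaining lines are name/value samples. A minimal Python sketch of that parsing, run against a snippet of the response rather than a live endpoint:

```python
# Snippet of the Prometheus text exposition format shown above.
sample = """\
# HELP idm_jvm_available_cpus Automatically generated
# TYPE idm_jvm_available_cpus gauge
idm_jvm_available_cpus 10.0
# HELP idm_jvm_class_loading_loaded Automatically generated
# TYPE idm_jvm_class_loading_loaded gauge
idm_jvm_class_loading_loaded 24876.0
"""

metrics = {}
for line in sample.splitlines():
    if line.startswith("#") or not line.strip():
        continue  # skip metadata and blank lines
    name, value = line.rsplit(" ", 1)
    metrics[name] = float(value)

print(metrics["idm_jvm_available_cpus"])  # 10.0
```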
- Start Prometheus with the prometheus.yml configuration file:

prometheus --config.file=/path/to/prometheus.yml
- To confirm that Prometheus is gathering data from IDM, go to the Prometheus monitoring page (default http://localhost:9090).
Configure Grafana
Prometheus lets you monitor and process information provided by IDM. If you need deeper analytics, you can use tools such as Grafana to create customized charts and graphs based on Prometheus data. For information on installing and running Grafana, refer to the Grafana website.
You can also monitor aspects of IDM’s performance using Prometheus to plug JVM metrics into a Grafana dashboard. For more information on using metrics to observe the system under load, refer to Load testing.
Before you get started, download the Monitoring Dashboard Samples from the ForgeRock BackStage download site. Open monitoring.dashboard.json from the downloaded .zip file, as you’ll need it during the following procedure.
To set up a Grafana dashboard with IDM metrics using Prometheus:
- In a browser, go to the main Grafana page (default http://localhost:3000) and log in. The default username and password for Grafana is admin.
- To add your Prometheus installation to Grafana as a data source, click the toggle menu button, and click Connections > Data sources.
- On the Data sources page, click Add data source.
- On the Add data source page, select Prometheus.
- Enter information and select options, as needed. The information you enter here should match the settings in the monitoring.dashboard.json file:
  - Give your data source a name; for example, ForgeRockIDM.
  - Set the URL (default http://localhost:9090).
  - Enable Basic auth.
  - Enter the User (default prometheus).
  - Enter the Password (default prometheus).
- Click Save & test. If the test succeeds, Grafana displays "Data source is working".
Create a Grafana dashboard
After Prometheus has been configured as a data source in Grafana, you can create a dashboard with IDM metrics:
- In Grafana, click the toggle menu button, and click Dashboards.
- Click New, and do one of the following:
  - Select Import:
    - On the Import dashboard page, drag the monitoring.dashboard.json file from its location on your system to the Upload dashboard JSON file area.
    - Enter information in the Options area, and select the Prometheus data source you previously created.
    - Click Import.
  - Select New dashboard:
    - Click Add visualization.
    - Select the Prometheus data source you previously created.
    - Configure the panel.
For more information, refer to the Grafana documentation.
Load testing
Load testing can help you get the most out of IDM and other ForgeRock products. The benefits load testing provides include:
- Reducing the chance that unexpected spikes in system activity will cause the system to become unstable
- Allowing developers and system administrators to reason more accurately and be more confident in release cycle timelines
- Providing baseline statistics which can be used to identify and investigate unexpected behavior
Load testing is a complex subject that requires knowledge of your system and a disciplined approach. There is no "one-size-fits-all" solution that applies in all circumstances. However, there are some basic principles to keep in mind while planning, executing, and evaluating load tests.
Plan tests
The first step is to determine what metrics need to be examined, what components are going to be tested, what levels of load are going to be used, and what response ranges are acceptable. Answering these questions requires:
- Service-level agreements (SLAs)
- Understanding of your use case
- Baseline knowledge of your system
SLAs provide a stationary, business-based target to aim for in testing. An example SLA appears as follows:
| Service/Endpoint | Sustained load | Peak load | Required response time |
|---|---|---|---|
| Customer auth against LDAP repo | 50,000 over 16 hours | 4,000 per second three times in a 16-hour period | 200ms |
| Employee auth against AD repo | 4,000 over 10 hours | 100/second | 400ms |
| Customer registration | 1,000 over 24 hours | 10/second | 500ms |
| Employee password reset | 10 over 24 hours | 1/second | 500ms |
Details will vary depending on your use case and application flow, present usage patterns, full load profile, and environment. To get the most benefit, collect this information.
The system’s full load profile depends on how it is designed and used. For example, some systems have thousands of clients each using a small slice of bandwidth, while others have only a few high-bandwidth connections. Understanding these nuances helps determine an appropriate number of connections and threads of execution to use to generate a test load.
If you have trouble determining which systems and components are being used at various points during your application flow, consider modeling your application using a sequence diagram.
Understand resource usage
Understanding what resources are heavily consumed by ForgeRock products will help you with your test planning. The following chart details some products and their consumed resources:

| Product | Consumed resources |
|---|---|
| AM with external stores | CPU, memory |
| DS as a user repository | I/O, memory |
| DS as a token store | I/O, memory (if high token count) |
| IDM | I/O; CPU and memory play an important role in provisioning, sync, and user self-service |
| IG | CPU |
All of the above depends on network performance, including name resolution and proper load balancing when required.
Execute tests
When it comes to executing tests, these are the basic principles to keep in mind:
- Every system is different; "it depends" is the cardinal rule.
- Testing scenarios that don’t happen in reality gives you test results that don’t happen in reality.
- System performance is constrained by the scarcest resource.
One way to ensure that your tests reflect real use patterns is to begin with a load generator that creates periods of consistent use and periods of random spikes in activity. During the consistent periods, gradually add load until you exceed your SLAs and baselines. By using that data and the data from the periods of spiking activity, you can determine how your system handles spikes in activity in many different scenarios.
Your load generator should be located on separate hardware/instances from your production systems. It should have adequate resources to generate the expected load.
When testing systems with many components, begin by testing the most basic things — I/O, CPU, and memory use. IDM provides insight into these by exposing JVM Metrics.
Once you have an understanding of the basic elements of your system, introduce new components into the tests. Keep a record of each test’s environment and the components which were under test. These components may include:
- Hardware/Hypervisor/Container platform
- Hosting OS/VM/Container environment
- Hosted OS
- Java Virtual Machine (JVM)
- Web/J2EE Container (if used to host ForgeRock AM/IG or ForgeRock AM Agent)
- Databases, repositories, and directory servers used with ForgeRock
- Networking, load balancers, and firewalls between instances
- SSL, termination points, and other communications
- Points of integration, if any
- Other applications and services that utilize ForgeRock components
- Load generation configuration
- Sample data, logs from test runs, and other generated files
While there are many tools that can help you monitor your system, a thorough understanding of your system logs is the best path to understanding its behavior.

To keep your results clear and focused, only add or adjust one variable at a time. Do not run tests designed to stress the system to its theoretical limit. The results you get from these stress tests rarely provide actionable insights.
Change the JVM heap size
Changing the JVM heap size can improve performance and reduce the time it takes to run reconciliations.
You can set the JVM heap size via the OPENIDM_OPTS environment variable. If OPENIDM_OPTS is undefined, the JVM maximum heap size defaults to 2GB. For example, to set the minimum and maximum heap sizes to 4GB, enter the following before starting IDM:

cd /path/to/openidm/
export OPENIDM_OPTS="-Xms4096m -Xmx4096m"
./startup.sh
Using OPENIDM_HOME:   /path/to/openidm
Using PROJECT_HOME:   /path/to/openidm
Using OPENIDM_OPTS:   -Xms4096m -Xmx4096m
...
OpenIDM ready
cd \path\to\openidm
set OPENIDM_OPTS=-Xms4096m -Xmx4096m
startup.bat
"Using OPENIDM_HOME:   \path\to\openidm"
"Using PROJECT_HOME:   \path\to\openidm"
"Using OPENIDM_OPTS:   -Xms4096m -Xmx4096m -Dfile.encoding=UTF-8"
...
OpenIDM ready
You can also edit the OPENIDM_OPTS values in startup.sh or startup.bat.
Metrics reference
IDM exposes a number of metrics. All metrics are available at both the openidm/metrics/api and openidm/metrics/prometheus endpoints. The actual metric names can vary, depending on the endpoint used. Also refer to Monitoring.
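To illustrate how the two endpoints name the same metric, here is a rough sketch of the convention described under Prometheus metrics below (an idm_ prefix, underscores instead of dots and hyphens, and a unit suffix for timers). The to_prometheus helper is hypothetical, not an IDM API.

```python
import re

def to_prometheus(api_name: str, unit: str = "") -> str:
    """Hypothetical sketch of the naming convention: prefix with idm_,
    replace non-alphanumeric characters with underscores, and append a
    unit suffix (such as seconds for timers) where one applies."""
    base = "idm_" + re.sub(r"[^A-Za-z0-9]+", "_", api_name)
    return f"{base}_{unit}" if unit else base

# The same timer metric, as named at each endpoint:
print(to_prometheus("recon.target-phase", unit="seconds"))
# idm_recon_target_phase_seconds
```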
Metric types
Metrics are organized into the following types:
Timer
Timers provide a histogram of the duration of an event, along with a measure of the rate of occurrences. Timers can be monitored using the Dropwizard dashboard widget and the IDM Prometheus endpoint. Durations in timers are measured in milliseconds. Rates are reported in number of calls per second. The following example shows a Timer metric:
{
"_id": "sync.source.perform-action",
"count": 2,
"max": 371.53391,
"mean": 370.1752705,
"min": 368.816631,
"p50": 371.53391,
"p75": 371.53391,
"p95": 371.53391,
"p98": 371.53391,
"p99": 371.53391,
"p999": 371.53391,
"stddev": 1.3586395,
"m15_rate": 0.393388581528647,
"m1_rate": 0.311520313228562,
"m5_rate": 0.3804917698002856,
"mean_rate": 0.08572717156016606,
"duration_units": "milliseconds",
"rate_units": "calls/second",
"total": 740.350541,
"_type": "timer"
}
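The fields of a Timer payload are internally consistent: total is the summed duration of all recorded events, so dividing it by count reproduces mean. A small sanity check against an abridged copy of the example above:

```python
import json

# Abridged copy of the Timer example above.
timer = json.loads("""
{
  "_id": "sync.source.perform-action",
  "count": 2,
  "mean": 370.1752705,
  "total": 740.350541,
  "duration_units": "milliseconds",
  "_type": "timer"
}
""")

assert timer["_type"] == "timer"
# total / count should reproduce the reported mean.
assert abs(timer["total"] / timer["count"] - timer["mean"]) < 1e-6
print(f'{timer["_id"]}: mean {timer["mean"]:.1f} {timer["duration_units"]}')
# sync.source.perform-action: mean 370.2 milliseconds
```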
Summary
Summaries are similar to Timers in that they measure a distribution of events. However, Summaries record values that aren’t units of time, such as user login counts. Summaries cannot be graphed in the Dropwizard dashboard widget, but are available through the Prometheus endpoint and by querying the openidm/metrics/api endpoint directly. The following example shows a Summary metric:
{
"_id": "audit.recon",
"m15_rate": 0.786777163057294,
"m1_rate": 0.623040626457124,
"m5_rate": 0.7609835396005712,
"mean_rate": 0.16977218861919927,
"units": "events/second",
"total": 4,
"count": 4,
"_type": "summary"
}
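Because every payload carries a _type discriminator, a client consuming openidm/metrics/api can branch on metric type without knowing metric names in advance. A minimal sketch (the describe helper is hypothetical):

```python
def describe(metric: dict) -> str:
    """Summarize a metric payload from openidm/metrics/api by its _type."""
    kind = metric.get("_type")
    if kind == "timer":
        return f'{metric["_id"]}: {metric["count"]} timed events'
    if kind == "summary":
        return f'{metric["_id"]}: {metric["count"]} events, rates in {metric["units"]}'
    return f'{metric["_id"]}: unhandled type {kind!r}'

# Abridged copy of the Summary example above.
summary = {"_id": "audit.recon", "count": 4,
           "units": "events/second", "_type": "summary"}
print(describe(summary))  # audit.recon: 4 events, rates in events/second
```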
API metrics
Metrics accessed at the api endpoint (such as those consumed by the Dropwizard dashboard widget) use dot notation for their metric names; for example, recon.target-phase. The following table lists the API metrics available in IDM:
API metrics available in IDM
API Metric Name | Type | Description |
---|---|---|
|
Summary |
Count of all audit events generated of a given topic type. |
|
Timer |
Rate of reading response objects to fulfill the |
|
Timer |
Rate of reading response objects to fulfill the |
|
Timer |
Rate that filter scripts are executed per action. Monitors scripted filters and delegated admin. |
|
Timer |
Rate of ICF query executions with queryExpression and the time taken to perform this operation. |
|
Timer |
Rate of ICF query executions with queryFilter and the time taken to perform this operation. |
|
Timer |
Rate of ICF query executions with queryId, and time taken to perform this operation. |
|
Timer |
Rate of ICF query executions when the query type is UNKNOWN, and time taken to perform this operation. |
|
Timer |
Rate of operations on internal objects. |
|
Timer |
Rate of fetch operations of relationship fields for internal objects. |
|
Timer |
Query rate on relationship values for internal objects. |
|
Timer |
Rate of script executions on internal objects. |
|
Timer |
Rate of validate operations of relationship fields for internal objects. |
|
Timer |
Duration of live sync on a system object. |
|
Timer |
Rate of responses requiring field augmentation. When the repository cannot retrieve all data in a single call, IDM performs additional read operations to complete (augment) the missing data. |
|
Timer |
Rate of operations on a managed object. |
|
Timer |
Rate of fetches of relationship fields of a managed object. |
|
Timer |
Rate of queries to get relationship values for a resource on a managed object. |
|
Timer |
Rate of validations of relationship fields of a managed object. |
|
Timer |
Rate of executions of a script on a managed object. |
|
Timer |
Latency of enforcing temporal constraints on role objects during object creation. |
|
Timer |
Latency of enforcing temporal constraints on role objects during object deletion. |
|
Timer |
Latency of enforcing temporal constraints on role objects during object update. |
|
Timer |
Latency of enforcing temporal constraints on relationship grants during edge creation. |
|
Timer |
Latency of enforcing temporal constraints on relationship grants during edge deletion. |
|
Timer |
Latency of enforcing temporal constraints on relationship grants during edge update. |
|
Timer |
Rate of reads on relationship endpoint edges for validation. |
|
Timer |
Time spent in filter that maps non-nullable and null-valued array fields to an empty array. This filter is traversed for all repo access relating to internal and managed objects. |
|
Timer |
Rate of executions of a full reconciliation, and time taken to perform this operation. |
|
Timer |
Rate of merge operations after source and/or target objects have been retrieved during a merged query of recon association entries. |
|
Timer |
Rate of individual paged recon association entry queries during a merged query. More than one page of entries might be requested to build a single page of merged results. |
|
Timer |
Rate of source object retrieval via query when merging source objects to recon association entries. |
|
Timer |
Rate of target object retrieval via query when merging target objects to recon association entries. |
|
Timer |
The time taken to persist association data. The operation can be |
|
Timer |
Rate of executions of the id query phase of a reconciliation, and time taken to perform this operation. |
|
Timer |
Rate of executions of the source phase of a reconciliation, and time taken to perform this operation. |
|
Timer |
Rate of pagination executions of the source phase of a reconciliation, and time taken to perform this operation. |
|
Timer |
Rate of executions of the target phase of a reconciliation, and time taken to perform this operation. |
|
Timer |
Time (ms) spent running the Edge→Vertex relationship join query on the database and collecting the result set. |
|
Timer |
Rate of relationship graph query execution times. |
|
Timer |
Rate of relationship graph query result processing times. |
|
Timer |
Rate of executions of a query with queryId at a repository level and the time taken to perform this operation. |
|
Count |
Counts the usage statistics of the |
|
Timer |
Rate of retrievals of a repository connection. |
|
Timer |
Rate of actions to a repository datasource for a generic/explicit mapped table. |
|
Timer |
Rate of filtered queries (using native query expressions) on the relationship table. This metric measures the time spent making the query (in ms), and the number of times the query is invoked. |
|
Timer |
Rate of filtered queries (using the |
|
Timer |
Rate of execution time on the JDBC database for the |
|
Timer |
Rate of execution time on the JDBC database for CRUD operations. This rate does not include the time taken to obtain a connection to the database from the connection pool. The physical connections to the database have already been established inside the connection pool. |
|
Timer |
Rate of execution time on the JDBC database for queries (either |
|
Timer |
Rate of CRUDPAQ operations to a repository datasource for a generic/explicit/relationship mapped table. |
|
Timer |
Time (ms) spent in the various phases to retrieve relationship expanded data referenced by queried objects. |
|
Timer |
Rate of initiations of a CRUDPAQ operation to a repository datasource. |
|
Timer |
Rate of actions over the router and the time taken to perform this operation. |
|
Timer |
Rate of creates over the router and the time taken to perform this operation. |
|
Timer |
Rate of deletes over the router and the time taken to perform this operation. |
|
Timer |
Rate of patches over the router and the time taken to perform this operation. |
|
Timer |
Rate of queries with queryExpression completed over the router and the time taken to perform this operation. |
|
Timer |
Rate of queries with queryFilter completed over the router and the time taken to perform this operation. |
|
Timer |
Rate of reads over the router and the time taken to perform this operation. |
|
Timer |
Rate of updates over the router and the time taken to perform this operation. |
|
Timer |
Rate of calls to a script and time taken to complete. |
|
Summary |
Count of all successful user self-service password resets. |
|
Summary |
Count of all successful user self-service registrations by registration type. |
|
Summary |
Count of all successful user self-service registrations by registration type and provider. |
|
Timer |
Rate of requests to create a target object, and time taken to perform the operation. |
|
Timer |
Rate of requests to delete a target object, and time taken to perform the operation. |
|
Timer |
Rate of configurations applied to a mapping. |
|
Timer |
Rate of acquisition of queued synchronization events from the queue. |
|
Timer |
Rate of deletion of synchronization events from the queue. |
|
Timer |
Rate at which queued synchronization operations are executed. |
|
Summary |
Number of queued synchronization operations that failed. |
|
Summary |
Number of queued synchronization events acquired by another node in the cluster. |
|
Summary |
Number of queued synchronization events rejected because the backing thread-pool queue was at full capacity and the thread-pool had already allocated its maximum configured number of threads. |
|
Timer |
Rate at which queued synchronization events are released. |
|
Timer |
Times the release of queued synchronization events after a failure and before exceeding the retry count. |
|
Timer |
Rate of insertion of synchronization events into the queue. |
|
Timer |
The latency involved in polling for synchronization events. |
|
Timer |
Rate of reads of an object. |
|
Timer |
Rate of assessments of a synchronization situation. |
|
Timer |
Rate of correlations between a target and a given source, and time taken to perform this operation. |
|
Timer |
Rate of determinations done on a synchronization action based on its current situation. |
|
Timer |
Rate of completions of an action performed on a synchronization operation. |
|
Timer |
Rate of assessments of a target situation. |
|
Timer |
Rate of determinations done on a target action based on its current situation. |
|
Timer |
Rate of completions of an action performed on a target sync operation. |
|
Timer |
Rate of requests to update an object on the target, and the time taken to perform this operation. |
|
Summary |
Count of all successful logins by user type. |
|
Summary |
Count of all successful logins by user type and provider. |
|
Summary |
Number of 404 responses encountered when querying the |
|
Summary |
Number of edges skipped due to an unsatisfied temporal constraint on either the edge or the referred-to vertex. Encountered when querying the resource collection and relationship field at the traversal_depthX tag for the most recent X. |
|
Timer |
Time spent traversing relationship fields to calculate the specified virtual properties. The managed objects linked to by the traversal relationship fields define a tree whose root is the virtual property host. This object tree is traversed depth-first with the traversal_depthX corresponding to the latency involved with each relationship traversal. Traversal_depth0 corresponds to the first relationship field traversed. Because the tree is traversed depth-first, traversal_depthX subsumes all the traversal latencies for all traversal_depth Y, where Y>X. |
API JVM metrics available in IDM
These metrics depend on the JVM version and configuration. In particular, garbage-collector-related metrics depend on the garbage collector that the server uses. The garbage-collector metric names are unstable and can change even in a minor JVM release. |
API Metric Name | Type | Unit | Description |
---|---|---|---|
|
Gauge |
Count |
Number of processors available to the JVM. For more information, refer to Runtime. |
|
Gauge |
Count |
Number of classes loaded since the Java virtual machine started. For more information, refer to ClassLoadingMXBean. |
|
Gauge |
Count |
Number of classes unloaded since the Java virtual machine started. For more information, refer to ClassLoadingMXBean. |
|
Gauge |
Bytes |
For more information, refer to Runtime. |
|
Gauge |
Count |
For each garbage collector in the JVM. For more information, refer to GarbageCollectorMXBean. |
|
Gauge |
Milliseconds |
|
|
Gauge |
Count |
|
|
Gauge |
Milliseconds |
|
|
Gauge |
Bytes |
For more information, refer to Runtime. |
|
Gauge |
Bytes |
Amount of heap memory committed for the JVM to use. For more information, refer to MemoryMXBean. |
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
Maximum amount of heap memory available to the JVM. |
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
Amount of heap memory used by the JVM. |
|
Gauge |
Bytes |
Amount of non-heap memory committed for the JVM to use. |
|
Gauge |
Bytes |
Amount of non-heap memory the JVM initially requested from the operating system. |
|
Gauge |
Bytes |
Maximum amount of non-heap memory available to the JVM. |
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
Amount of non-heap memory used by the JVM. |
|
Gauge |
Bytes |
For each pool. For more information, refer to MemoryPoolMXBean. |
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
Amount of memory that is committed for the JVM to use. For more information, refer to MemoryMXBean. |
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Count |
For more information, refer to ThreadMXBean. |
|
Gauge |
Count |
Number of live threads including both daemon and non-daemon threads. |
|
Gauge |
Count |
Number of live daemon threads. |
|
Gauge |
Count |
Number of threads in the |
|
Gauge |
Count |
Number of threads in the |
|
Gauge |
Count |
Number of threads in the |
|
Gauge |
Count |
Number of threads in the |
|
Gauge |
Count |
Number of threads in the |
|
Gauge |
Bytes |
For more information, refer to totalMemory(). |
API scheduler metrics available in IDM
For example requests, refer to Scheduler metrics.
API Metric Name | Type | Description |
---|---|---|
|
Summary |
A summary of completed jobs for the specified job-group and job-name. |
|
Timer |
Time spent on executed jobs for the specified job-group and job-name. |
|
Timer |
Time spent storing scheduled jobs in the repository for the specified operation and scheduler-object. |
|
Summary |
A summary of successfully acquired jobs. |
|
Summary |
A summary of acquired jobs that time out. |
|
Summary |
A summary of fired schedule triggers. |
|
Summary |
A summary of misfired schedule triggers. |
|
Timer |
Time spent on recovered triggers. |
|
Timer |
Execution rate of scheduler requests for the specified type and operation. |
API workflow metrics available in IDM
API Metric Name | Type | Description |
---|---|---|
|
Timer |
Time spent invoking a message event. |
|
Timer |
Time spent invoking a signal event. |
|
Timer |
Time spent triggering an execution. |
|
Timer |
Time spent querying executions. |
|
Timer |
Time spent forcing synchronous execution of a job. |
|
Timer |
Time spent displaying the stacktrace for a job that triggered an exception. |
|
Timer |
Time spent deleting a job. |
|
Timer |
Time spent querying jobs. |
|
Timer |
Time spent reading a single job. |
|
Timer |
Time spent to execute a dead-letter job. |
|
Timer |
Time spent to retrieve the stacktrace for a dead-letter job. |
|
Timer |
Time spent to delete a dead-letter job. |
|
Timer |
Time spent to query dead-letter jobs. |
|
Timer |
Time spent to read a dead-letter job. |
|
Timer |
Time spent to deploy a model. |
|
Timer |
Time spent to list model deployments. |
|
Timer |
Time spent to validate BPMN content. |
|
Timer |
Time spent to create a model. |
|
Timer |
Time spent to delete a model. |
|
Timer |
Time spent to query models. |
|
Timer |
Time spent to read a model. |
|
Timer |
Time spent to update a model. |
|
Timer |
Time spent to delete a process definition. |
|
Timer |
Time spent to query process definitions. |
|
Timer |
Time spent to read a process definition. |
|
Timer |
Time spent to migrate a process instance. |
|
Timer |
Time spent to validate a migration of a process instance. |
|
Timer |
Time spent to create a process instance. |
|
Timer |
Time spent to delete a process instance. |
|
Timer |
Time spent to query process instances. |
|
Timer |
Time spent to read a process instance. |
|
Timer |
Time spent to query task definitions. |
|
Timer |
Time spent to read a task definition. |
|
Timer |
Time spent to complete a task instance. |
|
Timer |
Time spent to query task instances. |
|
Timer |
Time spent to read a task instance. |
|
Timer |
Time spent to update a task instance. |
Prometheus metrics
Metrics accessed through the Prometheus endpoint are prepended with idm_ and use underscores between words; for example, idm_recon_target_phase_seconds. The following table lists the Prometheus metrics available in IDM:
Prometheus metrics available in IDM
Prometheus Metric Name | Type | Description |
---|---|---|
|
Summary |
Count of all audit events generated of a given topic type. |
|
Timer |
Rate of reading response objects, to fulfill the |
|
Timer |
Rate of reading response objects, to fulfill the |
|
Timer |
Rate at which filter scripts are executed, per action. Monitors scripted filters and delegated admin. |
|
Timer |
Rate of ICF query executions with queryExpression, and time taken to perform this operation. |
|
Timer |
Rate of ICF query executions with queryFilter, and time taken to perform this operation. |
|
Timer |
Rate of ICF query executions with queryId, and time taken to perform this operation. |
|
Timer |
Rate of ICF query executions when the query type is UNKNOWN, and time taken to perform this operation. |
|
Timer |
Rate of fetch operations of relationship fields for internal objects. |
|
Timer |
Query rate on relationship values for internal objects. |
|
Timer |
Rate of validate operations of relationship fields for internal objects. |
|
Timer |
Rate of script executions on internal objects. |
|
Timer |
Rate of operations on internal objects. |
|
Timer |
Duration of live sync on a system object. |
|
Timer |
Rate of responses requiring field augmentation. When the repository is unable to retrieve all the data in a single call, IDM performs additional read operations to complete (augment) the missing data. |
|
Timer |
Rate of fetches of relationship fields of a managed object. |
|
Timer |
Rate of queries to get relationship values for a resource on a managed object. |
|
Timer |
Rate of validations of relationship fields of a managed object. |
|
Timer |
Rate of executions of a script on a managed object. |
|
Timer |
Latency of enforcing temporal constraints on role objects during object creation. |
|
Timer |
Latency of enforcing temporal constraints on role objects during object deletion. |
|
Timer |
Latency of enforcing temporal constraints on role objects during object update. |
|
Timer |
Latency of enforcing temporal constraints on relationship grants during edge creation. |
|
Timer |
Latency of enforcing temporal constraints on relationship grants during edge deletion. |
|
Timer |
Latency of enforcing temporal constraints on relationship grants during edge update. |
|
Timer |
Rate of reads on relationship endpoint edges for validation. |
|
Timer |
Rate of operations on a managed object. |
|
Timer |
Time spent in filter which maps non-nullable, null-valued array fields to an empty array. This filter is traversed for all repo access relating to internal and managed objects. |
|
Timer |
Rate of merge operations after source and/or target objects have been retrieved during a merged query of recon association entries. |
|
Timer |
Rate of individual paged recon association entry queries during a merged query. More than one page of entries might be requested to build a single page of merged results. |
|
Timer |
Rate of source object retrieval via query when merging source objects to recon association entries. |
|
Timer |
Rate of target object retrieval via query when merging target objects to recon association entries. |
|
Timer |
The time taken to persist association data. The operation can be |
|
Timer |
Rate of executions of the id query phase of a reconciliation, and time taken to perform this operation. |
|
Timer |
Rate of executions of a full reconciliation, and time taken to perform this operation. |
|
Timer |
Rate of pagination executions of the source phase of a reconciliation, and time taken to perform this operation. |
|
Timer |
Rate of executions of the source phase of a reconciliation, and time taken to perform this operation. |
|
Timer |
Rate of executions of the target phase of a reconciliation, and time taken to perform this operation. |
|
Timer |
Rate of filtered queries (using native query expressions) on the relationship table. This metric measures the time spent making the query (in ms), and the number of times the query is invoked. |
|
Timer |
Rate of filtered queries (using the |
|
Timer |
Rate of execution time on the JDBC database for the |
|
Timer |
Rate of execution time on the JDBC database for CRUD operations. This rate does not include the time taken to obtain a connection to the database from the connection pool. The physical connections to the database have already been established inside the connection pool. |
|
Timer |
Rate of execution time on the JDBC database for queries (either |
|
Timer |
Rate of retrievals of a repository connection. |
|
Count |
Counts the usage statistics of the |
|
Timer |
Time (ms) spent running the Edge→Vertex relationship join query on the database and collecting the result set. |
|
Timer |
Rate of relationship graph query execution times. |
|
Timer |
Rate of relationship graph query result processing times. |
|
Timer |
Rate of executions of a query with queryId at a repository level, and time taken to perform this operation. |
|
Timer |
Time (ms) spent in the various phases to retrieve relationship expanded data referenced by queried objects. |
|
Timer |
Rate of CRUDPAQ operations to a repository datasource for a generic/explicit/relationship mapped table. |
|
Timer |
Rate of actions to a repository datasource for a generic/explicit mapped table. |
|
Timer |
Rate of initiations of a CRUDPAQ operation to a repository datasource. |
|
Timer |
Rate of actions over the router, and time taken to perform this operation. |
|
Timer |
Rate of creates over the router, and time taken to perform this operation. |
|
Timer |
Rate of deletes over the router, and time taken to perform this operation. |
|
Timer |
Rate of patches over the router, and time taken to perform this operation. |
|
Timer |
Rate of queries with queryExpression completed over the router, and time taken to perform this operation. |
|
Timer |
Rate of queries with queryFilter completed over the router, and time taken to perform this operation. |
|
Timer |
Rate of reads over the router, and time taken to perform this operation. |
|
Timer |
Rate of updates over the router, and time taken to perform this operation. |
|
Timer |
Rate of calls to a script and time taken to complete. |
|
Summary |
Count of all successful user self-service password resets. |
|
Summary |
Count of all successful user self-service registrations by registration type and provider. |
|
Summary |
Count of all successful user self-service registrations by registration type. |
|
Timer |
Rate of requests to create an object on the target, and the time taken to perform this operation. |
|
Timer |
Rate of requests to delete an object on the target, and the time taken to perform this operation. |
|
Timer |
Rate of configurations applied to a mapping. |
|
Timer |
Rate of acquisition of queued synchronization events from the queue. |
|
Timer |
Rate of deletion of synchronization events from the queue. |
|
Timer |
Rate at which queued synchronization operations are executed. |
|
Summary |
Number of queued synchronization operations that failed. |
|
Timer |
The latency involved in polling for synchronization events. |
|
Summary |
Number of queued synchronization events that were acquired by another node in the cluster. |
|
Summary |
Number of queued synchronization events that were rejected because the backing thread-pool queue was at full capacity and the thread-pool had already allocated its maximum configured number of threads. |
|
Timer |
Times the release of queued synchronization events after a failure and before exceeding the retry count. |
|
Timer |
Rate at which queued synchronization events are released. |
|
Timer |
Rate of insertion of synchronization events into the queue. |
|
Timer |
Rate of reads of an object. |
|
Timer |
Rate of assessments of a synchronization situation. |
|
Timer |
Rate of correlations between a target and a given source, and time taken to perform this operation. |
|
Timer |
Rate of determinations done on a synchronization action based on its current situation. |
|
Timer |
Rate of completions of an action performed on a synchronization operation. |
|
Timer |
Rate of assessments of a target situation. |
|
Timer |
Rate of determinations done on a target action based on its current situation. |
|
Timer |
Rate of completions of an action performed on a target sync operation. |
|
Timer |
Rate of requests to update an object on the target, and the time taken to perform this operation. |
|
Summary |
Count of all successful logins by user type. |
|
Summary |
Count of all successful logins by user type and provider. |
|
Summary |
Number of 404 responses encountered when querying the |
|
Summary |
Number of edges skipped due to an unsatisfied temporal constraint on either the edge or the referred-to vertex. Encountered when querying the resource collection and relationship field at the traversal_depthX tag for the most recent X. X corresponds to the relationship field sequence. |
|
Timer |
Time spent traversing relationship fields to calculate the specified virtual properties. The managed objects linked to by the traversal relationship fields define a tree, whose root is the virtual property host. This object tree is traversed depth-first, with the traversal_depthX corresponding to the latency involved with each relationship traversal. Traversal_depth0 corresponds to the first relationship field traversed. Because the tree is traversed depth-first, traversal_depthX will subsume all the traversal latencies for all traversal_depth Y, where Y>X. X corresponds to the relationship field sequence. |
Prometheus JVM metrics available in IDM
These metrics depend on the JVM version and configuration. In particular, garbage-collector-related metrics depend on the garbage collector that the server uses. The garbage-collector metric names are unstable, and can change even in a minor JVM release. |
Prometheus Metric Name | Type | Unit | Description |
---|---|---|---|
|
Gauge |
Count |
Number of processors available to the JVM. For more information, refer to Runtime. |
|
Gauge |
Count |
Number of classes loaded since the Java virtual machine started. For more information, refer to ClassLoadingMXBean. |
|
Gauge |
Count |
Number of classes unloaded since the Java virtual machine started. For more information, refer to ClassLoadingMXBean. |
|
Gauge |
Bytes |
For more information, refer to Runtime. |
|
Gauge |
Count |
For each garbage collector in the JVM. For more information, refer to GarbageCollectorMXBean. |
|
Gauge |
Milliseconds |
|
|
Gauge |
Count |
|
|
Gauge |
Milliseconds |
|
|
Gauge |
Bytes |
For more information, refer to Runtime. |
|
Gauge |
Bytes |
Amount of heap memory committed for the JVM to use. For more information, refer to MemoryMXBean. |
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
Maximum amount of heap memory available to the JVM. |
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
Amount of heap memory used by the JVM. |
|
Gauge |
Bytes |
Amount of non-heap memory committed for the JVM to use. |
|
Gauge |
Bytes |
Amount of non-heap memory the JVM initially requested from the operating system. |
|
Gauge |
Bytes |
Maximum amount of non-heap memory available to the JVM. |
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
Amount of non-heap memory used by the JVM. |
|
Gauge |
Bytes |
For each pool. For more information, refer to MemoryPoolMXBean. |
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
Amount of memory that is committed for the JVM to use. For more information, refer to MemoryMXBean. |
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Bytes |
|
|
Gauge |
Count |
For more information, refer to ThreadMXBean. |
|
Gauge |
Count |
Number of live threads including both daemon and non-daemon threads. |
|
Gauge |
Count |
Number of live daemon threads. |
|
Gauge |
Count |
Number of threads in the |
|
Gauge |
Count |
Number of threads in the |
|
Gauge |
Count |
Number of threads in the |
|
Gauge |
Count |
Number of threads in the |
|
Gauge |
Count |
Number of threads in the |
|
Gauge |
Bytes |
For more information, refer to totalMemory(). |
Prometheus scheduler metrics available in IDM
Prometheus Metric Name | Type | Description |
---|---|---|
|
Summary |
A summary of completed jobs for the specified job-group and job-name. |
|
Timer |
Time spent on executed jobs for the specified job-group and job-name. |
|
Timer |
Time spent storing scheduled jobs in the repository for the specified operation and scheduler-object. |
|
Summary |
A summary of successfully acquired jobs. |
|
Summary |
A summary of acquired jobs that time out. |
|
Summary |
A summary of fired schedule triggers. |
|
Summary |
A summary of misfired schedule triggers. |
|
Timer |
Time spent on recovered triggers. |
|
Timer |
Execution rate of scheduler requests for the specified type and operation. |
Prometheus workflow metrics available in IDM
Prometheus Metric Name | Type | Description |
---|---|---|
|
Timer |
Time spent invoking a message event. |
|
Timer |
Time spent invoking a signal event. |
|
Timer |
Time spent triggering an execution. |
|
Timer |
Time spent querying executions. |
|
Timer |
Time spent forcing synchronous execution of a job. |
|
Timer |
Time spent displaying the stacktrace for a job that triggered an exception. |
|
Timer |
Time spent deleting a job. |
|
Timer |
Time spent querying jobs. |
|
Timer |
Time spent reading a single job. |
|
Timer |
Time spent to execute a dead-letter job. |
|
Timer |
Time spent to retrieve the stacktrace for a dead-letter job. |
|
Timer |
Time spent to delete a dead-letter job. |
|
Timer |
Time spent to query dead-letter jobs. |
|
Timer |
Time spent to read a dead-letter job. |
|
Timer |
Time spent to deploy a model. |
|
Timer |
Time spent to list model deployments. |
|
Timer |
Time spent to validate BPMN content. |
|
Timer |
Time spent to create a model. |
|
Timer |
Time spent to delete a model. |
|
Timer |
Time spent to query models. |
|
Timer |
Time spent to read a model. |
|
Timer |
Time spent to update a model. |
|
Timer |
Time spent to delete a process definition. |
|
Timer |
Time spent to query process definitions. |
|
Timer |
Time spent to read a process definition. |
|
Timer |
Time spent to migrate a process instance. |
|
Timer |
Time spent to validate a migration of a process instance. |
|
Timer |
Time spent to create a process instance. |
|
Timer |
Time spent to delete a process instance. |
|
Timer |
Time spent to query process instances. |
|
Timer |
Time spent to read a process instance. |
|
Timer |
Time spent to query task definitions. |
|
Timer |
Time spent to read a task definition. |
|
Timer |
Time spent to complete a task instance. |
|
Timer |
Time spent to query task instances. |
|
Timer |
Time spent to read a task instance. |
|
Timer |
Time spent to update a task instance. |