IDM 7.2.2

Manage nodes in a cluster

You can manage clusters and nodes over the REST interface, or using the admin UI.

Manage nodes over REST

You can manage clusters and individual nodes over the REST interface, at the endpoint openidm/cluster/. The following sample commands demonstrate the cluster information that is available over REST:

Display the nodes in the cluster

The following REST request displays the nodes configured in the cluster, and their status.

curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--header "Accept-API-Version: resource=1.0"  \
--request GET \
"http://localhost:8080/openidm/cluster?_queryFilter=true"
{
  "result": [
    {
       "_id": "node1",
       "state": "running",
       "instanceId": "node1",
       "startup": "2017-09-16T15:37:04.757Z",
       "shutdown": ""
    },
    {
       "_id": "node2",
       "state": "running",
       "instanceId": "node2",
       "startup": "2017-09-16T15:45:05.652Z",
       "shutdown": ""
    }
  ]
}
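Because the query returns plain JSON, it is straightforward to script a health check against it. The following is a minimal sketch (Python standard library only; the endpoint, credentials, and field names follow the example above, but treat the exact response shape as an assumption, and substitute a real curl or HTTP call for the sample string):

```python
import json

def nodes_not_running(cluster_response: str) -> list[str]:
    """Return the IDs of cluster nodes whose state is not "running"."""
    result = json.loads(cluster_response)["result"]
    return [node["_id"] for node in result if node["state"] != "running"]

# Sample response, abbreviated from the query output above.
# In practice this string would come from a GET on
# http://localhost:8080/openidm/cluster?_queryFilter=true
sample = '''
{
  "result": [
    {"_id": "node1", "state": "running"},
    {"_id": "node2", "state": "down"}
  ]
}
'''
print(nodes_not_running(sample))  # ['node2']
```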

Check node status

To check the status of a specific node, include its node ID in the URL. For example:

curl \
--header "X-OpenIDM-Username: openidm-admin" \
--header "X-OpenIDM-Password: openidm-admin" \
--header "Accept-API-Version: resource=1.0"  \
--request GET \
"http://localhost:8080/openidm/cluster/node1"
{
  "_id": "node1",
  "instanceId": "node1",
  "startup": "2017-09-16T15:37:04.757Z",
  "shutdown": "",
  "state": "running"
}
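Because the startup field is an ISO 8601 timestamp, a monitoring script can derive how long a node has been up. A small sketch (Python standard library only; the response shape follows the example above, and the timestamp format is an assumption based on that sample):

```python
import json
from datetime import datetime, timezone

def uptime_seconds(node_response: str, now: datetime) -> float:
    """Return the seconds elapsed since the node's reported startup time."""
    node = json.loads(node_response)
    started = datetime.strptime(node["startup"], "%Y-%m-%dT%H:%M:%S.%fZ")
    started = started.replace(tzinfo=timezone.utc)
    return (now - started).total_seconds()

# Sample response, abbreviated from the node query above.
sample = '{"_id": "node1", "startup": "2017-09-16T15:37:04.757Z", "state": "running"}'
now = datetime(2017, 9, 16, 16, 37, 4, 757000, tzinfo=timezone.utc)
print(uptime_seconds(sample, now))  # 3600.0
```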

Manage nodes using the admin UI

The admin UI provides a status widget that lets you monitor the activity and status of all nodes in a cluster.

To add the widget to a dashboard, click Add Widget, then scroll down to System Status > Cluster Node Status, and click Add.

The Cluster Node Status widget shows the current status of each node and the number of jobs it is running.

Select Status to obtain more information on the latest startup and shutdown times of that node. Select Jobs to obtain detailed information on the tasks that the node is running.

The widget can be managed in the same way as any other dashboard widget. For more information, see Manage Dashboards.

Remove nodes

IDM handles node removal automatically. To remove a node from the cluster, shut down that instance; IDM stops using the node.

Considerations when removing an existing node from the cluster:

  • Make sure that no clustered reconciliation is in progress when you remove the node. Removing a node during a clustered reconciliation can cause unexpected behavior.

  • Remnants of the node remain in the IDM repository. For example, the removed node can still appear in the Cluster Node Status widget on the dashboard (refer to Manage nodes using the admin UI).

Example using DS

  1. Shut down all instances of IDM.

  2. Delete the old instance record from DS:

    1. Locate the DN of the node using the ldapsearch command:

      ./ldapsearch -D "uid=admin" -w password -h localhost -p 1389 -b "dc=openidm,dc=forgerock,dc=com" "uid=node1"
      
      dn: uid=node1,ou=states,ou=cluster,dc=openidm,dc=forgerock,dc=com
      objectClass: uidObject
      objectClass: fr-idm-generic-obj
      objectClass: top
      fr-idm-json: {"recoveryAttempts":1,"detectedDown":"0000001660588116038","type":"state","recoveryFinished":"0000001660588116096","instanceId":"node1","startup":"0000001660588118032","recoveringInstanceId":"node2","state":3,"recoveringTimestamp":"0000001660588116038","recoveryStarted":"0000001660588116038","shutdown":"0000001660588563707","timestamp":"0000001660588563018"}
      uid: node1
      The node instance name is the value of the openidm.node.id property in the openidm/resolver/boot.properties file.
    2. Delete the node record:

      ./ldapdelete -D "uid=admin" -w password -h localhost -p 1389 "uid=node1,ou=states,ou=cluster,dc=openidm,dc=forgerock,dc=com"
      
      # DELETE operation successful for DN uid=node1,ou=states,ou=cluster,dc=openidm,dc=forgerock,dc=com
  3. Start all operational IDM nodes.

Example using Oracle database

  1. Shut down all instances of IDM.

  2. Delete references to the old instance from clusterobjects and the associated rows from clusterobjectproperties in the database:

    1. Locate the name of the node, in this case node1:

      SELECT * FROM clusterobjects; 
      
      |----|----------------|----------|-------|--------------|
      | id | objecttypes_id | objectid | rev   | fullobject   |
      |----|----------------|----------|-------|--------------|
      |  1 |              2 | node1    | 43789 | {"redacted"} |
      |----|----------------|----------|-------|--------------|
    2. Delete the old node's record from the clusterobjects table:

      DELETE FROM openidm.clusterobjects WHERE objectid = 'node1';
    3. Delete the associated rows from the clusterobjectproperties table. These rows reference the numeric id of the clusterobjects record (1 in this example, as returned by the previous query), not the objectid string:

      SELECT * FROM openidm.clusterobjectproperties WHERE clusterobjects_id = 1;
      
      |-------------------|-----------------------|-------------------|---------------------|
      | clusterobjects_id | propkey               | proptype          | propvalue           |
      |-------------------|-----------------------|-------------------|---------------------|
      |                 1 | /recoveryAttempts     | java.lang.Integer | 0                   |
      |                 1 | /_rev                 | java.lang.String  | 43790               |
      |                 1 | /detectedDown         | java.lang.String  | 0000000000000000000 |
      |-------------------|-----------------------|-------------------|---------------------|
      
      DELETE FROM openidm.clusterobjectproperties WHERE clusterobjects_id = 1;
  3. Start all operational IDM nodes.
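The relationship between the two tables is easy to get wrong: clusterobjectproperties rows reference the numeric id column of clusterobjects, not the objectid string. The following illustrative sketch uses SQLite in place of the real repository to show the lookup-then-delete order (table and column names follow the example above; this is not the actual IDM repository schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE clusterobjects (
    id INTEGER PRIMARY KEY, objecttypes_id INTEGER,
    objectid TEXT, rev TEXT, fullobject TEXT);
CREATE TABLE clusterobjectproperties (
    clusterobjects_id INTEGER, propkey TEXT, proptype TEXT, propvalue TEXT);
INSERT INTO clusterobjects VALUES (1, 2, 'node1', '43789', '{}');
INSERT INTO clusterobjectproperties VALUES (1, '/recoveryAttempts', 'java.lang.Integer', '0');
INSERT INTO clusterobjectproperties VALUES (1, '/_rev', 'java.lang.String', '43790');
""")

# Look up the numeric id for the node first; the property rows
# are keyed on this value, not on the 'node1' string.
(node_pk,) = conn.execute(
    "SELECT id FROM clusterobjects WHERE objectid = ?", ("node1",)).fetchone()
conn.execute("DELETE FROM clusterobjectproperties WHERE clusterobjects_id = ?", (node_pk,))
conn.execute("DELETE FROM clusterobjects WHERE objectid = ?", ("node1",))

remaining = conn.execute("SELECT COUNT(*) FROM clusterobjectproperties").fetchone()[0]
print(remaining)  # 0
```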