When deployed appropriately, server clustering can provide high availability for critical services. Clustering can also increase performance and overall system throughput. However, availability and performance are often competing goals, so you might need to make configuration tradeoffs that balance the two to meet specific deployment goals.

PingAccess clusters are made up of three types of nodes:

Administrative Node
Provides the administrator with a configuration interface.
Replica Administrative Node
Provides the administrator with the ability to recover a failed administrative node using a manual failover procedure.
Engine Node
Handles incoming client requests and evaluates policy decisions based on the configuration replicated from the administrative node.

A cluster can contain any number of engine nodes, but only one administrative node and one replica administrative node.

You should manage incoming traffic to the engine nodes using load balancers or other mechanisms. PingAccess clusters do not dynamically manage or load-balance request traffic to individual engine nodes.

Configuration information is replicated to all of the engine nodes and the replica administrative node from the administrative node. State information sharing between engine nodes is not part of a default cluster configuration. However, some environments can benefit from runtime state clustering, which is an optional function that lets engine nodes replicate and share some state information with each other.

The license file on the administrative node is replicated to all of the engine nodes and the replica administrative node. The engine nodes do not require a license to function, but some default templates appear differently depending on the information in the license.

The features documented here are affected by the settings in the configuration file. See the Configuration file reference for more information.

Node failure implications

The failure of a node within a PingAccess cluster can have short-term or long-term implications, depending on the node and your network state.

Node issues

Administrative node failure
Result: The engine nodes function using stored configurations, but cannot update their configurations.
Recommendation: Fail over to the replica administrative node until the administrative node can be restarted.

Replica administrative node failure
Result: The engine nodes and administrative node function normally, but no failover is available if the administrative node also fails.
Recommendation: Restart the replica administrative node as soon as possible.

Administrative and replica administrative node failure
Result: The engine nodes function using stored configurations, but cannot update their configurations. No failover is available.
Recommendation: Restart the administrative node as soon as possible, or restart the replica administrative node and fail over.

Some engine nodes cannot reach the administrative node
Result: Affected engine nodes function using stored configurations, if any, but cannot update their configurations. If the administrative node performs key rolling, the affected engine nodes cannot recognize the new PingAccess internal cookie.
Recommendation: Restore access to the administrative node as soon as possible.

Cluster properties

Use the run.properties and bootstrap.properties files to configure your environment.

In a cluster, you can configure each PingAccess node as an administrative node, a replica administrative node, or an engine node in the run.properties file. The run.properties file for the administrative node also contains server-specific configuration data.
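For example, a node's role is set with a single run.properties entry. The pa.operational.mode property and the values shown below reflect the standard PingAccess operational modes, but verify them against the Configuration file reference for your version:

# On the administrative node
pa.operational.mode=CLUSTERED_CONSOLE

# On the replica administrative node
pa.operational.mode=CLUSTERED_CONSOLE_REPLICA

# On each engine node
pa.operational.mode=CLUSTERED_ENGINE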

At startup, a PingAccess engine node in a cluster checks its local configuration and then makes a call to the administrative node to check for changes. You can configure how often each engine node in a cluster checks the administrative node for changes in the engine run.properties file.
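For example, assuming the polling interval is controlled by the admin.polling.delay property (referenced in Cluster node status below) and is expressed in milliseconds, the following run.properties entry makes an engine node check for changes every 2 seconds. The value is illustrative, not a recommendation:

admin.polling.delay=2000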

Configuration information is replicated to all engine nodes. By default, engine nodes do not share runtime state. You can configure nodes for runtime state clustering using the run.properties file.

Information needed to bootstrap an engine node is stored in the bootstrap.properties file on each engine node.

bootstrap.properties

engine.admin.configuration.host
Defines the host where the administrative console is available. The default is localhost.
engine.admin.configuration.port
Defines the port where the administrative console is running. The default is 9000.
engine.admin.configuration.userid
Defines the name of the engine.
engine.admin.configuration.keypair
Defines an elliptic curve key pair in JSON Web Key (JWK) format.
engine.admin.configuration.bootstrap.truststore
Defines the trust store, in JWK format, used for communication with the administrative console.
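Taken together, an engine node's bootstrap.properties file might look like the following sketch. The host and engine names are placeholders, and the key pair and trust store values are stand-ins for actual JWK content, which is typically generated for you when you download an engine's configuration from the administrative console:

engine.admin.configuration.host=admin.example.com
engine.admin.configuration.port=9000
engine.admin.configuration.userid=engine1
engine.admin.configuration.keypair=<EC key pair in JWK format>
engine.admin.configuration.bootstrap.truststore=<trust store in JWK format>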
Info:

You can tune the cache using the EHCache Configuration Properties, pa.ehcache.*, listed in the Configuration file reference guide.
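For example, a property of the following form raises the maximum number of in-heap entries for a single cache. The cache name is a placeholder, and the maxEntriesLocalHeap suffix is an assumption based on standard Ehcache settings; check the Configuration file reference for the exact property names and defaults:

pa.ehcache.<CacheName>.maxEntriesLocalHeap=10000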

Diagram of the Runtime Engines.

Cluster node status

Engine nodes and replica administrative nodes include a status indicator that shows the health of the node and a Last Updated field that shows the date and time of the last update. The status indicator can be green (good status), yellow (degraded status), or red (failed status).

The status is determined using the value of admin.polling.delay as the measurement interval:
Green (good status)
The node contacted the administrative node on its last scheduled pull request.
Yellow (degraded status)
The node last contacted the administrative node between 2 and 10 intervals ago.
Red (failed status)
The node has never contacted the administrative node, or more than 10 intervals have passed since the nodes last communicated.
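As an illustration, assume admin.polling.delay is set to 2000 milliseconds (2 seconds). A node that last contacted the administrative node 6 seconds ago is 3 intervals behind and shows yellow, while a node that has not checked in for 30 seconds is 15 intervals behind and shows red.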

Using multiple network interface cards to route traffic

Routing different types of traffic over specific interfaces is a network infrastructure exercise. However, PingAccess supports routing traffic over multiple network interfaces because, by default, it binds to all interfaces, as indicated by the 0.0.0.0 address for the following parameters in conf/run.properties:

admin.bindAddress=0.0.0.0
clusterconfig.bindAddress=0.0.0.0
engine.http.bindAddress=0.0.0.0
agent.http.bindAddress=0.0.0.0

You can override this setting by specifying a single bind address.
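For example, to accept runtime client traffic only on one interface, you might set the engine bind address to that interface's address. The address shown is a placeholder:

engine.http.bindAddress=10.10.1.20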

Runtime state clustering

Runtime state clustering is an optional feature that improves the scaling of large PingAccess deployments by allowing multiple engine nodes to share certain runtime information. A load balancer is placed in front of each group of nodes to distribute connections to the nodes.

Runtime state clustering serves three purposes:

  • Providing fault tolerance for mediated tokens if an engine node is taken offline.
  • Reducing the number of STS transactions with PingFederate when the front-end load balancer does not provide a sticky session.
  • Ensuring that rate limits are enforced properly if the front-end load balancer does not provide a sticky session.

Runtime state clustering is not necessary in most environments, but it can be beneficial in very large deployments or in environments that use rate limiting rules or token mediation.
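As a minimal sketch, runtime state clustering is enabled through run.properties on the engine nodes. The pa.cluster.* property names below reflect common PingAccess settings for this feature, but treat the exact names, values, and port as assumptions to verify against the Configuration file reference; the host names and port are placeholders:

# Enable engine-to-engine state sharing over TCP
pa.cluster.interprocess.communication=tcp
# Engines that should discover each other, as host[port]
pa.cluster.tcp.discovery.initial.hosts=engine1.example.com[7610],engine2.example.com[7610]
# Port used for inter-node communication
pa.cluster.bind.port=7610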