While there might be tradeoffs between availability and performance, PingAccess is designed to operate efficiently in a clustered environment.

PingAccess clusters consist of three types of nodes:

Administrative Console
Provides the administrator with a configuration interface.
Replica Administrative Console
Provides the administrator with the ability to recover a failed administrative console using a manual failover procedure.
Clustered Engine
Handles incoming client requests and evaluates policy decisions based on the configuration replicated from the administrative console.

You can configure multiple clustered engines in a cluster, but only one administrative console and one replica administrative console.
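
For illustration, each node's role is assigned in its run.properties file. The property name and values below (pa.operational.mode with its CLUSTERED_* values) match common PingAccess releases, but treat them as an assumption and confirm them against the configuration reference for your version.

    # run.properties on the administrative console node (illustrative)
    pa.operational.mode=CLUSTERED_CONSOLE

    # run.properties on the replica administrative console node (illustrative)
    pa.operational.mode=CLUSTERED_CONSOLE_REPLICA

    # run.properties on each clustered engine node (illustrative)
    pa.operational.mode=CLUSTERED_ENGINE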

The administrative console replicates configuration information to all of the clustered engine nodes and to the replica administrative node. State information replication is not part of a default cluster configuration, but some state information can be replicated using PingAccess subclusters.

PingAccess Subclusters

Subclusters provide better scaling of very large PingAccess deployments by allowing multiple engine nodes in the configuration to share certain information. A load balancer placed in front of each subcluster distributes connections to the nodes in that subcluster.

Subclusters serve three purposes:

  • Providing fault-tolerance for mediated tokens if a cluster node is taken offline
  • Reducing the number of security token service (STS) transactions with PingFederate when the front-end load balancer does not provide a sticky session
  • Ensuring rate limits are enforced properly if the front-end load balancer does not provide a sticky session
If you don't use token mediation and rate limiting in your environment, subclustering is unnecessary.
Note:

You can tune this shared token cache by using the EHCache configuration properties listed in the Configuration Properties documentation.
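
As a sketch only, EHCache tuning is done through configuration properties; the cache and property names below are placeholders rather than real names, so look up the exact EHCache properties in the Configuration Properties documentation for your release.

    # Placeholder names only; substitute the EHCache properties documented
    # for your PingAccess version.
    # pa.ehcache.<CacheName>.maxEntriesLocalHeap=10000
    # pa.ehcache.<CacheName>.timeToLiveSeconds=300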

Diagram illustrating configuration between load balancers, runtime engines, and administrative consoles.

PingAccess provides clustering features that allow a group of PingAccess servers to appear as a single system. When deployed appropriately, server clustering can facilitate high availability of critical services. Clustering can also increase performance and overall system throughput.
Note:

Availability and performance are often at opposite ends of the deployment spectrum. You might need to make some configuration tradeoffs that balance availability with performance to accommodate specific deployment goals.

In a cluster, you can configure each PingAccess node as an administrative console, a replica administrative console, or a runtime engine in the run.properties file. Runtime engines service client requests, while the console node administers policy and configuration for the entire cluster through the administrative console. The replica administrative console keeps a backup copy of the information on the administrative node in case of a non-recoverable failure of the administrative console node. A cluster can contain one or more runtime nodes, but only one console node and only one replica console node. Server-specific configuration data is stored on the PingAccess administrative console server in the run.properties file. Information needed to bootstrap an engine is stored in the bootstrap.properties file on each engine.
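
As an illustration of the bootstrap file, a minimal bootstrap.properties points the engine at the administrative console. The key names and values below are assumptions based on typical releases; in practice the administrative console generates this file when you define the engine, so use it as an example rather than a template.

    # bootstrap.properties on an engine node (illustrative keys and values)
    engine.admin.configuration.host=admin.example.com
    engine.admin.configuration.port=9000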

At startup, a PingAccess engine node in a cluster checks its local configuration and then makes a call to the administrative console to check for changes. You can configure how frequently an engine in a cluster checks the console for changes in the engine's run.properties file.
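
For example, the polling interval is controlled by the admin.polling.delay property referenced later in this section; the value below is illustrative and assumes the property is expressed in milliseconds.

    # run.properties on an engine node (illustrative value)
    # The engine polls the administrative console for configuration changes
    # once per interval.
    admin.polling.delay=2000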

Configuration information is replicated to all engine nodes. By default, engines do not share runtime state. For increased performance, you can configure engines to share runtime state by enabling cluster interprocess communication in the run.properties file.
Note:

Runtime state clustering consists solely of a shared cache of security tokens acquired from the PingFederate STS for token mediation use cases using the Token Mediator Site Authenticator.
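
A minimal sketch of enabling runtime state sharing, assuming the pa.cluster.* property names found in typical PingAccess releases; confirm the exact names and allowed values in the configuration reference for your version before relying on them.

    # run.properties on each engine node in a subcluster (assumed property names)
    # Enables interprocess communication so the engines share the token cache.
    pa.cluster.interprocess.communication=tcp
    pa.cluster.bind.address=0.0.0.0
    pa.cluster.bind.port=7610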

Engine nodes and the replica administrative node include a status indicator that shows the health of the node and a Last Updated field that shows the date and time of the last update. The status indicator can be green (good status), yellow (degraded status), or red (failed status).

The status is determined by using the value for admin.polling.delay as an interval to measure health:
Green (good status):
The replica administrative node contacted the primary administrative node on the last pull request.
Yellow (degraded status):
The replica administrative node last contacted the primary administrative node between 2 and 10 intervals ago.
Red (failed status):
The replica administrative node has either never contacted the primary administrative node, or it has been more than 10 intervals since the nodes communicated.
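
As a worked example, assume admin.polling.delay is set to 2000 milliseconds (2 seconds); the thresholds then translate to roughly the following elapsed times since the last successful contact.

    # Illustrative arithmetic only, assuming admin.polling.delay=2000
    # Green:  contacted on the last pull request (about 2 seconds ago or less)
    # Yellow: last contact between 2 and 10 intervals ago (roughly 4 to 20 seconds)
    # Red:    never contacted, or last contact more than 10 intervals ago (over 20 seconds)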