Clustering reference guide

Clustering

PingAccess can be configured in a clustered environment to provide higher scalability and availability for critical services. While there can be tradeoffs between availability and performance, PingAccess is designed to operate efficiently in a clustered environment.

PingAccess clusters are made up of three types of nodes:

Administrative Console
Provides the administrator with a configuration interface.
Replica Administrative Console
Provides the administrator with the ability to recover a failed administrative console using a manual failover procedure.
Clustered Engine
Handles incoming client requests and evaluates policy decisions based on the configuration replicated from the administrative console.

A cluster can contain any number of clustered engines, but only one administrative console and only one replica administrative console.

Configuration information is replicated to all of the clustered engine nodes and the replica administrative node from the administrative console. State information replication is not part of a default cluster configuration, but some state information can be replicated using PingAccess subclusters.

Using multiple network interface cards to route traffic

The routing of different types of traffic over specific interfaces is a network infrastructure exercise. However, PingAccess supports routing traffic over multiple network interfaces because, by default, it binds to all interfaces, as indicated by the 0.0.0.0 address for the following parameters in conf/run.properties:

admin.bindAddress=0.0.0.0
clusterconfig.bindAddress=0.0.0.0
engine.http.bindAddress=0.0.0.0
agent.http.bindAddress=0.0.0.0

You can override this behavior for any of these listeners by specifying a single bind address in place of 0.0.0.0.
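For example, to restrict the engine listener to a single interface, you might replace its entry in conf/run.properties with one interface's address (the address below is a placeholder for an address on your host, not a default):

engine.http.bindAddress=10.10.1.15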

PingAccess Subclusters

Subclusters provide better scaling for very large PingAccess deployments by allowing groups of engine nodes in the configuration to share certain runtime information. A load balancer is placed in front of each subcluster to distribute connections across the nodes in that subcluster.

Subclusters serve three purposes:

  • Providing fault tolerance for mediated tokens if a cluster node is taken offline.
  • Reducing the number of STS transactions with PingFederate when the front-end load balancer does not provide a sticky session.
  • Ensuring that rate limits are enforced properly if the front-end load balancer does not provide a sticky session.

Rate limiting rules and token mediation require a runtime state cluster. If token mediation and rate limiting are not used in your environment, subclustering is not necessary.

Info: The cache can be tuned using the EHCache Configuration Properties (pa.ehcache.*) listed in the Configuration file reference guide.

PingAccess provides clustering features that allow a group of PingAccess servers to appear as a single system. When deployed appropriately, server clustering can facilitate high availability of critical services. Clustering can also increase performance and overall system throughput. It is important to understand, however, that availability and performance are often at opposite ends of the deployment spectrum. Thus, you may need to make some configuration tradeoffs that balance availability with performance to accommodate specific deployment goals.

In a cluster, you can configure each PingAccess engine, or node, as an administrative console, a replica administrative console, or a runtime engine in the run.properties file. Runtime engines service client requests, while the console server administers policy and configuration for the entire cluster (via the administrative console). The replica administrative console provides a backup copy of the information on the administrative node in the event of a non-recoverable failure of the administrative console node. A cluster may contain one or more runtime nodes, but only one console node and only one replica console node. Server-specific configuration data is stored in the PingAccess administrative console server in the run.properties file. Information needed to bootstrap an engine is stored in the bootstrap.properties file on each engine.

bootstrap.properties
engine.admin.configuration.host
Defines the host where the administrative console is available. The default is localhost.
engine.admin.configuration.port
Defines the port where the administrative console is running. The default is 9000.
engine.admin.configuration.userid
Defines the name of the engine.
engine.admin.configuration.keypair
Defines an elliptic curve key pair that is in the JSON Web Key (JWK) format.
engine.admin.configuration.bootstrap.truststore
Defines the truststore, in JWK format, that is used for communication with the administrative console.
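Taken together, a minimal bootstrap.properties for an engine might look like the following sketch (the host name, engine name, and truncated JWK values are illustrative placeholders, not defaults):

engine.admin.configuration.host=admin.example.com
engine.admin.configuration.port=9000
engine.admin.configuration.userid=engine1
engine.admin.configuration.keypair={"kty":"EC",...}
engine.admin.configuration.bootstrap.truststore={"keys":[...]}

In a typical deployment, these values are generated when the engine is defined in the administrative console rather than written by hand.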

At startup, a PingAccess engine node in a cluster checks its local configuration and then makes a call to the administrative console to check for changes. How often each engine in a cluster checks the console for changes is configurable in the engine run.properties file.
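The polling interval is set with the admin.polling.delay property in the engine's run.properties. The value shown here (2000 milliseconds) is illustrative; consult the configuration file reference for your version's default and units:

admin.polling.delay=2000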

Configuration information is replicated to all engine nodes. By default, engines do not share runtime state. For increased performance, you can configure engines to share runtime state by configuring cluster interprocess communication using the run.properties file.
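As a sketch, runtime state sharing is enabled through cluster interprocess communication properties in run.properties. The property names and values below follow the pa.cluster.* convention but are assumptions; verify them against the configuration file reference for your version:

pa.cluster.interprocess.communication=tcp
pa.cluster.tcp.discovery.initial.hosts=engine1.example.com[7610],engine2.example.com[7610]
pa.cluster.bind.address=0.0.0.0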

Info: Runtime state clustering consists solely of a shared cache of security tokens acquired from the PingFederate STS for Token Mediation use cases using the Token Mediator Site Authenticator.

Each node includes a status indicator that shows the health of the node and a Last Updated field that shows the date and time of the last update. The status indicator can be green (good status), yellow (degraded status), or red (failed status).

The status is determined by using the value of admin.polling.delay as the measurement interval:
Green (good status):
The replica administrative node contacted the primary administrative node on the last pull request.
Yellow (degraded status):
The replica administrative node last contacted the primary administrative node between 2 and 10 intervals ago.
Red (failed status):
The replica administrative node has either never contacted the primary administrative node, or more than 10 intervals have passed since the nodes last communicated.
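The thresholds above can be summarized in a short sketch (the function and its inputs are illustrative, not part of PingAccess; it only restates the interval arithmetic):

```python
def replica_status(intervals_since_contact):
    """Classify replica administrative node health from the number of
    admin.polling.delay intervals elapsed since it last contacted the
    primary administrative node. None means it has never made contact."""
    if intervals_since_contact is None or intervals_since_contact > 10:
        return "red"      # failed: never contacted, or more than 10 intervals
    if intervals_since_contact >= 2:
        return "yellow"   # degraded: between 2 and 10 intervals
    return "green"        # good: contacted on the last pull request

print(replica_status(1))     # green
print(replica_status(5))     # yellow
print(replica_status(None))  # red
```

For example, with admin.polling.delay=2000 (2 seconds), a replica last seen 5 intervals (10 seconds) ago would report degraded status.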
