Server clustering allows multiple PingFederate servers to share a single configuration and service single sign-on and logout requests as a single system.
When deployed appropriately, server clustering can facilitate high availability of critical services. Clustering can also increase performance and overall system throughput. It is important to understand, however, that availability and performance are often at opposite ends of the deployment spectrum. Thus, you (the administrator) may need to make some configuration tradeoffs that balance availability with performance to accommodate specific deployment goals. Some of these choices are identified throughout this topic.
PingFederate provides separate failover capabilities specifically for Outbound Provisioning, which by itself does not require either load balancing or state management (see Deploy provisioning failover).
Running multiple maintenance versions
As of PingFederate 10.0, you can run multiple maintenance versions in a cluster; for example, a cluster can contain servers running 10.0.0, 10.0.1, and 10.0.2. However, a cluster cannot contain servers running multiple major or minor versions, such as 10.0.0 and 10.1.0.
Running multiple versions of PingFederate in a cluster may cause inconsistencies in runtime behavior.
Clustering with multiple maintenance versions is meant to reduce the upgrade burden by eliminating downtime. Run in mixed mode only as a temporary measure, gradually updating each node until all of them are running the same maintenance version.
When you upgrade the servers in the cluster, upgrade the administrative console first. The maintenance release might contain UI changes that must be replicated throughout the cluster.
When your cluster is running multiple versions, a message is displayed at the top of the console that says "The cluster is running more than one version of PingFederate. Visit the Cluster Management page to see the versions." The Cluster management window also contains a Version column that displays the version of PingFederate being run on each server node.
The cluster architecture has two layers: the cluster-protocol layer and the runtime state-management services. In addition, these services can use different runtime state-management architectures when applicable.
- Cluster-protocol layer
- The cluster-protocol layer allows the PingFederate servers to discover a cluster, communicate with each other, detect and relay connectivity failures, and maintain the cluster as individual servers leave and join the cluster.
- Runtime state-management services
- The runtime state-management services communicate session-state information required to process SSO and logout requests. PingFederate abstracts its runtime state-management services behind Java service interfaces. This enables PingFederate to use interface implementations without regard to underlying storage and sharing mechanisms. The abstraction also provides a well-defined point of extensibility in PingFederate. Depending on the chosen runtime state-management architecture, each service may share session-state information with a subset of nodes or all nodes.
- Runtime state-management architectures
- PingFederate supports adaptive clustering and directed clustering. Adaptive clustering offers the benefits of scaling PingFederate horizontally with little or no configuration, while directed clustering allows administrators to specify which runtime state-management service uses which architecture model.
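The service abstraction described above can be sketched in Java. The interface and class names here are hypothetical, for illustration only; the actual interfaces ship with the PingFederate SDK. The point is that callers depend only on the interface, so a single-node in-memory store and a cluster-replicated store are interchangeable:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a runtime state-management service interface;
// the real PingFederate SDK interfaces differ in names and signatures.
interface SessionStateStore {
    void put(String key, String value);
    String get(String key);
    void remove(String key);
}

// A trivial single-node, in-memory implementation. A clustered
// implementation would instead share entries with other nodes
// (for example, over the group-RPC layer) without changing callers.
class InMemorySessionStateStore implements SessionStateStore {
    private final Map<String, String> store = new ConcurrentHashMap<>();
    public void put(String key, String value) { store.put(key, value); }
    public String get(String key) { return store.get(key); }
    public void remove(String key) { store.remove(key); }
}
```

Because the storage mechanism is hidden behind the interface, swapping architectures changes the implementation that is wired in, not the code that reads and writes session state.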
Group-RPC oriented approach
The prepackaged state-management implementations are designed to accommodate a variety of deployments. The implementations leverage a remote-procedure-call (RPC) framework for reliable group communication, allowing PingFederate servers within a cluster to share state information.
Clustered deployments of PingFederate for single sign-on (SSO) and logout transactions typically require the use of at least one load balancer, fronting multiple PingFederate servers.
When a client accesses the load balancer's virtual IP, the balancer distributes the request to one of the PingFederate servers in the cluster. Based on the configuration of the associated runtime-state management service, the processing server contacts other PingFederate servers via remote procedure calls as it processes SSO and logout requests.
PingFederate does not automatically balance the traffic among the servers in the cluster. SSO and logout requests must be distributed externally to avoid overloading individual servers in the cluster. Because each server can handle only a finite amount of traffic, refer to the Performance Tuning Guide for capacity planning.
The method that the balancer uses to select the appropriate server can vary from simple to highly complex, depending on deployment requirements. Specific balancing strategies, their strengths and weaknesses, as well as the impacts on PingFederate are discussed later (see Runtime state-management architectures).
Load balancers may incorporate SSL/TLS accelerators or work closely with them. Due to the high computational overhead of the SSL handshake, Ping Identity recommends terminating SSL/TLS on a dedicated server external to PingFederate for deployments in which performance is a concern. You can still use SSL between the proxy or balancer and PingFederate, but as a separate connection.
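As a minimal sketch of this pattern, the fragment below terminates TLS at a reverse proxy and re-encrypts traffic to the engine nodes on a separate connection, using NGINX as an example proxy. Host names and certificate paths are illustrative; 9031 is PingFederate's default runtime HTTPS port.

```
# nginx.conf fragment (illustrative): terminate TLS at the proxy,
# then re-encrypt on a separate connection to the engine nodes.
upstream pingfederate_engines {
    server pf-engine1.example.com:9031;
    server pf-engine2.example.com:9031;
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/tls/sso.example.com.crt;
    ssl_certificate_key /etc/nginx/tls/sso.example.com.key;

    location / {
        proxy_pass https://pingfederate_engines;  # separate TLS hop
        proxy_set_header Host $host;
    }
}
```

Terminating TLS at the proxy offloads the handshake cost from the engine nodes while keeping the proxy-to-PingFederate hop encrypted.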
In a cluster, you can configure each PingFederate instance, or node, as either an administrative console or a runtime engine. Runtime engines (also known as engine nodes) service federated-identity protocol requests, while the console server (also known as the console node) administers policy and configuration for the entire cluster (via the administrative console). A cluster may contain one or more runtime nodes but only one console node.
For the cluster to process SSO requests successfully, configure the PingFederate administrative console node to run outside of the load-balanced group.
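The node roles are set per instance in run.properties. The sketch below shows the operational-mode settings for a console node and an engine node; treat the property set as illustrative and verify the full list of cluster properties against the documentation for your PingFederate version.

```
# run.properties on the console node (illustrative)
pf.operational.mode=CLUSTERED_CONSOLE
pf.cluster.node.index=0

# run.properties on each engine node (use a unique index per node)
pf.operational.mode=CLUSTERED_ENGINE
pf.cluster.node.index=1
```

Only the engine nodes belong in the load balancer's server pool; the console node administers configuration for the cluster but does not service federated-identity protocol requests.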