Server clustering allows you to define a single configuration for multiple PingFederate servers and to address single sign-on (SSO) and single logout (SLO) requests as a single system.
PingFederate includes clustering features that allow a group of PingFederate servers to appear as a single system to browsers and partner federation servers. In this configuration, all client traffic normally goes through a load balancer, which routes requests to the PingFederate servers in the cluster.
Server clustering can facilitate high availability of critical services and increase both performance and overall system throughput. However, availability and performance are often at opposite ends of the deployment spectrum, requiring administrators to balance one against the other to accommodate specific deployment goals. This topic identifies some of these choices.
PingFederate provides separate failover capabilities specifically for Outbound Provisioning, which by itself does not require either load balancing or state management. For more information, see Deploying provisioning failover.
Running multiple maintenance versions
As of PingFederate 10.0, you can run multiple maintenance versions in a cluster. For example, a cluster can contain servers running 10.0.0, 10.0.1, and 10.0.2. However, a cluster cannot contain servers running multiple major or minor versions, such as 10.0.0 and 10.1.0.
When running multiple versions in a cluster, the following message appears at the top of the console: "The cluster is running more than one version of PingFederate. Visit the Cluster Management page to see the versions." The Cluster management menu also contains a Version column that displays the version of PingFederate running on each server node.
Running multiple versions of PingFederate in a cluster might cause inconsistencies in runtime behavior.
Clustering with multiple maintenance versions is meant to reduce the upgrade burden by eliminating downtime. Running in mixed mode should be a temporary state: gradually update each node until all of them are running the same maintenance version.
When upgrading the servers in the cluster, upgrade the administrative console first. The maintenance release might contain UI changes that will need to be replicated throughout the cluster.
The cluster architecture has two layers:
- Cluster-protocol layer
- The cluster-protocol layer allows the PingFederate servers to discover a cluster, communicate with each other, detect and relay connectivity failures, and maintain the cluster as individual servers join and leave.
- Runtime state-management services
- The runtime state-management services communicate the session-state information required to process SSO and logout requests. PingFederate abstracts these services behind Java service interfaces, which lets it use interface implementations without regard to the underlying storage and sharing mechanisms. This abstraction also provides a well-defined point of extensibility in PingFederate. Depending on the chosen runtime state-management architecture, each service can share session-state information with a subset of nodes or with all nodes.
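To make the abstraction concrete, the following Java sketch shows the *kind* of service interface the runtime state-management layer hides its storage behind. The names (`SessionStateStore`, `InMemorySessionStateStore`) are hypothetical illustrations, not the actual PingFederate SDK API: a single-node implementation and a cluster-replicated implementation could both satisfy the same interface.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical service interface: callers depend only on this contract,
// not on where or how the session state is stored or shared.
interface SessionStateStore {
    void put(String sessionId, String state);
    String get(String sessionId);
    void remove(String sessionId);
}

// Trivial single-node implementation. A clustered implementation could
// implement the same interface while replicating entries over group RPC,
// leaving the callers unchanged.
class InMemorySessionStateStore implements SessionStateStore {
    private final Map<String, String> store = new ConcurrentHashMap<>();
    public void put(String sessionId, String state) { store.put(sessionId, state); }
    public String get(String sessionId) { return store.get(sessionId); }
    public void remove(String sessionId) { store.remove(sessionId); }
}

public class Demo {
    public static void main(String[] args) {
        SessionStateStore store = new InMemorySessionStateStore();
        store.put("sso-123", "authenticated");
        System.out.println(store.get("sso-123")); // prints "authenticated"
        store.remove("sso-123");
        System.out.println(store.get("sso-123")); // prints "null"
    }
}
```

Because every storage strategy sits behind the same interface, swapping the sharing model is an implementation choice rather than a change to the code that consumes the service.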
Runtime state-management architectures
PingFederate supports both adaptive clustering and directed clustering. Adaptive clustering scales PingFederate horizontally with little or no additional configuration. Directed clustering lets administrators specify which architecture model each runtime state-management service uses.
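As an illustration, cluster membership and the adaptive-clustering toggle are governed by properties in each node's `run.properties` file. The sketch below shows typical values; verify the exact property names and defaults against your version's `run.properties`, as they can vary between releases, and note that directed clustering involves additional per-service configuration beyond this file.

```properties
# Unique index for this node within the cluster
pf.cluster.node.index=1

# Address and port the cluster-protocol layer binds to
pf.cluster.bind.address=NON_LOOPBACK
pf.cluster.bind.port=7600

# Enable adaptive clustering; set to false to use directed clustering
pf.cluster.adaptive=true
```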
Group-RPC oriented approach
The prepackaged state-management implementations use a remote-procedure-call (RPC) framework for reliable group communication in a variety of deployments, allowing PingFederate servers to share state information within a cluster.
Clustered deployments of PingFederate for single sign-on (SSO) and logout transactions typically require the use of at least one load balancer, fronting multiple PingFederate servers.
When a client accesses the load balancer's virtual IP, the balancer distributes the request to one of the PingFederate servers in the cluster. Based on the configuration of the associated runtime-state management service, the processing server contacts other PingFederate servers through remote procedure calls as it processes SSO and logout requests.
PingFederate does not automatically balance traffic among the servers in the cluster; you must distribute SSO and logout requests externally to avoid overloading individual servers. Because each server can handle only a finite amount of traffic, consult the PingFederate Performance Tuning Guide for capacity planning.
The method that the balancer uses to select the appropriate server can vary from simple to highly complex, depending on deployment requirements. For specific balancing strategies, their strengths and weaknesses, as well as the impacts on PingFederate performance, see Runtime state-management architectures.
Load balancers can incorporate SSL/TLS accelerators or work closely with them. Due to the high computational overhead of the SSL handshake, Ping Identity recommends terminating SSL/TLS on a dedicated server external to PingFederate for deployments in which performance is a concern. You can still use SSL between the proxy or balancer and PingFederate, but as a separate connection.
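For instance, an external reverse proxy can terminate TLS and open a separate TLS connection to the engine nodes on PingFederate's default runtime port (9031). The following nginx sketch assumes hypothetical host names and certificate paths; adapt them to your environment:

```nginx
upstream pingfederate_engines {
    # Engine nodes only; the console node stays out of this pool
    server pf-engine1.example.com:9031;
    server pf-engine2.example.com:9031;
}

server {
    listen 443 ssl;
    server_name sso.example.com;

    # TLS is terminated here, offloading the handshake from PingFederate
    ssl_certificate     /etc/nginx/certs/sso.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/sso.example.com.key;

    location / {
        # Separate TLS connection from the proxy to PingFederate
        proxy_pass https://pingfederate_engines;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```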
In a cluster, you can configure each PingFederate instance, or node, as either an administrative console or a runtime engine. Runtime engines (also known as engine nodes) service federated-identity protocol requests, while the console server (also known as the console node) administers policy and configuration for the entire cluster through the administrative console. A cluster can contain one or more engine nodes but only one console node.
You must run the administrative console node outside of the load-balanced group so that SSO requests are routed only to the engine nodes.
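Node roles are assigned per instance through the operational mode in `run.properties`. A minimal sketch (confirm the mode values against your version's `run.properties` comments):

```properties
# Console node (one per cluster), kept outside the load-balanced group
pf.operational.mode=CLUSTERED_CONSOLE

# Engine nodes (one or more), placed behind the load balancer:
# pf.operational.mode=CLUSTERED_ENGINE
```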