ASE Cluster runs either in a single cloud or across multiple clouds. All ASE cluster nodes communicate over a TCP connection to continuously synchronize the configuration in real time. Cluster nodes are symmetrical, which eliminates any single point of failure. Key features of ASE clustering are:
- ASE node addition to a live cluster without configuring the node – true auto-scaling
- Configuration (ase.conf, API JSON files) synchronization across all cluster nodes
- Update and delete operations using CLI and REST APIs
- Run time addition or deletion of cluster nodes
- Real-time blacklist synchronization across cluster
- A single cluster with nodes spanning across multiple data centers
Several cluster features are unique to the deployed environment, including:
- Authentication token for API gateway (ASE sideband only)
- Cookie replication across all cluster nodes (ASE inline only)
CLI configuration commands executed on any cluster node are automatically replicated to all other cluster nodes, so every node stays current with configuration modifications. SSL certificates are also synchronized across cluster nodes.
You can add or remove a node from the cluster without disrupting live traffic. The time required to activate a new cluster node depends on how long it takes to synchronize configuration and cookie information from the other nodes.
ASE cluster performs real-time synchronization of cookies for ASE inline configurations. This is critical for session mirroring or for handling a DNS flip between requests from the same client. Because no master or slave nodes exist, all cluster nodes synchronize cookie information, so each node holds the same cookies as every other node.
ASE also synchronizes ase.conf files across cluster nodes, with the exception of a few node-local parameters: data ports, management ports, and the number of processes.
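As an illustration, the node-local portion of an ase.conf might look like the following. The parameter names and comment syntax shown here are placeholders for illustration, not necessarily the exact keys in your ASE release; consult the ase.conf shipped with your installation for the real names.

```
; Node-local parameters -- NOT synchronized across the cluster
; (parameter names below are illustrative placeholders)
http_ports=8000          ; data port for client traffic on this node
management_port=8010     ; port used for CLI/REST management on this node
num_processes=4          ; worker process count for this node
```

Everything else in ase.conf is replicated, so a change made on one node propagates to the rest of the cluster.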
ASE cluster deployment
ASE cluster uses a distributed node architecture. Ping Identity recommends designating one cluster node as the management node, through which all configuration changes are performed. This helps maintain consistency of operations across nodes. However, no restriction exists on using other nodes in the cluster to make changes. If two different nodes are used to modify the ASE cluster, the most recent configuration change, based on time stamps, is synchronized across the nodes.
ASE cluster uses a circular deployment. During setup, the first node acts as the central node from which all other cluster nodes synchronize configuration and cookie data. When setup of all nodes is complete, the nodes communicate with each other to synchronize the latest session information.
When an ASE cluster is set up, the peer_node parameter must be configured with an IPv4 address and port number; ASE uses this value to connect to the other nodes of the cluster. Add new cluster nodes by activating one node at a time. In the following example, the peer_node IP address for all nodes is the IP address of the first node, and each node must wait until the previous node has finished joining the cluster.
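For example, the cluster settings on each joining node might point peer_node at the first node. peer_node is the only parameter named in the text above; the file layout, comment syntax, address, and port below are illustrative assumptions:

```
; Illustrative cluster settings on nodes 2..N
; (exact file name and surrounding keys vary by release)
peer_node=192.0.2.10:8020   ; IPv4 address and port of the first cluster node
```

Because every joining node points at the same first node, the first node acts as the central node described above until the full circular deployment is established.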
Use the status command to verify a node's state before adding the next node to the cluster:

/opt/pingidentity/ase/bin/cli.sh status -u admin -p
Status: starting
After all cluster nodes are added, use the management or first node to carry out all cluster operations.
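The "activate one node at a time" sequence can be sketched as a small shell helper that polls the status command until the node reports it has started. The wording of the ready-state status line is an assumption (the example above shows only "Status: starting"), so adjust the pattern to match your release's actual output.

```shell
#!/bin/sh
# Sketch: block until an ASE node's status command no longer reports "starting".
# Assumes the CLI prints a line like "Status: started" once the node is ready;
# this pattern is an assumption -- verify it against your release's output.
wait_until_started() {
  # $1 = command line that prints the node's status
  until $1 | grep -q "Status: started"; do
    sleep 5   # poll every 5 seconds
  done
}

# Usage (before adding the next node to the cluster):
# wait_until_started "/opt/pingidentity/ase/bin/cli.sh status -u admin -p"
```

Running this after activating each node enforces the requirement that each node waits until the previous node has finished joining.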
| Item | Synchronized (Yes or No) | Synchronization (restart or live) |
| --- | --- | --- |
| API JSON | Yes | Live and restart |
| Cookies | Yes | Live and restart |
| CLI admin password | No | No |
| Authorization token for sideband ASE | Yes | Live and restart |
| Blacklist and whitelist (create, delete, and delete all) | Yes | Live and restart |
| Real-time attacks (blocked IP, cookie, and token) | Yes | Live |
| CLI commands that are not synchronized | The following commands are not synchronized: | |

Note: The commands listed above require the entire ASE to restart before they synchronize.