You can configure the Directory Proxy Server to use criteria-based load-balancing algorithms to balance two goals: providing a consistent view of directory server data for applications that require it, and taking advantage of all servers in a topology to handle read-only operations, such as search and bind. The flexible configuration model supports a wide range of criteria for choosing which load-balancing algorithm to use for each operation.

In most Directory Proxy Server deployments, you should use a failover load-balancing algorithm for at least add, delete, and modifyDN operations.

Each Proxying Request Processor configured in the Directory Proxy Server uses a load-balancing algorithm to choose which Directory Server to use for a particular operation.

The load-balancing algorithm takes several factors into account when choosing a server:

  • The availability of the Directory Servers
  • The location of the Directory Servers

    By default, load-balancing algorithms prefer Directory Servers in the same location as the Directory Proxy Server.

  • Whether the Directory Server is degraded for any reason, such as a local database (DB) index being rebuilt
  • The result of configured health checks

    For instance, a server with a small replication backlog can be preferred over one with a larger backlog.

  • Recent operation routing history

How these factors are used depends on the specific load-balancing algorithm. The two most commonly used load-balancing algorithms are:

  • The failover load-balancing algorithm
  • The fewest operations load-balancing algorithm

These two algorithms are similar when determining which Directory Servers are the possible candidates for a specific operation. The algorithms use the same criteria to determine server availability and health, and by default they prefer Directory Servers in the same location as the Directory Proxy Server. However, they differ in the criteria they use to choose between available servers. The following sections describe the algorithms in more detail.
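
Before getting into those details, the shared candidate-selection step can be illustrated with a short Python sketch. This is only a model of the behavior described above, not the Directory Proxy Server's actual implementation, and the class and field names are hypothetical.

    # Hypothetical model of how the candidate set is narrowed before either
    # algorithm chooses a server.
    from dataclasses import dataclass

    @dataclass
    class BackendServer:
        name: str
        location: str
        available: bool     # reachable and accepting operations
        degraded: bool      # for example, a local DB index is being rebuilt
        healthy: bool       # result of the configured health checks

    def candidate_servers(servers, proxy_location):
        """Return available, non-degraded, healthy servers, preferring those
        in the same location as the Directory Proxy Server."""
        usable = [s for s in servers if s.available and not s.degraded and s.healthy]
        local = [s for s in usable if s.location == proxy_location]
        # Fall back to remote servers only when no local candidate remains.
        return local if local else usable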

Choosing between available servers

The failover load-balancing algorithm sends all operations to a single server until it is unavailable, and then sends all operations to the next preferred server, and so on. This algorithm provides the most consistent view of the topology because all clients, or at least those in the same location as the Directory Proxy Server, see the same, up-to-date view of the data. However, it leaves unused capacity on the failover instances because most topologies include multiple Directory Server replicas within each data center.
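
Continuing the sketch above, failover selection can be modeled as always taking the most preferred remaining candidate. Again, this is only an illustration of the described behavior.

    def choose_failover(servers, proxy_location):
        """Failover behavior in miniature: every operation goes to the most
        preferred candidate, so servers later in the configured order receive
        traffic only when every server ahead of them is unavailable."""
        candidates = candidate_servers(servers, proxy_location)
        return candidates[0] if candidates else None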

The fewest operations load-balancing algorithm efficiently distributes traffic among multiple servers by sending each operation to the server with the fewest outstanding operations, that is, the server that is least busy from the Directory Proxy Server's point of view.

Note:

The fewest operations load-balancing algorithm routes traffic to the least loaded server, which, in a lightly loaded environment, can result in an imbalance because the first server in the list of configured servers is more likely to receive a request.
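
Continuing the same sketch, fewest operations selection can be modeled as choosing the candidate with the fewest in-flight operations; the outstanding_ops dictionary is a hypothetical stand-in for the proxy's internal bookkeeping.

    def choose_fewest_operations(servers, proxy_location, outstanding_ops):
        """Fewest-operations behavior in miniature: among the candidates, pick
        the server with the fewest operations this proxy currently has in
        flight. outstanding_ops maps server name to in-flight count."""
        candidates = candidate_servers(servers, proxy_location)
        if not candidates:
            return None
        # min() keeps the earliest entry on ties, which echoes the note above:
        # in a lightly loaded environment the first configured server is the
        # most likely to receive the request.
        return min(candidates, key=lambda s: outstanding_ops.get(s.name, 0))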

This algorithm naturally routes to servers that are more responsive and limits the impact of servers that have become unreachable. However, it also means that consecutive operations that depend on each other can be routed to different Directory Servers, which can cause issues for some types of clients:

  • If two entries are added in quick succession where the first entry is the parent of the second in the LDAP hierarchy, the addition of the child entry can fail if that operation is routed to a different Directory Server instance than the first add operation and arrives before the parent entry has been replicated.
  • Some clients add or modify an entry and then immediately read the entry back from the server, expecting to see the updates reflected in the entry.

In these situations, you should configure the Directory Proxy Server to route dependent requests to the same server.
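
The following hypothetical client code, written with the third-party ldap3 Python library, shows both patterns; the host name, credentials, and DNs are placeholders. Each later operation assumes the earlier one is already visible on whichever Directory Server it reaches, which is only guaranteed when dependent requests are routed to the same server.

    from ldap3 import Server, Connection

    server = Server("proxy.example.com", port=636, use_ssl=True)
    conn = Connection(server, "cn=app,ou=apps,dc=example,dc=com", "password",
                      auto_bind=True)

    # 1. Add a parent entry.
    conn.add("ou=people,dc=example,dc=com", "organizationalUnit",
             {"ou": "people"})

    # 2. Immediately add a child entry. If this operation reaches a replica
    #    that has not yet received the parent entry, the add fails.
    conn.add("uid=jdoe,ou=people,dc=example,dc=com", "inetOrgPerson",
             {"uid": "jdoe", "cn": "J Doe", "sn": "Doe"})

    # 3. Read the entry straight back, expecting to see the update.
    conn.search("uid=jdoe,ou=people,dc=example,dc=com", "(objectClass=*)")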

The server affinity feature

The server affinity feature achieves this in some environments, but not all, because affinity is tracked independently by each Directory Proxy Server instance and some clients send requests to multiple proxies. It is common for a client not to connect to the Directory Proxy Servers directly but instead to connect through a network load balancer, which in turn opens connections to the Directory Proxy Servers. Each individual client connection is established to a single Directory Proxy Server, so operations on that connection are routed to the same Directory Proxy Server, and server affinity configured within the Directory Proxy Server ensures those operations are routed to the same Directory Server.

However, many clients establish a pool of connections that are reused across operations, and within this pool, connections are established through the load balancer to different Directory Proxy Servers. Dependent operations sent on different connections could then be routed to different Directory Proxy Servers, and then on to different Directory Servers.
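
The effect can be seen in a toy simulation: with a round-robin network load balancer in front of three proxies, a client's connection pool ends up pinned to different proxies, so two dependent operations borrowed from the pool can take entirely different paths. All names here are hypothetical.

    import itertools

    proxies = ["proxy1", "proxy2", "proxy3"]   # Directory Proxy Servers
    lb = itertools.cycle(proxies)              # round-robin network load balancer

    # The client opens a pool of four connections; the load balancer spreads
    # them across the proxies, pinning each pooled connection to one proxy.
    pool = [next(lb) for _ in range(4)]
    print(pool)                      # ['proxy1', 'proxy2', 'proxy3', 'proxy1']

    # Dependent operations borrowed from the pool on different connections
    # reach different proxies, and each proxy tracks affinity on its own.
    write_proxy, read_proxy = pool[0], pool[1]
    print(write_proxy == read_proxy)           # False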

Note:

For more information about the server affinity feature, see Configuring Server Affinity.

A failover load-balancing algorithm addresses this issue by routing all requests to a single server, but that leaves unused search capacity on the other instances. A criteria-based load-balancing algorithm enables the proxy to route certain types of requests (or requests from certain clients) using a different load-balancing algorithm than the default. For instance, all write operations (add, delete, modify, and modifyDN) could be routed using a failover load-balancing algorithm while all other operations (bind, search, and compare) use a fewest operations load-balancing algorithm.

If some clients are particularly sensitive to reading entries immediately after modifying them, you can specify additional connection criteria so that all operations from those clients use the failover load-balancing algorithm.
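
The routing decision that such a configuration expresses can be sketched as follows. The operation names and the sensitive_clients set are hypothetical; in the product, this behavior comes from configured request and connection criteria, not from code.

    WRITE_OPS = {"add", "delete", "modify", "modifyDN"}

    def pick_algorithm(op_type, client_dn, sensitive_clients=frozenset()):
        """Return the load-balancing algorithm that should handle an operation."""
        if op_type in WRITE_OPS:
            return "failover"            # consistent, single-server routing
        if client_dn in sensitive_clients:
            return "failover"            # read-after-write sensitive clients
        return "fewest operations"       # spread binds, searches, and compares

    print(pick_algorithm("modify", "cn=app1"))               # failover
    print(pick_algorithm("search", "cn=app1"))               # fewest operations
    print(pick_algorithm("search", "cn=app2", {"cn=app2"}))  # failover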

Note:

Routing all write requests to a single server in a location instead of evenly across servers does not limit the overall throughput of the system because all servers ultimately have to process all write operations either from the client directly or through replication.

Replication conflicts

Another benefit of using the failover load-balancing algorithm for write operations is reducing replication conflicts. The Ping Identity Directory Server follows the traditional LDAP replication model of eventual consistency. This provides high availability for handling write traffic even in the presence of network partitions, but it can lead to replication conflicts. Conflicts involving modify operations can be resolved automatically, leaving the servers in a consistent state where each attribute on each entry reflects the most recent update to that attribute. However, conflicts involving add, delete, and modifyDN operations cannot always be resolved automatically and might require manual intervention by an administrator. Routing all write operations (or at least add, delete, and modifyDN operations) to a single server avoids these conflicts.

Note:

When using a failover load-balancing algorithm, consider the following:

  • When using the failover load-balancing algorithm in a configuration with multiple locations, the algorithm fails over between local instances before failing over to servers in a remote location. Order the list of servers in the backend-server configuration property of the load-balancing algorithm so that preferred local servers appear before failover local servers; the relative order of servers in different locations is unimportant because the preferred-failover-location property in the Directory Proxy Server's configuration determines which remote location to fail over to (see the sketch after this list). It is also advisable that the order of local servers match the gateway-priority settings of the Replication Server configuration object on the Directory Server instances. This can reduce WAN replication delay because the Directory Proxy Server then prefers to send writes to the Directory Server with the WAN Gateway role, avoiding an extra hop to the remote locations.
  • For Directory Proxy Server configurations that include multiple Proxying Request Processors, including Entry-Balancing environments, each Proxying Request Processor should be updated to include its own criteria-based load-balancing algorithm.
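
The ordering behavior described in the first item can be modeled roughly as follows, reusing the BackendServer objects from the earlier sketch. The parameter names echo the backend-server and preferred-failover-location configuration properties, but the code is only an illustration.

    def failover_order(backend_servers, proxy_location, preferred_failover_locations):
        """Order servers as the failover algorithm would try them: local servers
        first, in their configured order, then servers in each preferred
        failover location, in preference order."""
        ordered = [s for s in backend_servers if s.location == proxy_location]
        for loc in preferred_failover_locations:
            ordered += [s for s in backend_servers if s.location == loc]
        return ordered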