You can distribute the load on your Directory Proxy Server using one of the load-balancing algorithms provided with PingDirectory Server. By default, the Directory Proxy Server prefers local servers over non-local servers, unless you set the use-location property of the load-balancing algorithm to false. Within a given location, the Directory Proxy Server prefers available servers over degraded servers. This means that, whenever possible, the Directory Proxy Server sends requests to servers that are local and available before considering any server that is non-local or degraded.
Note: If the use-location property is set to true, then the load is balanced only among available external servers in the same location. If no external servers are available in the same location, the Directory Proxy Server attempts to use available servers in the first preferred failover location, and so on. This failover behavior can be customized so that the Directory Proxy Server prefers local servers with a DEGRADED health state over AVAILABLE servers in a failover location. For more information on the prefer-degraded-servers-over-failover property, see the online PingDirectory Server Configuration Reference Guide.
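The selection order described above (local AVAILABLE servers first, then optionally local DEGRADED servers, then AVAILABLE servers in each failover location in turn) can be sketched as follows. This is an illustrative model only: the function name, the tuple layout, and the health-state strings are assumptions for demonstration and do not reflect PingDirectory internals.

```python
AVAILABLE, DEGRADED, UNAVAILABLE = "AVAILABLE", "DEGRADED", "UNAVAILABLE"

def choose_candidates(servers, local_location, failover_locations,
                      prefer_degraded_over_failover=False):
    """Return the pool of external servers the proxy would balance across.

    servers: list of (name, location, health_state) tuples.
    Preference order: local AVAILABLE servers, then (if the
    prefer-degraded-servers-over-failover behavior is enabled) local
    DEGRADED servers, then AVAILABLE servers in each failover location
    in the configured order.
    """
    def pool(location, state):
        return [s for s in servers if s[1] == location and s[2] == state]

    local_available = pool(local_location, AVAILABLE)
    if local_available:
        return local_available
    if prefer_degraded_over_failover:
        local_degraded = pool(local_location, DEGRADED)
        if local_degraded:
            return local_degraded
    for location in failover_locations:
        remote_available = pool(location, AVAILABLE)
        if remote_available:
            return remote_available
    # Last resort: any degraded server in any location.
    return [s for s in servers if s[2] == DEGRADED]
```

For example, with no local AVAILABLE servers, the sketch falls through to the first failover location unless the degraded-over-failover preference is enabled, in which case a local DEGRADED server wins.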
The Directory Proxy Server provides the following load-balancing algorithms:
  • Failover load balancing. This algorithm forwards requests to servers in a given order, optionally taking location into account. If the preferred server is unavailable, the Directory Proxy Server fails over to alternate servers in a predefined order. This method is useful when certain operations, such as LDAP writes, must be forwarded to a primary external server, with secondary external servers defined for failover if necessary.

    This algorithm also offers load spreading to multiple failover servers. If the failover load-balancing algorithm is configured with one or more load-spreading base DNs, then requests that target entries below a load-spreading base DN can be balanced across any of the servers with the same health check state and location. Requests targeting a specific portion of the data will consistently be routed to the same server, but requests targeting a different portion of the data may be sent to a different server.

  • Fewest operations load balancing. This algorithm forwards each request to the external server with the fewest operations currently in progress and tends to exhibit the best performance.

  • Health weighted load balancing. This algorithm assigns weights to servers based on their health check scores and, optionally, their locations. Servers with higher health check scores receive a proportionally larger share of the requests than servers with lower scores.

  • Single server load balancing. This algorithm forwards all operations to a single external server that you specify.

  • Weighted load balancing. This algorithm uses statically defined weights to divide the load among external servers. External servers are grouped into weighted sets; each set's weight, relative to the sum of the weights of all sets assigned to the load-balancing algorithm, determines the percentage of the load that set's servers receive.

    For example, the external servers ds1 and ds2 are assigned to a weighted set named Set-80 with a weight of 80, and ds3 and ds4 are assigned to the weighted set Set-20 with a weight of 20. When both sets are assigned to the load-balancing algorithm, 80 percent of the load is forwarded to ds1 and ds2, and the remaining 20 percent is forwarded to ds3 and ds4.

  • Criteria based load balancing. This algorithm allows you to balance the load across your server topology based on the types of operations received or the client issuing the request.
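The weighted-set arithmetic from the Set-80/Set-20 example can be sketched as below. The set and server names come from the example; the function itself is an illustrative stand-in, not the product's implementation.

```python
import random

def pick_server(weighted_sets, rng=random):
    """Pick a server: first choose a weighted set with probability
    weight / total weight, then choose uniformly among that set's servers.

    weighted_sets: dict mapping set name -> (weight, [server names]).
    """
    total = sum(weight for weight, _ in weighted_sets.values())
    roll = rng.uniform(0, total)
    for weight, servers in weighted_sets.values():
        if roll < weight:
            return rng.choice(servers)
        roll -= weight
    # Floating-point edge case: fall back to the first set.
    return rng.choice(next(iter(weighted_sets.values()))[1])
```

Over many requests, Set-80's servers receive roughly 80 percent of the selections, matching the percentages in the example.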
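The consistent routing described for load-spreading base DNs (requests targeting the same portion of the data always go to the same server) can be modeled by hashing the target entry's DN and indexing into the candidate list. This is a sketch of the general technique only; the proxy's actual spreading scheme is not specified in this section.

```python
import hashlib

def route_by_entry(servers, entry_dn):
    """Consistently map an entry's DN to one of the candidate servers.

    The same DN always hashes to the same server as long as the
    candidate list is unchanged; different DNs may map to different
    servers, spreading the load.
    """
    digest = hashlib.sha256(entry_dn.lower().encode("utf-8")).digest()
    return servers[int.from_bytes(digest[:8], "big") % len(servers)]
```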