What's New

The following are new features in this release of PingDirectoryProxy Server:

  • New capabilities have been added to the Delegated Admin application (packaged separately). Now directory administrators can delegate the responsibility of managing group memberships for users in PingDirectory Server. Administrators can delegate to individuals or groups of users, and assign authority over one or more groups in PingDirectory Server.

  • Improved the way PingDirectoryProxy Server distributes requests in the failover load-balancing configuration. This is especially helpful for multi-tenant environments to better distribute requests per tenant. Now you can configure a load-spreading base DN such that requests to DIT branches below the load-spreading base DN are balanced among the PingDirectory Servers. The proxy will automatically maintain affinity between servers and DIT branches.

Known Issues/Workarounds

The following are known issues in the current version of PingDirectoryProxy Server:

  • An ACI starting with "GENERATED D-ADMIN ACCESS" is generated automatically by the server from Delegated Admin configuration. Do not create your own custom ACI with the same prefix, for example by copying and pasting from the generated ACI. A custom ACI with this prefix will be deleted when the server is restarted, and whenever a Delegated Admin configuration change causes the Delegated Admin ACI to be regenerated.

  • Servers to be monitored by the PingDataMetrics Server must have an instance name of less than 256 characters. A server's instance name is specified during setup.

Resolved Issues

The following issues have been resolved with this release of the PingDirectoryProxy Server:


Updated the failover load-balancing algorithm to add support for load spreading. By default, the failover load-balancing algorithm will consistently route all requests to the same server (subject to the health and location of each of the backend servers), which provides the highest level of protection against issues that may result from replication propagation delay. However, it also means that most of the servers are left idle, only to be used if a problem arises with the primary server.

Load spreading allows the server to retain many of the consistent routing benefits of the failover load-balancing algorithm's default configuration while also taking better advantage of the available servers in the topology. If the failover load-balancing algorithm is configured with one or more load-spreading base DNs, then requests that target entries below a load-spreading base DN may be balanced across any of the servers with the same health check state and location. Requests targeting a specific portion of the data will consistently be routed to the same server, but requests targeting a different portion of the data may be sent to a different server.

Load spreading is primarily beneficial to deployments in which the DIT contains a large number of branches below a common parent, and in which most operations (including search operations, as indicated by the search base DN) only target entries at least one level below that common parent. For example, this may be a good fit for a multi-tenant deployment in which all of the entries for a given tenant are within their own branch, and all of the tenant branches reside below a common parent.
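As a sketch, configuring a load-spreading base DN on the failover load-balancing algorithm might look like the following. The property name `load-spreading-base-dn`, the algorithm name, and the example DN are assumptions for illustration; consult `dsconfig` interactively or its `--help` output for the exact names in your release.

```shell
# Hypothetical sketch: add a load-spreading base DN so that requests
# below ou=tenants are spread across backend servers, while requests
# for any single tenant branch are consistently routed to one server.
dsconfig set-load-balancing-algorithm-prop \
  --algorithm-name "Failover LBA" \
  --add load-spreading-base-dn:ou=tenants,dc=example,dc=com
```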


Improved the server's behavior under network conditions in which it is not possible to write to a client without blocking. The changes include:

* If the server cannot write data to a client after waiting for a length of time specified by the connection handler's max-blocked-write-time-limit configuration property, the access log message indicating that the client has been disconnected because of an I/O timeout will now more clearly indicate that the reason was the inability to write data to the client.

* The server now limits the number of threads that can be blocked while trying to send data to the same client over the same client connection. If too many threads would be blocked while trying to send data over the same connection, that connection will be terminated, and the disconnect access log message will include the reason for the disconnect.

* If the server is trying to send data to the client that it considers optional (for example, certain types of unsolicited notifications), then the server may skip sending that optional data if the write would cause the server thread to block.
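The `max-blocked-write-time-limit` property named above can be tuned on the connection handler. As a sketch (the subcommand and handler name follow standard `dsconfig` conventions and are assumptions; verify against your release):

```shell
# Hypothetical sketch: cap how long the server will wait on a blocked
# write before disconnecting the client with an I/O timeout.
dsconfig set-connection-handler-prop \
  --handler-name "LDAP Connection Handler" \
  --set max-blocked-write-time-limit:30s
```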


Added a configuration option that allows a null serverFQDN for the GSSAPI SASL mechanism, permitting an unbound SASL server connection.
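The release note does not name the new option, so the property below is a hypothetical placeholder only; the subcommand follows standard `dsconfig` conventions. Check the GSSAPI SASL mechanism handler's properties in your release for the actual name.

```shell
# Hypothetical sketch: "allow-null-server-fqdn" is a placeholder name,
# not the documented property. Verify with dsconfig before use.
dsconfig set-sasl-mechanism-handler-prop \
  --handler-name GSSAPI \
  --set allow-null-server-fqdn:true
```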


Fixed an issue in which an unprivileged Consent API client could modify the actor value of a consent record.


Delegated Admin operations now appear in the LDAP access log.


Changed Resource IDs produced by the Delegated Admin API so that they no longer contain percent characters resulting from URL-encoded Base64 padding.
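For background on why percent characters appeared: standard Base64 pads its output with `=` characters, and `=` URL-encodes to `%3D`. The following illustrates the padding (the input string is arbitrary):

```shell
# Standard Base64 pads output to a multiple of 4 characters with '=',
# which becomes "%3D" when the ID is URL-encoded.
printf 'user1' | base64    # prints dXNlcjE= (note the trailing '=')
```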


Updated the external server monitor entry to include a histogram of response times per operation type. This makes it easier to understand the source of response time outliers in the proxy.
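The histogram can be inspected over LDAP under the monitor backend. As a sketch (the search filter is an assumption, since monitor entry naming varies by release; the `--baseDN` and `--searchScope` options are standard for the server's `ldapsearch` tool):

```shell
# Hypothetical sketch: retrieve external server monitor entries,
# which now include per-operation-type response time histograms.
ldapsearch --baseDN cn=monitor --searchScope sub \
  "(cn=*external server*)"
```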


Added a getRequestProcessors() method to the ProxyServerContext interface within the Server SDK. This can be used to gain access to an EntryBalancingRequestProcessor, and hence the global index.


Updated the name of LDAP external server monitor entries to not include product version information, since this can affect integration with JMX monitoring tools.


Updated the keys and values used in the monitoring JMX MBeans to conform with best practices. The keys "type" and "name" are now used in place of "Rdn1" and "Rdn2".

To maintain backwards compatibility with existing monitoring solutions, installations upgrading to this release will retain the legacy behavior, but they can switch to the new default behavior by setting the global configuration property jmx-use-legacy-mbean-names to false.
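The property name `jmx-use-legacy-mbean-names` comes from the note above; the subcommand follows standard `dsconfig` conventions (verify with `dsconfig --help` for your release):

```shell
# Switch an upgraded installation from the legacy MBean names
# ("Rdn1"/"Rdn2" keys) to the new default ("type"/"name" keys).
dsconfig set-global-configuration-prop \
  --set jmx-use-legacy-mbean-names:false
```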


The Notification Delivery Thread will now log unexpected errors rather than throwing them as exceptions.


Prevented a notification destination from assuming the master notification delivery role if the server is in lockdown mode or replication has not finished initializing.