Concurrency
This section describes how to configure PingFederate to process more requests concurrently, which can improve throughput in your deployment.
The more requests processed in parallel, the more requests processed overall. Given sufficient hardware, processing N requests concurrently is typically faster than processing the same N requests serially.
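The speedup is easiest to see with I/O-bound work, where threads spend most of their time waiting rather than computing. The following sketch (not PingFederate code; the 50 ms sleep stands in for request processing) compares handling four simulated requests one at a time against handling them on a pool of four threads:

```java
import java.util.concurrent.*;

public class ParallelVsSerial {
    // Simulated I/O-bound request: roughly 50 ms of waiting.
    static void handleRequest() {
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Process n requests one after another; return elapsed nanoseconds.
    static long runSerial(int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            handleRequest();
        }
        return System.nanoTime() - start;
    }

    // Process n requests on a pool of n threads; return elapsed nanoseconds.
    static long runParallel(int n) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(n);
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            pool.submit(ParallelVsSerial::handleRequest);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("serial(ms)=" + runSerial(4) / 1_000_000);
        System.out.println("parallel(ms)=" + runParallel(4) / 1_000_000);
    }
}
```

On a machine with at least four available cores, the serial run takes roughly four times as long as the parallel run, because the waits overlap instead of accumulating.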
In PingFederate, two main pools of threads control the level of concurrent user requests: acceptor threads and server threads. Acceptor threads receive incoming HTTPS requests and pass them on to available server threads for processing.
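The division of labor between the two pools can be sketched as follows. This is not PingFederate source code, just a minimal illustration of the pattern: a single acceptor thread takes incoming requests off a queue and hands each one to a fixed pool of server threads, which do the actual processing (simulated here by counting down a latch).

```java
import java.util.concurrent.*;

public class AcceptorWorkerSketch {

    // Dispatches n simulated requests through one acceptor thread to a
    // fixed pool of server threads; returns how many were processed.
    static int processAll(int n, int serverThreadCount) throws InterruptedException {
        BlockingQueue<String> incoming = new LinkedBlockingQueue<>();
        for (int i = 0; i < n; i++) {
            incoming.put("request-" + i);
        }

        ExecutorService serverThreads = Executors.newFixedThreadPool(serverThreadCount);
        CountDownLatch processed = new CountDownLatch(n);

        // The acceptor: receives requests and passes them to server threads.
        Thread acceptor = new Thread(() -> {
            while (incoming.poll() != null) {
                // Real request processing would happen in this task.
                serverThreads.submit(processed::countDown);
            }
        });
        acceptor.start();
        acceptor.join();

        processed.await(10, TimeUnit.SECONDS);
        serverThreads.shutdown();
        return n - (int) processed.getCount();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(processAll(8, 4) + " requests processed");
    }
}
```

Because the acceptor only dispatches work, it stays free to receive new connections; the server-thread pool size is what bounds how many requests are actually processed in parallel.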
Caveats
This topic serves as a guideline for optimizing the concurrency of your deployment. On a large system with multiple CPUs or cores, a thread pool that is too small under-utilizes the available processor resources, while a thread pool that is too large can flood the system and make it unresponsive.
A good target is between 60% and 80% CPU utilization under nominal user load. At that level, CPU resources are not under-utilized, yet there is still headroom for occasional load spikes. The level of concurrency in PingFederate might need to be increased or decreased depending on the system's configuration, the adapters in use, available memory, and other processes competing for resources. All deployments are different; treat these figures as a starting point when tuning the server.
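The best pool size depends on how much of each request's time is spent waiting (on I/O, back-end calls, and so on) versus computing. One widely cited heuristic from Brian Goetz's *Java Concurrency in Practice* relates a target CPU utilization, such as the 60%-80% range above, to a suggested thread count. A sketch of that calculation (the example wait and compute times are illustrative, not PingFederate defaults):

```java
public class PoolSizing {
    // Heuristic from "Java Concurrency in Practice" (Goetz et al.):
    // threads = cores * targetUtilization * (1 + waitTime / computeTime)
    static int suggestedThreads(int cores, double targetUtilization,
                                double waitTimeMs, double computeTimeMs) {
        return (int) Math.ceil(
            cores * targetUtilization * (1 + waitTimeMs / computeTimeMs));
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        // Example: 70% target utilization, requests that wait ~50 ms
        // for every ~5 ms of computation (illustrative values).
        System.out.println(suggestedThreads(cores, 0.7, 50.0, 5.0));
    }
}
```

For mostly I/O-bound workloads the ratio of wait time to compute time is large, so the suggested pool is many times the core count; for CPU-bound workloads it shrinks toward the number of cores. Measure under realistic load rather than relying on the formula alone.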