Access Management 7.4.1

Sizing systems

Given availability requirements and sizing estimates for the services you plan to provide, estimate the required capacity for individual systems, networks, and storage. This page considers the AM server systems themselves, not the load balancers, firewalls, independent directory services, or client applications.

Although you can start with a rule of thumb, the previous sections show that the memory and storage footprints of a deployment depend in large part on the services you plan to provide. With that in mind, to performance test a basic deployment providing SSO, you can start with AM systems that have at least 4 GB of free RAM, 4 CPU cores (normal modern cores, not throughput computing cores), plenty of local storage for configuration, policy, and CTS data, and LAN connections to other AM servers. This rule of thumb assumes the identity stores are sized separately, and that the service is housed in a single local site. It does not take into account anything particular to the service levels you expect to provide; consider it a starting point when you lack more specific information.
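As a worked illustration of that starting point, the sketch below captures the rule-of-thumb baseline per server and totals it across a small farm. The 4 GB and 4 core figures come from the rule of thumb above; the server count and everything else in the example are assumptions, not recommendations.

```python
# Rule-of-thumb starting point per AM server, taken from the guidance above.
BASELINE_PER_SERVER = {
    "free_ram_gb": 4,    # at least 4 GB of free RAM
    "cpu_cores": 4,      # 4 normal modern cores, not throughput computing cores
    "storage": "local storage for configuration, policy, and CTS data",
    "network": "LAN connections to other AM servers",
}

def starting_footprint(server_count: int) -> dict:
    """Total RAM and cores for a farm of identically sized AM servers."""
    return {
        "servers": server_count,
        "total_free_ram_gb": server_count * BASELINE_PER_SERVER["free_ram_gb"],
        "total_cpu_cores": server_count * BASELINE_PER_SERVER["cpu_cores"],
    }

if __name__ == "__main__":
    # Assumed example: two servers kept for availability during performance testing.
    print(starting_footprint(2))
```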

Sizing system CPU and memory

AM services use CPU resources to process requests and responses, and especially to make policy decisions. Encryption, decryption, signing, and signature verification can consume significant CPU resources when processing requests and responses. Policy decision evaluation depends both on the number of policies configured and on their complexity.
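To turn those factors into a first CPU estimate, one common back-of-the-envelope approach is to multiply the expected peak request rate by the measured CPU time per request and divide by a target utilization. The calculation and all numbers below are illustrative assumptions, not figures from this guide; replace them with values measured in your own performance tests.

```python
def cpu_cores_needed(peak_requests_per_sec: float,
                     cpu_ms_per_request: float,
                     target_utilization: float = 0.6) -> float:
    """Back-of-the-envelope core count: busy CPU time per second divided by
    the fraction of each core you are willing to use at peak (assumed 60%)."""
    busy_cores = peak_requests_per_sec * cpu_ms_per_request / 1000.0
    return busy_cores / target_utilization

# Assumed example: 200 requests/s at peak, 12 ms of CPU per request once
# signing, encryption, and policy evaluation are included.
print(round(cpu_cores_needed(200, 12), 1))  # ~4.0 cores
```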

Memory use depends on the space for AM code, the number of live connections AM maintains, and the caching of configuration data, user profile data, and server-side session data. The AM code in memory is essentially static while the server is running, as deployed JSPs are unlikely to change in production.

The number of connections and the amount of data caching depend on server tuning, as described in Tune AM.
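One way to reason about those memory contributors is to add them up explicitly, as in the sketch below. The categories mirror the paragraph above (code, live connections, configuration and profile caching, server-side sessions); every size and count is a placeholder assumption to be replaced with figures from your own tuning and testing.

```python
def heap_estimate_mb(live_connections: int,
                     cached_profiles: int,
                     live_sessions: int,
                     base_code_mb: int = 1024,     # AM code and static data, assumed
                     per_connection_kb: int = 50,  # assumed
                     per_profile_kb: int = 10,     # assumed
                     per_session_kb: int = 8) -> float:  # assumed
    """Rough heap estimate built from the contributors named above."""
    variable_kb = (live_connections * per_connection_kb
                   + cached_profiles * per_profile_kb
                   + live_sessions * per_session_kb)
    return base_code_mb + variable_kb / 1024.0

# Assumed example: 2,000 connections, 50,000 cached profiles,
# 20,000 live server-side sessions.
print(round(heap_estimate_mb(2_000, 50_000, 20_000)))  # ~1766 MB
```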

If AM uses the embedded DS server, then the memory needed depends on what you store in the embedded directory and what you calculated as described in Size hardware and services for deployment. The embedded DS server shares memory with the AM server process. By default, the directory server takes half of the available heap as database cache for directory data. That setting is configurable as described in the DS server documentation.
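The default split described above, where the embedded directory takes half of the available heap as database cache, can be made concrete with a small calculation. The 50% figure is the default mentioned here and is configurable in DS; the heap size in the example is an assumption.

```python
def embedded_ds_split(heap_gb: float, db_cache_fraction: float = 0.5) -> dict:
    """Split an AM heap between the embedded DS database cache (50% by default)
    and everything else running in the same process."""
    db_cache_gb = heap_gb * db_cache_fraction
    return {
        "heap_gb": heap_gb,
        "ds_db_cache_gb": db_cache_gb,
        "remaining_for_am_gb": heap_gb - db_cache_gb,
    }

# Assumed example: an 8 GB heap shared by AM and the embedded DS server.
print(embedded_ds_split(8))
# {'heap_gb': 8, 'ds_db_cache_gb': 4.0, 'remaining_for_am_gb': 4.0}
```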

Sizing network connections

When sizing network connections, account for the requests and notifications from other servers and clients, as well as for the responses. The load depends on the service levels that the deployment provides, as described in Size hardware and services for deployment. Responses for browser-based authentication can be quite large: each time a new user visits the authentication UI pages, AM must respond with the UI page, plus all the images, JavaScript logic, and libraries needed to complete the authentication process. Inter-server synchronization and replication can also require significant bandwidth.

For deployments with sites in multiple locations, be sure to account for replication of configuration, CTS, and identity directory data over WAN links, as this is much more likely to be an issue than replication traffic over LAN links.

Make sure to size enough bandwidth for peak throughput, and do not forget redundancy for availability.
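A first bandwidth figure can be derived from the peak request rate and the average response size, then padded for replication traffic and a safety margin, following the points above. The shape of the calculation and all concrete numbers are assumptions for illustration; redundant links are sized on top of the result.

```python
def peak_bandwidth_mbps(peak_requests_per_sec: float,
                        avg_response_kb: float,
                        replication_overhead: float = 0.3,  # assumed share for sync/replication
                        safety_margin: float = 0.5) -> float:  # assumed headroom
    """Outbound bandwidth in Mbit/s at peak, padded for replication traffic
    and a safety margin."""
    response_mbps = peak_requests_per_sec * avg_response_kb * 8 / 1000.0
    return response_mbps * (1 + replication_overhead) * (1 + safety_margin)

# Assumed example: 200 requests/s at peak, 60 KB average response once UI
# pages, images, and JavaScript for new visitors are included.
print(round(peak_bandwidth_mbps(200, 60)))  # ~187 Mbit/s
```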

Sizing disk I/O and storage

As described in Deployment requirements, the largest disk I/O loads for AM servers arise from logging and from the embedded DS server writing to disk. You can estimate your storage requirements as described on that page.

I/O rates depend on the service levels that the deployment provides, as described in Size hardware and services for deployment. When you size disk I/O and disk space, account for peak rates and leave a safety margin for the times when you must briefly enable debug logging to troubleshoot issues that arise.

Also keep in mind the sudden I/O increases that can arise in a highly available service when one server fails and the remaining servers must temporarily take over its load.
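A similar sketch works for disk write rates: estimate the normal logging load, then apply multipliers for debug logging and for the extra traffic a server absorbs when it takes over for a failed peer. The multipliers and per-request figures below are assumptions to be replaced with measurements from your own environment.

```python
def peak_disk_write_mb_per_sec(peak_requests_per_sec: float,
                               log_kb_per_request: float,
                               debug_multiplier: float = 3.0,    # assumed cost of debug logging
                               failover_factor: float = 2.0) -> float:  # one peer's load taken over
    """Worst-case log write rate: peak load, absorbed failover traffic,
    and debug logging enabled at the same time."""
    normal_mb = peak_requests_per_sec * log_kb_per_request / 1024.0
    return normal_mb * failover_factor * debug_multiplier

# Assumed example: 200 requests/s at peak, 4 KB of log data written per
# request under normal logging.
print(round(peak_disk_write_mb_per_sec(200, 4), 1))  # ~4.7 MB/s
```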

Another option is to place log, configuration, and database files on separate file systems, to maximize throughput and to minimize service disruption if a file system fills up or fails.