Topology planning
Based on existing production deployments, we have determined suggested numbers of servers and settings according to the number of identities, entitlements, assignments, and applications. These suggestions are general guidelines only; treat them as a starting point for your particular deployment requirements. Each deployment is unique and requires review prior to implementation.
For a description of possible production deployments, refer to Deployment Architecture in the Autonomous Identity Installation Guide.
Data sizing
ForgeRock has determined general categories of dataset sizes based on a company’s total number of identities, entitlements, assignments, and applications.
A key determining factor for sizing is the number of applications. If a company's identities, entitlements, and assignments fall in the Medium range, but its applications are close to 150, the deployment could need to be sized for large datasets.
| | Small | Medium | Large | Extra Large |
| --- | --- | --- | --- | --- |
| Total Identities | <10K | 10K-50K | 50K-100K | 100K-1M |
| Total Entitlements | <10K | 10K-50K | 50K-100K | 100K+ |
| Total Assignments | <1M | 1M-6M | 6M-15M | 15M+ |
| Total Applications | <50 | 50-100 | 100-150 | 150+ |
Suggested number of servers
Based on dataset sizing, the following table shows the suggested number of servers for each deployment size. These numbers were derived from existing customer deployments and internal testing setups.
These numbers are not hard-and-fast rules, but are only presented as starting points for deployment planning purposes. Each deployment is unique and requires proper review prior to implementation.

| | Small | Medium | Large | Extra Large |
| --- | --- | --- | --- | --- |
| Deployer | 1[1] | 1 | 1 | 1 |
| Docker | 1 | 2 (manager; worker) | 2 (manager; worker) | Custom[2] |
| Database | 1 | 2 (2 seeds) | 3 (3 seeds) | Custom[2] |
| Analytics | 1 | 3 (master; 2 workers) | 5 (master; 4 workers) | Custom[2] |
| Opensearch | 1 | 2 (master; worker) | 3 (master; 2 workers) | Custom[2] |
| Opensearch Dashboards | 1 | 1 | 1 | 1 |
[1] This figure assumes a deployer machine that is separate from the target machine; for a single-node deployment, you can also run the deployer on the target machine itself. For multi-node deployments, we recommend running the deployer on a dedicated low-spec box.
[2] For extra-large deployments, server requirements will need to be specifically determined.
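To make the Medium column concrete, the host layout could be captured in a planning document along the lines of the following YAML sketch. The group and host names are hypothetical and do not reflect the deployer's actual inventory format; they simply mirror the server counts in the table above.

```yaml
# Hypothetical host layout for a Medium deployment; names and structure are
# illustrative only and do not reflect the deployer's actual inventory format.
medium_deployment:
  deployer:
    - deployer-01            # runs the deployer (can be a low-spec box)
  docker:
    - docker-manager-01      # Docker Swarm manager
    - docker-worker-01       # Docker Swarm worker
  database:
    - cassandra-seed-01      # database seed 1
    - cassandra-seed-02      # database seed 2
  analytics:
    - spark-master-01        # Spark master
    - spark-worker-01        # Spark worker 1
    - spark-worker-02        # Spark worker 2
  opensearch:
    - opensearch-master-01   # Opensearch master
    - opensearch-worker-01   # Opensearch worker
  opensearch_dashboards:
    - dashboards-01          # Opensearch Dashboards
```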
Suggested analytics settings
Analytics settings require proper sizing for optimal machine-learning performance.
The following table shows the suggested analytics settings for each deployment size. The numbers were derived from customer deployments and internal testing setups.
These numbers are not hard-and-fast rules, but are only presented as starting points for deployment planning purposes. Each deployment is unique and requires proper review prior to implementation.

| | Small | Medium | Large | Extra Large |
| --- | --- | --- | --- | --- |
| Driver Memory (GB) | 2 | 10 | 50 | Custom[1] |
| Driver Cores | 3 | 3 | 12 | Custom[1] |
| Executor Memory (GB) | 3 | 3-6 | 12 | Custom[1] |
| Executor Cores | 6 | 6 | 6 | Custom[1] |
| Elastic Heap Size (GB)[2] | 2 | 4-8 | 8 | Custom[1] |
[1] For extra-large deployments, analytics settings will need to be specifically customized.
[2] Set in the vars.yml file.
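For example, the Medium-size values from the table above might look something like the following vars.yml sketch. The key names shown here are assumptions for illustration only; match them to the parameter names that your installed vars.yml file actually provides.

```yaml
# Illustrative analytics sizing for a Medium deployment.
# Key names are assumptions; use the parameters defined in your vars.yml.
spark:
  driver_memory: "10G"    # Driver Memory (GB)
  driver_cores: 3         # Driver Cores
  executor_memory: "6G"   # Executor Memory (GB), upper end of the 3-6 range
  executor_cores: 6       # Executor Cores

opensearch:
  heap_size: "8G"         # Elastic Heap Size (GB), upper end of the 4-8 range
```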
Production technical recommendations
Autonomous Identity 2022.11.10 has the following technical specifications for production deployments:
| | Deployer | Database (Cassandra) | Database (MongoDB) | Analytics | Opensearch |
| --- | --- | --- | --- | --- | --- |
| Installed Components | Docker | Cassandra | MongoDB | Spark (Spark Master)/Apache Livy | Opensearch |
| OS | CentOS | CentOS | CentOS | CentOS | CentOS |
| Number of Servers | Refer to Suggested number of servers | Refer to Suggested number of servers | Refer to Suggested number of servers | Refer to Suggested number of servers | Refer to Suggested number of servers |
| RAM (GB) | 4-32 | 32 | 32 | 64-128 | 64 |
| CPUs | 2-4 | 8 | 8 | 16 | 16 |
| Non-OS Disk Space (GB)[1] | 32 | 1000 | 1000 | 1000 | 1000 |
| NFS Shared Mount | N/A | N/A | N/A | 1 TB NFS mount shared across all Docker Swarm nodes (if more than 1 node is provisioned), at a location separate from the non-OS disk space requirement. For example, | N/A |
| Networking | nginx: 443; Docker Manager: 2377 (TCP); Docker Swarm: | Client Protocol Port: 9042; Cassandra Nodes: 7000 | Client Protocol Port: 27017; MongoDB Nodes: 30994 | Spark Master: 7077; Spark Workers: randomly assigned ports | Opensearch: 9300; Opensearch (REST): 9200; Opensearch Dashboards: 5601 |
| Licensing | N/A (using Docker CE free version) | N/A | N/A | N/A | N/A |
| Software Version | Docker: 20.10.17 | Cassandra: 4.0.6 | MongoDB: 4.4 | Spark: 3.3.2; Apache Livy: 0.8.0-incubating | Opensearch/Opensearch Dashboards: 1.3.14 |
| Component Reference | Refer to below.[2] | Refer to below.[3] | Refer to below.[4] | Refer to below.[5] | Refer to below.[6] |
[1] At root directory "/"
[2] https://docs.docker.com/ee/ucp/admin/install/system-requirements/
[3] https://docs.datastax.com/en/dse-planning/doc/planning/planningHardware.html and http://cassandra.apache.org/doc/latest/operating/hardware.html
[4] http://www.mongodb.com
[5] https://spark.apache.org/docs/latest/security.html#configuring-ports-for-network-security
[6] https://Opensearch.org/
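As an example of applying the Networking row above, the following sketch opens the Cassandra ports on the CentOS database nodes, assuming those hosts run firewalld and are managed with Ansible (the ansible.posix collection provides the firewalld module). Neither tool is required by Autonomous Identity; this is an illustration only, and the port list should be adjusted for the other node types.

```yaml
# Sketch: open the documented Cassandra ports on CentOS database nodes.
# Assumes firewalld is active and the ansible.posix collection is installed;
# this is an illustration, not part of the Autonomous Identity deployer.
- name: Open Autonomous Identity database ports
  hosts: database
  become: true
  tasks:
    - name: Allow Cassandra client and inter-node traffic
      ansible.posix.firewalld:
        port: "{{ item }}/tcp"
        permanent: true
        immediate: true
        state: enabled
      loop:
        - 9042   # client protocol port
        - 7000   # Cassandra inter-node communication
```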