ForgeOps

CDM architecture

Once you deploy the CDM, the ForgeRock Identity Platform is fully operational within a Kubernetes cluster. forgeops artifacts provide well-tuned JVM settings, memory and CPU limits, and other CDM configurations.

Here are some of the characteristics of the CDM:

Multi-zone Kubernetes cluster

ForgeRock Identity Platform is deployed in a Kubernetes cluster.

For high availability, CDM clusters are distributed across three zones.

The diagrams later in this section show the organization of pods in zones and node pools in a CDM cluster.
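
To verify the zone distribution after deployment, you can list the nodes with the standard Kubernetes zone label:

    # List cluster nodes along with their zone assignments.
    kubectl get nodes -L topology.kubernetes.io/zone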

Cluster sizes

When deploying the CDM, you specify one of three cluster sizes:

  • A small cluster with capacity to handle 1,000,000 test users

  • A medium cluster with capacity to handle 10,000,000 test users

  • A large cluster with capacity to handle 100,000,000 test users
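
As an illustration only (this is not the forgeops cluster script), a regional GKE cluster spanning three zones with six nodes might be created as follows; the cluster name, region, zones, and machine type are assumptions:

    # Hypothetical example: two nodes in each of three zones (six nodes total).
    # Adjust the machine type and node count to match your cluster size.
    gcloud container clusters create cdm-small \
        --region us-east1 \
        --node-locations us-east1-b,us-east1-c,us-east1-d \
        --num-nodes 2 \
        --machine-type n2-standard-8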

Third-party deployment and monitoring tools

Ready-to-use ForgeRock Identity Platform components

  • Multiple DS instances are deployed for higher availability. Separate instances are deployed for Core Token Service (CTS) tokens and identities. The instances for identities also contain AM and IDM run-time data.

  • The AM configuration is file-based, stored at the path /home/forgerock/openam/config inside the AM Docker container (and in the AM pods).

  • Multiple AM instances are deployed for higher availability. The AM instances are configured to access the DS data stores.

  • Multiple IDM instances are deployed for higher availability. The IDM instances are configured to access the DS data stores.
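
After deployment, you can verify that these components are running. The pod names below follow common forgeops conventions but are assumptions; your namespace and names may differ:

    # List the platform pods in the CDM namespace (namespace name assumed).
    kubectl get pods -n my-namespace
    # Expect DS pods such as ds-cts-0, ds-cts-1, ds-cts-2 and
    # ds-idrepo-0, ds-idrepo-1, ds-idrepo-2, plus multiple am-* and
    # idm-* pods.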

Highly available, distributed deployment

Deployment across the three zones ensures that the ingress controller and all ForgeRock Identity Platform components are highly available.

Pods that run DS are configured with soft pod anti-affinity. Kubernetes therefore prefers to schedule DS pods on nodes that aren’t already running a DS pod, but can co-locate them when no other node is available.
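
A minimal sketch of what soft anti-affinity looks like in a pod spec, assuming the label app: ds (the actual forgeops manifests may use different labels and topology keys):

    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: ds                        # assumed label
            topologyKey: kubernetes.io/hostname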

The exact placement of all other CDM pods is delegated to Kubernetes.

In small and medium CDM clusters, pods are organized across three zones in a single primary node pool [1] with six nodes. Pod placement among the nodes might vary, but the DS pods should run on nodes without any other DS pods.

Diagram: small and medium CDM clusters have three zones and one node pool; the node pool has six nodes.

In large CDM clusters, pods are distributed across two node pools: primary [1] and DS. Each node pool has six nodes. Again, pod placement among the nodes might vary, but the DS pods should run on nodes without any other DS pods.

Diagram: large CDM clusters have three zones and two node pools; the node pools have six nodes each.
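
On GKE, for example, you can check which node pool each node belongs to using the GKE node pool label (other cloud providers use different labels):

    # Show each node's pool assignment (GKE-specific label).
    kubectl get nodes -L cloud.google.com/gke-nodepool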

Load balancing

The NGINX Ingress Controller provides load balancing services for CDM deployments. Ingress controller pods run in the nginx namespace. The underlying load balancer implementation varies by cloud provider.
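
A quick way to confirm the ingress controller and its cloud load balancer are in place:

    # List the ingress controller pods and the load balancer service
    # provisioned by the cloud provider.
    kubectl get pods,svc -n nginx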

Secret generation and management

ForgeRock’s open source Secret Agent operator generates Kubernetes secrets for ForgeRock Identity Platform deployments. It also integrates with Google Cloud Secret Manager, AWS Secrets Manager, and Azure Key Vault, providing cloud backup and retrieval for secrets.
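
For example, you can inspect the generated secrets in the deployment namespace; the exact secret names depend on your Secret Agent configuration:

    # List the Kubernetes secrets generated for the deployment
    # (namespace name assumed).
    kubectl get secrets -n my-namespace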

Directory services management

ForgeRock’s open source Directory Services operator manages the Directory Services instances deployed with the CDM, including the following (a verification example follows the list):

  • Creation of stateful sets, services, and persistent volume claims

  • Addition of new directory pods to the replication topology

  • Modification of service account passwords in the directory, using a Kubernetes secret

  • Taking volume snapshots of the directory disk, and restoring directories from snapshots

  • Backing up and restoring directory data using LDIF
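
For example, you can verify the resources the operator creates; the resource names are assumptions based on the ds-cts and ds-idrepo naming used earlier:

    # Inspect the stateful sets, services, and persistent volume claims
    # created for the directory instances.
    kubectl get statefulset,service,pvc -n my-namespace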

Secured communication

The ingress controller is TLS-enabled and terminates TLS: incoming requests and outgoing responses are encrypted.

Inbound communication to DS instances occurs over secure LDAP (LDAPS).
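
For instance, a directory search must target the secure LDAP port. A hedged sketch using the DS ldapsearch tool; the host, port, credentials, and base DN are placeholders, and trust configuration is omitted:

    # Hypothetical example: search the identity repository over LDAPS.
    ldapsearch --hostname ds-idrepo-0.ds-idrepo --port 1636 --useSsl \
        --bindDn uid=admin --bindPassword password \
        --baseDn dc=example,dc=com "(uid=user.1)"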

For more information, see Secure HTTP.

Stateful sets

The CDM uses Kubernetes stateful sets to manage the DS pods. Stateful sets provide stable, persistent storage that protects against data loss when containers fail.
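
A minimal sketch of the pattern, with all names, image, and sizes assumed: each replica gets its own persistent volume claim from volumeClaimTemplates, so directory data outlives any individual pod:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ds-idrepo
    spec:
      serviceName: ds-idrepo
      replicas: 3
      selector:
        matchLabels:
          app: ds-idrepo
      template:
        metadata:
          labels:
            app: ds-idrepo
        spec:
          containers:
          - name: ds
            image: my-registry/ds:7            # assumed image reference
            volumeMounts:
            - name: data
              mountPath: /opt/opendj/data      # assumed data directory
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Gi                   # assumed volume size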

The CTS data stores are configured for affinity load balancing for optimal performance: AM connections to CTS servers use token affinity.

The AM policies, application data, and identities reside in the idrepo directory service. For all the AM pods, the deployment uses a single idrepo master that can fail over to one of two secondary directory services.

Authentication

IDM is configured to use AM for authentication.

DS replication

All DS instances are configured for full replication of identities and session tokens.
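
To check replication health, DS provides the dsrepl command. A hedged example run inside a directory pod; the pod name, administration port, and credentials are assumptions:

    # Hypothetical example: show replication status from inside a DS pod.
    kubectl exec ds-idrepo-0 -n my-namespace -- \
        dsrepl status --hostname localhost --port 4444 \
            --bindDn uid=admin --bindPassword password --trustAll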

Backup and restore

Backup and restore can be performed using several techniques. You can:

  • Use the volume snapshot capability in GKE, EKS, or AKS. The cluster that the CDM is deployed in must be configured with a volume snapshot class before you can take volume snapshots, and persistent volume claims must use a CSI driver that supports volume snapshots (see the example after this list).

  • Use a "last mile" backup archival solutions, such as Amazon S3, Google Cloud Storage, and Azure Cloud Storage that is specific to the cloud provider.

  • Use a Kubernetes backup and restore product, such as Velero, Kasten K10, TrilioVault, Commvault, or Portworx PX-Backup.
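
For the volume snapshot approach, a minimal sketch of a VolumeSnapshot resource; the snapshot class and PVC names are assumptions (the PVC name follows the stateful set naming convention):

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: ds-idrepo-snapshot
      namespace: my-namespace
    spec:
      volumeSnapshotClassName: ds-snapshot-class    # assumed class name
      source:
        persistentVolumeClaimName: data-ds-idrepo-0 # assumed PVC name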

For more information, see Backup and restore overview.

Initial data loading

When the CDM starts up, it runs the amster job, which loads application data, such as OAuth 2.0 client definitions, into the idrepo DS instance.
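
You can follow the job to confirm the data load completes, for example:

    # Watch the amster job's progress (namespace name assumed).
    kubectl logs job/amster -n my-namespace --follow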

1. On GKE, the node pool shown in the diagram as Primary is named default-pool.