Installation
This chapter shows you how to install and deploy Autonomous Identity for intelligent entitlements management in production environments. For hardware and software requirements, refer to the Release notes.
For a description of the Autonomous Identity UI console, refer to the Autonomous Identity Users Guide.
Deployment architectures
To simplify your deployments, ForgeRock provides a deployer script to install Autonomous Identity on a target node. The deployer pulls in images from the ForgeRock Google Cloud Repository and uses them to deploy the microservices and analytics for Autonomous Identity on a target machine. The target machine only requires the base operating system.
If you are upgrading Autonomous Identity on RHEL 7/CentOS 7, the upgrade to 2022.11 uses RHEL 7/CentOS 7 only. For new and clean installations, Autonomous Identity requires RHEL 8 or CentOS Stream 8 only.
Autonomous Identity 2022.11.0 introduced a new deployer, Deployer Pro, that pulls in the base code from the ForgeRock Google Cloud repository. Customers must now pre-install the third-party software dependencies prior to running deployer pro. For more information, refer to Install a single node deployment.
There are four basic deployments, all of them similar, but in slightly different configurations:
-
Single-Node Target Deployment. Deploy Autonomous Identity on a single Internet-connected target machine. The deployer script lets you deploy the system from a local laptop or machine, or from the target machine itself. The target machine can be on-prem or on a cloud service, such as Google Cloud Platform (GCP), Amazon Web Services (AWS), Microsoft Azure, or others. For installation instructions, refer to Install a Single-Node Deployment.
Figure 1. A single-node target deployment. -
Single-Node Air-Gapped Target Deployment. Deploy Autonomous Identity on a single-node target machine that resides in an air-gapped deployment. In an air-gapped deployment, the target machine is placed in an enhanced security environment where external Internet access is not available. You transfer the deployer and image to the target machine using media, such as a portable drive. Then, run the deployment within the air-gapped environment. For installation instructions, refer to Install a Single-Node Air-Gapped Deployment.
Figure 2. An air-gapped deployment. -
Multi-Node Deployment. Deploy Autonomous Identity in a multi-node deployment to distribute the processing load across servers. For installation instructions, refer to Install a Multi-Node Deployment.
Figure 3. A multi-node target deployment. -
Multi-Node Air-Gapped Deployment. Deploy Autonomous Identity in a multi-node configuration in an air-gapped network. The multi-node network has no external Internet connection. For installation instructions, refer to Install a Multi-Node Air-Gapped Deployment.
Figure 4. A multi-node air-gapped target deployment.
Install a single node deployment
This section presents instructions on deploying Autonomous Identity in a single-target machine with Internet connectivity.
Autonomous Identity 2022.11.0 introduced a new installation script, deployer pro (Deployer for Production), letting customers manage their third-party software dependencies in their particular Autonomous Identity environments. Autonomous Identity 2022.11.8 continues to use this deployer script. For background information about the deployer, refer to About the new deployer pro script.
The procedures presented in this section are generalized examples to help you get acquainted with Autonomous Identity. Consult with your ForgeRock Professional Services or technical partner for specific assistance to install Autonomous Identity within your particular environment.
Prerequisites
For new and clean deployments, the following are prerequisites:
-
Operating System. The target machine requires Red Hat Enterprise Linux 8 or CentOS Stream 8. The deployer machine can use any operating system as long as Docker is installed. For this chapter, we use CentOS Stream 8 as the base operating system.
If you are upgrading Autonomous Identity on a RHEL 7/CentOS 7, the upgrade to 2022.11 uses RHEL 7/CentOS 7 only. For new and clean installations, Autonomous Identity requires RHEL 8 or CentOS Stream 8 only. -
Disk Space Requirements. Make sure you have enough free disk space on the deployer machine before running the
deployer.sh
commands. We recommend at least 500GB. -
Default Shell. The default shell for the
autoid
user must be bash. -
Deployment Requirements. Autonomous Identity provides a Docker image that creates a
deployer.sh
script. The script downloads additional images necessary for the installation. To download the deployment images, you must first obtain a registry key to log into the ForgeRock Google Cloud Registry. The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, refer to How To Configure Service Credentials (Push Auth, Docker) in Backstage. -
Database Requirements. Decide which database you are using: Apache Cassandra or MongoDB.
-
IPv4 Forwarding. Many high security environments run their CentOS-based systems with IPv4 forwarding disabled. However, Docker Swarm does not work with a disabled IPv4 forward setting. In such environments, make sure to enable IPv4 forwarding in the file
/etc/sysctl.conf
:
net.ipv4.ip_forward=1
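For example, a minimal way to check the current setting and enable it persistently (a sketch; adjust to your site's security policies):
# Check the current value (1 = enabled, 0 = disabled)
sysctl net.ipv4.ip_forward
# Persist the setting and apply it without a reboot
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p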
Install third-party components
First, set up your GCP virtual machine and install the third-party package dependencies required for the Autonomous Identity deployment:
-
Create a GCP Red Hat Enterprise Linux (RHEL) 8 or CentOS Stream 8 virtual machine: n2-standard-4 (4 vCPU and 16GB memory). Refer to https://www.centos.org/centos-stream/.
-
Create an
autoid
user with the proper privileges to run the installation. For example:
sudo adduser autoid
sudo passwd autoid
echo "autoid ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/autoid
sudo usermod -aG wheel autoid
su - autoid
-
Install the following packages needed in the Autonomous Identity deployment:
-
Java 11. For example,
sudo dnf install java-11-openjdk-devel
. -
wget. For example,
sudo dnf install wget
. -
unzip. For example,
sudo dnf install unzip
. -
elinks. For example,
sudo yum install -y elinks
.
-
-
Install Python 3.10.9.
-
Refer to https://docs.python.org/release/3.10.9/.
-
Make sure no other Python versions are installed on the machine. Remove those versions. For example:
sudo rm -rf /usr/bin/python3
sudo rm -rf /usr/bin/python3.6
sudo rm -rf /usr/bin/python3m
sudo rm -rf /usr/bin/pip3
sudo rm -rf /usr/bin/easy_install-3
sudo rm -rf /usr/bin/easy_install-3.6
-
Create symlinks for python3:
sudo ln -s /usr/bin/python3.10 /usr/bin/python3
sudo ln -s /usr/bin/easy_install-3.10 /usr/bin/easy_install-3
sudo ln -s /usr/bin/pip3.10 /usr/bin/pip3
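After creating the symlinks, it is worth confirming that the expected interpreter and tooling resolve correctly, for example:
python3 --version   # should report Python 3.10.9
pip3 --version      # should report a pip bound to Python 3.10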
-
-
Install Cassandra 4.0.6. Refer to https://cassandra.apache.org/doc/latest/cassandra/getting_started/index.html. (For MongoDB installations, follow the instructions in Download MongoDB.)
-
Log in to the Cassandra shell. For example:
cassandra/bin/cqlsh <$ipaddress> -u cassandra -p cassandra
-
Create the Cassandra roles for Autonomous Identity. Refer to https://cassandra.apache.org/doc/latest/cassandra/cql/security.html. For example:
cassandra/bin/cqlsh <$ipaddress> -u cassandra -p cassandra -e "CREATE ROLE zoran_dba WITH PASSWORD = 'password' AND SUPERUSER = true AND LOGIN = true;" cassandra/bin/cqlsh <$ipaddress> -u cassandra -p cassandra -e "CREATE ROLE zoranuser WITH PASSWORD = ''password' AND LOGIN = true;" cassandra/bin/cqlsh <$ipaddress> -u zoran_dba -p 'password -e "ALTER ROLE cassandra WITH PASSWORD='randompassword123' AND SUPERUSER=false AND LOGIN = false;" cassandra/bin/cqlsh <$ipaddress> -u zoran_dba -p 'password -e "ALTER KEYSPACE "system_auth" WITH REPLICATION = {'class' :'NetworkTopologyStrategy','datacenter1' : 1};"
-
Configure security for Cassandra. Refer to https://cassandra.apache.org/doc/latest/cassandra/operating/security.html.
-
-
Install MongoDB 4.4. Follow the instructions in https://www.mongodb.com/docs/v4.4/tutorial/install-mongodb-on-red-hat/.
-
Create a MongoDB user with username
mongoadmin
with admin privileges. Follow the instructions in https://www.mongodb.com/docs/v4.4/core/security-users/. For example:
db.createUser({ user: "mongoadmin",pwd: "~@C~O>@%^()-_+=|<Y*$$rH&&/m#g{?-o!z/1}2??3=!*&", roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]})
-
To set up SSL, refer to https://www.mongodb.com/docs/v4.4/tutorial/configure-ssl/#procedures--using-net.ssl-settings. For example, the MongoDB configuration file (
/etc/mongod.conf
) would include a section similar to the following:
net:
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/ssl/mongodb.pem
    CAFile: /etc/ssl/rootCA.pem
- IMPORTANT
-
Make sure that the CN entry in the
mongodb.pem
certificate is the IP address/hostname of themongodb
instance. You need to add this same CN value to thehosts
file during the Autonomous Identity deployment.
-
Restart the daemon and MongoDB.
sudo systemctl daemon-reload
sudo systemctl restart mongod
-
-
Install Apache Spark 3.3.2. Refer to https://spark.apache.org/downloads.html.
The official release of Apache Livy does not support Apache Spark 3.3.1 or 3.3.2. ForgeRock has re-compiled and packaged Apache Livy to work with Apache Spark 3.3.1 hadoop 3 and Apache Spark 3.3.2 hadoop 3. Use the zip file located at autoid-config/apache-livy/apache-livy-0.8.0-incubating-SNAPSHOT-bin.zip
to install Apache Livy on the Spark-Livy machine.-
Configure the
SPARK_HOME
in yourbashrc
file. For example:
export SPARK_HOME=/opt/spark/spark-3.3.2-bin-hadoop3
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
-
Configure authentication on Spark, refer to https://spark.apache.org/docs/latest/security.html#authentication. For example:
spark.authenticate true
spark.authenticate.secret <your-secret>
-
Enable and start the Spark main and secondary servers:
sudo chown -R $USER:$USER $SPARK_HOME
$SPARK_HOME/sbin/start-all.sh
-
Spark 3.3.1 and 3.3.2 no longer use log4j1 and have upgraded to log4j2. Copy or move the log4j template file to the
log4j2.properties
file. For example:
mv /opt/spark/spark-3.3.2-bin-hadoop3/conf/log4j.properties.template /opt/spark/spark-3.3.2-bin-hadoop3/conf/log4j2.properties
You will install Apache Livy in a later step. Refer to Install Apache Livy.
-
-
Install Opensearch 1.3.13 and Opensearch Dashboards 1.3.13. For more information, refer to Opensearch 1.3.13.
-
Configure Opensearch Dashboards using the
/Opensearch-dashboards/config/Opensearch_dashboards.yml
file. Refer to https://Opensearch.org/docs/1.3/dashboards/install/index/. -
Configure TLS/SSL security:
-
Follow the instructions in https://Opensearch.org/docs/latest/security-plugin/configuration/tls/.
-
Follow the instructions in https://Opensearch.org/docs/2.0/security-plugin/configuration/generate-certificates/.
- IMPORTANT
-
Make sure that the CN entry in the
esnode.pem
certificate is the IP address/hostname of the Opensearch instance. You need to add this same CN value to thehosts
file during the Autonomous Identity deployment.
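Once TLS is enabled, you can verify the HTTPS endpoint and the certificate chain from the Opensearch host. The command below is only a sketch: the user, password, and CA path are placeholders for your own values, and it assumes the default REST port 9200:
curl --cacert /path/to/root-ca.pem -u <opensearch-admin>:<password> "https://<opensearch-host>:9200/_cluster/health?pretty"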
-
-
-
Set up Docker using the procedures in https://docs.docker.com/engine/install/centos/.
-
For post-installation Docker steps, follow the instructions in https://docs.docker.com/engine/install/linux-postinstall/.
Do not use /opt/autoid
as the Docker root directory, because it is overwritten during the Autonomous Identity installation and will result in a recursive error.
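As a quick sanity check that Docker is installed and that the autoid user can run containers without sudo (per the post-installation steps above), you can run a throwaway container, for example:
docker --version
docker run --rm hello-world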
-
Set up SSH on the deployer
This section shows how to set up SSH keys for the autoid
user to the target machine. This is a critical step and
necessary for a successful deployment.
-
On the deployer machine, change to the SSH directory.
cd ~/.ssh
-
Run
ssh-keygen
to generate a 2048-bit RSA keypair for theautoid
user, and then press Enter. Use the default settings, and do not enter a passphrase for your private key.
ssh-keygen -t rsa -C "autoid"
The public and private rsa key pair is stored in
home-directory/.ssh/id_rsa
andhome-directory/.ssh/id_rsa.pub
. -
Copy the SSH key to the
autoid-config
directory.cp id_rsa ~/autoid-config
-
Change the privileges on the file.
chmod 400 ~/autoid-config/id_rsa
-
Copy your public SSH key,
id_rsa.pub
, to the target machine’s~/.ssh/authorized_keys
folder. If your target system does not have an~/.ssh/authorized_keys
, create it usingsudo mkdir -p ~/.ssh
, thensudo touch ~/.ssh/authorized_keys
.This example uses
ssh-copy-id
to copy the public key to the target machine, which may or may not be available on your operating system. You can also manually copy and paste the public key to your
on the target machine.ssh-copy-id -i id_rsa.pub autoid@<Target IP Address>
The ssh-copy-id
command requires that you have public key authentication enabled on the target server. You can enable it by editing the/etc/ssh/sshd_config
file on the target machine. For example: sudo vi /etc/ssh/sshd_config, set PubkeyAuthentication yes, and save the file. Next, restart sshd: sudo systemctl restart sshd. -
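If ssh-copy-id is not available, a manual equivalent of this step is to append the public key over SSH yourself (a sketch; it assumes password authentication is temporarily allowed for the autoid user):
cat ~/.ssh/id_rsa.pub | ssh autoid@<Target IP Address> 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'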
On the deployer machine, test your SSH connection to the target machine. This is a critical step. Make sure the connection works before proceeding with the installation.
ssh -i ~/.ssh/id_rsa autoid@<Target IP Address>
Last login: Tue Dec 14 14:06:06 2020
-
While SSH’ing into the target node, set the privileges on your
~/.ssh
and~/.ssh/authorized_keys
.chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
-
If you successfully accessed the remote server and changed the privileges on the
~/.ssh
, enterexit
to end your SSH session.
Install Autonomous Identity
Make sure you have the following prerequisites:
-
IP address of machines running Opensearch, MongoDB, or Cassandra.
-
The Autonomous Identity user should have permission to write to
/opt/autoid
on all machines -
To download the deployment images for the install, you still need your registry key to log in to the ForgeRock Google Cloud Registry.
-
Make sure you have the proper Opensearch certificates with the exact names for both pem and JKS files copied to
~/autoid-config/certs/elastic
:-
esnode.pem
-
esnode-key.pem
-
root-ca.pem
-
elastic-client-keystore.jks
-
elastic-server-truststore.jks
-
-
Make sure you have the proper MongoDB certificates with exact names for both pem and JKS files copied to
~/autoid-config/certs/mongo
:-
mongo-client-keystore.jks
-
mongo-server-truststore.jks
-
mongodb.pem
-
rootCA.pem
-
-
Make sure you have the proper Cassandra certificates with exact names for both pem and JKS files copied to ~/autoid-config/certs/cassandra:
-
Zoran-cassandra-client-cer.pem
-
Zoran-cassandra-client-keystore.jks
-
Zoran-cassandra-server-cer.pem
-
zoran-cassandra-server-keystore.jks
-
Zoran-cassandra-client-key.pem
-
Zoran-cassandra-client-truststore.jks
-
Zoran-cassandra-server-key.pem
-
Zoran-cassandra-server-truststore.jks
-
-
Create the
autoid-config
directory.mkdir autoid-config
-
Change to the directory.
cd autoid-config
-
Log in to the ForgeRock Google Cloud Registry using the registry key. The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, refer to How To Configure Service Credentials (Push Auth, Docker) in Backstage.
docker login -u _json_key -p "$(cat autoid_registry_key.json)" https://gcr.io/forgerock-autoid
The following output is displayed:
Login Succeeded
-
Run the create-template command to generate the
deployer.sh
script wrapper and configuration files. Note that the command sets the configuration directory on the target node to/config
. The--user
parameter eliminates the need to usesudo
while editing the hosts file and other configuration files.
docker run --user=$(id -u) -v ~/autoid-config:/config -it gcr.io/forgerock-autoid/deployer-pro:2022.11.8 create-template
-
Create a certificate directory for elastic.
mkdir -p autoid-config/certs/elastic
-
Copy the Opensearch certificates and JKS files to
autoid-config/certs/elastic
. -
Create a certificate directory for MongoDB.
mkdir -p autoid-config/certs/mongo
-
Copy the MongoDB certificates and JKS files to
autoid-config/certs/mongo
. -
Create a certificate directory for Cassandra.
mkdir -p autoid-config/certs/cassandra
-
Copy the Cassandra certificates and JKS files to
autoid-config/certs/cassandra
. -
Update the
hosts
file with the IP addresses of the machines. Thehosts
file must include the IP addresses for Docker nodes, Spark main/livy, and the MongoDB master. While the deployer pro does not install or configure the MongoDB main server, the entry is required to run the MongoDB CLI to seed the Autonomous Identity schema.
[docker-managers]

[docker-workers]

[docker:children]
docker-managers
docker-workers

[spark-master-livy]

[cassandra-seeds]
# For replica sets, add the IPs of all Cassandra nodes

[mongo_master]
# Add the MongoDB main node in the cluster deployment
# For example: 10.142.15.248 mongodb_master=True

[odfe-master-node]
# Add only the main node in the cluster deployment

[kibana-node]
# Please add only the master node in cluster deployment
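For example, in a single-node deployment where every role runs on the same target machine, a populated hosts file might look like the following (10.10.1.5 is a hypothetical address; substitute your own):
[docker-managers]
10.10.1.5

[docker-workers]
10.10.1.5

[docker:children]
docker-managers
docker-workers

[spark-master-livy]
10.10.1.5

[cassandra-seeds]
10.10.1.5

[mongo_master]
10.10.1.5 mongodb_master=True

[odfe-master-node]
10.10.1.5

[kibana-node]
10.10.1.5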
-
Update the
vars.yml
file:-
Set
db_driver_type
tomongo
orcassandra
. -
Set
elastic_host
,elastic_port
, andelastic_user
properties. -
Set
kibana_host
. -
Set the Apache livy install directory.
-
Ensure the
elastic_user
,elastic_port
, andmongo_part
are correctly configured. -
Update the
vault.yml
passwords for elastic and mongo to reflect your installation. -
Set the
mongo_ldap
variable totrue
if you want Autonomous Identity to authenticate with Mongo DB, configured as LDAP.The mongo_ldap
variable only appears in fresh installs of 2022.11.0 and its upgrades (2022.11.1+). If you upgraded from a 2021.8.7 deployment, the variable is not available in your upgraded 2022.11.x deployment. -
If you are using Cassandra, set the Cassandra-related parameters in the
vars.yml
file. Default values are:
cassandra:
  enable_ssl: "true"
  contact_points: 10.142.15.248   # comma separated values in case of replication set
  port: 9042
  username: zoran_dba
  cassandra_keystore_password: "Acc#1234"
  cassandra_truststore_password: "Acc#1234"
  ssl_client_key_file: "zoran-cassandra-client-key.pem"
  ssl_client_cert_file: "zoran-cassandra-client-cer.pem"
  ssl_ca_file: "zoran-cassandra-server-cer.pem"
  server_truststore_jks: "zoran-cassandra-server-truststore.jks"
  client_truststore_jks: "zoran-cassandra-client-truststore.jks"
  client_keystore_jks: "zoran-cassandra-client-keystore.jks"
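As a complementary example for MongoDB-backed deployments, a minimal fragment of vars.yml might look like the following (all values are hypothetical placeholders; the property names are the ones listed in the steps above):
db_driver_type: mongo
elastic_host: 10.10.1.5
elastic_port: 9200
elastic_user: <elastic-user>
kibana_host: 10.10.1.5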
-
-
Download images:
./deployer.sh download-images
-
Install Apache Livy.
-
The official release of Apache Livy does not support Apache Spark 3.3.1 or 3.3.2. ForgeRock has re-compiled and packaged Apache Livy to work with Apache Spark 3.3.1 hadoop 3 and Apache Spark 3.3.2 hadoop 3. Use the zip file located at
autoid-config/apache-livy/apache-livy-0.8.0-incubating-SNAPSHOT-bin.zip
to install Apache Livy on the Spark-Livy machine. -
For Livy configuration, refer to https://livy.apache.org/get-started/.
-
-
On the Spark-Livy machine, run the following commands to install the python package dependencies:
-
Change to the
/opt/autoid
directory:cd /opt/autoid
-
Create a
requirements.txt
file with the following content:
six==1.11
certifi==2019.11.28
python-dateutil==2.8.1
jsonschema==3.2.0
cassandra-driver
numpy==1.22.0
pyarrow==6.0.1
wrapt==1.11.0
PyYAML==6.0
requests==2.31.0
urllib3==1.26.18
pymongo
pandas==1.3.5
tabulate
openpyxl
wheel
cython
-
Install the requirements file:
pip3 install -r requirements.txt
-
-
Make sure that the
/opt/autoid
directory exists and that it is both readable and writable. -
Run the deployer script:
./deployer.sh run
-
On the Spark-Livy machine, run the following commands to install the Python egg file:
-
Install the egg file:
cd /opt/autoid/eggs
pip3.10 install autoid_analytics-2021.3-py3-none-any.whl
-
Source the
.bashrc
file:source ~/.bashrc
-
Restart Spark and Livy.
./spark/sbin/stop-all.sh
./livy/bin/livy-server stop
./spark/sbin/start-all.sh
./livy/bin/livy-server start
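After the restart, you can confirm that the Livy server is answering on its REST port (8998 is Livy's default; adjust if you changed it):
curl http://localhost:8998/sessions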
-
Resolve Hostname
After installing Autonomous Identity, set up the hostname resolution for your deployment.
-
Configure your DNS servers to access Autonomous Identity dashboard on the target node. The following domain names must resolve to the IP address of the target node:
<target-environment>-ui.<domain-name>
. -
If DNS cannot resolve target node hostname, edit it locally on the machine that you want to access Autonomous Identity using a browser. Open a text editor and add an entry in the
/etc/hosts
(Linux/Unix) file orC:\Windows\System32\drivers\etc\hosts
(Windows) for the self-service and UI services for each managed target node.<Target IP Address> <target-environment>-ui.<domain-name>
For example:
34.70.190.144 autoid-ui.forgerock.com
-
If you set up a custom domain name and target environment, add the entries in
/etc/hosts
. For example:34.70.190.144 myid-ui.abc.com
For more information on customizing your domain name, refer to Customize Domains.
Access the Dashboard
-
Open a browser. If you set up your own URL, use it to log in.
-
Log in as a test user.
test user: bob.rodgers@forgerock.com
password: <password>
Check Apache Cassandra
-
Make sure Cassandra is running in cluster mode. For example
/opt/autoid/apache-cassandra-3.11.2/bin/nodetool status
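You can also confirm that the cluster accepts CQL connections with the credentials created earlier (the path shown assumes the install location above; adjust it to your Cassandra installation):
/opt/autoid/apache-cassandra-3.11.2/bin/cqlsh <Cassandra IP> -u zoran_dba -p '<password>' -e 'DESC KEYSPACES;'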
Check MongoDB
-
Make sure MongoDB is running. For example:
mongo --tls \
--host <Host IP> \
--tlsCAFile /opt/autoid/mongo/certs/rootCA.pem \
--tlsAllowInvalidCertificates \
--tlsCertificateKeyFile /opt/autoid/mongo/certs/mongodb.pem
Check Apache Spark
-
SSH to the target node and open Spark dashboard using the bundled text-mode web browser
elinks http://localhost:8080
Spark Master status should display as ALIVE and worker(s) with State ALIVE.
Start the Analytics
If the previous installation steps all succeeded, you must now prepare your data’s entity definitions, data sources, and attribute mappings prior to running your analytics jobs. These steps are required and are critical for a successful analytics process.
For more information, refer to Set Entity Definitions.
Install a single node air-gapped deployment
This section presents instructions on deploying Autonomous Identity in a single-node target machine that has no Internet connectivity. This type of configuration, called an air-gap or offline deployment, provides enhanced security by isolating itself from outside Internet or network access.
The air-gap installation is similar to that of the single-node target deployment with Internet connectivity, except that the image and deployer script must be saved on a portable drive and copied to the air-gapped target machine.
Installation steps for an airgap deployment
The general procedure for an air-gap deployment is practically identical to that of a single node non-airgapped, except that you must prepare a tar file and copy the files to an air-gap machine.
Set up the nodes
Set up each node as presented in Install a single node deployment.
Make sure you have sufficient storage for your particular deployment. For more information on sizing considerations, refer to Deployment Planning Guide.
Set up the third-party software dependencies
Download and unpack the third-party software dependencies in Install third-party components.
Set up SSH on the deployer
SSH is not necessary to connect the deployer to the target node, because the machines are isolated from one another. However, you still need SSH on the deployer so that it can communicate with itself.
-
On the deployer machine, run
ssh-keygen
to generate an RSA keypair, and then press Enter. You can use the default filename. Enter a password for protecting your private key.
ssh-keygen -t rsa -C "autoid"
The public and private rsa key pair is stored in
home-directory/.ssh/id_rsa
andhome-directory/.ssh/id_rsa.pub
. -
Copy the SSH key to the
~/autoid-config
directory.cp ~/.ssh/id_rsa ~/autoid-config
-
Change the privileges on the file.
chmod 400 ~/autoid-config/id_rsa
Prepare the tar file
Run the following steps on an Internet-connected host machine:
-
On the deployer machine, change to the installation directory.
cd ~/autoid-config/
-
Log in to the ForgeRock Google Cloud Registry using the registry key. The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, refer to How To Configure Service Credentials (Push Auth, Docker) in Backstage.
docker login -u _json_key -p "$(cat autoid_registry_key.json)" https://gcr.io/forgerock-autoid
The following output is displayed:
Login Succeeded
-
Run the
create-template
command to generate thedeployer.sh
script wrapper. The command sets the configuration directory on the target node to/config
. Note that the--user
parameter eliminates the need to usesudo
while editing the hosts file and other configuration files.
docker run --user=$(id -u) -v ~/autoid-config:/config -it gcr.io/forgerock-autoid/deployer-pro:2022.11.8 create-template
-
Open the
~/autoid-config/vars.yml
file, set theoffline_mode
property totrue
, and then save the file.offline_mode: true
-
Download the Docker images. This step downloads software dependencies needed for the deployment and places them in the
autoid-packages
directory../deployer.sh download-images
-
Create a tar file containing all of the Autonomous Identity binaries.
tar czf autoid-packages.tgz deployer.sh autoid-packages/*
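Before copying the archive, you can list its contents to confirm the deployer script and downloaded packages were captured (a quick sanity check):
tar tzf autoid-packages.tgz | head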
-
Copy the
autoid-packages.tgz
,deployer.sh
, and SSH key (id_rsa
) to a portable hard drive.
Install on the air-gap target
Before you begin, make sure you have CentOS Stream 8 and Docker installed on your air-gapped target machine.
-
Create the
~/autoid-config
directory if you haven’t already.mkdir ~/autoid-config
-
Copy the
autoid-packages.tgz
tar file from the portable storage device. -
Unpack the tar file.
tar xf autoid-packages.tgz -C ~/autoid-config
-
On the air-gap host node, copy the SSH key to the
~/autoid-config
directory. -
Change the privileges on the file.
chmod 400 ~/autoid-config/id_rsa
-
Change to the configuration directory.
cd ~/autoid-config
-
Import the deployer image.
./deployer.sh import-deployer
The following output is displayed:
… db631c8b06ee: Loading layer [=============================================⇒] 2.56kB/2.56kB 2d62082e3327: Loading layer [=============================================⇒] 753.2kB/753.2kB Loaded image: gcr.io/forgerock-autoid/deployer:2022.11.8
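To confirm the deployer image loaded, list the local images and look for the repository shown in the output above:
docker images | grep forgerock-autoid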
-
Create the configuration template using the
create-template
command. This command creates the configuration files:ansible.cfg
,vars.yml
,vault.yml
andhosts
../deployer.sh create-template
The following output is displayed:
Config template is copied to host machine directory mapped to /config
Install Autonomous Identity
Make sure you have the following prerequisites:
-
IP address of machines running Opensearch, MongoDB, or Cassandra.
-
The Autonomous Identity user should have permission to write to
/opt/autoid
on all machines -
To download the deployment images for the install, you still need your registry key to log in to the ForgeRock Google Cloud Registry.
-
Make sure you have the proper Opensearch certificates with the exact names for both pem and JKS files copied to
~/autoid-config/certs/elastic
:-
esnode.pem
-
esnode-key.pem
-
root-ca.pem
-
elastic-client-keystore.jks
-
elastic-server-truststore.jks
-
-
Make sure you have the proper MongoDB certificates with exact names for both pem and JKS files copied to
~/autoid-config/certs/mongo
:-
mongo-client-keystore.jks
-
mongo-server-truststore.jks
-
mongodb.pem
-
rootCA.pem
-
-
Make sure you have the proper Cassandra certificates with exact names for both pem and JKS files copied to ~/autoid-config/certs/cassandra:
-
Zoran-cassandra-client-cer.pem
-
Zoran-cassandra-client-keystore.jks
-
Zoran-cassandra-server-cer.pem
-
zoran-cassandra-server-keystore.jks
-
Zoran-cassandra-client-key.pem
-
Zoran-cassandra-client-truststore.jks
-
Zoran-cassandra-server-key.pem
-
Zoran-cassandra-server-truststore.jks
-
-
Create a certificate directory for elastic.
mkdir -p autoid-config/certs/elastic
-
Copy the Opensearch certificates and JKS files to
autoid-config/certs/elastic
. -
Create a certificate directory for MongoDB.
mkdir -p autoid-config/certs/mongo
-
Copy the MongoDB certificates and JKS files to
autoid-config/certs/mongo
. -
Create a certificate directory for Cassandra.
mkdir -p autoid-config/certs/cassandra
-
Copy the Cassandra certificates and JKS files to
autoid-config/certs/cassandra
. -
Update the
hosts
file with the IP addresses of the machines. Thehosts
file must include the IP addresses for Docker nodes, Spark main/livy, and the MongoDB master. While the deployer pro does not install or configure the MongoDB main server, the entry is required to run the MongoDB CLI to seed the Autonomous Identity schema.
[docker-managers]

[docker-workers]

[docker:children]
docker-managers
docker-workers

[spark-master-livy]

[cassandra-seeds]
# For replica sets, add the IPs of all Cassandra nodes

[mongo_master]
# Add the MongoDB main node in the cluster deployment
# For example: 10.142.15.248 mongodb_master=True

[odfe-master-node]
# Add only the main node in the cluster deployment
-
Update the
vars.yml
file:-
Set
offline_mode
totrue
. -
Set
db_driver_type
tomongo
orcassandra
. -
Set
elastic_host
,elastic_port
, andelastic_user
properties. -
Set
kibana_host
. -
Set the Apache livy install directory.
-
Ensure the
elastic_user
,elastic_port
, andmongo_part
are correctly configured. -
Update the
vault.yml
passwords for elastic and mongo to reflect your installation. -
Set the
mongo_ldap
variable totrue
if you want Autonomous Identity to authenticate with Mongo DB, configured as LDAP.The mongo_ldap
variable only appears in fresh installs of 2022.11.0 and its upgrades (2022.11.1+). If you upgraded from a 2021.8.7 deployment, the variable is not available in your upgraded 2022.11.x deployment. -
If you are using Cassandra, set the Cassandra-related parameters in the
vars.yml
file. Default values are:
cassandra:
  enable_ssl: "true"
  contact_points: 10.142.15.248   # comma separated values in case of replication set
  port: 9042
  username: zoran_dba
  cassandra_keystore_password: "Acc#1234"
  cassandra_truststore_password: "Acc#1234"
  ssl_client_key_file: "zoran-cassandra-client-key.pem"
  ssl_client_cert_file: "zoran-cassandra-client-cer.pem"
  ssl_ca_file: "zoran-cassandra-server-cer.pem"
  server_truststore_jks: "zoran-cassandra-server-truststore.jks"
  client_truststore_jks: "zoran-cassandra-client-truststore.jks"
  client_keystore_jks: "zoran-cassandra-client-keystore.jks"
-
-
Install Apache Livy.
-
The official release of Apache Livy does not support Apache Spark 3.3.1 or 3.3.2. ForgeRock has re-compiled and packaged Apache Livy to work with Apache Spark 3.3.1 hadoop 3 and Apache Spark 3.3.2 hadoop 3. Use the zip file located at
autoid-config/apache-livy/apache-livy-0.8.0-incubating-SNAPSHOT-bin.zip
to install Apache Livy on the Spark-Livy machine. -
For Livy configuration, refer to https://livy.apache.org/get-started/.
-
-
On the Spark-Livy machine, run the following commands to install the python package dependencies:
-
Change to the
/opt/autoid
directory:cd /opt/autoid
-
Create a
requirements.txt
file with the following content:
six==1.11
certifi==2019.11.28
python-dateutil==2.8.1
jsonschema==3.2.0
cassandra-driver
numpy==1.22.0
pyarrow==6.0.1
wrapt==1.11.0
PyYAML==6.0
requests==2.31.0
urllib3==1.26.18
pymongo
pandas==1.3.5
tabulate
openpyxl
wheel
cython
-
Install the requirements file:
pip3 install -r requirements.txt
-
-
Make sure that the
/opt/autoid
directory exists and that it is both readable and writable. -
Run the deployer script:
./deployer.sh run
-
On the Spark-Livy machine, run the following commands to install the Python egg file:
-
Install the egg file:
cd /opt/autoid/eggs
pip3.10 install autoid_analytics-2021.3-py3-none-any.whl
-
Source the
.bashrc
file:source ~/.bashrc
-
Restart Spark and Livy.
./spark/sbin/stop-all.sh
./livy/bin/livy-server stop
./spark/sbin/start-all.sh
./livy/bin/livy-server start
-
Resolve Hostname
After installing Autonomous Identity, set up the hostname resolution for your deployment.
-
Configure your DNS servers to access Autonomous Identity dashboard on the target node. The following domain names must resolve to the IP address of the target node:
<target-environment>-ui.<domain-name>
. -
If DNS cannot resolve target node hostname, edit it locally on the machine that you want to access Autonomous Identity using a browser. Open a text editor and add an entry in the
/etc/hosts
(Linux/Unix) file orC:\Windows\System32\drivers\etc\hosts
(Windows) for the self-service and UI services for each managed target node.<Target IP Address> <target-environment>-ui.<domain-name>
For example:
34.70.190.144 autoid-ui.forgerock.com
-
If you set up a custom domain name and target environment, add the entries in
/etc/hosts
. For example:34.70.190.144 myid-ui.abc.com
For more information on customizing your domain name, refer to Customize Domains.
Access the Dashboard
-
Open a browser. If you set up your own URL, use it to log in.
-
Log in as a test user.
test user: bob.rodgers@forgerock.com
password: <password>
Check Apache Cassandra
-
Make sure Cassandra is running in cluster mode. For example
/opt/autoid/apache-cassandra-3.11.2/bin/nodetool status
Check MongoDB
-
Make sure MongoDB is running. For example:
mongo --tls \
--host <Host IP> \
--tlsCAFile /opt/autoid/mongo/certs/rootCA.pem \
--tlsAllowInvalidCertificates \
--tlsCertificateKeyFile /opt/autoid/mongo/certs/mongodb.pem
Check Apache Spark
-
SSH to the target node and open Spark dashboard using the bundled text-mode web browser
elinks http://localhost:8080
Spark Master status should display as ALIVE and worker(s) with State ALIVE.
Start the Analytics
If the previous installation steps all succeeded, you must now prepare your data’s entity definitions, data sources, and attribute mappings prior to running your analytics jobs. These steps are required and are critical for a successful analytics process.
For more information, refer to Set Entity Definitions.
Install a multi-node deployment
This section presents instructions on deploying Autonomous Identity in a multi-node deployment. Multi-node deployments are configured in production environments, providing performant throughput by distributing the processing load across servers and supporting failover redundancy.
Like single-node deployment, ForgeRock provides a Deployer Pro script to pull a Docker image from
ForgeRock’s Google Cloud Registry repository with the microservices and analytics needed for the system.
The deployer also uses the node IP addresses specified in your hosts
file to set up an overlay network and your nodes.
The procedures are similar to multi-node deployments using older Autonomous Identity releases, except that you must install and configure the dependent software packages (for example, Apache Cassandra/MongoDB, Apache Spark and Livy, Opensearch and Opensearch Dashboards, and Docker) prior to running Autonomous Identity.
Prerequisites
Deploy Autonomous Identity on a multi-node target on Redhat Linux Enterprise 8 or CentOS Stream 8. The following are prerequisites:
-
Operating System. The target machine requires Red Hat Enterprise Linux 8 or CentOS Stream 8. The deployer machine can use any operating system as long as Docker is installed. For this chapter, we use Red Hat Enterprise Linux 8 as the base operating system.
If you are upgrading Autonomous Identity on a RHEL 7/CentOS 7, the upgrade to 2022.11 uses RHEL 7/CentOS 7 only. For new and clean installations, Autonomous Identity requires RHEL 8 or CentOS Stream 8 only. -
Default Shell. The default shell for the
autoid
user must bebash
. -
Subnet Requirements. We recommend deploying your multi-node machines within the same subnet. Ports must be open for the installation to succeed. Each instance should be able to communicate with the other instances.
If any hosts used for the Docker cluster (docker-managers, docker-workers) have an IP address in the range of 10.0.x.x, they will conflict with the Swarm network. As a result, the services in the cluster will not connect to the Cassandra database or Elasticsearch backend.
The Docker cluster hosts must be in a subnet that provides IP addresses 10.10.1.x or higher.
-
Deployment Requirements. Autonomous Identity provides a
deployer.sh
script that downloads and installs the necessary Docker images. To download the deployment images, you must first obtain a registry key to log into the ForgeRock Google Cloud Registry. The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, refer to How To Configure Service Credentials (Push Auth, Docker) in Backstage. -
Filesystem Requirements. Autonomous Identity requires a shared filesystem accessible from the Spark main, Spark worker, analytics hosts, and application layer. The shared filesystem should be mounted at the same mount directory on all of those hosts. If the mount directory for the shared filesystem is different from the default,
/data
, update the/autoid-config/vars.yml
file to point to the correct directories:
analytics_data_dir: /data
analytics_conf_dif: /data/conf
-
Architecture Requirements. Make sure that the Spark main is on a separate node from the Spark workers.
-
Database Requirements. Decide which database you are using: Apache Cassandra or MongoDB. The configuration procedure is slightly different for each database.
-
Deployment Best-Practice. The example combines the Opensearch data and Opensearch Dashboards nodes. For best performance in production, dedicate a separate node to Opensearch, data nodes, and Opensearch Dashboards.
-
IPv4 Forwarding. Many high-security environments run their CentOS-based systems with IPv4 forwarding disabled. However, Docker Swarm does not work with a disabled IPv4 forward setting. In such environments, make sure to enable IPv4 forwarding in the file
/etc/sysctl.conf
:
net.ipv4.ip_forward=1
We recommend that your deployer team have someone with Cassandra expertise. This guide is not sufficient to troubleshoot any issues that may arise.
Set up the nodes
Set up three virtual machines.
-
Create a Red Hat Enterprise Linux 8 or CentOS Stream 8 virtual machine: N2 with 4 cores and 16 GB memory. Verify your operating system.
sudo cat /etc/centos-release
For multinode deployments, there is a known issue with RHEL 8/CentOS Stream 8 and overlay network configurations. Refer to Known Issues in 2022.11.0. -
Set the user for the target node to
autoid
. In this example, create userautoid
:
sudo adduser autoid
sudo passwd autoid
echo "autoid ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/autoid
sudo usermod -aG wheel autoid
su - autoid
-
Optional. Install the yum-utils package on the deployer machine. yum-utils is a collection of utilities for managing Yum RPM repositories and packages.
sudo yum install -y yum-utils
-
Install the following packages needed in the Autonomous Identity deployment:
-
Java 11. For example,
sudo dnf install java-11-openjdk-devel
. -
wget. For example,
sudo dnf install wget
. -
unzip. For example,
sudo dnf install unzip
. -
elinks. For example,
sudo yum install -y elinks
. -
Python 3.10.9. Refer to https://docs.python.org/release/3.10.9/.
-
-
Repeat this procedure for the other nodes.
Install third-party components
Set up a machine with the required third-party software dependencies. Refer to: Install third-party components.
Set up SSH on the deployer
-
On the deployer machine, change to the
~/.ssh
directory.cd ~/.ssh
-
Run
ssh-keygen
to generate an RSA keypair, and then press Enter. You can use the default filename. Do not add a key passphrase, as it results in a build error.
ssh-keygen -t rsa -C "autoid"
The public and private rsa key pair is stored in
home-directory/.ssh/id_rsa
andhome-directory/.ssh/id_rsa.pub
. -
Copy the SSH key to the
autoid-config
directory.cp id_rsa ~/autoid-config
-
Change the privileges on the file.
chmod 400 ~/autoid-config/id_rsa
-
Copy your public SSH key,
id_rsa.pub
, to each of your nodes.If your target system does not have an ~/.ssh/authorized_keys
, create it usingsudo mkdir -p ~/.ssh
, thensudo touch ~/.ssh/authorized_keys
.For this example, copy the SSH key to each node:
ssh-copy-id -i id_rsa.pub autoid@<Node IP Address>
-
On the deployer machine, test your SSH connection to each target machine. This is a critical step. Make sure the connection works before proceeding with the installation.
For example, SSH to first node:
ssh -i id_rsa autoid@<Node 1 IP Address>
Last login: Sat Oct 3 03:02:40 2020
-
If you can successfully SSH to each machine, set the privileges on your
~/.ssh
and~/.ssh/authorized_keys
.chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
-
Enter exit to end your SSH session.
-
Repeat steps 5–8 for each node.
Set up a shared data folder
The Docker main and worker nodes plus the analytics main and worker nodes require a shared data directory, typically, /data
.
There are numerous ways to set up a shared directory; the following procedure is just one example and sets up an NFS server on the analytics Spark main node.
-
On the Analytics Spark Main node, install
nfs-utils
. This step may require that you run the install with root privileges, such assudo
or equivalent.sudo yum install -y nfs-utils
-
Create the
/data
directory.mkdir -p /data
-
Change the permissions on the
/data
directory.chmod -R 755 /data chown nfsnobody:nfsnobody /data
-
Start the services and enable them to start at boot.
systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap
systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap
-
Define the sharing points in the
/etc/exports
file.vi /etc/exports /data <Remote IP Address 1>(rw,sync,no_root_squash,no_all_squash) /data <Remote IP Address 2>(rw,sync,no_root_squash,no_all_squash)
If you change the domain name and target environment, you need to also change the certificates to reflect the new changes. For more information, refer to Customize Domains.
-
Start the NFS service.
systemctl restart nfs-server
-
Add the NFS service to the
firewall-cmd
public zone service:firewall-cmd --permanent --zone=public --add-service=nfs firewall-cmd --permanent --zone=public --add-service=mountd firewall-cmd --permanent --zone=public --add-service=rpc-bind firewall-cmd --reload
-
On each spark worker node, run the following:
-
Install
nfs-utils
:yum install -y nfs-utils
-
Create the NFS directory mount points:
mkdir -p /data
-
Mount the NFS shared directory:
mount -t nfs <NFS Server IP>:/data /data
-
Test the new shared directory by creating a small text file. On an analytics worker node, run the following, and then check for the presence of the test file on the other servers:
cd /data
touch test
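You can also confirm that each worker mounted the share from the NFS server rather than a local directory, for example:
df -h /data
mount | grep ' /data '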
-
Install Autonomous Identity
Make sure you have the following prerequisites:
-
IP address of machines running Opensearch, MongoDB, or Cassandra.
-
The Autonomous Identity user should have permission to write to
/opt/autoid
on all machines -
To download the deployment images for the install, you still need your registry key to log in to the ForgeRock Google Cloud Registry.
-
Make sure you have the proper Opensearch certificates with the exact names for both pem and JKS files copied to
~/autoid-config/certs/elastic
:-
esnode.pem
-
esnode-key.pem
-
root-ca.pem
-
elastic-client-keystore.jks
-
elastic-server-truststore.jks
-
-
Make sure you have the proper MongoDB certificates with exact names for both pem and JKS files copied to
~/autoid-config/certs/mongo
:-
mongo-client-keystore.jks
-
mongo-server-truststore.jks
-
mongodb.pem
-
rootCA.pem
-
-
Make sure you have the proper Cassandra certificates with exact names for both pem and JKS files copied to ~/autoid-config/certs/cassandra:
-
Zoran-cassandra-client-cer.pem
-
Zoran-cassandra-client-keystore.jks
-
Zoran-cassandra-server-cer.pem
-
zoran-cassandra-server-keystore.jks
-
Zoran-cassandra-client-key.pem
-
Zoran-cassandra-client-truststore.jks
-
Zoran-cassandra-server-key.pem
-
Zoran-cassandra-server-truststore.jks
-
-
Create the
autoid-config
directory.mkdir autoid-config
-
Change to the directory.
cd autoid-config
-
Log in to the ForgeRock Google Cloud Registry using the registry key. The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, refer to How To Configure Service Credentials (Push Auth, Docker) in Backstage.
docker login -u _json_key -p "$(cat autoid_registry_key.json)" https://gcr.io/forgerock-autoid
The following output is displayed:
Login Succeeded
-
Run the create-template command to generate the
deployer.sh
script wrapper and configuration files. Note that the command sets the configuration directory on the target node to/config
. The--user
parameter eliminates the need to usesudo
while editing the hosts file and other configuration files.
docker run --user=$(id -u) -v ~/autoid-config:/config -it gcr.io/forgerock-autoid/deployer-pro:2022.11.8 create-template
-
Create a certificate directory for elastic.
mkdir -p autoid-config/certs/elastic
-
Copy the Opensearch certificates and JKS files to
autoid-config/certs/elastic
. -
Create a certificate directory for MongoDB.
mkdir -p autoid-config/certs/mongo
-
Copy the MongoDB certificates and JKS files to
autoid-config/certs/mongo
. -
Create a certificate directory for Cassandra.
mkdir -p autoid-config/certs/cassandra
-
Copy the Cassandra certificates and JKS files to
autoid-config/certs/cassandra
. -
Update the
hosts
file with the IP addresses of the machines. Thehosts
file must include the IP addresses for Docker nodes, Spark main/livy, and the MongoDB master. While the deployer pro does not install or configure the MongoDB main server, the entry is required to run the MongoDB CLI to seed the Autonomous Identity schema.
[docker-managers]

[docker-workers]

[docker:children]
docker-managers
docker-workers

[spark-master-livy]

[cassandra-seeds]
# For replica sets, add the IPs of all Cassandra nodes

[mongo_master]
# Add the MongoDB main node in the cluster deployment
# For example: 10.142.15.248 mongodb_master=True

[odfe-master-node]
# Add only the main node in the cluster deployment
-
Update the
vars.yml
file:-
Set
db_driver_type
tomongo
orcassandra
. -
Set
elastic_host
,elastic_port
, andelastic_user
properties. -
Set
kibana_host
. -
Set the Apache livy install directory.
-
Ensure the
elastic_user
,elastic_port
, andmongo_part
are correctly configured. -
Update the
vault.yml
passwords for elastic and mongo to reflect your installation. -
Set the
mongo_ldap
variable totrue
if you want Autonomous Identity to authenticate with Mongo DB, configured as LDAP.The mongo_ldap
variable only appears in fresh installs of 2022.11.0 and its upgrades (2022.11.1+). If you upgraded from a 2021.8.7 deployment, the variable is not available in your upgraded 2022.11.x deployment. -
If you are using Cassandra, set the Cassandra-related parameters in the
vars.yml
file. Default values are:
cassandra:
  enable_ssl: "true"
  contact_points: 10.142.15.248   # comma separated values in case of replication set
  port: 9042
  username: zoran_dba
  cassandra_keystore_password: "Acc#1234"
  cassandra_truststore_password: "Acc#1234"
  ssl_client_key_file: "zoran-cassandra-client-key.pem"
  ssl_client_cert_file: "zoran-cassandra-client-cer.pem"
  ssl_ca_file: "zoran-cassandra-server-cer.pem"
  server_truststore_jks: "zoran-cassandra-server-truststore.jks"
  client_truststore_jks: "zoran-cassandra-client-truststore.jks"
  client_keystore_jks: "zoran-cassandra-client-keystore.jks"
-
-
Download images:
./deployer.sh download-images
-
Install Apache Livy.
-
The official release of Apache Livy does not support Apache Spark 3.3.1 or 3.3.2. ForgeRock has re-compiled and packaged Apache Livy to work with Apache Spark 3.3.1 hadoop 3 and Apache Spark 3.3.2 hadoop 3. Use the zip file located at
autoid-config/apache-livy/apache-livy-0.8.0-incubating-SNAPSHOT-bin.zip
to install Apache Livy on the Spark-Livy machine. -
For Livy configuration, refer to https://livy.apache.org/get-started/.
-
-
On the Spark-Livy machine, run the following commands to install the python package dependencies:
-
Change to the
/opt/autoid
directory:cd /opt/autoid
-
Create a
requirements.txt
file with the following content:
six==1.11
certifi==2019.11.28
python-dateutil==2.8.1
jsonschema==3.2.0
cassandra-driver
numpy==1.22.0
pyarrow==6.0.1
wrapt==1.11.0
PyYAML==6.0
requests==2.31.0
urllib3==1.26.18
pymongo
pandas==1.3.5
tabulate
openpyxl
wheel
cython
-
Install the requirements file:
pip3 install -r requirements.txt
-
-
Make sure that the
/opt/autoid
directory exists and that it is both readable and writable. -
Run the deployer script:
./deployer.sh run
-
On the Spark-Livy machine, run the following commands to install the Python egg file:
-
Install the egg file:
cd /opt/autoid/eggs
pip3.10 install autoid_analytics-2021.3-py3-none-any.whl
-
Source the
.bashrc
file:source ~/.bashrc
-
Restart Spark and Livy.
./spark/sbin/stop-all.sh
./livy/bin/livy-server stop
./spark/sbin/start-all.sh
./livy/bin/livy-server start
-
Set the Cassandra replication factor
Once Cassandra has been deployed, you need to set the replication factor to match the number of nodes on your system. This ensures that each record is stored on each of the nodes. In the event one node is lost, the remaining nodes can continue to serve content, even though the cluster itself is running with reduced redundancy.
You can define replication on a per-keyspace basis as follows:
-
Start the Cassandra shell,
cqlsh
, and define theautoid
keyspace. Change the replication factor to match the number of seed nodes. The default admin user for Cassandra iszoran_dba
.bin/cqlsh -u zoran_dba zoran_dba@cqlsh> desc keyspace autoid; CREATE KEYSPACE autoid WITH replication = {'class':'SimpleStrategy','replication_factor':'2'} AND durable_writes=true; CREATE TABLE autoid.user_access_decisions_history( user text, entitlement text, date_created timestamp, …
-
Restart Cassandra on this node.
-
Repeat these steps on the other Cassandra seed node(s).
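For example, to raise the replication factor of the autoid keyspace so it matches a three-node cluster, you could run a statement like the following and then repair so existing data is redistributed (the values are illustrative; use your own node count and credentials):
bin/cqlsh -u zoran_dba -p '<password>' -e "ALTER KEYSPACE autoid WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"
bin/nodetool repair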
Resolve Hostname
After installing Autonomous Identity, set up the hostname resolution for your deployment.
-
Configure your DNS servers to access Autonomous Identity dashboard on the target node. The following domain names must resolve to the IP address of the target node:
<target-environment>-ui.<domain-name>
-
If DNS cannot resolve target node hostname, edit it locally on the machine that you want to access Autonomous Identity using a browser.
Open a text editor and add an entry in the
/etc/hosts
(Linux/Unix) file orC:\Windows\System32\drivers\etc\hosts
(Windows) for the target node.For multi-node, use the Docker Manager node as your target.
<Docker Mgr Node Public IP Address> <target-environment>-ui.<domain-name>
For example:
<IP Address> autoid-ui.forgerock.com
-
If you set up a custom domain name and target environment, add the entries in
/etc/hosts
. For example:<IP Address> myid-ui.abc.com
For more information on customizing your domain name, see Customize Domains.
Access the Dashboard
-
Open a browser. If you set up your own URL, use it to log in.
-
Log in as a test user.
test user: bob.rodgers@forgerock.com
password: <password>
Check Apache Cassandra
-
Make sure Cassandra is running in cluster mode. For example
/opt/autoid/apache-cassandra-3.11.2/bin/nodetool status
Check MongoDB
-
Make sure MongoDB is running. For example:
mongo --tls \
--host <Host IP> \
--tlsCAFile /opt/autoid/mongo/certs/rootCA.pem \
--tlsAllowInvalidCertificates \
--tlsCertificateKeyFile /opt/autoid/mongo/certs/mongodb.pem
Check Apache Spark
-
SSH to the target node and open Spark dashboard using the bundled text-mode web browser
elinks http://localhost:8080
Spark Master status should display as ALIVE and worker(s) with State ALIVE.
Start the Analytics
If the previous installation steps all succeeded, you must now prepare your data’s entity definitions, data sources, and attribute mappings prior to running your analytics jobs. These steps are required and are critical for a successful analytics process.
For more information, refer to Set Entity Definitions.
Install a multi-node air-gapped deployment
This chapter presents instructions on deploying Autonomous Identity in a multi-node air-gapped or offline target machine with no external Internet connectivity. ForgeRock provides a deployer script that pulls a Docker image from ForgeRock’s Google Cloud Registry repository. The image contains the microservices, analytics, and backend databases needed for the system.
The air-gap installation is similar to that of the multi-node deployment, except that the image and deployer script must be stored on a portable drive and copied to the air-gapped target environment.
The deployment depends on how the network is configured. You could have a Docker cluster with multiple Spark nodes and Cassandra or MongoDB nodes. The key is to determine the IP addresses of each node.
Prerequisites
Deploy Autonomous Identity on a multi-node air-gapped target on Redhat Linux Enterprise 8 or CentOS Stream 8. The following are prerequisites:
-
Operating System. The target machine requires Red Hat Enterprise Linux 8 or CentOS Stream 8. The deployer machine can use any operating system as long as Docker is installed. For this chapter, we use Red Hat Enterprise Linux 8 as the base operating system.
If you are upgrading Autonomous Identity on a RHEL 7/CentOS 7, the upgrade to 2022.11 uses RHEL 7/CentOS 7 only. For new and clean installations, Autonomous Identity requires RHEL 8 or CentOS Stream 8 only. -
Default Shell. The default shell for the
autoid
user must bebash
. -
Subnet Requirements. We recommend deploying your multi-node machines within the same subnet. Ports must be open for the installation to succeed. Each instance should be able to communicate with the other instances.
If any hosts used for the Docker cluster (docker-managers, docker-workers) have an IP address in the range of 10.0.x.x, they will conflict with the Swarm network. As a result, the services in the cluster will not connect to the Cassandra database or Elasticsearch backend. The Docker cluster hosts must be in a subnet that provides IP addresses 10.10.1.x or higher.
-
Deployment Requirements. Autonomous Identity provides a
deployer.sh
script that downloads and installs the necessary Docker images. To download the deployment images, you must first obtain a registry key to log into the ForgeRock Google Cloud Registry. The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, refer to How To Configure Service Credentials (Push Auth, Docker) in Backstage. -
Filesystem Requirements. Autonomous Identity requires a shared filesystem accessible from the Spark main, Spark worker, analytics hosts, and application layer. The shared filesystem should be mounted at the same mount directory on all of those hosts. If the mount directory for the shared filesystem is different from the default,
/data
, update the/autoid-config/vars.yml
file to point to the correct directories:
analytics_data_dir: /data
analytics_conf_dif: /data/conf
-
Architecture Requirements. Make sure that the Spark main is on a separate node from the Spark workers.
-
Database Requirements. Decide which database you are using: Apache Cassandra or MongoDB. The configuration procedure is slightly different for each database.
-
Deployment Best-Practice. The example combines the Opensearch data and Opensearch Dashboards nodes. For best performance in production, dedicate a separate node to Opensearch, data nodes, and Opensearch Dashboards.
-
IPv4 Forwarding. Many high-security environments run their CentOS-based systems with IPv4 forwarding disabled. However, Docker Swarm does not work with a disabled IPv4 forward setting. In such environments, make sure to enable IPv4 forwarding in the file
/etc/sysctl.conf
:
net.ipv4.ip_forward=1
We recommend that your deployer team have someone with Cassandra expertise. This guide is not sufficient to troubleshoot any issues that may arise.
Set up the nodes
Set up each node as presented in Set Up the Nodes.
Make sure you have sufficient storage for your particular deployment. For more information on sizing considerations, refer to Deployment Planning Guide.
For multinode deployments, there is a known issue with RHEL 8/CentOS Stream 8 and overlay network configurations. Refer to Known Issues in 2022.11.0.
Install third-party components
Set up a machine with the required third-party software dependencies. Refer to: Install third-party components.
Set up SSH on the deployer
-
On the deployer machine, run ssh-keygen to generate an RSA key pair, and then press Enter. You can use the default filename. Enter a passphrase to protect your private key.
ssh-keygen -t rsa -C "autoid"
The public and private rsa key pair is stored in
home-directory/.ssh/id_rsa
andhome-directory/.ssh/id_rsa.pub
. -
Copy the SSH key to the
autoid-config
directory.cp ~/.ssh/id_rsa ~/autoid-config
-
Change the privileges to the file.
chmod 400 ~/autoid-config/id_rsa
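As an optional check before continuing, confirm that the deployer can reach the target node with the copied key. The target address below is a placeholder for your environment:
ssh -i ~/autoid-config/id_rsa autoid@<target-ip-address> "hostname"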
Prepare the tar file
Run the following steps on an Internet-connected host machine:
-
On the deployer machine, change to the installation directory.
cd ~/autoid-config/
-
Log in to the ForgeRock Google Cloud Registry using the registry key. The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, refer to How To Configure Service Credentials (Push Auth, Docker) in Backstage.
docker login -u _json_key -p "$(cat autoid_registry_key.json)" https://gcr.io/forgerock-autoid
The following output is displayed:
Login Succeeded
-
Run the create-template command to generate the deployer.sh script wrapper. Note that the command sets the configuration directory on the target node to /config. The --user parameter eliminates the need to use sudo while editing the hosts file and other configuration files.
docker run --user=$(id -u) -v ~/autoid-config:/config -it gcr.io/forgerock-autoid/deployer-pro:2022.11.8 create-template
-
Open the ~/autoid-config/vars.yml file, set the offline_mode property to true, and then save the file.
offline_mode: true
-
Download the Docker images. This step downloads software dependencies needed for the deployment and places them in the
autoid-packages
directory.sudo ./deployer.sh download-images
-
Create a tar file containing all of the Autonomous Identity binaries.
tar czf autoid-packages.tgz deployer.sh autoid-packages/*
-
Copy the
autoid-packages.tgz
to a portable hard drive.
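A minimal sketch of an optional integrity check for the transfer, using sha256sum (available by default on RHEL 8):
sha256sum autoid-packages.tgz > autoid-packages.tgz.sha256
# After copying both files to the air-gapped machine, verify the archive:
sha256sum -c autoid-packages.tgz.sha256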
Install Autonomous Identity air-gapped
Make sure you have the following prerequisites:
-
IP address of machines running Opensearch, MongoDB, or Cassandra.
-
The Autonomous Identity user should have permission to write to /opt/autoid on all machines.
To download the deployment images for the install, you still need your registry key to log in to the ForgeRock Google Cloud Registry.
-
Make sure you have the proper Opensearch certificates with the exact names for both pem and JKS files copied to
~/autoid-config/certs/elastic
:-
esnode.pem
-
esnode-key.pem
-
root-ca.pem
-
elastic-client-keystore.jks
-
elastic-server-truststore.jks
-
-
Make sure you have the proper MongoDB certificates with exact names for both pem and JKS files copied to
~/autoid-config/certs/mongo
:-
mongo-client-keystore.jks
-
mongo-server-truststore.jks
-
mongodb.pem
-
rootCA.pem
-
-
Make sure you have the proper Cassandra certificates with exact names for both pem and JKS files copied to ~/autoid-config/certs/cassandra:
-
Zoran-cassandra-client-cer.pem
-
Zoran-cassandra-client-keystore.jks
-
Zoran-cassandra-server-cer.pem
-
zoran-cassandra-server-keystore.jks
-
Zoran-cassandra-client-key.pem
-
Zoran-cassandra-client-truststore.jks
-
Zoran-cassandra-server-key.pem
-
Zoran-cassandra-server-truststore.jks
-
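As an optional check, confirm that every certificate and keystore listed above is in place before running the deployer:
ls -l ~/autoid-config/certs/elastic ~/autoid-config/certs/mongo ~/autoid-config/certs/cassandra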
-
Change to the directory.
cd autoid-config
-
Run the create-template command to generate the deployer.sh script wrapper and configuration files. Note that the command sets the configuration directory on the target node to /config. The --user parameter eliminates the need to use sudo while editing the hosts file and other configuration files.
docker run --user=$(id -u) -v ~/autoid-config:/config -it gcr.io/forgerock-autoid/deployer-pro:2022.11.8 create-template
-
Create a certificate directory for elastic.
mkdir -p autoid-config/certs/elastic
-
Copy the Opensearch certificates and JKS files to
autoid-config/certs/elastic
. -
Create a certificate directory for MongoDB.
mkdir -p autoid-config/certs/mongo
-
Copy the MongoDB certificates and JKS files to
autoid-config/certs/mongo
. -
Create a certificate directory for Cassandra.
mkdir -p autoid-config/certs/cassandra
-
Copy the Cassandra certificates and JKS files to
autoid-config/certs/cassandra
. -
Update the hosts file with the IP addresses of the machines. The hosts file must include the IP addresses for Docker nodes, Spark main/livy, and the MongoDB master. While the deployer pro does not install or configure the MongoDB main server, the entry is required to run the MongoDB CLI to seed the Autonomous Identity schema.
[docker-managers]

[docker-workers]

[docker:children]
docker-managers
docker-workers

[spark-master-livy]

[cassandra-seeds]
#For replica sets, add the IPs of all Cassandra nodes

[mongo_master]
# Add the MongoDB main node in the cluster deployment
# For example: 10.142.15.248 mongodb_master=True

[odfe-master-node]
# Add only the main node in the cluster deployment
-
Update the vars.yml file:
-
Set offline_mode to true.
-
Set db_driver_type to mongo or cassandra.
-
Set the elastic_host, elastic_port, and elastic_user properties.
-
Set kibana_host.
-
Set the Apache Livy install directory.
-
Ensure the elastic_user, elastic_port, and mongo_port properties are correctly configured.
-
Update the vault.yml passwords for elastic and mongo to reflect your installation.
-
Set the mongo_ldap variable to true if you want Autonomous Identity to authenticate with MongoDB configured as LDAP. The mongo_ldap variable only appears in fresh installs of 2022.11.0 and its upgrades (2022.11.1+). If you upgraded from a 2021.8.7 deployment, the variable is not available in your upgraded 2022.11.x deployment.
-
If you are using Cassandra, set the Cassandra-related parameters in the vars.yml file. Default values are:
cassandra:
  enable_ssl: "true"
  contact_points: 10.142.15.248 # comma separated values in case of replication set
  port: 9042
  username: zoran_dba
  cassandra_keystore_password: "Acc#1234"
  cassandra_truststore_password: "Acc#1234"
  ssl_client_key_file: "zoran-cassandra-client-key.pem"
  ssl_client_cert_file: "zoran-cassandra-client-cer.pem"
  ssl_ca_file: "zoran-cassandra-server-cer.pem"
  server_truststore_jks: "zoran-cassandra-server-truststore.jks"
  client_truststore_jks: "zoran-cassandra-client-truststore.jks"
  client_keystore_jks: "zoran-cassandra-client-keystore.jks"
-
-
Install Apache Livy.
-
The official release of Apache Livy does not support Apache Spark 3.3.1 or 3.3.2. ForgeRock has re-compiled and packaged Apache Livy to work with Apache Spark 3.3.1 (Hadoop 3) and Apache Spark 3.3.2 (Hadoop 3). Use the zip file located at
autoid-config/apache-livy/apache-livy-0.8.0-incubating-SNAPSHOT-bin.zip
to install Apache Livy on the Spark-Livy machine. -
For Livy configuration, refer to https://livy.apache.org/get-started/.
-
-
On the Spark-Livy machine, run the following commands to install the python package dependencies:
-
Change to the
/opt/autoid
directory:
cd /opt/autoid
-
Create a requirements.txt file with the following content:
six==1.11
certifi==2019.11.28
python-dateutil==2.8.1
jsonschema==3.2.0
cassandra-driver
numpy==1.19.5
pyarrow==0.16.0
wrapt==1.11.0
PyYAML==5.4
requests==2.31.0
urllib3==1.26.18
pymongo
pandas==1.0.5
tabulate
openpyxl
-
Install the requirements file:
pip3 install -r requirements.txt
-
-
Make sure that the
/opt/autoid
directory exists and that it is both readable and writable. -
Run the deployer script:
./deployer.sh run
-
On the Spark-Livy machine, run the following commands to install the Python egg file:
-
Install the egg file:
cd /opt/autoid/eggs
pip3.10 install autoid_analytics-2021.3-py3-none-any.whl
-
Source the
.bashrc
file:
source ~/.bashrc
-
Restart Spark and Livy.
./spark/sbin/stop-all.sh
./livy/bin/livy-server stop
./spark/sbin/start-all.sh
./livy/bin/livy-server start
-
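At this point, you can optionally confirm that the Autonomous Identity services came up on the target node. These are standard Docker Swarm commands, run on the Docker manager:
docker stack ls       # lists the deployed stacks
docker service ls     # each service should report the expected number of replicas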
Set the replication factor
Once Cassandra has been deployed, you need to set the replication factor to match the number of nodes on your system. This ensures that each record is stored on each of the nodes. If one node is lost, the remaining nodes can continue to serve content, even though the cluster itself is running with reduced redundancy.
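A minimal sketch of changing the replication factor with cqlsh; the keyspace name, data center name, and replication factor of 3 below are placeholders for your environment:
cqlsh -u <username> -e "ALTER KEYSPACE <keyspace-name> WITH replication = {'class': 'NetworkTopologyStrategy', '<datacenter-name>': 3};"
# Run a full repair afterwards so existing data is redistributed to match the new replication settings:
nodetool repair -full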
Resolve Hostname
After installing Autonomous Identity, set up the hostname resolution for your deployment.
-
Configure your DNS servers to access Autonomous Identity dashboard on the target node. The following domain names must resolve to the IP address of the target node:
<target-environment>-ui.<domain-name>
-
If DNS cannot resolve the target node hostname, edit it locally on the machine from which you want to access Autonomous Identity using a browser.
Open a text editor and add an entry in the /etc/hosts (Linux/Unix) file or C:\Windows\System32\drivers\etc\hosts (Windows) for the target node. For multi-node deployments, use the Docker Manager node as your target.
<Docker Mgr Node Public IP Address> <target-environment>-ui.<domain-name>
For example:
<IP Address> autoid-ui.forgerock.com
-
If you set up a custom domain name and target environment, add the entries in /etc/hosts. For example:
<IP Address> myid-ui.abc.com
For more information on customizing your domain name, see Customize Domains.
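As an optional check, assuming the default domain from the examples above, confirm that the name resolves and that the UI responds before moving on:
getent hosts autoid-ui.forgerock.com          # should print the target node IP address
curl -k -I https://autoid-ui.forgerock.com    # expect an HTTP response from the web front end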
Access the Dashboard
-
Open a browser. If you set up your own URL, use it for your login.
-
Log in as a test user.
test user: bob.rodgers@forgerock.com
password: <password>
Start the Analytics
If the previous installation steps all succeeded, you must now prepare your data’s entity definitions, data sources, and attribute mappings prior to running your analytics jobs. These steps are required and are critical for a successful analytics process.
For more information, refer to Set Entity Definitions.
Upgrade Autonomous Identity
Autonomous Identity provides an upgrade command to update your core software to the latest version while migrating your data.
Upgrade Considerations
-
Database Systems are the Same. If your current database is Apache Cassandra, you cannot upgrade to a MongoDB-based system. You will need to run a clean installation with the new version.
-
Host IPs should be the Same. Host IP addresses must be the same for existing components. You must update the
~/autoid-config/hosts
file by adding the IP addresses for the Elasticsearch entries. Refer to the instructions below. -
Registry Key Required. To download the deployment images for the upgrade, you still need your registry key to log into the ForgeRock Google Cloud Registry. Copy your registry key from your previous build to your new upgrade.
Make sure to test the upgrade on a staging or QA server before running it in production.
Upgrade Paths
The upgrade path depends on your current deployment version. The preferred upgrade path is to the latest patch release.
Clean installations of Autonomous Identity 2022.11.x (2022.11.0–2022.11.7) upgraded to 2022.11.8 use the new deployer pro script.
Upgrades from version 2021.8.7 to 2022.11.x and then to 2022.11.8 use the older deployer script.
The upgrade procedures differ slightly between the deployer pro and deployer versions, primarily in the creation of the certificates directories (deployer versions) and in using the proper image name during the create-template command (deployer pro and deployer versions).
The following chart summarizes these upgrade paths:
Version | Upgrade To | Refer to
---|---|---
2022.11.x (deployer-pro) | 2022.11.8 (deployer-pro) | Upgrade from Autonomous Identity 2022.11.x to 2022.11.8 using deployer pro
2022.11.x Air-Gapped (deployer-pro) | 2022.11.8 Air-Gapped (deployer-pro) | Upgrade from Autonomous Identity 2022.11.x to 2022.11.8 Air-Gapped using deployer pro
2022.11.0 (deployer) | 2022.11.8 (deployer) | Upgrade from Autonomous Identity 2022.11.x to 2022.11.8 using the deployer
2022.11.0 Air-Gapped (deployer) | 2022.11.8 Air-Gapped (deployer) | Upgrade from Autonomous Identity 2022.11.x to 2022.11.8 Air-Gapped using the deployer
Upgrade from Autonomous Identity 2022.11.x to 2022.11.8 using deployer pro
The following instructions are for upgrading from Autonomous Identity version 2022.11.0–2022.11.7 to the latest version 2022.11.8 in non air-gapped deployments using the deployer pro.
The following steps assume you ran a fresh install of Autonomous Identity 2022.11.x, which uses deployer pro. Make sure you have upgraded your third-party software packages to the supported versions prior to upgrade.
-
Start on the target server, and back up your
/data/conf
configuration file. The upgrade overwrites this file when updating, so you must restore this file after running the upgrade.
sudo mv /data/conf ~/backup-data-conf-2022.11.x
-
Next, if you changed any analytic settings on your deployment, make note of your configuration, so that you can replicate those settings on the upgraded server. Log in to Autonomous Identity, navigate to Administration > Analytic Settings, and record your settings.
-
On the deployer machine, back up the 2022.11.x
~/autoid-config
directory or move it to another location.
mv ~/autoid-config ~/backup-2022.11.x
-
Create a new
~/autoid-config
directory.
mkdir ~/autoid-config
-
Copy your autoid_registry_key.json, ansible.cfg, and vault.yml files from your backup directory to ~/autoid-config. If your vault.yml file is encrypted, copy the .autoid_vault_password file to ~/autoid-config.
-
Set up your certificate directories for Opensearch, MongoDB, or Cassandra for the deployer:
-
Create a certificate directory for Opensearch:
mkdir -p autoid-config/certs/elastic
-
Copy the Opensearch certificates and JKS files to
autoid-config/certs/elastic
. -
Create a certificate directory for MongoDB (if you use MongoDB):
mkdir -p autoid-config/certs/mongo
-
Copy the MongoDB certificates and JKS files to
autoid-config/certs/mongo
. -
Create a certificate directory for Cassandra (if you use Cassandra):
mkdir -p autoid-config/certs/cassandra
-
Copy the Cassandra certificates and JKS files to
autoid-config/certs/cassandra
.
-
-
Copy your original SSH key into the new directory.
cp ~/.ssh/id_rsa ~/autoid-config
-
Change the permission on the SSH key.
chmod 400 ~/autoid-config/id_rsa
-
Check if you can successfully SSH to the target server.
ssh autoid@<Target-IP-Address>
Last login: Mon Dec 04 12:20:18 2023
-
On the deployer node, change to the
~/autoid-config
directory.
cd ~/autoid-config
-
Log in to the ForgeRock Google Cloud Registry using the registry key. The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, see How To Configure Service Credentials (Push Auth, Docker) in Backstage.
docker login -u _json_key -p "$(cat autoid_registry_key.json)" https://gcr.io/forgerock-autoid
You should see:
Login Succeeded
-
Run the create-template command to generate the deployer.sh script wrapper and configuration files. Note that the command sets the configuration directory on the target node to /config. The --user parameter eliminates the need to use sudo while editing the hosts file and other configuration files.
docker run --user=$(id -u) -v ~/autoid-config:/config \
-it gcr.io/forgerock-autoid/deployer-pro:2022.11.8 create-template
-
Configure your upgraded system by editing the ~/autoid-config/vars.yml, ~/autoid-config/hosts, and ~/autoid-config/vault.yml files on the deployer machine. You must keep your configuration settings consistent from one system to another.
-
Stop the stack.
If you are upgrading a multi-node deployment, run this command on the Docker Manager node.
docker stack rm configuration-service consul-server consul-client nginx jas swagger-ui ui api notebook
You should see:
Removing service configuration-service_configuration-service
Removing service consul-server_consul-server
Removing service consul-client_consul-client
Removing service nginx_nginx
Removing service jas_jasnode
Removing service swagger-ui_swagger-ui
Removing service ui_zoran-ui
Removing service api_zoran-api
Nothing found in stack: notebook
-
Prune old Docker images before running the upgrade command:
-
Get all of the Docker images:
docker images
-
Identify the images that are Autonomous Identity-related. They start with the URL of the ForgeRock Google Cloud Registry (ForgeRock GCR). For example:
REPOSITORY                                    TAG         IMAGE ID       CREATED       SIZE
<ForgeRock GCR>/ci/develop/deployer           650879186   075481cea4c2   2 hours ago   823MB
<ForgeRock GCR>/ci/develop/offline-packages   650879186   e1a90f389ccc   2 hours ago   3.03GB
<ForgeRock GCR>/ci/develop/zoran-ui           650879186   bd303a28b5df   2 hours ago   35.3MB
<ForgeRock GCR>/ci/develop/zoran-api          650879186   114d1aca5b0a   2 hours ago   421MB
<ForgeRock GCR>/ci/develop/nginx              650879186   43b410661269   2 hours ago   16.7MB
<ForgeRock GCR>/ci/develop/jas                650879186   2821e5c365d8   2 hours ago   491MB
-
Remove the old images using the
docker rmi
command. For example:docker rmi -f <image ID> Example: docker rmi -f 075481cea4c2
-
Repeat the previous command to remove all of the Autonomous Identity-related Docker images.
-
-
For multinode deployments, run the following on the Docker Worker node:
docker swarm leave
-
Enter
exit
to end your SSH session. -
From the deployer machine, restart Docker:
sudo systemctl restart docker
-
Download the images. This step downloads software dependencies needed for the deployment and places them in the
autoid-packages
directory. Make sure you are in the ~/autoid-config directory.
./deployer.sh download-images
-
On the Spark-Livy machine, run the following commands to install the python package dependencies:
-
Change to the
/opt/autoid
directory:
cd /opt/autoid
-
Create a requirements.txt file with the following content:
six==1.11
certifi==2019.11.28
python-dateutil==2.8.1
jsonschema==3.2.0
cassandra-driver
numpy==1.22.0
pyarrow==6.0.1
wrapt==1.11.0
PyYAML==6.0
requests==2.31.0
urllib3==1.26.18
pymongo
pandas==1.3.5
tabulate
openpyxl
wheel
cython
-
Install the requirements file:
pip3 install -r requirements.txt
-
-
Run the upgrade:
./deployer.sh upgrade
-
On the Spark-Livy machine, run the following commands to install the Python wheel distribution:
-
Install the wheel file:
cd /opt/autoid/eggs
pip3.10 install autoid_analytics-2021.3-py3-none-any.whl
-
Source the
.bashrc
file:
source ~/.bashrc
-
Restart Spark and Livy.
./spark/sbin/stop-all.sh
./livy/bin/livy-server stop
./spark/sbin/start-all.sh
./livy/bin/livy-server start
-
-
SSH to the target server.
-
On the target server, restore your
/data/conf
configuration data file from your previous installation.
sudo mv ~/backup-data-conf-2022.11.x /data/conf
-
Re-apply your analytics settings to your upgraded server if you made changes on your previous Autonomous Identity machine. Log in to Autonomous Identity, navigate to Administration > Analytics Settings, and edit your changes.
-
Log out, and then log back in to Autonomous Identity.
You have successfully upgraded your Autonomous Identity server to 2022.11.8.
Upgrade from Autonomous Identity 2022.11.x to 2022.11.8 Air-Gapped using deployer pro
The following instructions are for upgrading from Autonomous Identity version 2022.11.0–2022.11.7 on air-gapped deployments using the deployer pro.
The following steps assume you ran a fresh install of Autonomous Identity 2022.11.x, which uses deployer pro. Make sure you have upgraded your third-party software packages to the supported versions prior to upgrade.
-
Start on the target server, and back up your
/data/conf
configuration file. The upgrade overwrites this file when updating, so you must restore this file after running the upgrade.
sudo mv /data/conf ~/backup-data-conf-2022.11.x
-
Next, if you changed any analytic settings on your deployment, make note of your configuration, so that you can replicate those settings on the upgraded server. Log in to Autonomous Identity, navigate to Administration > Analytic Settings, and record your settings.
-
On the deployer machine, back up the 2022.11.x
~/autoid-config
directory or move it to another location.
mv ~/autoid-config ~/backup-2022.11.x
-
Create a new
~/autoid-config
directory.
mkdir ~/autoid-config
-
Copy your autoid_registry_key.json, ansible.cfg, and vault.yml files from your backup directory to ~/autoid-config. If your vault.yml file is encrypted, copy the .autoid_vault_password file to ~/autoid-config.
-
Set up your certificate directories for Opensearch, MongoDB, or Cassandra for the deployer:
-
Create a certificate directory for Opensearch:
mkdir -p autoid-config/certs/elastic
-
Copy the Opensearch certificates and JKS files to
autoid-config/certs/elastic
. -
Create a certificate directory for MongoDB (if you use MongoDB):
mkdir -p autoid-config/certs/mongo
-
Copy the MongoDB certificates and JKS files to
autoid-config/certs/mongo
. -
Create a certificate directory for Cassandra (if you use Cassandra):
mkdir -p autoid-config/certs/cassandra
-
Copy the Cassandra certificates and JKS files to
autoid-config/certs/cassandra
.
-
-
Copy your original SSH key into the new directory.
cp ~/.ssh/id_rsa ~/autoid-config
-
Change the permission on the SSH key.
chmod 400 ~/autoid-config/id_rsa
-
On the deployer node, change to the
~/autoid-config
directory.
cd ~/autoid-config
-
Log in to the ForgeRock Google Cloud Registry using the registry key. The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, see How To Configure Service Credentials (Push Auth, Docker) in Backstage.
docker login -u _json_key -p "$(cat autoid_registry_key.json)" https://gcr.io/forgerock-autoid
You should see:
Login Succeeded
-
Run the create-template command to generate the deployer.sh script wrapper and configuration files. Note that the command sets the configuration directory on the target node to /config. The --user parameter eliminates the need to use sudo while editing the hosts file and other configuration files.
docker run --user=$(id -u) -v ~/autoid-config:/config \
-it gcr.io/forgerock-autoid/deployer-pro:2022.11.8 create-template
-
Configure your upgraded system by editing the ~/autoid-config/vars.yml, ~/autoid-config/hosts, and ~/autoid-config/vault.yml files on the deployer machine. You must keep your configuration settings consistent from one system to another.
-
Download the images. This step downloads software dependencies needed for the deployment and places them in the
autoid-packages
directory. Make sure you are in the ~/autoid-config directory.
./deployer.sh download-images
-
On the Spark-Livy machine, run the following commands to install the python package dependencies:
-
Change to the
/opt/autoid
directory:
cd /opt/autoid
-
Create a requirements.txt file with the following content:
six==1.11
certifi==2019.11.28
python-dateutil==2.8.1
jsonschema==3.2.0
cassandra-driver
numpy==1.22.0
pyarrow==6.0.1
wrapt==1.11.0
PyYAML==6.0
requests==2.31.0
urllib3==1.26.18
pymongo
pandas==1.3.5
tabulate
openpyxl
wheel
cython
-
Install the requirements file:
pip3 install -r requirements.txt
-
-
Stop the stack.
If you are upgrading a multi-node deployment, run this command on the Docker Manager node.
docker stack rm configuration-service consul-server consul-client nginx jas swagger-ui ui api notebook
You should see:
Removing service configuration-service_configuration-service
Removing service consul-server_consul-server
Removing service consul-client_consul-client
Removing service nginx_nginx
Removing service jas_jasnode
Removing service swagger-ui_swagger-ui
Removing service ui_zoran-ui
Removing service api_zoran-api
Nothing found in stack: notebook
-
Prune old Docker images before running the upgrade command:
-
Get all of the Docker images:
docker images
-
Identify the images that are Autonomous Identity-related. They start with the URL of the ForgeRock Google Cloud Registry (ForgeRock GCR). For example:
REPOSITORY                                    TAG         IMAGE ID       CREATED       SIZE
<ForgeRock GCR>/ci/develop/deployer           650879186   075481cea4c2   2 hours ago   823MB
<ForgeRock GCR>/ci/develop/offline-packages   650879186   e1a90f389ccc   2 hours ago   3.03GB
<ForgeRock GCR>/ci/develop/zoran-ui           650879186   bd303a28b5df   2 hours ago   35.3MB
<ForgeRock GCR>/ci/develop/zoran-api          650879186   114d1aca5b0a   2 hours ago   421MB
<ForgeRock GCR>/ci/develop/nginx              650879186   43b410661269   2 hours ago   16.7MB
<ForgeRock GCR>/ci/develop/jas                650879186   2821e5c365d8   2 hours ago   491MB
-
Remove the old images using the
docker rmi
command. For example:docker rmi -f <image ID> Example: docker rmi -f 075481cea4c2
-
-
For multinode deployments, run the following on the Docker Worker node:
docker swarm leave
-
From the deployer, restart Docker:
sudo systemctl restart docker
-
Create a tar file containing all of the Autonomous Identity binaries.
tar czf autoid-packages.tgz deployer.sh autoid-packages/*
-
Copy the autoid-packages.tgz, deployer.sh, and SSH key (id_rsa) to a portable hard drive.
On the air-gapped target machine, backup your previous
~/autoid-config
directory, and then create a new ~/autoid-config directory.
mkdir ~/autoid-config
-
Copy the autoid-packages.tgz tar file, deployer.sh, and SSH key from the portable storage device to the ~/autoid-config folder.
Unpack the tar file.
tar xf autoid-packages.tgz -C ~/autoid-config
-
Set up your certificate directories for Opensearch, MongoDB, or Cassandra for the deployer:
-
Create a certificate directory for Opensearch:
mkdir -p autoid-config/certs/elastic
-
Copy the Opensearch certificates and JKS files to
autoid-config/certs/elastic
. -
Create a certificate directory for MongoDB (if you use MongoDB):
mkdir -p autoid-config/certs/mongo
-
Copy the MongoDB certificates and JKS files to
autoid-config/certs/mongo
. -
Create a certificate directory for Cassandra (if you use Cassandra):
mkdir -p autoid-config/certs/cassandra
-
Copy the Cassandra certificates and JKS files to
autoid-config/certs/cassandra
.
-
-
Copy the SSH key to the
~/autoid-config
directory. -
Change the privileges to the file.
chmod 400 ~/autoid-config/id_rsa
-
Change to the configuration directory.
cd ~/autoid-config
-
Import the deployer image.
./deployer.sh import-deployer
You should see:
…
db631c8b06ee: Loading layer [=============================================⇒] 2.56kB/2.56kB
2d62082e3327: Loading layer [=============================================⇒] 753.2kB/753.2kB
Loaded image: <ForgeRock Google Cloud Registry URL>/deployer:2022.11.8
-
Create the configuration template using the create-template command. This command creates the configuration files: ansible.cfg, vars.yml, vault.yml, and hosts.
./deployer.sh create-template
You should see:
Config template is copied to host machine directory mapped to /config
-
Configure your upgraded system by editing the ~/autoid-config/vars.yml, ~/autoid-config/hosts, and ~/autoid-config/vault.yml files on the deployer machine. You must keep your configuration settings consistent from one system to another.
-
Run the upgrade:
./deployer.sh upgrade
-
On the Spark-Livy machine, run the following commands to install the Python wheel distribution:
-
Install the wheel file:
cd /opt/autoid/eggs
pip3.10 install autoid_analytics-2021.3-py3-none-any.whl
-
Source the
.bashrc
file:
source ~/.bashrc
-
Restart Spark and Livy.
./spark/sbin/stop-all.sh
./livy/bin/livy-server stop
./spark/sbin/start-all.sh
./livy/bin/livy-server start
-
-
SSH to the target server.
-
On the target server, restore your
/data/conf
configuration data file from your previous installation.
sudo mv ~/backup-data-conf-2022.11.x /data/conf
-
Re-apply your analytics settings to your upgraded server if you made changes on your previous Autonomous Identity machine. Log in to Autonomous Identity, navigate to Administration > Analytics Settings, and edit your changes.
-
Log out, and then log back in to Autonomous Identity.
You have successfully upgraded your Autonomous Identity server to 2022.11.8.
Upgrade from Autonomous Identity 2022.11.x to 2022.11.8 using the deployer
The following instructions are for upgrading from Autonomous Identity version 2022.11.0–2022.11.7 to the latest version 2022.11.8 in non air-gapped deployments using the deployer.
If you upgraded from any Autonomous Identity version 2021.8.7 or earlier to version 2022.11.x, then you are using the deployer.
-
Start on the target server, and back up your
/data/conf
configuration file. The upgrade overwrites this file when updating, so you must restore this file after running the upgrade.
sudo mv /data/conf ~/backup-data-conf-2022.11.x
-
Next, if you changed any analytic settings on your deployment, make note of your configuration, so that you can replicate those settings on the upgraded server. Log in to Autonomous Identity, navigate to Administration > Analytic Settings, and record your settings.
-
On the deployer machine, back up the 2022.11.x
~/autoid-config
directory or move it to another location.
mv ~/autoid-config ~/backup-2022.11.x
-
Create a new
~/autoid-config
directory.
mkdir ~/autoid-config
-
Copy your
autoid_registry_key.json
from your backup directory to ~/autoid-config.
-
Copy your original SSH key into the new directory.
cp ~/.ssh/id_rsa ~/autoid-config
-
Change the permission on the SSH key.
chmod 400 ~/autoid-config/id_rsa
-
Check if you can successfully SSH to the target server.
ssh autoid@<Target-IP-Address>
Last login: Mon Dec 04 12:20:18 2023
-
On the deployer node, change to the
~/autoid-config
directory.
cd ~/autoid-config
-
Log in to the ForgeRock Google Cloud Registry using the registry key. The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, see How To Configure Service Credentials (Push Auth, Docker) in Backstage.
docker login -u _json_key -p "$(cat autoid_registry_key.json)" https://gcr.io/forgerock-autoid
You should see:
Login Succeeded
-
Run the create-template command to generate the deployer.sh script wrapper and configuration files. Note that the command sets the configuration directory on the target node to /config. The --user parameter eliminates the need to use sudo while editing the hosts file and other configuration files.
docker run --user=$(id -u) -v ~/autoid-config:/config \
-it gcr.io/forgerock-autoid/deployer:2022.11.8 create-template
-
Configure your upgraded system by editing the ~/autoid-config/vars.yml, ~/autoid-config/hosts, and ~/autoid-config/vault.yml files on the deployer machine. You must keep your configuration settings consistent from one system to another.
-
Stop the stack.
If you are upgrading a multi-node deployment, run this command on the Docker Manager node.
docker stack rm configuration-service consul-server consul-client nginx jas swagger-ui ui api notebook
You should see:
Removing service configuration-service_configuration-service
Removing service consul-server_consul-server
Removing service consul-client_consul-client
Removing service nginx_nginx
Removing service jas_jasnode
Removing service swagger-ui_swagger-ui
Removing service ui_zoran-ui
Removing service api_zoran-api
Nothing found in stack: notebook
-
Prune old Docker images before running the upgrade command:
-
Get all of the Docker images:
docker images
-
Identify the images that are Autonomous Identity-related. They start with the URL of the ForgeRock Google cloud registry (ForgeRock GCR). For example:
REPOSITORY                                    TAG         IMAGE ID       CREATED       SIZE
<ForgeRock GCR>/ci/develop/deployer           650879186   075481cea4c2   2 hours ago   823MB
<ForgeRock GCR>/ci/develop/offline-packages   650879186   e1a90f389ccc   2 hours ago   3.03GB
<ForgeRock GCR>/ci/develop/zoran-ui           650879186   bd303a28b5df   2 hours ago   35.3MB
<ForgeRock GCR>/ci/develop/zoran-api          650879186   114d1aca5b0a   2 hours ago   421MB
<ForgeRock GCR>/ci/develop/nginx              650879186   43b410661269   2 hours ago   16.7MB
<ForgeRock GCR>/ci/develop/jas                650879186   2821e5c365d8   2 hours ago   491MB
-
Remove the old images using the
docker rmi
command. For example:docker rmi -f <image ID> Example: docker rmi -f 075481cea4c2
-
Repeat the previous command to remove all of the Autonomous Identity-related Docker images.
-
-
For multinode deployments, run the following on the Docker Worker node:
docker swarm leave
-
Enter
exit
to end your SSH session. -
From the deployer machine, restart Docker:
sudo systemctl restart docker
-
Download the images. This step downloads software dependencies needed for the deployment and places them in the
autoid-packages
directory. Make sure you are in the ~/autoid-config directory.
./deployer.sh download-images
-
Run the upgrade:
./deployer.sh upgrade
-
SSH to the target server.
-
On the target server, restore your
/data/conf
configuration data file from your previous installation.
sudo mv ~/backup-data-conf-2022.11.x /data/conf
-
Re-apply your analytics settings to your upgraded server if you made changes on your previous Autonomous Identity machine. Log in to Autonomous Identity, navigate to Administration > Analytics Settings, and edit your changes.
-
Log out, and then log back in to Autonomous Identity.
You have successfully upgraded your Autonomous Identity server to 2022.11.8.
Upgrade from Autonomous Identity 2022.11.x to 2022.11.8 Air-Gapped using the deployer
The following instructions are for upgrading from Autonomous Identity version 2022.11.0–2022.11.7 to the latest version 2022.11.8 on air-gapped deployments using the deployer.
-
Start on the target server, and back up your
/data/conf
configuration file. The upgrade overwrites this file when updating, so you must restore this file after running the upgrade.
sudo mv /data/conf ~/backup-data-conf-2022.11.x
-
Next, if you changed any analytic settings on your deployment, make note of your configuration, so that you can replicate those settings on the upgraded server. Log in to Autonomous Identity, navigate to Administration > Analytic Settings, and record your settings.
-
On the deployer machine, back up the 2022.11.x
~/autoid-config
directory or move it to another location.
mv ~/autoid-config ~/backup-2022.11.x
-
Create a new
~/autoid-config
directory.
mkdir ~/autoid-config
-
Copy your
autoid_registry_key.json
from your backup directory to ~/autoid-config.
-
Copy your original SSH key into the new directory.
cp ~/.ssh/id_rsa ~/autoid-config
-
Change the permission on the SSH key.
chmod 400 ~/autoid-config/id_rsa
-
On the deployer node, change to the
~/autoid-config
directory.
cd ~/autoid-config
-
Log in to the ForgeRock Google Cloud Registry using the registry key. The registry key is only available to ForgeRock Autonomous Identity customers. For specific instructions on obtaining the registry key, see How To Configure Service Credentials (Push Auth, Docker) in Backstage.
docker login -u _json_key -p "$(cat autoid_registry_key.json)" https://gcr.io/forgerock-autoid
You should see:
Login Succeeded
-
Run the create-template command to generate the deployer.sh script wrapper and configuration files. Note that the command sets the configuration directory on the target node to /config. The --user parameter eliminates the need to use sudo while editing the hosts file and other configuration files.
docker run --user=$(id -u) -v ~/autoid-config:/config \
-it gcr.io/forgerock-autoid/deployer:2022.11.8 create-template
-
Configure your upgraded system by editing the ~/autoid-config/vars.yml, ~/autoid-config/hosts, and ~/autoid-config/vault.yml files on the deployer machine. You must keep your configuration settings consistent from one system to another.
-
Download the images. This step downloads software dependencies needed for the deployment and places them in the
autoid-packages
directory. Make sure you are in the ~/autoid-config directory.
./deployer.sh download-images
-
Stop the stack.
If you are upgrading a multi-node deployment, run this command on the Docker Manager node.
docker stack rm configuration-service consul-server consul-client nginx jas swagger-ui ui api notebook
You should see:
Removing service configuration-service_configuration-service
Removing service consul-server_consul-server
Removing service consul-client_consul-client
Removing service nginx_nginx
Removing service jas_jasnode
Removing service swagger-ui_swagger-ui
Removing service ui_zoran-ui
Removing service api_zoran-api
Nothing found in stack: notebook
-
Prune old Docker images before running the upgrade command:
-
Get all of the Docker images:
docker images
-
Identify the images that are Autonomous Identity-related. They start with the URL of the ForgeRock Google Cloud Registry (ForgeRock GCR). For example:
REPOSITORY                                    TAG         IMAGE ID       CREATED       SIZE
<ForgeRock GCR>/ci/develop/deployer           650879186   075481cea4c2   2 hours ago   823MB
<ForgeRock GCR>/ci/develop/offline-packages   650879186   e1a90f389ccc   2 hours ago   3.03GB
<ForgeRock GCR>/ci/develop/zoran-ui           650879186   bd303a28b5df   2 hours ago   35.3MB
<ForgeRock GCR>/ci/develop/zoran-api          650879186   114d1aca5b0a   2 hours ago   421MB
<ForgeRock GCR>/ci/develop/nginx              650879186   43b410661269   2 hours ago   16.7MB
<ForgeRock GCR>/ci/develop/jas                650879186   2821e5c365d8   2 hours ago   491MB
-
Remove the old images using the
docker rmi
command. For example:docker rmi -f <image ID> Example: docker rmi -f 075481cea4c2
-
-
For multinode deployments, run the following on the Docker Worker node:
docker swarm leave
-
From the deployer, restart Docker:
sudo systemctl restart docker
-
Create a tar file containing all of the Autonomous Identity binaries.
tar czf autoid-packages.tgz deployer.sh autoid-packages/*
-
Copy the autoid-packages.tgz, deployer.sh, and SSH key (id_rsa) to a portable hard drive.
On the air-gapped target machine, backup your previous
~/autoid-config
directory, and then create a new ~/autoid-config directory.
mkdir ~/autoid-config
-
Copy the autoid-packages.tgz tar file, deployer.sh, and SSH key from the portable storage device to the ~/autoid-config folder.
Unpack the tar file.
tar xf autoid-packages.tgz -C ~/autoid-config
-
Copy the SSH key to the
~/autoid-config
directory. -
Change the privileges to the file.
chmod 400 ~/autoid-config/id_rsa
-
Change to the configuration directory.
cd ~/autoid-config
-
Import the deployer image.
./deployer.sh import-deployer
You should see:
…
db631c8b06ee: Loading layer [=============================================⇒] 2.56kB/2.56kB
2d62082e3327: Loading layer [=============================================⇒] 753.2kB/753.2kB
Loaded image: gcr.io/forgerock-autoid/deployer:2022.11.8
-
Create the configuration template using the create-template command. This command creates the configuration files: ansible.cfg, vars.yml, vault.yml, and hosts.
./deployer.sh create-template
You should see:
Config template is copied to host machine directory mapped to /config
-
Configure your upgraded system by editing the ~/autoid-config/vars.yml, ~/autoid-config/hosts, and ~/autoid-config/vault.yml files on the deployer machine. You must keep your configuration settings consistent from one system to another.
-
Run the upgrade:
./deployer.sh upgrade
-
On the target server, restore your
/data/conf
configuration data file from your previous installation.
sudo mv ~/backup-data-conf-2022.11.x /data/conf
-
Re-apply your analytics settings to your upgraded server if you made changes on your previous Autonomous Identity machine. Log in to Autonomous Identity, navigate to Administration > Analytics Settings, and edit your changes.
-
Log out, and then log back in to Autonomous Identity.
You have successfully upgraded your Autonomous Identity server to 2022.11.8.
Appendix A: Autonomous Identity ports
The Autonomous Identity deployment uses the following ports. The Docker deployer machine opens the ports in the firewall on the target node. If you are using cloud virtual machines, you need to open these ports on the virtual cloud network.
Autonomous Identity uses the following ports:
Port | Protocol | Machine | Source | Description
---|---|---|---|---
2377 | TCP | Docker managers | Docker managers and nodes | Communication between the nodes of a Docker swarm cluster
7946 | TCP/UDP | Docker managers and workers | Docker managers and workers | Communication among nodes for container network discovery
4789 | UDP | Docker managers and workers | Docker managers and workers | Overlay network traffic
7001 | TCP | Cassandra | Cassandra nodes | Internode communication
9042 | TCP | Cassandra | Cassandra nodes, Docker managers and nodes | CQL native transport
27017 | TCP | MongoDB | MongoDB nodes, Docker managers and nodes | Default ports for mongod and mongos instances
9200 | TCP | Open Distro for Elasticsearch | Docker managers and nodes | Elasticsearch REST API endpoint
7077 | TCP | Spark master | Spark workers | Spark master internode communication port
40040-40045 | TCP | Spark master | Spark workers | Spark driver ports for Spark workers to call back
443 | TCP | Docker managers | User’s browsers/API clients | Port to access the dashboard and API
10081 | TCP | Docker managers | User’s browsers/API clients | Port for the JAS service
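As an optional spot check from another node in the subnet, you can test that a port is reachable. The host addresses are placeholders, and nc is provided by the nmap-ncat package on RHEL 8:
nc -zv <docker-manager-ip> 443
nc -zv <docker-manager-ip> 2377
nc -zv <cassandra-node-ip> 9042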
Appendix B: vars.yml
Autonomous Identity has a configuration file where you can set the analytics data and configuration directories,
private IP address mapping, LDAP/SSO options, and session duration during installation.
The file is created when running the create-template
command during the installation and is located in the
/autoid-config
directory.
The file is as follows:
ai_product: auto-id # Product name
domain_name: forgerock.com # Default domain name
target_environment: autoid # Default namespace
analytics_data_dir: /data # Default data directory
analytics_conf_dir: /data/conf # Default config directory for analytics

# set to true for air-gap installation
offline_mode: false

# choose the DB Type : cassandra | mongo
db_driver_type: cassandra

# Needed only if private and public IP address of
# target nodes are different. If cloud VMs the private
# is different than the IP address (public ip) used for
# SSH. Private IP addresses are used by various services
# to reach other services in the cluster
# Example:
# private_ip_address_mapping:
#   35.223.33.21: "10.128.0.5"
#   108.59.83.132: "10.128.0.37"
#   ...
private_ip_address_mapping: # private and external IP mapping
  #private_ip_address_mapping-ip-addesses#

api:
  authentication_option: "Local" # Values: "Local", "SSO", "LocalAndSSO"
  access_log_enabled: true # Enable access logs
  jwt_expiry: "30 minutes" # Default session duration
  jwt_secret_file: "{{ install_path }}/jwt/secret.txt" # Location of JWT secret file
  jwt_audience: "http://my.service"
  oidc_jwks_url: "na"
  local_auth_mode_password: Welcome123
  session_secret: "q0civ3L33W"
  # set the following API parameters when
  # SSO and LdapAndSSO properties
  # authentication_option is SSO or LdapAndSSO
  # oidc_issuer:
  # oidc_auth_url
  # oidc_token_url:
  # oidc_user_info_url:
  # oidc_callback_url:
  # oidc_jwks_url:
  # oidc_client_scope:
  # oidc_groups_attribute:
  # oidc_uid_attribute:
  # oidc_client_id:
  # oidc_client_secret:
  # admin_object_id:
  # entitlement_owner_object_id:
  # executive_object_id:
  # supervisor_object_id:
  # user_object_id:
  # application_owner_object_id:
  # role_owner_object_id:
  # role_engineer_object_id:
  # oidc_end_session_endpoint:
  # oidc_logout_redirect_url:

# mongo config starts
# uncomment below for mongo with replication enabled. Not needed for
# single node deployments
# mongodb_replication_replset: mongors

# custom key
# password for inter-process authentication
#
# please regenerate this file on production environment with command 'openssl rand -base64 741'
#mongodb_keyfile_content: |
#  8pYcxvCqoe89kcp33KuTtKVf5MoHGEFjTnudrq5BosvWRoIxLowmdjrmUpVfAivh
#  CHjqM6w0zVBytAxH1lW+7teMYe6eDn2S/O/1YlRRiW57bWU3zjliW3VdguJar5i9
#  Z+1a8lI+0S9pWynbv9+Ao0aXFjSJYVxAm/w7DJbVRGcPhsPmExiSBDw8szfQ8PAU
#  2hwRl7nqPZZMMR+uQThg/zV9rOzHJmkqZtsO4UJSilG9euLCYrzW2hdoPuCrEDhu
#  Vsi5+nwAgYR9dP2oWkmGN1dwRe0ixSIM2UzFgpaXZaMOG6VztmFrlVXh8oFDRGM0
#  cGrFHcnGF7oUGfWnI2Cekngk64dHA2qD7WxXPbQ/svn9EfTY5aPw5lXzKA87Ds8p
#  KHVFUYvmA6wVsxb/riGLwc+XZlb6M9gqHn1XSpsnYRjF6UzfRcRR2WyCxLZELaqu
#  iKxLKB5FYqMBH7Sqg3qBCtE53vZ7T1nefq5RFzmykviYP63Uhu/A2EQatrMnaFPl
#  TTG5CaPjob45CBSyMrheYRWKqxdWN93BTgiTW7p0U6RB0/OCUbsVX6IG3I9N8Uqt
#  l8Kc+7aOmtUqFkwo8w30prIOjStMrokxNsuK9KTUiPu2cj7gwYQ574vV3hQvQPAr
#  hhb9ohKr0zoPQt31iTj0FDkJzPepeuzqeq8F51HB56RZKpXdRTfY8G6OaOT68cV5
#  vP1O6T/okFKrl41FQ3CyYN5eRHyRTK99zTytrjoP2EbtIZ18z+bg/angRHYNzbgk
#  lc3jpiGzs1ZWHD0nxOmHCMhU4usEcFbV6FlOxzlwrsEhHkeiununlCsNHatiDgzp
#  ZWLnP/mXKV992/Jhu0Z577DHlh+3JIYx0PceB9yzACJ8MNARHF7QpBkhtuGMGZpF
#  T+c73exupZFxItXs1Bnhe3djgE3MKKyYvxNUIbcTJoe7nhVMrwO/7lBSpVLvC4p3
#  wR700U0LDaGGQpslGtiE56SemgoP
# mongo config ends

elastic_heap_size: 1g # sets the heap size (1g|2g|3g) for the Elastic Servers

jas:
  auth_enabled: true
  auth_type: 'jwt'
  signiture_key_id: 'service1-hmac'
  signiture_algorithm: 'hmac-sha256'
  max_memory: 4096M
  mapping_entity_type: /common/mappings
  datasource_entity_type: /common/datasources

mongo_port: 27017 # Port where Mongo is running
mongo_ldap: false # Specify if Mongo is authenticated against an LDAP

elastic_host: 10.128.0.28 # IP Address of master node where Opensearch is running
elastic_port: 9200 # Port of master node where Opensearch is running
elastic_user: elasticadmin # Opensearch username
kibana_host: 10.128.0.28 # IP Address of node where Opensearch Dashboard is running

apache_livy:
  dest_dir: /home/ansible/livy # Folder where livy is installed. AutoID copies analytics files to this directory.

cassandra: # Cassandra Nodes details.
  enable_ssl: "true" # Set if SSL is enabled.
  contact_points: # Comma seperated list of ip addresses - first ip is master#
  port: 9042 # Port where cassandra node is running
  username: zoranuser # User created for AutoID to seed Schema
  cassandra_keystore_password: "Acc#1234" # Keystore Password
  cassandra_truststore_password: "Acc#1234" # Truststore Password
  ssl_client_key_file: "zoran-cassandra-client-key.pem" # Cassandra Client Key File
  ssl_client_cert_file: "zoran-cassandra-client-cer.pem" # Cassandra Client Cert File
  ssl_ca_file: "zoran-cassandra-server-cer.pem" # Cassandra Server Root CA File
  server_truststore_jks: "zoran-cassandra-server-truststore.jks" # Server Truststore file for services to connect
  client_truststore_jks: "zoran-cassandra-client-truststore.jks" # Client Truststore file for services to connect
  client_keystore_jks: "zoran-cassandra-client-keystore.jks" # Client Keystore file for services to use
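After editing vars.yml, a quick sanity check can catch indentation mistakes before you run the deployer. This is a minimal sketch, assuming Python 3 with the PyYAML module is available on the deployer machine and that the file lives in ~/autoid-config:
python3 -c "import yaml; yaml.safe_load(open('/root/autoid-config/vars.yml'.replace('/root', __import__('os').path.expanduser('~')))); print('vars.yml parses cleanly')"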
Not needed for # single node deployments # mongodb_replication_replset: mongors # custom key # password for inter-process authentication # # please regenerate this file on production environment with command 'openssl rand -base64 741' #mongodb_keyfile_content: | # 8pYcxvCqoe89kcp33KuTtKVf5MoHGEFjTnudrq5BosvWRoIxLowmdjrmUpVfAivh # CHjqM6w0zVBytAxH1lW+7teMYe6eDn2S/O/1YlRRiW57bWU3zjliW3VdguJar5i9 # Z+1a8lI+0S9pWynbv9+Ao0aXFjSJYVxAm/w7DJbVRGcPhsPmExiSBDw8szfQ8PAU # 2hwRl7nqPZZMMR+uQThg/zV9rOzHJmkqZtsO4UJSilG9euLCYrzW2hdoPuCrEDhu # Vsi5+nwAgYR9dP2oWkmGN1dwRe0ixSIM2UzFgpaXZaMOG6VztmFrlVXh8oFDRGM0 # cGrFHcnGF7oUGfWnI2Cekngk64dHA2qD7WxXPbQ/svn9EfTY5aPw5lXzKA87Ds8p # KHVFUYvmA6wVsxb/riGLwc+XZlb6M9gqHn1XSpsnYRjF6UzfRcRR2WyCxLZELaqu # iKxLKB5FYqMBH7Sqg3qBCtE53vZ7T1nefq5RFzmykviYP63Uhu/A2EQatrMnaFPl # TTG5CaPjob45CBSyMrheYRWKqxdWN93BTgiTW7p0U6RB0/OCUbsVX6IG3I9N8Uqt # l8Kc+7aOmtUqFkwo8w30prIOjStMrokxNsuK9KTUiPu2cj7gwYQ574vV3hQvQPAr # hhb9ohKr0zoPQt31iTj0FDkJzPepeuzqeq8F51HB56RZKpXdRTfY8G6OaOT68cV5 # vP1O6T/okFKrl41FQ3CyYN5eRHyRTK99zTytrjoP2EbtIZ18z+bg/angRHYNzbgk # lc3jpiGzs1ZWHD0nxOmHCMhU4usEcFbV6FlOxzlwrsEhHkeiununlCsNHatiDgzp # ZWLnP/mXKV992/Jhu0Z577DHlh+3JIYx0PceB9yzACJ8MNARHF7QpBkhtuGMGZpF # T+c73exupZFxItXs1Bnhe3djgE3MKKyYvxNUIbcTJoe7nhVMrwO/7lBSpVLvC4p3 # wR700U0LDaGGQpslGtiE56SemgoP # mongo config ends elastic_heap_size: 1g # sets the heap size (1g|2g|3g) for the Elastic Servers jas: auth_enabled: true auth_type: 'jwt' signiture_key_id: 'service1-hmac' signiture_algorithm: 'hmac-sha256' max_memory: 4096M mapping_entity_type: /common/mappings datasource_entity_type: /common/datasources mongo_port: 27017 # Port where Mongo is running mongo_ldap: false # Specify if Mongo is authenticated against an LDAP elastic_host: 10.128.0.28 # IP Address of master node where Opensearch is running elastic_port: 9200 # Port of master node where Opensearch is running elastic_user: elasticadmin # Opensearch username kibana_host: 10.128.0.28 # IP Address of node where Opensearch Dashboard is running apache_livy: dest_dir: /home/ansible/livy # Folder where livy is installed. AutoID copies analytics files to this directory. cassandra: # Cassandra Nodes details. enable_ssl: "true" # Set if SSL is enabled. contact_points: # Comma seperated list of ip addresses - first ip is master# port: 9042 # Port where cassandra node is running username: zoranuser # User created for AutoID to seed Schema cassandra_keystore_password: "Acc#1234" # Keystore Password cassandra_truststore_password: "Acc#1234" # Truststore Password ssl_client_key_file: "zoran-cassandra-client-key.pem" # Cassandra Client Key File ssl_client_cert_file: "zoran-cassandra-client-cer.pem" # Cassandra Client Cert File ssl_ca_file: "zoran-cassandra-server-cer.pem" # Cassandra Server Root CA File server_truststore_jks: "zoran-cassandra-server-truststore.jks" # Server Truststore file for services to connect client_truststore_jks: "zoran-cassandra-client-truststore.jks" # Client Truststore file for services to connect client_keystore_jks: "zoran-cassandra-client-keystore.jks" # Client Keystore file for services to use