Upgrading PingIntelligence

After upgrading Elasticsearch, upgrade PingIntelligence 5.1 to 5.2 and switch to RHEL 8.

Before you begin

Stop all PingIntelligence 5.1 components before starting the upgrade.

About this task

To upgrade PingIntelligence:

Steps

  1. Upgrade API Security Enforcer (ASE) from 5.1.1 to 5.1.3 on the corresponding RHEL 7.9 instance.

    There is no 5.2 RHEL 8 build for ASE.

    1. Make sure that ASE is stopped.

    2. Make a backup of the existing ASE base folder.

    3. Copy the ASE 5.1.3 build.

    4. Untar the new build.
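      For example, the backup, copy, and untar steps might look like the following sketch. The install path, backup location, and archive name are assumptions; substitute your actual paths and the real ASE 5.1.3 build file name, and extract the archive wherever its top-level folder belongs in your layout.

        # Back up the existing ASE base folder (paths are assumptions)
        cp -rp /opt/pingidentity/ase /opt/pingidentity/ase-5.1.1-backup
        # Remove the old folder so the new build extracts cleanly, then untar the 5.1.3 archive
        rm -rf /opt/pingidentity/ase
        tar -xzf /tmp/ase-rhel-5.1.3.tar.gz -C /opt/pingidentity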

    5. Update ase.conf and abs.conf with the required details, such as the port, ASE mode, and API Behavioral Security (ABS) IP address, by referring to the backed-up conf files.

    6. Add the actual passwords for the following properties:

      config/ase.conf: sender_password, keystore_password
      config/abs.conf: access_key, secret_key
      config/cluster.conf: cluster_secret_key
    7. Copy ase.crt from the backup folder to the ase/config directory.

    8. Copy the PingIntelligence.lic license file to the ase/config directory.

    9. Generate the master key.

      /opt/pingidentity/ase/bin/cli.sh generate_obfkey -u admin -p
    10. Obfuscate the keys.

      /opt/pingidentity/ase/bin/cli.sh obfuscate_keys -u admin -p
    11. Copy the API JSON files to the config/api directory.
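      For example, assuming the backup location used earlier in this step, the API JSON files can be restored with a copy such as the following (both paths are assumptions):

        # Copy the API definition JSON files from the ASE backup
        cp /opt/pingidentity/ase-5.1.1-backup/config/api/*.json /opt/pingidentity/ase/config/api/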

    12. Start ASE.

      /opt/pingidentity/ase/bin/start.sh
  2. In the already migrated RHEL 8 MongoDB instance, add the new pi4api_dashboard database and grant the readWrite role on this database to absuser.

    Make sure the pi4api_dashboard database does not exist before the 5.2 upgrade. If it does exist, the 5.2 upgrade removes the data from the following collections in the pi4api_dashboard database:

    api_groups
    api_state
    user_sessions
    users
    1. Shut down the mongo primary and secondary.

      mongod --shutdown --dbpath data/
    2. Start the primary mongo without the --auth flag.

      mongod --dbpath ./data/ --logpath ./logs/mongo.log --port 27017 --replSet absrs01 --fork --bind_ip 0.0.0.0
    3. Sign on to mongo without specifying a user.

      mongo
    4. Run the following commands:

      // Create the pi4api_dashboard database by switching to it
      use pi4api_dashboard
      // Switch to the admin database
      use admin
      // Grant the readWrite role on pi4api_dashboard to absuser
      db.grantRolesToUser("absuser", ["readWrite", { role: "readWrite", db: "pi4api_dashboard" }]);
    5. Shut down the mongo primary.

    6. Restart mongo (both primary and secondary) with --auth enabled and with --tlsMode.

      mongod --auth --dbpath ./data/ --logpath ./logs/mongo.log --port 27017 --replSet absrs01 --fork --keyFile ./key/mongodb-keyfile --bind_ip 0.0.0.0 --tlsMode requireTLS --tlsCertificateKeyFile ./key/mongodb.pem
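      To verify that authentication and TLS are now enforced, you can try an authenticated, TLS-enabled connection from the mongo shell (a sketch; the host placeholder, credentials, and authentication database are assumptions based on values used elsewhere in this procedure):

        mongo --tls --tlsAllowInvalidCertificates --host <mongo_primary_ip> --port 27017 -u absuser -p abs123 --authenticationDatabase admin --eval "db.runCommand({ connectionStatus: 1 })"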
  3. Make the following changes in Kafka:

    1. Create the discovery topic.

      /home/ec2-user/pingidentity/kafka/bin/kafka-topics.sh --bootstrap-server 172.16.40.244:9091 --create --topic pi4api.queuing.apis --partitions 1 --replication-factor 1 --command-config /home/ec2-user/pingidentity/kafka/config/client.properties
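      Optionally, verify that the topic was created. This reuses the broker address and client configuration from the command above:

        /home/ec2-user/pingidentity/kafka/bin/kafka-topics.sh --bootstrap-server 172.16.40.244:9091 --describe --topic pi4api.queuing.apis --command-config /home/ec2-user/pingidentity/kafka/config/client.properties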
    2. Create the access control lists (ACL) for the ABS producer user for the discovery topic.

      /home/ec2-user/pingidentity/kafka/bin/kafka-acls.sh --bootstrap-server 172.16.40.244:9091 --add --allow-principal User:abs_producer --operation Create --operation Read --operation Write --topic pi4api.queuing.apis --command-config /home/ec2-user/pingidentity/kafka/config/client.properties
    3. Create the ACLs for the ABS consumer user for the discovery topic.

      /home/ec2-user/pingidentity/kafka/bin/kafka-acls.sh --bootstrap-server 172.16.40.244:9091 --add --allow-principal User:abs_consumer --operation Read --topic pi4api.queuing.apis --command-config /home/ec2-user/pingidentity/kafka/config/client.properties
    4. Create the ACLs for the data engine consumer user.

      /home/ec2-user/pingidentity/kafka/bin/kafka-acls.sh --bootstrap-server 172.16.40.244:9091 --add --allow-principal User:pi4api_de_user --operation Create --operation Read --operation Write --topic pi4api.queuing.apis --command-config /home/ec2-user/pingidentity/kafka/config/client.properties
    5. Add the ACLs below in Kafka if they have not already been added:

      Current ACLs for resource ResourcePattern(resourceType=TOPIC, name=pi4api.queuing.anomalies, patternType=LITERAL):
       	(principal=Group:pi4api.abs, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=User:pi4api_de_user, host=, operation=READ, permissionType=ALLOW)
      	(principal=User:abs_consumer, host=, operation=READ, permissionType=ALLOW)
      	(principal=User:abs_consumer, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=Group:pi4api.abs, host=, operation=READ, permissionType=ALLOW)
      	(principal=Group:pi4api.data-engine, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=User:pi4api_de_user, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=User:abs_producer, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=Group:pi4api.data-engine, host=, operation=READ, permissionType=ALLOW)
      	(principal=User:abs_producer, host=, operation=WRITE, permissionType=ALLOW)
      Current ACLs for resource ResourcePattern(resourceType=GROUP, name=pi4api.abs, patternType=LITERAL):
       	(principal=User:abs_consumer, host=, operation=READ, permissionType=ALLOW)
      	(principal=User:abs_consumer, host=, operation=DESCRIBE, permissionType=ALLOW)
      Current ACLs for resource ResourcePattern(resourceType=TOPIC, name=pi4api.queuing.ioas, patternType=LITERAL):
       	(principal=Group:pi4api.abs, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=User:pi4api_de_user, host=, operation=READ, permissionType=ALLOW)
      	(principal=User:abs_consumer, host=, operation=READ, permissionType=ALLOW)
      	(principal=User:abs_consumer, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=Group:pi4api.abs, host=, operation=READ, permissionType=ALLOW)
      	(principal=Group:pi4api.data-engine, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=User:pi4api_de_user, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=User:abs_producer, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=Group:pi4api.data-engine, host=, operation=READ, permissionType=ALLOW)
      	(principal=User:abs_producer, host=, operation=WRITE, permissionType=ALLOW)
      Current ACLs for resource ResourcePattern(resourceType=TOPIC, name=pi4api.queuing.apis, patternType=LITERAL):
       	(principal=User:abs_producer, host=, operation=READ, permissionType=ALLOW)
      	(principal=Group:pi4api.abs, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=User:pi4api_de_user, host=, operation=READ, permissionType=ALLOW)
      	(principal=User:abs_consumer, host=, operation=READ, permissionType=ALLOW)
      	(principal=User:abs_consumer, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=Group:pi4api.abs, host=, operation=READ, permissionType=ALLOW)
      	(principal=Group:pi4api.data-engine, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=User:pi4api_de_user, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=User:pi4api_de_user, host=, operation=WRITE, permissionType=ALLOW)
      	(principal=User:abs_producer, host=, operation=CREATE, permissionType=ALLOW)
      	(principal=User:abs_producer, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=Group:pi4api.data-engine, host=, operation=READ, permissionType=ALLOW)
      	(principal=User:abs_producer, host=, operation=WRITE, permissionType=ALLOW)
      	(principal=User:pi4api_de_user, host=, operation=CREATE, permissionType=ALLOW)
      Current ACLs for resource ResourcePattern(resourceType=GROUP, name=pi4api.data-engine, patternType=LITERAL):
       	(principal=User:pi4api_de_user, host=, operation=READ, permissionType=ALLOW)
      	(principal=User:pi4api_de_user, host=, operation=DESCRIBE, permissionType=ALLOW)
      Current ACLs for resource ResourcePattern(resourceType=TOPIC, name=pi4api.queuing.transactions, patternType=LITERAL):
       	(principal=Group:pi4api.abs, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=User:pi4api_de_user, host=, operation=READ, permissionType=ALLOW)
      	(principal=User:abs_consumer, host=, operation=READ, permissionType=ALLOW)
      	(principal=User:abs_consumer, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=Group:pi4api.abs, host=, operation=READ, permissionType=ALLOW)
      	(principal=Group:pi4api.data-engine, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=User:pi4api_de_user, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=User:abs_producer, host=, operation=DESCRIBE, permissionType=ALLOW)
      	(principal=Group:pi4api.data-engine, host=, operation=READ, permissionType=ALLOW)
      	(principal=User:abs_producer, host=, operation=WRITE, permissionType=ALLOW)
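      To compare the ACLs that currently exist in your cluster against the listing above, you can print them with the following command. It reuses the broker address and client configuration from the earlier ACL commands:

        /home/ec2-user/pingidentity/kafka/bin/kafka-acls.sh --bootstrap-server 172.16.40.244:9091 --list --command-config /home/ec2-user/pingidentity/kafka/config/client.properties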
  4. Upgrade the Dashboard from 5.1.0.2 to 5.1.1.

    Make sure the discovered_apis index does not exist before upgrading the Dashboard from 5.1.0.2. During the 5.2 Dashboard upgrade, all the documents related to this index will be cleaned up.

    1. Stop the dataengine and webgui.

    2. Make a backup of the dataengine.jks, kafka_truststore.jks, and webgui.jks files and save them separately.

    3. Delete the dataengine and webgui folders inside the pingidentity directory.

    4. Download the Dashboard 5.1.1 build to the /home/ec2-user folder on the Dashboard instance.

    5. Untar the downloaded build.

      Result:

      The dataengine and webgui folders will be created inside the pingidentity directory.

    6. Make the following changes to dataengine:

      1. Go to the dataengine folder.

        cd /home/ec2-user/pingidentity/dataengine/config
      2. Copy the dataengine.jks file (from the backup) to the dataengine/config directory.

      3. Copy the kafka_truststore.jks file (from the backup) to the dataengine/config directory.

      4. Update the properties below in kafka.properties with valid entries:

        pi.kafka.bootstrap-servers=10.96.6.45:9093
        pi.kafka.consumer.sslTruststoreLocation=/home/ec2-user/pingidentity/dataengine/config/kafka_truststore.jks
        pi.kafka.consumer.sslTruststorePassword=changeme
        pi.kafka.consumer.groupId=pi4api.data-engine
        pi.kafka.consumer.authentication.username=pi4api_de_user
        pi.kafka.consumer.authentication.password=changeme
      5. Update dataengine.properties with valid entries:

        pi.dataengine.server.ssl.key-store-password=changeme
        pi.dataengine.server.ssl.key-alias=<alias-name>
        pi.dataengine.abs.url=https://10.96.6.82:8080
        pi.dataengine.abs.access_key=abs_ak
        pi.dataengine.abs.secret_key=abs_sk
        pi.dataengine.elasticsearch.url=https://10.96.6.45:9200
        pi.dataengine.elasticsearch.username=elastic
        pi.dataengine.elasticsearch.password=changeme
    7. Generate a new master key.

      cd /home/ec2-user/pingidentity/dataengine
      ./bin/cli.sh generate_obfkey
    8. Obfuscate keys.

      ./bin/cli.sh obfuscate_keys
    9. Start the dataengine.

      ./bin/start.sh
    10. Make the following changes to webgui:

      1. Go to the webgui folder.

        cd /home/ec2-user/pingidentity/webgui/config
      2. Copy the webgui.jks (from the backup) file to the webgui/config directory.

      3. Update the properties below in the webgui.properties file:

        pi.webgui.server.ssl.key-store-password=changeme
        pi.webgui.server.ssl.key-alias=<alias-name>
        pi.webgui.abs.url=https://10.96.6.82:8080
        pi.webgui.abs.api-service-url=https://10.96.6.82:8050
        pi.webgui.abs.access-key=abs_ak
        pi.webgui.abs.secret-key=abs_sk
        pi.webgui.ase.url=https://10.96.6.80:8010
        pi.webgui.ase.access-key=ase_ak
        pi.webgui.ase.secret-key=ase_sk
        pi.webgui.elasticsearch.url=https://10.96.6.45:9200
        pi.webgui.elasticsearch.username=elastic
        pi.webgui.elasticsearch.password=changeme
        pi.webgui.datasource.username=sa
        pi.webgui.datasource.password=changeme
        pi.webgui.datasource.encryption-password=changeme
      4. Generate a new master key.

        cd /home/ec2-user/pingidentity/webgui
        ./bin/cli.sh generate_obfkey
      5. Obfuscate keys.

        ./bin/cli.sh obfuscate_keys
      6. Start the webgui.

        ./bin/start.sh

        Use only RHEL 8 instances to run the remaining PingIntelligence 5.2 components (ABS, API Publish, machine learning (ML) service, data engine, and web GUI).

  5. Stop all PingIntelligence 5.1 components (ABS, API Publish, Dashboard) before starting the 5.2 upgrade.

  6. To upgrade ABS, make sure you have a RHEL 8 instance ready to install the PingIntelligence ABS 5.2 build and then proceed with the following:

    1. Install Java 11.0.2 and set JAVA_HOME.
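      One possible way to do this on RHEL 8 is to unpack an OpenJDK 11.0.2 archive and export JAVA_HOME (a sketch; the archive name and install location are assumptions, so adjust them to the JDK build you actually use):

        # Extract the JDK archive (file name and target directory are assumptions)
        tar -xzf openjdk-11.0.2_linux-x64_bin.tar.gz -C /opt
        # Point JAVA_HOME at the extracted JDK and put it on the PATH
        echo 'export JAVA_HOME=/opt/jdk-11.0.2' >> ~/.bashrc
        echo 'export PATH=$JAVA_HOME/bin:$PATH' >> ~/.bashrc
        source ~/.bashrc
        java -version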

    2. Download the PingIntelligence ABS 5.2 build from the PingIntelligence Downloads website.

    3. Copy the build to the RHEL 8 instance.

    4. Untar the build.

      Result:

      An abs folder will be created inside the pingidentity folder.

    5. Copy the PingIntelligence.lic license file to the pingidentity/abs/config directory.

    6. Copy the abs.jks file from the old ABS (RHEL 7, ABS 5.1) to the new RHEL 8 ABS ssl directory.

      /pingidentity/abs/config/ssl/

    You can alternatively create a new abs.jks file.
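    If you create a new abs.jks instead, a self-signed keystore can be generated with keytool, for example as in the sketch below. The alias, password, validity, and distinguished name are placeholders, and the keystore path is an assumption consistent with the other ABS commands in this step:

      keytool -genkeypair -alias abs -keyalg RSA -keysize 2048 -validity 365 -keystore /opt/pingidentity/abs/config/ssl/abs.jks -storetype JKS -storepass <jks_password> -dname "CN=abs,O=PingIdentity"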

    7. Copy the kafka.truststore.jks file from the old ABS (RHEL 7, ABS 5.1) to the corresponding path on the new RHEL 8 ABS instance.

      /opt/pingidentity/abs/config/kafka.truststore.jks
    8. Update the properties below in kafka.properties:

      pi.kafka.bootstrap-servers=10.96.6.196:9093
      pi.kafka.sslTruststoreLocation=/opt/pingidentity/abs/config/kafka.truststore.jks
      pi.kafka.sslTruststorePassword=<actual_password>
      pi.kafka.consumer.authentication.password=<actual_password>
      pi.kafka.producer.authentication.password=<actual_password>
      pi.kafka.producer.min-insync-replicas=1
    9. Update the abs.properties file with the details below:

      jks_password=<actual_password>
      mongo_rs=mongodb://10.96.6.242:27017,10.96.6.201:27017
      mongo_username=absuser
      mongo_password=abs123
      mongo_ssl=true
      email_password=<actual_password>
    10. Generate a new ABS master key.

      /opt/pingidentity/abs/bin/cli.sh generate_obfkey -u admin -p admin
    11. Obfuscate keys.

      /opt/pingidentity/abs/bin/cli.sh obfuscate_keys -u admin -p admin

      Result:

      The following keys will be obfuscated:

      config/abs.properties: mongo_password, jks_password, and email_password

      config/kafka.properties: pi.kafka.consumer.authentication.password, pi.kafka.producer.authentication.password, and pi.kafka.sslTruststorePassword

    12. Start ABS.

      /opt/pingidentity/abs/bin/start.sh
  7. To upgrade API Publish, make sure you have a RHEL 8 instance ready to install the PingIntelligence API Publish 5.2 build and then proceed with the following:

    1. Install Java 11.0.2 and set JAVA_HOME.

    2. Download the PingIntelligence API Publish 5.2 build from the PingIntelligence Downloads website.

    3. Copy the build to the RHEL 8 instance.

    4. Untar the build.

      Result:

      An apipublish folder will be created inside the pingidentity folder.

    5. Copy the apipublish.jks file from the old API Publish (RHEL 7, API Publish 5.1) to the new RHEL 8 API Publish ssl directory.

      /pingidentity/apipublish/config/ssl/

    You can alternatively create a new apipublish.jks file.

    6. Update the apipublish.properties file with the details below:

      pi.apipublish.ssl.key-store-password=api123
      pi.apipublish.datasource.mongo_rs=mongodb://10.96.6.242:27017,10.96.6.201:27017
      pi.apipublish.datasource.username=absuser
      pi.apipublish.datasource.password=abs123
      pi.apipublish.datasource.mongo_ssl=true
    7. Generate a new API Publish master key.

      /pingidentity/apipublish/bin/cli.sh generate_obfkey -u admin -p admin
    8. Obfuscate keys.

      /pingidentity/apipublish/bin/cli.sh obfuscate_keys -u admin -p admin

      Result:

      The following keys will be obfuscated:

      config/apipublish.properties: pi.apipublish.ssl.key-store-password and pi.apipublish.datasource.password

    9. Start API Publish.

      /pingidentity/apipublish/bin/start.sh
  8. Install the new ML service 5.2 build on the RHEL 8 instance by following the steps in Installing the PingIntelligence machine learning service.

  9. To install the dataengine, make sure you have a RHEL 8 instance with an 8-core CPU, 16 GB of RAM, and a 1 TB hard disk drive (HDD).

    1. Download the PingIntelligence 5.2 Dashboard build and extract it in the RHEL 8 instance.

    2. Install Java 11.0.2 and set JAVA_HOME.

    3. Copy the dataengine.jks file from the old dataengine instance to the dataengine/config directory on the new RHEL 8 dataengine instance.
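      For example, if the old Dashboard host is reachable over SSH, the keystore can be pulled with scp (the host placeholder and both paths are assumptions based on the directories used earlier in this procedure):

        scp ec2-user@<old_dashboard_ip>:/home/ec2-user/pingidentity/dataengine/config/dataengine.jks /opt/pingidentity/dataengine/config/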

    4. Add the Mongo certificate to dataengine.jks.

      1. In the RHEL 8 mongo primary node, go to mongo/key/mongo.pem and copy the public key part.

      2. Store the public key as mongo.crt in dataengine/config.
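        One way to copy out the certificate (public key) portion is with openssl, run from the mongo key directory on the primary node (a sketch; it assumes openssl is available and that mongo.pem contains the server certificate). Transfer the resulting mongo.crt to the dataengine/config directory on the dataengine host:

          openssl x509 -in mongo.pem -out mongo.crt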

      3. Run the following command:

        keytool -import -keystore dataengine.jks -storetype JKS -storepass changeme -alias mongo -file mongo.crt -noprompt
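        To confirm the certificate was imported, you can list the new entry. This reuses the keystore name, alias, and placeholder password from the command above:

          keytool -list -keystore dataengine.jks -storepass changeme -alias mongo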
    5. Copy the Kafka truststore file (kafka_truststore.jks) to the dataengine/config/ directory.

    6. Update kafka.properties with the details below:

      pi.kafka.bootstrap-servers=<Kafka_IP>:9093
      pi.kafka.consumer.sslTruststoreLocation=/opt/pingidentity/dataengine/config/kafka_truststore.jks
      pi.kafka.consumer.sslTruststorePassword=<actual_password>
      pi.kafka.consumer.authentication.password=<actual_password>
    7. Update dataengine.properties with the details below:

      pi.dataengine.server.ssl.key-store-password=<actual_password>
      pi.dataengine.server.ssl.key-alias=<alias-name>
      # abs properties
      pi.dataengine.abs.url=https://<ABS_IP>:8080
      pi.dataengine.abs.access_key=abs_ak
      pi.dataengine.abs.secret_key=abs_sk
      pi.dataengine.elasticsearch.url=https://<elasticsearch_ip>:9200
      pi.dataengine.elasticsearch.username=elastic
      pi.dataengine.elasticsearch.password=<actual_password>
      
      pi.dataengine.datasource.url=mongodb://<mongo_ip>:27017
      pi.dataengine.datasource.username=absuser
      pi.dataengine.datasource.password=abs123
    8. Generate dataengine_master.key.

      ./bin/cli.sh generate_obfkey
    9. Obfuscate keys.

      ./bin/cli.sh obfuscate_keys
    10. Start dataengine.

      ./bin/start.sh
  10. Install webgui.

    1. Copy the h2-backup folder (saved in step 13e of Migrating Elasticsearch from RHEL 7.9 to 8) to the RHEL 8 instance under the webgui/data directory.

    2. Copy webgui.jks from the old webgui instance to the RHEL 8 instance webgui/config directory.

    3. Add the Mongo certificate to webgui.jks.

      1. In the RHEL 8 mongo primary node, go to mongo/key/mongo.pem and copy the public key part.

      2. Store the public key as mongo.crt in webgui/config.

      3. Run the following command:

        keytool -import -keystore webgui.jks -storetype JKS -storepass changeme -alias mongo -file mongo.crt -noprompt
    4. Update webgui.properties with the details below:

      pi.webgui.server.ssl.key-store-password=<actual_password>
      pi.webgui.server.ssl.key-alias=<alias-name>
      
      pi.webgui.abs.url=https://10.96.6.242:8080
      pi.webgui.abs.api-service-url=https://10.96.6.242:8050
      pi.webgui.abs.access-key=<actual_key>
      pi.webgui.abs.secret-key=<actual_key>
      
      # ase properties
      pi.webgui.ase.url=https://10.96.6.217:8010
      pi.webgui.ase.access-key=<actual_key>
      pi.webgui.ase.secret-key=<actual_key>
      
      # elasticsearch properties
      pi.webgui.elasticsearch.url=https://10.96.6.19:9200
      pi.webgui.elasticsearch.username=elastic
      pi.webgui.elasticsearch.password=<actual_password>
      
      pi.webgui.datasource.url=mongodb://10.96.6.242:27017
      pi.webgui.datasource.username=absuser
      pi.webgui.datasource.password=abs123
    5. Generate a new webgui master key.

      ./bin/cli.sh generate_obfkey
    6. Obfuscate keys.

      ./bin/cli.sh obfuscate_keys
    7. Start webgui.

      ./bin/start.sh