PingIntelligence

PingIntelligence Kubernetes deployment

Install PingIntelligence for APIs in a Kubernetes cluster on Amazon Elastic Kubernetes Service (EKS).

PingIntelligence ships with a Helm chart, packaged in the Docker toolkit, that deploys PingIntelligence in a Kubernetes cluster.

PingIntelligence creates the following resources in the Kubernetes cluster:

  • Seven StatefulSets with one container each for:

    • MongoDB

    • API Behavioral Security (ABS) AI engine

    • API Security Enforcer (ASE)

    • API Publish

    • PingIntelligence Dashboard

    • Apache Zookeeper

    • Kafka

  • Six external services (LoadBalancer type), one for each of the following (configurable to expose):

    • MongoDB

    • ABS AI engine

    • ASE

    • API Publish

    • Zookeeper

    • Kafka

      Each of these components has an external service of type LoadBalancer that can be exposed by setting expose_external_service: true for that component in values.yaml. By default, this flag is false for every component except ASE, where it is true. The Dashboard is always exposed because it must be accessible externally.

  • Six internal services (ClusterIP type), one each for:

    • MongoDB

    • ABS AI engine

    • ASE

    • API Publish

    • Zookeeper

    • Kafka
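
After deployment, the StatefulSets and services listed above can be verified with kubectl. The following is a minimal sketch, assuming the default pingidentity namespace from values.yaml and an active kubeconfig for the cluster:

```shell
# Namespace assumed from the default values.yaml (deployment.namespace)
NS=pingidentity

# List the StatefulSets and services the chart creates
# (requires kubectl and a kubeconfig pointing at the cluster)
if command -v kubectl >/dev/null 2>&1; then
  kubectl get statefulsets,svc -n "$NS"
fi
```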

PingIntelligence on Kubernetes supports RHEL 7.9.

This deployment is suitable for Amazon EKS: you can configure the Kubernetes cluster on Amazon EKS and then install PingIntelligence on it.

  • Amazon EKS

  • Kubernetes cluster

Deploying PingIntelligence using Amazon EKS

About this task

To deploy PingIntelligence on a Kubernetes cluster node using Amazon EKS:

Steps

  1. Create an Amazon EKS cluster on a RHEL host.

    You can use either eksctl or the AWS command-line interface (CLI) to create the Kubernetes cluster. For more information on creating and managing the cluster with Amazon EKS, see the Amazon EKS documentation.

  2. Follow the steps in Deploying PingIntelligence in Kubernetes cluster to deploy PingIntelligence on the Kubernetes cluster that you created in step 1.
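
As a sketch of step 1, an EKS cluster can be created with eksctl. The cluster name, region, node count, and node type below are illustrative assumptions, not values prescribed by this guide:

```shell
# Illustrative settings -- adjust for your environment
CLUSTER_NAME=pi4api-cluster
REGION=us-east-1

# Create the cluster (requires eksctl and AWS credentials)
if command -v eksctl >/dev/null 2>&1; then
  eksctl create cluster \
    --name "$CLUSTER_NAME" \
    --region "$REGION" \
    --nodes 3 \
    --node-type m5.2xlarge
fi
```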

Deploying PingIntelligence in Kubernetes cluster

Before you begin

Make sure you have a valid PingIntelligence license.

About this task

The Helm chart used to deploy PingIntelligence in Kubernetes is shipped inside the Docker toolkit.

To deploy PingIntelligence in a Kubernetes cluster:

Steps

  1. Download the PingIntelligence Docker toolkit from the PingIntelligence software download site.

  2. Untar the Docker toolkit by running the following command:

    tar -zxf <PingIntelligence Docker toolkit>

    Directory structure:

    pingidentity/
    |-- docker-toolkit
    `-- helm-chart

    Directory structure of the Helm chart:

    helm-chart/
    `-- pi4api
        |-- Chart.yaml
        |-- templates
        |   |-- abs_deployment.yaml
        |   |-- abs_external_service.yaml
        |   |-- abs_internal_service.yaml
        |   |-- apipublish_deployment.yaml
        |   |-- apipublish_external_service.yaml
        |   |-- apipublish_internal_service.yaml
        |   |-- ase_deployment.yaml
        |   |-- ase_external_service.yaml
        |   |-- ase_internal_service.yaml
        |   |-- dashboard_deployment.yaml
        |   |-- dashboard_external_service.yaml
        |   |-- _helpers.tpl
        |   |-- kafka_deployment.yaml
        |   |-- kafka_external_service.yaml
        |   |-- kafka_internal_service.yaml
        |   |-- license_configmap.yaml
        |   |-- mongo_deployment.yaml
        |   |-- mongo_external_service.yaml
        |   |-- mongo_internal_service.yaml
        |   |-- namespace.yaml
        |   |-- service_account.yaml
        |   |-- zookeeper_deployment.yaml
        |   |-- zookeeper_external_service.yaml
        |   `-- zookeeper_internal_service.yaml
        |-- values.yaml
        `-- version.txt

    pi4api

    Folder containing all PingIntelligence resources to deploy on Kubernetes.

    Chart.yaml

    PingIntelligence chart details.

    templates

    YAML files for the deployable PingIntelligence resources.

    values.yaml

    Example PingIntelligence configuration file.

  3. Build the PingIntelligence Docker images by completing the steps in Building the PingIntelligence Docker images.

  4. Create a custom values.yaml file with all the configurations for PingIntelligence.

    The values.yaml packaged with the Helm chart contains example default values. Override them in a custom values.yaml file, or edit the values in the default file directly.
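
For example, a custom file can be started from the packaged one. The path below assumes the toolkit was extracted under the home directory as in step 2; the custom-values.yaml name is an illustrative choice:

```shell
SRC=~/pingidentity/helm-chart/pi4api/values.yaml

# Copy the packaged example so the original stays untouched
if [ -f "$SRC" ]; then
  cp "$SRC" custom-values.yaml
fi
# Edit custom-values.yaml, then pass it to helm with: -f custom-values.yaml
```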

  5. Provide the license for PingIntelligence in values.yaml.

    Copy the following fields from your license file into values.yaml:

    #License required for ABS and ASE
     license : |
          ID=
          Organization=
          Product=
          Version=
          IssueDate=
          EnforcementType=
          ExpirationDate=
          MaxTransactionsPerMonth=
          Tier=
          SignCode=
          Signature=
  6. Install the PingIntelligence Helm chart.

    Helm creates Kubernetes secrets in which it stores the release data. Where the release secret is stored depends on how the namespace is created: in the default namespace, or in the namespace in which PingIntelligence is deployed.

    Choose from:

    • Create the namespace through the Helm chart:

      1. Set create_namespace to true in values.yaml.

      2. Install by running the following command:

        helm install -f values.yaml pi4api ~/pingidentity/helm-chart/pi4api

        This creates the namespace at deployment time, and the Helm release secret is stored in the default namespace.

    • Create the namespace before installation:

      1. Create a namespace:

        kubectl create namespace pingidentity
      2. Install by running the following command:

        helm install -f values.yaml pi4api ~/pingidentity/helm-chart/pi4api

        In this case, the Helm release secret is stored in the namespace where PingIntelligence is deployed.

        • Currently, PingIntelligence supports only image and license upgrades through the Helm chart:

          helm upgrade -f values.yaml pi4api ~/pingidentity/helm-chart/pi4api -n <namespace>
        • Currently, the PingIntelligence Helm chart supports only single-node Kafka, Zookeeper, API Publish, and MongoDB installations.

        Example:

        The following is an example of a default values.yaml file:

        # Default values for pi4api kubernetes setup.
        # This template is for example purposes
        # Override these values in custom values.yaml
        # This is a YAML-formatted file.
        # Declare variables to be passed into your templates.
        
        deployment :
         #Namespace creation if required
         create_namespace: false
         #Namespace to deploy PI4API
         namespace : pingidentity
         run_user : 10001
         fsgroup_user : 101
         #Timezone in which PI4API can be deployed.
         timezone : "utc"
         #Update strategy for pods.
         updateStrategy: RollingUpdate
         #License required for ABS and ASE
         license : |
             ID=
             Organization=
             Product=
             Version=
             IssueDate=
             EnforcementType=
             ExpirationDate=
             MaxTransactionsPerMonth=
             Tier=
             SignCode=
             Signature=
         #This is required to configure max_map_count on the node running the dashboard. If it is set manually, you can set enabled to false.
         dashboard_init:
           enabled: true
           repository: "busybox"
           tag: "latest"
           pullPolicy: "Always"
         #ABS configuration
         abs :
           image : pingidentity/abs:5.1
           terminationGracePeriodSeconds : 60
           replicas : 1
           #Port for ABS
           port : 8080
           health_api_path: /abs/health
           #ENVIRONMENT_VARIABLES
           environment_variables :
                         # Access keys and secret keys to access ABS
                         abs_access_key : "abs_ak"
                         abs_secret_key : "abs_sk"
                         abs_access_key_ru : "abs_ak_ru"
                         abs_secret_key_ru : "abs_sk_ru"
                          #Mongo URL is passed here; you can configure an external Mongo URL.
                         mongo_rs: "mongodb://mongo-0.mongo-internal-service:27017"
                         # Communication between mongo and ABS
                         mongo_ssl : "true"
                         # Mongo DB Server Certificate Verification
                         # Set to true if Mongo DB instance is configured in SSL mode and you want to do the server certificate verification
                         # By default ABS will not verify the MongoDB server certificate
                         mongo_certificate : "false"
                         # Mongo DB User and password
                         mongo_username : "absuser"
                         mongo_password : "abs123"
                         # Duration of initial training period (units in hours)
                         # This value will be set in the mongo nodes
                         attack_initial_training : "24"
                         attack_update_interval : "24"
                         api_discovery_initial_period : "1"
                         api_discovery_update_interval : "1"
                         api_discovery : "true"
                         api_discovery_subpath : "1"
                         poc_mode: "true"
                          # Give the host:port combination of multiple Kafka servers, comma separated.
                         kafka_server: kafka:9093
                         kafka_min_insync_replica: 1
                         #Users in Kafka for abs
                         abs_consumer_user: abs_consumer
                         abs_producer_user: abs_producer
                         abs_consumer_group: pi4api.abs
                         # Kafka Consumer Producer Password
                         abs_consumer_password: changeme
                         abs_producer_password: changeme
                         #topics to be created in kafka
                         transaction_topic: pi4api.queuing.transactions
                         attack_topic: pi4api.queuing.ioas
                         anomalies_topic: pi4api.queuing.anomalies
                         topic_partition: 1
                         replication_factor: 1
                         retention_period: "172800000"
                         attack_list_count: 100000
                         # Memory for webserver and streaming server (unit is in MB)
                         system_memory: 4096
                         # CLI admin password
                         abs_cli_admin_password: admin
                         # Configure Email Alert. Set enable_emails to true to configure
                         # email settings for ABS
                         enable_emails: false
                         smtp_host: smtp.example.com
                         smtp_port: 587
                         smtp_ssl: true
                         smtp_cert_verification: false
                         sender_email: sender@example.com
                         sender_email_password: changeme
                         receiver_email: receiver@example.com
        
           pvc :
             volume_home_mount_path: /opt/pingidentity
             accessModes: ReadWriteOnce
             abs_data :
               pvc_type: gp2
               pvc_size: 100Gi
             abs_logs:
               pvc_type: gp2
               pvc_size: 10Gi
           resources:
               limits:
                 cpu: "6"
                 memory: 22G
               requests:
                 cpu: "3"
                 memory: 16G
           external_service :
             expose_external_service: false
             type : "LoadBalancer"
             port : 8080
         #API Publish deployment Configuration
         apipublish :
           image : pingidentity/apipublish:5.1
           replicas : 1
           #ports for the PingIntelligence API Publish Service
           port : 8050
           health_api_path: /apipublish/health
           environment_variables :
                          # MongoDB database names; the Mongo URL is picked up from the ABS configuration (they are the same)
                         database_name: abs_data
                         meta_database: abs_metadata
           resources:
               limits:
                 cpu: "1"
                 memory: 4G
               requests:
                 cpu: "1"
                 memory: 3G
           external_service :
             expose_external_service: false
             type : "LoadBalancer"
             port: 8050
        
        #Mongo deployment Configuration
         mongo :
    # Flag to install the mongo pod as part of the setup. A mongo cluster is currently not supported; to use an external mongo, set this to false and put the mongo URL in the abs mongo_rs setting.
           install_mongo : true
           image : pingidentity/mongo:4.2.0
           replicas: 1
           port: 27017
           environment_variables :
                         mongo_username : "absuser"
                         mongo_password : "abs123"
                         wired_tiger_cache_size_gb : "3"
                         mongo_ssl : "true"
           pvc :
             volume_home_mount_path: /opt/pingidentity
             accessModes: ReadWriteOnce
             mongo_data :
               pvc_type: gp2
               pvc_size: 50Gi
             mongo_logs :
               pvc_type: gp2
               pvc_size: 50Gi
           resources:
                   limits:
                     cpu: "4"
                     memory: 6G
                   requests:
                     cpu: "1"
                     memory: 4G
           service :
             type: "ClusterIP"
           external_service :
             expose_external_service: false
             type : "LoadBalancer"
             port: 27017
        
         #Dashboard Deployment Configuration
         dashboard :
           image : pingidentity/dashboard:5.1
           replicas: 1
           webgui_port: 8030
           dataengine_port : 8040
           terminationGracePeriodSeconds : 60
           dataengine_health_api_path: /status
           environment_variables :
                         #ENVIRONMENT_VARIABLES
                         enable_syslog : "false"
                         syslog_host : "127.0.0.1"
                         syslog_port : "514"
                         enable_attack_log_to_stdout : "true"
                         ase_access_key : "admin"
        
                         webgui_admin_password : "changeme"
        
                         # User with permission set similar to "elastic" user, set user if using external elastic search
                         elastic_username: "elastic"
        
                         # Passwords for "elasticsearch","ping_user" and "ping_admin" users
                         # dataengine will be accessible for these accounts
                         # Please set strong passwords
                         # If enable_xpack is set to false, below passwords are ignored
                         elastic_password: "changeme"
        
                         # Configuration distribution type of elasticsearch. Allowed values are default or aws
                         distro_type : "default"
                         # external elastic search url if not using internal elastic search
                         elasticsearch_url: ""
        
                          #Users and passwords in Kafka for dataengine
                         de_consumer_password : "changeme"
                         de_consumer_user: "pi4api_de_user"
                         de_consumer_group: "pi4api.data-engine"
                         # allowed values: native, sso.
                         # In native mode, webgui users are self managed and stored in webgui.
                         # In sso mode, webgui users are managed and stored in an Identity provider.
                         authentication_mode: native
                         # Maximum duration of a session.
                         # Value should be in the form of <number><duration_suffix>
                         # Duration should be > 0.
                         # Allowed duration_suffix values: m for minutes, h for hours, d for days.
                         session_max_age: 6h
        
                         # Number of active UI sessions at any time.
                         # Value should be greater than 1.
                         max_active_sessions: 50
        
                         # webgui "ping_user" account password
                         webgui_ping_user_password: "changeme"
        
                          # Below sso configuration properties are applicable in sso authentication_mode only.
                         # Client ID value in Identity provider.
                         sso_oidc_client_id: pingintelligence
                         # Client Secret of the above Client ID.
                         sso_oidc_client_secret: changeme
                         # OIDC Client authentication mode.
                         # Valid values: BASIC, POST, or NONE
                         sso_oidc_client_authentication_method: BASIC
                         # OIDC Provider uri
                         # WebGUI queries <issuer-uri>/.well-known/openid-configuration to get OIDC provider metadata
                         # issuer ssl certificate is not trusted by default. So import issuer ssl certificate into config/webgui.jks
                         # issuer should be reachable from both back-end and front-end
                         sso_oidc_provider_issuer_uri: https://127.0.0.1:9031
        
                         # Place the sso provider issuer-certificate in the following path => <installation_path>/pingidentity/certs/webgui/
                         # Name of the file should be => webgui-sso-oidc-provider.crt
        
                         # claim name for unique id of the user in UserInfo response
                         # a new user is provisioned using this unique id value
                         sso_oidc_provider_user_uniqueid_claim_name: sub
                         # claim name for first name of the user in UserInfo response
                         # either first name or last name can be empty, but both should not be empty
                         sso_oidc_provider_user_first_name_claim_name: given_name
                         # claim name for last name of the user in UserInfo response
                         # either first name or last name can be empty, but both should not be empty
                         sso_oidc_provider_user_last_name_claim_name: family_name
                         # claim name for role of the user in UserInfo response
                         sso_oidc_provider_user_role_claim_name: role
                         # additional scopes in authorization request
                         # multiple scopes should be comma (,) separated
                         # openid,profile scopes are always requested
                         sso_oidc_client_additional_scopes:
                          # End of sso configuration
        
                         # ssl key store password of webgui hosts
                         server_ssl_key_store_password: "changeme"
                         server_ssl_key_alias: webgui
        
                         # local h2 db datasource properties
                         h2_db_password: changeme
                         h2_db_encryption_password: changeme
        
                         # allowed values: abs/pingaccess/axway
                         discovery_source: abs
                         # allowed values: auto/manual
                         discovery_mode: auto
                         # value is in minutes
                         discovery_mode_auto_polling_interval: 10
                         discovery_mode_auto_delete_non_discovered_apis: false
        
                         # valid only if discovery_source is set to pingaccess
                         pingaccess_url: https://127.0.0.1:9000/
                         pingaccess_username: Administrator
                         pingaccess_password:
        
                         # valid only if discovery_source is set to axway
                         axway_url: https://127.0.0.1:8075/
                         axway_username: apiadmin
                         axway_password:
           pvc :
             volume_home_mount_path: /opt/pingidentity
             accessModes: ReadWriteOnce
             elasticsearch_data :
               pvc_type: gp2
               pvc_size: 50Gi
             webgui_data :
               pvc_type: gp2
               pvc_size: 50Gi
           resources:
                   limits:
                     cpu: "6"
                     memory: 16G
                   requests:
                     cpu: "2"
                     memory: 6G
           external_service :
             type : "LoadBalancer"
             https_Port : 443
         #ASE Deployment Configuration
         ase :
           image : pingidentity/ase:5.1
           # Define ports for the PingIntelligence API Security Enforcer
           http_ws_port: 8000
           https_wss_port: 8443
           management_port: 8010
           cluster_manager_port: 8020
           replicas : 1
           terminationGracePeriodSeconds : 60
           environment_variables :
                         # Deployment mode for ASE. Valid values are inline or sideband
                         mode : "inline"
                         enable_cluster : "false"
                         enable_abs : "true"
                         # Password for ASE keystore
                         keystore_password: asekeystore
                         # enable keepalive for ASE in sideband mode
                         enable_sideband_keepalive : "false"
                         enable_ase_health : "false"
                         # Set this value to true, to allow API Security Enforcer to fetch attack list from ABS.
                         enable_abs_attack : "true"
                         # Set this value to true, to allow API Security Enforcer to fetch published API list from ABS
                         enable_abs_publish : "false"
                         #This value determines how often API Security Enforcer will get published API list from ABS.
                         abs_publish_request_minutes : 10
                         # enable strict parsing checks for client requests
                         # If enabled, ASE will block request with invalid header start
                         # If disabled, it will allow requests
                         enable_strict_request_parser : "true"
                         # cluster_secret_key for ASE cluster
                         cluster_secret_key: yourclusterkey
                         # CLI admin password
                         ase_secret_key: admin
           pvc :
             volume_home_mount_path: /opt/pingidentity
             accessModes: ReadWriteOnce
             ase_data :
               pvc_type: gp2
               pvc_size: 10Gi
             ase_logs:
               pvc_type: gp2
               pvc_size: 100Gi
             ase_config:
               pvc_type: gp2
               pvc_size: 1Gi
           resources:
                   limits:
                     cpu: "4"
                     memory: 8G
                   requests:
                     cpu: "1"
                     memory: 4G
           external_service :
             expose_external_service: true
             type : "LoadBalancer"
             http_Port : 8000
             https_Port : 8443
        
         #Kafka Deployment Configuration
         kafka :
    # Flag to install the kafka and zookeeper pods as part of the setup; to use an external kafka, set this to false and put the kafka URL in the abs kafka_server setting.
           install_kafka : true
           image : pingidentity/kafka:5.1
           replicas : 1
           # Define ports for the Kafka
           sasl_port : 9093
           ssl_port: 9092
           environment_variables :
                         #Zookeeper Service host:port.
                         zookeeper_url: zookeeper:2182
                         #Enable delete topic
                         delete_topic: true
           resources:
                   limits:
                     cpu: "2"
                     memory: 8G
                   requests:
                     cpu: "1"
                     memory: 4G
           pvc :
             volume_home_mount_path: /opt/pingidentity
             accessModes: ReadWriteOnce
             data_volume :
               pvc_type: gp2
               pvc_size: 10Gi
             log_volume:
               pvc_type: gp2
               pvc_size: 1Gi
           external_service :
             expose_external_service: false
             type : "LoadBalancer"
             sasl_port : 9093
             ssl_port: 9092
          #Zookeeper Deployment Configuration
         zookeeper :
           image : pingidentity/zookeeper:5.1
           replicas: 1
           # Define ports for the Zookeeper
           port : 2181
           ssl_port: 2182
           resources:
                   limits:
                     cpu: "2"
                     memory: 8G
                   requests:
                     cpu: "1"
                     memory: 4G
           pvc :
             volume_home_mount_path: /opt/pingidentity
             accessModes: ReadWriteOnce
             data_volume :
               pvc_type: gp2
               pvc_size: 10Gi
             data_log_volume :
               pvc_type: gp2
               pvc_size: 100Gi
             log_volume:
               pvc_type: gp2
               pvc_size: 1Gi
           external_service :
             expose_external_service: false
             type : "LoadBalancer"
             ssl_port: 2182
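
After either installation path, the release itself can be confirmed with Helm. A minimal check, assuming the pi4api release name and pingidentity namespace used in the commands above:

```shell
NS=pingidentity
RELEASE=pi4api

# Show the release status and the list of deployed releases
# (requires helm and access to the cluster)
if command -v helm >/dev/null 2>&1; then
  helm status "$RELEASE" -n "$NS"
  helm list -n "$NS"
fi
```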

Next steps

Verify that the deployment is successful by entering the following command:

kubectl get pods -n pingidentity

Below is an example of what you should see:

NAME           READY   STATUS    RESTARTS   AGE
abs-0          1/1     Running   0          3d
apipublish-0   1/1     Running   1          3d
ase-0          1/1     Running   0          3d
dashboard-0    1/1     Running   0          3d
kafka-0        1/1     Running   0          3d
mongo-0        1/1     Running   0          3d
zookeeper-0    1/1     Running   0          3d

Fetch the addresses of the ASE, ABS, and Dashboard services by entering the following command:

kubectl get svc -n pingidentity

Below is an example of what you should see:

NAME                          TYPE           CLUSTER-IP       EXTERNAL-IP                                                                    PORT(S)                         AGE
abs-internal-service          ClusterIP      None             <none>                                                                         8080/TCP                        3d
apipublish-internal-service   ClusterIP      None             <none>                                                                         8050/TCP                        3d
ase-external-service          LoadBalancer   10.100.249.102   a0f15298c7d7d42f183605d73258ebb1-2044570848.ap-northeast-2.elb.amazonaws.com   8000:30180/TCP,8443:31961/TCP   3d
ase-internal-service          ClusterIP      None             <none>                                                                         8020/TCP,8010/TCP               3d
dashboard-external-service    LoadBalancer   10.100.205.84    aa08fa369b08a4ed997a9371faf4418c-349939151.ap-northeast-2.elb.amazonaws.com    443:32068/TCP                   3d
kafka                         ClusterIP      10.100.198.185   <none>                                                                         9092/TCP,9093/TCP               3d
mongo-internal-service        ClusterIP      None             <none>                                                                         27017/TCP                       3d
zookeeper                     ClusterIP      10.100.59.16     <none>                                                                         2182/TCP,2181/TCP               3d

If you are deploying in sideband mode, use the external address of the ase-external-service (or its NodePort) in the API gateway integration.
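
The ASE external address can also be pulled directly with a jsonpath query. A sketch, assuming the service name shown in the output above and an AWS load balancer that reports a hostname:

```shell
NS=pingidentity

# Print the load balancer hostname of the ASE external service
if command -v kubectl >/dev/null 2>&1; then
  kubectl get svc ase-external-service -n "$NS" \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
fi
```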