PingIntelligence Health Check Guide - PingIntelligence for APIs - 5.2

This section of the PingIntelligence Monitoring Guide provides administrators with a list of commands that can be used to perform health checks on different PingIntelligence components.

Multiple methods are explained for each component; you can automate the steps or run them manually. The document also captures information on log files, process ID (PID) details, and port details of the API Security Enforcer (ASE) nodes, the API Behavioral Security (ABS) artificial intelligence (AI) engine, and the PingIntelligence Dashboard.

See the following sections for the respective PingIntelligence components:

Performing health checks on ASE

You can use the following options to conduct a health check on ASE nodes:

  • To enable the ASE health check URL in the /pingidentity/ase/config/ase.conf file, set the enable_ase_health config property to true.

    The default value of enable_ase_health is false.

    1. If you modify the configuration on a running ASE node, restart the node for the change to take effect.

      For more information, see Starting and stopping ASE.

    2. In a clustered ASE environment, stop the ASE cluster, update the ase.conf file of the primary node, and then restart the other ASE nodes.

      For more information, see Restarting an ASE cluster.

    3. When enable_ase_health is set to true, perform a health check by going to either of the following URLs:
      • http://<ase-hostname/ip>:<http_port>/ase
      • https://<ase-hostname/ip>:<https_port>/ase

      If ASE is receiving traffic, the response is 200 OK.

  • To check the status of an ASE process, the running status of the HTTP or HTTPS processes, and the port numbers, run the status command-line interface (CLI) command:
    $ ./bin/cli.sh status

    This command also gives basic configuration information.

  • To show the status of communication between ABS and all the ASE nodes in a cluster, run the following command:
    $ ./bin/cli.sh -u admin -p admin abs_info

    The abs_info command shows the last log upload and attack fetch information from ABS. Any issues that ASE has uploading logs to ABS or connecting to ABS are reported in the command output.

  • If ASE is running as a systemctl service, use the following command to check the status of the service:
    $ systemctl status pi-ase.service
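The ASE checks above can be combined into a small automation sketch. This is illustrative only: the ASE_HOST, ASE_PORT, and ASE_HOME variables and the ase_health_verdict helper are naming assumptions, not part of the product, and the health URL responds only when enable_ase_health is set to true.

```shell
#!/usr/bin/env sh
# Sketch: automate the manual ASE health checks described above.
# ASE_HOST, ASE_PORT, and ASE_HOME are assumptions -- adjust for your deployment.
ASE_HOST="${ASE_HOST:-localhost}"
ASE_PORT="${ASE_PORT:-80}"
ASE_HOME="${ASE_HOME:-/pingidentity/ase}"

# Map the HTTP status code returned by the /ase health URL to a verdict.
ase_health_verdict() {
  [ "$1" = "200" ] && echo "healthy" || echo "unhealthy"
}

check_ase() {
  code=$(curl -k -o /dev/null -s -w "%{http_code}" "http://${ASE_HOST}:${ASE_PORT}/ase")
  echo "ASE health URL: $(ase_health_verdict "$code") (HTTP $code)"
  "${ASE_HOME}/bin/cli.sh" status                      # process and port status
  "${ASE_HOME}/bin/cli.sh" -u admin -p admin abs_info  # ABS connectivity
}

# Run only when explicitly requested, so the helpers can be sourced by other scripts.
if [ "${RUN_ASE_CHECK:-0}" = "1" ]; then
  check_ase
fi
```

Running the script with RUN_ASE_CHECK=1 executes all three manual checks in one pass, which makes it suitable for cron or an external monitoring agent.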

Performing health checks on the ABS AI engine

Use the following options to conduct a health check on the ABS AI engine:

  • To check the ABS Admin API, use the ABS Admin REST API either from the Postman Collection or with the curl command:
    $ curl -k -X GET 'https://<ABS Hostname/IP>:8080/v4/abs/admin' -H 'x-abs-ak: <ABS access key>' -H 'x-abs-sk: <ABS secret key>'
  • If the ABS AI engine is running as a systemctl service, use the following command to check the status of the service:
    $ systemctl status pi-abs.service
  • To check the ABS log for job failures, use the following command:
    $ grep allocated logs/abs/abs.log | grep failure

    If any failures are detected, reach out to the Ping Identity Support team.

  • Check the /logs/abs/abs.log file for MongoDB heartbeats, which are reported at regular intervals.

    This file indicates any ABS-to-MongoDB connectivity issues.
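The ABS checks above can likewise be scripted. A minimal sketch, under these assumptions: the ABS_HOST, ABS_ACCESS_KEY, ABS_SECRET_KEY, and ABS_LOG variables and the abs_job_failures helper are illustrative names, not part of the product.

```shell
#!/usr/bin/env sh
# Sketch: automate the ABS AI engine health checks described above.
# ABS_HOST, ABS_ACCESS_KEY, ABS_SECRET_KEY, and ABS_LOG are assumptions.
ABS_HOST="${ABS_HOST:-localhost}"
ABS_LOG="${ABS_LOG:-/pingidentity/abs/logs/abs/abs.log}"

# Count machine-learning job failures in abs.log (same grep as the manual check).
abs_job_failures() {
  grep allocated "$1" 2>/dev/null | grep -c failure
}

check_abs() {
  # Admin REST API: a 200 response indicates a reachable ABS node.
  curl -k -s -o /dev/null -w "ABS admin API: HTTP %{http_code}\n" \
    "https://${ABS_HOST}:8080/v4/abs/admin" \
    -H "x-abs-ak: ${ABS_ACCESS_KEY}" -H "x-abs-sk: ${ABS_SECRET_KEY}"
  failures=$(abs_job_failures "$ABS_LOG")
  if [ "$failures" -gt 0 ]; then
    echo "WARNING: $failures job failure line(s) in abs.log -- contact Ping Identity Support"
  fi
}

if [ "${RUN_ABS_CHECK:-0}" = "1" ]; then
  check_abs
fi
```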

Performing health checks on the PingIntelligence Dashboard

Use the following commands to check the health status of the PingIntelligence Dashboard and its components:

  • To check the health status of the Dashboard data engine:
    1. Run the status command to check the status of the Dashboard process:
      $ ./bin/cli.sh status

      It returns the status as Running or Not Running.

    2. If the Dashboard data engine is running as a systemctl service, use the following command to check the status of the service:
      $ systemctl status pi-data-engine
    3. To check the Dashboard data engine log for errors or exceptions, review the /pingidentity/dataengine/logs/admin/dataengine.log file to detect connectivity issues between the Dashboard data engine and ABS or Elasticsearch:
      $ tail logs/admin/dataengine.log
  • To check the health status of the WebGUI:
    1. To check if the WebGUI component is running, use the following health check URL in a browser or the curl command.

      Choose from:

      • The browser URL: https://<WebGUI Hostname/IP>:<port>/status
      • The curl command:
        $ curl -k -o /dev/null -s -w "%{http_code}\n" https://<webgui>:8030/status
        200

      A 200 OK response indicates that the component is running.

    2. To show the status of the WebGUI process, run the status command:
      $ ./bin/cli.sh status
    3. If the WebGUI is running as a systemctl service, use the following command to check the status of the service:
      $ systemctl status pi-webgui.service
    4. To check the WebGUI admin log file for errors or exceptions, review the /pingidentity/webgui/admin/logs/admin.log file to detect connectivity issues between the WebGUI and ABS or Elasticsearch:
      $ tail logs/admin/admin.log
  • To check the health status of Elasticsearch:
    1. To check the health status of Elasticsearch using a health check URL:

      Choose from:

      • Using anonymous access:
        1. To enable access for the anonymous user, add the following line to the elasticsearch.yml file:
          xpack.security.authc.anonymous.roles: monitoring_user

          You can update this during initial setup or later.

        2. If you are making the change on a running instance, restart Elasticsearch.
        3. After updating the elasticsearch.yml file, check the status of Elasticsearch by going to https://<Elasticsearch Hostname/IP>:9200/.
          Note:

          You can use a browser or the following curl command:

          $ curl -k -o /dev/null -s -w "%{http_code}\n" https://<Elasticsearch Hostname/IP>:9200/

        A 200 OK response indicates a running Elasticsearch.

      • Using a health check user:
        Note:

        This approach does not require an Elasticsearch restart.

        1. To add a health check user to Elasticsearch, run the following command:
          curl -u elastic:<elastic user password> -k -X POST "https://localhost:9200/_xpack/security/user/<health_check_user>?pretty" -H 'Content-Type: application/json' -d'
          {
            "password" : "<password for health_check_user>",
            "roles": ["monitoring_user"]
          }
          '
          
        2. After adding the health check user, check the status of Elasticsearch by going to https://<health_check_user>:<password>@<Elasticsearch hostname/IP>:9200/.
          Note:

          You can use a browser or the following curl command:

          $ curl -k -o /dev/null -s -w "%{http_code}\n" https://<health_check_user>:<password>@<Elasticsearch hostname/IP>:9200/
          

        A 200 OK response indicates a running Elasticsearch.

      • Using Elasticsearch username and password:
        • To query the health status of Elasticsearch with a more comprehensive output that also reports the state of the cluster, run the following curl command with the elastic user and its password:
          $ curl -XGET -k -H 'content-type: application/json; charset=UTF-8' -u "elastic:<password>" 'https://<elasticsearch hostname/IP>:9200/_cluster/health?pretty'
    2. To check the health status of Elasticsearch when it is running as a systemctl service, run the following command:
      $ systemctl status pi-elasticsearch.service
    3. To check the Elasticsearch log for errors or exceptions, run the following command:
      $ tail logs/elasticsearch.log
  • To check the health status of Kibana:
    1. To check the health status of Kibana using a health check URL:

      Choose from:

      • Using anonymous access:
        1. To enable anonymous access, add the following line to the kibana.yml file:
          status.allowAnonymous: true

          You can update this during initial setup or later.

        2. If you are making the change on a running instance, restart Kibana.
        3. After updating the kibana.yml file, check the status by going to https://<Kibana Hostname/IP>:5601/pi/ui/dataengine/api/status.
          Note:

          You can use a browser or the following curl command:

          $ curl -k -o /dev/null -s -w "%{http_code}\n" https://<Kibana Hostname/IP>:5601/pi/ui/dataengine/api/status
          

        A 200 OK response indicates a running Kibana instance.

      • Using a health check user:
        1. To add a health check user for Kibana, run the following command against Elasticsearch:
          curl -u elastic:<elastic user password> -k -X POST "https://localhost:9200/_xpack/security/user/<health_check_user>?pretty" -H 'Content-Type: application/json' -d'
          {
            "password" : "<password for health_check_user>",
            "roles": ["monitoring_user"]
          }
          '
          
        2. After adding the health check user, to check the status of Kibana, go to https://<health_check_user>:<password>@<Kibana hostname/IP>:5601/pi/ui/dataengine/api/status.
          Note:

          You can use a browser or the following curl command:

          $ curl -k -o /dev/null -s -w "%{http_code}\n" https://<health_check_user>:<password>@<Kibana hostname/IP>:5601/pi/ui/dataengine/api/status
          

        A 200 OK response indicates a running Kibana.

    2. To check the health status of Kibana when it is running as a systemctl service, run the following command to check the status of the service:
      $ systemctl status pi-kibana.service
    3. To check the Kibana log for errors or exceptions, run the following command:
      $ tail logs/kibana.log
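The Dashboard health URLs above lend themselves to a single polling script. A sketch only: the hostnames and the probe/verdict helpers are assumptions, and the Elasticsearch and Kibana URLs respond only after anonymous access or a health check user has been configured as described above.

```shell
#!/usr/bin/env sh
# Sketch: poll the Dashboard component health URLs described above.
# Hostnames are assumptions; ports are the defaults from this guide.

# Map an HTTP status code to up/down.
verdict() {
  [ "$1" = "200" ] && echo "up" || echo "down"
}

# Probe a health URL and print a one-line summary.
probe() {
  code=$(curl -k -o /dev/null -s -w "%{http_code}" "$2")
  echo "$1: $(verdict "$code") (HTTP $code)"
}

if [ "${RUN_DASHBOARD_CHECK:-0}" = "1" ]; then
  probe "WebGUI"        "https://${WEBGUI_HOST:-localhost}:8030/status"
  probe "Elasticsearch" "https://${ES_HOST:-localhost}:9200/"
  probe "Kibana"        "https://${KIBANA_HOST:-localhost}:5601/pi/ui/dataengine/api/status"
fi
```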

Logs, port numbers, and process IDs

Review this supplementary information on log file details, important port numbers, and process ID (PID) information of PingIntelligence for APIs components.

Log files

The following list shows the main log files of PingIntelligence components.

  • ASE: access, management, and audit logs
  • ABS AI engine: ABS logs
    Note:

    The abs.log file must be the first place to look when debugging any issue on ABS. The log has information about each machine learning job on the host. All incoming communication from ASE, the PingIntelligence Dashboard, and REST API requests is logged in this file. It also logs a periodic heartbeat to MongoDB.

  • PingIntelligence Dashboard:
      • Dashboard data engine: /pingidentity/dataengine/logs/dataengine.log
      • WebGUI: /pingidentity/webgui/logs/admin.log and /pingidentity/webgui/logs/sso.log
      • Elasticsearch: /pingidentity/elasticsearch/logs/elasticsearch.log
      • Kibana: /pingidentity/kibana/logs/kibana.log

Port numbers

The following list shows important port numbers used by PingIntelligence components.

  • ASE: see ASE ports.
  • ABS AI engine: see ABS ports.
  • PingIntelligence Dashboard:
      • PingIntelligence Dashboard server: 8030. Port 8030 should be exposed to the public internet; make sure that your organization's firewall allows access to this port.
      • Elasticsearch: 9200
      • Kibana: 5601
      • H2 database: 9092. The H2 database is installed and runs as part of the PingIntelligence Dashboard.

PID information

All PingIntelligence components have their respective PID files. Refer to these files for monitoring or to get the PIDs of the processes.

  • ASE: the /pingidentity/ase/logs/ase.pid file contains the PIDs for the controller process and the HTTP and HTTPS balancer processes.
  • ABS AI engine: the /pingidentity/abs/data/abs.pid file contains the PID for the main ABS process.
  • PingIntelligence Dashboard: there are separate PID files for the different components:
      • /pingidentity/dataengine/data/dataengine.pid
      • /pingidentity/webgui/logs/webgui.pid
      • /pingidentity/elasticsearch/logs/elasticsearch.pid
      • /pingidentity/kibana/logs/kibana.pid
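The PID files above can be checked programmatically. A sketch, assuming the pid_alive helper (an illustrative name, not part of the product) and the default installation paths; kill -0 tests whether the process exists without sending it a signal.

```shell
#!/usr/bin/env sh
# Sketch: verify that the process named in each PID file is still alive.
# Paths are the defaults listed above -- adjust for your installation.

# Print alive, dead, or missing for a given PID file.
pid_alive() {
  [ -f "$1" ] || { echo "missing"; return; }
  pid=$(head -n1 "$1" | tr -dc '0-9')   # first PID in the file
  if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
    echo "alive"
  else
    echo "dead"
  fi
}

for f in /pingidentity/ase/logs/ase.pid \
         /pingidentity/abs/data/abs.pid \
         /pingidentity/dataengine/data/dataengine.pid \
         /pingidentity/webgui/logs/webgui.pid \
         /pingidentity/elasticsearch/logs/elasticsearch.pid \
         /pingidentity/kibana/logs/kibana.pid; do
  echo "$f: $(pid_alive "$f")"
done
```

Note that the ASE PID file contains multiple PIDs; this sketch checks only the first one, so extend it if you want to verify the balancer processes as well.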