Install the plugin on all the Kong nodes that you want to integrate with PingIntelligence. You can apply the plugin at the global level or the per-service level in both the DB-less and database modes of the Kong API gateway. For more information on Kong's DB-less and database modes, see the Kong documentation.

The following is a high-level list of the PingIntelligence plugin's features:

  • You can apply the plugin at the global or per-service level for both database and DB-less mode (see the configuration sketch after this list).
  • The plugin supports keep-alive connections.
  • You can configure API Security Enforcer (ASE) primary and secondary nodes for failover. If neither the primary nor the secondary node is available, the plugin routes the connection directly to the backend servers.

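As an illustration, the following Python sketch shows one way to apply the plugin in each mode: through Kong's Admin API in database mode, and by generating a declarative kong.yml fragment in DB-less mode. The plugin name pingintelligence and the configuration keys ase_primary_host and ase_secondary_host are placeholders, not the parameter names shipped with the plugin; substitute the values documented for your plugin version.

    # Sketch only: the plugin name and config keys are placeholders, not the
    # actual parameter names shipped with the PingIntelligence plugin.
    import requests
    import yaml

    KONG_ADMIN = "http://localhost:8001"      # default Kong Admin API address
    PLUGIN_NAME = "pingintelligence"          # placeholder plugin name
    PLUGIN_CONFIG = {                         # placeholder ASE failover settings
        "ase_primary_host": "ase-primary.example.com:8443",
        "ase_secondary_host": "ase-secondary.example.com:8443",
    }

    def enable_globally():
        """Database mode: enable the plugin for all services via the Admin API."""
        resp = requests.post(f"{KONG_ADMIN}/plugins",
                             json={"name": PLUGIN_NAME, "config": PLUGIN_CONFIG})
        resp.raise_for_status()
        return resp.json()

    def enable_for_service(service_name):
        """Database mode: enable the plugin for a single service."""
        resp = requests.post(f"{KONG_ADMIN}/services/{service_name}/plugins",
                             json={"name": PLUGIN_NAME, "config": PLUGIN_CONFIG})
        resp.raise_for_status()
        return resp.json()

    def declarative_fragment(service_name=None):
        """DB-less mode: build the equivalent plugins entry for kong.yml."""
        entry = {"name": PLUGIN_NAME, "config": PLUGIN_CONFIG}
        if service_name:
            entry["service"] = service_name   # omit this key for a global plugin
        return yaml.safe_dump({"_format_version": "3.0", "plugins": [entry]})

    if __name__ == "__main__":
        print(declarative_fragment("example-service"))

In DB-less mode, the generated fragment would be merged into the declarative configuration file referenced by declarative_config in kong.conf.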
The following diagram shows the logical setup of PingIntelligence and Kong API gateway:

A diagram of the PingIntelligence and Kong API gateway setup.

Traffic flows through the Kong API gateway and the PingIntelligence components as follows (a simplified code sketch follows this list):

  1. The client sends an incoming request to Kong.
  2. Kong makes an API call to send the request metadata to ASE.
  3. ASE checks the request against its set of registered APIs and looks up the client identifiers on the deny list generated by the PingIntelligence AI engine. If all checks pass, ASE returns a 200 OK response to Kong; otherwise, it returns a different response code. ASE also logs the request information and sends it to the AI engine for processing.
  4. If Kong receives a 200 OK response from ASE, it forwards the request to the backend server. Kong blocks a request only when ASE returns a 403 error code.
  5. Kong receives the response from the backend server.
  6. Kong makes a second API call to pass the response information to ASE, which sends the information to the AI engine for processing.
  7. ASE receives the response information and sends a 200 OK to Kong.
  8. Kong sends the response received from the backend server to the client.
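The following Python sketch models the sideband logic described in the steps above: a request-metadata call to ASE, a block decision only on a 403, forwarding to the backend on 200 OK, and a second call carrying the response metadata. It is a simplified illustration, not the plugin's actual source; the ASE endpoint paths (/ase/request, /ase/response), the metadata fields, and the primary-to-secondary failover behavior are written here as assumptions.

    # Simplified model of the sideband flow; the ASE endpoint paths, metadata
    # fields, and failover behavior are assumptions, not the plugin's code.
    import requests

    ASE_NODES = ["https://ase-primary.example.com:8443",      # primary node
                 "https://ase-secondary.example.com:8443"]    # secondary node
    BACKEND = "https://backend.example.com"

    def call_ase(path, metadata):
        """Try the primary ASE node, then the secondary; None if both fail."""
        for node in ASE_NODES:
            try:
                return requests.post(node + path, json=metadata, timeout=2)
            except requests.RequestException:
                continue                  # node unreachable, try the next one
        return None                       # neither node is available

    def handle_client_request(method, path, headers, body):
        # Steps 2-4: send the request metadata to ASE and act on the verdict.
        verdict = call_ase("/ase/request",            # assumed sideband path
                           {"method": method, "path": path, "headers": headers})
        if verdict is not None and verdict.status_code == 403:
            return 403, "Blocked by PingIntelligence" # only a 403 blocks
        # 200 OK, or no ASE node available (fail open): forward to the backend.
        backend_resp = requests.request(method, BACKEND + path,
                                        headers=headers, data=body)
        # Steps 6-7: report the response metadata to ASE for the AI engine.
        call_ase("/ase/response",                     # assumed sideband path
                 {"status": backend_resp.status_code,
                  "headers": dict(backend_resp.headers)})
        # Step 8: return the backend response to the client.
        return backend_resp.status_code, backend_resp.text

Blocking only on a 403 (step 4) and routing to the backend when neither ASE node is available (as noted in the feature list) mirror the behavior described above.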