This guide describes the deployment of PingIntelligence for APIs in a sideband configuration with NGINX. PingIntelligence policy modules are installed in NGINX and pass API metadata to PingIntelligence for detailed API activity reporting and attack detection with optional client blocking.

The following steps describe the traffic flow through NGINX and the PingIntelligence for APIs components; a sketch of the two sideband calls follows the list.

  1. The client sends an incoming request to NGINX.
  2. NGINX makes an API call to send the request metadata to ASE.
  3. ASE checks the request against its list of registered APIs and looks for the origin IP, cookie, OAuth2 token, or API key in the PingIntelligence AI engine-generated blacklist. If all checks pass, ASE returns a 200-OK response to NGINX. If not, a different response code is sent to NGINX. ASE also logs the request information and sends it to the AI engine for processing.
  4. If NGINX receives a 200-OK response from ASE, then it forwards the request to the backend server. Otherwise, NGINX optionally blocks the client.
  5. NGINX receives the response from the backend server.
  6. NGINX makes a second API call to pass the response information to ASE, which sends the information to the AI engine for processing.
  7. ASE receives the response information and sends a 200-OK to NGINX.
  8. NGINX sends the response received from the backend server to the client.
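
The two sideband calls in steps 2 and 6 are metadata POSTs whose status code drives the forward-or-block decision. The minimal Python sketch below mimics that exchange from the gateway's point of view. It is illustrative only: the host, endpoint paths, ASE-Token header, and payload field names are assumptions made for this example, not the documented ASE sideband API, and in a real deployment the PingIntelligence policy modules inside NGINX make these calls.

```python
# Illustrative sketch only. Endpoint paths, header names, and payload fields
# below are assumptions for demonstration; consult the ASE sideband API
# documentation for the actual interface.
import requests

ASE_URL = "https://ase.example.com:8443"   # assumed ASE sideband address
ASE_TOKEN = "replace-with-sideband-token"  # assumed shared-secret header value


def check_request_with_ase(method, url, headers, source_ip):
    """Send request metadata to ASE (step 2) and return True if it may be forwarded (step 4)."""
    metadata = {
        "source_ip": source_ip,   # origin IP checked against the blacklist
        "method": method,
        "url": url,
        "headers": headers,       # may carry a cookie, OAuth2 token, or API key
    }
    resp = requests.post(
        f"{ASE_URL}/sideband/request",        # assumed path
        json=metadata,
        headers={"ASE-Token": ASE_TOKEN},
        timeout=2,
    )
    # 200-OK means the API is registered and the client is not blacklisted,
    # so the gateway forwards the request to the backend server; any other
    # code means the gateway may block the client.
    return resp.status_code == 200


def report_response_to_ase(status_code, response_headers, latency_ms):
    """Pass backend response information to ASE (step 6); ASE acknowledges with 200-OK (step 7)."""
    metadata = {
        "response_code": status_code,
        "response_headers": response_headers,
        "latency_ms": latency_ms,
    }
    resp = requests.post(
        f"{ASE_URL}/sideband/response",       # assumed path
        json=metadata,
        headers={"ASE-Token": ASE_TOKEN},
        timeout=2,
    )
    return resp.status_code == 200
```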