Configuring Splunk

Configure the data inputs for Splunk.

  1. Start Splunk.
  2. Click Settings and then click Data Inputs.
  3. Under Local Inputs, click Scripts.
  4. Select New Local Script, and edit the values as appropriate. For example, to run the script at five-minute intervals, enter */5 * * * * for Interval. These values correspond to a scripted-input stanza in inputs.conf; see the sketch after this procedure.
  5. If you created a specific index for the script, click More settings, and then specify the index.
  6. You can also add a local props.conf for your application. For example, the following props.conf matches the source type used in this example and sets TIMESTAMP_FIELDS to createdAt.
    $SPLUNK_HOME/etc/apps/search/local/props.conf
    [ping-audit-events]
    DATETIME_CONFIG =
    INDEXED_EXTRACTIONS = json
    NO_BINARY_CHECK = true
    TIMESTAMP_FIELDS = createdAt
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%l
    TZ = GMT
    category = Structured
    description = json audit events
    pulldown_type = 1                   
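
    For reference, the UI settings above map to a scripted-input stanza in inputs.conf. The following is a minimal sketch that assumes the Source type override from step 4 matches the props.conf stanza; the script name pingone_pull.py and the index main are placeholders, not values from this example.

    $SPLUNK_HOME/etc/apps/search/local/inputs.conf
    [script://$SPLUNK_HOME/etc/apps/search/bin/pingone_pull.py]
    interval = */5 * * * *
    sourcetype = ping-audit-events
    index = main
    disabled = 0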

    The script runs at the specified interval and creates a status.json file in the same directory as the script. This file tracks requested and finished polls, as shown in the following example.

    {
        "finished": [
             [
                "2018-12-11T17:58:00.000Z",
                "2018-12-12T19:05:00.000Z"
             ]
        ],
        "requested": []
    }

    As new requests come in, they enter the requested node, and then move to the finished node when complete.

    To fill in any gaps, you can manually add a given range to the requested node, and it will be picked up at the next interval. The finished node is pruned at the end of every round so that contiguous intervals such as [1,2], [2,3] are merged into [1,3].
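
    As an illustration of that pruning step, the following is a minimal Python sketch; merge_finished is a hypothetical name, not a function in the actual script. Because ISO 8601 timestamps sort lexicographically, plain string comparison is sufficient.

    def merge_finished(intervals):
        # Merge contiguous or overlapping [start, end] pairs so that,
        # for example, [1,2] and [2,3] collapse into [1,3].
        merged = []
        for start, end in sorted(intervals):
            if merged and start <= merged[-1][1]:
                # Contiguous with the previous interval: extend it.
                merged[-1][1] = max(merged[-1][1], end)
            else:
                merged.append([start, end])
        return merged

    merge_finished([
        ["2018-12-11T17:58:00.000Z", "2018-12-12T19:05:00.000Z"],
        ["2018-12-12T19:05:00.000Z", "2018-12-13T08:00:00.000Z"],
    ])
    # [["2018-12-11T17:58:00.000Z", "2018-12-13T08:00:00.000Z"]]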

    If the requested node contains more than one interval, the script picks them up one at a time on its scheduled run.
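
    The following is a hypothetical sketch of that lifecycle, illustrating the behavior described here rather than the actual script's code: each requested interval is attempted in order and moved to finished only if the poll succeeds.

    import json

    def run_poll(status_path, poll):
        # poll(start, end) fetches audit events for the interval and
        # returns True on success.
        with open(status_path) as f:
            status = json.load(f)
        for interval in list(status["requested"]):
            if poll(interval[0], interval[1]):
                # Success: move the interval to the finished node.
                status["requested"].remove(interval)
                status["finished"].append(interval)
            # On failure the interval stays in requested and is
            # retried on the next scheduled run.
        with open(status_path, "w") as f:
            json.dump(status, f, indent=4)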

    If a run fails to finish for any reason, such as a network error, its interval is not removed from the requested node and is retried on the next scheduled run, which guarantees no gaps in the data. However, retries can generate duplicates in your index. To remove duplicates from your search results, add | dedup id to your query.
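
    For example, assuming events are indexed with the sourcetype from the earlier props.conf (the index name main is a placeholder):

    index=main sourcetype=ping-audit-events | dedup id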