Updating your log-streaming services
In the July 2022 release, PingOne Advanced Services added a configurable log-streaming pipeline, which makes it possible to customize how log data is streamed. You can filter streamed data by application, log, and keyword, and modify the JSON output.
If you have not yet migrated to the new streaming process, be aware of the formatting changes outlined here. The format is based on logstash output filters and can be customized to meet your needs.
Here is an example of a log event at the input:
{"@timestamp":"2022-07-14T12:00:10.728763Z",
"message":"All pods in namespace ingress-nginx-public are running |
PASS |\n",
"log_type":"customer_out",
"time":"2022-07-14T12:00:04.314969694Z",
"kubernetes":
{"container_hash":"public.ecr.aws/r2h3l6e4/pingcloud-services/robot-framework@sh
a256:e64b3beb9c23d655f8542e685f0c68c01178498f4b226294f36773832dd1cb48",
"container_image":"public.ecr.aws/r2h3l6e4/pingcloud-services/robot-framework:v1
.3.0",
"docker_id":"9944534ac7556566a40a9f40331e5d07b0cf4244cca2a5a07e6f4b83d0de69a9",
"labels":
{
"app":"ping-cloud",
"controller-uid":"ea007809-075f-4cd6-9955-ad69c94ae190",
"job-name":"healthcheck-cluster-health-27630000"
},
"container_name":"healthcheck-cluster-health",
"host":"ip-10-254-1-222.us-west-2.compute.internal",
"pod_id":"05dc5c68-852e-40f9-93a8-13a24502d545",
"namespace_name":"ping-cloud-antonklyba",
"pod_name":"healthcheck-cluster-health-27630000-4fztv"},
"host":"10.254.12.248",
"@version":"1",
"stream":"stdout",
"log_group":"application"
}
This event contains the following fields:
- @timestamp: Indicates when logstash processed the event.
- log or message: The log line itself. log is generated by Ping Identity applications, and message by all other types of applications and sidecars.
- log_type: Provides internal labels for all events sent to the pipeline.
- time: The date and time when the log was captured, which can differ from the date and time it was generated.
- kubernetes: A nested JSON object containing Kubernetes metadata.
- host: The internal IP address of the fluent-bit pod that sent the event to logstash.
- @version: An internal logstash field that is always "1".
- stream: The stream where the log was captured, either stdout (standard output) or stderr (standard error).
- log_group: An internal label that is always "application".
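As an illustration of the filtering mentioned above, a minimal logstash filter sketch such as the following could drop unwanted events before they are streamed. The container name and keyword are illustrative assumptions based on the example event and should be adapted to your needs:

filter {
  # Drop noisy healthcheck output (container name taken from the example event above)
  if [kubernetes][container_name] == "healthcheck-cluster-health" {
    drop { }
  }
  # Drop any event whose log line contains the keyword DEBUG
  if [message] =~ /DEBUG/ {
    drop { }
  }
}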
If you do not apply filters and use the default output configuration, the results differ by destination:
- If you are exporting log files to Amazon CloudWatch or a generic HTTP endpoint, you receive a JSON file similar to the example.
- If you are exporting log files to an Amazon S3 bucket:
  - Ensure that the S3 output is appropriately configured. Use codec => "json" to produce a JSON file similar to the example (see the S3 output sketch after this list). If filters are not applied and this setting is not used, the S3 output captures only the @timestamp, host, and message fields from the event, which limits its usefulness.
  - By default, the following line is written to text files named ls.s3.${randomUUID}.${currentTime}.${tags}.${part}.txt:
    ${timestamp} ${host} ${message}
    (for example, "2022-07-14T12:00:04.314969694Z 10.254.12.248 All pods in namespace ingress-nginx-private are running | PASS |"). Filenames within S3 buckets can be prefixed to create directories, but the filenames themselves are hardcoded and cannot be changed.
- If you are exporting log files to Syslog, the output uses the RFC 3164 format by default:
  ${timestamp} ${host} ${process}: ${message}
  (for example, "Jul 14 12:00:04 10.254.12.248 LOGSTASH[-]: All pods in namespace ingress-nginx-private are running | PASS |")
  Ensure that the Syslog output is appropriately configured. Either use codec => "json" to send the whole JSON object in the ${message} field, or configure the message property to include specific fields (see the Syslog output sketch after this list).
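As an illustration of the S3 case above, here is a minimal output sketch, assuming a hypothetical bucket name, region, and prefix; adjust these to your environment:

output {
  s3 {
    region => "us-west-2"            # hypothetical AWS region
    bucket => "example-log-bucket"   # hypothetical bucket name
    prefix => "pingcloud-logs"       # optional key prefix used as a "directory" inside the bucket
    codec => "json"                  # write the full JSON event instead of the default plain-text line
  }
}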
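Similarly, here is a minimal Syslog output sketch, assuming a hypothetical collector host and standard UDP syslog on port 514; the commented message line shows the alternative of selecting specific fields:

output {
  syslog {
    host => "syslog.example.com"     # hypothetical syslog collector
    port => 514
    protocol => "udp"
    rfc => "rfc3164"                 # the default format described above
    facility => "user-level"
    severity => "informational"
    codec => "json"                  # send the whole JSON object as the message
    # Alternatively, instead of codec => "json", build the message from specific fields:
    # message => "%{log_type} %{[kubernetes][pod_name]} %{message}"
  }
}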