
Pega Cloud Log Streaming

Frequently Asked Questions

What logs are available through the Cloud Log Streaming services?

Read this document to learn about available log types. 

Read this page in Pega Academy for additional context about each log type.  

Log Streaming to an S3 bucket


No, S3 Log Streaming is an add-on service that you can request at any time. Follow these instructions to configure log streaming.

 


Logs stream in near-real time, reaching the S3 bucket within 60 seconds of event generation.


No, logs stay within AWS private VPCs between Pega Cloud and the client S3 bucket. 


This is not necessary because traffic never traverses public IP space; all traffic remains within the AWS VPCs and subnets.


Pega Cloud uses AWS Kinesis Firehose to stream logs to client S3 buckets. The logs are encrypted with the client-managed KMS CMK, and the client has full control over the associated IAM role responsible for protecting the S3 bucket.

Kinesis uses the CMK to encrypt logs in transit; the same key can also encrypt the logs at rest in the client S3 bucket.


Refer to this AWS article on downloading encrypted files from an S3 bucket. 


Use the gunzip utility to decompress the compressed files downloaded from the S3 bucket.
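If you prefer to decompress programmatically rather than with the gunzip CLI, Python's standard gzip module does the same job. This is a minimal sketch assuming the delivered files are gzip-compressed UTF-8 text; file paths and contents are illustrative:

```python
import gzip

def decompress_log(path: str) -> str:
    """Decompress a gzip-compressed log file downloaded from the S3 bucket."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return f.read()
```

This reads the whole file into memory; for very large log files, iterate over the file object line by line instead.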


No, Pega sends the logs in RAW format to the client S3 bucket. Each log message is associated with an event. This ensures minimal latency in log delivery and prevents Pega from reading your log data. You may configure a custom Lambda function to re-organize logs within the S3 bucket after they are received. 
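As a sketch of the routing logic such a custom Lambda might apply, the function below maps a delivered object key to a per-log-type destination prefix. The key layout, the "sorted/" prefix, and the log-type matching are illustrative assumptions, not the actual Pega delivery format:

```python
# Hypothetical post-delivery reorganizer: decide where to copy each object
# that Kinesis delivers, based on which log type its file name mentions.
LOG_TYPES = ("PegaRULES", "PegaCLUSTER", "localhost_access_log")

def route_key(key: str, default: str = "unclassified") -> str:
    """Return a destination key under a per-log-type prefix (assumed layout)."""
    name = key.rsplit("/", 1)[-1]
    for log_type in LOG_TYPES:
        if log_type in name:
            return f"sorted/{log_type}/{name}"
    return f"sorted/{default}/{name}"
```

In a real Lambda this function would drive an S3 copy-and-delete for each object in the event's records.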

 

 


Yes, these are available on all versions of Pega Cloud 2.23 and later. 

 


S3 Log Streaming should always be configured for each client environment. 

 


Kinesis Firehose collects Pega log events from AWS CloudWatch and delivers them in batches to the target S3 bucket. Kinesis generates a unique name for each log file. A log file can contain logs of any type (PegaRULES, PegaCLUSTER, localhost_access_log, etc.). Kinesis creates S3 log files in the order it receives the logs from CloudWatch.


Kinesis creates a top-level folder with your environment name (for example, dt1 or dt2). For any Lambda processor failures, it creates a folder called "processing-failures-<foldername>". Kinesis reattempts processing of the failed logs for 24 hours and delivers them to the correct destination if successful. Review the Pega S3 Log Streaming documentation to learn about the full folder structure.
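When scanning the bucket yourself, you can separate successfully delivered objects from failed ones by the folder naming convention described above. This helper is a minimal sketch assuming the "processing-failures-" prefix appears as a path segment; the exact layout may differ, so consult the Pega S3 Log Streaming documentation:

```python
def is_processing_failure(key: str) -> bool:
    """True if an S3 object key sits under a 'processing-failures-<foldername>'
    folder (assumed layout based on the naming convention described above)."""
    return any(part.startswith("processing-failures-") for part in key.split("/"))
```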


Yes, you can contact Pega Support to specify a list of log types that you want to exclude. Refer to this document to see the available log types to exclude. 

Log Streaming to Splunk


No upgrade or patch is required as long as you are on Pega Cloud 2.20 or later. 


No, we suggest the following if you wish to distinguish logs by environment: 

  1. Assign an HEC token per environment. 

  2. In your Splunk configuration, associate the HEC token with the environment, and add that environment to the sourcetype when the message arrives in Splunk. 
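The steps above can be sketched as follows. The token values, environment names, and the "pega:<env>" sourcetype are illustrative assumptions, not Pega's or Splunk's actual configuration; only the "Splunk <token>" Authorization header format and the event envelope follow Splunk HEC conventions:

```python
# Hypothetical per-environment HEC tokens (values are placeholders).
HEC_TOKENS = {
    "dt1": "11111111-aaaa-2222-bbbb-333333333333",
    "prod": "44444444-cccc-5555-dddd-666666666666",
}

def hec_request(environment: str, message: str) -> tuple[dict, dict]:
    """Build the headers and payload for a per-environment HEC request."""
    headers = {"Authorization": f"Splunk {HEC_TOKENS[environment]}"}
    payload = {"event": message, "sourcetype": f"pega:{environment}"}
    return headers, payload
```

Because each environment posts with its own token, Splunk can map the token back to an environment and tag the sourcetype accordingly on arrival.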

 


The most common issue relates to IP allow-listing. It's likely that your load balancer is restricting Pega log traffic to your Splunk instance. Engage Pega Support to get a list of IP addresses (also called NAT IPs) to allow-list on your load balancer. 

 


Yes, this is supported. 


Yes, this is supported.  


Yes, this is supported.  


Pega only supports sending log events in JSON format. RAW and other formats are not currently supported.  


Refer to Splunk Documentation or reach out to your Splunk TAM. Pega does not provide Splunk support after the logs have been received by Splunk.  


  {
    "source": <InstanceID + Tier + Pega Log Type>,
    "event": <log message content>,
    "time": <timestamp when the message was delivered>
  }
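For illustration, here is a minimal example of parsing an event with these three fields; the concrete values (instance ID, log line, timestamp) are invented, and the real "source" string follows whatever convention your environment produces:

```python
import json

# Illustrative HEC event using the three fields listed above.
raw = json.dumps({
    "source": "i-0abc123 + WebTier + PegaRULES",
    "event": "2024-01-01 12:00:00 INFO Sample log line",
    "time": 1704110400,
})

event = json.loads(raw)
print(event["source"])  # the instance/tier/log-type identifier
```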


Pega logs are produced by the application on an AWS EC2 instance and immediately forwarded to CloudWatch. Pega then uses an AWS Lambda blueprint to retrieve the client logs from a Pega-managed AWS CloudWatch instance and streams them to the client Splunk instance via HTTP Event Collector (HEC). 


The log event arrives with a JSON content type, but the log messages from the Pega application use different formats depending on the log type. 

For Pega Cloud 3 environments, all messages are JSON formatted.  

For Pega Cloud 2 environments, see the following table for the message format: 

Log Type                 | Message Format
-------------------------|---------------
PegaRULESV2              | Plain text
PegaRULES-SecurityEvent  | JSON
PegaRULES-DATAFLOW       | Log4J
PegaRULES-ALERT          | Plain text
PegaRULES-ALERTSECURITY  | Plain text
PegaRULES-CLUSTER        | Log4J
PegaBIX                  | Plain text
Local_host_access_log    | Plain text


When creating the HEC token, ensure that the Indexer Acknowledgment option is unchecked. 
