
Pega Cloud Log Streaming

Frequently Asked Questions

What logs are available through the Cloud Log Streaming services?

Read this document to learn about available log types. 

Read this page in Pega Academy for additional context about each log type.  

Log Streaming to an S3 bucket

No, S3 Log Streaming is an add-on service that you can request at any time. Follow these instructions to configure log streaming. 

 

Logs stream in near-real time, reaching the S3 bucket within 60 seconds of event generation. 

No, logs stay within AWS private VPCs between Pega Cloud and the client S3 bucket. 

This is not necessary as traffic never traverses public IP space. All traffic remains within the AWS VPCs/subnets. 

Pega Cloud uses Fluentbit to stream logs to client-managed S3 buckets. Fluentbit uses TLS to ensure logs are encrypted in transit. Logs are also encrypted in the client S3 bucket using a client-managed KMS CMK (AWS Key Management Service Customer Master Key). The client-managed KMS key must include a policy that grants the Pega-managed log streaming IAM role permission to generate data keys for encrypting logs stored in the S3 bucket. This log streaming security architecture was designed with and approved as a best practice by Amazon Web Services. 
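As an illustration, that grant can be expressed as a statement in the key policy of the client-managed CMK. The account ID, role name, and statement ID below are placeholders, not the actual Pega-managed role:

```json
{
  "Sid": "AllowPegaLogStreamingDataKeyGeneration",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:role/pega-log-streaming-role"
  },
  "Action": "kms:GenerateDataKey*",
  "Resource": "*"
}
```

Note that in a KMS key policy, "Resource": "*" refers to the key that the policy is attached to, not to all keys in the account.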

The CMK (Customer Master Key) in AWS KMS is used to encrypt the data encryption keys that actually encrypt the log files in the client-managed S3 bucket. This is part of Server Side Encryption with AWS KMS (SSE-KMS). Here's a more detailed breakdown of what's happening:

  1. Fluentbit sends log data to client S3 bucket via TLS encryption
  2. S3 receives the data and uses the CMK to generate a data key via the kms:GenerateDataKey* API
  3. Data key encrypts the log objects
  4. The data key itself is encrypted with the CMK and stored alongside object metadata
  5. When a log object is accessed, S3 uses the CMK to decrypt the data key, which is then used to decrypt the log object

All this ensures data is encrypted at rest in the client S3 bucket. Only authorized roles (e.g. Pega-managed IAM roles) can use the CMK to encrypt/decrypt data. 
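The five steps above can be sketched in plain Python. This is a conceptual toy, not real cryptography: XOR stands in for AES-256, and a local byte string stands in for the CMK that, in reality, never leaves KMS:

```python
# Toy sketch of SSE-KMS envelope encryption (XOR stands in for AES-256).
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Stand-in cipher: XOR each byte of data with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

cmk = secrets.token_bytes(32)            # the CMK, held inside KMS

# Step 2: kms:GenerateDataKey returns a fresh data key in two forms:
data_key = secrets.token_bytes(32)       # plaintext form
encrypted_data_key = xor(data_key, cmk)  # form encrypted under the CMK

# Step 3: the plaintext data key encrypts the log object
log_object = b"2024-01-01 12:00:00 INFO example log line"
ciphertext = xor(log_object, data_key)

# Step 4: S3 stores ciphertext + encrypted_data_key; the plaintext key is discarded
del data_key

# Step 5: on access, the CMK decrypts the data key, which decrypts the object
recovered_key = xor(encrypted_data_key, cmk)
assert xor(ciphertext, recovered_key) == log_object
```

Because the object is stored with only the encrypted form of its data key, reading it back always requires access to the CMK, which is what keeps the data unreadable without KMS permissions.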

 

Refer to this AWS article on downloading encrypted files from an S3 bucket. 

Use the gunzip utility to decompress the gzip-compressed log files after downloading them from the S3 bucket. 
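If you prefer scripting the decompression, Python's standard gzip module does the same job; the file name here is illustrative:

```python
# Decompress a gzip-compressed log file, as gunzip would.
import gzip

# Create a sample compressed file standing in for a downloaded log object.
with gzip.open("sample.log.gz", "wt") as f:
    f.write("2024-01-01 12:00:00 INFO sample log line\n")

# Read it back decompressed (the equivalent of `gunzip sample.log.gz`).
with gzip.open("sample.log.gz", "rt") as f:
    text = f.read()

print(text, end="")
```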

No, Pega sends the logs in RAW format to the client S3 bucket. Each log message is associated with an event. This ensures minimal latency in log delivery and prevents Pega from reading your log data. You may configure a custom Lambda function to re-organize logs within the S3 bucket after they are received. 
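As a sketch of such a Lambda function, the routine below re-partitions object keys by log type. The key layout, folder names, and log-type matching are assumptions for illustration, not the actual Pega naming scheme:

```python
# Hypothetical re-organization step for a post-delivery Lambda:
# group each delivered log object under a folder named after its log type.

LOG_TYPES = ("PegaRULES", "PegaCLUSTER", "localhost_access_log")

def route_key(key: str) -> str:
    """Return a destination key that groups the object by log type."""
    folder, _, name = key.rpartition("/")
    for log_type in LOG_TYPES:
        if log_type in name:
            return f"{folder}/{log_type}/{name}"
    return f"{folder}/other/{name}"

# A real handler would then copy and delete via boto3, for example:
#   s3.copy_object(Bucket=bucket, Key=route_key(key),
#                  CopySource={"Bucket": bucket, "Key": key})
#   s3.delete_object(Bucket=bucket, Key=key)

print(route_key("dt1/abc123-PegaRULES.log.gz"))
```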

 

 

Yes, these are available on all versions of Pega Cloud 2.23 and later. 

 

S3 Log Streaming should always be configured for each client environment. 

 

Fluentbit delivers logs in batches to the target S3 bucket and generates a unique name for each log file. A log file can contain logs of any type (PegaRULES, PegaCLUSTER, localhost_access_log, and so on). Fluentbit writes log files to S3 in the order that the logs are generated in the Infinity application. 

Fluentbit creates a top-level folder with your environment name (dt1, dt2, etc.). For any Lambda processor failures, it creates a folder named "processing-failures-<foldername>". Fluentbit reattempts processing of the failed logs for 24 hours and delivers them to the correct destination if successful. Review the Pega S3 Log Streaming documentation to learn about more detailed folder structure options. 

Yes, you can contact Pega Support to specify a list of log types that you want to exclude. Refer to this document to see the available log types to exclude. 

Log Streaming to Splunk

No upgrade or patch is required as long as you are on Pega Cloud 2.20 or later. 

No, we suggest the following if you wish to distinguish logs by environment: 

  1. Assign an HEC token per environment. 

  2. In the Splunk configuration, associate the HEC token with the environment, and add that environment to the sourcetype when the message arrives in Splunk. 
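For example, the Splunk-side configuration might look like the following inputs.conf fragment; the token names, GUIDs, and sourcetype values are placeholders:

```
# One HEC token per Pega Cloud environment, each with its own sourcetype,
# so events can be distinguished by environment on arrival.
[http://pega-dt1]
token = 11111111-1111-1111-1111-111111111111
sourcetype = pega:cloud:dt1

[http://pega-prod1]
token = 22222222-2222-2222-2222-222222222222
sourcetype = pega:cloud:prod1
```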

 

The most common issue relates to IP allow-listing. It’s likely that your load balancer is restricting Pega log traffic to your Splunk instance. Engage Pega Support to get a list of IP addresses (also called NAT IPs) to allow-list on your load balancer. 

 

Yes, this is supported. 

Yes, this is supported.  

Yes, this is supported.  

Pega only supports sending log events in JSON format. RAW and other formats are not currently supported.  

Refer to Splunk Documentation or reach out to your Splunk TAM. Pega does not provide Splunk support after the logs have been received by Splunk.  

  {
    "source": <InstanceID + Tier + Pega Log Type>,
    "event": <log message content>,
    "time": <timestamp when the message was delivered>
  }

Pega Cloud 2 Environments

Pega logs are produced by the application on an AWS EC2 instance and immediately forwarded to CloudWatch. Pega then uses an AWS Lambda blueprint to retrieve the client logs from Pega-managed AWS CloudWatch and stream them to the client Splunk instance via an HTTP Event Collector (HEC) connection. 

Pega Cloud 3 Environments

Pega logs are produced by the application running inside a Kubernetes cluster. Fluentbit takes these logs and streams them in near-real-time to a client-managed Splunk instance via HTTP Event Collector (HEC) connection. 

The log event arrives with a JSON content type, but the log messages from the Pega application use different formats depending on the log type. 

For Pega Cloud 3 environments, all messages are JSON formatted.  

For Pega Cloud 2 environments, see the following table for the message format: 

Log Type                     Message Format 
PegaRULESV2                  Plain text 
PegaRULES-SecurityEvent      JSON 
PegaRULES-DATAFLOW           Log4j 
PegaRULES-ALERT              Plain text 
PegaRULES-ALERTSECURITY      Plain text 
PegaRULES-CLUSTER            Log4j 
PegaBIX                      Plain text 
localhost_access_log         Plain text 

When creating the HEC token, ensure that Indexer Acknowledgment is unchecked. 
