Question
LTIMindtree
SA
Last activity: 23 Jun 2024 2:31 EDT
How to Enable Log Persistence (for Log Retention Even Though the Pod Dies) in a Kubernetes Cluster
Need help with enabling Pega application log preservation without configuring any external tools such as Fluentd.
Implemented Solution: Pega Logs Retention in OpenShift | Support Center
Updated: 5 Jan 2023 2:28 EST
Pegasystems Inc.
US
The team that I work with uses EFK. Elasticsearch-Fluentd-Kibana is a standard logging stack that is provided as an example to help you get started.
More details for EFK can be found in the Addons Helm Chart.
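For illustration, enabling the example EFK stack in the addons chart can look like the values.yaml fragment below; the key names are an assumption based on the pegasystems/pega-helm-charts addons chart, so verify them against the chart version you deploy.

# values.yaml sketch: toggle the example EFK stack in the Pega addons chart.
# Assumption: key names follow pegasystems/pega-helm-charts addons/values.yaml.
deploy_efk: &deploy_efk true

elasticsearch:
  enabled: *deploy_efk

kibana:
  enabled: *deploy_efk

fluentd-elasticsearch:
  enabled: *deploy_efk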
Updated: 19 Feb 2024 0:04 EST
LTIMindtree
SA
Thanks for your input. We're working on an alternate method to retain the application logs irrespective of the pod termination status. I'll post the solution we've been working on here once it's successful.
JP Morgan Chase
IN
@Kishore Sanagapalli I'm working on a similar setup and tried the sidecar container method, but not all logs are being retained.
Were you able to achieve log retention with pods? Please help.
Updated: 18 Jun 2024 6:54 EDT
LTIMindtree
SA
Hi,
If you're implementing the sidecar container concept, you'll need to do the following (see the sketch after this list):
a. Redirect the logs to a central location (NFS) instead of storing them in the pods/containers. That is, create a separate folder on the NFS share and feed that path into the Kubernetes ConfigMap.
b. Print the console output to a log file (stored in the centralized NFS location) and filter it as needed.
c. Use Kibana / Fluentd, which support the logging-stack option.
d. Create the necessary indexes to identify the log type and rearrange the entries.
e. You might need to create a couple of Ingresses to manage the load and traffic for the log-redirection endpoints.
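Here is a minimal sketch of steps (a) and (b): one NFS-backed volume shared between the Pega container and a log-tailing sidecar, so the log files outlive any single container. The server address, paths, images, and names are hypothetical placeholders, not Pega's actual chart values.

# Sketch: share an NFS volume between the Pega container and a sidecar.
# All names, paths, and images below are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pega-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pega-web
  template:
    metadata:
      labels:
        app: pega-web
    spec:
      volumes:
        - name: pega-logs
          nfs:
            server: nfs.example.internal    # assumption: reachable NFS server
            path: /exports/pega-logs        # assumption: exported log folder
      containers:
        - name: pega
          image: pega/pega:latest           # placeholder image reference
          volumeMounts:
            - name: pega-logs
              mountPath: /usr/local/tomcat/logs   # assumption: container log dir
        - name: log-sidecar
          image: busybox:1.36
          # Stream the shared log file to stdout so the cluster's log
          # collector can pick it up in addition to the NFS copy.
          command: ["sh", "-c", "tail -n +1 -F /logs/PegaRULES.log"]
          volumeMounts:
            - name: pega-logs
              mountPath: /logs

Because the files live on the NFS export rather than in the container filesystem, they survive pod restarts and rescheduling.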
HSBC
GB
@Kishore Sanagapalli In my case, I mounted a GCS bucket as a volume in the Pega container and changed the default log location to this GCS mount. The logs are now saved in the GCS bucket, where I can view them. Even if the pod is deleted, the logs stay in the GCS bucket.
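For reference, a minimal sketch of what this can look like on GKE, assuming the Cloud Storage FUSE CSI driver is enabled on the cluster; the bucket name, service account, image, and mount path are placeholders.

# Sketch: mount a GCS bucket into the Pega pod via the GCS FUSE CSI driver.
# Assumption: GKE cluster with the gcsfuse CSI driver enabled.
apiVersion: v1
kind: Pod
metadata:
  name: pega-web
  annotations:
    gke-gcsfuse/volumes: "true"      # tells GKE to inject the gcsfuse sidecar
spec:
  serviceAccountName: pega-sa        # assumption: bound to a GSA with bucket access
  volumes:
    - name: gcs-logs
      csi:
        driver: gcsfuse.csi.storage.gke.io
        volumeAttributes:
          bucketName: my-pega-logs   # placeholder bucket name
  containers:
    - name: pega
      image: pega/pega:latest        # placeholder image reference
      volumeMounts:
        - name: gcs-logs
          mountPath: /pega-logs      # point the Pega log location here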
Thanks, Azeez
Updated: 16 Jun 2024 9:49 EDT
LTIMindtree
SA
I did a similar setup, but with some changes to the prlog4j2.xml file. It's working successfully in an OpenShift environment.
Does your setup write the logs when the container restarts, without any changes to the prlog4j2.xml file?
HSBC
GB
@Kishore Sanagapalli , yes, my solution writes the logs even when the container restarts or when HPA triggers a new pod.
Thanks, Azeez
LTIMindtree
SA
I figured out the same, but with a few changes to the prlog4j2.xml file as well. It's working successfully.
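For illustration, the kind of prlog4j2.xml change involved is repointing a log4j2 file appender at the persistent mount. The appender name, pattern, and paths below are illustrative, not Pega's shipped configuration:

<!-- Sketch: a log4j2 rolling appender writing to a persistent mount. -->
<!-- Names, pattern, and paths are illustrative; adapt the shipped prlog4j2.xml. -->
<RollingRandomAccessFile name="PEGA"
                         fileName="/pega-logs/PegaRULES.log"
                         filePattern="/pega-logs/PegaRULES-%d{MM-dd-yyyy}-%i.log.gz">
  <PatternLayout>
    <Pattern>%d [%t] (%c{3}) %-5p - %m%n</Pattern>
  </PatternLayout>
  <Policies>
    <TimeBasedTriggeringPolicy />
    <SizeBasedTriggeringPolicy size="250 MB" />
  </Policies>
  <DefaultRolloverStrategy max="5" />
</RollingRandomAccessFile>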
Updated: 18 Jun 2024 7:02 EDT
LTIMindtree
SA
An alternate method to retain the Pega application logs without the need for ELK, Kibana, Fluentd, etc. Please refer to the link below for more details.
Accepted Solution
Updated: 18 Jun 2024 7:02 EDT
LTIMindtree
SA
@Kishore Sanagapalli @PhilipShannon
Solution implemented successfully. Please find the reference: How to Enable Logs Persistance (for Logs Retention eventhough POD Dies) in Kubernetes Cluster | Support Center (pega.com)