In our Pega 7.3.0 production system (which handles high attachment volumes), we are seeing high JVM memory consumption whenever attachment upload and download use cases are performed. Memory spikes sharply around attachment transactions.
We would like to know whether any other applications have had a similar experience and, if so, what resolution steps were taken to improve the overall performance of attachment upload and download.
***Edited by Moderator Marissa to update Platform Capability tags***
Adding more detail to the above query, with the specific asks:
Since the application is very attachment-heavy (a typical attachment is 15 MB, and most cases carry one or more such attachments), the team was seeing very heavy database I/O, because attachments were being stored in the Pega database. This was negatively impacting performance.
Recently, they moved the attachments to Amazon S3. However, since they are on version 7.3, they had to write custom code to support that. Their custom code performs the upload in two steps: a Pega function loads the object from the browser into memory, and an Amazon SDK function then sends it to S3 (and vice versa for downloads).
While this has helped reduce the database-related I/O, it still results in severe memory use. During recent load tests, they saw garbage collection occur twice within the space of one hour.
So they want to get a view from an SME on whether this high memory utilization is inevitable, or whether there is a better way to implement it. They also want to understand whether they will continue to face the same issue once they upgrade the Pega platform and migrate to the out-of-the-box S3 repository capabilities.
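For reference, the suspected cause is that the custom two-step flow materializes each attachment as a full byte array on the heap before handing it to the SDK. The sketch below (plain java.io, no Pega or AWS classes; all names are illustrative, not the team's actual code) contrasts that with a streaming copy, where heap use stays bounded by a small fixed buffer regardless of attachment size. The AWS SDK for Java can consume an `InputStream` directly (e.g. the `PutObjectRequest` constructor that takes an `InputStream` plus `ObjectMetadata`), so in principle the full byte array never needs to exist.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class AttachmentStreamDemo {

    // Streaming copy: only an 8 KB buffer lives on the heap at any moment,
    // no matter how large the attachment is. This is the shape of the
    // alternative being asked about, not the team's current implementation.
    static long streamCopy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // 1 MB stand-in for an attachment; the real cases average 15 MB.
        byte[] payload = new byte[1 << 20];

        // Current two-step flow (as described above): the whole payload is
        // held in memory before the SDK call, so each concurrent upload
        // pins attachment-sized chunks of heap.
        byte[] inMemoryCopy = payload.clone();

        // Streaming alternative: bounded heap use per transfer.
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = streamCopy(new ByteArrayInputStream(payload), sink);

        System.out.println(copied);
        System.out.println(inMemoryCopy.length);
    }
}
```

Under concurrent load, the difference compounds: ten simultaneous 15 MB uploads pin roughly 150 MB of heap in the buffered approach, versus about 80 KB of copy buffers when streaming.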