Memory issue during large-volume background data processing
Background:
- We're using Pega 23.1.4 deployed on VMs (Red Hat 8.8, Java 11, WebLogic 14).
- We are still using the Pega embedded Kafka.
- The server node has 24 GB of physical memory, and the JVM is configured with "-Xms12g -Xmx12g".
- This server node is responsible for background data processing.
Scenario:
- An activity was called to add about 150,000 items to a Queue Processor; this took about 2 minutes.
- The "Number of threads per node" of the Queue Processor was configured to 6.
- It took about 3.4 hours to process all the Queue Processor items.
- During processing, server memory usage rose to about 98% and was never released until the server node was restarted.
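Not part of the original report, but one way to see whether it is really the Java heap that is retained (as opposed to the OS simply reporting high resident memory) is to enable GC logging on this Java 11 JVM. A minimal sketch of the flags, assuming a log path of /var/log/weblogic/gc.log (the path is an example, not our actual configuration):

```
# Java 11 unified GC logging; append to the WebLogic JAVA_OPTIONS.
# Records every GC event with timestamps so heap usage after each
# collection can be compared with OS-level memory reports.
-Xlog:gc*:file=/var/log/weblogic/gc.log:time,uptime,level,tags
```

If the log shows heap usage dropping back down after the Queue Processor run while the OS still reports ~98% used, the problem is committed-but-unreleased memory rather than a leak.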
Questions:
- Currently we have to restart the server node every time after the data processing completes. Is there any way to avoid the restart?
- We can increase the physical memory, but will this resolve the problem? If yes, is there an advised amount of physical memory in GB? And how should the "-Xmx/-Xms" settings be adjusted?
- Is there any other suggestion for this kind of background data processing?
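As background for anyone answering: a quick way we could check whether the 98% figure reflects live heap or just memory the JVM keeps committed is a small probe like the one below. This is a generic sketch, not Pega-specific, and the class name HeapProbe is hypothetical; on the running node the same numbers are available via `jcmd <pid> GC.heap_info`.

```java
// Minimal sketch (not Pega-specific): report JVM heap figures so that
// "used" heap can be compared with what the OS reports as resident memory.
public class HeapProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        long used = (rt.totalMemory() - rt.freeMemory()) / mb; // objects currently in the heap
        long committed = rt.totalMemory() / mb;                // heap the JVM has reserved from the OS
        long max = rt.maxMemory() / mb;                        // the -Xmx ceiling
        System.out.println("heap used=" + used + " MB, committed=" + committed + " MB, max=" + max + " MB");
    }
}
```

If "used" is low after processing finishes but the OS still shows ~98% memory in use, the node may not need a restart at all; the JVM is simply holding on to committed heap up to -Xmx.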
Thanks very much.