I saw that we are able to configure (Configuration --> Application Setting) the Batch Size of the Campaign Execution. However, I am not able to make out what it means from the help:
Batch Size Specifies the size (number of records) into which back-end processing is batched, in order to leverage performance improvements from vertical and horizontal scalability.
Currently we are running on only one node, and throughput is very slow, especially when writing to file. However, when I increased the Batch Size from 250 to 1000, campaign execution slowed down; the campaign was delayed by about 2 hours (I do not know whether the delay was actually caused by this change or not).
Does anyone have a suggestion on what the Batch Size should be to increase the throughput of campaign execution? Any pointers to material I could read up on would be very much appreciated.
I suggest you start with 50. What DB platform are you running on? We have seen performance issues on DB2 and MS SQL Server with the default size of 250, and reducing batchSize actually helps. Please report your observations with a smaller batchSize.
Not necessarily; too big a batch size increases JVM GC pressure and puts batch pressure on the DB (e.g., the high-water-mark issue), and large batches have proven to be handled poorly by some database optimizers (e.g., DB2, SQL Server, and even Oracle 12c).
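The trade-off described above can be sketched generically: bigger batches mean fewer commits and round trips, but each transaction holds more rows in memory and puts more pressure on the database. This is only an illustrative sketch using Python and sqlite3, not the product's actual back-end code; the table name and schema are invented for the example.

```python
import sqlite3


def batched_insert(conn, rows, batch_size):
    """Insert rows in chunks of batch_size, committing once per chunk.

    A larger batch_size reduces commit overhead but enlarges each
    transaction (more memory, more undo/log pressure on a real DB);
    a smaller batch_size does the opposite -- the tuning knob the
    thread is discussing.
    """
    cur = conn.cursor()
    for start in range(0, len(rows), batch_size):
        chunk = rows[start:start + batch_size]
        cur.executemany(
            "INSERT INTO contacts (id, name) VALUES (?, ?)", chunk
        )
        conn.commit()  # one commit per batch


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INTEGER, name TEXT)")
rows = [(i, "contact-%d" % i) for i in range(1000)]

# Try the smaller batch size suggested in the reply above.
batched_insert(conn, rows, batch_size=50)
count = conn.execute("SELECT COUNT(*) FROM contacts").fetchone()[0]
print(count)
```

In practice the only way to pick a good value is to measure end-to-end execution time at a few candidate sizes (e.g., 50, 250, 1000) on your own DB platform, since the optimum depends on the optimizer and the write target.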