Process a large number of records (approx. 4 million) through a File Listener
Hello Pega Community,
We have a requirement to process a very large file of about 4 million records and load the data into a database table. We are using a File Listener with the "record at a time" processing method. There is no option to split the file into smaller files, as the source is a legacy system that always delivers the complete file.
We have implemented the functionality in our Dev environment and ran the listener on a dedicated node with no users. It took ~26 hours to process the file, but it completed without errors or any impact on performance.
Config details:
Node-based startup on a dedicated node
Concurrent threads: 1
Service File processing method: record at a time
Requests per queue item: 1
We realized that 4 million records in 26 hours works out to about 42.7 records/second (4,000,000 records ÷ 93,600 seconds).
Any thoughts on how we can make this process better, i.e. faster and safer?
P.S. We do have a staging table before we load the data into the actual database table.
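For context on why per-record throughput seems to be the bottleneck: at the database layer, the difference between a row-at-a-time insert and a batched insert into the staging table looks roughly like the JDBC sketch below. This is illustrative only; the connection URL, the staging_records table and its columns, and the readRecords() helper are made-up placeholders, not our actual schema or code.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Collections;

public class StagingLoadSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; not our real environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://dbhost:5432/appdb", "user", "password")) {
            conn.setAutoCommit(false);

            // Hypothetical staging table and columns.
            String sql = "INSERT INTO staging_records (record_id, payload) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                final int batchSize = 1000;
                int count = 0;
                for (String[] record : readRecords()) {
                    ps.setString(1, record[0]);
                    ps.setString(2, record[1]);
                    ps.addBatch();
                    // One round trip and one commit per 1,000 rows,
                    // instead of one per record as in our current setup.
                    if (++count % batchSize == 0) {
                        ps.executeBatch();
                        conn.commit();
                    }
                }
                ps.executeBatch(); // flush the final partial batch
                conn.commit();
            }
        }
    }

    // Stand-in for parsing the 4-million-record file; returns one dummy row here.
    static Iterable<String[]> readRecords() {
        return Collections.singletonList(new String[]{"1", "example"});
    }
}
```

Our working assumption is that at ~42 records/second the per-record overhead (service invocation, commit, round trip) dominates, and that batching would amortize that overhead across many rows, which is why we are looking at options beyond the pure record-at-a-time configuration.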
Regards
Bidyun