We have a requirement to process 700,000+ (7 lakh) records from an external database. For performance, we chose a Data Flow with a Dataset source to read the external DB records, configured with a thread count of 20 and multi-node batch processing. This feature is working absolutely fine.
The problem: a Dataset does not offer filter options the way a Report Definition (RD) does. Suppose the Data Flow agent has read 300,000 (3 lakh) records and the system goes down for a period of time (for example, due to a natural calamity). When it comes back to a stable state, the Dataset in the Data Flow starts reading again from record 1, even though 300,000 records have already been read and processed. Our expectation is that it should resume from record 300,001.
Is there any way to resolve this issue? We are using Pega version 7.3.
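To make the expected behavior concrete, this is essentially a checkpoint-and-resume pattern: persist the highest processed record key, and on restart filter the source so reading continues after that key instead of from record 1. Below is a generic Python sketch of that idea, not Pega code; the function and field names (`read_batch`, `process`, `last_id`) are hypothetical and only illustrate what we would like the Data Flow to do.

```python
def read_batch(source, last_key, batch_size):
    """Return up to batch_size records whose key is greater than last_key.
    This key-based filter is what an RD offers but a plain Dataset lacks."""
    return [r for r in source if r["id"] > last_key][:batch_size]

def process(source, checkpoint, batch_size=2):
    """Process records in batches, persisting the last processed key
    after each batch so a restart resumes instead of rereading from 1."""
    last_key = checkpoint.get("last_id", 0)
    while True:
        batch = read_batch(source, last_key, batch_size)
        if not batch:
            break
        for rec in batch:
            # ... process the record here ...
            last_key = rec["id"]
        checkpoint["last_id"] = last_key  # persist the checkpoint
    return checkpoint

# Simulate a crash after record 3: on restart, only records 4..7 are read.
source = [{"id": i} for i in range(1, 8)]
ckpt = {"last_id": 3}
process(source, ckpt)
print(ckpt["last_id"])  # 7
```

With a filter like this, a restart after the outage would skip the 3 lakh already-processed records rather than reprocessing them from the beginning.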