Need recommended approach to browse records from heavy DB tables (15 million records)
We are planning to migrate around 15 million cases from a mainframe to a Pega Cloud application. The mainframe data will be exported into 32 different CSV files, and the information across all 32 CSV files contributes to the creation of a single case in Pega.
Currently we plan to load the CSV files into 32 DB tables using data flows; a job scheduler will then query these tables in multiple steps to create the cases in the Pega application.
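For context, our intent is for each staging table to carry a unique, indexed key so the later browse step can read it in ordered chunks. A minimal sketch of one such table follows; the names (mig_case_master, case_key) are placeholders for illustration, not our actual schema:

    -- Hypothetical staging table for one of the 32 CSV extracts.
    CREATE TABLE mig_case_master (
        case_key   VARCHAR(32) NOT NULL,  -- unique business key from the mainframe extract
        col_a      VARCHAR(100),
        col_b      VARCHAR(100),
        CONSTRAINT pk_mig_case_master PRIMARY KEY (case_key)
    );
    -- The primary-key index lets the browse step walk the table in key order
    -- instead of scanning all 15 million rows on every batch.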
We are looking for an effective mechanism to browse the data from these high-volume DB tables (approximately 15 million records per table). We currently plan to use SQL queries via RDB-Browse to perform this step in the job scheduler's utility activity. Is this a good way to browse records from such large tables, or is there another approach recommended by Pega?
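To make the question concrete, this is the kind of chunked ("keyset") query we were considering behind the RDB-Browse step, assuming a PostgreSQL backend as on Pega Cloud. The batch size, table, and column names are assumptions for illustration; the {lastProcessedKey} placeholder stands in for however the value would actually be bound in the Connect SQL rule:

    -- Keyset pagination sketch: each batch resumes after the last key processed,
    -- so the database seeks via the index rather than using a growing OFFSET.
    SELECT case_key, col_a, col_b
    FROM   mig_case_master
    WHERE  case_key > {lastProcessedKey}   -- bound from the previous batch; starts at ''
    ORDER  BY case_key
    LIMIT  5000;                           -- batch size to be tuned

The idea is that each scheduler run would process one batch and persist the last key it handled, so a restart can resume where it left off without re-reading earlier rows. Is this pattern reasonable here, or does Pega recommend something else (for example, a data flow with a database data set as the source)?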