Request for Recommended Approach to Selectively Extract ~200M Records and Migrate Between Pega Clusters (Export to S3 → Import)
We are seeking guidance on the best-practice approach for extracting a large, selective subset of table data from one Pega cluster and importing it into another. Our requirements are as follows:
- Data volume: Approximately 200 million rows across one or more Pega/customer tables.
- Goal:
  - Selectively extract table data based on specific business criteria (not a full database clone).
  - Package the extracted data in a format that can be exported to Amazon S3.
  - Re-import the data into the target Pega environment using a Pega-supported mechanism.
- Considerations: These extract jobs will run against a Production instance.
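To make the question concrete: at ~200M rows we assume the extract must run in resumable batches rather than one query, so Production impact stays bounded. A minimal sketch of keyset pagination over a work table follows; the table and column names (`PC_WORK_TABLE`, `pzInsKey`, the status criterion) are illustrative assumptions, not our actual schema, and the real job would presumably use whatever batching the recommended Pega tooling provides.

```python
# Hypothetical sketch: splitting a large selective extract into
# keyset-paginated batches. Names below are assumptions for illustration.

def batch_query(last_key: str, batch_size: int, criteria: str) -> str:
    """Build one keyset-paginated SELECT. The caller feeds back the last
    pzInsKey seen to fetch the next batch (avoids slow OFFSET scans)."""
    return (
        "SELECT pzInsKey, pyLabel FROM PC_WORK_TABLE "
        f"WHERE {criteria} AND pzInsKey > '{last_key}' "
        f"ORDER BY pzInsKey LIMIT {batch_size}"
    )

def plan_batches(total_rows: int, batch_size: int) -> int:
    """Number of batches needed to cover total_rows (ceiling division)."""
    return -(-total_rows // batch_size)

if __name__ == "__main__":
    # ~200M rows in 500k-row batches -> 400 batch files to stage to S3.
    print(plan_batches(200_000_000, 500_000))
    print(batch_query("", 500_000, "pyStatusWork = 'Resolved-Completed'"))
```

Each batch would be written as one object (for example, compressed CSV) and uploaded to S3, with the last key of each batch checkpointed so a failed job can resume.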