Question
Estes
US
Last activity: 19 Jan 2026 14:07 EST
Recommendations on Date based partitioning of high volume table
Hi All,
We have a custom table that is purged nightly to retain only 7 days of data, but it still grows to roughly 11 million records over the week (~1.5M inserts/day).
During peak hours we see slower reads and writes even though all relevant indexes are in place, statistics are collected regularly, and the explain plans show no obvious missing indexes.
Question: Would range partitioning by date (e.g., on pxCreateDateTime or pyLastUpdateTimeStamp) meaningfully improve read/write performance? We also have a couple of reports on this table that don't use a date filter criterion; would those be impacted if the table is partitioned by date?
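To make the question concrete, here is a rough sketch of the kind of daily partitioning we have in mind. It is purely illustrative and untested: pf_Daily, ps_Daily, data.MY_CUSTOM_TABLE, and the boundary dates are placeholders.

-- One partition per retained day on pxCreateDateTime; with RANGE RIGHT each
-- boundary value starts that day's partition.
CREATE PARTITION FUNCTION pf_Daily (datetime)
AS RANGE RIGHT FOR VALUES
('2026-01-13', '2026-01-14', '2026-01-15', '2026-01-16',
 '2026-01-17', '2026-01-18', '2026-01-19');

CREATE PARTITION SCHEME ps_Daily
AS PARTITION pf_Daily ALL TO ([PRIMARY]);

-- The table's clustered index would then have to be rebuilt on ps_Daily with
-- pxCreateDateTime in its key so rows are physically split by day, and unique
-- indexes would need the column added to their keys to stay aligned.
-- Note: reports with no pxCreateDateTime predicate get no partition
-- elimination and would still touch every partition.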
Has anyone applied date-based range partitioning to similar high-insert, short-retention custom tables in Pega? What performance improvements (latency/throughput) did you measure for peak reads/writes and purge operations?
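For purge specifically, the improvement we would hope for is replacing the nightly bulk DELETE with a partition-level operation along these lines (again illustrative only; assumes SQL Server 2016 or later for partition-level TRUNCATE, fully aligned indexes, and the placeholder names from the sketch above):

-- Drop the expired day as a metadata operation instead of a ~1.5M-row DELETE
-- (partition number 2 is just an example position).
TRUNCATE TABLE data.MY_CUSTOM_TABLE WITH (PARTITIONS (2));

-- Then slide the window: merge away the emptied boundary and split in tomorrow's.
ALTER PARTITION FUNCTION pf_Daily() MERGE RANGE ('2026-01-13');
ALTER PARTITION SCHEME ps_Daily NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pf_Daily() SPLIT RANGE ('2026-01-20');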
Appreciate any real-world insights, benchmarks, or references to Pega/MS SQL partitioning guidelines.
Pega Version: 8.7.5
DB: MS SQL
Thank You!!
Anand