Lots of "ERROR: duplicate key value violates unique" entries in database logs
We are trying to understand why our PostgreSQL databases have been logging so many "duplicate key value violates unique" errors for years.
Our DBAs have seen them in production and in the dev/test environment while we were on Pega 8.7, and the same errors are still present since we upgraded to Pega 24.1 on Sept 9, 2025.
A few examples:
2025-09-28 23:39:00 CEST:<>:pega_base@<>:[847310]: ERROR: duplicate key value violates unique constraint "pr_sys_locks_pk"
2025-09-28 23:39:00 CEST:1<>:pega_base@<>:[847310]: DETAIL: Key (pzinskey)=(PYALERTJOBFORMAXERRORRECORDSOFLSETABLE) already exists.
2025-09-28 23:39:07 CEST:<>:pega_base@<>:[844901]: ERROR: duplicate key value violates unique constraint "pr_sys_locks_pk"
It can happen on nearly any table. It doesn't seem to actually impact Pega, but these errors can't just be ignored.
Can you clarify? Is this a configuration issue or normal behavior?
@OliOlsz These are almost always harmless lock-contention noise, not data corruption. Pega uses a legacy insert-as-lock pattern on PR_SYS_LOCKS (and a few scheduler/queue tables): two threads race to INSERT the same lock row, the winner holds the lock, and the loser hits a Postgres duplicate-key error (SQLSTATE 23505) that Pega catches and moves past. Under load, jobs like Alert/Queue processors and agents spike these errors, which is why you see keys like PYALERTJOB… in the messages. In other words, concurrency is working as designed, but Postgres logs each losing INSERT at ERROR level, so it looks alarming.

Three things to check:
1. Confirm the errors land almost entirely on PR_SYS_LOCKS and queue tables.
2. Ensure each Job Scheduler / Queue Processor runs only on its intended nodes (no duplicate runners).
3. Verify node IDs are unique and system clocks are in sync.

If you genuinely see violations on business tables, check for sequence drift (a sequence whose next value has fallen behind max(id)) and repair it with setval. Otherwise, ignore the lock duplicates, or filter the 23505 entries downstream in your log aggregation; core Postgres has no per-SQLSTATE log filter.
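If duplicates do appear on business tables, the sequence-drift check and repair mentioned above looks roughly like this in psql. The table, column, and sequence names here are placeholders for your own schema, not anything Pega-specific:

```sql
-- Compare the sequence's current position against the table's max id.
SELECT last_value FROM my_table_id_seq;
SELECT max(id) FROM my_table;

-- If last_value <= max(id), new inserts will collide with existing rows.
-- Resynchronize the sequence so nextval() lands past the current max:
SELECT setval('my_table_id_seq', (SELECT max(id) FROM my_table));
```

Only run the setval repair during a quiet window, since concurrent inserts while you measure max(id) can reintroduce the drift.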
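To make the "winner gets the lock, loser hits 23505" race concrete, here is a minimal sketch using Python's stdlib sqlite3 as a stand-in for Postgres. The table and key names are illustrative only, not Pega's actual schema; SQLite raises IntegrityError where Postgres would raise a 23505 duplicate-key error.

```python
import sqlite3
import threading

# In-memory DB shared across threads; the UNIQUE primary key plays the
# role of pr_sys_locks_pk.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE sys_locks (pzinskey TEXT PRIMARY KEY)")
db_guard = threading.Lock()   # serialize access to the shared connection

results = []

def try_acquire(worker):
    """Try to claim the lock row; exactly one thread succeeds."""
    try:
        with db_guard:
            db.execute("INSERT INTO sys_locks VALUES ('PYALERTJOB')")
            db.commit()
        results.append((worker, "acquired"))
    except sqlite3.IntegrityError:
        # Postgres would log this as ERROR 23505; the app just moves on.
        results.append((worker, "lost race"))

threads = [threading.Thread(target=try_acquire, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(r for _, r in results))   # → ['acquired', 'lost race']
```

The point of the sketch: the duplicate-key error is the *mechanism* by which the losing thread learns it did not get the lock, so seeing it in the logs is expected behavior, not a fault.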