Experiencing an extreme number of PEGA0017 messages on one of our app servers
Hi all,
We are experiencing an extreme number of PEGA0017 messages on one of our app servers.
Over 30,000 per day.
The other node has no issues and almost no PEGA0017 messages (just one, about once a week).
And, what is also strange, this node gives us messages like this:
collections/mru/PropertyReference: Successfully reduced Concurrent MRU Map by 1 down to: 364807.
The setting on this node, however, is set to a value of 100,000:
<env name="collections/mru/propertyreference/instancecountlimit" value="100000" />
In the SMA, I can see the following values in the cache report:
collections/mru/PropertyReference MRU Report (entry class: KeyedSoftReference):

Current          365324
Target           80000
Limit            100000
Max              120000
Age(m)           10
Drained          386824
#Drains          4537
Limited          0
#Limits          0
Maxed            531425553
#Maxes           208545628
Maint(s)         703291
HardToSoftCount  531812383
SoftToHardCount  527713262
SoftRefGCedCount 3633731
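One observation on those numbers: Target (80000) is exactly 80% of the configured Limit (100000), and Max (120000) is exactly 120% of it, so the drain thresholds appear to be derived from instancecountlimit. Below is a minimal sketch, in Java, of how such an MRU map might drain itself back toward its target once it grows past the limit. This is only an illustration of the mechanism under those assumed ratios, not Pega's actual implementation; the class, field, and method names (MruMapSketch, drainToTarget, maxedCount) are all hypothetical.

// Illustrative sketch only -- not Pega's actual code. It models an MRU map
// with the thresholds visible in the SMA report above: target = 80% of the
// configured limit and max = 120% of it. When the map grows past the limit,
// least-recently-used entries are drained back toward the target; each such
// drain is roughly the event a PEGA0017 alert reports.
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class MruMapSketch<K, V> {
    private final int limit;   // e.g. the 100000 from instancecountlimit
    private final int target;  // drain back down to this size (80% of limit)
    private final int max;     // hard ceiling (120% of limit)
    private long maxedCount;   // analogous to the "#Maxes" counter in the report

    // access-order LinkedHashMap iterates least-recently-used entries first
    private final LinkedHashMap<K, V> map = new LinkedHashMap<>(16, 0.75f, true);

    public MruMapSketch(int limit) {
        this.limit = limit;
        this.target = (int) (limit * 0.8);  // 80000 for limit = 100000
        this.max = (int) (limit * 1.2);     // 120000 for limit = 100000
    }

    public synchronized void put(K key, V value) {
        map.put(key, value);
        if (map.size() > max) {
            maxedCount++;      // size blew past even the 120% ceiling
        }
        if (map.size() > limit) {
            drainToTarget();   // would be logged as a PEGA0017-style event
        }
    }

    // Remove least-recently-used entries until the size is back at target.
    private void drainToTarget() {
        Iterator<Map.Entry<K, V>> it = map.entrySet().iterator();
        while (map.size() > target && it.hasNext()) {
            it.next();
            it.remove();
        }
    }
}

Under that model, what makes the report puzzling is that Current (365324) is more than three times the Limit of 100000, which matches the "reduced ... down to: 364807" log message above: the drain on this node does not seem to be bringing the map anywhere near its configured limit.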
Traffic between our two nodes is managed by a load balancer working round-robin.
Does anybody have any idea what is happening and why?
Of course I could raise the setting, but why are we experiencing this issue on only one node?
Thanks in advance for your response.
***Updated by Moderator: Marissa to add SR details***