Saving data and indexing are taking up 50-100% of CPU on Pega nodes
I have a staging environment with Pega 7.3.1 nodes running at anywhere from 50% to 100% CPU after starting Tomcat 8.5.24, backed by a managed MS SQL Server instance in Azure.
Using JProfiler, I have traced this back to stack traces like the one below (see indexing-foobar.png), or to other internal Pega indexing processes:
89.1% - 12,462 ms com.pega.pegarules.integration.engine.internal.services.listener.ServiceListenerBaseImpl.run
89.1% - 12,462 ms com.pega.pegarules.integration.engine.internal.services.email.EmailListener.run_
89.1% - 12,462 ms com.pega.pegarules.integration.engine.internal.services.email.EmailListener.oneIteration
89.1% - 12,462 ms com.pega.pegarules.integration.engine.internal.services.email.EmailListener.emailProcess
89.1% - 12,462 ms com.pega.pegarules.integration.engine.internal.services.email.EmailListener.handleRequest
89.1% - 12,462 ms com.pega.pegarules.integration.engine.internal.services.email.EmailListener.handleRequestContents
89.1% - 12,462 ms com.pega.pegarules.integration.engine.internal.services.email.EmailListener.handleStandardRequest
89.1% - 12,462 ms com.pega.pegarules.session.external.engineinterface.service.EngineAPI.processRequest
89.1% - 12,462 ms com.pega.pegarules.session.internal.PRSessionProviderImpl.doWithRequestorLocked
89.1% - 12,462 ms com.pega.pegarules.session.internal.PRSessionProviderImpl.doWithRequestorLocked
89.1% - 12,462 ms com.pega.pegarules.session.internal.PRSessionProviderImpl.performTargetActionWithLock
89.1% - 12,462 ms java.lang.reflect.Method.invoke
89.1% - 12,462 ms com.pega.pegarules.session.external.engineinterface.service.EngineAPI.processRequestInner
89.1% - 12,462 ms com.pega.pegarules.integration.engine.internal.services.ServiceAPI.runActivities
89.1% - 12,462 ms com.pega.pegarules.integration.engine.internal.RuleExecutionUtils.runServiceActivity
89.1% - 12,462 ms com.pega.pegarules.session.internal.mgmt.PRThreadImpl.runActivitiesAlt
89.1% - 12,462 ms com.pega.pegarules.session.internal.mgmt.base.ThreadRunner.runActivitiesAlt
89.1% - 12,462 ms com.pega.pegarules.session.internal.mgmt.Executable.doActivity
89.1% - 12,462 ms com.pegarules.generated.activity.ra_action_applicationsetup_6373072fcfbdf7c63dcbcbd1cc37776e.perform
89.1% - 12,462 ms com.pegarules.generated.activity.ra_action_applicationsetup_6373072fcfbdf7c63dcbcbd1cc37776e.step9_circum0
89.1% - 12,462 ms com.pega.pegarules.session.internal.mgmt.Executable.invokeActivity
89.1% - 12,462 ms com.pega.pegarules.session.internal.mgmt.Executable.doActivity
89.1% - 12,462 ms com.pegarules.generated.activity.ra_action_applicationprofilesetup_0cbae53d9a9a647fe2c096b4fe6f34e0.perform
89.1% - 12,462 ms com.pegarules.generated.activity.ra_action_applicationprofilesetup_0cbae53d9a9a647fe2c096b4fe6f34e0.step3_circum0
89.1% - 12,462 ms com.pega.pegarules.session.internal.mgmt.Executable.invokeActivity
89.1% - 12,462 ms com.pega.pegarules.session.internal.mgmt.Executable.doActivity
89.1% - 12,462 ms com.pegarules.generated.activity.ra_action_applicationprofilesetup_getnextworksetup_fe5444c660d9106a2a29b3143af298b4.perform
89.1% - 12,462 ms com.pegarules.generated.activity.ra_action_applicationprofilesetup_getnextworksetup_fe5444c660d9106a2a29b3143af298b4.step8_circum0
89.1% - 12,462 ms com.pega.pegarules.data.internal.access.DatabaseImpl.save
89.1% - 12,462 ms com.pega.pegarules.data.internal.access.DatabaseImpl.save
89.1% - 12,462 ms com.pega.pegarules.data.internal.access.Saver.save
89.1% - 12,462 ms com.pega.pegarules.data.internal.access.Saver.save
89.1% - 12,462 ms com.pega.pegarules.data.internal.access.Saver.doIndexing
89.1% - 12,462 ms com.pega.pegarules.data.internal.access.Indexer.updateIndexes
89.1% - 12,462 ms com.pega.pegarules.data.internal.access.Indexer.calulateIndexesIncrementallyForInstance
89.1% - 12,462 ms com.pega.pegarules.data.internal.access.Indexer$Updates.write
89.1% - 12,462 ms com.pega.pegarules.data.internal.access.Indexer.writeIndexesForDef
86.4% - 12,084 ms com.pega.pegarules.data.internal.access.Indexer.writeIndex
86.1% - 12,039 ms com.pega.pegarules.data.internal.access.DeferredOperationsImpl.add
Previous versions of Pega (7.1) seem to have used Lucene for search (based on the Student Admin Guide), but we are using Elasticsearch.
One of the Pega nodes doesn't seem to respond to HTTP requests on TCP port 9300; the connection is closed immediately.
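For reference, here is a minimal sketch of a plain TCP probe against both default ES ports (the host name and timeouts are placeholders; note that 9300 is the ES node-to-node transport port rather than HTTP, so a node dropping HTTP-style traffic on it may be expected behaviour):

    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Quick TCP reachability check for the default Elasticsearch ports:
    // 9200 serves the HTTP REST API, 9300 speaks the binary transport
    // protocol used for node-to-node communication.
    public class EsPortProbe {
        public static void main(String[] args) {
            String host = args.length > 0 ? args[0] : "localhost"; // placeholder host
            for (int port : new int[] {9200, 9300}) {
                try (Socket socket = new Socket()) {
                    socket.connect(new InetSocketAddress(host, port), 2000); // 2 s timeout
                    System.out.println("Port " + port + " accepted a TCP connection");
                } catch (Exception e) {
                    System.out.println("Port " + port + " unreachable: " + e.getMessage());
                }
            }
        }
    }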
Is this likely to be due to this comment in the student guide?
"SystemIndexer is the agent that performs the background indexing for all Rule
records. SystemWorkIndexer is the agent that performs the background indexing for both Data and Work
records. This agent must run for either of these to be indexed. A common error encountered in indexing
comes when the administrator accidentally turns off the SystemWorkIndexer while leaving Data Indexing
enabled."
The Elasticsearch instances were previously unable to communicate with each other due to blocked TCP ports, so I'm wondering whether this relates to ES cluster nodes competing to write to a corrupted distributed index.
Is there a way to view the ES index status? I have used the elasticsearch-kopf plugin in a previous role:
https://github.com/lmenezes/elasticsearch-kopf
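In the meantime, here is a minimal sketch of pulling cluster and index status straight from the ES REST API (this assumes the HTTP API is exposed on the default port 9200, which may not be the case for the Elasticsearch embedded in Pega; _cluster/health and _cat/indices are standard ES endpoints):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Prints overall cluster health (green/yellow/red) and per-index status.
    public class EsStatus {
        public static void main(String[] args) throws Exception {
            String base = args.length > 0 ? args[0] : "http://localhost:9200"; // placeholder
            dump(base + "/_cluster/health?pretty"); // overall cluster state
            dump(base + "/_cat/indices?v");         // per-index health, doc counts, size
        }

        private static void dump(String url) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(5000);
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            } finally {
                conn.disconnect();
            }
        }
    }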
Cheers,
Nigel