Question
IBM India Pvt. Ltd.
IN
Last activity: 24 Feb 2021 2:25 EST
passivated or destroyed requestor
We are facing lost-case issues in our production system (Pega V8.2.7). While analyzing the Pega logs, we found the message below:
2020-09-15 11:01:19,505 [fault (self-tuning)'] [ STANDARD] [ ] [ PFMFW:02.04.01] (_FW_PFMFW_Work_Feedback.Action) INFO istpcrmccapp01v| Proprietary information hidden|SOAP|SubmitQueueItemsService|Services|getSubmitQueueItemsResult|ARRMFP2494P8404WKBRZYMV76GY2X584PA - **** CONTIW-141233 has been associated with CaseID: TK-4307897 ****

2020-09-15 11:01:20,622 [84c.cached.thread-15] [ STANDARD] [ ] [ ] ( internal.mgmt.PRRequestorImpl) INFO - [QuiescePassivation] forcePassivateInner: passivated or destroyed requestor [HA0EU9IR2FZ6H2SOIXH6R5VQIS5SNDYOJA]
Can anyone please let me know whether this error causes cases to be lost in the production system?
Regards,
Abhishek
Pegasystems Inc.
FR
Hello,
Could it be a load balancer issue? Have you checked the load balancer logs?
Updated: 25 Sep 2020 5:51 EDT
IBM India Pvt. Ltd.
IN
Hi,
I haven't checked the load balancer logs yet. But please let me know whether this error message causes cases to be lost, i.e. the work object fails to be saved to the database.
One thing I want to highlight here is that the case is getting lost only for requests coming from another channel (WEB). When we run the same request manually, it runs successfully and creates the case as expected.
Regards,
Abhishek
Pegasystems Inc.
FR
Yes, it is possible if a request isn't reaching the expected target.
GDPR Erase
@AbhishekDe - Did you find any resolution for this issue? I have the same issue in 8.3.0:
2021-01-09 11:32:07,788 [tejob-executor-18753] [ STANDARD] [ ] [ ] ( internal.mgmt.PRRequestorImpl) INFO - [QuiescePassivation] forcePassivateInner: passivated or destroyed requestor [HAQSRR961XETRX5ER1C2FOMA3R2TUD2ENA]

2021-01-09 11:32:09,842 [ default task-134] [ ] [ ] [ ] ( mgmt.base.NodeRequestorMgt) INFO - [Quiesce] Found requestor [HAQSRR961XETRX5ER1C2FOMA3R2TUD2ENA] on remote node: 97815d53-4e78-48cd-81fd-f3610ec37843

2021-01-09 11:32:10,007 [ default task-134] [ ] [ ] [ ] ( mgmt.base.NodeRequestorMgt) INFO - [QuiesceActivation] Requestor [HAQSRR961XETRX5ER1C2FOMA3R2TUD2ENA] has been restored from passivated data as a result of quiesce

2021-01-09 11:32:10,363 [tejob-executor-18754] [ STANDARD] [ ] [ ] ( internal.mgmt.PRRequestorImpl) INFO - [QuiescePassivation] forcePassivateInner: passivated or destroyed requestor [HAQSRR961XETRX5ER1C2FOMA3R2TUD2ENA]

2021-01-09 11:32:26,461 [ default task-157] [ ] [ ] [ ] ( mgmt.base.NodeRequestorMgt) INFO - [Quiesce] Found requestor [HAQSRR961XETRX5ER1C2FOMA3R2TUD2ENA] on remote node: 97815d53-4e78-48cd-81fd-f3610ec37843

2021-01-09 11:32:26,595 [ default task-157] [ ] [ ] [ ] ( mgmt.base.NodeRequestorMgt) INFO - [QuiesceActivation] Requestor [HAQSRR961XETRX5ER1C2FOMA3R2TUD2ENA] has been restored from passivated data as a result of quiesce

2021-01-09 11:32:26,877 [tejob-executor-18756] [ STANDARD] [ ] [ ] ( internal.mgmt.PRRequestorImpl) INFO - [QuiescePassivation] forcePassivateInner: passivated or destroyed requestor [HAQSRR961XETRX5ER1C2FOMA3R2TUD2ENA]

2021-01-09 11:32:50,234 [tejob-executor-18760] [ STANDARD] [ ] [ ] ( internal.mgmt.PRRequestorImpl) INFO - [QuiescePassivation] forcePassivateInner: passivated or destroyed requestor [HF7GNG5B6WL7U6QU03F7CKEG4XA8IEPLMA]
The requestor is getting repeatedly passivated or destroyed and then restored.
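To get a feel for how widespread this is, it can help to summarize the log rather than read it line by line. Below is a minimal Python sketch, not Pega tooling: the regexes are based on the message wording quoted above, and the PegaRULES.log file name is just a placeholder.

```python
import re
from collections import Counter, defaultdict

# Patterns based on the log lines quoted in this thread (adjust if your appender format differs).
PASSIVATE = re.compile(r"forcePassivateInner: passivated or destroyed requestor \[([A-Z0-9]+)\]")
RESTORE = re.compile(r"Requestor \[([A-Z0-9]+)\] has been restored from passivated data")

def summarize(log_path):
    """Count forced passivations and quiesce restorations per requestor ID."""
    events = defaultdict(Counter)
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            m = PASSIVATE.search(line)
            if m:
                events[m.group(1)]["passivated"] += 1
                continue
            m = RESTORE.search(line)
            if m:
                events[m.group(1)]["restored"] += 1
    return events

if __name__ == "__main__":
    for requestor, counts in summarize("PegaRULES.log").items():  # placeholder file name
        print(f"{requestor}: passivated {counts['passivated']}x, restored {counts['restored']}x")
```

A requestor that is repeatedly restored only to be passivated again seconds later points at quiesce being triggered over and over, rather than a one-off graceful shutdown.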
Capgemini
IN
Normally, if the user is idle for some time, the data can be passivated. But if it is happening while the user is active, you should raise an SR. The default idle timeouts (see the quick check sketched after this list) are:
- Pages that are idle for at least 15 minutes
- Threads that are idle for at least 30 minutes, and
- Requestors that are idle for at least 60 minutes
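Since the requestor default is 60 minutes, one quick way to confirm that these events are not normal idle-timeout passivation is to measure the gap between consecutive passivation/restoration messages for the same requestor. A minimal sketch, assuming the timestamp format shown in the log excerpts above and a placeholder PegaRULES.log file name:

```python
import re
from datetime import datetime

# Timestamp and requestor-ID pattern based on the log excerpts in this thread.
EVENT = re.compile(
    r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}).*requestor \[([A-Z0-9]+)\]",
    re.IGNORECASE,
)
REQUESTOR_IDLE_DEFAULT_SECONDS = 60 * 60  # default requestor idle timeout (60 minutes)

def flag_premature_passivations(log_path):
    """Flag gaps between consecutive events for a requestor that are far below the idle default."""
    last_seen = {}
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            m = EVENT.match(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S,%f")
            rid = m.group(2)
            if rid in last_seen:
                gap = (ts - last_seen[rid]).total_seconds()
                if gap < REQUESTOR_IDLE_DEFAULT_SECONDS:
                    print(f"{rid}: only {gap:.0f}s between events -> not an idle timeout")
            last_seen[rid] = ts

if __name__ == "__main__":
    flag_premature_passivations("PegaRULES.log")  # placeholder file name
```

Gaps of a few seconds, as in the excerpt above, rule out the idle timeouts and point at forced (quiesce) passivation.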
Saikat
Anthem, Inc.
US
Did you get any solution? We are also facing the same issue in 8.3.
GDPR Erase
Not yet. We use SAML 2.0 authentication with an Auth Service for login. When we run on one node it works fine, but when we move to two nodes it fails 8 out of 10 times, so it is inconsistent. We are on AWS public cloud with an Application Load Balancer (ALB). When it fails, I see this message in the logs:
2021-02-24 14:28:44,276 [otejob-executor-4897] [ STANDARD] [ ] [ ] ( internal.mgmt.PRRequestorImpl) INFO - [QuiescePassivation] forcePassivateInner: passivated or destroyed requestor [H2M094QKEOAARWXTEHFM6UL495E884D4FA]
So yes, as Ryan commented, it is a health-check or passivation setting issue which kicks in when two nodes are in play.
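One thing worth ruling out with two nodes behind an ALB is session stickiness: if consecutive requests in the same Pega session can land on different nodes, "works on one node, fails intermittently on two" is exactly the symptom you would expect. A minimal boto3 sketch to dump the relevant target-group attributes; the ARN is a placeholder, and this is only a diagnostic idea, not Pega guidance:

```python
import boto3

# Placeholder ARN -- replace with the target group that fronts the Pega web nodes.
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:region:account:targetgroup/pega-web/0123456789abcdef"

elbv2 = boto3.client("elbv2")

# Fetch the attributes that control session stickiness on the ALB target group.
resp = elbv2.describe_target_group_attributes(TargetGroupArn=TARGET_GROUP_ARN)
attrs = {a["Key"]: a["Value"] for a in resp["Attributes"]}

print("stickiness.enabled            =", attrs.get("stickiness.enabled"))
print("stickiness.type               =", attrs.get("stickiness.type"))
print("stickiness.lb_cookie.duration =", attrs.get("stickiness.lb_cookie.duration_seconds"))

# If stickiness is disabled, requests within one Pega session can be routed to different nodes.
if attrs.get("stickiness.enabled") != "true":
    print("WARNING: stickiness is disabled on this target group")
```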
Updated: 3 Feb 2021 16:09 EST
LaunchSafe
US
Quiesce passivation occurs when the node is gracefully shutting down. This means there is an issue on your JVM and the health-check response is coming back too slowly. Pega receives a shutdown command and starts the process of passivating your requestors. This is either an issue with your JVM stability or your passivation settings.
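If the suspicion is that slow health-check responses are what trigger the quiesce, it is worth comparing the ALB health-check configuration against the JVM's observed pause times (for example from GC logs). A minimal boto3 sketch under that assumption, again with a placeholder target group ARN:

```python
import boto3

# Placeholder ARN -- replace with the target group that performs the Pega health checks.
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:region:account:targetgroup/pega-web/0123456789abcdef"

elbv2 = boto3.client("elbv2")
tg = elbv2.describe_target_groups(TargetGroupArns=[TARGET_GROUP_ARN])["TargetGroups"][0]

interval = tg["HealthCheckIntervalSeconds"]
timeout = tg["HealthCheckTimeoutSeconds"]
unhealthy = tg["UnhealthyThresholdCount"]

print(f"Health check path   : {tg.get('HealthCheckPath')}")
print(f"Interval / timeout  : {interval}s / {timeout}s")
print(f"Unhealthy threshold : {unhealthy}")

# Rough window after which a target that stops answering (e.g. during a long GC pause)
# is marked unhealthy and taken out of rotation.
print(f"Approx. time to be marked unhealthy: {unhealthy * interval}s")
```

If long GC pauses approach that window, the usual options are to tune the JVM or to relax the health-check interval and threshold.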