Discussion
Pegasystems Inc.
IN
Last activity: 24 Mar 2020 10:05 EDT
Ask the Expert - Background Processing with Prithanka Chatterjee
Join Prithanka (@chatp) in this Ask the Expert session (26-30 August) on Background Processing.
Message from Prithanka: Hello all, I am passionate about enabling customers to get the best out of Pega platform, and that includes getting the best out of the background processing capabilities built into the platform. In this segment of our interaction, I would love to answer your questions regarding Background Processing in general and more specifically about Standard Agents, Advanced Agents, Queue Processors, and Job Schedulers.
- Follow the Product Support Community's Community Rules of Engagement
- This is not a Live Chat - Prithanka will reply to your questions over the course of the week (26-30 August)
- Questions should be clearly and succinctly expressed
- Questions should be of interest to many others in the audience
- Have fun!
Pegasystems Inc.
IN
Hi Prithanka,
We understand from some articles on the Pega Community that Queue Processors use the Kafka service and that it is configured by default. Can you please give more details on the configuration and benefits of Kafka?
Pegasystems Inc.
IN
Some of the benefits Kafka brings:
- It provides high throughput
- It has very low latency
- It is fault tolerant
- It supports both vertical and horizontal scalability
- It is distributed in nature
- It supports both streaming and messaging capabilities
- It provides high concurrency
- It provides high durability
Some notable defaults in the platform-managed Kafka configuration:
- The default number of partitions is set to 20
- The maximum retention period is set to 7 days
- The maximum message size is set to 5 MB, and so on.
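For orientation only, the defaults above correspond to standard Apache Kafka broker settings. A hedged sketch of the equivalent raw `server.properties` keys follows; Pega manages its embedded Kafka internally, so treat this as illustrative, not as settings you would edit directly:

```properties
# Default number of partitions per topic
num.partitions=20
# Maximum retention period: 7 days (168 hours)
log.retention.hours=168
# Maximum message size: 5 MB
message.max.bytes=5242880
```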
Prithanka.
Pegasystems Inc.
IN
Thanks for this insight on Kafka
Centene Corporation
US
We want to build a REST service that can handle hundreds, if not thousands, of requests every few seconds. From the PDN, my understanding is that this can be done using asynchronous processing (and a Service Request Processor). Could you please shed some light on this and on any downsides to this approach? Can we associate the Queue ID with the work object using OOTB rules?
Pegasystems Inc.
IN
Hi Abhinay,
Firstly, thank you for the question.
Background processing using Advanced/Standard Agents, Queue Processors, or Job Schedulers does not support the kind of functionality you need out of the box. However, you can choose to build it if required. Since you are already looking at the Service Request Processor for building the REST service, let me help you by tagging the expert for it.
@nvkap : Request you to kindly answer this question.
Best regards,
Prithanka
Pegasystems Inc.
IN
Please watch this TechTalk on Background Processing. Feel free to discuss any of the points from this video with our expert!
Chithra Jayakumar
Contractor
NL
Hi Prithanka,
Is there a configuration we can set, or a rule we can use, to make sure the Queue Processors are automatically restarted every time Pega crashes or is restarted? Specifically, we're interested in automatically restarting the following queue processors:
* cyFetchRoutingDestination
* cyReRouteConversationRequest
The reason is that after the last restart we had (one of the nodes went down), we noticed the queue processors did not start automatically and had to be started manually.
Many thanks.
Pegasystems Inc.
IN
Hi Ricardo,
Thank you for the question.
Queue Processors are built to be resilient and to survive crashes and restarts. As long as a Queue Processor is not disabled on its rule form and is not explicitly stopped from the Queue Processor landing page in Admin Studio, it is supposed to start automatically after node crashes or ordinary restarts.
Please check these two configurations, one on the rule form and the other on the landing page, to confirm they are set correctly. If the problem persists, I would request you to log a support ticket.
I hope that answers your question.
Best regards,
Prithanka.
Cognizant
GB
Hi Prithanka,
I was trying to replace an existing advanced agent with Job Scheduler. I was able to successfully create the job scheduler rule and followed the configuration guidelines as per the rule help guide. I'm uploading the configuration snapshot.
- Enable job scheduler = Yes
- Associated node types = RunOnAllNodes
- Schedule = Daily, every 1 day
- Context = Specify access group; access group: 'App:Administrators'
- Activity = activity name; class: class context
- Application server is JBoss and the Pega version is 8.2.2
I tried to monitor the job from Admin Studio to check its status, but I could not locate it under the Jobs landing page.
In your TechTalk episode you mentioned that we can check the status of a job in Admin Studio after we successfully create one.
Can you please help me understand whether I need to make any specific configuration changes for the job to be displayed under the Jobs landing page in Admin Studio?
Thanks,
Vyas Raman Loka.
Pegasystems Inc.
IN
Hi Vyas,
Firstly, thanks for your question.
Every Job Scheduler and Queue Processor in 8.2.x gets rule-resolved against the context specified in the ASYNCPROCESSOR requestor type. Unless a Job Scheduler is resolvable against that context, it will not show up on the Admin Studio landing page. Please see the snippet below from Pega Help for more details:
AsyncProcessor requestor type
You use the AsyncProcessor requestor type to resolve Job Scheduler and Queue Processor rules.
Each Pega Platform operator logs in using a unique ID that is associated with a specific access group. This access group provides a context to resolve rules. Because Job Scheduler and Queue Processor rules run in the background, no user-based context is available to resolve these rules. The AsyncProcessor requestor type provides the context to resolve Job Scheduler and Queue Processor rules.
Requestor type definition
The AsyncProcessor requestor type defines a list of rulesets that create a context for Job Scheduler and Queue Processor rules resolution.
At system startup, unique queue processors and job schedulers are found by using the context defined in the AsyncProcessor requestor type. When the system is running, and a new queue processor is added, or an existing one is overridden in a different ruleset, the context is updated to include the new ruleset to resolve the right rule.
The default access group is an application-based access group. This definition includes your custom access group that corresponds to your custom rulesets. When you use the application-based access group, only job schedulers and queue processors that belong to this access group run. The default access group is PRPC:AsyncProcessor.
I request you to explore the access group that is currently specified in the ASYNCPROCESSOR requestor type, and ensure that your Job Scheduler is resolvable against it.
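The resolution behavior described in the help snippet can be sketched as a small model: a Job Scheduler only appears in Admin Studio if its ruleset is reachable from the access group configured on the AsyncProcessor requestor type. All names below (rulesets, scheduler records, the helper function) are illustrative, not Pega APIs:

```python
# Hypothetical sketch of Admin Studio's visibility check: a scheduler is
# listed only if its ruleset is part of the context exposed by the access
# group on the AsyncProcessor requestor type.

def visible_schedulers(all_schedulers, async_processor_rulesets):
    """Return the schedulers resolvable against the requestor-type context."""
    return [
        s for s in all_schedulers
        if s["ruleset"] in async_processor_rulesets
    ]

# Rulesets reachable from the access group on the requestor type:
context = {"Pega-ProcessEngine", "MyApp-Rules"}

schedulers = [
    {"name": "PurgeOldCases", "ruleset": "MyApp-Rules"},
    {"name": "NightlyExtract", "ruleset": "OtherApp-Rules"},  # not in context
]

# Only PurgeOldCases is resolvable, so only it would show up.
print([s["name"] for s in visible_schedulers(schedulers, context)])
```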
Best regards,
Prithanka.
Cognizant
GB
Just to clarify: the access group context on the Job Scheduler or Queue Processor rule form only determines the context for the activity referenced in the rule; it has no significance for whether the rule shows up in Admin Studio.
For it to show up in Admin Studio, we need to check that the AsyncProcessor requestor type instance includes the application context for the job schedulers or queue processors created in the corresponding application context, right?
Updated: 29 Aug 2019 13:37 EDT
Pegasystems Inc.
IN
That is correct, Vyas.
Best regards,
Prithanka
Virtusa
IN
Does BIX not come under Background Processing? If it does, please clear up these doubts:
How does the Extract rule identify our own FTP server instance and the location path where the extracted files should be stored?
Can we skip generating the manifest file and the summary file?
Please help.
Pegasystems Inc.
IN
Hi Sagar,
BIX is related, but does not specifically come under Background Processing. I have branched your earlier reply into a new post for better visibility: BIX question - Best approach for Extracting smoother with low impact on performance
If your present question is related, you could update that post; otherwise, please create a new one.
Thank you,
Verizon
IN
Hi Prithanka,
I was replacing one of the advanced agents with a Job Scheduler.
With advanced agents we had the provision to update the "Agent Interval" or "Pattern" using the Data-Agent-Queue instance.
However, I could not find such a facility with Job Schedulers. All I can do is enable or disable it.
So if there is a need to update the schedule of a Job Scheduler in production, how can we do that?
Updated: 1 Sep 2019 5:56 EDT
Pegasystems Inc.
IN
Hi Mahi,
Firstly, thanks for the question; this is a really interesting one.
Agents have always had a rule (Rule-Agent-Queue) and a data instance (Data-Agent-Queue) working together to determine the current configuration. Though this was useful in some cases, it was also a cause of substantial pain for most of our users. For example:
- It made it difficult to ascertain the state of an Agent just by looking at the rule.
- Sometimes during migrations or upgrades the data instances would be lost, causing a change in behavior.
- It also made Agents behave differently from other Pega rules.
So, in an effort to standardize the behavior, Job Schedulers were created to contain all configuration within the rule itself. Any change in the behavior of the rule has to be made, and is contained, within the rule itself. Though this takes away a little flexibility, it offers better resilience and management capabilities. As for your particular case, Mahi, I would advise you to resave the Job Scheduler rule after you make changes (possibly in a production ruleset). Hopefully you are not required to make such changes frequently in production systems (which, again, is not advisable).
If this does not solve your use case, please feel free to get in touch with me by sending me a private message here on Pega Community and we can discuss further, and explore your specific use case.
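The pain point with the old split configuration can be sketched with a small illustration. This is not Pega internals, just a model of why merging a rule with a separate data instance is fragile: the effective agent behavior was the rule's settings overlaid with the Data-Agent-Queue instance, so losing the data instance during a migration silently changes behavior. The names and values below are made up for illustration:

```python
# Illustrative model: agent behavior = rule settings overlaid with the
# Data-Agent-Queue instance. If the data instance is lost, the effective
# configuration silently drifts back to the rule's defaults.

def effective_agent_config(rule_config, data_instance=None):
    """Merge the data instance over the rule, as agents effectively did."""
    merged = dict(rule_config)
    merged.update(data_instance or {})
    return merged

rule = {"interval_seconds": 300, "enabled": True}   # shipped in the rule
data = {"interval_seconds": 60}                     # tuned in production

print(effective_agent_config(rule, data))  # tuned behavior
print(effective_agent_config(rule, None))  # data instance lost on migration

# A Job Scheduler keeps everything in the rule, so the second case
# (silent drift back to defaults) cannot happen.
```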
Best regards,
Prithanka.
Murex
LB
Hello All,
I have a question related to the Queue Processors Attempts.
Previously, with agents, we had a property called pyAttempts on the System-Queue-DefaultEntry class that we could use to identify the number of attempts so far.
This field doesn't seem to work with Queue Processors. What is the alternative way to find the current retry count of a queue processor?
Thanks,
Mohamad