Question
Cognizant
CA
Last activity: 31 Jul 2015 0:34 EDT
An advanced agent for purge with looping logic
An advanced agent for purge has the following looping logic:
1) For 4 product types, the main loop runs 4 times, fetching 500 records per iteration, so 2,000 records are fetched in all.
2) Within each run of 500 cases, it creates a PDF for a single item in the list and then deletes the case and its associated data from different tables.
3) Once those 500 cases are purged, the main loop fetches the next 500 records and the process continues, as sketched below.
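Roughly, the structure is as follows (a Java-style sketch only; fetchBatch, createPdf, and purgeCase are made-up names standing in for our activity steps, not real Pega API calls):

    String[] productTypes = {"TypeA", "TypeB", "TypeC", "TypeD"};      // illustrative types
    for (String productType : productTypes) {                          // main loop: 4 product types
        java.util.List<String> handles = fetchBatch(productType, 500); // 500 records per fetch
        for (String handle : handles) {                                // sub loop: one case at a time
            createPdf(handle);                                         // archive the case as a PDF
            purgeCase(handle);                                         // delete the case and related data
        }
    }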
I need to know whether so much looping logic nested inside another loop can cause potential problems such as a system crash or high CPU utilization.
For a purge process, the looping logic will normally always be present in an advanced agent.
The running agent will take a CPU hit based on the complexity of the logic. How much CPU it consumes depends on the OS, the number of CPUs, and other configuration. It's advised to do a complete test, verify alerts, and review other OS stats for benchmarking.
Cognizant
CA
Yes, I agree, but this testing will be done in the PAT environment. I need to consider another approach before the project goes to PAT in case the looping logic in the agent turns out to be a problem. Is there any prior experience from purge projects that can help us conclude whether the JVM can handle the purge-process load?
Pegasystems
IN
So, as I understand it, you want your agent to create a PDF for each entry in the table and then delete it, and this needs to happen for each record.
How often will this agent run?
Will it always fetch 2000 records on each run of the agent?
For purging, are you trying to purge all the records in one go (using a custom delete statement), or will you be deleting one row at a time?
Answers to the above will determine the impact this will have on the DB and on the application server running the agent.
It looks like you are trying to do your own custom archive and purge for your cases. Is that the intention?
Cognizant
CA
This advanced agent runs in recurring mode; it will run daily on weekdays at 12:30 AM, when there is no user load on the system.
It will fetch 500 records at a time in the sub loop, 2,000 records overall in the main loop.
Yes, we are using a custom archive and purge. The archival is done one record at a time, and then the related data for the main deal in its custom tables is deleted in a loop with Obj-Delete-By-Handle.
Once all the data is deleted, the Commit method is called to commit all transactions, roughly as outlined below.
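In outline, a single record's purge looks roughly like this (hypothetical method names standing in for the activity steps, not engine APIs):

    archiveAsPdf(dealHandle);                     // 1. generate the PDF for the deal
    for (String h : relatedHandles(dealHandle)) { // 2. loop over rows in its custom tables
        objDeleteByHandle(h);                     //    deferred delete per row
    }
    objDeleteByHandle(dealHandle);                // 3. delete the main deal record
    commit();                                     // 4. the Commit flushes the deferred deletes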
The agent will be running on a single node.
Pegasystems
IN
Any reason for not running it on multiple nodes, considering this agent runs at a time when there is no user load on the system?
Cognizant
CA
As it is an advanced agent, we wanted to avoid any locking issues, and that's why we decided to go with a single node. With multiple nodes it would be difficult to calculate the timing on each node and then schedule the agent to run at a particular time that doesn't collide with the agent on another node. From my previous experience, a standard agent can handle thousands of records at a time, but I am not sure about an advanced agent's looping logic.
Pegasystems
IN
I am trying to understand your use case a little more.
1) Does your advanced agent have a queue table associated with it?
2) Are you using the PRQueueIterator to iterate over the entries in your queue? You can get the iterator using tools.getThread().getQueueManager().iterator(<classname>) (see the sketch after this list).
3) A standard agent works with one entry at a time, but it will wait until the queue is drained. Using a standard agent can help you work across different nodes at the same time because you are only locking one record at a time.
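For reference, a minimal Java-step sketch of that iterator usage (assuming PRQueueIterator follows the usual java.util.Iterator contract; the queue class name is a placeholder):

    // "tools" is the PublicAPI instance available in an activity Java step.
    java.util.Iterator queueItems =
        tools.getThread().getQueueManager().iterator("My-Queue-Class");
    while (queueItems.hasNext()) {
        Object entry = queueItems.next(); // one queue entry at a time
        // generate the PDF for this entry, then delete the case and its data
    }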
Cognizant
CA
1) No
2) We are not using a queue; it's an advanced agent operating on the work table.
3) We had a requirement for which an advanced agent was the better fit.
We have not used a standard agent, but with an advanced agent for the purge process, is there any issue with the looping logic?
In the loop, only one entry is processed at a time, with the PDF generation and the deletion of data from different tables. Can a JVM with a 4 GB heap size handle this load?
Pegasystems
IN
The memory should be sufficient, provided you are not holding on to the entry in memory once the PDF has been generated.
Cognizant
CA
Sorry, I don't understand the meaning of "holding on to the entry in memory once the PDF is generated". Can you please elaborate?
Once the PDF is generated, we remove all pages, and then the deletion of records from the different tables starts.
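For example, the cleanup after the PDF step is roughly this (illustrative; the page name is made up, and I am assuming removeFromClipboard() is the engine call behind the Page-Remove method):

    ClipboardPage casePage = tools.findPage("CaseData"); // page holding the case data
    // ... generate the PDF from casePage ...
    casePage.removeFromClipboard(); // drop the page so the heap can be reclaimed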
Pegasystems
IN
If you are removing all pages, you should be good. That said, Chunzhi Hong's idea of benchmarking would be a good one.
Pegasystems Inc.
JP
Why not perform some benchmark tests first to see how many CPUs and how much memory are required to process one case for each of your 4 case types?
Then, based on the CPU time and memory available in your PAT environment, you can calculate how many records can be processed in a given time frame.
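For example, with placeholder numbers (the per-case cost must come from your own benchmark, not from these made-up figures):

    double secondsPerCase = 2.0;                               // assumed benchmark result
    int casesPerNight = 4 * 500;                               // 4 case types x 500 records
    double runMinutes = secondsPerCase * casesPerNight / 60.0; // about 67 minutes
    System.out.printf("Estimated nightly run: %.1f minutes%n", runMinutes);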