Question
TD Bank Group
CA
Last activity: 20 May 2016 12:37 EDT
A lot of nodes listed for a single PRPC instance
In the System > General > Systems, Nodes & Requestors screen, for a single PRPC "instance" with a single node, we see a lot of nodes listed, where the only difference is the NodeID.
Does anyone know what is causing this?
Thanks,
-
Pegasystems
IN
The node ID is a hash computed from the hostname (machine) + temp directory of PRPC + system name. Changing any one of them would cause the node ID to be recomputed.
TD Bank Group
CA
My issue is that the Node Name is composed of the machine name and a timestamp, and for some reason the timestamp changes. Again, I am not sure what triggers that timestamp change.
Everything else is the same: JVM temp dir, machine name.
Pegasystems Inc.
US
What version of Pega are you on?
Check "System > Settings > Search > Search Index Host Node Settings" to see how many indexing nodes have been added.
TD Bank Group
CA
We use 7.1.7.
Pegasystems Inc.
US
Check "System > Settings > Search > Search Index Host Node Settings" to see how many indexing nodes have been added.
TD Bank Group
CA
We only have 2 nodes listed there.
Pegasystems Inc.
US
Patrick,
Can you show the query result for the pr_sys_statusnodes table?
thanks,
Kevin
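A minimal sketch of the kind of query being asked for here, assuming the standard pyNodeName, pxCreateDateTime and pxLastPulseDateTime columns and that the table sits in your PegaDATA schema (adjust the schema qualifier for your environment):

-- Hedged sketch: list node status entries and when they were created / last pulsed
SELECT pyNodeName,
       pxCreateDateTime,
       pxLastPulseDateTime
FROM   pr_sys_statusnodes
ORDER BY pyNodeName, pxCreateDateTime;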
TD Bank Group
CA
That query returns 534 rows... I am not sure it would be useful to paste it here.
Is there another way I can share that with you?
Pegasystems Inc.
US
Save it to an Excel spreadsheet and attach it.
Pegasystems Inc.
US
Hi Patrick Capron,
When you save it as an Excel file and want to attach it in your reply, click Reply, then click "Use advanced editor" at the top right, above the box you type your reply into. (Do this before you start typing!)
Then on the lower left, you'll see an Attach option.
Let us know if you have any questions!
TD Bank Group
CA
Here you go...
Pegasystems Inc.
US
Is there an actual problem with these entries, or is this just curiosity / a customer question?
This is NOT for a single PRPC instance as is stated in the title of this discussion.
It appears that you have 4 nodes total - 2 of which form the bulk of the entries.
Row Labels | Count of PYNODENAME |
cs1bcmapd01.bns | 274 |
cs1bxabati01.bns | 1 |
prpc-dev.ist.intralink.bns | 258 |
tocwcmapd021ic | 1 |
Grand Total | 534 |
The last time I checked, the create datetime was NOT part of the key for this table for 'normal' PRPC nodes.
It appears that these entries are sometimes created several times a day...
There were several requests above for information about Search Indexing nodes - can you answer those questions?
I am not as familiar with the way Indexing names its nodes, and that may be relevant to answering your question.
In systems before (about) ML9, the entries in this table are not managed, so the table can grow like this... it's not usually a problem.
Is this causing a problem?
Nodes with old pxcreatedatetime values (more than a couple days old; most of these) can be deleted without causing issues.
If you want to be sure you can also check that the pxLastPulseDateTime and the pxLastIndexBuildDateTime are also more than a few days old.
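A hedged sketch of that cleanup on DB2, assuming the column names mentioned above and an arbitrary 7-day cutoff (verify the names against your schema and back up the table first):

-- Remove stale node entries whose create, pulse and index-build timestamps are all
-- older than 7 days. Rows with a NULL pxLastIndexBuildDateTime (non-indexing nodes)
-- would need the last condition adjusted.
DELETE FROM pr_sys_statusnodes
WHERE pxCreateDateTime < CURRENT TIMESTAMP - 7 DAYS
  AND pxLastPulseDateTime < CURRENT TIMESTAMP - 7 DAYS
  AND pxLastIndexBuildDateTime < CURRENT TIMESTAMP - 7 DAYS;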
TD Bank Group
CA
It IS A SINGLE PRPC instance.
From the 4 nodes listed above, only 2 pertain to this instance: cs1bcmapd01.bns (the name of the server) and prpc-dev.ist.intralink.bns (a DNS alias). The other 2, I have no idea where they are coming from.
I already answered the questions above about indexing nodes. We have 2 listed (1 for cs1bcmapd01.bns and 1 for prpc-dev.ist.intralink.bns) because at some point we were flipping back and forth between the 2, and having only one would cause Search Indexing to stop working.
Besides the fact that we sometimes bounce between the 2 above-mentioned indexing "nodes", it does not seem to be causing an issue, but we would like to understand what is causing all those entries to be created.
Based on what you suggested, we will proceed and delete the rows based on pxLastPulseDateTime and pxLastIndexBuildDateTime.
Pegasystems Inc.
US
Ok, I see you are getting two entries for one actual node.
o One for the actual machine name.
o One for a DNS alias.
thanks, that was not clear from the discussion above.
If I understand your configuration correctly, you have one prpc node which has search configuration to use itself (via machine name) and itself again (via DNS Alias) as the two required search nodes. Did I understand that correctly?
If so, it may work, but I'm not sure if that is actually a supported configuration... that may be leading to some of the behavior you are seeing. We will find the right person to answer that.
As for the other nodes that are showing up, the ones that show up just once... All I can say about them is that at some point in the past nodes with those names were started and referred to the same database as this system.
It looks like one was started recently, on 1/28/2016, and the other on 10/31/2015.
Once you delete the entries, these should be gone (or you may need to delete the recent one separately).
If they show up again you'll need to track down the owner of the machine...
TD Bank Group
CA
Your understanding is correct.
We deleted entries in the pr_sys_statusnodes table based on pxcreatedatetime, pxLastPulseDateTime and pxLastIndexBuildDateTime but this does not seem to have removed the nodes from the Designer Studio screen named "Systems, Nodes & Requestors".
Where is this information coming from then?
By doing that delete on this table, have we now introduced an inconsistency in the PRPC data DB? Should we have removed entries from other tables at the same time?
Pegasystems
IN
I am a little confused here.
It IS A SINGLE PRPC instance.
From the 4 nodes listed above, only 2 pertain to this instance: cs1bcmapd01.bns (name of the server) and prpc-dev.ist.intralink.bns (DNS Alias).
Does this mean that two entries get created in the pr_sys_statusnodes table whenever this PRPC instance is started? Or does the node name that gets used change from one startup to the next?
We have 2 listed (1 for cs1bcmapd01.bns and 1 for prpc-dev.ist.intralink.bns) as at some point we were flipping back an forth between the 2 and having only one would cause the Search Indexing to stop working.
So the node ID changes whenever the node name being picked up changes?
If that is the case, then you will have to keep changing the value on the search landing page and truncating the old entries from the pr_sys_statusnodes table. That said, there are enhancements to fix the node ID issue in 7.1.9 through a startup system property. That could help you here.
TD Bank Group
CA
We have not been able to find a pattern that explains when one name is used versus the other.
I don't know the PRPC code that "creates" those entries. It would be good if someone with knowledge on this could participate in this discussion.
Pegasystems Inc.
DE
pr_sys_statusnodes is used to store node status, mainly to synchronize nodes (for Hazelcast). Normally there should be only a few entries in this table. If there are more entries, check whether the system cleaner or purge agent is running and scheduled correctly.
Go to the actual NODES list.
It sounds like either
a- you have not properly defined your Pega temp dir, and every time you start your app server you get a different temp dir -> different node ID
b- you are running prpcutils or another batch tool repeatedly, with a unique implicit temp dir on each execution. Don't forget that as long as prpcutils is running, that batch job IS a node in the Hazelcast cluster, and it creates a statusnode record
Suggest you
a- see how many nodes are defined
b- see how many unique data-agent-queue records you have
c- check table pr_sys_statusdetails. There are bugs in several releases that cause records to pile up in that table.
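A hedged sketch of checks a and c, using the column names seen earlier in this thread (check b is left out here since I am not sure of the table behind Data-Agent-Queue; Designer Studio is probably the easier place to look for it):

-- (a) how many distinct node names have registered entries, and how many each
SELECT pyNodeName, COUNT(*) AS entry_count
FROM   pr_sys_statusnodes
GROUP BY pyNodeName
ORDER BY entry_count DESC;

-- (c) whether pr_sys_statusdetails has piled up rows
SELECT COUNT(*) AS detail_rows
FROM   pr_sys_statusdetails;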
Pegasystems Inc.
US
This looks like an app that may have been migrated a few times before it came to the instance you are on now. As the system name and servers change, new node entries are put in place. I'd truncate the pr_sys_statusnodes table, restart your app servers, and then reindex. You have residual nodes from a past application life.
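A hedged sketch of that truncate on DB2; only do this with all app server nodes stopped, since running nodes recreate their entries at startup:

-- Clear all node status entries before restarting and reindexing (DB2 syntax;
-- a plain DELETE FROM pr_sys_statusnodes also works).
TRUNCATE TABLE pr_sys_statusnodes IMMEDIATE;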
Scotiabank
CA
Hello all:
I work with Patrick, the initiator of the thread.
Looking at Andrew Werden's responses a) and b).
They are both correct.
We are using the Batch Utility for Code Deployment purposes. Each time the prpcutils script runs, it starts up a JVM with a uniquely named Temp directory.
Our in-house script will need to be changed to always use the same temp directory when starting prpcutils.sh.
Once this is corrected, is there any harm in removing the unused records from the PR_SYS_STATUSNODES table, or should I be aware of joined tables?
Pegasystems Inc.
US
When the util script runs, the node it creates to run in gets added to PR_SYS_STATUSNODES, along with any node that has ever connected to that DB. If a node is no longer being used, you can safely remove it from the table.
It also adds a Data-Admin-Node record to the pr_data_admin table. You want to clear out both pr_sys_statusnodes and the Data-Admin-Node records to "leave no marks".
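A hedged sketch of cleaning up both places for a retired batch node; the 'Data-Admin-Nodes' class value and the pzInsKey matching are assumptions, and <old-node-name> / <old-node-id> are placeholders to fill in:

-- Remove the stale batch node's status entry
DELETE FROM pr_sys_statusnodes
WHERE pyNodeName = '<old-node-name>';

-- Remove the matching node data record; confirm the pxObjClass value and key
-- format against your own pr_data_admin rows before running this.
DELETE FROM pr_data_admin
WHERE pxObjClass = 'Data-Admin-Nodes'
  AND pzInsKey LIKE '%<old-node-id>%';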
Scotiabank
CA
Thank you Ryan and Andrew:
I have completed some analysis and can delete the nodes via a DB2 script, using the NODEID that is generated.
Is there a way for me to incorporate those statements into the Utils JVM IMPORT and EXPORT streams while they are running, using the DB2 connections already available, so I do not need to install a DB2 client on the Pega servers?
You can reuse the import temp directory - I believe there is a command-line option for the temp directory. That way you have one 'node' for batch scripts, as long as you run the scripts on the same host consistently.