Question
t
US
Last activity: 2 Jan 2020 22:46 EST
Index file Unavailable
In one of the test environments, the index files frequently get corrupted, which breaks SEARCH in the portal. The test environment has 20 JVMs, and the index node is set up on 4 of them. Each time the index is corrupted, we perform the following steps:
1. Stop the server
2. Manually delete all the index files and directories under /apps/CSAR/PRPCtextIndex on all nodes (see the sketch after this list).
3. Restart the server.
4. Start Re-indexing from Designer Studio
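For reference, the manual cleanup in step 2 boils down to something like this on each node once its JVMs are stopped (a sketch using the path from step 2; adjust for your environment):

    # remove all search index files on this node so they can be rebuilt from Designer Studio
    rm -rf /apps/CSAR/PRPCtextIndex/*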
This workaround is time-consuming and the issue recurs often. Re-indexing also takes hours, and users cannot use SEARCH until it completes. We are looking for a permanent fix so that the index files do not get corrupted.
Our Pega version is 7.2.1. Can you please help?
Ernst & Young LLP
MT
Can you check whether the index file size exceeds the capacity of the disk partition that hosts the index folder?
In one of our previous projects, we found that index file corruption was happening because that disk partition had no free space left.
Also check the log files of the nodes that host the index for any indexing errors, as they might give you a hint about the actual issue.
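As a quick check on each index host node, something like the following would show both the partition usage and any recent index-related errors (a sketch; the exact log location depends on your application server setup):

    # free space on the partition that hosts the search index
    df -h /apps/CSAR/PRPCtextIndex
    # size of the index files themselves
    du -sh /apps/CSAR/PRPCtextIndex
    # scan the Pega log for index-related errors (log path varies by environment)
    grep -i "index" /path/to/logs/PegaRULES.log | grep -iE "error|corrupt"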
t
US
Thanks for your reply. We just checked with the admin and there is no space issue. /apps/CSAR has 40 GB, and usage stays fairly low, around 10-20%, at all times.
Could there be any other reason for this index corruption?
Pegasystems Inc.
IN
Can you set -Dindex.directory as a JVM argument and pass the directory value?
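For example, on Tomcat this could go in setenv.sh (a sketch; the flag comes from the suggestion above, the path from the original post, and setenv.sh is assumed here as the place where JVM options are set):

    # pass an explicit search index directory to each index host JVM
    JAVA_OPTS="$JAVA_OPTS -Dindex.directory=/apps/CSAR/PRPCtextIndex"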
t
US
We restricted search indexing to a single node and it has been fine so far (4 servers with 5 JVMs each, 20 instances in total). We will expand it to multiple nodes later and see whether that causes the issue.
Swedbank AB
SE
Hello Karthik,
Please make sure not to include more than one node in the Search Index Host Node Setting list while rebuilding the index from scratch.
Including more than one node in the list at this point might cause duplicate index builds and compromise system performance.
After the indexing is complete on the first host node, add any additional host nodes you need. The system replicates the indexes to the new nodes.
For more information, see "Configuring search index host node settings" in the installation guide for your version.
Thank you,
Pawan
t
US
I have added the additional host nodes now and will monitor for the next two weeks and report back on whether it works fine. Thanks for your input.
Tranzformd
AU
Hi Karthik,
Please make it a practice to use only one node for indexing and to restart that node first during system maintenance, when all the server JVMs need to be started.
Let us know if the issue still persists.
t
US
Having one node for indexing resolves the issue. However, I believe Pega recommends having a search index node on each server when there are multiple servers (at least one search node per server) for performance. If we keep it on a single node, will there be performance degradation in production?
Pega reference - https://community.pega.com/sites/default/files/help_v83/procomhelpmain.htm#/express/engine/search/eng-config-index-host-nodes-tsk.htm
Swedbank AB
SE
Most probably yes: keeping just one search host node could lead to a number of search-related issues if that only host node goes down or leaves the cluster for any reason.
Having multiple host nodes reduces the load and provides a backup in case the primary host node fails.
Thank you,
Pawan