In one of our test environments, the index files frequently get corrupted, which breaks SEARCH in the portal. The environment has 20 JVMs, and the index node is set up on 4 of them. Each time the index gets corrupted, we perform the following steps:
1. Stop the server.
2. Manually delete all index files and directories under /apps/CSAR/PRPCtextIndex on all nodes.
3. Restart the server.
4. Start re-indexing from Designer Studio.
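The per-node cleanup portion of the steps above can be sketched as a small script. This is only an illustration, not a Pega-supplied tool: the stop/start commands are placeholders (they depend on your application server), and the script defaults to a dry run so nothing is deleted unless you explicitly opt in.

```shell
#!/bin/sh
# Hypothetical cleanup sketch for the manual workaround above.
# INDEX_DIR matches the path from the question; adjust per environment.
INDEX_DIR="${INDEX_DIR:-/apps/CSAR/PRPCtextIndex}"
DRY_RUN="${DRY_RUN:-1}"   # set DRY_RUN=0 to actually run the commands

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "[dry-run] $*"
  else
    "$@"
  fi
}

# 1. Stop the application server (command is environment-specific).
run echo "stop the app server here, e.g. via your init/service scripts"

# 2. Delete the corrupted index files and directories on this node.
run rm -rf "$INDEX_DIR"

# 3. Restart the server; re-indexing is then triggered from Designer Studio.
run echo "restart the app server here"
```

You would still run this on every index node, and step 4 (re-indexing) can only be started from Designer Studio, not from the command line.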
This workaround is time-consuming, and the issue recurs often. Re-indexing also takes hours, and users cannot use SEARCH until it completes. We are looking for a permanent fix so that the index files do not get corrupted.
While rebuilding the index from scratch, make sure the Search Index Host Node Setting list contains only one node. Including more than one node at this point might cause duplicate index builds and degrade system performance.
After indexing completes on the first host node, add any additional host nodes you need; the system replicates the indexes to the new nodes.
For more information, see "Configuring search index host node settings" in the installation guide for your version.
Having one node for indexing resolves the issue. However, I believe Pega recommends having a search index node on each server (at least one search node per server) when there are multiple servers, for performance. If we keep only one index node, will there be performance degradation in production?