Question
CIMB Niaga
ID
Last activity: 22 Jan 2025 16:42 EST
Asking about Pega 8.1.3 Stream Issue
Hi Team,
I have an issue in production with Pega Stream. I am using Pega 8.1.3, which uses embedded Kafka.
Looking at the Kafka logs, I can see that at certain times the ZooKeeper session expires and the client cannot establish a new session.
**Issue:**

```
[2025-01-20 01:51:09,538] WARN Client session timed out, have not heard from server in 20010ms for sessionid 0xffffffffefd70006 (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:51:09,538] INFO Client session timed out, have not heard from server in 20010ms for sessionid 0xffffffffefd70006, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:51:11,604] INFO Opening socket connection to server Proprietary information hidden/ Proprietary information hidden:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:51:11,604] INFO Socket connection established to Proprietary information hidden/ Proprietary information hidden:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:51:27,086] WARN Unable to reconnect to ZooKeeper service, session 0xffffffffefd70006 has expired (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:51:27,086] INFO Unable to reconnect to ZooKeeper service, session 0xffffffffefd70006 has expired, closing socket connection (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:51:27,086] INFO EventThread shut down for session: 0xffffffffefd70006 (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:51:27,102] INFO [ZooKeeperClient] Session expired. (kafka.zookeeper.ZooKeeperClient)
```

**Normal:**

```
[2025-01-20 01:47:38,873] INFO [ZooKeeperClient] Initializing a new session to Proprietary information hidden:2181. (kafka.zookeeper.ZooKeeperClient)
[2025-01-20 01:47:38,873] INFO Initiating client connection, connectString= Proprietary information hidden:2181 sessionTimeout=60000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@3febb011 (org.apache.zookeeper.ZooKeeper)
[2025-01-20 01:47:38,873] INFO Creating /brokers/ids/11 (is it secure? false) (kafka.zk.KafkaZkClient)
[2025-01-20 01:47:38,888] INFO Opening socket connection to server Proprietary information hidden/ Proprietary information hidden:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:47:38,888] INFO Socket connection established to Proprietary information hidden/ Proprietary information hidden:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:47:38,935] INFO Session establishment complete on server Proprietary information hidden/ Proprietary information hidden:2181, sessionid = 0xffffffffefd70005, negotiated timeout = 30000 (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:47:39,232] INFO Result of znode creation at /brokers/ids/11 is: OK (kafka.zk.KafkaZkClient)
[2025-01-20 01:47:39,232] INFO Registered broker 11 at path /brokers/ids/11 with addresses: ArrayBuffer(EndPoint( Proprietary information hidden,9092,ListenerName(PLAINTEXT),PLAINTEXT)) (kafka.zk.KafkaZkClient)
[2025-01-20 01:47:39,654] WARN We are triggering an exists watch for delete! Shouldn't happen! (org.apache.zookeeper.ZooKeeper)
[2025-01-20 01:48:10,292] WARN Client session timed out, have not heard from server in 20002ms for sessionid 0xffffffffefd70005 (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:48:10,292] INFO Client session timed out, have not heard from server in 20002ms for sessionid 0xffffffffefd70005, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:48:11,601] INFO Opening socket connection to server Proprietary information hidden/ Proprietary information hidden:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:48:11,601] INFO Socket connection established to Proprietary information hidden/ Proprietary information hidden:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:48:11,601] INFO Session establishment complete on server Proprietary information hidden/ Proprietary information hidden:2181, sessionid = 0xffffffffefd70005, negotiated timeout = 30000 (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:48:31,603] WARN Client session timed out, have not heard from server in 20002ms for sessionid 0xffffffffefd70005 (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:48:31,603] INFO Client session timed out, have not heard from server in 20002ms for sessionid 0xffffffffefd70005, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:48:33,216] INFO Opening socket connection to server Proprietary information hidden/ Proprietary information hidden:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:48:33,216] INFO Socket connection established to Proprietary information hidden/ Proprietary information hidden:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:48:42,570] WARN Unable to reconnect to ZooKeeper service, session 0xffffffffefd70005 has expired (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:48:42,570] INFO Unable to reconnect to ZooKeeper service, session 0xffffffffefd70005 has expired, closing socket connection (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:48:42,570] INFO EventThread shut down for session: 0xffffffffefd70005 (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:48:42,570] INFO [ZooKeeperClient] Session expired. (kafka.zookeeper.ZooKeeperClient)
[2025-01-20 01:48:42,570] INFO [ZooKeeperClient] Initializing a new session to Proprietary information hidden:2181. (kafka.zookeeper.ZooKeeperClient)
[2025-01-20 01:48:42,570] INFO Initiating client connection, connectString= Proprietary information hidden:2181 sessionTimeout=60000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@3febb011 (org.apache.zookeeper.ZooKeeper)
[2025-01-20 01:48:42,586] INFO Opening socket connection to server Proprietary information hidden/ Proprietary information hidden:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:48:42,586] INFO Creating /brokers/ids/11 (is it secure? false) (kafka.zk.KafkaZkClient)
[2025-01-20 01:48:42,586] INFO Socket connection established to Proprietary information hidden/ Proprietary information hidden:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:48:42,648] INFO Session establishment complete on server Proprietary information hidden/ Proprietary information hidden:2181, sessionid = 0xffffffffefd70006, negotiated timeout = 30000 (org.apache.zookeeper.ClientCnxn)
[2025-01-20 01:48:42,992] INFO Result of znode creation at /brokers/ids/11 is: OK (kafka.zk.KafkaZkClient)
[2025-01-20 01:48:42,992] INFO Registered broker 11 at path /brokers/ids/11 with addresses: ArrayBuffer(EndPoint( Proprietary information hidden,9092,ListenerName(PLAINTEXT),PLAINTEXT)) (kafka.zk.KafkaZkClient)
[2025-01-20 01:48:43,148] INFO Creating /controller (is it secure? false) (kafka.zk.KafkaZkClient)
[2025-01-20 01:48:43,414] INFO Result of znode creation at /controller is: OK (kafka.zk.KafkaZkClient)
```
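The "have not heard from server in 20010ms" warnings mean the client could not reach ZooKeeper on port 2181 in time. One quick sanity check for network latency to that port is a plain TCP connect probe. Below is a minimal sketch in Python; `zk-host.example` is a placeholder for the actual ZooKeeper host:

```python
# Minimal TCP connect-latency probe for the ZooKeeper port (2181 in the logs).
# "zk-host.example" is a placeholder -- substitute your ZooKeeper host.
import socket
import time

def tcp_connect_ms(host: str, port: int, timeout: float = 5.0) -> float:
    """Return the time in milliseconds taken to open a TCP connection."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection opened successfully; close it immediately
    return (time.monotonic() - start) * 1000.0

if __name__ == "__main__":
    # Probe a few times and report the worst case.
    latencies = [tcp_connect_ms("zk-host.example", 2181) for _ in range(5)]
    print(f"max connect latency: {max(latencies):.1f} ms")
```

If this probe stalls or spikes during the problem window, the timeouts are likely network-side rather than inside Kafka itself.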
Also, in the Pega alerts for that time window, I saw these error messages (the timestamps need a +7 hour offset):
```
2025-01-21 17:07:15,582 GMT*8*PEGA0005*515*500*0db2d247bcf23ad2fa521c85760544b0*NA*NA*B9KEGIH5JYGUSEW54NJQPNN8T8PZV1YKEA*NA*PegaSample*null*6431a92eb843dd43193d0f3ad85a95bb*N*0*B9KEGIH5JYGUSEW54NJQPNN8T8PZV1YKEA*4347*New I/O worker #89*STANDARD*com.pega.pegarules.data.internal.store.DataStorePreparedStatement*NA*NA*NA*NA*NA*****NA*NA*NA*NA*NA*initial Executable;0 additional frames in stack;*NA*Database operation took more than the threshold of 500 ms: 515 ms SQL: UPDATE "PC0" SET "pyTimeout" = ? ,"pyLastSeenDateTime" = ? FROM PegaDATA.pr_data_stream_sessions "PC0" WHERE ( "PC0"."pyId" = ? )*
2025-01-21 17:07:21,390 GMT*8*PEGA0005*563*500*0db2d247bcf23ad2fa521c85760544b0*NA*NA*B9KEGIH5JYGUSEW54NJQPNN8T8PZV1YKEA*NA*PegaSample*null*6431a92eb843dd43193d0f3ad85a95bb*N*0*B9KEGIH5JYGUSEW54NJQPNN8T8PZV1YKEA*4348*New I/O worker #89*STANDARD*com.pega.pegarules.data.internal.store.DataStorePreparedStatement*NA*NA*NA*NA*NA*****NA*NA*NA*NA*NA*initial Executable;0 additional frames in stack;*NA*Database operation took more than the threshold of 500 ms: 563 ms SQL: UPDATE "PC0" SET "pyTimeout" = ? ,"pyLastSeenDateTime" = ? FROM PegaDATA.pr_data_stream_sessions "PC0" WHERE ( "PC0"."pyId" = ? )*
2025-01-21 17:07:26,387 GMT*8*PEGA0005*938*500*0db2d247bcf23ad2fa521c85760544b0*NA*NA*B76D17URWFQEZE0TROP21ER8AGMYGGWKWA*NA*PegaSample*null*6431a92eb843dd43193d0f3ad85a95bb*N*0*B76D17URWFQEZE0TROP21ER8AGMYGGWKWA*4349*charlatan-watcher:2295*STANDARD*com.pega.pegarules.data.internal.store.DataStorePreparedStatement*NA*NA*NA*NA*NA*****NA*NA*NA*NA*NA*initial Executable;0 additional frames in stack;*NA*Database operation took more than the threshold of 500 ms: 938 ms SQL: DELETE "PC0" FROM PegaDATA.pr_data_stream_node_updates "PC0" WHERE ( "PC0"."pyTimestamp" < ? )*
2025-01-21 17:07:28,806 GMT*8*PEGA0005*1391*500*0db2d247bcf23ad2fa521c85760544b0*NA*NA*B9KEGIH5JYGUSEW54NJQPNN8T8PZV1YKEA*NA*PegaSample*null*6431a92eb843dd43193d0f3ad85a95bb*N*0*B9KEGIH5JYGUSEW54NJQPNN8T8PZV1YKEA*4350*New I/O worker #89*STANDARD*com.pega.pegarules.data.internal.store.DataStorePreparedStatement*NA*NA*NA*NA*NA*****NA*NA*NA*NA*NA*initial Executable;0 additional frames in stack;*NA*Database operation took more than the threshold of 500 ms: 1.391 ms SQL: UPDATE "PC0" SET "pyTimeout" = ? ,"pyLastSeenDateTime" = ? FROM PegaDATA.pr_data_stream_sessions "PC0" WHERE ( "PC0"."pyId" = ? )*
2025-01-21 17:08:10,104
```
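The PEGA0005 lines are asterisk-delimited, so a small script can pull out the elapsed and threshold values to see how far each operation overshot. This is a sketch: the field positions (alert ID at split index 2, elapsed ms at index 3, threshold ms at index 4) are inferred from the sample lines above, so verify them against your own alert file:

```python
# Extract elapsed/threshold milliseconds from PEGA0005 alert lines.
# Field positions are inferred from the samples above: after splitting on "*",
# index 2 is the alert ID, index 3 the elapsed ms, index 4 the threshold ms.

def parse_pega0005(line: str):
    parts = line.split("*")
    if len(parts) < 5 or parts[2] != "PEGA0005":
        return None  # not a PEGA0005 alert line
    return {
        "timestamp": parts[0],
        "elapsed_ms": int(parts[3]),
        "threshold_ms": int(parts[4]),
    }

sample = ("2025-01-21 17:07:15,582 GMT*8*PEGA0005*515*500*"
          "0db2d247bcf23ad2fa521c85760544b0*NA*NA")
alert = parse_pega0005(sample)
print(alert["elapsed_ms"], alert["threshold_ms"])  # -> 515 500
```

Running this over the whole alert file and sorting by `elapsed_ms` makes it easy to spot which statements are the worst offenders.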
Do you have any information on why this happened? Can you give me a recommendation? My application runs 24x7 and I need to make sure our customers are not significantly impacted.
@HenryJusrin
It looks like your Pega 8.1.3 application is having trouble maintaining its connections to ZooKeeper and Kafka, which is causing session timeouts and slow database operations. A few things to look at:

- **Network:** Check for latency or instability between Pega and ZooKeeper/Kafka, and make sure the link is reliable.
- **Timeouts:** Consider increasing the session timeout settings to give connections more leeway.
- **Resources:** Make sure the ZooKeeper and Kafka servers have enough CPU and memory, and consider adding more nodes to handle the load.
- **Database:** Optimize the SQL shown in your alerts by adding appropriate indexes, and reduce lock contention by managing transactions better. Monitor database performance to identify and resolve bottlenecks, and do regular maintenance such as reindexing and archiving old data.
- **Architecture:** Consider externalizing ZooKeeper and Kafka instead of running them embedded within Pega, for better scalability and reliability.

Finally, reach out to Pega Support with your logs for tailored assistance, and keep the system under continuous monitoring to catch future issues early. These steps should help keep your application stable and minimize the impact on your customers.
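On the timeout point specifically: your log shows the client requesting `sessionTimeout=60000` but negotiating only 30000, which means the ZooKeeper server's maximum session timeout is capping it (the client then reports "have not heard from server" after roughly two-thirds of the negotiated value, hence the ~20 s warnings). In stock Kafka/ZooKeeper the relevant settings look like the sketch below; in Pega's embedded setup these files are managed by the platform, so confirm the supported way to change them with Pega Support before editing anything:

```properties
# zookeeper.properties (ZooKeeper server) -- the server caps the session
# timeout a client may negotiate at maxSessionTimeout (default 20 * tickTime).
tickTime=2000
maxSessionTimeout=60000

# server.properties (Kafka broker) -- the session timeout the broker requests;
# your log suggests this side is already at 60000.
zookeeper.session.timeout.ms=60000
zookeeper.connection.timeout.ms=60000
```

Raising the timeout only buys headroom: if the pauses come from GC stalls or network outages longer than the session timeout, the underlying cause still needs fixing.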