Question
Coned
US
Last activity: 19 Dec 2016 18:40 EST
Why does PRPC need HornetQ enabled on a JBoss deployment?
We have our application deployed as an EAR on JBoss EAP 6.3. We realized that deploying the PRPC EAR on JBoss requires HornetQ to be enabled, but our client has reservations about enabling HornetQ. We reached out to Pega support to understand why and how PRPC uses HornetQ.
We raised the questions below with the support group, but they directed us to post them here.
- How is HornetQ leveraged by the Pega application (EAR deployment)? What "core functionality, such as the running of agents," requires HornetQ to operate? Please give a specific example of how agents leverage HornetQ and the types of agents that rely on it.
- Are those agents putting messages in HornetQ? Does Pega maintain those queues, and how? Does Pega monitor or alert on those queues?
- How does HornetQ messaging work in a clustered Pega environment? Does each node have its own isolated HornetQ queues, or are they shared/synchronized?
- Is there an impact on runtime performance? Will HornetQ consume JVM resources, and how much?
Accepted Solution
Pegasystems Inc.
US
Answers:
- How is HornetQ leveraged by the Pega application (EAR deployment)? What "core functionality, such as the running of agents," requires HornetQ to operate? Please give a specific example of how agents leverage HornetQ and the types of agents that rely on it.
All batch processing (including all OOTB agents, APIs like queueBatchActivity, SOAP invocations in parallel mode, etc.) is handled by the MDB PRAsync, which uses the built-in HornetQ for topic communication.
- Are those agents putting messages in HornetQ? Does Pega maintain those queues, and how? Does Pega monitor or alert on those queues?
See above. PRAsync is no different from any other MDB; it is managed by the JBoss container. You can use the built-in JBoss monitoring utilities (e.g., the command line) or any third-party monitoring tool to monitor the PRAsync topic. Here is an example:
/opt/apps/gcsops/jboss-eap-6.4/bin>./jboss-cli.sh
You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands.
1. connect to the admin server
[disconnected /] connect vgcs01:9999
2. enable statistics when needed (set to false when you are done)
[standalone@vgcs01:9999 /] /subsystem=ejb3:write-attribute(name=enable-statistics,value=true)
{"outcome" => "success"}
3. read all EJB runtime statistics. Pay special attention to the PRAsync MDB, which includes a lot of useful runtime info, e.g., peak-concurrent-invocations and onMessage call stats (only available once statistics are enabled in step 2). You can search the Red Hat support portal for the full documentation.
[standalone@vgcs01:9999 /] /deployment=prpc_j2ee14_jboss61JBM.ear/subdeployment=prbeans.jar/subsystem=ejb3:read-resource(include-runtime=true, recursive=true)
{
"outcome" => "success",
"result" => {
"entity-bean" => undefined,
"message-driven-bean" => {"PRAsync" => {
"component-class-name" => "PRAsync",
"declared-roles" => [],
"delivery-active" => true,
"execution-time" => 3585L,
"invocations" => 196L,
"methods" => {"onMessage" => {
"execution-time" => 3585L,
"invocations" => 196L,
"wait-time" => 3L
}},
"peak-concurrent-invocations" => 5L,
"pool-available-count" => 100,
"pool-create-count" => 23,
"pool-current-size" => 23,
"pool-max-size" => 100,
"pool-name" => "mdb-strict-max-pool",
"pool-remove-count" => 0,
"run-as-role" => undefined,
"security-domain" => "other",
"timers" => [],
"wait-time" => 3L,
"service" => undefined
}},
"singleton-bean" => undefined,
"stateful-session-bean" => {"PRServiceStateful" => {
"cache-size" => 0,
"component-class-name" => "PRServiceStateful",
"declared-roles" => [],
"execution-time" => 0L,
"invocations" => 0L,
"methods" => {},
"passivated-count" => 0,
"peak-concurrent-invocations" => 0L,
"run-as-role" => undefined,
"security-domain" => "other",
"total-size" => 0,
"wait-time" => 0L,
"service" => undefined
}},
"stateless-session-bean" => {
"LockManager" => {
"component-class-name" => "LockManager",
"declared-roles" => [],
"execution-time" => 8L,
"invocations" => 151L,
"methods" => {"unlockForRequestor" => {
"execution-time" => 8L,
"invocations" => 151L,
"wait-time" => 6L
}},
"peak-concurrent-invocations" => 1L,
"pool-available-count" => 100,
"pool-create-count" => 2,
"pool-current-size" => 2,
"pool-max-size" => 100,
"pool-name" => "slsb-strict-max-pool",
"pool-remove-count" => 0,
"run-as-role" => undefined,
"security-domain" => "other",
"timers" => [],
"wait-time" => 6L,
"service" => undefined
},
"PRServiceStateless" => {
"component-class-name" => "PRServiceStateless",
"declared-roles" => [],
"execution-time" => 0L,
"invocations" => 0L,
"methods" => {},
"peak-concurrent-invocations" => 0L,
"pool-available-count" => 100,
"pool-create-count" => 0,
"pool-current-size" => 0,
"pool-max-size" => 100,
"pool-name" => "slsb-strict-max-pool",
"pool-remove-count" => 0,
"run-as-role" => undefined,
"security-domain" => "other",
"timers" => [],
"wait-time" => 0L,
"service" => undefined
},
"EngineCMT" => {
"component-class-name" => "EngineCMT",
"declared-roles" => [],
"execution-time" => 0L,
"invocations" => 0L,
"methods" => {},
"peak-concurrent-invocations" => 0L,
"pool-available-count" => 100,
"pool-create-count" => 0,
"pool-current-size" => 0,
"pool-max-size" => 100,
"pool-name" => "slsb-strict-max-pool",
"pool-remove-count" => 0,
"run-as-role" => undefined,
"security-domain" => "other",
"timers" => [],
"wait-time" => 0L,
"service" => undefined
},
"Database" => {
"component-class-name" => "Database",
"declared-roles" => [],
"execution-time" => 0L,
"invocations" => 0L,
"methods" => {},
"peak-concurrent-invocations" => 0L,
"pool-available-count" => 100,
"pool-create-count" => 1,
"pool-current-size" => 1,
"pool-max-size" => 100,
"pool-name" => "slsb-strict-max-pool",
"pool-remove-count" => 0,
"run-as-role" => undefined,
"security-domain" => "other",
"timers" => [],
"wait-time" => 0L,
"service" => undefined
},
"EngineBMT" => {
"component-class-name" => "EngineBMT",
"declared-roles" => [],
"execution-time" => 3492L,
"invocations" => 196L,
"methods" => {"invokeEngine" => {
"execution-time" => 3492L,
"invocations" => 196L,
"wait-time" => 1L
}},
"peak-concurrent-invocations" => 5L,
"pool-available-count" => 100,
"pool-create-count" => 23,
"pool-current-size" => 23,
"pool-max-size" => 100,
"pool-name" => "slsb-strict-max-pool",
"pool-remove-count" => 0,
"run-as-role" => undefined,
"security-domain" => "other",
"timers" => [],
"wait-time" => 1L,
"service" => undefined
}
}
}
}
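Alongside the EJB statistics above, the HornetQ side itself can be inspected through the messaging subsystem in the same CLI session. This is a hedged sketch only: the hornetq-server name ("default") and the topic name are assumptions and should be read from your own standalone.xml rather than taken from here.

```
# Sketch only: run inside jboss-cli.sh while connected to the server.
# First list the JMS topics that actually exist on this hornetq-server:
/subsystem=messaging/hornetq-server=default:read-children-names(child-type=jms-topic)

# Then read the runtime counters (message-count, subscription-count, etc.)
# for the topic PRAsync listens on; <your-async-topic> is a placeholder:
/subsystem=messaging/hornetq-server=default/jms-topic=<your-async-topic>:read-resource(include-runtime=true)
```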
- How does HornetQ messaging work in a clustered Pega environment? Does each node have its own isolated HornetQ queues, or are they shared/synchronized?
This is a question more for Red Hat. I am not aware of any special handling from Pega's perspective.
WellCare
US
Hi Kevin,
Excellent explanation. I have a small question though: when you set up JBoss in a clustered configuration, do you also set up clustering for HornetQ? Is the symmetric setup the recommended way, or live-backup? I am also trying to use only unicast (TCP) instead of the default multicast (UDP).
Thanks,
Pegasystems Inc.
US
Fabian,
Thanks! If I understand you correctly, by JBoss clustering you mean several standalone Pega instances (JVMs) pointing to the same database schema, i.e., a Pega cluster implemented by Hazelcast; not to be confused with native JBoss clustering. If you look at the standalone.xml setup, the AsyncConnectionFactory uses an in-vm connector, which means messages will only be processed within the container (JVM). So I do not think HornetQ clustering will be needed, or even useful at all. In fact, if I am not mistaken, Red Hat only supports HornetQ within the context of JBoss EAP, similar to the embedded message providers of WebSphere/WebLogic.
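For reference, the in-vm wiring described above looks roughly like the fragment below in an EAP 6 standalone.xml. This is illustrative only: the AsyncConnectionFactory name comes from this discussion, but the exact schema version, JNDI entries, and surrounding elements should be checked against your own configuration.

```xml
<subsystem xmlns="urn:jboss:domain:messaging:1.4">
  <hornetq-server>
    <connectors>
      <!-- in-vm connector: messages never leave this JVM -->
      <in-vm-connector name="in-vm" server-id="0"/>
    </connectors>
    <jms-connection-factories>
      <connection-factory name="AsyncConnectionFactory">
        <connectors>
          <connector-ref connector-name="in-vm"/>
        </connectors>
        <entries>
          <entry name="java:/AsyncConnectionFactory"/>
        </entries>
      </connection-factory>
    </jms-connection-factories>
  </hornetq-server>
</subsystem>
```

Because the connection factory is bound to an in-vm connector, any HornetQ cluster configuration would have no effect on how PRAsync messages flow.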
Now, from your last question it appears you are actually thinking about real JBoss clustering. I do not think Pega has tested that configuration internally, so be cautious and perform extensive testing before considering it for production. Please refer to the Red Hat article "Switch from UDP to TCP for HornetQ clustering in JBoss EAP 6" on the Red Hat Customer Portal for details on how to set up the configuration. You will need a Red Hat support account to access it.
WellCare
US
Kevin,
As always, great insight. And yes, when I asked about clustering I meant JBoss clustering; I will keep that in mind. I was just wondering whether there are any issues with running Hazelcast and JBoss clustering at the same time, or whether they interfere with each other at all. My goal is to use JBoss clustering so that I can use the different load-balancing technologies it provides.
Pegasystems Inc.
US
Very good question, and definitely something to be careful about. I am not aware of any such incompatibility between Hazelcast and JBoss clustering per se, but again, we have not done much testing with that configuration. Here is one issue related to JBoss cluster session replication, which Pega does not yet support, with exceptions like this:
WARN [org.jboss.as.clustering.web.infinispan] (http-/ Proprietary information hidden:8080-5) JBAS010322: Failed to load session UXUjDfFxwlxqJeBdqitofM1o: java.lang.RuntimeException: JBAS010333: Failed to load session attributes for session: UXUjDfFxwlxqJeBdqitofM1o
at org.jboss.as.clustering.web.infinispan.DistributedCacheManager$2.invoke(DistributedCacheManager.java:229)
at org.jboss.as.clustering.web.infinispan.DistributedCacheManager$2.invoke(DistributedCacheManager.java:212)
at org.jboss.as.clustering.infinispan.invoker.SimpleCacheInvoker.invoke(SimpleCacheInvoker.java:34)
at org.jboss.as.clustering.infinispan.invoker.BatchCacheInvoker.invoke(BatchCacheInvoker.java:48)
at org.jboss.as.clustering.infinispan.invoker.RetryingCacheInvoker.invoke(RetryingCacheInvoker.java:85)
at org.jboss.as.clustering.web.infinispan.DistributedCacheManager$ForceSynchronousCacheInvoker.invoke(DistributedCacheManager.java:550)
PRPC does not support JBoss clustering or session replication. Clustering/replicating the JMS queue is just overhead, since we explicitly assume/require that MDBs fire on the same JVM on which they were 'queued'.
Keep in mind that for most customers an EAR deployment adds complexity and overhead while providing no business benefit. If a customer is not using PRPC as an EJB or MDB provider, is using PRPC only for HTTP users or services, and has no XA two-phase-commit requirements, a WAR deployment with no JMS dependencies will be more than enough.
WellCare
US
Hi Andy,
Thanks for the clarification. The main reason for me to do this is to utilize the various load-balancing algorithms that JBoss clustering supports. I have a 3-node standalone JBoss setup running 7.1.9. I want the architecture to balance the load between those 3 nodes and, in case of failure on one, to stop sending requests to that specific node, so the user experience is cleaner. I am not planning to use any session replication, but wanted to know whether it was needed for PRPC with regard to JMS. What I am really looking to do for HA is to start using Pega's HA capability.
Pegasystems Inc.
US
Hi Fabian,
A typical setup for your purpose normally requires a load balancer (e.g., F5) and possibly a web server in front of your JBoss EAP nodes. Is that not what you are planning for in the future?
WellCare
US
Hi Kevin,
Yes, I am using an F5 load balancer in front of two Apache servers, each of which fronts a 3-node PRPC cluster.
Keep in mind that by default, if you send requests for 'active' sessions to a different node than the one where the session was initiated, you will get HTTP 410 'session gone' errors, as the requestor context is not going to be available.
PRPC provides support for HA and for taking a node out for maintenance. In 7.1.9 you can force all requestors on a node to 'passivate' to the database; when the user 'clicks next' we invalidate the JSESSIONID routing cookie and redirect the browser back to the load balancer, which sends the request to another node (and that node restores the requestor from the database).
This does not leverage or support any clustering capabilities inside JBoss. That is not to say JBoss clustering does not add value, but PRPC HA does not need or leverage it.
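To avoid the HTTP 410 behavior described above, the web-server layer needs session stickiness so that requests for an active session keep landing on the node that owns the requestor context. Below is a minimal Apache mod_proxy_balancer sketch; the host names, ports, route names, and the /prweb context path are all assumptions for illustration, not taken from this thread.

```
# Hypothetical hosts/ports -- substitute your own PRPC nodes.
<Proxy "balancer://prpccluster">
    BalancerMember "http://prpc-node1:8080" route=node1
    BalancerMember "http://prpc-node2:8080" route=node2
    BalancerMember "http://prpc-node3:8080" route=node3
    # Pin each session to the node that created it via the JSESSIONID cookie
    ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>
ProxyPass        "/prweb" "balancer://prpccluster/prweb"
ProxyPassReverse "/prweb" "balancer://prpccluster/prweb"
```

For the route= matching to work, each JBoss node must append a matching route suffix to its JSESSIONID (the instance-id attribute of the web subsystem in EAP 6). None of this requires native JBoss clustering or session replication.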
WellCare
US
HI Andy,
Got it; I will do my testing prior to releasing into PRD. Right now it is just my SBX/DEV setup that I am playing with.
Pegasystems Inc.
AU
Another suggested solution could be to move to ActiveMQ, as discussed in the PDN post below:
Unitedhealth Group
US
Fabian,
If I understand your question correctly, yes, you can set up JBoss clustering. We have implemented JBoss clustering with Hazelcast in DMZ mode (where all the JVMs are in the DMZ). With this setup, if you restart all the JVMs at once, it may sometimes lead to a race condition in Hazelcast that prevents some of the nodes from coming up. I think there are some DSS settings to try, such as increasing the max lockout attempts or time.