Kafka Message Consumption by Multiple Pega Deployments
Hello,
We need to publish messages to a Kafka topic (via the queue-for-processing method), and the same messages have to be consumed (via a QP) by another workload/Pega deployment.
Context: These workloads will share a common, externalized Kafka cluster.
Also, if multiple workloads/Pega deployments have to consume the same message, is it recommended to create QP rules in each of the workloads/systems that share the same external Kafka cluster for processing?
Also, if we are reusing the same cluster for multiple Pega deployments, I think the setting below has to be common across the deployments sharing the Kafka cluster. Is that correct? Please confirm.
<env name="services/stream/name/pattern" value="pega-dev-{stream.name}"/>
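To illustrate what this setting controls: the `{stream.name}` token in the pattern is substituted with the logical stream name to form the physical Kafka topic name. A minimal sketch of that substitution (illustration only; the actual resolution happens inside the Pega stream service, and the stream name used here is hypothetical):

```python
# Illustration: how a services/stream/name/pattern value maps a
# logical stream name to a physical Kafka topic name.
def resolve_topic(pattern: str, stream_name: str) -> str:
    """Replace the {stream.name} token in the configured pattern."""
    return pattern.replace("{stream.name}", stream_name)

# With the pattern from the question, a hypothetical stream named
# "MyQueueProcessor" would map to the topic "pega-dev-MyQueueProcessor".
print(resolve_topic("pega-dev-{stream.name}", "MyQueueProcessor"))
```

This is why the prefix matters: every deployment using the same pattern against the same cluster resolves its streams to the same topic names.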
Hi @SethuS88, thanks for reaching out. Queue processors are designed to process a deployment's own background workload; the messages they queue are intended for that same platform (deployment).
Coming to the use case: instead of queue processors, a Stream data set or a Kafka data set is better suited for cross-deployment consumption. I would recommend exploring those options.
Also, we don't recommend that two different Pega installations share the same value for the services/stream/name/pattern property shown above. This value must be unique per installation, even when the installations share the same Kafka cluster; otherwise their internal stream topics would collide.
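For example (hypothetical prefix values), each deployment sharing the cluster would carry its own pattern in its prconfig, so the topics each deployment creates stay distinct:

```xml
<!-- Deployment A's prconfig -->
<env name="services/stream/name/pattern" value="pega-dev-a-{stream.name}"/>

<!-- Deployment B's prconfig -->
<env name="services/stream/name/pattern" value="pega-dev-b-{stream.name}"/>
```

With distinct prefixes, deployment A's streams resolve to `pega-dev-a-*` topics and deployment B's to `pega-dev-b-*`, so neither deployment consumes the other's internal stream data by accident.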