Kafka Message Consumption by Multiple Pega Deployments
Hello,
We have a need to publish a message to a Kafka topic (through the Queue-For-Processing method), and the same message has to be consumed (through a Queue Processor) by another workload/Pega deployment.
Context: These workloads will share a common, externalized Kafka cluster.
Also, if multiple workloads/Pega deployments have to consume the same message, is it recommended to create Queue Processor rules in each of the workloads/systems that share the external Kafka cluster, so each can process it independently?
Also, if we are reusing the same cluster for multiple Pega deployments, I believe the setting below has to be identical across all Pega deployments sharing the Kafka cluster. Is that correct? Please confirm.
<env name="services/stream/name/pattern" value="pega-dev-{stream.name}"/>