Question
Accenture
ES
Last activity: 16 Oct 2018 12:03 EDT
Not receiving the messages published to an Apache Kafka topic
Hi,
We are trying to create an application that listens to a topic on an Apache Kafka server. The Pega Platform and the Apache Kafka server are not on the same machine. We have followed the configuration suggested in the three tutorials on the subject released with Pega 7.3.
We are publishing messages to the Kafka topic, and we have also configured a consumer to verify that the messages are flowing to the topic. The issue is that the messages are received correctly by that consumer, but not by the subscriber in our application. Has anybody else faced this issue? I would appreciate any suggestions to solve the problem.
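For reference, the verification consumer we used is essentially a minimal standalone Kafka client along the lines of the sketch below (the broker address, topic name, and group id are placeholders, not our real configuration):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TopicVerifier {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-host:9092"); // placeholder broker address
        props.put("group.id", "verification-consumer");
        props.put("auto.offset.reset", "earliest"); // read from the start of the topic
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (true) { // stop with Ctrl+C
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```

This consumer prints every message, so we know the messages reach the topic; the problem is only on the Pega side.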
Best regards,
Jordi
**Moderation Team has archived post**
This post has been archived for educational purposes. Contents and links will no longer be updated. If you have the same/similar question, please write a new post.
Pegasystems Inc.
US
Hi Jordi,
Could you tell me a little bit more about your application? Are you using the Pega Kafka dataset to post messages to a Kafka topic? Is this via a Decisioning data flow, or are you using an activity to post to Kafka (i.e. via DataSet-Execute)?
What about on the receiving end? Is this a Decisioning data flow or something else? If it is, is the "read from beginning" flag set on the Kafka dataset or not? And are you starting the data flow and then posting to the Kafka topic, or posting to the Kafka topic and then starting the data flow?
thanks,
Gareth
Accenture
ES
Hi Gareth,
The application we are building is for a telecommunications company. What we are trying to achieve is to receive the input data from a Kafka topic and, when any data is received, automatically trigger an event strategy and process the information.
I have tried to configure the dataset and I can establish the connection with the Kafka server, but I can't make it consume the messages from the topic. Can you specify the steps I should follow in more detail?
Regards,
Jordi
Pegasystems Inc.
US
Hi Jordi,
To help you out here, I still need some more information on your exact setup. Just to make sure you understand: the Kafka dataset currently does not save any subscriber information inside Kafka/ZooKeeper itself (i.e. the mapping of subscriber to position in the queue). This information is stored as part of the data flow state. So when you start a new data flow, the "read from beginning" flag determines where reading of the Kafka dataset begins: either from the start of the Kafka queue, or only from new messages arriving after the data flow has started. When a data flow is resumed, it uses the saved data flow state to set the position on the Kafka queue before starting to read. When a data flow is restarted, the position state is deleted and the reading start point is again determined by the Kafka dataset settings (i.e. the "read from beginning" flag).
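In plain Kafka client terms (an analogy only, not how the Pega dataset is implemented internally; the topic name and the loadSavedOffsets() helper are illustrative placeholders), the semantics look roughly like this:

```java
import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class OffsetSemanticsSketch {
    // Stands in for Pega's persisted data flow state (hypothetical helper).
    static Map<TopicPartition, Long> loadSavedOffsets() { return null; }

    public static void main(String[] args) {
        boolean readFromBeginning = true; // the dataset's "read from beginning" flag
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-host:9092"); // placeholder broker address
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = consumer.partitionsFor("my-topic").stream()
                    .map(p -> new TopicPartition(p.topic(), p.partition()))
                    .collect(Collectors.toList());
            consumer.assign(partitions); // manual assignment, offsets managed outside Kafka

            Map<TopicPartition, Long> saved = loadSavedOffsets();
            if (saved != null) {
                saved.forEach(consumer::seek);        // resume: pick up where the last run stopped
            } else if (readFromBeginning) {
                consumer.seekToBeginning(partitions); // start/restart with the flag set
            } else {
                consumer.seekToEnd(partitions);       // start/restart with the flag unset: new messages only
            }
            consumer.poll(Duration.ofSeconds(1)).forEach(r -> System.out.println(r.value()));
        }
    }
}
```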
So, short answer: if you want to read new messages from Kafka as they come in, you effectively need a constantly running real-time data flow. Is that what you have, or do you have a different setup?
thanks,
Gareth
Areteans Tech
NZ
Hi Gareth,
What if we run the Kafka dataset not via a data flow but by calling the "DataSet-Execute" method in an activity? Do we need to change any settings for this setup to read data from the Kafka queue?
Younus
Pegasystems Inc.
US
Hi Younus,
Do you mean calling DataSet-Execute with the Browse operation on the Kafka dataset in an activity? Why would you not use a data flow? If you call Browse outside of a data flow, each call will either read only the messages that arrive after DataSet-Execute is called (up to the time limit), or it will read from the beginning. The behaviour depends on the "read from beginning" flag defined for the dataset.
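In raw consumer terms, a one-off Browse behaves roughly like a bounded poll loop rather than a long-running subscription. A sketch, with assumed broker and topic names:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BoundedBrowseSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-host:9092"); // placeholder broker address
        props.put("group.id", "browse-sketch");
        // Mirrors the dataset flag: "earliest" = read from beginning, "latest" = new messages only.
        props.put("auto.offset.reset", "latest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        long deadline = System.currentTimeMillis() + 30_000; // the Browse time limit
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (System.currentTimeMillis() < deadline) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r -> System.out.println(r.value()));
            }
            // Unlike a real-time data flow, the read simply stops here; nothing keeps listening.
        }
    }
}
```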
Gareth
CBA
AU
Don't forget that Kafka topics have partitions; when several subscribers share a consumer group, each of them streams data only from the partition(s) assigned to it. A subscriber here in Pega would be either a DataSet-Execute call or a data flow.
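If you want to see the assignment in practice, a quick check with the plain Kafka Java client (broker, topic, and group id below are placeholders) prints which partitions a given subscriber owns:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AssignmentCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-host:9092"); // placeholder broker address
        props.put("group.id", "assignment-check"); // consumers sharing this id split the partitions
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            consumer.poll(Duration.ofSeconds(5)); // triggers the group rebalance and assignment
            System.out.println("Assigned partitions: " + consumer.assignment());
        }
    }
}
```

With one consumer in the group it owns every partition; starting a second instance with the same group.id shows the partitions being divided between the two.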
Centene
US
Hi Jordi,
Were you able to find the root cause of the issue, or were you able to fix it? I am facing a similar issue.
Thanks,
Raj
Accenture
ES
Hi,
Can someone shed some light on this issue?
Thanks!