When to use IBM MQ vs. Apache Kafka
What are the Differences?
What is IBM MQ?
IBM MQ is enterprise-grade messaging middleware that simplifies and accelerates the integration of diverse applications and business data across multiple platforms. It offers proven messaging capabilities that move information skillfully and safely.
What is Kafka?
Kafka is a distributed, fault-tolerant, high-throughput pub/sub messaging system. At its core it is a distributed, partitioned, replicated commit log service: it provides the functionality of a messaging system, but with a unique design.
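To make the commit-log model concrete, here is a minimal Java producer sketch. The broker address (localhost:9092) and topic name (customer-events) are hypothetical; records with the same key are appended to the same partition of the topic's log.

```
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Each record is appended to a partition of the topic's commit log;
            // the key determines which partition receives it.
            producer.send(new ProducerRecord<>("customer-events",        // hypothetical topic
                    "customer-42", "{\"status\":\"ACTIVE\"}"));
        }
    }
}
```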
Use Cases
When to use IBM MQ?
- Request for processing use case: an application makes a request to another system or service to complete an action. This may result in a response message being returned to the requester. Consider the following scenario (a request/reply sketch follows this list).
- Conventional Messaging: request/response interaction, where applications interact in either a request-only (fire and forget) or request/response pattern.
- Targeted Delivery: the message is deliberately targeted to the entity that will process it.
- Transient Data Persistence: the message is stored until a consumer has processed it or until it expires.
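As a rough illustration of the request/response pattern above, here is a minimal JMS 2.0 sketch in Java, assuming a ConnectionFactory already configured for an IBM MQ queue manager. The queue name DEV.QUEUE.REQUEST and the 30-second timeout are hypothetical; it also assumes the responder copies the request's message ID into the reply's JMSCorrelationID, a common convention.

```
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;

public class Requester {
    // Sends a request and blocks for the correlated reply (request/response pattern).
    public static String requestReply(ConnectionFactory factory, String payload) throws Exception {
        try (JMSContext context = factory.createContext()) {
            Queue requestQueue = context.createQueue("DEV.QUEUE.REQUEST"); // hypothetical queue
            TemporaryQueue replyQueue = context.createTemporaryQueue();    // per-requester reply destination

            TextMessage request = context.createTextMessage(payload);
            request.setJMSReplyTo(replyQueue);                             // targeted delivery of the reply
            context.createProducer().send(requestQueue, request);

            // Correlate the reply to this specific request by message ID.
            String selector = "JMSCorrelationID = '" + request.getJMSMessageID() + "'";
            try (JMSConsumer consumer = context.createConsumer(replyQueue, selector)) {
                Message reply = consumer.receive(30_000);                  // wait up to 30 s
                return reply == null ? null : ((TextMessage) reply).getText();
            }
        }
    }
}
```

For the fire-and-forget variant, the requester would simply send the message without setting JMSReplyTo or waiting for a reply.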
When to use Confluent Kafka?
- Enterprise data use case: in this use case, an application within the enterprise emits data that describes its current state. This data does not normally contain a direct instruction for another system to complete an action. Instead, it allows other systems to gain insight into that application's data and status. Central to this use case is a publish/subscribe (pub/sub) engine, where publishing applications emit data to a topic and subscribing applications register to one or more topics to receive the data from the publishing application. Consider the following scenario.
- Stream Event History: does the solution need to retrieve historical events during normal and/or failure situations? Apache Kafka stores events published to a topic and removes them only when they expire per the topic's retention configuration. This allows subscribers to replay events: a non-destructive consumption model (see the replay consumer sketch after this list).
- Scalable Consumption: subscribers scale to thousands, potentially tens of thousands, with minimal impact on the platform. Each subscriber keeps a pointer (offset) into the topic's stream history that represents its position, which minimizes the overhead of adding a new subscriber.
- Fine-grained Subscription: a consumer can subscribe to selected events of a topic instead of all events. Partitions within a topic are a fundamental architectural concept in Kafka and provide the capability to scale the solution to handle a massive number of events (the replay sketch below also shows assigning a single partition).
- Real-time Event Transformation: using Kafka Streams (KStream) and ksqlDB, continuously transform, enrich, join, and aggregate your Kafka events without writing complex application code, and build stream processing applications with a simple, lightweight SQL syntax (see the Kafka Streams sketch after this list).
- Connect to External Systems: Confluent Cloud offers pre-built fully managed and self-managed connectors that make it easy to instantly connect to popular data sources and sinks.
- Schema Registry: manage message schemas for topics in Confluent Cloud; producers and consumers register and validate schemas against the registry (see the Avro producer sketch after this list).
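To make the stream-history, offset, and partition points concrete, here is a minimal Java consumer sketch. The broker address, topic (orders), partition number, and group id (audit-replay) are hypothetical. It assigns a single partition (fine-grained subscription) and rewinds its own offset to replay retained events, without affecting any other subscriber.

```
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ReplayConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("group.id", "audit-replay");             // hypothetical group id
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Fine-grained consumption: assign one explicit partition rather than
            // subscribing to the whole topic.
            TopicPartition partition = new TopicPartition("orders", 0);  // hypothetical topic
            consumer.assign(List.of(partition));

            // Non-destructive replay: rewind this consumer's offset to the start
            // of the retained log; other subscribers keep their own positions.
            consumer.seekToBeginning(List.of(partition));

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```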
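For the real-time transformation point, here is a minimal Kafka Streams sketch, assuming string-serialized JSON values; the application id, broker address, and topic names (payments, payments-failed) are hypothetical. ksqlDB expresses the same kind of pipeline in SQL instead of Java.

```
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class PaymentsFilterApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "payments-filter");  // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> payments = builder.stream("payments");      // hypothetical input topic
        payments
            .filter((key, value) -> value.contains("\"status\":\"FAILED\"")) // keep failed payments only
            .to("payments-failed");                                          // hypothetical output topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```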
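For the schema registry point, here is a minimal Avro producer sketch, assuming the Confluent Avro serializer is on the classpath and a Schema Registry at localhost:8081; the schema, topic, and record contents are hypothetical. The serializer registers the schema (or validates it against the registered version) when the record is sent.

```
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroPublisher {
    private static final String USER_SCHEMA =
        "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
      + "{\"name\":\"name\",\"type\":\"string\"}]}";          // hypothetical schema

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");      // assumed broker address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        // The Confluent Avro serializer registers/validates schemas on send.
        props.put("value.serializer",
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081"); // assumed registry address

        Schema schema = new Schema.Parser().parse(USER_SCHEMA);
        GenericRecord user = new GenericData.Record(schema);
        user.put("name", "Ada");

        try (KafkaProducer<String, Object> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("users", "user-1", user)); // hypothetical topic
        }
    }
}
```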
Messaging technologies excel in the request for processing scenario, while event streaming technologies specialize in providing a pub/sub engine with stream history.
Please use this thread to share and contribute your use cases as well.