1. Overview
Apache Kafka is a popular distributed streaming platform for streaming records in real time. It allows developers to publish and subscribe to streams of records using Kafka topics. We can use Docker Compose to manage Kafka as a multi-container Docker application.
In this tutorial, we'll learn how to create a Kafka topic using Docker Compose. Additionally, we'll publish and consume messages from that topic using a single Kafka broker.
2. Setting up Kafka Using Docker Compose
The Docker Compose tool comes in handy for managing multi-service setups. A Kafka cluster requires both Zookeeper and Kafka brokers, so Docker Compose is very useful in such a case. To set up a Kafka cluster, we need to run two services: Zookeeper and a Kafka broker.
Let's look at the docker-compose.yml that sets up and runs the Kafka server:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    networks:
      - kafka-net
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://localhost:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9092,OUTSIDE://0.0.0.0:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "baeldung:1:1"
    networks:
      - kafka-net
networks:
  kafka-net:
    driver: bridge
In the above docker-compose.yml, we run two different services. The zookeeper service uses "wurstmeister/zookeeper" as its base image, and we expose the container's 2181 port to the host machine, which allows external access to Zookeeper. Similarly, we run the kafka service with "wurstmeister/kafka" as the base image. To allow external access to the Kafka server, we expose the 9092 port.
We provide important ENV variables to the kafka service. These ENV variables configure the listeners inside and outside the Docker network. The configuration uses the PLAINTEXT protocol, defines bind addresses for both listeners (0.0.0.0), specifies "INSIDE" as the inter-broker listener name, provides the connection string to Zookeeper (using the container name "zookeeper" and port 2181), and creates a topic named "baeldung" with a replication factor of 1 and a partition count of 1.
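Note that the compose file above publishes only port 9092, while the OUTSIDE listener is advertised on localhost:9093. If we also wanted clients on the host to connect through that listener, a minimal sketch of the extra mapping (an addition not present in the setup above) could look like this:
  kafka:
    ports:
      - "9092:9092"
      # hypothetical extra mapping so host clients can reach
      # the OUTSIDE listener at localhost:9093
      - "9093:9093"
For the rest of this tutorial, we only use the INSIDE listener from within the Docker network, so this mapping isn't required.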
3. Start the Kafka Cluster
So far, we've looked at defining the services that run Zookeeper and Kafka in docker-compose.yml. To demonstrate, let's look at the command that starts the Kafka cluster:
$ docker-compose up -d
[+] Running 2/2
 ✔ Container zookeeper  Running  0.0s
 ✔ Container kafka      Started
The above command starts the Zookeeper and Kafka containers in detached mode. This means we can interact with Kafka and Zookeeper while the terminal remains free for other tasks. The -d option ensures that the containers run in the background, which helps us manage the Kafka environment conveniently.
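Optionally, we can also tail the broker's logs to watch the startup finish; this is just one way to check, assuming the image's default logging:
$ docker-compose logs -f kafka
Once the broker reports that it has started (typically a line containing something like "started (kafka.server.KafkaServer)"), it's ready to accept clients.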
3.1. Verifying the Kafka Cluster
In order to verify that the Kafka cluster is running successfully, let's run the following command to see the running containers:
$ docker-compose ps
NAME        IMAGE                    COMMAND                    SERVICE     CREATED          STATUS          PORTS
kafka       wurstmeister/kafka       "start-kafka.sh"           kafka       27 seconds ago   Up 27 seconds   0.0.0.0:9092->9092/tcp
zookeeper   wurstmeister/zookeeper   "/bin/sh -c '/usr/sb..."   zookeeper   3 minutes ago    Up 3 minutes    22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp
In the above output, we can see that the Zookeeper and Kafka containers are up and running. This verification step confirms that the Kafka cluster is ready to handle topics and messages, so data from various sources can be ingested.
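Since KAFKA_CREATE_TOPICS in our docker-compose.yml auto-creates the baeldung topic, we can run one more quick sanity check by listing the broker's topics; we'd expect the auto-created topic to show up:
$ docker-compose exec kafka kafka-topics.sh --list --bootstrap-server kafka:9092
baeldung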
3.2. Creating a Kafka Topic
So far, the Kafka cluster is up and running. Now, let's create a Kafka topic:
$ docker-compose exec kafka kafka-topics.sh --create --topic baeldung_linux --partitions 1 --replication-factor 1 --bootstrap-server kafka:9092
In the above command, we create a new topic called baeldung_linux with 1 partition and 1 replica using the Kafka broker on port 9092. Using this topic, we can now stream data into Kafka, which allows us to exchange messages and events for a variety of applications.
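If we want to double-check the new topic's settings, the same kafka-topics.sh tool can describe it; the output should report one partition and a replication factor of 1:
$ docker-compose exec kafka kafka-topics.sh --describe --topic baeldung_linux --bootstrap-server kafka:9092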
3.3. Publishing and Consuming Messages
With the Kafka topic in place, let's publish and consume some messages. First, let's start a consumer by running the following command:
$ docker-compose exec kafka kafka-console-consumer.sh --topic baeldung_linux --from-beginning --bootstrap-server kafka:9092
Using the above command, we'll be able to consume all the messages sent over this topic. Additionally, we used --from-beginning to consume all the messages sent over the topic from the start. Let's also look at publishing data to this Kafka topic:
$ docker-compose exec kafka kafka-console-producer.sh --topic baeldung_linux --broker-list kafka:9092
Using the above command, we can write and send messages to the baeldung_linux topic. Sending messages to Kafka topics using Docker is simple and efficient.
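As a quick illustration, with the consumer from the previous step still running in another terminal, anything we type at the producer's > prompt should appear on the consumer side. The message text here is made up for the example:
> Hello, Baeldung!
The consumer terminal then prints:
Hello, Baeldung!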
4. Conclusion
In this article, we explored how to create a Kafka topic using Docker Compose. First, we set up a full-fledged Kafka cluster using two different services: we ran Zookeeper and a Kafka broker using Docker Compose. After that, we created a Kafka topic to publish and subscribe to messages using the Kafka cluster.
In short, we created a simple Kafka cluster using docker-compose services. This whole setup helps us distribute and exchange data in real time within our Kafka-based applications. Additionally, it helps us save on resources, since we don't have to run the Kafka cluster constantly.
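As a final housekeeping step (not part of the walkthrough above), we can stop and remove the containers and the network when we're done, which is what lets us avoid running the cluster all the time:
$ docker-compose down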