Example 1 — Basic Producer and Consumer


The simplest Kafka flow: one producer sends messages to a topic, and one consumer reads them back.

Producer → test-topic (inside Broker) → Consumer

Step 1 — Create the topic

bin/kafka-topics.sh --create \
  --topic test-topic \
  --bootstrap-server localhost:9092 \
  --partitions 1 \
  --replication-factor 1

Step 2 — Verify the topic was created

bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic test-topic --describe

Step 3 — Start Producer (Terminal 1)

bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-topic

Type any message and hit Enter. Each line is one message.

Step 4 — Start Consumer (Terminal 2)

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --from-beginning

Whatever you type in Terminal 1 appears instantly in Terminal 2. That is Kafka working end to end.

What --from-beginning does — without it, the consumer reads only new messages that arrive after it starts. With it, the consumer reads everything from offset 0 — all the old messages too.
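That start-position rule can be sketched in a few lines of Python. This is a toy model, not real Kafka: the topic is just an append-only list, and an offset is simply an index into it.

```python
# Toy model of a consumer's start position (not real Kafka): the topic is
# an append-only list, and an offset is simply an index into that list.
log = ["msg-0", "msg-1", "msg-2"]  # messages already in the topic

def consume_from(log, from_beginning):
    # --from-beginning => start at offset 0; otherwise start at the end,
    # so only messages appended after the consumer starts are seen.
    start = 0 if from_beginning else len(log)
    return log[start:]

print(consume_from(log, True))   # all three existing messages
print(consume_from(log, False))  # empty until new messages arrive
```

Real Kafka consumers additionally remember their last committed offset per group, so --from-beginning only matters the first time a group reads a topic.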


Example 2 — Partitions + Keys + Consumer Group


This shows how Kafka distributes messages across partitions using keys, and how a consumer group reads them in parallel.

Producer (key A) → Partition 0 → Consumer 1 (same group)
Producer (key B) → Partition 1 → Consumer 2 (same group)
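The key-to-partition routing above can be sketched in Python. Real Kafka's default partitioner hashes the key bytes with murmur2; the md5 stand-in below is only an assumption used to illustrate the invariant that matters — the same key always lands on the same partition.

```python
import hashlib

NUM_PARTITIONS = 2  # matches the two-partition setup above

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    # Hash the key bytes, then take the result modulo the partition count.
    # (Kafka's real default partitioner uses murmur2, not md5 — this is a
    # stand-in to show the idea.)
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Same key -> same partition, every time, which preserves per-key ordering.
for key in ["A", "B", "A", "B"]:
    print(f"key {key} -> partition {partition_for(key)}")
```

Because the mapping is deterministic, all messages with key A stay in order on one partition; and since Kafka assigns each partition to exactly one consumer within a group, the two consumers above split the work without ever seeing each other's keys.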