Kafka Administration - Basics


It's been a while since my last post. In the previous posts we saw the use cases of Apache Kafka.

In this post we will see some of the basic admin activities of Kafka.

So what are the activities of a Kafka admin?

Like any other middleware administration role, it covers the topics below:

  1. Installing Kafka
  2. Configuring and starting brokers
  3. Setting up clusters
  4. Creating Topics
  5. Configuring Kafka Connect
  6. Configuring Kafka Streams
  7. Troubleshooting
  8. Fine-tuning
  9. Upgrades/Patches
  10. Maintaining inter-connectivity with other Apache apps

We will see them one by one.

1. Installing Kafka:

Kafka installation is very simple. Go to the Apache Kafka official page and download the file.

Once you have downloaded it, un-tar the file (if you are unfamiliar with tar commands, please refer here for details):

tar -xzf kafka_2.11-2.0.0.tgz

Let's take this destination directory as the Kafka home directory.
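For example, the full sequence might look like this (the archive URL and version below are assumptions based on the 2.0.0 release; pick the current version from the official download page):

wget https://archive.apache.org/dist/kafka/2.0.0/kafka_2.11-2.0.0.tgz
tar -xzf kafka_2.11-2.0.0.tgz
cd kafka_2.11-2.0.0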

2. Configuring brokers:

A broker is nothing but a node that runs Kafka. Multiple brokers form a Kafka cluster.

Kafka uses Apache Zookeeper to store information about the brokers and consumers (we will see the uses of Zookeeper in Kafka in a different post).

So before starting a broker, we have to start Zookeeper.

If you already have a clustered Zookeeper environment installed, please use it for Kafka.

To install/administer Zookeeper clusters for high availability, please see the Zookeeper admin guide.

Otherwise you can use the single-node Zookeeper instance packaged with Kafka.
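The packaged config/zookeeper.properties is deliberately minimal; a rough sketch of the defaults is below (check your own copy, values may differ per release):

# the directory where the snapshot is stored
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections
maxClientCnxns=0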

bin/zookeeper-server-start.sh config/zookeeper.properties (run from Kafka home dir)
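To confirm Zookeeper is up, one quick check is the "ruok" four-letter-word command (this assumes nc/netcat is installed, Zookeeper is on the default port 2181, and that your Zookeeper version has four-letter-word commands enabled); it should reply "imok":

echo ruok | nc localhost 2181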

Once Zookeeper is started, you can start the broker with the below command,

bin/kafka-server-start.sh config/server.properties (from Kafka home dir)
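The start script runs in the foreground and logs to the console; recent Kafka releases also accept a -daemon flag to push it to the background (verify against your version), for example:

bin/kafka-server-start.sh -daemon config/server.properties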

By default the broker configuration file is named server.properties, and you can change it.

Note: the bin directory under the Kafka home directory contains all the scripts, and the config directory contains the configuration files.
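The settings you will touch most often in config/server.properties are sketched below; the values shown are typical defaults and may differ in your release:

# unique id for this broker (must be different for every broker)
broker.id=0
# listener address; commented out by default, in which case the broker listens on port 9092
listeners=PLAINTEXT://:9092
# directory where the partition data (commit log) is stored
log.dirs=/tmp/kafka-logs
# Zookeeper connection string (host:port)
zookeeper.connect=localhost:2181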

3. Setting up Clusters:

A single-node environment can be used for testing/dev purposes, but we usually need a clustered Kafka environment for high availability and fault tolerance.

Multiple Kafka brokers form a Kafka cluster. Zookeeper keeps the updated list of brokers and helps in cluster formation.

To create a cluster, we need to run multiple broker instances on the same or different nodes; then, when creating a topic, we define how it spans the cluster by registering the topic with its replication details in Zookeeper.

The generic commands from the official site are as below.

Create multiple copies of server.properties in the config folder (or, better, copy the entire Kafka folder and rename the server.properties file as required):

cp config/server.properties config/server-1.properties

cp config/server.properties config/server-2.properties

Below are the values that need to be changed in each file (a sample set of values is sketched after the list),

broker.id=<1, 2, 3, ... depending upon the number of brokers> (every broker registered in Zookeeper needs to have a unique id)

listeners=PLAINTEXT://:<port> (if you are using the same machine, please give different ports)

log.dirs=<path> (if you are going to use a shared path, please provide different log directories like kafka-logs-1, kafka-logs-2)
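As a sketch, the two extra broker files might end up with values like these (ports and paths are illustrative, borrowed from the official quickstart):

# config/server-1.properties
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1

# config/server-2.properties
broker.id=2
listeners=PLAINTEXT://:9094
log.dirs=/tmp/kafka-logs-2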

Start the brokers using the commands,

bin/kafka-server-start.sh config/server-1.properties &

bin/kafka-server-start.sh config/server-2.properties &

To know more about the syntax, please refer to the official page.
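To verify that all the brokers have joined the cluster, you can list the registered broker ids in Zookeeper using the zookeeper-shell.sh script shipped with Kafka (the address and ids below are illustrative):

bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids

The last line of the output should list all broker ids, e.g. [0, 1, 2].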

How does the cluster work?

Zookeeper manages the cluster and coordinates the brokers. It keeps track of all brokers and their availability, and also starts the election process in case of a broker failure, to elect a new leader for each affected topic partition.

4. Creating topics:

Creating a topic in Kafka is very easy. The generic command to create a Kafka topic for a single broker is,

bin/kafka-topics.sh --create --zookeeper <zookeeper host:port> --replication-factor 1 --partitions 1 --topic <topic name> (from Kafka home dir)

kafka-topics.sh is the standard script that comes with the Kafka installation to administer topics.

Parameter definitions (a complete example follows the list):

--create - used to create a topic

--zookeeper - used to give the Zookeeper listener details

--replication-factor - defines how many brokers (replicas) we are going to use for this topic

--partitions - used to define the number of partitions we are going to create

--topic - used to input the topic name
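Putting the parameters together, a concrete single-broker example could look like this (the Zookeeper address and topic name are placeholders of my choosing):

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test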

For clustered environments, the command is as below,

bin/kafka-topics.sh --create --zookeeper <zookeeper host:port> --replication-factor 3 --partitions 1 --topic <topic name> (from Kafka home dir)

In the above command the replication factor is 3, which means the topic's partition will be replicated across 3 brokers in the cluster.
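For example, against the three-broker setup from section 3 (again with an illustrative Zookeeper address and topic name):

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic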

You can check the list of topics on a broker using,

bin/kafka-topics.sh --list --zookeeper <zookeeper host:port> (from Kafka home dir)

Also, to check the details of a topic,

bin/kafka-topics.sh --describe --zookeeper <zookeeper host:port> --topic <topic name>
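For a topic created with replication factor 3, the describe output looks roughly like the example below (topic name and broker ids are illustrative). Leader is the broker currently serving reads and writes for the partition, Replicas is the full list of brokers holding a copy, and Isr is the subset of replicas that are in sync:

Topic:my-replicated-topic   PartitionCount:1   ReplicationFactor:3   Configs:
    Topic: my-replicated-topic   Partition: 0   Leader: 1   Replicas: 1,2,0   Isr: 1,2,0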

We will see the connectors in more detail in the next post.

Please write your suggestions to srinivasanbtechit@gmail.com or comment on the blog.


