Securing Apache Kafka using trustStore and SASL

Satish S. Aherkar, DevOps Evangelist.

Apache Kafka comes with a lot of security features out of the box (at least since version 0.9). But one feature is missing if you deal with sensitive, mission-critical data: encryption of the data itself at rest.


Talking about Encryption, Kafka Security mainly has three components:

  • Encryption of data in-flight using SSL/TLS: This component allows your data to be encrypted on the wire, both between producers and Kafka and between Kafka and consumers. This is the most common pattern we see on the web.
  • Authentication using SSL or SASL: This component verifies identity. It allows your producers and consumers to authenticate to the Kafka cluster, and it is also the way your clients assert an identity that can later be used for authorization.
  • Authorization using ACLs (Access Control Lists): After authentication, your Kafka brokers can check requests against ACLs to determine whether or not a particular client is authorized to write or read.
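As a sketch of the third component: ACLs are managed with the kafka-acls.sh tool that ships with Kafka. Assuming a broker with an authorizer enabled, a ZooKeeper on localhost, and a client that authenticated as the (illustrative) principal User:client, a rule allowing that client to read and write one topic could look like this:

```shell
# Allow the principal "User:client" to read from and write to the topic "test".
# (Illustrative: assumes ACLs are enabled on the broker and ZooKeeper runs locally.)
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:client \
  --operation Read --operation Write --topic test
```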


Encryption mitigates the problem of the man-in-the-middle (MITM) attack. Hopping from machine to machine, your packets travel through your network while being routed to your Kafka cluster. That's where encryption comes in.


In general, there are two ways to authenticate your Kafka clients to brokers: SSL and SASL.

SSL Authentication

This authentication leverages SSL's client-authentication capability: you issue certificates to your clients, signed by a certificate authority, which allows your Kafka brokers to verify the client's identity.

SASL Authentication

SASL stands for Simple Authentication and Security Layer and is popular with Big Data systems. With SASL, the authentication mechanism is separated from the Kafka protocol itself.

SASL, in its many forms, is supported by Kafka. The following mechanisms are available: SASL/PLAIN, SASL/SCRAM (SHA-256 and SHA-512), SASL/GSSAPI (Kerberos), and SASL/OAUTHBEARER.
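On the broker side, the accepted mechanisms are configured in server.properties, and several can be enabled at once. A sketch (the chosen mechanisms are illustrative):

```properties
# Broker-side SASL settings (illustrative): accept both Kerberos and SCRAM clients.
sasl.enabled.mechanisms=GSSAPI,SCRAM-SHA-512
# Mechanism used for broker-to-broker traffic.
sasl.mechanism.inter.broker.protocol=GSSAPI
```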


– Java KeyStore.

A Java KeyStore is used to store, for each broker in the cluster, its certificate and its public/private key pair.

Zookeeper SSL settings:
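A sketch of what such settings can look like; this assumes ZooKeeper 3.5+ with its Netty connection factory, and all paths and passwords are placeholders:

```properties
# ZooKeeper TLS settings (illustrative; requires ZooKeeper 3.5+).
secureClientPort=2182
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
ssl.keyStore.location=/path/to/zookeeper.keystore.jks
ssl.keyStore.password=changeit
ssl.trustStore.location=/path/to/zookeeper.truststore.jks
ssl.trustStore.password=changeit
```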





Broker SSL settings:
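A sketch of broker-side SSL settings in server.properties; paths, passwords, and host names are placeholders:

```properties
# Broker SSL settings (illustrative values).
listeners=SSL://localhost:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit
# Reject clients that do not present a valid certificate.
ssl.client.auth=required
# Allowed TLS versions; can be narrowed to only the latest.
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
```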










In the broker SSL configuration it is important to set ssl.client.auth=required, so that connections from clients without SSL parameters are refused, and to enforce SSL communication between the brokers in the cluster.

For security protocols we allow TLSv1, TLSv1.1 and TLSv1.2 as options; alternatively, this can be restricted to only the latest version to avoid security flaws in the older protocol versions.


1) Set up a KDC using Apache Kerby

The KDC is a simple JUnit test that is available here. To run it, just comment out the “org.junit.Ignore” annotation on the test method. It uses Apache Kerby to define the principals used throughout this article: “zookeeper/localhost”, “kafka/localhost” and “client”.



Keytabs are created in the “target” folder. Kerby is configured to use a random port to launch the KDC each time, and it will create a “krb5.conf” file containing the random port number in the target directory.

2) Configure Apache Zookeeper

Edit ‘config/’ and add the following properties:
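A sketch of the ZooKeeper properties that enable SASL/Kerberos authentication (standard ZooKeeper settings; the renew interval is illustrative):

```properties
# Enable SASL (Kerberos) authentication in ZooKeeper.
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
# Renew the JAAS login every hour (milliseconds).
jaasLoginRenew=3600000
```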




Now create ‘config/zookeeper.jaas’ with the following content:

Server {
  com.sun.security.auth.module.Krb5LoginModule required
  refreshKrb5Config=true
  useKeyTab=true
  keyTab="/"
  storeKey=true
  principal="zookeeper/localhost";
};


Before launching Zookeeper, we need to point to the JAAS configuration file above and also to the krb5.conf file generated in the Kerby test-case above. This can be done by setting the “KAFKA_OPTS” environment variable with the JVM arguments:
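A sketch of those JVM arguments; the krb5.conf path depends on where the Kerby test wrote it, so it is a placeholder here:

```shell
# Point the JVM at the JAAS file and at the generated krb5.conf (placeholder path).
export KAFKA_OPTS="-Djava.security.auth.login.config=config/zookeeper.jaas -Djava.security.krb5.conf=/path/to/target/krb5.conf"
```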

Now start Zookeeper via:

bin/ config/
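Assuming the standard script and file names from an Apache Kafka distribution, that command would read:

```shell
bin/zookeeper-server-start.sh config/zookeeper.properties
```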

3) Configure Apache Kafka broker

Create ‘config/kafka.jaas’ with the content:

KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  refreshKrb5Config=true
  useKeyTab=true
  keyTab="/"
  storeKey=true
  principal="kafka/localhost";
};

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  refreshKrb5Config=true
  useKeyTab=true
  keyTab="/"
  storeKey=true
  principal="kafka/localhost";
};


The “Client” section is used to talk to Zookeeper. Now edit  ‘config/’ and add the following properties:
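A sketch of the broker properties for SASL/Kerberos; the host name is illustrative:

```properties
# Broker SASL settings (illustrative).
listeners=SASL_PLAINTEXT://localhost:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
# Must match the primary of the broker principal ("kafka/localhost").
sasl.kerberos.service.name=kafka
```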



We will just concentrate on using SASL for authentication, and hence we are using “SASL_PLAINTEXT” as the protocol. For “SASL_SSL” please follow the keystore generation as outlined in the following article. Again, we need to set the “KAFKA_OPTS” environment variable with the JVM arguments:
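A sketch of the broker's JVM arguments, analogous to the Zookeeper ones; the krb5.conf path is a placeholder:

```shell
# Point the broker JVM at its JAAS file and the generated krb5.conf (placeholder path).
export KAFKA_OPTS="-Djava.security.auth.login.config=config/kafka.jaas -Djava.security.krb5.conf=/path/to/target/krb5.conf"
```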

Now we can start the server and create a topic as follows:

bin/ config/

bin/ --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
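Assuming the standard script and file names from an Apache Kafka distribution, those two commands would read:

```shell
bin/kafka-server-start.sh config/server.properties

bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic test
```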

4) Configure Apache Kafka producers/consumers

To make the test-case simpler we added a single principal “client” in the KDC for both the producer and consumer. Create a file called “config/client.jaas” with the content:

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  refreshKrb5Config=true
  useKeyTab=true
  keyTab="/"
  storeKey=true
  principal="client";
};


Edit *both* ‘config/’ and ‘config/’ and add:
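A sketch of the client-side properties; the same values go into both files:

```properties
# Client SASL settings (illustrative).
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
# Must match the primary of the broker principal.
sasl.kerberos.service.name=kafka
```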



Now set the “KAFKA_OPTS” environment variable with the JVM arguments:
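A sketch of the client-side JVM arguments; as before, the krb5.conf path is a placeholder:

```shell
# Point the client JVMs at the shared client JAAS file and krb5.conf (placeholder path).
export KAFKA_OPTS="-Djava.security.auth.login.config=config/client.jaas -Djava.security.krb5.conf=/path/to/target/krb5.conf"
```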

We should now be all set. Start the producer and consumer via:

bin/ --broker-list localhost:9092 --topic test --producer.config config/

bin/ --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/ --new-consumer
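Assuming the standard console client scripts and properties file names from an Apache Kafka distribution, those commands would read:

```shell
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test \
  --producer.config config/producer.properties

# (--new-consumer is only required on older Kafka client versions.)
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test \
  --from-beginning --consumer.config config/consumer.properties
```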

