Securing Apache Kafka using trustStore and SASL

Satish S. Aherkar, DevOps Evangelist.

Apache Kafka comes with a lot of security features out of the box (at least since version 0.9). But one feature is still missing if you deal with sensitive, mission-critical data: encryption of the data itself — messages are only encrypted in transit, not at rest.

 

Talking about encryption, Kafka security has three main components:

  • Encryption of data in-flight using SSL/TLS: This component encrypts your data on the wire, between producers and Kafka and between consumers and Kafka. This is the most common pattern seen on the web.
  • Authentication using SSL or SASL: This component verifies identity. It allows your producers and consumers to authenticate to the Kafka cluster. It is also the way for your clients to establish an identity that can later be used for authorization.
  • Authorization using ACLs (Access Control Lists): After authentication, your Kafka brokers can check clients against ACLs to determine whether or not a particular client is authorized to write or read a given topic.
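As an illustration, the three layers map onto broker settings roughly as follows. This is a hedged sketch, not a complete configuration; note that the authorizer class name shown is the pre-2.4 one (newer releases use kafka.security.authorizer.AclAuthorizer):

```properties
# Encryption + authentication in flight: expose an SSL listener.
listeners=SSL://broker1:9093
# Require clients to present certificates (SSL authentication).
ssl.client.auth=required
# Authorization: check authenticated principals against ACLs
# (class name for Kafka < 2.4).
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# Deny access when no ACL matches, unless the principal is a super user.
allow.everyone.if.no.acl.found=false
```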

Encryption:

Encryption eliminates the problem of the man-in-the-middle (MITM) attack. Your packets hop from machine to machine, travelling through your network while being routed to your Kafka cluster. That is where encryption comes in.

Authentication:

In general, there are two ways to authenticate your Kafka clients to brokers: SSL and SASL.

SSL Authentication

This authentication leverages a capability of SSL: issuing certificates to your clients. These are signed by a certificate authority, which allows your Kafka brokers to verify the identity of the clients.

SASL Authentication

SASL stands for Simple Authentication and Security Layer and is popular with Big Data systems. The SASL authentication mechanism is separated from the Kafka protocol itself.

Kafka supports SASL in several forms. The following mechanisms are available: SASL/PLAIN, SASL/SCRAM, SASL/GSSAPI (Kerberos), and SASL/OAUTHBEARER.

Security

– Java KeyStore

A Java KeyStore is used to store, for each broker in the cluster, its certificate and its private/public key pair.

Zookeeper SSL settings:

client.secure=true
ssl.keyStore.location=/path/to/ssl/server.keystore.jks
ssl.keyStore.password=<<password>>
ssl.trustStore.location=/path/to/ssl/server.truststore.jks
ssl.trustStore.password=<<password>>

Broker SSL settings:

ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.truststore.location=/path/to/ssl/server.truststore.jks
ssl.truststore.password=<<password>>
ssl.keystore.location=/path/to/ssl/server.keystore.jks
ssl.keystore.password=<<password>>
ssl.key.password=<<password>>
security.inter.broker.protocol=SSL

In the broker SSL configuration it is important to set ssl.client.auth=required and security.inter.broker.protocol=SSL, so that connections from clients without SSL parameters are rejected and SSL communication is enforced between the brokers in the cluster.

For the enabled protocols we allow TLSv1, TLSv1.1 and TLSv1.2 as options; alternatively, this can be restricted to only the latest version to avoid security flaws in older protocol versions.
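The broker settings above have a client-side counterpart. A hedged sketch of the matching producer/consumer configuration (paths and passwords are placeholders):

```properties
security.protocol=SSL
ssl.truststore.location=/path/to/ssl/client.truststore.jks
ssl.truststore.password=<<password>>
# The keystore entries are only needed because the broker
# sets ssl.client.auth=required (mutual TLS):
ssl.keystore.location=/path/to/ssl/client.keystore.jks
ssl.keystore.password=<<password>>
ssl.key.password=<<password>>
```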

— SASL

1) Set up a KDC using Apache Kerby

The KDC is a simple JUnit test that is available here. To run it, just comment out the “org.junit.Ignore” annotation on the test method. It uses Apache Kerby to define the following principals:

zookeeper/localhost@kafka.apache.org

kafka/localhost@kafka.apache.org

client@kafka.apache.org

Keytabs are created in the “target” folder. Kerby is configured to launch the KDC on a random port each time, and it will create a “krb5.conf” file containing the random port number in the target directory.

2) Configure Apache Zookeeper

Edit ‘config/zookeeper.properties’ and add the following properties:

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider

requireClientAuthScheme=sasl

jaasLoginRenew=3600000

Now create ‘config/zookeeper.jaas’ with the following content:

Server {

       com.sun.security.auth.module.Krb5LoginModule required refreshKrb5Config=true useKeyTab=true keyTab="/path.to.kerby.project/target/zookeeper.keytab" storeKey=true principal="zookeeper/localhost";

};

Before launching Zookeeper, we need to point to the JAAS configuration file above and also to the krb5.conf file generated by the Kerby test-case. This can be done by setting the “KAFKA_OPTS” environment variable with the JVM arguments:

-Djava.security.auth.login.config=/path.to.zookeeper/config/zookeeper.jaas

-Djava.security.krb5.conf=/path.to.kerby.project/target/krb5.conf
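In a shell, this just means exporting KAFKA_OPTS before starting ZooKeeper (using the same placeholder paths as above):

```shell
# Point the JVM at the JAAS configuration and the generated krb5.conf.
export KAFKA_OPTS="-Djava.security.auth.login.config=/path.to.zookeeper/config/zookeeper.jaas -Djava.security.krb5.conf=/path.to.kerby.project/target/krb5.conf"
```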

Now start Zookeeper via:

bin/zookeeper-server-start.sh config/zookeeper.properties

3) Configure Apache Kafka broker

Create ‘config/kafka.jaas’ with the content:

KafkaServer {

           com.sun.security.auth.module.Krb5LoginModule required refreshKrb5Config=true useKeyTab=true keyTab="/path.to.kerby.project/target/kafka.keytab" storeKey=true principal="kafka/localhost";

};

Client {

       com.sun.security.auth.module.Krb5LoginModule required refreshKrb5Config=true useKeyTab=true keyTab="/path.to.kerby.project/target/kafka.keytab" storeKey=true principal="kafka/localhost";

};

The “Client” section is used to talk to Zookeeper. Now edit  ‘config/server.properties’ and add the following properties:

listeners=SASL_PLAINTEXT://localhost:9092

security.inter.broker.protocol=SASL_PLAINTEXT

sasl.mechanism.inter.broker.protocol=GSSAPI

sasl.enabled.mechanisms=GSSAPI

sasl.kerberos.service.name=kafka

We will just concentrate on using SASL for authentication, and hence we are using “SASL_PLAINTEXT” as the protocol. For “SASL_SSL”, the keystores must be generated as outlined in the SSL section above. Again, we need to set the “KAFKA_OPTS” environment variable with the JVM arguments:

-Djava.security.auth.login.config=/path.to.kafka/config/kafka.jaas

-Djava.security.krb5.conf=/path.to.kerby.project/target/krb5.conf
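If you instead wanted “SASL_SSL”, the broker settings would roughly combine the SASL properties with the SSL keystore/truststore settings from the SSL section. A sketch, not a tested configuration:

```properties
listeners=SASL_SSL://localhost:9093
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
ssl.keystore.location=/path/to/ssl/server.keystore.jks
ssl.keystore.password=<<password>>
ssl.key.password=<<password>>
ssl.truststore.location=/path/to/ssl/server.truststore.jks
ssl.truststore.password=<<password>>
```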

Now we can start the server and create a topic as follows:

bin/kafka-server-start.sh config/server.properties

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

4) Configure Apache Kafka producers/consumers

To make the test-case simpler we added a single principal “client” in the KDC for both the producer and consumer. Create a file called “config/client.jaas” with the content:

KafkaClient {

       com.sun.security.auth.module.Krb5LoginModule required refreshKrb5Config=true useKeyTab=true keyTab="/path.to.kerby.project/target/client.keytab" storeKey=true principal="client";

};

Edit *both* ‘config/producer.properties’ and ‘config/consumer.properties’ and add:

security.protocol=SASL_PLAINTEXT

sasl.mechanism=GSSAPI

sasl.kerberos.service.name=kafka
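As an aside, newer Kafka clients (0.10.2 and later) can embed the JAAS section directly in the client properties via sasl.jaas.config, avoiding the separate file and the KAFKA_OPTS setting. A sketch using the same keytab and principal as above:

```properties
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    keyTab="/path.to.kerby.project/target/client.keytab" \
    principal="client";
```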

Now set the “KAFKA_OPTS” environment variable with the JVM arguments:

-Djava.security.auth.login.config=/path.to.kafka/config/client.jaas

-Djava.security.krb5.conf=/path.to.kerby.project/target/krb5.conf

We should now be all set. Start the producer and consumer via:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer

 
