Home

Kafka security

You can improve Kafka cluster security by having Kafka authenticate client connections to brokers using either SSL or SASL.

SSL + Kerberos for Kafka clients

Prerequisite: The Kafka brokers are configured with SSL and Kerberos. Refer to your Hadoop provider's documentation for configuring SSL and Kerberos for Kafka brokers.

SSL + Kerberos is supported by the new Kafka consumers and producers. The configuration is the same for consumers and producers. Replace the example values with values specific to your environment.

For a single Kafka cluster
  1. Create a file named consumerConfig.properties and add the following properties, using the SSL and Kerberos settings from your environment.

    # SASL_SSL (SSL + Kerberos):
    ssl.protocol = TLSv1
    sasl.mechanism = GSSAPI
    security.protocol = SASL_SSL
    sasl.kerberos.service.name = kafka
    ssl.truststore.location = /usr/java/jdk1.7.0_67-cloudera/jre/lib/security/jssecacerts1
    ssl.truststore.password = changeit
    ssl.truststore.type = JKS
    ssl.keystore.location = /opt/cloudera/security/jks/server.keystore.jks
    ssl.keystore.password = password
    ssl.keystore.type = JKS
    ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
    sasl.jaas.config = com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    keyTab="/etc/keytabs/kafka.keytab" \
    principal="kafka/edge-1.uddev.unraveldata.com@UDDEV.UNRAVELDATA.COM";

    # Alternatively, for SASL_PLAINTEXT (Kerberos without SSL):
    sasl.mechanism = GSSAPI
    security.protocol = SASL_PLAINTEXT
    sasl.kerberos.service.name = kafka
    sasl.jaas.config = com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    keyTab="/etc/keytabs/kafka.keytab" \
    principal="kafka/edge-1.uddev.unraveldata.com@UDDEV.UNRAVELDATA.COM";
  2. Copy/move consumerConfig.properties to /usr/local/unravel/etc.

  3. Edit /usr/local/unravel/unravel.properties. Search for com.unraveldata.ext.kafka.clusters.

    com.unraveldata.ext.kafka.clusters=ClusterName

    Add the following property using the ClusterName from above.

    com.unraveldata.ext.kafka.ClusterName.consumer.config=/usr/local/unravel/etc/consumerConfig.properties
  4. Restart the Kafka monitor daemon unravel_km.

    service unravel_km restart
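Before restarting unravel_km, it can help to sanity-check the properties file for typos. The following is an illustrative sketch (not part of Unravel) that parses Java-style `key = value` lines, honors trailing-backslash line continuations such as those in `sasl.jaas.config`, and reports any missing security keys; the required-key list is an assumption based on the SASL_SSL example above.

```python
# Hypothetical helper, not Unravel tooling: sanity-check a Kafka client
# properties file such as consumerConfig.properties before restarting.

def parse_properties(text):
    """Parse Java-style properties text into a dict, joining lines
    that end with a backslash continuation."""
    props = {}
    pending = ""
    for raw in text.splitlines():
        line = pending + raw.strip()
        pending = ""
        if not line or line.startswith("#"):
            continue
        if line.endswith("\\"):   # continuation: keep and join with next line
            pending = line[:-1]
            continue
        key, sep, value = line.partition("=")
        if sep:
            props[key.strip()] = value.strip()
    return props

# Assumed minimum for a SASL_SSL client config (adjust for your setup).
REQUIRED_SASL_SSL = [
    "security.protocol", "sasl.mechanism", "sasl.kerberos.service.name",
    "ssl.truststore.location", "sasl.jaas.config",
]

def missing_keys(props):
    """Return required keys that are absent from the parsed config."""
    return [k for k in REQUIRED_SASL_SSL if k not in props]
```

For example, reading /usr/local/unravel/etc/consumerConfig.properties and checking that `missing_keys(...)` returns an empty list catches an accidentally dropped property before the daemon restart.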
For multiple Kafka clusters

Warning

Each cluster must have a separate consumerConfig.properties file.

  1. Open /usr/local/unravel/unravel.properties. Search for com.unraveldata.ext.kafka.clusters.

    The property is defined as a comma-separated list. If there is only one cluster name, follow the single-cluster instructions above.

    com.unraveldata.ext.kafka.clusters=ClusterName1,ClusterName2,ClusterName3
  2. Create a file named consumerConfigClusterName.properties for each cluster.

    ssl.protocol = TLSv1
    sasl.mechanism = GSSAPI
    security.protocol = SASL_SSL
    sasl.kerberos.service.name = kafka
    ssl.truststore.location = /usr/java/jdk1.7.0_67-cloudera/jre/lib/security/jssecacerts1
    ssl.truststore.password = changeit
    ssl.truststore.type = JKS
    ssl.keystore.location = /opt/cloudera/security/jks/server.keystore.jks
    ssl.keystore.password = password
    ssl.keystore.type = JKS
    ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
    sasl.jaas.config = com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    keyTab="/etc/keytabs/kafka.keytab" \
    principal="kafka/edge-1.uddev.unraveldata.com@UDDEV.UNRAVELDATA.COM";
  3. Copy/move each file to /usr/local/unravel/etc.

  4. Edit /usr/local/unravel/unravel.properties. For each cluster add the following property.

    com.unraveldata.ext.kafka.ClusterName.consumer.config=/usr/local/unravel/etc/consumerConfigClusterName.properties
  5. Restart the Kafka monitor daemon unravel_km.

    service unravel_km restart
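Step 4 above is repetitive when there are many clusters. As an illustrative sketch (not Unravel tooling), the per-cluster property lines can be generated from the cluster list and pasted into unravel.properties; the path and naming convention below follow the steps above.

```python
# Hypothetical helper: emit one consumer.config property line per cluster,
# matching the consumerConfig<ClusterName>.properties naming convention.

def consumer_config_entries(clusters):
    """Return the unravel.properties lines for the given cluster names."""
    prefix = "com.unraveldata.ext.kafka"
    etc = "/usr/local/unravel/etc"
    return [
        f"{prefix}.{c}.consumer.config={etc}/consumerConfig{c}.properties"
        for c in clusters
    ]

# Example: the cluster list from com.unraveldata.ext.kafka.clusters
for line in consumer_config_entries(["ClusterName1", "ClusterName2"]):
    print(line)
```

Each generated line pairs one cluster name with its own properties file, which is why the warning above requires a separate file per cluster.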
Kafka authorizations

Unravel consumes messages from the topic __consumer_offsets using the consumer group UnravelOffsetConsumer.

Sentry authorization

The following privileges must be granted using Sentry:

HOST=*->CONSUMERGROUP=UnravelOffsetConsumer->action=read
HOST=*->CONSUMERGROUP=UnravelOffsetConsumer->action=write
HOST=*->CONSUMERGROUP=UnravelOffsetConsumer->action=describe
HOST=*->TOPIC=__consumer_offsets->action=read
HOST=*->TOPIC=__consumer_offsets->action=write
HOST=*->TOPIC=__consumer_offsets->action=describe
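The six privilege strings follow a regular host/resource/action pattern. As an illustration only (the grants themselves are still made through Sentry), they can be generated rather than typed by hand:

```python
# Illustrative sketch: build the Sentry privilege strings listed above
# from the two resources and three actions Unravel needs.

RESOURCES = [
    "CONSUMERGROUP=UnravelOffsetConsumer",  # Unravel's consumer group
    "TOPIC=__consumer_offsets",             # internal offsets topic
]
ACTIONS = ["read", "write", "describe"]

def sentry_privileges():
    """Return all resource/action combinations as privilege strings."""
    return [
        f"HOST=*->{resource}->action={action}"
        for resource in RESOURCES
        for action in ACTIONS
    ]
```

Generating the list this way makes it easy to confirm that every resource has all three actions before granting them.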

For further details see Using Kafka with Sentry Authorization in the Cloudera Distribution of Apache Kafka documentation.

Kafka with Ranger authorization

The following privileges must be granted using Ranger for the topic __consumer_offsets:

Publish
Consume
Describe

For further details, see Security - Create a Kafka Policy in the HDP Security Guide.

References

For further information, see the Apache Kafka documentation, Chapter 7, Security.