A demonstration of how to configure Kafka authentication and authorization using OAuth2.
Product | Version |
---|---|
OpenShift | 4.15+ |
AMQ Streams Operator | 2.8 |
Keycloak Operator | 26.0 |
Quarkus | 2.7.5.Final |
- Access to an OpenShift 4.15+ cluster
- Maven
- Java 11 JDK
- Ansible >= 2.9
- Python modules for kubernetes and openshift
- oc client
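Before running anything, you can sanity-check that the prerequisites are available on your workstation. A minimal check (the exact version output will vary):

```bash
# Verify the required tooling is installed
oc version --client
ansible --version   # should report 2.9 or newer
java -version       # should report a Java 11 JDK
mvn -v
python3 -c "import kubernetes, openshift; print('python clients OK')"
```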
Parameter | Example Value | Definition |
---|---|---|
token | sha256~vFanQbthlPKfsaldJT3bdLXIyEkd7ypO_XPygY1DNtQ | access token for a user with cluster-admin privileges |
server | | OpenShift cluster API URL |
The first step of the installation is to run the playbook, which installs AMQ Streams and Keycloak and applies the environment configuration the applications need. To run the playbook, just export `token` and `server` as environment variables. You may take these values from the OpenShift *Copy login command* page.
```bash
export token=REPLACE_ME
export server=REPLACE_ME
ansible-playbook -e token=${token} -e server=${server} playbook.yml
```
Alternatively, you can set the `token` and `server` values directly from `oc`:
```bash
ansible-playbook -e token=`oc whoami -t` -e server=`oc whoami --show-server` ansible/playbook.yml
```
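If you want to follow the installation progress, one option is to watch the pods coming up in the demo namespace (the demo resources used below live in `kafka-keycloak`):

```bash
# Watch until the Kafka and Keycloak pods reach Running
oc get pods -n kafka-keycloak -w
```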
This demo is a very simple producer/consumer example. The *producer* app exposes a REST API that lets you request a quote; the results are streamed back through an HTTP GET call on the same API. The *processor* app takes those requests from a topic in the background and provides a quote in another topic.
To demonstrate this, open two terminals. In the first one, follow the streamed responses:
```bash
export ROUTE=$(oc get route kafka-producer -o jsonpath='{.spec.host}' -n kafka-keycloak)
curl http://$ROUTE/quotes
```
In the second one, you may send multiple requests like this:
```bash
export ROUTE=$(oc get route kafka-producer -o jsonpath='{.spec.host}' -n kafka-keycloak)
curl -X POST http://$ROUTE/quotes/request
```
You can also test access directly with the Kafka CLI tools. First, launch a client pod:

```bash
oc run -ti --restart=Never --image=registry.redhat.io/amq7/amq-streams-kafka-30-rhel8:2.0.1 kafka-client -n kafka-keycloak -- /bin/sh
```

Then build a truststore containing the Kafka and Keycloak CA certificates and copy it into the pod. Run the following on your workstation terminal:
```bash
PROJECT=kafka-keycloak
CLUSTER=my-cluster
OUTPUT_DIR=/tmp/kafka-keycloak-manual
TRUSTSTORE_PASSWORD=storepass
SSO_HOST=            # your Keycloak (SSO) route hostname
SSO_HOST_PORT=$SSO_HOST:443

# Extract and add Kafka CA to truststore
oc extract -n $PROJECT secret/$CLUSTER-cluster-ca-cert --keys ca.crt --to $OUTPUT_DIR/
keytool -keystore $OUTPUT_DIR/truststore.p12 -storetype pkcs12 -alias my-cluster-kafka -storepass $TRUSTSTORE_PASSWORD -import -file $OUTPUT_DIR/ca.crt -noprompt

# Extract and add Keycloak CA to truststore
echo "Q" | openssl s_client -showcerts -connect $SSO_HOST_PORT 2>/dev/null | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/ { print $0 }' > $OUTPUT_DIR/sso.crt
keytool -keystore $OUTPUT_DIR/truststore.p12 -storetype pkcs12 -alias sso -storepass $TRUSTSTORE_PASSWORD -import -file $OUTPUT_DIR/sso.crt -noprompt

# Copy the truststore into the client pod
oc rsync $OUTPUT_DIR/ kafka-client:/tmp
```
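To confirm that both CA certificates made it into the truststore, you can list its entries; you should see the `my-cluster-kafka` and `sso` aliases:

```bash
# List truststore entries; expect the my-cluster-kafka and sso aliases
keytool -list -keystore $OUTPUT_DIR/truststore.p12 -storetype pkcs12 -storepass $TRUSTSTORE_PASSWORD
```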
Back in the `kafka-client` pod, you need to authenticate on Keycloak before trying to access Kafka:
```bash
SSO_HOST=            # your Keycloak (SSO) route hostname
SSO_HOST_PORT=$SSO_HOST:443
USERNAME=alice
PASSWORD=alice-password
STOREPASS=storepass

GRANT_RESPONSE=$(curl -X POST "https://$SSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token" \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d "grant_type=password&username=$USERNAME&password=$PASSWORD&client_id=kafka-cli&scope=offline_access" -s -k)

REFRESH_TOKEN=$(echo $GRANT_RESPONSE | awk -F "refresh_token\":\"" '{printf $2}' | awk -F "\"" '{printf $1}')
```
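If the grant request fails, `REFRESH_TOKEN` ends up empty and the Kafka commands below fail with hard-to-read errors, so a quick sanity check is worthwhile:

```bash
# A valid refresh token is a long, non-empty string
echo "refresh token length: ${#REFRESH_TOKEN}"
```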
Create the file with all the information required to access Kafka.
```bash
cat > /tmp/alice.properties << EOF
security.protocol=SASL_SSL
ssl.truststore.location=/tmp/truststore.p12
ssl.truststore.password=$STOREPASS
ssl.truststore.type=PKCS12
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.refresh.token="$REFRESH_TOKEN" \
  oauth.client.id="kafka-cli" \
  oauth.ssl.truststore.location="/tmp/truststore.p12" \
  oauth.ssl.truststore.password="$STOREPASS" \
  oauth.ssl.truststore.type="PKCS12" \
  oauth.token.endpoint.uri="https://$SSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token" ;
sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
EOF
```
Once you have your properties file set, you may test your permissions in the following ways:
Creating a topic:
```bash
bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --topic x_messages --create --replication-factor 1 --partitions 1
```
Listing available topics:
```bash
bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --list
```
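You can also exercise produce and consume permissions with the console clients, reusing the same properties file; this assumes the demo authorization rules grant `alice` produce and consume rights on the `x_messages` topic created above:

```bash
# Produce a few messages as alice (type them, then Ctrl+C to exit)
bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 \
  --producer.config /tmp/alice.properties --topic x_messages

# Consume them back
bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 \
  --consumer.config /tmp/alice.properties --topic x_messages --from-beginning
```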
You can add more users to the groups and roles used in the demo. An LDAP server is provisioned during the installation. Check the image documentation for information about passwords and default data.
The first step is to add a new User Federation LDAP provider in the Keycloak admin console. Here is what you should input in the form; the password is `adminpassword`.
Once you are done, hit *Save*. Then go back to the provider configuration and, at the bottom of the page, hit *Synchronize all users*.
Expect two users to be imported.
You will be able to authenticate with those users, but remember to add them to *Groups* or *Roles* so they can have access to Kafka resources.
- How to make Ansible show additional logging: add `-vvv` in case there are errors.

  ```bash
  ansible-playbook -vvv -e token=`oc whoami -t` -e server=`oc whoami --show-server` ansible/playbook.yml
  ```
- You can clean up all resources by running the script `delete_k8s_resources.sh`.
- You can show additional logging from the deployed Quarkus application by adding the specific log categories in `src/main/resources/application.properties`:

  ```properties
  quarkus.log.category."org.apache.kafka".level=DEBUG
  quarkus.log.category."io.strimzi".level=DEBUG
  ```