Kafka example with Quarkus and Kafka (AMQ Streams). This is based on the Quarkus Kafka example here. The difference is that the application is decomposed into two microservices: a price-generator and a price-converter.
This project uses JDK 8 but should probably be updated to JDK 11, since JDK 8 support is deprecated in later versions of Quarkus.
- Local Development
- Deploy AMQ Streams
- Deploy microservices on OpenShift from remote git repo
- Deploy microservices on OpenShift using local git repo/source code
docker-compose -f kafka/docker-compose.yaml up -d
# to view the docker services
docker-compose -f kafka/docker-compose.yaml ps
# this should return two entries
To stop the running Kafka services:
docker-compose -f kafka/docker-compose.yaml down
Live-coding the price-generator:
cd price-generator
mvn clean compile quarkus:dev
Live-coding the price-converter:
cd price-converter
mvn clean compile quarkus:dev
Note: If running both microservices concurrently, you may want to pass -Ddebug=false to avoid a collision on the debug port.
Navigate to http://localhost:8080/prices.html.
Every 5 seconds you should see the page refresh with a new price.
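The 5-second cadence comes from the price-generator microservice. Below is a minimal sketch of what it might look like, following the upstream Quarkus Kafka quickstart this example is based on; the package, class name, and the use of Mutiny's Multi are assumptions, only the generated-price channel name is taken from this README (exact import packages vary with the Quarkus version).

```java
package org.acme.kafka; // hypothetical package name

import java.time.Duration;
import java.util.Random;

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Outgoing;

import io.smallrye.mutiny.Multi;

@ApplicationScoped
public class PriceGenerator {

    private final Random random = new Random();

    // Emit a random price every 5 seconds on the "generated-price" channel.
    // The Kafka connector maps this channel to a Kafka topic through the
    // mp.messaging.outgoing.generated-price.* configuration, which is why the
    // bootstrap server can be overridden on that prefix (see the note below).
    @Outgoing("generated-price")
    public Multi<Integer> generate() {
        return Multi.createFrom().ticks().every(Duration.ofSeconds(5))
                .onOverflow().drop()
                .map(tick -> random.nextInt(100));
    }
}
```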
Note: if you choose to deploy Kafka elsewhere, or you are developing in CodeReady Workspaces (CRW), you can pass the location of the Kafka bootstrap server to the local deployment using
-Dmp.messaging.outgoing.generated-price.bootstrap.servers=<bootstrap location>
For example:
# kafka_bootstrap=prices-cluster-kafka-bootstrap-prices-kafka.apps.cluster-60c1.60c1.example.opentlc.com:80
kafka_bootstrap=prices-cluster-kafka-bootstrap.prices-kafka:9092
mvn clean compile quarkus:dev -Dmp.messaging.outgoing.generated-price.bootstrap.servers=$kafka_bootstrap
Remember to shut down the Kafka services once done with local development:
docker-compose -f kafka/docker-compose.yaml down
Set the OpenShift project name:
PRICES_PROJECT=prices-kafka
Note: The following commands assume that you are logged into an OpenShift cluster and that the AMQ Streams Operator has been installed on the cluster and is watching the namespace.
oc new-project $PRICES_PROJECT
You can cut and paste the contents of kafka/PriceKafkaCluster.yaml using the OpenShift Operator UI, or just run the following:
oc apply -f kafka/PriceKafkaCluster.yaml -n $PRICES_PROJECT
oc get Kafka
Wait until the Kafka and ZooKeeper StatefulSets and the prices-cluster-entity-operator Deployment are running.
You can cut and paste the contents of kafka/PriceKafkaTopic.yaml using the OpenShift Operator UI, or just run the following:
oc apply -f kafka/PriceKafkaTopic.yaml -n $PRICES_PROJECT
oc get KafkaTopic
To test the Kafka cluster, you can run a console producer and consumer:
oc run kafka-producer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list prices-cluster-kafka-bootstrap:9092 --topic prices
oc run kafka-consumer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-23:1.3.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server prices-cluster-kafka-bootstrap:9092 --topic prices --from-beginning
# clean up after testing
oc delete pod kafka-producer
oc delete pod kafka-consumer
Deploy the microservices on OpenShift from the remote git repo:
oc new-app registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift~https://github.com/gbengataylor/prices-quarkus-kafka-example.git --context-dir=price-generator --name=price-generator -n $PRICES_PROJECT
oc expose svc price-generator -n $PRICES_PROJECT
oc new-app registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift~https://github.com/gbengataylor/prices-quarkus-kafka-example.git --context-dir=price-converter --name=price-converter -n $PRICES_PROJECT
oc expose svc price-converter -n $PRICES_PROJECT
oc label dc/price-generator app.kubernetes.io/part-of=prices --overwrite -n $PRICES_PROJECT
oc label dc/price-generator app.openshift.io/runtime=quarkus --overwrite -n $PRICES_PROJECT
oc label dc/price-converter app.kubernetes.io/part-of=prices --overwrite -n $PRICES_PROJECT
oc label dc/price-converter app.openshift.io/runtime=quarkus --overwrite -n $PRICES_PROJECT
Note: This assumes that the Kafka cluster has been deployed on OpenShift, in the same project.
oc project $PRICES_PROJECT
Quarkus has an extension to facilitate deployment to OpenShift:
./mvnw quarkus:add-extension -Dextensions="openshift"
The two microservices already include the OpenShift extension, so there is no need to run that command.
S2I is also an option, but not used in this example
Note: If you want to build the image without deploying, run the following command (it performs an S2I build). This will create a BuildConfig, build the image, and push it to an ImageStream.
./mvnw clean package -Dquarkus.container-image.build=true
Deploy the price-generator:
cd price-generator
./mvnw clean package -Dquarkus.kubernetes.deploy=true
oc label dc/price-generator app.kubernetes.io/part-of=prices --overwrite
oc label dc/price-generator app.openshift.io/runtime=quarkus --overwrite
Note: if the image gets built but the deployment fails (as it may with JDK 8), you can deploy the microservice with:
oc new-app price-generator:1.0-SNAPSHOT
oc expose service price-generator
Deploy the price-converter:
cd price-converter
./mvnw clean package -Dquarkus.kubernetes.deploy=true
oc label dc/price-converter app.kubernetes.io/part-of=prices --overwrite
oc label dc/price-converter app.openshift.io/runtime=quarkus --overwrite
Note: if the image gets built but the deployment fails (as it may with JDK 8), you can deploy the microservice with:
oc new-app price-converter:1.0-SNAPSHOT
oc expose service price-converter
After deploying to OpenShift:
export URL="http://$(oc get route price-converter -o jsonpath='{.spec.host}')"
echo "Navigate to URL: $URL/prices.html to view updated prices"
To view the raw event stream using the httpie http tool:
http $URL/prices/stream --stream
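For reference, here is a sketch of how the price-converter side typically looks in the upstream Quarkus Kafka quickstart this example is based on. The channel names (prices, my-data-stream), the conversion rate, the package, and the class names are assumptions rather than values taken from this repository, the two classes would normally live in separate files, and exact import packages vary with the Quarkus version.

```java
package org.acme.kafka; // hypothetical package name

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;
import org.jboss.resteasy.annotations.SseElementType;
import org.reactivestreams.Publisher;

import io.smallrye.reactive.messaging.annotations.Broadcast;

@ApplicationScoped
public class PriceConverter {

    private static final double CONVERSION_RATE = 0.88; // assumed USD -> EUR rate

    // Read prices from the Kafka topic bound to the "prices" channel and
    // broadcast the converted value on an in-memory stream consumed below.
    @Incoming("prices")
    @Outgoing("my-data-stream")
    @Broadcast
    public double process(int priceInUsd) {
        return priceInUsd * CONVERSION_RATE;
    }
}

@Path("/prices")
class PriceResource {

    @Inject
    @Channel("my-data-stream")
    Publisher<Double> prices;

    // Server-sent events endpoint backing prices.html and the
    // `http $URL/prices/stream --stream` command above.
    @GET
    @Path("/stream")
    @Produces(MediaType.SERVER_SENT_EVENTS)
    @SseElementType("text/plain")
    public Publisher<Double> stream() {
        return prices;
    }
}
```

In the quickstart, prices.html simply opens an EventSource against /prices/stream and updates the page as each converted price arrives.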
