Kafka cluster installation: practical experience only.
Disclaimer: the story below relates to OpenShift Dedicated v3.

I have a project that requires an event-sourcing architecture, and it runs in OpenShift. My idea is to try both Kafka and RabbitMQ in the broker role. Kafka installation on OpenShift is supported through the Strimzi-based implementation called AMQ Streams. This means that, in the case of OpenShift Dedicated clusters on v3, Red Hat has to enable Streams support for you. Once Streams is enabled, you can create a Kafka cluster with a single command, as described in "How to run Kafka on Openshift, the enterprise Kubernetes, with AMQ Streams". Just execute:

$ cat << EOF | oc create -f -
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      external:
        type: route
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
EOF

And you will get your cluster.
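One caveat worth noting: `type: ephemeral` means the brokers and ZooKeeper nodes use `emptyDir` volumes, so all data is lost whenever a pod restarts. That is fine for a first experiment, but for anything longer-lived you would switch the `storage` sections to persistent claims. A minimal sketch of the alternative (the size value here is a placeholder, not a recommendation):

storage:
  type: persistent-claim
  size: 10Gi
  deleteClaim: false

With `deleteClaim: false`, the PersistentVolumeClaims survive even if the Kafka resource itself is deleted, so the data can be recovered.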