A Kustomize-based repository for deploying the StreamsHub event-streaming stack on a local or development Kubernetes cluster using only kubectl.
Note: This is a development-only configuration. Resource limits, security settings and storage configurations are not suitable for production use.
| Component | Namespace | Description |
|---|---|---|
| Strimzi Kafka Operator | strimzi | Manages Kafka clusters via CRDs |
| Kafka cluster (dev-cluster) | kafka | Single-node Kafka for development |
| Apicurio Registry Operator | apicurio-registry | Manages schema registry instances |
| Apicurio Registry instance | apicurio-registry | In-memory schema registry |
| StreamsHub Console Operator | streamshub-console | Manages console instances |
| StreamsHub Console instance | streamshub-console | Web UI for Kafka management |
Optional: The metrics overlay adds Prometheus Operator, a Prometheus instance, and Kafka metrics collection via PodMonitors.
- kubectl v1.27 or later (for Kustomize v5.0 `labels` transformer support)
- A running Kubernetes cluster (minikube, kind, etc.)
- An Ingress controller for StreamsHub Console access (e.g. `minikube addons enable ingress`)
Deploy the entire stack with a single command:
```shell
curl -sL https://raw.githubusercontent.com/streamshub/developer-quickstart/main/install.sh | bash
```

This script installs the operators, waits for them to become ready, then deploys the operands.
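Once the script completes, a quick way to sanity-check the deployment is to list pods by the quick start's shared label (all quick-start resources carry it, as described in the labels section below):

```shell
# Everything listed here should reach the Running state;
# pending or crash-looping pods indicate an incomplete install.
kubectl get pods -A -l app.kubernetes.io/part-of=streamshub-developer-quickstart
```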
The install script accepts the following environment variables:
| Variable | Default | Description |
|---|---|---|
| REPO | streamshub/developer-quickstart | GitHub repository path |
| REF | main | Git ref (branch or tag) |
| OVERLAY | (empty) | Overlay to apply (e.g. metrics) |
| TIMEOUT | 120s | kubectl wait timeout |
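For reference, these variables compose into the kustomize URLs the script applies. The snippet below is an illustrative sketch of that resolution logic, not the script's actual code:

```shell
# Illustration only: how REPO, REF, and OVERLAY could resolve to the
# kustomize URLs used in the two install phases (defaults shown).
REPO="${REPO:-streamshub/developer-quickstart}"
REF="${REF:-main}"
OVERLAY="${OVERLAY:-}"

overlay_dir="overlays/${OVERLAY:-core}"
base_url="https://github.com/${REPO}//${overlay_dir}/base?ref=${REF}"
stack_url="https://github.com/${REPO}//${overlay_dir}/stack?ref=${REF}"

echo "$base_url"   # with defaults: .../developer-quickstart//overlays/core/base?ref=main
echo "$stack_url"  # with defaults: .../developer-quickstart//overlays/core/stack?ref=main
```

Setting OVERLAY=metrics switches both phases to the overlays/metrics paths.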
Example with a pinned version:
```shell
curl -sL https://raw.githubusercontent.com/streamshub/developer-quickstart/main/install.sh | REF=v1.0.0 bash
```

Deploy the stack with Prometheus metrics collection:
```shell
curl -sL https://raw.githubusercontent.com/streamshub/developer-quickstart/main/install.sh | OVERLAY=metrics bash
```

This adds the Prometheus Operator and a Prometheus instance (namespace: monitoring), enables Kafka metrics via the Strimzi Metrics Reporter, and wires the Console to display metrics.
If you prefer to control each step, the stack is installed in two phases:
```shell
kubectl apply -k 'https://github.com/streamshub/developer-quickstart//overlays/core/base?ref=main'
```

Wait for the operators to become ready:

```shell
kubectl wait --for=condition=Available deployment/strimzi-cluster-operator -n strimzi --timeout=120s
kubectl wait --for=condition=Available deployment/apicurio-registry-operator -n apicurio-registry --timeout=120s
kubectl wait --for=condition=Available deployment/streamshub-console-operator -n streamshub-console --timeout=120s
```

Then deploy the operands:

```shell
kubectl apply -k 'https://github.com/streamshub/developer-quickstart//overlays/core/stack?ref=main'
```

To include the metrics overlay, use the overlays/metrics paths instead of overlays/core:
```shell
# Phase 1
kubectl create -k 'https://github.com/streamshub/developer-quickstart//overlays/metrics/base?ref=main'
kubectl wait --for=condition=Available deployment/prometheus-operator -n monitoring --timeout=120s
kubectl wait --for=condition=Available deployment/strimzi-cluster-operator -n strimzi --timeout=120s
kubectl wait --for=condition=Available deployment/apicurio-registry-operator -n apicurio-registry --timeout=120s
kubectl wait --for=condition=Available deployment/streamshub-console-operator -n streamshub-console --timeout=120s

# Phase 2
kubectl apply -k 'https://github.com/streamshub/developer-quickstart//overlays/metrics/stack?ref=main'
```

When using minikube, enable the ingress addon (if you didn't enable it when creating the cluster) and run minikube tunnel:
```shell
minikube addons enable ingress
minikube tunnel
```

Then use port-forwarding to access the console:

```shell
kubectl port-forward -n streamshub-console svc/streamshub-console-console-service 8080:80
```

Open http://localhost:8080 in your browser.
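With the port-forward running, you can also smoke-test the console from a second terminal. This assumes the console answers plain HTTP on the forwarded port:

```shell
# While kubectl port-forward is active in another terminal,
# print the HTTP status line returned by the console.
curl -sI http://localhost:8080 | head -n 1
```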
When using Kind, create the cluster with ingress-ready port mappings:
```shell
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF
```

Then deploy an ingress controller (e.g. ingress-nginx):
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.1/deploy/static/provider/kind/deploy.yaml
kubectl wait --namespace ingress-nginx \
  --for=condition=Ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```

Use port-forwarding to access the console:
```shell
kubectl port-forward -n streamshub-console svc/streamshub-console-console-service 8080:80
```

Open http://localhost:8080 in your browser.
The uninstall script handles safe teardown with shared-cluster safety checks:
```shell
curl -sL https://raw.githubusercontent.com/streamshub/developer-quickstart/main/uninstall.sh | bash

# If installed with the metrics overlay:
curl -sL https://raw.githubusercontent.com/streamshub/developer-quickstart/main/uninstall.sh | OVERLAY=metrics bash
```

The script:
- Deletes operand custom resources and waits for finalizers to complete
- Checks each operator group for non-quick-start CRs on the cluster
- Fully removes operator groups with no shared CRDs
- For shared operator groups, removes only the operator deployment (retaining CRDs)
- Reports any retained groups and remaining resources
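After the script finishes, you can confirm that nothing from the quick start remains by querying the shared label. On a dedicated cluster the output should be empty; on a shared cluster, intentionally retained CRDs will still be listed:

```shell
# Both commands should report "No resources found" after a full teardown.
kubectl get all -A -l app.kubernetes.io/part-of=streamshub-developer-quickstart
kubectl get crds -l app.kubernetes.io/part-of=streamshub-developer-quickstart
```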
Phase 1 — Delete operands:
```shell
kubectl delete -k 'https://github.com/streamshub/developer-quickstart//overlays/core/stack?ref=main'
```

Wait for all custom resources to be fully removed before proceeding.
Phase 2 — Delete operators and CRDs:
Warning: On shared clusters, deleting CRDs will cascade-delete ALL custom resources of that type cluster-wide. Check for non-quick-start resources first:
```shell
kubectl get kafkas -A --selector='!app.kubernetes.io/part-of=streamshub-developer-quickstart'
```
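The same check applies to the other operator groups. The plural resource names below (apicurioregistries, consoles) are assumptions about the installed CRDs; confirm the names on your cluster with kubectl api-resources:

```shell
# Assumed plural CRD names; verify with:
#   kubectl api-resources | grep -Ei 'apicurio|console'
kubectl get apicurioregistries -A --selector='!app.kubernetes.io/part-of=streamshub-developer-quickstart'
kubectl get consoles -A --selector='!app.kubernetes.io/part-of=streamshub-developer-quickstart'
```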
Then delete the operators and CRDs:

```shell
kubectl delete -k 'https://github.com/streamshub/developer-quickstart//overlays/core/base?ref=main'
```

For the metrics overlay, use overlays/metrics/base and overlays/metrics/stack instead.
All resources carry the label app.kubernetes.io/part-of=streamshub-developer-quickstart:
```shell
kubectl get all -A -l app.kubernetes.io/part-of=streamshub-developer-quickstart
kubectl get crds,clusterroles,clusterrolebindings -l app.kubernetes.io/part-of=streamshub-developer-quickstart
```

Use the update-version.sh script to update component versions:
```shell
# List available versions
./update-version.sh --list strimzi

# Preview changes
./update-version.sh --dry-run strimzi 0.52.0

# Check if a release exists
./update-version.sh --check apicurio-registry 3.2.0

# Update a component
./update-version.sh strimzi 0.52.0
```

Supported components: strimzi, apicurio-registry, streamshub-console, prometheus-operator.
When developing changes to the kustomization files, use the LOCAL_DIR environment
variable to point the install and uninstall scripts at your local checkout instead
of the remote GitHub repository:
```shell
# Install from local repo
LOCAL_DIR=. ./install.sh

# Uninstall from local repo
LOCAL_DIR=. ./uninstall.sh
```

When LOCAL_DIR is set, REPO and REF are ignored — the scripts resolve kustomization paths relative to the given directory.
You can also provide an absolute path:
```shell
LOCAL_DIR=/home/user/repos/developer-quickstart ./install.sh
```

Repository layout:

```
components/                              # Reusable Kustomize components
├── core/                                # Core stack component
│   ├── base/                            # Operators & CRDs
│   │   ├── strimzi-operator/            # Strimzi Kafka Operator
│   │   ├── apicurio-registry-operator/  # Apicurio Registry Operator
│   │   └── streamshub-console-operator/ # StreamsHub Console Operator
│   └── stack/                           # Operands (Custom Resources)
│       ├── kafka/                       # Single-node Kafka cluster
│       ├── apicurio-registry/           # In-memory registry instance
│       └── streamshub-console/          # Console instance
└── metrics/                             # Prometheus metrics component
    ├── base/                            # Prometheus Operator
    └── stack/                           # Prometheus instance, PodMonitors, patches

overlays/                                # Deployable configurations
├── core/                                # Default install (core only)
│   ├── base/                            # Phase 1: Operators & CRDs
│   └── stack/                           # Phase 2: Operands
└── metrics/                             # Core + Prometheus metrics
    ├── base/                            # Phase 1: Operators & CRDs + Prometheus Operator
    └── stack/                           # Phase 2: Operands + Prometheus instance & monitors
```