Deploying HariKube operator to manage dynamic database topology

Richard Kovacs
4 min read

Before diving in, make sure you have a running Kubernetes cluster. If you need help setting one up, check out our tutorial.


🔌 Installing Essential Kubernetes Add-ons

To get started, let’s install two popular open-source add-ons: Cert-Manager for automated certificate management, and the Prometheus Operator for monitoring and alerting. These add-ons are widely used and will help your cluster run smoothly.

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.3/cert-manager.yaml
kubectl apply -f https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.77.1/stripped-down-crds.yaml
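Before moving on, it's worth confirming both add-ons came up cleanly. A quick check, assuming the cert-manager namespace created by the manifest above and the Prometheus Operator's standard monitoring.coreos.com CRD group:

```shell
# Wait until all cert-manager deployments report Available
kubectl wait --for=condition=Available deployment --all \
  -n cert-manager --timeout=120s

# Confirm the Prometheus Operator CRDs are registered
kubectl get crds | grep monitoring.coreos.com
```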

✨ Deploying the HariKube Operator

⚠️ HariKube images aren’t public yet. If you’d like to try them, request a free trial version on the Open Beta invitation page.

Start by authenticating your local Docker client with the private registry at registry.harikube.info. This step is essential for pulling images from the registry.

docker login registry.harikube.info

Next, pull the HariKube Operator image from our registry:

docker pull registry.harikube.info/harikube/operator:beta-v1.0.0-2

If you’re using Kind for your cluster, load the image into your Kind node:

kind load docker-image --name harikube-cluster registry.harikube.info/harikube/operator:beta-v1.0.0-2

Now, deploy the HariKube Operator to your cluster. This operator will manage your custom database routing policies and automate topology changes.

kubectl apply -f https://harikube.info/manifests/harikube-operator-beta-v1.0.0-2.yaml
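You can watch the operator come up before continuing. The namespace and label below are assumptions based on common operator layouts; check the manifest for the actual values, or simply search across all namespaces:

```shell
# Find the operator pod regardless of which namespace the manifest uses
kubectl get pods -A | grep harikube

# Once you know the namespace (harikube-system is an assumption here),
# wait for the deployment rollout to finish
kubectl -n harikube-system rollout status deployment --timeout=120s
```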

🔨 Configuring the Operator and Registering a Custom Resource

Let’s create your first topology configuration. This tells the HariKube Operator how to route data for a specific custom resource.

topology-shirts.yaml
apiVersion: harikube.info/v1
kind: TopologyConfig
metadata:
  name: topologyconfig-shirts
  namespace: default
spec:
  targetSecret: default/topology-config
  backends:
  - name: shirts
    endpoint: sqlite:///db/shirts.db?_journal=WAL&cache=shared
    customresource:
      group: stable.example.com
      kind: shirts

This TopologyConfig custom resource instructs the operator to manage all shirts resources, storing their data in a dedicated SQLite database (shirts.db) inside the HariKube Middleware container.

Apply the configuration:

kubectl apply -f topology-shirts.yaml
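A quick sanity check, using the names from the YAML above (the topologyconfig-shirts resource and the topology-config Secret referenced by spec.targetSecret):

```shell
# The TopologyConfig should be visible through the API server
kubectl get topologyconfig topologyconfig-shirts -n default

# The operator writes the rendered routing policy into the target Secret
kubectl get secret topology-config -n default
```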

🚀 Defining and Deploying a New Application Resource

To see HariKube in action, let’s define a new custom resource type. We’ll use a simple example: a Shirt resource.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/customresourcedefinition/shirt-resource-definition.yaml

This command registers the Shirt custom resource definition (CRD) in your cluster. Now, Kubernetes can manage Shirt objects just like native resources.
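You can verify the registration before creating any objects; the CRD name shirts.stable.example.com comes from the upstream example manifest:

```shell
# The CRD should now exist in the cluster
kubectl get crd shirts.stable.example.com

# The new kind is listed alongside the built-in resource types
kubectl api-resources --api-group=stable.example.com
```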

Let’s create an actual Shirt instance:

kubectl apply -f - <<EOF
apiVersion: stable.example.com/v1
kind: Shirt
metadata:
  name: example1
  labels:
    # Disables Kubernetes Controller Manager operations on this instance
    # For more info please visit /docs/custom-resource page
    skip-controller-manager-metadata-caching: "true"
spec:
  color: blue
  size: S
EOF

Once applied, the Kubernetes API server will accept the new Shirt object. The HariKube Middleware, guided by your TopologyConfig, will store this instance in the dedicated SQLite database. This demonstrates the full workflow: from defining a custom resource to automated, isolated database storage.
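You can first confirm the object is readable through the normal Kubernetes API; behind the scenes, the response is served from the dedicated SQLite backend rather than ETCD:

```shell
# Fetch the object back through the API server
kubectl get shirt example1 -o yaml

# Or list all Shirt objects in the current namespace
kubectl get shirts
```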

To verify everything is working, check the database for your new Shirt resource:

docker run -it --rm -v harikube_db:/data alpine/sqlite /data/shirts.db "select name from kine"

And that’s the final step! You’ve successfully deployed the HariKube Operator and configured a dynamic database topology for your custom resources. This setup gives you data isolation, lower latency, and virtually unlimited storage by offloading custom resource data from ETCD to dedicated backends like SQLite, turning Kubernetes into a truly scalable Platform-as-a-Service.

Thank you for reading! If you have questions or ideas, please share them—we’d love to hear from you.

Ready to Get Started?

We're getting close to launch, and we want you to be one of the first to experience Cloud-Native microservice development at scale.