I previously blogged about setting up and monitoring a Cardano relay using Kubernetes. Since I’m storing files on the host’s filesystem using a local PersistentVolume and was unable to map a single configuration folder into all nodes (I found no explanation of this online, but having multiple pods try to mount a ReadOnlyMany volume seems to result in all but the first getting stuck pending), I ended up with a copy of the configuration files for each node. I decided to move at least the topology config into a ConfigMap, since it changes often (when I add/update peer relays from other SPOs), to avoid having to keep synchronising it between nodes.
Posts in this series:
Step 1: Setting up Cardano Relays using Kubernetes/microk8s
Step 2: Monitoring Cardano Relays on Kubernetes with Grafana and Prometheus
Step 3: Using Kubernetes ConfigMaps for Cardano Node Topology Config
If you find this post useful or are looking for somewhere to delegate while setting up your own pool, check out my pool [CODER] which donates 30% of rewards to charities that help people into coding.
Creating a ConfigMap
Like the rest of the config, our ConfigMap will live in a .yaml file. Each ConfigMap needs a unique name and can contain multiple files. I have two such maps - one for the producer node (which needs to connect only to relays) and one for relays (which need to connect both to the producer and to other peers).
I’ve used the hostnames cardano-mainnet-relay-service.default.svc.cluster.local and cardano-mainnet-producer-service.default.svc.cluster.local in my config files, which Kubernetes DNS automatically resolves to the IP addresses of the services created for the relays/producer respectively. This avoids needing to hardcode specific IP addresses when first setting this up.
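As a reminder of where those DNS names come from, Kubernetes names Services as `<service>.<namespace>.svc.cluster.local`. A sketch of what the relay Service from the earlier posts might look like (the exact name, selector label and port here are assumptions, not the original values):

```yaml
# Hypothetical relay Service; its metadata.name determines the DNS name
# cardano-mainnet-relay-service.default.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: cardano-mainnet-relay-service
  namespace: default
spec:
  selector:
    app: cardano-mainnet-relay   # assumed pod label
  ports:
    - port: 3000
      targetPort: 3000
```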
I’ve omitted my peers and used just the default IOHK hostname to keep the example shorter.
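A minimal sketch of the relay ConfigMap, assuming the topology file uses the (pre-P2P) `Producers` format and that the ConfigMap/file names and ports shown here match your setup (mine may differ):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cardano-relay-topology   # assumed name, referenced later from the StatefulSet
data:
  mainnet-topology.json: |
    {
      "Producers": [
        {
          "addr": "cardano-mainnet-producer-service.default.svc.cluster.local",
          "port": 3000,
          "valency": 1
        },
        {
          "addr": "relays-new.cardano-mainnet.iohk.io",
          "port": 3001,
          "valency": 2
        }
      ]
    }
```

The producer’s map would be the same shape, but list only the relay Service hostname rather than IOHK/peer relays.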
Next we need to add the ConfigMaps to the volumes and volumeMounts sections of the relay/producer StatefulSet configs. My relay’s config now looks like this:
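The relevant excerpt looks roughly like the following (the ConfigMap name, mount path and container name are assumptions; the node’s `--topology` argument would then point at the file under the mount path):

```yaml
# Excerpt of the relay StatefulSet pod spec
spec:
  template:
    spec:
      containers:
        - name: cardano-node
          # ... image, args, other mounts unchanged ...
          volumeMounts:
            - name: topology            # must match the volume name below
              mountPath: /config/topology
      volumes:
        - name: topology
          configMap:
            name: cardano-relay-topology   # assumed ConfigMap name
```

With this in place the file appears in the container at `/config/topology/mainnet-topology.json`, and editing the ConfigMap updates it for every replica instead of each node’s copy on disk.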
And that’s all there was to it. I microk8s.kubectl apply -f’d the files, deleted the old topology files from disk and restarted the nodes. Checking the logs with microk8s.kubectl logs pod/cardano-mainnet-relay-deployment-0 and microk8s.kubectl logs pod/cardano-mainnet-producer-deployment-0, I saw both connect to the expected nodes, and after a few minutes of loading their databases they showed back up on Grafana as processing transactions.
If you find this post useful or are looking for somewhere to delegate while setting up your own pool, check out my pool [CODER] which donates 30% of rewards to charities that help people into coding.