2021-04-25

I previously blogged about setting up, monitoring and using ConfigMaps for a Cardano relay using Kubernetes. This post describes the changes to also run a Producer node, which connects to the relay and uses Kubernetes secrets for its keys.

Posts in this series:

Step 1: Setting up Cardano Relays using Kubernetes/microk8s

Step 2: Monitoring Cardano Relays on Kubernetes with Grafana and Prometheus

Step 3: Using Kubernetes ConfigMaps for Cardano Node Topology Config

Step 4: Setting up a Cardano Producer node using Kubernetes/microk8s

If you find this post useful or are looking for somewhere to delegate while setting up your own pool, check out my pool [CODER] which donates 30% of rewards to charities that help people into coding.

Creating and Registering a Pool and Cold Keys

This post assumes you have already set up your pool and created the required keys, and focuses only on setting up the Producer node in Kubernetes. If you do not already have your pool set up you should follow the official Cardano docs.

Be sure to use an offline machine for creating the cold keys, and keep secure backups. If you have a significant pledge, consider using a crypto hardware wallet to secure it (and the rewards address) instead of a wallet on the cold machine - and make sure you have good security/backups of the hardware seed phrase! This AdaLite post has good instructions for including a hardware wallet as a pool owner. Doing this limits the loss from a lost/compromised cold machine to just the deposit and future rewards, rather than also the existing unclaimed rewards and the pledge.

Creating Kubernetes Secrets

After following the pool creation instructions above, you should have three files from your cold environment that your Producer node will need:

kes.skey

vrf.skey

node.cert

No other files from the cold machine should be used on the producer (although they should be safely backed up from the cold machine).

These three files will be stored in Kubernetes as Secrets. The default is for Secrets to be stored base64 encoded in Kubernetes. This is not encryption - it is simply to support binary files. For simplicity this post will stick with base64 but you should consider enabling encryption for Secrets and ensure your config files are secure and cannot be read by anything that does not need them.

If these three files are compromised, it does not give up control of your pool, your deposit or your pledge. However it would allow someone else to run a producer for your pool (which could result in short-lived forks that negatively impact the network) or calculate which slots your pool will produce (making a DoS attack a little easier).

DO NOT use online web pages for base64 encoding Kubernetes secrets!

To base64 encode the contents of a file, you can use the base64 command:
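A quick sketch of the encoding step (the demo below uses a throw-away file; for the real keys you'd run the same command against kes.skey, vrf.skey and node.cert). Note that -w 0 is a GNU coreutils option that disables line wrapping, so each secret becomes a single line:

```shell
# Create a throw-away file to demonstrate with
printf 'example-secret' > /tmp/demo.skey

# Encode it (for the real keys: base64 -w 0 kes.skey, etc.)
base64 -w 0 /tmp/demo.skey   # → ZXhhbXBsZS1zZWNyZXQ=

# Verify the round-trip decodes back to the original contents
base64 -w 0 /tmp/demo.skey | base64 -d   # → example-secret
```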

You can verify the contents using base64 -d

These values can then be added to a new Kubernetes config file along with the rest of your config, using the Secret keys as the filenames we’d like when mounting them into the pod:
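A sketch of what that Secret might look like (the Secret name is illustrative; the placeholder values are the base64 strings produced above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cardano-mainnet-producer-keys
type: Opaque
data:
  # Each key becomes a filename when the Secret is mounted as a volume
  kes.skey: <base64 of kes.skey>
  vrf.skey: <base64 of vrf.skey>
  node.cert: <base64 of node.cert>
```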

Node Volume

Extend the node volumes config created for the relays to include a similar definition for the producer. The producer will need its own data folder, though to speed up the initial sync you may wish to copy the db folder in from an existing relay. You’ll also want to copy the configuration folder, since that will likely be the same for each node now that we’re using ConfigMaps for topology.

If you’re using a local node folder for storage, you should again set nodeAffinity to ensure this pod always runs on the same node (bear in mind this means the pod won’t run if this node is unavailable, but redundant storage is outside of the scope of this post).
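A minimal sketch of such a local volume with nodeAffinity (names, sizes and the host path are illustrative, following the relay volume from the first post):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cardano-mainnet-producer-volume
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /data/cardano-mainnet-producer
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node-1   # pin to the machine that holds this folder
```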

Topology

The previously created topology ConfigMap file needs updating to include config for the producer, and also to point the relay to the producer.

I’ve used the hostnames cardano-mainnet-relay-service.default.svc.cluster.local and cardano-mainnet-producer-service.default.svc.cluster.local in my config files, which Kubernetes DNS automatically resolves to the IP addresses of the services created for the relays/producer respectively. This avoids needing to hardcode specific IP addresses when first setting this up.

I’ve omitted my other peers and used just the default IOHK hostname to keep the example shorter.
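The producer's side of the topology might look something like this (the port is illustrative; the producer should peer only with your own relays, never the public internet):

```json
{
  "Producers": [
    {
      "addr": "cardano-mainnet-relay-service.default.svc.cluster.local",
      "port": 3000,
      "valency": 1
    }
  ]
}
```

The relay's topology gains a matching entry pointing at cardano-mainnet-producer-service.default.svc.cluster.local, alongside its existing public peers (such as the default IOHK hostname).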

StatefulSet/Pod Definition

The configuration for the producer is mostly the same as for the relay, with a few differences:

Names and volumes are updated to reference “producer” instead of “relay”

A new volume is used to mount the secrets (keys) into the pod

Additional arguments are passed to the node containing paths to the keys

The exposed service does not bind a NodePort because incoming connections are only from inside the cluster (the relays)

My full config looks like this:
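An abridged sketch of the producer StatefulSet and Service (names, image tag, ports and paths are illustrative, following the relay definition from the earlier posts - the three --shelley-* arguments are what make this node a block producer):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cardano-mainnet-producer
spec:
  serviceName: cardano-mainnet-producer-service
  replicas: 1
  selector:
    matchLabels:
      app: cardano-mainnet-producer
  template:
    metadata:
      labels:
        app: cardano-mainnet-producer
        tags: cardano-mainnet-node  # so the existing Prometheus config scrapes it
    spec:
      containers:
        - name: cardano-node
          image: inputoutput/cardano-node:1.26.2
          args:
            - run
            - --config
            - /configuration/mainnet-config.json
            - --topology
            - /configuration/producer-topology.json
            - --database-path
            - /data/db
            - --port
            - "3000"
            # The additional arguments pointing at the mounted keys:
            - --shelley-kes-key
            - /keys/kes.skey
            - --shelley-vrf-key
            - /keys/vrf.skey
            - --shelley-operational-certificate
            - /keys/node.cert
          volumeMounts:
            - name: node-data
              mountPath: /data
            - name: producer-keys
              mountPath: /keys
              readOnly: true
      volumes:
        - name: node-data
          persistentVolumeClaim:
            claimName: cardano-mainnet-producer-pvc
        - name: producer-keys
          secret:
            secretName: cardano-mainnet-producer-keys
---
# ClusterIP Service with no NodePort - only the in-cluster relays connect to it
apiVersion: v1
kind: Service
metadata:
  name: cardano-mainnet-producer-service
spec:
  selector:
    app: cardano-mainnet-producer
  ports:
    - port: 3000
```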

With all of these files kubectl apply -f‘d, the producer node should start up and begin syncing. Since it’s tagged with cardano-mainnet-node, it should automatically be picked up by the Prometheus config we previously set up and show up in Grafana once it’s fully running.

I created a bash alias that prints the last few blocks from each node, so I can quickly verify they’re in sync and check the latency of block syncing:
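A sketch of such a helper (the label selector is illustrative, and it assumes the nodes log "Chain extended" lines when adopting new blocks, as cardano-node does at its default log level):

```shell
# Print the most recent "Chain extended" log lines from each cardano node pod
last_blocks() {
  for pod in $(kubectl get pods -l tags=cardano-mainnet-node -o name); do
    echo "== $pod =="
    kubectl logs "$pod" --tail=500 | grep 'Chain extended' | tail -n 3
  done
}
```

Comparing the slot numbers and timestamps across nodes shows whether any node is lagging behind the others.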

I also configured some additional panels and alerts in Grafana to easily monitor the remaining KES periods (it’s important to periodically create new KES keys and copy them - along with an updated node.cert - into the secrets file and re-apply it!).
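As a sketch, a Prometheus alerting rule along these lines can catch approaching KES expiry (the metric name is as exposed by the node version I run - check your own node's metrics endpoint, and pick a threshold that leaves you comfortable rotation time):

```yaml
groups:
  - name: cardano
    rules:
      - alert: KESKeysExpiringSoon
        # Fire while fewer than 10 KES periods remain on the producer
        expr: cardano_node_metrics_remainingKESPeriods_int < 10
        for: 5m
        annotations:
          summary: "Rotate KES keys soon: fewer than 10 KES periods remain"
```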



If you find this post useful or are looking for somewhere to delegate while setting up your own pool, check out my pool [CODER] which donates 30% of rewards to charities that help people into coding.

