My Kubernetes Dashboard and How To Deploy Yours
My experiences with the Kubernetes Dashboard and a step-by-step tutorial on how you can deploy your own instance on your Raspberry Pi Kubernetes cluster.
The Kubernetes Dashboard is essentially a web UI for managing the Kubernetes cluster that it is deployed on. It allows the administrator to perform CRUD (Create, Read, Update, Delete) operations on the most commonly used cluster resources.
For less commonly used resources such as LimitRange and HorizontalPodAutoscaler, you'll still have to fall back to the command line. In the screenshot, the sidebar on the left lists all the resources you can manage within the UI.
The dashboard reached 2.0.0 a few months back, after staying in 2.0.0-beta for almost a year. Though there are still some rough edges to be ironed out, I feel it is sufficiently stable for production usage.
In my opinion, this should be the first application to deploy on a new cluster, for beginners and experienced users alike.
Most useful features
Here are the top 3 features that I find most useful, and screenshots of their corresponding views on the dashboard.
Viewing pod logs that auto-refresh every few seconds
Opening a shell session into any container to run commands
Logging state changes in your Deployments and Pods
How to install
There are only 2 steps to install the Kubernetes Dashboard:
- Download and deploy yaml manifests
- Create an admin user
Download and deploy yaml manifests
The developers of the Kubernetes Dashboard have simplified this step down to a single command:
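For v2.0.0, the command from the official deployment instructions looks like the one below; swap the version tag for whichever release is current when you read this:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

This creates the kubernetes-dashboard namespace and deploys everything the dashboard needs to run inside it.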
Create an admin user
To use the dashboard securely, you must create an admin-user ServiceAccount, obtain its token, and use that token to access the dashboard.
Copy the following resource definitions into a text editor and save the file as dashboard-adminuser.yml.
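The definitions below follow the official "Creating sample user" guide for the dashboard: a ServiceAccount named admin-user in the kubernetes-dashboard namespace, plus a ClusterRoleBinding granting it the built-in cluster-admin ClusterRole:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard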
Deploy them to the cluster with:
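$ kubectl apply -f dashboard-adminuser.yml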
After the ServiceAccount has been created, obtain the token for the admin-user with:
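On clusters from around this dashboard release (before Kubernetes 1.24, which stopped auto-generating ServiceAccount token Secrets), the command from the official guide looks like this:

$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

On 1.24 and newer, you would instead run kubectl -n kubernetes-dashboard create token admin-user.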
Copy the token and save it somewhere secure for the next step.
Accessing the dashboard
To access the dashboard securely, we must first set up a proxy through which we access the cluster's Kubernetes API Server:
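$ kubectl proxy

By default, kubectl proxy listens on 127.0.0.1:8001, which is the host and port the URL below points at.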
Then navigate to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ and you'll (hopefully) be greeted with the following page:
Ensure that the token login method is selected, paste in the token you obtained and saved earlier, and click Sign In.
If you see something like the screenshot above, congratulations, you've successfully set up your own instance of the Kubernetes Dashboard! If not, consider retracing your steps and referring to the official guide for debugging.
Optional step for CPU/Memory metrics
If you have followed my previous guide on installing k3s on your cluster, or are running k3s already, you're all set: k3s ships with metrics-server out of the box, so the CPU and Memory usage charts should populate within the next few minutes while it initializes.
If you are not running k3s, then there's a good chance that those charts won't be populated at all. In that case, you'll need to deploy the metrics-server app with the following command:
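$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

This is the standard install from the metrics-server repository. On homelab clusters with self-signed kubelet certificates, you may also have to add the --kubelet-insecure-tls flag to the metrics-server container's arguments before it will scrape anything.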
My experience
Given that Kubernetes is quite a complex beast, and that I came from the comforts of Portainer, the equivalent UI for Docker Swarm, a dashboard was the first thing I desired when I was just getting started with running k3s on my Raspberry Pi cluster nearly two years ago.
The first thing I learnt was how to create resources on the cluster, but somehow I never learnt how to monitor, list, or delete them until a few months later. It was through the dashboard that I managed to keep my deployments in check.
Fast-forward to today: I perform most of my operations through the command line, and really only reach for the dashboard when I can't work out what is blocking the deletion of a resource. I believe that is a relatively common problem if you use kubectl delete -f, which deletes only the resources declared in your yaml templates.
When it comes in handy
I was trying to delete and re-deploy AdguardHome from my cluster just yesterday by running the following command:
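$ kubectl delete -f ./production/adguard/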
This command deletes all the resources declared in yaml files under the directory ./production/adguard/, but it does not delete resources that are dynamically generated as a side effect of creating those resources in the first place.
This is fine for the most part. However, because deleting Deployments, StatefulSets, and DaemonSets alike requires their dependent resources, dynamically generated or not, to be deleted first, the command waits forever at the last step.
Why that happened
In the StatefulSet for AdguardHome I have defined the following key:
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 500M
    storageClassName: nfs-nvme-128g
This generates a PersistentVolumeClaim named data-adguard-i for each Pod the StatefulSet creates, where i is the ordinal index of the Pod within the StatefulSet, allowing for persistent state storage per Pod that is isolated from the others.
These generated storage resources, however, are not deleted by the earlier command, as they themselves are not defined in the yaml files; only the template is.
Manual deletion of generated resources
To correctly delete the StatefulSet, I still had to run the following commands to manually delete the generated PersistentVolumeClaims:
$ kubectl delete pvc data-adguard-0
persistentvolumeclaim "data-adguard-0" deleted
$ kubectl delete pvc data-adguard-1
persistentvolumeclaim "data-adguard-1" deleted
Subsequently, I check the dashboard for PersistentVolumes left in the Released state, whose corresponding PersistentVolumeClaims have been deleted, and then manually delete them via the UI.
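The same cleanup can also be done from the command line if you prefer; a rough equivalent, with <pv-name> standing in for whichever volume is listed as Released, would be:

$ kubectl get pv
$ kubectl delete pv <pv-name>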
You may make the same mistake too
It's easy to forget about these generated resources if you, like me, subscribe to the design principle known as Infrastructure-as-Code (IaC). In IaC, the paradigm dictates that infrastructure resources can and should be defined as code, and that the state of the code reflects the state of the infrastructure resources at any point in time.
Dynamically generated resources like these break that design, though volumeClaimTemplates exist out of necessity and convenience: it would be unsustainable to define one PersistentVolumeClaim per Pod by hand.
It remains to be seen what the community will come up with to improve adherence to the IaC paradigm, but what is certain is that the extra visibility the Kubernetes Dashboard provides wouldn't hurt, whether you're an experienced user or a complete beginner.
What's next?
I'll be writing something up about how I deployed my very first self-hosted application, Nextcloud, on my Kubernetes cluster, and my experiences thus far with it. See you again!