I am not particularly fond of Kubernetes and have tried to avoid it as much as possible, but at times it feels somewhat inevitable. To get a little more familiar with how it works, I decided to go through Kubernetes the Hard Way and set up a cluster.
Note: If I were really going to use Kubernetes, I would likely use a project that manages it for me. Going through this exercise was for the same reason I went through Linux from Scratch years ago: to better understand how the pieces are assembled.
The first step is to configure the prerequisites, which I did using lima to set up some virtual machines. It took a few tries to get right, but I eventually landed on the following commands.
limactl create --cpus 1 --memory .5 --disk 10 template://debian-12.yaml --name default --vm-type=vz --mount-type=virtiofs
limactl create --cpus 1 --memory 2 --disk 20 template://debian-12.yaml --name server --vm-type=vz --mount-type=virtiofs
limactl create --cpus 1 --memory 2 --disk 20 template://debian-12.yaml --name node-0 --vm-type=vz --mount-type=virtiofs
limactl create --cpus 1 --memory 2 --disk 20 template://debian-12.yaml --name node-1 --vm-type=vz --mount-type=virtiofs
This creates four virtual machines with the recommended configuration. Using --vm-type=vz configures the networking so that the machines can talk to one another.
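If, like me, you use limactl create rather than limactl start, the instances still need to be booted afterwards; checking on them and starting them looks something like this (names as above):

limactl list
limactl start default
limactl start server
limactl start node-0
limactl start node-1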
Since I am still using an Intel Mac, for the jumpbox configuration I needed to edit downloads.txt to basically do s/arm/amd/g so that the packages used were for the correct architecture. This also meant there were a few commands in the other steps where I needed to change arm to amd for the command to work, but I did not notice any other hiccups.
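Taken literally, that substitution is just a one-liner on the jumpbox, and a quick grep confirms nothing arm-flavored is left behind:

sed -i 's/arm/amd/g' downloads.txt
grep arm downloads.txt   # should print nothing once the edit is done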
I was then able to connect to each of the machines using limactl shell <node>, where I also installed tmux and htop so that I could more easily inspect the state of the running system.
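For anyone following along, that amounts to something like the following per machine (the templates are Debian, so apt is available):

limactl shell server
sudo apt-get update && sudo apt-get install -y tmux htop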
I currently use saltstack to manage docker deployments, so I will often use something like watch docker ps to keep track of running containers, though I am also curious about tools like ducker, which offers a nicer UI for viewing containers.
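A slightly dressed-up version of that watch docker ps, in case it is useful; the interval and columns here are just an example and can be tweaked:

watch -n 2 'docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"'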
I think one of my problems with Kubernetes is that it feels like a rat's nest of yaml configuration. I like some of the ideas behind aurae and auraescript for taming this with a better base controller and a better configuration language.
Part of me would also like to attempt my own small abstraction on top of Kubernetes, while trying not to reinvent the wheel too much.
Selfhosting hot take : I’ve been watching all these various “docker container managers” pop up in the selfhosting space, where they manage docker containers, storage, TLS certs, etc., and it feels like they’re just building out Kubernetes but less flexible with a smaller user base
Want to run an app and share it’s config? We already have helm charts??? (Ok helm is kinda awful but it 1. Exists 2. It does work)
You could even make a chart of charts #selfhosting #Homelab
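To be concrete about the chart-of-charts idea: an umbrella chart is just a Chart.yaml that lists other charts as dependencies, which helm dependency update then pulls in. The names and repository below are made up for illustration.

apiVersion: v2
name: homelab-stack
version: 0.1.0
dependencies:
  - name: some-app
    version: "1.2.3"
    repository: https://example.com/charts
  - name: some-other-app
    version: "4.5.6"
    repository: https://example.com/charts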
I am not quite ready to give up my saltstack-based deployment, but going through Kubernetes the Hard Way was still a good exercise.