My kubernetes test cluster, part one - install.
I'm running a kubernetes test cluster in my home network. It is used to learn kubernetes and try out various things, for example kata containers and kubevirt. Not used much (yet?) for actual development.
After mentioning it here and there some people asked for details, so here we go. I'll go describe my setup, with some kubernetes and container basics sprinkled in.
This is part one of an article series and will cover cluster node installation and basic cluster setup.
The cluster nodes
Most cluster nodes are dual-core virtual machines. The control-plane node (formerly known as master node) has 8G of memory, most worker nodes have 4G of memory. It is a mix of x86_64 and aarch64 nodes. Kubernetes names these architectures amd64 and arm64, which is easily confused, so take care.
The virtual nodes use bridged networking. So no separate network, they simply show up on my 192.168.2.0/24 home network just like the physical machines connected to it. They get a static IP address assigned by the DHCP server, and I can easily ssh into each node.
All cluster nodes run Fedora 34, Server Edition.
Node configuration
I have a git repository with some config files, to simplify rebuilding a cluster node from scratch. The repository also has some shell scripts with the commands listed later in this blog post.
Let's go over the config files one by one.
The first one is needed for kubernetes networking.
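Presumably this is the usual sysctl drop-in; here is a minimal sketch of what it can look like, assuming the standard bridge netfilter and IP forwarding settings (file name and exact values are my assumption):

```
# sketch: sysctl settings for kubernetes networking (run as root)
# note: the net.bridge.* keys only exist once br_netfilter is loaded
cat > /etc/sysctl.d/kubernetes.conf <<'EOF'
# let iptables see bridged traffic
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
# forward packets, needed for pod networking
net.ipv4.ip_forward = 1
EOF
sysctl --system
```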
The second one loads some kernel modules needed at boot. Again for kubernetes networking, plus vhost support, which is needed by kata containers.
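A sketch of such a modules-load drop-in; the exact module list and the file name are my guess, with the usual kubernetes networking modules plus the vhost bits:

```
# sketch: load the modules at boot (run as root)
cat > /etc/modules-load.d/kubernetes.conf <<'EOF'
# kubernetes networking
br_netfilter
overlay
# vhost support, used by kata containers
vhost
vhost_net
vhost_vsock
EOF
# load them right away too
systemctl restart systemd-modules-load.service
```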
The upstream kubernetes rpm repository. Note this is not enabled (enabled=0) because I don't want normal fedora system updates to also update the kubernetes packages. For installing/updating kubernetes packages I can enable the repo using dnf --enablerepo=kubernetes ...
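A sketch of the repo file; the URLs are the Google-hosted yum repo that was current when this was written, so treat the exact paths as an assumption and check against the upstream install docs:

```
# sketch: /etc/yum.repos.d/kubernetes.repo (run as root)
cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=0
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
```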
Package installation
Given I want to play with different container runtimes I've decided to use cri-o, which allows exactly that. Fedora has packages. They are in a module though, so that must be enabled first.
The cri-o version should match the kubernetes version you want to run. That is not the case in my cluster right now because I've learned that after setting up the cluster, so obviously the sky isn't falling in case they don't match. The next time I update the cluster I'll bring them into sync.
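Enabling the module looks roughly like this; the version stream is just an example, pick the one matching your kubernetes version:

```
# see which cri-o streams fedora offers, then enable one (run as root)
dnf module list cri-o
dnf -y module enable cri-o:1.21    # 1.21 is an example, adjust as needed
```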
Now we can go install the packages from the fedora repos: cri-o, runc (the default container runtime), and a handful of useful utilities.
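Roughly like this; beyond cri-o and runc the exact utility list is my guess, but jq shows up later in this post, so it certainly belongs on it:

```
# container runtime packages plus jq for scripting (run as root)
dnf -y install cri-o runc jq
```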
Next in line are the kubernetes packages from the google repo. The repo has all versions, not only the most recent, so you can ask for the version you want and you'll get it. As mentioned above the repo must be enabled on the command line.
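A sketch of that install; the version is just an example, ask for whatever you actually want:

```
# enable the (otherwise disabled) kubernetes repo just for this transaction
dnf -y --enablerepo=kubernetes install \
    kubelet-1.21.3 kubeadm-1.21.3 kubectl-1.21.3    # example version
```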
Configure and start services
kubelet needs some configuration; my git repo with the config files has this:
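What it boils down to (see the next paragraph) is switching the kubelet to the systemd cgroup driver, so the file presumably looks something like this; the path is the standard sysconfig file used by the kubelet service, the exact flag is my assumption:

```
# sketch: extra arguments for the kubelet service (run as root)
cat > /etc/sysconfig/kubelet <<'EOF'
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
EOF
```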
Asking kubelet to delegate all cgroups work to systemd is needed to make kubelet work with cgroups v2. With that in place we can reload the configuration and start the services:
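Something along these lines should do it:

```
# pick up the new unit configuration, then start crio and kubelet (run as root)
systemctl daemon-reload
systemctl enable --now crio
systemctl enable --now kubelet
```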
Kubernetes cluster nodes need a few firewall entries so the nodes can speak to each other. I was too lazy to set up all that and just turned off the firewall. The cluster isn't reachable from the internet anyway.
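Which, assuming firewalld, is simply:

```
# only sane on a private network that is not reachable from the internet
systemctl disable --now firewalld
```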
Initialize the control plane node
All the preparation steps up to this point are the same for all cluster nodes. Now we go initialize the control plane node.
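A sketch of that init call, using the pod network discussed just below; run it as root on the control plane node:

```
# pull the control plane images and bring up the cluster
kubeadm init --pod-network-cidr=10.85.0.0/16
```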
Picked the 10.85.0.0/16 network because that happens to be the default network used by cri-o, see /etc/cni/net.d/100-crio-bridge.conf.
This command will take a while. It will pull kubernetes container images from the internet, start them using the kubelet service, and finally initialize the cluster.
kubeadm will write the config file needed to access the cluster with kubectl to /etc/kubernetes/admin.conf. It'll make you cluster root, which kubernetes calls the cluster-admin role in the rbac (role based access control) scheme.
For my devel cluster I simply use that file as-is instead of setting up some more advanced user authentication and access control. I place a copy of the file at $HOME/.kube/config (the default location used by kubectl). Copying the file to other machines works, so I can also run kubectl on my laptop or workstation instead of ssh'ing into the control plane node.
Time to run the first kubectl command to see whether everything worked:
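For example, listing the nodes and the control plane pods:

```
kubectl get nodes
kubectl get pods --all-namespaces
```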
Yay! First milestone.
Side note: single node cluster
By default kubeadm init adds a taint to the control plane node so kubernetes wouldn't schedule pods there:
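One way to inspect it; the taint key depends on the kubernetes release (node-role.kubernetes.io/master on older releases, node-role.kubernetes.io/control-plane on newer ones):

```
# show the taints set on the nodes
kubectl get nodes -o json | jq '.items[].spec.taints'
```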
If you want to go for a single node cluster, all you have to do is remove that taint so kubernetes will schedule and run your pods directly on your new and shiny control plane node. The magic words for that are:
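Most likely this; adjust the taint key to whatever the previous check showed:

```
# the trailing dash means "remove this taint"
kubectl taint nodes --all node-role.kubernetes.io/master-
```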
Done. You can start playing with the cluster now.
If you instead want to add one or more worker nodes to the cluster, then watch kubernetes distribute the load, read on ...
Initialize worker nodes
The worker nodes need a bootstrap token to authenticate when they want to join the cluster. The kubeadm init command creates a token and will also print the kubeadm join command needed to join. If you don't have that any more, no problem, you can always get the token later using kubeadm token list. In case the token did expire (they are valid for a day or so) you can create a new one using kubeadm token create. Beside the token kubeadm also needs the hostname and port to be used to connect to the control plane node. Default port for the kubernetes API is 6443, so ...
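... the join ends up looking roughly like this. The hostname is a placeholder for your control plane node, and skipping CA verification is simply the laziest option for a private test cluster; alternatively pass the --discovery-token-ca-cert-hash value printed by kubeadm init:

```
# run on each worker node (as root); control-plane.example.com is a placeholder
kubeadm join control-plane.example.com:6443 \
    --token <token-from-kubeadm> \
    --discovery-token-unsafe-skip-ca-verification
```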
... and check results:
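Again a kubectl call, for example:

```
kubectl get nodes -o wide
```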
The node may show up in "NotReady" state for a while when it has already registered but hasn't completed initialization yet.
Now repeat that procedure on every node you want to add to the cluster.
Side note: scripting kubernetes with json
Both kubeadm and kubectl can return the data you ask for in various formats. By default they print a nice, human-readable table to the terminal. But you can also ask for yaml, json and others using the -o or --output switch. Specifically, json is very useful for scripting: you can pipe the output through the jq utility (you might have noticed this in the list of packages to install at the start of this blog post) to fish out the items you actually need.
For starters two simple examples. You can get the raw bootstrap token this way:
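Something along these lines; the exact JSON layout of kubeadm token list may differ a bit between releases, so adjust the jq filter if needed:

```
# print just the token string
kubeadm token list -o json | jq -r .token
```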
Or check out some node details:
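For example, printing name, architecture and pod subnet of each node:

```
# one line per node: name, architecture, assigned pod subnet
kubectl get nodes -o json \
    | jq -r '.items[] | .metadata.name + " " + .status.nodeInfo.architecture + " " + .spec.podCIDR'
```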
There are way more possible use cases. When reading config and patch files kubectl likewise accepts both yaml and json as input.
Pod networking with flannel
There is one more basic thing to set up: install a network fabric to get the pod network going. This is needed to allow pods running on different cluster nodes to talk to each other. When running a single node cluster this can be skipped.
There are a bunch of different solutions out there; I've settled on flannel in "host-gw" mode. First download kube-flannel.yml from github. Then tweak the configuration: make sure the network matches the pod network passed to kubeadm init, and change the backend. Here are the changes I've made:
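One way to apply those two tweaks; the values being replaced (10.244.0.0/16 and vxlan) are flannel's stock defaults, so double check them against the yaml you actually downloaded:

```
# adjust net-conf.json inside kube-flannel.yml:
#   Network: must match the --pod-network-cidr passed to kubeadm init
#   Backend Type: host-gw instead of the default vxlan
sed -i \
    -e 's|"Network": "10.244.0.0/16"|"Network": "10.85.0.0/16"|' \
    -e 's|"Type": "vxlan"|"Type": "host-gw"|' \
    kube-flannel.yml
```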
Now apply the yaml file to install flannel:
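That is a single kubectl call:

```
kubectl apply -f kube-flannel.yml
```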
The flannel pods are created in the kube-system namespace, you can check the status this way:
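For example:

```
# one flannel pod per cluster node should show up
kubectl get pods -n kube-system -o wide | grep flannel
```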
Once all pods are up and running your pod network should be working. One nice thing with "host-gw" mode is that it uses the standard network routing of the cluster nodes, so you can inspect the state with standard linux tools:
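Plain ip route on any cluster node shows the picture described below:

```
# pod subnet of the local node hangs off cni0,
# pod subnets of the other nodes are routed via their node IPs
ip route show
```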
Each cluster node gets a /24 subnet of the pod network assigned. The cni0 device is the subnet of the local node. The other subnets are routed to the other cluster nodes. Pretty straightforward.
Rounding up
So, that's it for part one. The internet has tons of kubernetes tutorials and examples which you can try on the cluster now. One good starting point is Kubernetes by example.
My plan for part two of this article series is installing and configuring some useful cluster services, one of them being ingress, which is needed to access your cluster services with a web browser.