
K8s


I decided to install a Kubernetes (K8s for short) cluster in my homelab, just to be able to mess around with it as I please. By setting it up myself I also learn its architecture and its benefits, as well as its limitations and where cloud providers make your life easier for a certain cost…

Because I only find time in the evenings it took me quite a while, but the work can be broken down into several milestones that are manageable in short time slots. This way it was always fun!

Some useful commands

To delete something previously deployed with kubectl apply -f some-file.yml:

kubectl delete -f test-replica-set.yml
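For reference, a minimal manifest like the test-replica-set.yml above could look as follows — the name, labels, and image are just placeholders I picked for illustration:

```yaml
# Hypothetical minimal ReplicaSet manifest (test-replica-set.yml);
# name, labels, and image are placeholder assumptions.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: test-replica-set
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: web
          image: nginx:alpine
```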

Persistent Volumes

I will provide persistent volumes from my TrueNAS server via NFS. I used the official K8s documentation and another blog as a reference1.

Add the NFS subdir external provisioner Helm repository:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

kubectl create ns nfs-provisioner

NFS_SERVER=192.168.x.y
NFS_EXPORT_PATH=/mnt/sysdataset/k8s-nfs-1

helm -n nfs-provisioner install nfs-provisioner-01 nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=$NFS_SERVER \
    --set nfs.path=$NFS_EXPORT_PATH \
    --set storageClass.defaultClass=true \
    --set replicaCount=1 \
    --set storageClass.name=nfs-01 \
    --set storageClass.provisionerName=nfs-provisioner-01
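To check that the provisioner actually works, one can create a small test claim against the nfs-01 storage class and see whether it gets bound — the claim name and size here are just examples:

```yaml
# Hypothetical test PVC for the nfs-01 storage class; apply it with
# kubectl apply -f, then verify with kubectl get pvc that its
# STATUS becomes "Bound".
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-claim
spec:
  storageClassName: nfs-01
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
```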

Theoretically that should have worked right away. But as always, it did not -.-

Troubleshooting

First, the pods could not mount the volumes because nfs-common was missing on each worker node. After installing it, the volumes mounted fine.

The next problem was container related. I tried to install MongoDB, and my worker node VMs were missing a CPU instruction (the default Proxmox CPU type lacks some newer instruction sets). So I changed the processor type of the worker nodes in Proxmox to “host”. Finally, everything worked as expected.

Ingress

The ingress controller is mapped to the NodePort range because of the bare metal setup. One can change that if it matters and the nodes should expose ports 80 and 443 directly; see the documentation: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#over-a-nodeport-service

  • 80:32355/TCP http
  • 443:32748/TCP https

My router resolves the host names of my VMs, like hostname.tobisyurt.home. So in my case I can access the ingress on node 1 like this: http://k8s-node-1.tobisyurt.home:32355/comments/ For me that is good enough for testing purposes, especially because I will map it through my external load balancer, so the port numbers do not matter anyway.

External Load Balancer

I simply used my NGINX reverse proxy / WAF, which I already use for everything else, to route traffic to the cluster.

I recommend reading up on this in the NGINX documentation, but this is how I set it up:

I defined the cluster as an upstream in nginx.conf:

upstream k8s {
    server k8s-node-1.tobisyurt.home:32355;
    server k8s-node-2.tobisyurt.home:32355;
    server k8s-node-3.tobisyurt.home:32355;
}

The default load balancing method is round robin, which is what I want in this case because the application is stateless. If the app is stateful and the replicas do not share all state (for example, the deployment has no shared session storage), you can set ‘ip_hash’ instead. Normally you would also want to configure a proper health check.
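Open-source NGINX only offers passive health checks, but the upstream servers can at least be tuned so that a failing node is taken out of rotation for a while — the values here are just example numbers:

```nginx
upstream k8s {
    # passive health check: after 3 failed attempts,
    # skip this node for 30 seconds
    server k8s-node-1.tobisyurt.home:32355 max_fails=3 fail_timeout=30s;
    server k8s-node-2.tobisyurt.home:32355 max_fails=3 fail_timeout=30s;
    server k8s-node-3.tobisyurt.home:32355 max_fails=3 fail_timeout=30s;
}
```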

In the vhost config it looks as follows:

location /example {
#       proxy_set_header Host k8s-node-x.tobisyurt.home;
        include snippets/proxy-params.conf;
        proxy_pass http://k8s;
        access_log /var/log/nginx/k8s-test.access.log json_analytics;
        error_log /var/log/nginx/k8s-test.error.log info;
}

It is important to make sure the Host header corresponds to the host in the ingress configuration. Otherwise, the requests will not get forwarded to the right apps in your k8s cluster.
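For illustration, a hypothetical ingress rule whose host must match the Host header the proxy sends — the host, service name, and path are made up:

```yaml
# Hypothetical Ingress resource; the host must match the Host
# header forwarded by the external load balancer.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: example.tobisyurt.home   # must match the proxied Host header
      http:
        paths:
          - path: /example
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
```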

Pull images from private registries

You need to create a K8s secret for the private registry to enable your cluster to pull images from it2. Here is an example of how to generate a secret named regcred for a private registry:

kubectl create secret docker-registry regcred \
    --docker-server=<your-registry-server> \
    --docker-username=<your-name> \
    --docker-password=<your-pword> \
    --docker-email=<your-email>
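The secret is then referenced in the pod spec via imagePullSecrets — the deployment, registry, and image names below are placeholders:

```yaml
# Hypothetical deployment pulling from a private registry;
# imagePullSecrets references the regcred secret created above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: private-app
  template:
    metadata:
      labels:
        app: private-app
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: app
          image: registry.example.com/private-app:latest
```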