
K3Yes

Brock Henrie · Lead Software Engineer | CEO Spakl · 6 min read

When you run a homelab, managing deployments across multiple hosts starts to get hard.

I really like GitOps and having as much of my infrastructure in code as possible. Tools like Terraform and Ansible are great for that, but when it comes to just managing containers on single-node Docker hosts, I haven't found a great way of doing it.

Portainer is a great setup, but it didn't feel great managing files that I wanted to bind-mount into my containers.

I tried Ansible, but I had issues with it when removing labels I no longer wanted on a container.

It feels like a lot of overhead, but using K3s and ArgoCD was the best solution I found. Then again, what are we talking about? It's a homelab. Is there such a thing as too much overhead???

Absolutely Not!

I like to use my homelab to push my boundaries and learn new ways of doing things.

K3s

In my experience, I had the easiest time using K3s.

Cluster Services

I was able to take advantage of...

  • An HA Traefik deployment to act as a load balancer for my services
  • Cert Manager to manage certificates
    • Cloudflare
    • Private CA
  • ArgoCD to automate deployments using Git
  • Grafana Alloy to collect all my observability data
  • KubeVIP to make my kube-apiserver HA
  • MetalLB to provision virtual IPs

K3s

Lightweight Kubernetes. Easy to install, half the memory, all in a binary of less than 100 MB.

Traefik

Traefik is an open-source Application Proxy that makes publishing your services a fun and easy experience. It receives requests on behalf of your system and identifies which components are responsible for handling them, and routes them securely.

Cert Manager

cert-manager automates the management and issuance of TLS certificates from various issuing sources.

ArgoCD

ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes.

Grafana Alloy

Grafana Alloy collects and visualizes metrics, logs, and traces to provide complete observability.

MetalLB

MetalLB is a load-balancer implementation for bare metal Kubernetes clusters.

KubeVIP

KubeVIP provides virtual IP addresses for Kubernetes clusters to enable high availability of services.

Node Setup

To start out, provision 3 nodes.

I recommend 4 CPUs and at least 4 GB of RAM each.

We will be copying a config file and a script onto each one.

tip

I use my private DNS server, Technitium, to create DNS records for each node and for the KubeVIP IP.
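For reference, the records look something like this (the node IPs match the tls-san list below; the VIP address shown is just a placeholder, use whichever free IP on your LAN you give KubeVIP):

darknet.clstr.spakl      A   10.10.66.2     ; KubeVIP virtual IP (placeholder, pick your own)
n0.darknet.clstr.spakl   A   10.10.66.28    ; node 0
n1.darknet.clstr.spakl   A   10.10.66.103   ; node 1
n2.darknet.clstr.spakl   A   10.10.66.110   ; node 2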

k3s_config.yml

Let's create the config file for the cluster.

k3s_config.yml
# This file gets copied to /etc/rancher/k3s/config.yaml by the scripts below
cluster-init: true
etcd-expose-metrics: true
write-kubeconfig-mode: "0644"
tls-san:
- "127.0.0.1"
- "10.10.66.28" #IP Node 0
- "10.10.66.103" #IP Node 1
- "10.10.66.110" #IP Node 2
- "10.10.66.111" #IP future node
- "10.10.66.112" #IP future node

### Private DNS Names To be included on certs
- "darknet.clstr.spakl"
- "n0.darknet.clstr.spakl"
- "n1.darknet.clstr.spakl"
- "n2.darknet.clstr.spakl"
- "n3.darknet.clstr.spakl"
- "n4.darknet.clstr.spakl"
- "n5.darknet.clstr.spakl"

### servicelb and traefik are disabled; MetalLB and our own
### Traefik deployment (managed by ArgoCD) will replace them
disable:
- "servicelb"
- "traefik"

kubelet-arg:
- "containerd=/run/k3s/containerd/containerd.sock"
- "node-status-update-frequency=60s"

kube-apiserver-arg:
- "default-not-ready-toleration-seconds=30"
- "default-unreachable-toleration-seconds=30"

########### Optional OIDC args for the kube-apiserver
# - "oidc-issuer-url=https://auth.provider.io"
# - "oidc-client-id="
# - "oidc-username-claim=email"
# - "oidc-groups-claim=groups"

kube-controller-manager-arg:
- "node-monitor-period=60s"
- "node-monitor-grace-period=60s"
- "bind-address=0.0.0.0"

kube-scheduler-arg:
- "bind-address=0.0.0.0"

kube-proxy-arg:
- "metrics-bind-address=0.0.0.0"

bootstrap.sh

Now we need a bootstrap script.

bootstrap.sh
#!/usr/bin/env bash
K3S_VERSION=v1.30.6+k3s1


################## NODE INIT
### Increase Limits for FS (for grafana alloy and log collecting)
sudo sysctl fs.inotify.max_user_instances=1280
sudo sysctl fs.inotify.max_user_watches=655360
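## Note: sysctl set this way does not persist across reboots; add the
## values to a file under /etc/sysctl.d/ to make them permanent.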


## Move config file into correct spot
sudo mkdir -p /etc/rancher/k3s
sudo cp ./k3s_config.yml /etc/rancher/k3s/config.yaml

## Download and install cluster at Version
curl -sfL https://get.k3s.io \
| INSTALL_K3S_VERSION=${K3S_VERSION} sh -s -

## For Convenience
mkdir -p $HOME/k3s
sudo cp /var/lib/rancher/k3s/server/node-token $HOME/k3s/node-token
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/k3s/k3s.yml
sudo chmod 644 $HOME/k3s/node-token

## Done
echo "${HOSTNAME} is ready!"
sudo kubectl get nodes

To use this, copy the script and k3s_config.yml to the home directory of the node you are using.

Then run the script, as shown below.
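Something like this works from your workstation (assuming SSH access and the n0 DNS name from the tip earlier; swap in your own hostname):

# copy the two files up, then run the script on the node
scp k3s_config.yml bootstrap.sh n0.darknet.clstr.spakl:~/
ssh -t n0.darknet.clstr.spakl 'bash ./bootstrap.sh'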

It will generate a k3s folder with the kubeconfig and the node token.

We will need those!!

Copy the kubeconfig and change the server URL from 127.0.0.1 to the cluster's IP or hostname.
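For example (the source path comes from the k3s folder the script creates; the destination is just where I happen to keep kubeconfigs, and the hostname is from my DNS records):

# pull down the kubeconfig and point it at the cluster instead of localhost
scp n0.darknet.clstr.spakl:k3s/k3s.yml ~/.kube/darknet.yml
sed -i 's/127.0.0.1/darknet.clstr.spakl/' ~/.kube/darknet.yml
export KUBECONFIG=$HOME/.kube/darknet.yml
kubectl get nodes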

join.sh

Now we need to copy that identical config file to each new node, along with this script:

join.sh
#!/usr/bin/env bash
K3S_VERSION=v1.30.6+k3s1

K3S_URL="https://darknet.clstr.spakl:6443"
TOKEN=""

## Move config file into correct spot
sudo mkdir -p /etc/rancher/k3s
sudo cp ./k3s_config.yml /etc/rancher/k3s/config.yaml

## Increase Limits for FS (for grafana alloy and log collecting)
sudo sysctl fs.inotify.max_user_instances=1280
sudo sysctl fs.inotify.max_user_watches=655360


## Download and install cluster at Version
curl -sfL https://get.k3s.io \
| K3S_TOKEN=${TOKEN} INSTALL_K3S_VERSION=${K3S_VERSION} sh -s - server \
--server $K3S_URL


## For Convenience
mkdir -p $HOME/k3s
sudo cp /var/lib/rancher/k3s/server/node-token $HOME/k3s/node-token
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/k3s/k3s.yml
sudo chmod 644 $HOME/k3s/node-token

## Done
echo "${HOSTNAME} is ready!"
sudo kubectl get nodes

Update the K3S_URL and TOKEN vars:

K3S_URL="https://darknet.clstr.spakl:6443" 
TOKEN="<tokenvalue>"
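bootstrap.sh copied the node token to ~/k3s/node-token on the first node, so you can pull it from there (hostname assumed as before):

# fetch the join token generated on the first node
scp n0.darknet.clstr.spakl:k3s/node-token .
cat node-token   # paste this value into TOKEN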

Now run the script.

Validate Nodes

All nodes should pop up when you run:

kubectl get nodes

brock@minty ~/code/pltfrm/darknet/traefik (master) $ kubectl get nodes
NAME             STATUS   ROLES                       AGE    VERSION
darknet-node-0   Ready    control-plane,etcd,master   4d1h   v1.30.6+k3s1
darknet-node-1   Ready    control-plane,etcd,master   4d1h   v1.30.6+k3s1

Next Time

Keep posted! Now that we have a cluster up and running, we will set up some services on it!!!

More to come soon. We will deploy all of these:

brock@minty ~/code/pltfrm/darknet/traefik (master) $ kubectl get deployments --all-namespaces

NAMESPACE        NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
argocd           argocd-applicationset-controller    1/1     1            1           3d23h
argocd           argocd-dex-server                   1/1     1            1           3d23h
argocd           argocd-notifications-controller     1/1     1            1           3d23h
argocd           argocd-redis                        1/1     1            1           3d23h
argocd           argocd-repo-server                  1/1     1            1           3d23h
argocd           argocd-server                       1/1     1            1           3d23h
cert-manager     cert-manager                        3/3     3            3           3d20h
cert-manager     cert-manager-cainjector             1/1     1            1           3d20h
cert-manager     cert-manager-webhook                1/1     1            1           3d20h
kube-system      coredns                             1/1     1            1           4d1h
kube-system      headlamp                            1/1     1            1           3d1h
kube-system      local-path-provisioner              1/1     1            1           4d1h
kube-system      metrics-server                      1/1     1            1           4d1h
metallb-system   controller                          1/1     1            1           4d1h
traefik          traefik                             1/1     1            1           3d20h

brock@minty ~/code/pltfrm/darknet/traefik (master) $ kubectl get ds --all-namespaces

NAMESPACE        NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
alloy            alloy         2         2         2       2            2           <none>                   2d23h
kube-system      kube-vip-ds   2         2         2       2            2           <none>                   4d1h
metallb-system   speaker       2         2         2       2            2           kubernetes.io/os=linux   4d1h