
Node Manager

As a homelab grows, it gets hard to manage deployments across multiple hosts.

I really like using GitOps and keeping as much of my infrastructure in code as possible. Tools like Terraform and Ansible are great for that, but when it comes to just managing containers on single-node Docker hosts, I haven't found an approach I love.

Portainer is a great setup, but it never felt great for managing files that I wanted to bind-mount into my containers.

I tried Ansible, but I ran into issues with how it handled removing labels I no longer wanted on a container.

It feels like a lot of overhead, but using K3s and ArgoCD ended up being the best solution. But wait, this is a homelab. Is that too much overhead?

Absolutely Not!

I like to use my homelab to push my boundaries and learn new ways of doing things.

K3s

In my experience, k3s was the easiest to get running.

Repo Files

Cluster Services

With this stack I was able to take advantage of:

  • HA Traefik deployment to act as a load balancer for my services.
  • Cert Manager to manage certificates
    • Cloudflare
    • Private CA
  • ArgoCD to automate deployments from Git
  • Grafana Alloy to collect all my observability data.
  • KubeVIP to make my Kubernetes API server HA
  • MetalLB to provision virtual IPs.

Traefik

Traefik is an open-source Application Proxy that makes publishing your services a fun and easy experience. It receives requests on behalf of your system and identifies which components are responsible for handling them, and routes them securely.
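To give a feel for how a service gets exposed through Traefik, here is a minimal IngressRoute sketch; the whoami service, hostname, and TLS secret name are placeholder assumptions, not from my actual setup:

```yaml
# Routes HTTPS traffic for one hostname to a Service in the cluster.
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: whoami              # hypothetical example service
  namespace: default
spec:
  entryPoints:
    - websecure             # Traefik's HTTPS entrypoint
  routes:
    - match: Host(`whoami.hlab.clstr.spakl`)
      kind: Rule
      services:
        - name: whoami      # Service to route to
          port: 80
  tls:
    secretName: whoami-tls  # cert issued by cert-manager
```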

K3s

Lightweight Kubernetes. Easy to install, half the memory, all in a binary of less than 100 MB.

Cert Manager

cert-manager automates the management and issuance of TLS certificates from various issuing sources.
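As a sketch of how cert-manager plugs into Cloudflare for DNS-01 challenges, a ClusterIssuer might look like this; the email, issuer name, and secret names are assumptions for illustration:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cloudflare-issuer        # hypothetical name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com       # replace with your email
    privateKeySecretRef:
      name: cloudflare-issuer-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token  # Secret holding a Cloudflare API token
              key: api-token
```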

ArgoCD

ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes.
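The GitOps loop works by pointing an ArgoCD Application at a path in a Git repo and letting it keep the cluster synced to what's there. A minimal sketch, with the repo URL and paths as placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: traefik
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/you/homelab.git  # your GitOps repo (placeholder)
    targetRevision: main
    path: traefik                                # folder containing the manifests
  destination:
    server: https://kubernetes.default.svc       # the local cluster
    namespace: traefik
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift
```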

Grafana Alloy

Grafana Alloy collects metrics, logs, and traces and forwards them to your observability backends.

MetalLB

MetalLB is a load-balancer implementation for bare metal Kubernetes clusters.
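For reference, provisioning virtual IPs with MetalLB in layer 2 mode takes two small resources: a pool of addresses and an advertisement for it. The address range below is a placeholder assumption:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.10.4.210-10.10.4.230   # example range for LoadBalancer services
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool
```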

KubeVIP

KubeVIP provides virtual IP addresses for Kubernetes clusters to enable high availability of services.

Node Setup

To start out, provision 3 nodes.

I recommend 4 CPUs and at least 4 GB of RAM each.

We will copy a config file and a script onto each one.

tip

I use my private DNS server, Technitium, to create DNS records for each node and for the KubeVIP IP.

k3s_config.yml

Let's create the config file for the cluster.

k3s_config.yml
# A config.yaml file is created at /etc/rancher/k3s/config.yaml
cluster-init: true
etcd-expose-metrics: true
write-kubeconfig-mode: "0644"
tls-san:
- "127.0.0.1"
- "10.10.4.200" #IP Node 0
- "10.10.4.201" #IP Node 1
- "10.10.4.205" # KubeVIP IP

### Private DNS Names To be included on certs
- "hlab.clstr.spakl"
- "n0.hlab.clstr.spakl"
- "n1.hlab.clstr.spakl"
- "n2.hlab.clstr.spakl"
- "n3.hlab.clstr.spakl"
- "n4.hlab.clstr.spakl"
- "n5.hlab.clstr.spakl"

disable:
- "servicelb"
- "traefik"

kubelet-arg:
- "containerd=/run/k3s/containerd/containerd.sock"
- "node-status-update-frequency=60s"

kube-apiserver-arg:
- "default-not-ready-toleration-seconds=30"
- "default-unreachable-toleration-seconds=30"

########### Optional
# - "oidc-issuer-url=https://auth.provider.io"
# - "oidc-client-id="
# - "oidc-username-claim=email"
# - "oidc-groups-claim=groups"

kube-controller-manager-arg:
- "node-monitor-period=60s"
- "node-monitor-grace-period=60s"
- "bind-address=0.0.0.0"

kube-scheduler-arg:
- "bind-address=0.0.0.0"

kube-proxy-arg:
- "metrics-bind-address=0.0.0.0"

bootstrap.sh

Now we need a bootstrap script.

bootstrap.sh
#!/usr/bin/env bash
K3S_VERSION=v1.30.6+k3s1


################## NODE INIT
### Increase Limits for FS (for grafana alloy and log collecting)
sudo sysctl fs.inotify.max_user_instances=1280
sudo sysctl fs.inotify.max_user_watches=655360


## Move config file into correct spot
sudo mkdir -p /etc/rancher/k3s
sudo cp ./k3s_config.yml /etc/rancher/k3s/config.yaml

## Download and install cluster at Version
curl -sfL https://get.k3s.io \
| INSTALL_K3S_VERSION=${K3S_VERSION} sh -s -

## For Convenience
mkdir -p $HOME/k3s
sudo cp /var/lib/rancher/k3s/server/node-token $HOME/k3s/node-token
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/k3s/k3s.yml
sudo chmod 644 $HOME/k3s/node-token

## Done
echo "${HOSTNAME} is ready!"
sudo kubectl get nodes

To use this, simply copy the script and k3s_config.yml to the home directory of the first node.

Then run the script.

It will generate a k3s folder containing the kubeconfig and the node token.

We will need those!

Copy the kubeconfig and change the server URL from 127.0.0.1 to the cluster's IP or hostname.
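A quick way to do that with sed, assuming the kubeconfig was copied to ~/k3s/k3s.yml as in the bootstrap script:

```shell
# Point the copied kubeconfig at the cluster hostname instead of localhost
sed -i 's|https://127.0.0.1:6443|https://hlab.clstr.spakl:6443|' ~/k3s/k3s.yml

# Use it for kubectl from your workstation
export KUBECONFIG=~/k3s/k3s.yml
kubectl get nodes
```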

join.sh

Now copy that identical config file to the new node, along with this script:

join.sh
#!/usr/bin/env bash
K3S_VERSION=v1.30.6+k3s1

K3S_URL="https://hlab.clstr.spakl:6443"
TOKEN=""

## Move config file into correct spot
sudo mkdir -p /etc/rancher/k3s
sudo cp ./k3s_config.yml /etc/rancher/k3s/config.yaml

## Increase Limits for FS (for grafana alloy and log collecting)
sudo sysctl fs.inotify.max_user_instances=1280
sudo sysctl fs.inotify.max_user_watches=655360


## Download and install cluster at Version
curl -sfL https://get.k3s.io \
| K3S_TOKEN=${TOKEN} INSTALL_K3S_VERSION=${K3S_VERSION} sh -s - server \
--server $K3S_URL


## For Convenience
mkdir -p $HOME/k3s
sudo cp /var/lib/rancher/k3s/server/node-token $HOME/k3s/node-token
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/k3s/k3s.yml
sudo chmod 644 $HOME/k3s/node-token

## Done
echo "${HOSTNAME} is ready!"
sudo kubectl get nodes

Update the K3S_URL and TOKEN variables:

K3S_URL="https://hlab.clstr.spakl:6443" 
TOKEN="<tokenvalue>"

Now run the script.

Validate Nodes

All nodes should show up when you run:

kubectl get nodes

brock@minty ~/code/pltfrm/hlab/traefik (master) $ kubectl get nodes
NAME          STATUS   ROLES                       AGE    VERSION
hlab-node-0   Ready    control-plane,etcd,master   4d1h   v1.30.6+k3s1
hlab-node-1   Ready    control-plane,etcd,master   4d1h   v1.30.6+k3s1