K3S Cluster with Debian and Raspberry Pi

j3ffyang
6 min read · Nov 11, 2021


Objective

  • Hybrid Kubernetes — Build a K3S cluster to manage thousands of Raspberry Pi devices, which run on the arm64 architecture
  • Distribute and manage applications, such as MySQL and Kafka, across the K3S cluster
  • CI/CD — Automatically detect code changes on GitHub (or alternatively GitLab) and distribute them to each Pi device in the cluster, rolling back if a distribution fails

Environment

Software Spec

Hardware Specification on Raspberry Pi

pi@raspberrypi:~ $ cat /proc/cpuinfo | tail -4
Hardware        : BCM2835
Revision        : c03111
Serial          : 10000000abf8a439
Model           : Raspberry Pi 4 Model B Rev 1.1
pi@raspberrypi:~ $ cat /proc/cpuinfo | grep processor
processor : 0
processor : 1
processor : 2
processor : 3
pi@raspberrypi:~ $ cat /proc/meminfo | grep MemTotal
MemTotal: 3889380 kB

Raspberry Pi Images

Download >
https://downloads.raspberrypi.org/raspios_arm64/images/
https://downloads.raspberrypi.org/raspios_arm64/images/raspios_arm64-2020-05-28/2020-05-27-raspios-buster-arm64.zip

pi@raspberrypi:~ $ uname -a
Linux raspberrypi 5.4.42-v8+ #1319 SMP PREEMPT Wed May 20 14:18:56 BST 2020 aarch64 GNU/Linux
pi@raspberrypi:~ $

Network Architecture

This diagram is generated with PlantUML, drawn in the Atom editor. Its PlantUML source code is attached in the Appendix below.

K3S

https://k3s.io/

Install on Master

  • Install
curl -sfL https://get.k3s.io | sh -   # standard

# or install a specific version
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" \
  INSTALL_K3S_VERSION="v1.21.0-rc1+k3s1" sh -s -

Bug > https://github.com/fluxcd/kustomize-controller/issues/320

  • ~/.kube/config
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config

If the master node's IP changes, update the server address in ~/.kube/config accordingly.
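For example, the copied config points at https://127.0.0.1:6443 by default; a minimal sketch to re-point it, where the new IP is hypothetical:

# 10.165.73.99 is a hypothetical new master IP; adjust to your environment
sed -i 's/127.0.0.1/10.165.73.99/' ~/.kube/config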

Install on Worker

  • Edit /etc/hosts if using a hostname instead of an IP
pi@raspberrypi:~ $ cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 raspberrypi
10.165.73.100 myk3s
  • Find out the token for the worker node (read it on the master)
cat /var/lib/rancher/k3s/server/node-token
K1040447c1579e6094c1b3c82664681322ef521cc08b29dc62a9356af1f6bdb4bac::server:d57783db3c0b52d6ffe24becc08d48fc
  • Install with K3S_URL and K3S_TOKEN
curl -sfL https://get.k3s.io | K3S_URL=https://myk3s:6443 \
K3S_TOKEN=K1040447c1579e6094c1b3c82664681322ef521cc08b29dc62a9356af1f6bdb4bac::server:d57783db3c0b52d6ffe24becc08d48fc \
sh -
  • Add cgroup_memory=1 cgroup_enable=memory to /boot/cmdline.txt, then reboot (or restart the k3s service)
pi@raspberrypi:~ $ cat /boot/cmdline.txt
console=serial0,115200 console=tty1 root=PARTUUID=a2d2d5f6-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles cgroup_memory=1 cgroup_enable=memory
  • Inspect cluster
debian@debian:/etc/bash_completion.d$ k get node
NAME          STATUS   ROLES                  AGE   VERSION
raspberrypi   Ready    <none>                 85m   v1.20.7+k3s1
debian        Ready    control-plane,master   47h   v1.20.7+k3s1
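The Pi worker reports ROLES <none>, which is normal for an agent. A role label can be added if you want one displayed (the role name is purely cosmetic):

kubectl label node raspberrypi node-role.kubernetes.io/worker=worker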

(optional) kubectl AutoComplete

echo 'source <(kubectl completion bash)' >> ~/.bashrc
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -F __start_kubectl k' >> ~/.bashrc

Uninstall

To uninstall K3s from the server node, run:

/usr/local/bin/k3s-uninstall.sh

To uninstall K3s from an agent node, run:

/usr/local/bin/k3s-agent-uninstall.sh

ArgoCD

https://argoproj.github.io/cd/

Architecture

image credit > https://argo-cd.readthedocs.io/en/stable/assets/argocd_architecture.png

Installation

kubectl create namespace argocd
kubectl apply -n argocd -f \
https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
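The manifests pull several container images, which can take a while on a slow link; watching the namespace until every pod is Running confirms the install:

kubectl -n argocd get pods -w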

Install argocd binary

VERSION=$(curl --silent "https://api.github.com/repos/argoproj/argo-cd/releases/latest" | \
  grep '"tag_name"' | sed -E 's/.*"([^"]+)".*/\1/')
sudo curl -sSL -o /usr/local/bin/argocd \
  https://github.com/argoproj/argo-cd/releases/download/$VERSION/argocd-linux-amd64
sudo chmod +x /usr/local/bin/argocd

Ref > https://argo-cd.readthedocs.io/en/stable/cli_installation/

Update admin Password

Capture the initial admin password:

kubectl -n argocd get secrets argocd-initial-admin-secret \
-o jsonpath="{.data.password}" | base64 -d && echo
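Once logged in through the CLI (see Login through CLI below), the admin password can be changed with the argocd binary:

argocd account update-password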

Or reset

Reference > https://github.com/argoproj/argo-cd/blob/master/docs/faq.md#i-forgot-the-admin-password-how-do-i-reset-it

Auto-Completion

Reference > https://argoproj.github.io/argo-cd/user-guide/commands/argocd_completion/

Append the following to ~/.bashrc

source <(kubectl completion bash)
alias k=kubectl
complete -F __start_kubectl k
. <(flux completion bash)
. <(argocd completion bash)

Network Routing

  • Patch the argocd-server Service to type LoadBalancer to expose it over the network
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
  • Port-Forwarding for localhost
kubectl port-forward svc/argocd-server -n argocd 8080:443
  • Traefik Ingress

https://argoproj.github.io/argo-cd/operator-manual/ingress/#traefik-v22
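As a sketch only, assuming K3S's bundled Traefik v2 serves ingress and using a hypothetical hostname argocd.example.com (the linked guide is authoritative):

kubectl apply -f - <<'EOF'
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: argocd-server
  namespace: argocd
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      # argocd.example.com is a placeholder hostname
      match: Host(`argocd.example.com`)
      services:
        - name: argocd-server
          port: 80
EOF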

Login through CLI

argoPass=$(kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d)

debian@debian:~$ argocd login --insecure --grpc-web localhost:8080 --username admin --password $argoPass
'admin:login' logged in successfully
Context 'localhost:8080' updated

Login through GUI without VPN over internet

Since the K3S master (and ArgoCD) runs in Japan, I want to log in to ArgoCD's GUI locally without having a VPN. I can simply use SSH port-forwarding.

  • Find out the argocd-server exposed port
kubectl -n argocd get svc | grep "argocd-server"
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
argocd-server   LoadBalancer   10.43.120.17   <pending>     80:32761/TCP,443:32551/TCP   7d20h
  • Run SSH port-forwarding locally, which forwards port 32761 on the k3s_master to port 8080 on localhost. No proxy configuration is required in the browser.
ssh -v -L8080:localhost:32761 az_jpn

Note: az_jpn is the SSH host alias of the k3s_master in Japan, as sketched below.
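The alias itself lives in ~/.ssh/config; a minimal sketch, with a hypothetical host address and user:

Host az_jpn
    HostName <k3s_master_public_ip>
    User debian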

Create an App

argocd app create example-app --repo https://github.com/j3ffyang/docker-development-youtube-series.git \
--path argo/example-app --dest-server https://kubernetes.default.svc \
--dest-namespace example-app

Courtesy of example-app from https://github.com/marcel-dempers/docker-development-youtube-series

List an App

argocd app list

Sync an App

argocd app sync example-app
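To serve the CI/CD objective above, syncing can also be automated, so that ArgoCD applies every new commit, prunes removed resources, and reverts manual drift:

argocd app set example-app --sync-policy automated --auto-prune --self-heal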

Tutorial

https://redhat-scholars.github.io/argocd-tutorial/argocd-tutorial/index.html
https://dev.to/dizveloper/gitops-w-argocd-a-tutorial-7o4

Reference >
https://medium.com/@toja/argocd-are-you-gonna-flux-my-way-e6b571f41a57
https://medium.com/hootsuite-engineering/using-gitops-argocd-to-ship-kubernetes-changes-faster-at-hootsuite-4d35628a3fb7

Troubleshooting

The flux-system namespace stays in Terminating forever when deleted

k get namespaces flux-system -o json > flux-system.json

Edit flux-system.json and empty the finalizers list:

"spec": {
  "finalizers": []
},
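Then push the edited spec through the namespace's finalize subresource so that Kubernetes can complete the deletion:

kubectl replace --raw "/api/v1/namespaces/flux-system/finalize" -f ./flux-system.json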

Launch busybox for testing

kubectl run -i -t busybox --image=busybox --restart=Never

Check DNS Status

kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup www.google.com

App sync fails with ComparisonError: rpc error: code = DeadlineExceeded desc = context deadline exceeded

https://github.com/argoproj/argo-cd/issues/3864
https://github.com/argoproj/argo-cd/issues/1641

Wireguard VPN

This VPN provides an encrypted network between the K3S master and the Raspberry Pi devices across the public internet.

Reference > https://linuxize.com/post/how-to-set-up-wireguard-vpn-on-ubuntu-20-04/
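As a sketch of the topology only, using the 172.16.0.0/24 subnet from the network diagram (keys and peer addresses are placeholders; follow the linked guide for a working setup), the server-side /etc/wireguard/wg0.conf would resemble:

[Interface]
Address = 172.16.0.1/24
ListenPort = 51820
PrivateKey = <server_private_key>

[Peer]
# one [Peer] block per Raspberry Pi
PublicKey = <pi_public_key>
AllowedIPs = 172.16.0.2/32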

Appendix

Flux2

Architecture

Flux works with Kubernetes' role-based access control (RBAC), so you can lock down what any particular sync can change. It can send notifications to Slack and similar systems when configuration is synced and ready, and it can receive webhooks that tell it when to sync.

Reference > https://fluxcd.io/docs/

Image > https://github.com/fluxcd/flux2/raw/main/docs/_files/gitops-toolkit.png

Reference > https://github.com/fluxcd/flux2

Install

curl -s https://fluxcd.io/install.sh | sudo bash

debian@debian:~$ export GITHUB_USER=j3ffyang
debian@debian:~$ export GITHUB_TOKEN=xxx_xxx
debian@debian:~$ flux bootstrap github --owner=$GITHUB_USER --repository=fleet-infra \
  --branch=main --path=./clusters/my-cluster --personal
► connecting to github.com
► cloning branch "main" from Git repository "https://github.com/j3ffyang/fleet-infra.git"
✔ cloned repository
► generating component manifests
✔ generated component manifests
✔ component manifests are up to date
► installing toolkit.fluxcd.io CRDs
◎ waiting for CRDs to be reconciled
✔ CRDs reconciled successfully
► installing components in "flux-system" namespace
✔ installed components
✔ reconciled components
► determining if source secret "flux-system/flux-system" exists
✔ source secret up to date
► generating sync manifests
✔ generated sync manifests
✔ sync manifests are up to date
► applying sync manifests
✔ reconciled sync configuration
◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
✗ context deadline exceeded
► confirming components are healthy
✔ source-controller: deployment ready
✔ kustomize-controller: deployment ready
✔ helm-controller: deployment ready
✔ notification-controller: deployment ready
✔ all components are healthy
✗ bootstrap failed with 1 health check failure(s)
debian@debian:~$ k -n flux-system get all
NAME                                           READY   STATUS    RESTARTS   AGE
pod/source-controller-85fb864746-6mhz8        1/1     Running   0          14m
pod/kustomize-controller-6977b8cdd4-4gfgb     1/1     Running   0          14m
pod/notification-controller-5c4d48f476-9llnj   1/1     Running   0          14m
pod/helm-controller-85bfd4959d-mcjtv           1/1     Running   0          14m

NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/notification-controller   ClusterIP   10.43.165.100   <none>        80/TCP    14m
service/source-controller         ClusterIP   10.43.99.87     <none>        80/TCP    14m
service/webhook-receiver          ClusterIP   10.43.166.175   <none>        80/TCP    14m

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/source-controller         1/1     1            1           14m
deployment.apps/kustomize-controller      1/1     1            1           14m
deployment.apps/notification-controller   1/1     1            1           14m
deployment.apps/helm-controller           1/1     1            1           14m

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/source-controller-85fb864746         1         1         1       14m
replicaset.apps/kustomize-controller-6977b8cdd4      1         1         1       14m
replicaset.apps/notification-controller-5c4d48f476   1         1         1       14m
replicaset.apps/helm-controller-85bfd4959d           1         1         1       14m
debian@debian:~$ flux --version
flux version 0.13.4
debian@debian:~$ flux check
► checking prerequisites
✗ flux 0.13.4 <0.14.0 (new version is available, please upgrade)
✔ kubectl 1.20.7+k3s1 >=1.18.0-0
✔ Kubernetes 1.20.7+k3s1 >=1.16.0-0
► checking controllers
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v0.12.2
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v0.12.0
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v0.13.0
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.10.1
✔ all checks passed
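After bootstrap, further repositories can be wired in with the flux CLI; a minimal sketch, where the repository URL and path are hypothetical:

# watch a Git repository for changes
flux create source git example-app \
  --url=https://github.com/j3ffyang/example-repo --branch=main
# reconcile the manifests under ./deploy every 10 minutes
flux create kustomization example-app \
  --source=example-app --path=./deploy --prune=true --interval=10m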

Diagram source code

Enable plantUML in Atom Editor > https://github.com/j3ffyang/container_and_devops/blob/master/docs/20200317_plantuml.md

@startuml
package dataCenter_in_japan {
  collections wireguard_server
  collections k3s_master
  collections argocd_server
  note right of wireguard_server: 172.16.0.0/24
  note right of k3s_master: 10.43.0.0/8
  wireguard_server -[hidden]- k3s_master
  k3s_master -[hidden]- argocd_server
}

cloud internet

package raspberrypi_in_korea {
  collections wireguard_client
  collections k3s_worker
  wireguard_client -[hidden]- k3s_worker
}

package github {
  collections private_repo
}

dataCenter_in_japan - internet
internet - raspberrypi_in_korea
internet -- github
@enduml
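Saved as, say, k3s_network.puml, the diagram can also be rendered outside Atom, assuming the plantuml CLI is installed:

plantuml k3s_network.puml    # writes k3s_network.png next to the source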
