Setting Up a Raspberry Pi Kubernetes Cluster as a Home Server

Published on 3/7/2025 · 13 min read

This article explains how to set up a Raspberry Pi Kubernetes cluster as a home server, using k3s together with Longhorn for persistent volumes and MetalLB + Traefik to expose services locally and publicly. As a disclaimer, this is not the fastest and cheapest way to get a home server up and running. For that, you would be better off reusing an old laptop or desktop computer. However, if you enjoy tinkering with hardware and software, and are willing to learn Kubernetes, then this project is for you.

Motivation

My motivation for this project was primarily educational. I wanted to learn more about Kubernetes and how to set up and manage a cluster. I also wanted a home server that I could use to host a few services. I chose to use Raspberry Pis because:

Although a Kubernetes cluster is overkill for a home server, it has some pretty nice advantages:

Other nice features, though not essential for a home server, include:

There are some constraints to bear in mind:

With all that said, let’s get started!

Hardware

RPi cluster

In this project, I used the following hardware (mostly because that’s what I had lying around):

A case to keep everything tidy is also a good idea. A stackable one is a cheap and handy option, although it will only take care of your RPis: the switch, power supply and hard drives will still be dangling around… I zip-tied the network switch and the power supply to the base of the RPi stack to make it (slightly) tidier.

In all fairness, this project would be expensive if you had to buy everything from scratch. Yet you don’t need all that: one or two RPis are enough, or an old computer.

Setting up the cluster

Preparing the nodes

For each node, you will need to:

  1. flash the OS, either on the eMMC or on the microSD card. To flash the eMMC, you can follow the nice tutorial from Jeff Geerling. I chose to go with Raspbian, flashed with the Raspberry Pi Imager, since it conveniently allows you to put your SSH key directly on the system, so you can seamlessly connect to your RPis without needing to plug a keyboard and a display into them. You can also directly set a hostname and a user.
  2. ssh to the node, and make some updates:
sudo apt-get update && sudo apt-get upgrade
  3. set up a static IP address. You can do this by editing the /etc/dhcpcd.conf file:
interface eth0
static ip_address=<your_ip>/24
static routers=<your_router_ip>
static domain_name_servers=<your_dns>

Make sure you also exclude these IPs from your router’s DHCP range, so they don’t get assigned to other devices. This step isn’t mandatory, but strongly advised since it makes it easier to connect to your RPis. Moreover, depending on what solution you want to use to manage persistent volumes in Kubernetes, it is actually necessary. More on that very soon.

  4. install Docker. You can follow the official guide, or simply run:
curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh
sudo usermod -aG docker <user_name>
sudo systemctl enable docker # to enable docker on start
rm get-docker.sh
  5. edit /boot/cmdline.txt to enable the cgroup features containers need, and reboot:
sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1/' /boot/cmdline.txt
sudo reboot
  6. if you also use external hard drives, make sure they are automatically mounted at boot:
# list them
sudo fdisk -l
# format it
sudo mkfs -t ext4 <device>
# get its UUID
sudo blkid <device>
# create a mount point for it
sudo mkdir -p <path>

Then add "UUID=<UUID> <path> ext4 defaults,noatime 0 2" to /etc/fstab.

We are now ready to install Kubernetes on the nodes.

Optional steps

A few complementary steps you can take:

sudo rfkill block bluetooth # disable Bluetooth if you don't use it
sudo rfkill block wifi # disable Wi-Fi, since the nodes are wired to the switch
sudo apt-get install unattended-upgrades # install security updates automatically

Installing Kubernetes

We will use k3s, a lightweight Kubernetes distribution. It is remarkably easy to install. One node will need to have the master role, while all others will be workers. On the master node, run:

curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="server --disable traefik --disable servicelb" sh -s -

Note that we disable traefik because we will (optionally) install our own version of it. We also disable servicelb because we will use metallb instead. More on that later.

On the worker nodes, simply run:

curl -sfL https://get.k3s.io | K3S_URL=https://<master IP>:6443 K3S_TOKEN="<k3s token>" sh -

where <master IP> is the IP of the master node, and <k3s token> is the token generated on the master node after the installation.
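If you are unsure where to find that token, k3s writes it to a file on the master node, so you can print it from there:

# run on the master node: this is the value to pass as K3S_TOKEN
sudo cat /var/lib/rancher/k3s/server/node-token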

Congrats! You now have a Kubernetes cluster up and running! To interact with it, you can copy the kubeconfig file from the master node by running (from your local machine):

scp <user>@<master_ip>:/etc/rancher/k3s/k3s.yaml ~/.kube/config
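Note that the kubeconfig generated by k3s points at 127.0.0.1, so you will need to replace that address with the master node's IP, either by editing ~/.kube/config manually or with a quick sed (GNU sed syntax shown here):

# make the kubeconfig target the master node instead of localhost
sed -i 's/127.0.0.1/<master_ip>/' ~/.kube/config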

You will need to install kubectl on your local machine to interact with the cluster. You can follow the official guide. Once this is done, you can check the status of the cluster with:

kubectl get nodes

Managing persistent volumes

If you want to use persistent volumes in your cluster, you will need to set up a storage solution across your nodes. I chose to use Longhorn, because you can easily set it up to mirror your volumes, a pretty nice feature for a home server. Having said that, a simpler NFS server would also do a great job. To set up Longhorn, follow these steps on each of your nodes:

  1. install and start its dependencies (mostly NFS and open-iscsi):
sudo apt-get install jq open-iscsi nfs-kernel-server portmap nfs-common
sudo systemctl enable rpcbind.service
sudo systemctl enable nfs-server.service
sudo systemctl start rpcbind.service
sudo systemctl start nfs-server.service
  2. in your mounted hard drive, create a directory to store the data:
sudo mkdir -p <path>
  3. edit the /etc/exports file to add the path to the volume you want to share:
<path>        <node1-ip>(rw,sync,crossmnt,no_subtree_check) <node2-ip>(rw,sync,crossmnt,no_subtree_check) <node3-ip>(rw,sync,crossmnt,no_subtree_check) <node4-ip>(rw,sync,crossmnt,no_subtree_check)
  4. export the volume:
sudo exportfs -a
  5. (optional) if you want to encrypt the data, you can use dm-crypt.
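To check that the export from step 4 is visible from the other nodes, you can query it from any of them (showmount comes with the nfs-common package installed above):

# list the directories exported by a given node
showmount -e <node-ip>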

You can now install Longhorn on your cluster. To do so, run:

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.8.1/deploy/longhorn.yaml

You can check the status of the installation with:

kubectl -n longhorn-system get pod

The final step is to create a storage class for Longhorn. Check my manifests for a few examples. In my case, I created several storage classes to leverage:
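For reference, a minimal Longhorn storage class could look like the sketch below; the name and replica count are illustrative, so adapt them to your own setup:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-replicated # illustrative name
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
parameters:
  numberOfReplicas: '2' # mirror each volume on two nodes
  staleReplicaTimeout: '30'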

Exposing your services

At this stage, your cluster is ready to host your services, i.e. you can now deploy your applications on it with persistent volumes. Before going further, let’s discuss what your options are to access these services.

Using port-forwarding

You can already access your services within your local network using Kubernetes port-forwarding. For instance, you can access the Longhorn UI with:

kubectl -n longhorn-system port-forward svc/longhorn-frontend 8080:80

While this is a viable option, it’s not very convenient to execute this command each time. Moreover, it doesn’t allow you to access your services from outside your local network.

Using MetalLB

If the only thing you want is to expose your services in your local network, MetalLB is enough. You can install it with a single command:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.9/config/manifests/metallb-native.yaml

Then, you will need to define the IP range that MetalLB can use. You can do so by applying the following manifest (adapt it to your needs):

---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ip-pool
  namespace: metallb-system
spec:
  addresses:
    - '<start-ip>-<end-ip>'

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
    - ip-pool

Don’t forget to also exclude this range from your router’s DHCP range. You can now expose your services with a LoadBalancer service type. For instance, to expose the Longhorn UI, you can apply:

---
apiVersion: v1
kind: Service
metadata:
  name: longhorn-lb
  namespace: longhorn-system
  annotations:
    metallb.universe.tf/loadBalancerIPs: <your_ip>
spec:
  ports:
    - name: longhorn-frontend
      port: 8080
      protocol: TCP
      targetPort: 80
  selector:
    app: longhorn-ui
  type: LoadBalancer

Your service will now be accessible at <your_ip>:8080.
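To confirm that MetalLB assigned the expected address, you can check the service's external IP:

kubectl -n longhorn-system get svc longhorn-lb
# the EXTERNAL-IP column should show <your_ip>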

That’s more convenient than port-forwarding, but it still doesn’t allow you to access your services from outside your local network.

Using Cloudflare tunnels

Using Cloudflare tunnels is an easy way to expose your services to the internet without poking a hole in your router’s firewall. It’s also free and makes it easy to set up some security (authentication, access rules, …). However, you will need to create a Cloudflare account and add your own domain name to it; there are plenty of tutorials around for doing this. To run the cloudflared daemon in Kubernetes, you can use my manifest: just put your own Cloudflare token in there.
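If you prefer writing your own manifest, a minimal cloudflared Deployment could look like the sketch below. It assumes the tunnel token is stored in a Secret named cloudflared-token (a hypothetical name), and it may differ from the manifest in my repository:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared
  namespace: cloudflared
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudflared
  template:
    metadata:
      labels:
        app: cloudflared
    spec:
      containers:
        - name: cloudflared
          image: cloudflare/cloudflared:latest
          args: ['tunnel', '--no-autoupdate', 'run']
          env:
            - name: TUNNEL_TOKEN # cloudflared reads the tunnel token from this variable
              valueFrom:
                secretKeyRef:
                  name: cloudflared-token # hypothetical Secret holding your tunnel token
                  key: token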

If you go that way, please remember to secure your services. Let me insist on that: most tutorials online, especially blog posts, stop after setting up a route between a service to expose and a subdomain. Yet your services are then exposed to the internet, so anyone can visit them. To secure them, the bare minimum should be to:

One last thing to note is that all your traffic will go through Cloudflare’s servers. This is a good thing for security, but it also means that Cloudflare can see all your traffic. If you are concerned about that, you should use another method, such as a VPN.

Using a VPN

To access your services outside your home without exposing them publicly, you can use a VPN. Nowadays, there are a lot of options to quickly and easily set one up. The easiest is probably to use a service like Tailscale: its free tier is quite generous, and no personal domain is needed. Check their docs to get started.

Alternatively, you can also consider setting up your own VPN server using PiVPN or Wireguard Easy. For these solutions, you will need a domain name plus a dynamic DNS service to periodically update the IP associated with your domain (many Docker images are available for this), or you can use DuckDNS.

This solution is a very good compromise between security and convenience.

Setting up a reverse proxy

One last way to expose your services is to use a reverse proxy. This is a bit more involved than the other solutions, but it gives you more control over the traffic. You can actually use it in conjunction with the VPN solution to access your services at a nice human-readable URL and avoid the annoying browser warnings you get with plain HTTP (see the excellent video from Wolfgang’s channel explaining this technique). Let’s get into how to make this work in our cluster setup. For this, we will use Traefik: it will act as both a reverse proxy and an ingress controller in our cluster. Moreover, it will take care of generating and renewing certificates for us. You can use my manifests to deploy it in just two steps, assuming that you deployed MetalLB and that your domain is managed by Cloudflare:

  1. create the necessary environment variables:
export TRAEFIK_USER_KEY='' # a secret key to secure the dashboard: you can generate one using `htpasswd -nb mysecretuser mysecretpassword | base64`
export CF_API_EMAIL='' # the email associated with your cloudflare account
export CF_DNS_API_TOKEN='<cloudflare_token>' # you can get it from your cloudflare account
export CLUSTER_DOMAIN='<subdomain_of_your_domain_pointing_to_your_ip>' # for instance, k8s.example.com
export INGRESS_DOMAIN='<your_domain>'  # example.com
export TRAEFIK_IP='<traefik_local_ip>' # The IP at which MetalLB will expose Traefik
export TRAEFIK_CF_EMAIL='<your_email>' # Just used in the certificate generation
  2. apply the manifests:
git clone git@github.com:xamcost/homeserver.git
cd homeserver/deployment/traefik
cat *.yaml | envsubst | kubectl apply -f -

⚠️ Important: it is very likely that you will need to adapt the manifests to your needs. At the very least, you will need to change the storageClassName in the persistent volume claim to match the one you created.

You should now be able to access the Traefik dashboard at traefik.<your_domain>. You can also create your ingresses to expose your services at <service_name>.<your_domain>, getting HTTPS with automatic certificate generation/renewal for free.
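As an illustration, an ingress for a hypothetical service named my-app could look like this; the certresolver annotation assumes your Traefik deployment defines a resolver named cloudflare, so adapt it to whatever name you used:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
  annotations:
    # use the certificate resolver defined in your Traefik configuration
    traefik.ingress.kubernetes.io/router.tls.certresolver: cloudflare
spec:
  ingressClassName: traefik
  rules:
    - host: my-app.<your_domain>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80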

This reverse proxy provides you with a lot of flexibility: if you decide to stop using a VPN and expose your services directly to the internet, you will just need to change the Traefik IP and point your domain to it. In that case, be sure to secure your services by adding an authentication portal that your reverse proxy redirects to, for instance Authentik or Authelia. You can also set up extra services like fail2ban to protect your server against brute-force attacks.

Wrapping up

In this post, we saw how to set up a Raspberry Pi Kubernetes cluster with k3s, with persistent volumes managed by Longhorn. We also discussed several ways to expose your services, locally and to the internet, and went a bit deeper into setting up a reverse proxy and using it together with a VPN service to access our services over HTTPS without exposing them publicly. I hope this tutorial was useful to you! If you went this far and set up your own cluster, you can now enjoy deploying more services to it. In my repository, you will find ready-to-use manifests to deploy:

I had a lot of fun setting up this cluster, and it was my first home server. I used it for a little over a year before moving to a beefier server, better suited to my growing needs! I hope you will enjoy it as much as I did. If you have any questions, feel free to reach out to me on GitHub or LinkedIn.