Self-Hosted Blog Part 7 - Kubernetes
In part 6 we got our blog building in our CI/CD pipeline. Now we’re going to run that built image in our Raspberry Pi K8S cluster. One of the pieces of infrastructure you get in the cloud is a load balancer. How do we replicate that on bare metal? There’s a really awesome project called MetalLB that lets us do this on our own hardware, and MicroK8s provides an add-on for it. On one of the nodes, run:
```bash
microk8s enable metallb:192.168.68.20-192.168.68.39
```
Obviously, substitute an IP range that’ll work for your local network configuration. I gave it 20 IPs, but I doubt I’ll ever be exposing that many services. You’ll want to make sure those IPs are outside of your DHCP scope so you don’t end up with IP conflicts.
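Before moving on, it’s worth confirming the add-on actually came up. On my MicroK8s install the MetalLB components land in a metallb-system namespace; if yours uses a different namespace, adjust accordingly:

```bash
microk8s kubectl get pods -n metallb-system
```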
With MetalLB, any Service of type LoadBalancer you create in Kubernetes will get an IP on your LAN, and you can treat it like a cloud network load balancer: your traffic ends up on the inside of your Kubernetes network just like it would with a cloud load balancer. Now, let’s create a Kubernetes Service that takes advantage of it. I have a file in my repo called k8s-blog.yml with the following contents:
```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: blog
---
apiVersion: v1
kind: Service
metadata:
  name: blog
  namespace: blog
  labels:
    app.kubernetes.io/name: blog
  annotations: {}
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: web
  selector:
    app.kubernetes.io/name: blog
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
  namespace: blog
  labels:
    app.kubernetes.io/name: blog
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: blog
  template:
    metadata:
      labels:
        app.kubernetes.io/name: blog
    spec:
      containers:
        - name: blog
          image: clintsharp/blog:latest
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 2000m
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 128Mi
```
I’m assuming you know a bit about Kubernetes if this interested you, so I won’t go through every detail of what all of this is. We’re creating a namespace, a service, and a deployment. After running `kubectl apply -f ./deployment/k8s-blog.yml` and waiting for the deployment to converge, I’ll have a working blog in Kubernetes.
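If you don’t want to poll by hand, kubectl can block until that convergence finishes; something like the following (standard kubectl, nothing specific to this setup) does the waiting for you:

```bash
kubectl rollout status deployment/blog -n blog
```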
To validate, I ran `kubectl get po -n blog` and saw the pods deployed:
```
NAME                    READY   STATUS    RESTARTS   AGE
blog-5844664449-pbqpw   1/1     Running   0          5d17h
blog-5844664449-627kt   1/1     Running   0          5d17h
blog-5844664449-gdrpl   1/1     Running   1          5d17h
```
When I interrogate the service with `kubectl get svc -n blog`, I see the IP assigned by MetalLB:
```
NAME   TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
blog   LoadBalancer   10.152.183.173   192.168.68.20   80:30201/TCP   6d18h
```
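If you want to sanity-check that external IP from another machine on the LAN before opening a browser, a plain HTTP request should come back from one of the blog pods:

```bash
curl -I http://192.168.68.20/
```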
I can now see my blog running on my LAN at http://192.168.68.20/. The last thing I want to do is set up the deployment for autoscaling, which I do with `kubectl autoscale deployment blog --cpu-percent=50 --min=3 --max=20`. This will automatically scale the deployment up to 20 pods when average CPU usage across the running pods rises above 50% of their requested CPU.
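That command creates a HorizontalPodAutoscaler behind the scenes. Two caveats from my understanding of how this works: the cluster needs a metrics source for CPU usage (on MicroK8s that generally means enabling the metrics-server add-on), and the 50% target is measured against the CPU request we set in the deployment. If you’d rather keep the autoscaler in the repo next to the other manifests, a roughly equivalent declarative sketch using the autoscaling/v2 API looks like this (adjust the API version to whatever your cluster supports):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: blog
  namespace: blog
spec:
  # Point the autoscaler at the blog deployment created above.
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: blog
  minReplicas: 3
  maxReplicas: 20
  metrics:
    # Scale on CPU utilization relative to the container's CPU request.
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```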
We have a running blog in Kubernetes on bare metal! Next, we will expose this to the internet and deploy on change.
Posts in this Series
- Self-Hosted Blog Part 1 - Overview
- Self-Hosted Blog Part 2 - The Hardware
- Self-Hosted Blog Part 3 - OS & Kubernetes
- Self-Hosted Blog Part 4 - The Blog
- Self-Hosted Blog Part 5 - Container
- Self-Hosted Blog Part 6 - Build
- Self-Hosted Blog Part 7 - Kubernetes
- Self-Hosted Blog Part 8 - Internet Accessibility
- Self-Hosted Blog Part 9 - Performance