Self-Hosted Blog Part 8 - Internet Accessibility

2/12/2021 3-minute read

In part 7 we walked through getting our blog running in Kubernetes. My Kubernetes cluster sits on my home internet connection, so we need a way to get from somewhere on the Internet into the cluster.

I considered registering dynamic DNS for my home IP address and then port forwarding through my router. I was worried that might be blocked by my provider, or that it might somehow put me on their radar. I don’t know whether hosting something at home technically violates my terms of service, and I thought it best not to advertise that I was running a server workload out of their IP space.

I have a Linux VPS that has been hosting my static site for a while, and I’ve gained some experience with Tailscale at work. I first heard about it on Twitter, and it’s simply amazing. With Tailscale, it’s like putting your machines on the same LAN even though they’re separated by many networks in between. I’d categorize it as an overlay network.

I ended up building a network topology that looked like this:

A user hits my Linode VPS, which port forwards the traffic to 192.168.68.20 on my home LAN. That address is reachable because a VM running on my NAS advertises my home network, 192.168.68.0/22, as a subnet route in Tailscale; the Tailscale client on the VPS picks up that route and knows how to talk to anything on my home LAN. The forwarded traffic lands on the MetalLB load balancer, which distributes it across my 4 Kubernetes nodes.

I initially tried putting Tailscale on each of my Pi nodes, but I ran into difficulty accessing an exposed NodePort. Given that Kubernetes is a heavy user of kernel networking and so is Tailscale, I figure there are some bugs in there. I may try that again some day, but advertising the subnet route from a VM I was already running got me unblocked.
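If you’re curious what the port forward on the VPS could look like, here’s a rough sketch using iptables DNAT; the eth0 interface name and the HTTP-only port 80 are just illustrative assumptions, not my exact rules:

    # Sketch: forward inbound HTTP from the VPS's public interface (assumed
    # to be eth0) to the MetalLB address on the home LAN, reachable over the
    # tailscale0 interface thanks to the advertised subnet route.
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
      -j DNAT --to-destination 192.168.68.20:80
    # Masquerade so return traffic flows back through the VPS instead of
    # trying to exit via the home router.
    sudo iptables -t nat -A POSTROUTING -o tailscale0 -j MASQUERADE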

Rather than replicate their documentation, I’ll point you to installing Tailscale on Linux and setting up subnet routes to complete the setup I did. I merely followed those guides and everything worked. It’s really quite amazing.
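For reference, the subnet route setup roughly boils down to a few commands from their docs: enable IP forwarding and advertise the route on the NAS VM, approve the route in the Tailscale admin console, then accept routes on the VPS:

    # On the NAS VM: enable IP forwarding, then advertise the home LAN.
    echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
    sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
    sudo tailscale up --advertise-routes=192.168.68.0/22

    # On the Linode VPS: accept subnet routes advertised by other nodes.
    sudo tailscale up --accept-routes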

Lastly, I needed to connect my CI/CD pipeline to my K8S cluster so it can kick off a deployment whenever a new version is published. For that, I’m also using Tailscale. Tailscale has a feature called ephemeral nodes, which allows short-lived workloads like CI/CD pipelines to get access to the network. Follow their directions for setting up an ephemeral auth key. In my GitHub Actions workflow, I added the following steps:

      # Join the tailnet as an ephemeral node, authenticating with an auth
      # key stored as a repository secret
      - name: Setup Tailscale
        id: tailscale
        uses: tailscale/tailscale-deploy-github@main
        with:
          authkey: ${{ secrets.TAILSCALE_AUTHKEY }}
          version: 1.20.2

      # Restart the deployment over the tailnet so it pulls the new image
      - name: Deploy Blog
        uses: actions-hub/kubectl@master
        env:
          KUBE_CONFIG: ${{ secrets.KUBE_CONFIG }}
        with:
          args: rollout -n blog restart deployment/blog

The deploy step uses a kubeconfig stored as a GitHub Actions secret, obtained from microk8s config. Now, on every push, my blog’s deployment is told to restart and pick up the latest image tag. For the last step in this series, we’ll do some performance testing.
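One gotcha worth flagging: actions-hub/kubectl wants the kubeconfig base64-encoded, so the value for the KUBE_CONFIG secret can be generated on the MicroK8s host with something like:

    # Export the cluster's kubeconfig and base64-encode it (without line
    # wrapping) for pasting into the KUBE_CONFIG GitHub Actions secret.
    microk8s config | base64 -w 0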
