Recently I’ve been thinking of sharing some of my harder-won lessons from the last few years of starting a company. There are a number of topics in my backlog: What is a basic sales process, and how do you run it? How do you learn to lead other salespeople? How should you think about marketing and marketing team leadership? How do you find the right price for your product? Is your sales play a replacement play or a complementary play?
In part 1, we established this project with a requirement to run Kubernetes on bare metal. I considered a number of potential options:

- Get a beefy desktop and run Minikube on a single machine
- Run a hypervisor and spin up a series of VMs on a beefy machine
- Buy a small number of not-as-powerful machines, like Raspberry Pis

From a requirements perspective, all could easily meet the requirement of running Kubernetes on-prem.
In part 2, we got our hardware ordered and assembled. Now, we need to turn it into a functioning cluster. Rather than re-invent the wheel, I recommend you follow the Ubuntu tutorial for installing Ubuntu on Raspberry Pis. A few things to note as you go through that tutorial. First, I strongly recommend using Ethernet rather than WiFi, and I didn’t put WiFi in the bill of materials. Second, I recommend assigning static IPs to make discovery easier.
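On Ubuntu Server for Raspberry Pi, static IPs are typically configured with netplan. Here’s a minimal sketch; the interface name, addresses, and gateway are placeholders you’d adjust for your own network:

```yaml
# /etc/netplan/50-cloud-init.yaml (example; addresses are placeholders)
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses:
        - 192.168.1.101/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```

Apply the change with `sudo netplan apply`, and give each Pi its own address (e.g. .101, .102, .103) so nodes are easy to find later.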
In part 3 we got the infrastructure up and running; now it’s time to get a blog created, building, and containerized. First, in the requirements, I had already decided I was going to build a static site to keep things simple. For this tutorial, I looked at a number of options, but you can substitute anything that builds a static site. There’s been a recent revolution in static sites, with a number of options for how to build them.
In part 4 we picked a blog framework and built a static site. Now that we can build our static site, we need to create a container that can host it. For simplicity’s sake, I decided to build the static site and bake its output directly into the container image. Then, in Kubernetes, everything we need to host the site is distributed by pulling the Docker image.
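A multi-stage Dockerfile is a common way to do this: one stage builds the site, and the final image contains only a web server plus the built output. This is a sketch, not my exact Dockerfile; it assumes a Node-based generator that emits to `public/`:

```dockerfile
# Stage 1: build the static site (assumes a Node-based generator)
FROM node:18 AS build
WORKDIR /site
COPY . .
RUN npm ci && npm run build

# Stage 2: serve only the built output with nginx
FROM nginx:alpine
COPY --from=build /site/public /usr/share/nginx/html
```

The final image carries none of the build toolchain, which keeps it small enough to pull quickly onto the Pis.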
In part 5 we got our static site into a container. Now, we’re going to replicate the build process from my local machine in a CI/CD pipeline that runs on every git push. I’m hosting the source for the site in GitHub. For this, I’m going to use GitHub Actions, because it’s free and integrated right into GitHub. GitHub Actions defines workflows in your repository, and then executes those workflows on various triggers.
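A workflow lives as a YAML file under `.github/workflows/`. Here’s a minimal sketch of the shape; the branch, registry, and build commands are placeholders, not my actual pipeline:

```yaml
# .github/workflows/build.yml (a sketch; registry and commands are placeholders)
name: build
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push the image
        run: |
          echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker build -t ghcr.io/${{ github.repository }}:latest .
          docker push ghcr.io/${{ github.repository }}:latest
```

Every push to the branch rebuilds the image and pushes it to a registry the cluster can pull from.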
In part 6 we got our blog building in our CI/CD pipeline. Now we’re going to run that built image in our Raspberry Pi K8S cluster. In my setup I have a single Linode VPS node exposed to the internet. First, let’s get all of our Kubernetes nodes serving HTTP. Let’s create a Kubernetes service that will use the container we created earlier with our CI/CD pipeline. I have a file in my repo called k8s-blog.
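The manifest pairs a Deployment (which runs the blog containers) with a Service (which exposes them on every node). This is a sketch of that shape; the names, image, replica count, and NodePort are assumptions, not the file’s exact contents:

```yaml
# k8s-blog (a sketch; names, image, and ports are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
spec:
  replicas: 3
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: blog
          image: ghcr.io/example/blog:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: blog
spec:
  type: NodePort
  selector:
    app: blog
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
```

With a NodePort service, every node in the cluster answers HTTP on port 30080, which gives an upstream proxy multiple targets to forward to.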
In part 7 we walked through how to get our blog running in Kubernetes. My Kubernetes cluster is sitting on my home internet connection, so we need a way of getting from somewhere on the Internet into it. I have a Linux VPS that has been hosting my static site for a while. I’ve gained some experience with Tailscale at work; I first heard about it on Twitter, and it’s simply amazing.
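With the VPS and the cluster nodes on the same Tailscale network, the VPS can reverse-proxy traffic to a node’s Tailscale address. Here’s a minimal nginx sketch of that idea; the domain, Tailscale IP, and NodePort are placeholders:

```nginx
# nginx config on the VPS (a sketch; IP and hostname are placeholders)
server {
    listen 80;
    server_name example.com;

    location / {
        # Forward to a cluster node's Tailscale IP and the service's NodePort
        proxy_pass http://100.64.0.10:30080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Because the tunnel is Tailscale, no ports on the home connection need to be opened to the public Internet.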
In part 8 we walked through how to expose our Kubernetes cluster to the Internet and update the blog from source code control. Now, in our final step, we’re going to test performance. I have been seeing around 3.5k to 4k requests per second and 200–450 Mbit/s of network traffic coming off of this cluster. According to my browser’s devtools, a request to my blog home page initiates 12 HTTP requests. That would service about 291 pageviews a second, or roughly 25 million pages a day.
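The arithmetic behind those numbers is simple enough to sanity-check, using the lower end of the measured request rate:

```python
# Back-of-the-envelope capacity estimate from the measurements above.
requests_per_second = 3500   # low end of the observed 3.5k-4k req/s
requests_per_page = 12       # HTTP requests per home-page load, per devtools

pageviews_per_second = requests_per_second // requests_per_page
pageviews_per_day = pageviews_per_second * 86_400  # seconds in a day

print(pageviews_per_second)  # 291
print(pageviews_per_day)     # 25142400, i.e. ~25 million
```
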
In my last post, I talked about how I’d be blogging a lot about my lessons from startup life. I’ve obviously failed at that over the last year, but now I’m back to describe how I’m running two of Cribl’s latest products in my homelab. There really is no practical purpose to doing this, as it would be more reliable and easier to forward this data to an actual AWS S3 bucket.