Home k8s cluster

Omer Atagun

It has been quite a while since I had time to write any posts. I finally got the time thanks to a cancelled vacation that left me at home with a bunch of PTO in hand. So, it was time.

Overkilling the home server

I had this machine that I got a while back: an i5-12400F, an RTX 4060, 47 GB of RAM, a 1 TB NVMe drive, and a 1 TB USB SSD.

My plan:

  • Create an organization in GitHub for home projects.
  • Run my own GitHub runners within a cluster using Arc Runners.
  • Include a kubeconfig in the repositories so that I could create simple deployment and service YAML files to deploy.
  • Have local AI running within the cluster, so NVIDIA support in pods was needed for the job.
  • Plex server.
  • This blog itself – a new project that I am building with Go.

First, I started out with Proxmox so I could run 3 nodes on the same machine and extend further later, but handling all the virtualization was quite annoying for really no gain (I have other machines as well). I did leverage its simplicity to dump a virtual machine and spin up a new one right after.

Prerequisites

I use a Cloudflare Tunnel (Zero Trust) to expose my cluster and other applications on my domains. If you do not intend to do that, this story/tutorial is not for you. You can read more details here. I also do not explain how ingress, services, or k8s itself work in this blog post; you will have to figure that out yourself.
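If you go the same route, the tunnel setup boils down to a few cloudflared commands; a minimal sketch (the tunnel name and hostname are placeholders):

cloudflared tunnel login
cloudflared tunnel create home-cluster
cloudflared tunnel route dns home-cluster blog.example.com
cloudflared tunnel run home-cluster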

First steps:

With Proxmox installed, I started creating my cluster using Rancher to refresh my memory. I had to get back into thinking in the YAML world, so I broke the entire thing a couple of times. Finally, I had a working prototype with Arc Runners and deployments from repositories. Yay! Each attempt started from a fresh, bare Ubuntu installation.

Second step:

Rancher changed its tutorial a bit after a couple of years, so I went with the Helm CLI installation. Quite straightforward: all you have to do is grab the latest versions from the corresponding repositories' releases and you are pretty much done.
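For reference, the Helm route looks roughly like this; a sketch following the Rancher docs, assuming cert-manager is already installed and with the hostname as a placeholder:

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
    --namespace cattle-system \
    --set hostname=rancher.example.com \
    --set bootstrapPassword=admin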

Third step:

Start with the Actions Runner Controller tutorial, up until the step "Configure a runner scale set". You will need more than what is given to you there.
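As a reminder, installing the controller itself looks roughly like this (a sketch following the ARC quickstart; the namespace and release names are the defaults from the docs):

NAMESPACE="arc-systems"
helm install arc \
    --namespace "${NAMESPACE}" \
    --create-namespace \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller

In the tutorial you will then see an installation for the runner scale set like this: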

INSTALLATION_NAME="arc-runner-set"
NAMESPACE="arc-runners"
GITHUB_CONFIG_URL="https://github.com/<your_enterprise/org/repo>"
GITHUB_PAT="<PAT>"
helm install "${INSTALLATION_NAME}" \
    --namespace "${NAMESPACE}" \
    --create-namespace \
    --set githubConfigUrl="${GITHUB_CONFIG_URL}" \
    --set githubConfigSecret.github_token="${GITHUB_PAT}" \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set

But this on its own is not sufficient to run Docker-in-Docker, which you will need most of the time for your CI/CD. After reading the docs, you can find out which keys you need to populate to make that happen:

  --set resourceMeta.noPermissionServiceAccount.labels.app=dummy \
  --set resourceMeta.noPermissionServiceAccount.annotations.foo=bar \
  --set runnerContainerPrivileged=true \
  --set containerMode.type=dind \
  --set dockerEnabled=true \
  --set dind.privileged=true \

You can then follow the rest of the tutorial as it is.

Fourth step:

Now we have arc-runners up, but they do not have any secret they can use to pull images from your repository registry, so we need to create one. Create a secret in the same namespace and name it ghcr-secret. The registry domain will be ghcr.io, with your username and a token that has package access.
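With kubectl, that could look like this (the namespace and the placeholder values are assumptions to adapt):

kubectl create secret docker-registry ghcr-secret \
    --namespace arc-runners \
    --docker-server=ghcr.io \
    --docker-username=<your_github_username> \
    --docker-password=<token_with_package_access>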

You will also need to inject your kubeconfig into your repository's Actions secrets so you can actually deploy into the cluster.
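With the GitHub CLI, that could be something like the following (the secret name KUBECONFIG and the repo path are assumptions; the repository settings UI works just as well):

gh secret set KUBECONFIG --repo <owner>/<repo> < ~/.kube/config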

My workflow ci.yaml looks like this:

name: Deploy to production cluster

on:
  push:
    branches:
      - master
  workflow_dispatch:
jobs:
  deploy:
    runs-on: arc-runner-set
    if: ${{ github.event_name == 'push' || github.event_name == 'workflow_dispatch' }}
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GHCR_PAT }}

      - name: Get commit count
        id: vars
        run: echo "count=$(git rev-list --count HEAD)" >> $GITHUB_OUTPUT

      - name: Build and Push Service
        run: |
          cp .env.example .env
          VERSION=${{ steps.vars.outputs.count }}
          IMAGE=ghcr.io/${{ github.repository_owner }}/$(basename ${{ github.repository }})/service:$VERSION
          docker build -t $IMAGE .
          docker push $IMAGE

      - name: Pull Service image
        run: |
          VERSION=${{ steps.vars.outputs.count }}
          IMAGE=ghcr.io/${{ github.repository_owner }}/$(basename ${{ github.repository }})/service:$VERSION
          docker pull $IMAGE

      - name: Set up kubectl
        uses: azure/setup-kubectl@v4
        id: install
        with:
          version: "v1.28.2"

      - name: Update Kubernetes Deployment with new image version
        run: |
          VERSION=${{ steps.vars.outputs.count }}
          IMAGE=ghcr.io/${{ github.repository_owner }}/$(basename ${{ github.repository }})/service:$VERSION
          sed -i "s|image: .*|image: $IMAGE|" deployment.yaml
        working-directory: k8s

      - name: Apply Kubernetes manifests
        run: |
          kubectl apply -f deployment.yaml --insecure-skip-tls-verify
          kubectl apply -f service.yaml --insecure-skip-tls-verify
        working-directory: k8s

Congratulations, now you have Arc Runners for your repositories that can spawn a pod, run your jobs, and clean up all the leftover data after the pod shuts down. I know you don't need it, but it's fun!

Now we need to deploy something.

Open WebUI

Create this as webui.yaml:

---
apiVersion: v1
kind: Namespace
metadata:
  name: open-webui

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: open-webui-pvc
  namespace: open-webui
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: open-webui
  namespace: open-webui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: open-webui
  template:
    metadata:
      labels:
        app: open-webui
    spec:
      containers:
        - name: open-webui
          image: ghcr.io/open-webui/open-webui:main
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: data
              mountPath: /app/backend/data
          env:
            - name: WEBUI_PORT
              value: "8080"
      dnsPolicy: ClusterFirstWithHostNet # needed so the pod can reach the Ollama instance running on the host machine
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: open-webui-pvc

---
apiVersion: v1
kind: Service
metadata:
  name: open-webui
  namespace: open-webui
spec:
  selector:
    app: open-webui
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: ClusterIP

Then apply it:

kubectl apply -f webui.yaml

And you are done. Create an ingress for wherever you would like to publish it. If you have Ollama installed on a different machine, you can point Open WebUI at it in its configuration. If Ollama runs on the same machine as the cluster, make sure the Ollama environment is set to listen on 0.0.0.0; otherwise it binds only to localhost and will not be reachable from the pod network.
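The ingress can be as simple as this (the hostname is a placeholder; add --class=<your-ingress-class> if your cluster runs more than one controller):

kubectl create ingress open-webui \
    --namespace open-webui \
    --rule="chat.example.com/*=open-webui:80"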

Handling NVIDIA support

In the end I used the NVIDIA GPU for the Plex server, since I needed it more there than for Ollama (you can run Ollama on the host machine rather than in a pod).
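To make the GPU schedulable in pods in the first place, the usual route is the NVIDIA device plugin. A minimal sketch, assuming the NVIDIA driver and nvidia-container-toolkit are already installed on the node (release and namespace names are placeholders):

helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
helm repo update
helm install nvdp nvdp/nvidia-device-plugin \
    --namespace nvidia-device-plugin \
    --create-namespace

# check that the node now advertises the GPU resource
kubectl describe node <node-name> | grep nvidia.com/gpu

A pod can then claim the GPU by requesting nvidia.com/gpu: 1 under resources.limits.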

At the end, my cluster looked like this.

[Screenshots: the Deployments and Cluster views]

That sums it up! This blog post was meant to be a warm-up after a very long time. Hopefully I will do better next time.