

This page will be a living document to help me prepare for the CKA (Certified Kubernetes Administrator) exam.

The CKA curriculum can be found on GitHub. As of 2022-02-13 it targets Kubernetes 1.22. It looks intense.

CKA Curriculum

https://raw.githubusercontent.com/cncf/curriculum/master/CKA_Curriculum_v1.22.pdf

25% - Cluster Architecture, Installation & Configuration

# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.23.4
networking:
  podSubnet: "192.168.0.0/16" # --pod-network-cidr
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
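
To initialize the control plane with this file (a minimal usage sketch; kubeadm init accepts it via --config):

kubeadm init --config kubeadm-config.yaml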

* CNI: install Calico (a sketch of the install commands follows this list)
    * https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart
    * are there docs on the kubernetes site?
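
A sketch of the Calico quickstart flow; the manifest URLs are my assumption based on the quickstart page linked above, so verify them against the current docs before running:

kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
kubectl create -f https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml

# watch the calico pods come up
kubectl get pods -n calico-system --watch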


* older docker stuff
    * [switch docker to systemd cgroup driver](https://kubernetes.io/docs/setup/production-environment/container-runtimes/)
    * Seriously ???
    * add the following to /etc/docker/daemon.json, then restart the services (see below)

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
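
After updating daemon.json, restart the affected services so the systemd cgroup driver takes effect (assuming systemd):

sudo systemctl restart docker
# only needed if kubelet is already installed and running on this node
sudo systemctl restart kubelet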

15% - Workloads & Scheduling

20% - Services & Networking

10% - Storage

30% - Troubleshooting


Install

Find Jenkins installation instructions at https://www.jenkins.io/download/.

Ubuntu

curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo tee \
    /usr/share/keyrings/jenkins-keyring.asc > /dev/null

echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
    https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
    /etc/apt/sources.list.d/jenkins.list > /dev/null

sudo apt-get update
sudo apt-get install jenkins openjdk-11-jdk-headless docker.io -y
sudo usermod -a -G docker jenkins 
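
The new docker group membership only applies to new sessions, so restart Jenkins afterwards (assuming systemd manages the service):

sudo systemctl restart jenkins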

# java -jar jenkins-cli.jar -s http://localhost:8080/ install-plugin SOURCE ... [-deploy] [-name VAL] [-restart]

Plugin setup

Install the Docker Pipeline and GitHub Branch Source plugins:

  • https://plugins.jenkins.io/docker-workflow/
  • https://plugins.jenkins.io/github-branch-source/
    • Assuming GitHub is being used

To display test results, various Jenkins plugins are required.
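
Plugins can also be installed with the Jenkins CLI shown above (a sketch; the IDs are taken from the plugin URLs, plus nunit for the test-report step used in the .NET example):

# add -auth user:apitoken if authentication is enabled
java -jar jenkins-cli.jar -s http://localhost:8080/ install-plugin docker-workflow github-branch-source nunit -restart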

Jenkinsfile Pipeline Examples

Each example should be named Jenkinsfile and saved in the base folder of your code repo or entered as an inline Jenkins pipeline.

The examples below are the minimum syntax required per programming language.

Root Access in Docker

If root access is needed, such as to install packages as part of a build, modify the agent section to set the user to root.

agent {                     
    docker { 
        image 'ubuntu:20.04'
        args '-u root:root'
    }
}

Dotnet

Example of building and testing a .NET project that has NUnit testing enabled. If there is only one solution in the directory, the solution name does not need to be specified.

pipeline {
    agent none
    environment {
        DOTNET_CLI_HOME = "/tmp/DOTNET_CLI_HOME"
    }
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'mcr.microsoft.com/dotnet/sdk:6.0'
                }
            }
            steps {
                echo "building"
                sh """
                dotnet restore [YourSolution].sln
                dotnet build [YourSolution].sln --no-restore
                dotnet vstest [YourSolution].sln --logger:"nunit;LogFileName=build/nunit-results.xml"    
                """
            }
            post{
                always {
                    nunit testResultsPattern: 'build/nunit-results.xml'
                }
            }  
        }      
    }
}

NUnit Setup

See https://docs.microsoft.com/en-us/dotnet/core/testing/unit-testing-with-nunit and https://github.com/spekt/nunit.testlogger.

<ItemGroup>
  <PackageReference Include="nunit" Version="3.13.2" />
  <PackageReference Include="NUnit3TestAdapter" Version="4.2.1" />
  <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.0.0" />
  <PackageReference Include="NunitXml.TestLogger" Version="3.0.117" />
</ItemGroup>
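
The same packages can also be added from the command line instead of editing the csproj by hand (versions match the ItemGroup above; omit --version to take the latest):

dotnet add package nunit --version 3.13.2
dotnet add package NUnit3TestAdapter --version 4.2.1
dotnet add package Microsoft.NET.Test.Sdk --version 17.0.0
dotnet add package NunitXml.TestLogger --version 3.0.117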

Rust

pipeline {
    agent none
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'rust:1.58.1'
                }
            }
            steps {
                echo "building"
                sh """
                cargo build --release
                cargo test   
                """
            }
        }      
    }
}

Go

pipeline {
    agent none
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'golang:1.16'
                }
            }
            steps {
                echo "building"
                sh """
                go build
                go test   
                """
            }
        }      
    }
}

Python

pipeline {
    agent none
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'python:3.9.10'
                }
            }
            steps {
                echo "whatever is done for python can go here"
                sh """
                python --version
                """
            }
        }      
    }
}

Ruby

pipeline {
    agent none
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'ruby:3.1.0'
                }
            }
            steps {
                echo "whatever is done for ruby can go here"
                sh """
                ruby --version
                """
            }
        }      
    }
}

Java

pipeline {
    agent none
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'openjdk:19-jdk-buster'
                }
            }
            steps {
                echo "building"
                sh """
                javac -classpath . Main.java
                """
            }
        }      
    }
}

Swift

pipeline {
    agent none
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'swift:5.5.3'
                }
            }
            steps {
                echo "building"
                sh """
                swift build
                swift test
                """
            }
        }      
    }
}

C

pipeline {
    agent none
    stages {
        stage('build') {
            agent {                     
                docker { 
                    image 'gcc:9.4.0'
                }
            }
            steps {
                echo "building"
                sh """
                gcc --version
                """
            }
        }      
    }
}

C++

pipeline {
    agent none
    stages {
        stage('build') {
            agent {                     
                docker { 
                    image 'gcc:9.4.0'
                }
            }
            steps {
                echo "building"
                sh """
                g++ --version
                """
            }
        }      
    }
}

Zig

pipeline {
    agent none
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'ubuntu:20.04'
                    args '-u root:root'
                }
            }
            steps {
                echo "building"
                sh """
                apt update
                apt install wget xz-utils -y

                rm -rf zig
                mkdir -p zig
                cd zig
                wget https://ziglang.org/download/0.9.0/zig-linux-x86_64-0.9.0.tar.xz
                tar -xvf ./zig-linux-x86_64-0.9.0.tar.xz
                cd ..
                zig/zig-linux-x86_64-0.9.0/zig version
                zig/zig-linux-x86_64-0.9.0/zig
                """
            }
        }      
    }
}

Javascript/nodejs

pipeline {
    agent none
    stages {
        stage('build') {
            agent {                     
                docker { 
                    image 'node:16'
                }
            }
            steps {
                echo "building"
                sh """
                npm version
                """
            }
        }      
    }
}

Javascript/electron

See https://www.electron.build/multi-platform-build#docker.

pipeline {
    agent none
    environment {
        ELECTRON_CACHE="/root/.cache/electron"
        ELECTRON_BUILDER_CACHE="/root/.cache/electron-builder"
    }
    stages {
        stage('build') {
            agent {                     
                docker { 
                    image 'electronuserland/builder:wine'
                    args "--env-file <(env | grep -iE 'DEBUG|NODE_|ELECTRON_|YARN_|NPM_|CI|CIRCLE|TRAVIS_TAG|TRAVIS|TRAVIS_REPO_|TRAVIS_BUILD_|TRAVIS_BRANCH|TRAVIS_PULL_REQUEST_|APPVEYOR_|CSC_|GH_|GITHUB_|BT_|AWS_|STRIP|BUILD_') -v ${PWD}:/project -v ${PWD##*/}-node-modules:/project/node_modules -v ~/.cache/electron:/root/.cache/electron -v ~/.cache/electron-builder:/root/.cache/electron-builder"
                }
            }
            steps {
                echo "building"
                sh """
                yarn && yarn dist
                """
            }
        }      
    }
}

Only one GUI session can be logged in at a time. If you are already logged into your desktop locally, remoting in with xrdp will cause major issues.

Create a second local account to use when remoting in, so you can log out of your real account first.
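
For example, a throwaway account could be created like this (the username remote is purely illustrative):

# Ubuntu / Debian
sudo adduser remote

# Fedora
sudo useradd -m remote
sudo passwd remote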

Note to self: research if there is a better way.

Fedora

# install
sudo dnf install xrdp
sudo systemctl enable xrdp 
sudo systemctl start xrdp 

# check the status if you so desire
sudo systemctl status xrdp 

# firewall rules
sudo firewall-cmd --permanent --add-port=3389/tcp 
sudo firewall-cmd --reload 

# SELinux.  Is this actually required?
sudo chcon --type=bin_t /usr/sbin/xrdp 
sudo chcon --type=bin_t /usr/sbin/xrdp-sesman 

# find the machine's ip address
ip addr show

Ubuntu

sudo apt update
sudo apt install xrdp 

# Confirm xrdp is running if you so desire
sudo systemctl status xrdp

# Add certs permissions for your user.  Is this actually required?
sudo adduser xrdp ssl-cert
sudo systemctl restart xrdp

# firewall rules
sudo ufw allow 3389

# find the machine's ip address
ip addr show
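
From another machine, any RDP client pointed at port 3389 should work; for example, with FreeRDP (substitute the address and username):

xfreerdp /v:<ip-address>:3389 /u:<username>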

External References

  • https://tecadmin.net/how-to-install-xrdp-on-fedora/
  • https://linuxize.com/post/how-to-install-xrdp-on-ubuntu-20-04/

Local storage

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
 name: standard
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

If this new storage class is the only one configured in the cluster, mark it as the default.

kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
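
The default annotation can be verified afterwards; kubectl flags the default class in its output:

kubectl get storageclass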

This storage class does not support auto-provisioning of persistent volumes. Each persistent volume must be created manually before the PVC can claim it.

Persistent Volumes

The storageClassName in the PV must match the storageClassName in the PVC.

pv-example.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test-vol1
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/storage/test_pv"

Create the backing directory on the node, then create the PV:

mkdir -p "/opt/storage/test_pv"

kubectl create -f pv-example.yaml
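
For reference, a claim that could bind to the volume above might look like this (a sketch; the claim name is illustrative):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-test-vol1
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi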

OpenEBS

Jiva is preferred if your application is small and requires storage-level replication but does not need snapshots or clones. Mayastor is preferred if your application needs low latency and near-disk throughput, requires storage-level replication, and your nodes have high CPU, RAM, and NVMe capabilities. See the OpenEBS Data Engines docs for the full comparison.

Jiva is simple to set up and run. cStor and Mayastor are options I need to investigate more.

Replicated - Jiva

With OpenEBS, both local hostPath and replicated volumes are options. Jiva replicated storage is best left for apps that don't handle replication themselves, such as SQL Server or PostgreSQL.

https://github.com/openebs/jiva-operator/blob/develop/docs/quickstart.md

Install Jiva Operators

kubectl apply -f https://openebs.github.io/charts/openebs-operator-lite.yaml
kubectl apply -f https://openebs.github.io/charts/jiva-operator.yaml
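
Once both manifests are applied, the operator components should come up in the openebs namespace:

kubectl get pods -n openebs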

Jiva volume policy

apiVersion: openebs.io/v1alpha1
kind: JivaVolumePolicy
metadata:
  name: example-jivavolumepolicy
  namespace: openebs
spec:
  replicaSC: openebs-hostpath
  target:
    replicationFactor: 1
    # disableMonitor: false
    # auxResources:
    # tolerations:
    # resources:
    # affinity:
    # nodeSelector:
    # priorityClassName:
  # replica:
    # tolerations:
    # resources:
    # affinity:
    # nodeSelector:
    # priorityClassName:

Storage Class

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
 name: openebs-jiva-csi-sc
provisioner: jiva.csi.openebs.io
allowVolumeExpansion: true
parameters:
 cas-type: "jiva"
 policy: "example-jivavolumepolicy"

Persistent Volume Claim

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
 name: example-jiva-csi-pvc
spec:
 storageClassName: openebs-jiva-csi-sc
 accessModes:
   - ReadWriteOnce
 resources:
   requests:
     storage: 4Gi
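
A minimal sketch of a pod that mounts the claim above (pod, container, and volume names are illustrative, not from the original notes):

apiVersion: v1
kind: Pod
metadata:
  name: example-jiva-app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /data
          name: demo-vol
  volumes:
    - name: demo-vol
      persistentVolumeClaim:
        claimName: example-jiva-csi-pvc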

Local PVs

See https://openebs.io/docs/concepts/localpv.

References


Draft notes as of Jan 30, 2022.

This post is a short walkthrough of installing a YugabyteDB demo with local storage in a Kubernetes cluster.

If your Kubernetes cluster does not already have a storage class set up, jump to the Storage Setup - Prerequisites section and complete it before continuing with the following commands.

kubectl create namespace yb-demo

Install YugabyteDB
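
If the yugabytedb chart repository has not been added yet, it needs to be configured first (the repo URL is from YugabyteDB's Helm instructions, not from the original notes):

helm repo add yugabytedb https://charts.yugabyte.com
helm repo update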

Notice the overrides, which shrink the default CPU, memory, replica counts, and tserver storage so the demo fits on a small cluster.

helm install yb-demo yugabytedb/yugabyte --namespace yb-demo \
    --set resource.master.requests.cpu=0.5 \
    --set resource.master.requests.memory=0.5Gi \
    --set resource.tserver.requests.cpu=0.5 \
    --set resource.tserver.requests.memory=0.5Gi \
    --set replicas.master=1 \
    --set replicas.tserver=1 \
    --set storage.tserver.size=1Gi

Yugabyte k8s info

NAMESPACE: yb-demo
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:

  1. Get YugabyteDB Pods by running this command:
     kubectl --namespace yb-demo get pods

  2. Get list of YugabyteDB services that are running:
     kubectl --namespace yb-demo get services

  3. Get information about the load balancer services:
     kubectl get svc --namespace yb-demo

  4. Connect to one of the tablet servers:
     kubectl exec --namespace yb-demo -it yb-tserver-0 bash

  5. Run YSQL shell from inside of a tablet server:
     kubectl exec --namespace yb-demo -it yb-tserver-0 -- /home/yugabyte/bin/ysqlsh -h yb-tserver-0.yb-tservers.yb-demo

  6. Cleanup YugabyteDB Pods
     For helm 2: helm delete yb-demo --purge
     For helm 3: helm delete yb-demo -n yb-demo
     NOTE: You need to manually delete the persistent volumes:
     kubectl delete pvc --namespace yb-demo -l app=yb-master
     kubectl delete pvc --namespace yb-demo -l app=yb-tserver

Storage Setup - Prerequisites

Local hostPath storage works well with YugabyteDB.

Local storage

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
 name: standard
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

If this new storage class is the only one configured in the cluster, mark it as the default.

kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

This storage class does not support auto-provisioning of persistent volumes. Each persistent volume must be created manually before the PVC can claim it.

Persistent Volumes

The storageClassName in the PV must match the storageClassName in the PVC.

pv-example.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test-vol1
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/storage/test_pv"

Create the backing directory on the node, then create the PV:

mkdir -p "/opt/storage/test_pv"

kubectl create -f pv-example.yaml

OpenEBS Replicated

OpenEBS local hostPath should also work well. Jiva replicated storage is best left for apps that don't handle replication themselves, such as SQL Server or PostgreSQL.

https://github.com/openebs/jiva-operator/blob/develop/docs/quickstart.md

kubectl apply -f https://openebs.github.io/charts/openebs-operator-lite.yaml
kubectl apply -f https://openebs.github.io/charts/jiva-operator.yaml

Jiva volume policy

apiVersion: openebs.io/v1alpha1
kind: JivaVolumePolicy
metadata:
  name: example-jivavolumepolicy
  namespace: openebs
spec:
  replicaSC: openebs-hostpath
  target:
    replicationFactor: 1
    # disableMonitor: false
    # auxResources:
    # tolerations:
    # resources:
    # affinity:
    # nodeSelector:
    # priorityClassName:
  # replica:
    # tolerations:
    # resources:
    # affinity:
    # nodeSelector:
    # priorityClassName:

Storage Class

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
 name: openebs-jiva-csi-sc
provisioner: jiva.csi.openebs.io
allowVolumeExpansion: true
parameters:
 cas-type: "jiva"
 policy: "example-jivavolumepolicy"

Persistent Volume Claim

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
 name: example-jiva-csi-pvc
spec:
 storageClassName: openebs-jiva-csi-sc
 accessModes:
   - ReadWriteOnce
 resources:
   requests:
     storage: 4Gi

Reference