Lua can be used inside an NGINX config file to provide dynamic programming support.

Assumptions:

  • Ubuntu 20.04
  • Nginx 1.18

sudo apt install lua-nginx-cookie libnginx-mod-http-ndk libnginx-mod-http-lua -y

As a short example, here is a Lua rewrite block that accesses a cookie named ACookie and retrieves a substring of its value.

set $substring "";
set $tempcode "";

rewrite_by_lua_block {
        ngx.var.tempcode = ngx.unescape_uri(ngx.var.cookie_ACookie)
        ngx.var.substring = string.sub(ngx.var.tempcode, 5, 10)
}
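
For context, a minimal location block using those variables might look like the following sketch. The /lua-demo path and the X-Substring response header are illustrative only, not part of the original config:

```nginx
location /lua-demo {
    set $substring "";
    set $tempcode "";

    rewrite_by_lua_block {
        -- unescape the cookie value, then take characters 5 through 10
        ngx.var.tempcode = ngx.unescape_uri(ngx.var.cookie_ACookie)
        ngx.var.substring = string.sub(ngx.var.tempcode, 5, 10)
    }

    # hand the extracted value back, e.g. as a response header
    add_header X-Substring $substring;
}
```

Note that if the ACookie cookie is absent, ngx.var.cookie_ACookie is nil and string.sub will raise an error, so a guard may be wanted in real use.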

Requires a redis install to work with.

See my redis cluster kubernetes install post.

Connect to a cluster

redis-cli -c -h redis-redis-cluster -a $REDIS_PASSWORD

Commands

A few useful commands

Interactive

INFO
CLUSTER INFO
dbsize
ping
incr helloworld
GET helloworld
set hello "world"
append hello ". Hi"
GET hello
keys "*"

CLI

redis-cli --scan | head -10
redis-cli --bigkeys
redis-cli --scan --pattern '[YourSearch]:*' | wc -l
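
The scan-and-count pipeline can be sketched offline without a Redis server. Here a small function stands in for `redis-cli --scan` output (the key names are made up), and the same counting is applied for a `user:` prefix, as `redis-cli --scan --pattern 'user:*' | wc -l` would do against a live cluster:

```shell
#!/bin/sh
# Stand-in for `redis-cli --scan`: emits a fixed, made-up key list.
scan_output() {
cat <<'EOF'
user:1
user:2
session:abc
user:3
EOF
}

# Count keys matching a prefix, mirroring the --pattern | wc -l pipeline above.
count=$(scan_output | grep -c '^user:')
echo "user keys: ${count}"
```

This prints `user keys: 3`. Against a real cluster, note that `--scan` only walks the node you are connected to, so pattern counts may need to be run per node.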

This page will be a living document to help me prepare for the CKA (Certified Kubernetes Administrator) exam.

The CKA curriculum can be found on GitHub. As of 2022-02-13 it is targeting Kubernetes 1.22. It looks intense.

CKA Curriculum

https://raw.githubusercontent.com/cncf/curriculum/master/CKA_Curriculum_v1.22.pdf

25% - Cluster Architecture, Installation & Configuration

# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.23.4
networking:
  podSubnet: "192.168.0.0/16" # --pod-network-cidr
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
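
A sketch of using this config, assuming kubeadm is already installed on the node (the post-init kubectl setup lines are taken from the standard kubeadm output):

```shell
# initialize the control plane with the config file above
sudo kubeadm init --config kubeadm-config.yaml

# set up kubectl access for your user, per the kubeadm output
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```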

* CNI: install Calico
    * https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart
    * are there docs on the kubernetes site?


* older docker stuff
    * [switch docker to systemd cgroup driver](https://kubernetes.io/docs/setup/production-environment/container-runtimes/)
    * Seriously ???
    * /etc/docker/daemon.json 
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
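
After editing daemon.json, Docker must be restarted for the cgroup driver change to take effect:

```shell
sudo systemctl daemon-reload
sudo systemctl restart docker
```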

15% - Workloads & Scheduling

20% - Services & Networking

10% - Storage

30% - Troubleshooting


Install

Find Jenkins installation instructions at https://www.jenkins.io/download/.

Ubuntu

curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo tee \
    /usr/share/keyrings/jenkins-keyring.asc > /dev/null

echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
    https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
    /etc/apt/sources.list.d/jenkins.list > /dev/null

sudo apt-get update
sudo apt-get install jenkins openjdk-11-jdk-headless docker.io -y
sudo usermod -a -G docker jenkins 
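
Since the jenkins user was just added to the docker group, restart Jenkins so the new group membership is picked up:

```shell
sudo systemctl restart jenkins
```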

# java -jar jenkins-cli.jar -s http://localhost:8080/ install-plugin SOURCE ... [-deploy] [-name VAL] [-restart]

Plugin setup

Install the Docker Pipeline and GitHub Branch Source plugins:

  • https://plugins.jenkins.io/docker-workflow/
  • https://plugins.jenkins.io/github-branch-source/
    • Assuming GitHub is being used

To display test results, various Jenkins plugins are required.

Jenkinsfile Pipeline Examples

Each example should be named Jenkinsfile and saved in the base folder of your code repo or entered as an inline Jenkins pipeline.

The examples below are the minimum syntax required per programming language.

Root Access in Docker

If root access is needed, such as to install packages as part of a build, modify the agent section to set the user to root.

agent {                     
    docker { 
        image 'ubuntu:20.04'
        args '-u root:root'
    }
}

Dotnet

Example of building and testing a .NET project that has NUnit testing enabled. If there is only one solution in the directory, the solution name does not need to be specified.

pipeline {
    agent none
    environment {
        DOTNET_CLI_HOME = "/tmp/DOTNET_CLI_HOME"
    }
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'mcr.microsoft.com/dotnet/sdk:6.0'
                }
            }
            steps {
                echo "building"
                sh """
                dotnet restore [YourSolution].sln
                dotnet build [YourSolution].sln --no-restore
                dotnet vstest [YourSolution].sln --logger:"nunit;LogFileName=build/nunit-results.xml"    
                """
            }
            post{
                always {
                    nunit testResultsPattern: 'build/nunit-results.xml'
                }
            }  
        }      
    }
}

Nunit Setup

See https://docs.microsoft.com/en-us/dotnet/core/testing/unit-testing-with-nunit and https://github.com/spekt/nunit.testlogger.

<ItemGroup>
  <PackageReference Include="nunit" Version="3.13.2" />
  <PackageReference Include="NUnit3TestAdapter" Version="4.2.1" />
  <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.0.0" />
  <PackageReference Include="NunitXml.TestLogger" Version="3.0.117" />
</ItemGroup>
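
The NunitXml.TestLogger reference can also be added from the CLI rather than editing the csproj by hand (this pulls the latest version rather than the pinned ones above):

```shell
dotnet add package NunitXml.TestLogger
```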

Rust

pipeline {
    agent none
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'rust:1.58.1'
                }
            }
            steps {
                echo "building"
                sh """
                cargo build --release
                cargo test   
                """
            }
        }      
    }
}

Go

pipeline {
    agent none
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'golang:1.16'
                }
            }
            steps {
                echo "building"
                sh """
                go build
                go test   
                """
            }
        }      
    }
}

Python

pipeline {
    agent none
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'python:3.9.10'
                }
            }
            steps {
                echo "whatever is done for python can go here"
                sh """
                python --version
                """
            }
        }      
    }
}

Ruby

pipeline {
    agent none
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'ruby:3.1.0'
                }
            }
            steps {
                echo "whatever is done for ruby can go here"
                sh """
                ruby --version
                """
            }
        }      
    }
}

Java

pipeline {
    agent none
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'openjdk:19-jdk-buster'
                }
            }
            steps {
                echo "building"
                sh """
                javac -classpath . Main.java
                """
            }
        }      
    }
}

Swift

pipeline {
    agent none
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'swift:5.5.3'
                }
            }
            steps {
                echo "building"
                sh """
                swift build
                swift test
                """
            }
        }      
    }
}

C

pipeline {
    agent none
    stages {
        stage('build') {
            agent {                     
                docker { 
                    image 'gcc:9.4.0'
                }
            }
            steps {
                echo "building"
                sh """
                gcc --version
                """
            }
        }      
    }
}

C++

pipeline {
    agent none
    stages {
        stage('build') {
            agent {                     
                docker { 
                    image 'gcc:9.4.0'
                }
            }
            steps {
                echo "building"
                sh """
                g++ --version
                """
            }
        }      
    }
}

Zig

pipeline {
    agent none
    stages {
        stage('build and test') {
            agent {                     
                docker { 
                    image 'ubuntu:20.04'
                    args '-u root:root'
                }
            }
            steps {
                echo "building"
                sh """
                apt update
                apt install wget xz-utils -y

                rm -rf zig
                mkdir -p zig
                cd zig
                wget https://ziglang.org/download/0.9.0/zig-linux-x86_64-0.9.0.tar.xz
                tar -xvf ./zig-linux-x86_64-0.9.0.tar.xz
                cd ..
                zig/zig-linux-x86_64-0.9.0/zig version
                zig/zig-linux-x86_64-0.9.0/zig
                """
            }
        }      
    }
}

Javascript/nodejs

pipeline {
    agent none
    stages {
        stage('build') {
            agent {                     
                docker { 
                    image 'node:16'
                }
            }
            steps {
                echo "building"
                sh """
                npm version
                """
            }
        }      
    }
}

Javascript/electron

See https://www.electron.build/multi-platform-build#docker.

pipeline {
    agent none
    environment {
        ELECTRON_CACHE="/root/.cache/electron"
        ELECTRON_BUILDER_CACHE="/root/.cache/electron-builder"
    }
    stages {
        stage('build') {
            agent {                     
                docker { 
                    image 'electronuserland/builder:wine'
                    args "--env-file <(env | grep -iE 'DEBUG|NODE_|ELECTRON_|YARN_|NPM_|CI|CIRCLE|TRAVIS_TAG|TRAVIS|TRAVIS_REPO_|TRAVIS_BUILD_|TRAVIS_BRANCH|TRAVIS_PULL_REQUEST_|APPVEYOR_|CSC_|GH_|GITHUB_|BT_|AWS_|STRIP|BUILD_') -v ${PWD}:/project -v ${PWD##*/}-node-modules:/project/node_modules -v ~/.cache/electron:/root/.cache/electron -v ~/.cache/electron-builder:/root/.cache/electron-builder"
                }
            }
            steps {
                echo "building"
                sh """
                yarn && yarn dist
                """
            }
        }      
    }
}

Only one GUI session can be logged in at a time. If you are already logged into your desktop locally, remoting in will cause major issues.

Create a second local account to use when remoting in, so you can log out of your real account first.

Note to self: research if there is a better way.
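
Creating that extra account might look like the following; the username remoteuser is an example, not anything required:

```shell
# Ubuntu: interactive prompt sets the password and details
sudo adduser remoteuser

# Fedora: create the home directory and set a password explicitly
sudo useradd -m remoteuser
sudo passwd remoteuser
```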

Fedora

# install
sudo dnf install xrdp
sudo systemctl enable xrdp 
sudo systemctl start xrdp 

# check the status if you so desire
sudo systemctl status xrdp 

# firewall rules
sudo firewall-cmd --permanent --add-port=3389/tcp 
sudo firewall-cmd --reload 

# SELinux.  Is this actually required?
sudo chcon --type=bin_t /usr/sbin/xrdp 
sudo chcon --type=bin_t /usr/sbin/xrdp-sesman 

# find the machine's IP address
ip addr show

Ubuntu

sudo apt update
sudo apt install xrdp 

# Confirm xrdp is running if you so desire
sudo systemctl status xrdp

# Add the xrdp user to the ssl-cert group so it can read the TLS certs.  Is this actually required?
sudo adduser xrdp ssl-cert
sudo systemctl restart xrdp

# firewall rules
sudo ufw allow 3389

# find the machine's IP address
ip addr show

External References

  • https://tecadmin.net/how-to-install-xrdp-on-fedora/
  • https://linuxize.com/post/how-to-install-xrdp-on-ubuntu-20-04/