
Resize logical volume

# Check disks
lsblk /dev/sda
lsblk /dev/sdb
lsblk /dev/nvme0n1

# Increase the Physical Volume (pv) to max size
pvresize /dev/sda3

# Expand the Logical Volume (LV) to max size to match
lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv

# Expand the filesystem itself
resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
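To confirm the resize took effect, the mounted filesystem should now report the larger size. A quick check, assuming the resized LV is mounted at / as on a default Ubuntu install:

```shell
# Show the mounted filesystem size after the resize; the Size/Avail
# columns should reflect the new capacity.
df -h /
```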

Add a second disk to an LVM volume group

# display physical volumes (pv)
pvs
pvdisplay

# display lvm volume groups (vg)
vgs
vgdisplay

# display lvm logical volume (lv)
lvs
lvdisplay

# find out info on all disks
fdisk -l | grep '^Disk /dev/'
lvmdiskscan

# create the physical volume (pv) and verify
# this is if the new disk is named /dev/sdb
pvcreate /dev/sdb
lvmdiskscan -l

# Add the new pv /dev/sdb to the volume group and extend the lv

# add /dev/sdb to volume group
vgextend ubuntu-vg /dev/sdb

# extend mapped drive to full size
lvm lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv

# resize the filesystem to use the new space
resize2fs -p /dev/mapper/ubuntu--vg-ubuntu--lv

# verify
df -H

Fdisk basics

# list partitions
fdisk -l

# open /dev/sdb for editing
# once fdisk is running, press 'm' for help
fdisk /dev/sdb


As a reminder to myself: my SBOM (software bill of materials) tooling of choice at this time is CycloneDX.

At the time of writing, all instructions below assume a Linux install with Bash.

Install CycloneDX

echo "dotnet cyclonedx install"
dotnet tool install --global CycloneDX
# symlink assumes the tool was installed as root; adjust the path for other users
ln -s /root/.dotnet/tools/dotnet-CycloneDX /usr/bin/CycloneDX

echo "javascript cyclonedx install"
npm install --global @cyclonedx/cyclonedx-npm 

Run CycloneDX

Run the dotnet tool against a C# project with GitHub credentials to avoid rate limiting.

PROJECT_PATH="PROJECT PLACEHOLDER.csproj"
REPORT_TITLE="TITLE PLACEHOLDER"
REPORT_OUTPUT_FOLDER=$(pwd)
REPORT_FILENAME="the-sbom-report.xml"
REPORT_VERSION="1.2.3"
GITHUB_USERNAME="GH USERNAME PLACEHOLDER"
GITHUB_TOKEN="GH TOKEN PLACEHOLDER"

CycloneDX $PROJECT_PATH --set-name "$REPORT_TITLE" --set-version "$REPORT_VERSION" --set-type "Application" --github-username "$GITHUB_USERNAME" --github-token "$GITHUB_TOKEN" -o "$REPORT_OUTPUT_FOLDER" -f "$REPORT_FILENAME"
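A quick sanity check after generation: make sure the report exists and parses as XML. This is my own guard snippet, not part of CycloneDX, and it assumes python3 is available:

```shell
# Verify the generated SBOM exists and is well-formed XML.
REPORT="the-sbom-report.xml"   # matches REPORT_FILENAME above
if [ -f "$REPORT" ] && python3 -c "import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])" "$REPORT"; then
  SBOM_OK="yes"
else
  SBOM_OK="no"
fi
echo "sbom valid: $SBOM_OK"
```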

Let us say we have a bunch of status checks being recorded by Prometheus, similar to what tools such as Pingdom provide. To build a simple up/down status dashboard, the Grafana state timeline panel can be used along with the probe_success metric.

Tools required

Grafana state timeline

I won’t go into the details of installing Prometheus and Grafana. There is plenty of information on their respective sites that will be up to date and much more accurate.

Assuming Prometheus and Grafana are installed with the proper configs, the rest of this section describes configuring a state timeline using probe_success. The data collection can be as simple as using the Prometheus blackbox exporter.

Add a new state timeline panel to a dashboard. For the query, set the data source to a configured Prometheus server. If there are a lot of health checks, manually drag the panel to a larger height to keep it readable.

In the options panel set the Legend

  • Visible -> true
  • Mode -> List
  • Placement -> Bottom

In the options panel set the Color scheme

  • From thresholds (by value)

In the options panel set the Value mappings

  • 1 -> UP
  • 0 -> DOWN

In the options panel set the Thresholds

  • 1 -> green
  • Base -> red

Example queries

Visualize status for a production environment:

avg_over_time(probe_success{env="production"}[1m])

Visualize the status of a production environment but exclude certain instances:

avg_over_time(probe_success{env="production", instance!~".*ignoreme.*|.*ignorethis.*"}[1m])

Visualize status for a staging environment:

avg_over_time(probe_success{env="staging"}[1m])


Set kubectl path

# Set the path to kubectl
# Example path if using microk8s "/snap/bin/microk8s kubectl"
k="/usr/bin/kubectl"

Set Rolling Update Strategy maxUnavailable - Patch

Set a maxUnavailable on the rolling update strategy to avoid outages during rollouts.

echo "https://kubernetes.io/docs/concepts/workloads/controllers/deployment/"
NAMESPACE="PLACEHOLDER"
DEPLOYMENT_NAME="PLACEHOLDER"

$k -n $NAMESPACE patch deployment $DEPLOYMENT_NAME -p "{\"spec\":{\"strategy\":{\"rollingUpdate\":{\"maxSurge\": 0, \"maxUnavailable\": \"25%\"}, \"type\": \"RollingUpdate\"}}}"
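The escaped inline JSON above is easy to typo. A variant I sometimes use: build the patch in a variable via a heredoc and validate it as JSON before applying. python3 is assumed to be available, and the kubectl line is left commented out as a reminder:

```shell
# Build the same rolling update patch without backslash escaping.
PATCH=$(cat <<'EOF'
{"spec":{"strategy":{"rollingUpdate":{"maxSurge":0,"maxUnavailable":"25%"},"type":"RollingUpdate"}}}
EOF
)
# Fail fast if the patch is not valid JSON.
echo "$PATCH" | python3 -m json.tool >/dev/null && echo "patch is valid JSON"
# $k -n $NAMESPACE patch deployment $DEPLOYMENT_NAME -p "$PATCH"
```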

Health Checks - Patch

Health checks are important for managing a cluster. They are used to determine whether pods are online and healthy, or offline and in need of disposal so new pods can be spun up.

startup probe

Has the pod started?

echo "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/"
NAMESPACE="PLACEHOLDER"
DEPLOYMENT_NAME="PLACEHOLDER"
STARTUPPROBE="/healthz"
PORT="443"
SCHEME="HTTPS"

$k -n $NAMESPACE patch deployment $DEPLOYMENT_NAME -p "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"$DEPLOYMENT_NAME\",\"startupProbe\":{\"httpGet\": {\"path\":\"$STARTUPPROBE\", \"port\": $PORT, \"scheme\": \"$SCHEME\"}, \"failureThreshold\": 30, \"periodSeconds\": 10}}]}}}}"
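Worth remembering about the numbers above: the startup probe gives a container failureThreshold × periodSeconds seconds to start before it is killed. With the values in this patch:

```shell
# Maximum startup time allowed by the probe settings above.
FAILURE_THRESHOLD=30
PERIOD_SECONDS=10
STARTUP_BUDGET=$((FAILURE_THRESHOLD * PERIOD_SECONDS))
echo "startup budget: ${STARTUP_BUDGET} seconds"
```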

livenessProbe

Is the pod alive?

echo "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/"
NAMESPACE="PLACEHOLDER"
DEPLOYMENT_NAME="PLACEHOLDER"
LIVEPROBE="/healthz/live"
PORT="443"
SCHEME="HTTPS"

$k -n $NAMESPACE patch deployment $DEPLOYMENT_NAME -p "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"$DEPLOYMENT_NAME\",\"livenessProbe\":{\"httpGet\": {\"path\":\"$LIVEPROBE\", \"port\": $PORT, \"scheme\": \"$SCHEME\"}, \"initialDelaySeconds\": 30, \"failureThreshold\": 3, \"timeoutSeconds\": 5}}]}}}}"

readinessProbe

Is the pod alive and ready to serve traffic?

echo "https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/"
NAMESPACE="PLACEHOLDER"
DEPLOYMENT_NAME="PLACEHOLDER"
READYPROBE="/healthz/ready"
PORT="443"
SCHEME="HTTPS"

$k -n $NAMESPACE patch deployment $DEPLOYMENT_NAME -p "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"$DEPLOYMENT_NAME\",\"readinessProbe\":{\"httpGet\": {\"path\":\"$READYPROBE\", \"port\": $PORT, \"scheme\": \"$SCHEME\"}, \"initialDelaySeconds\": 30, \"failureThreshold\": 30, \"timeoutSeconds\": 15}}]}}}}"

Change Image Name/Version - Patch

Upgrade or downgrade images.

echo "https://kubernetes.io/docs/reference/kubectl/cheatsheet/"
NAMESPACE="PLACEHOLDER"
DEPLOYMENT_NAME="PLACEHOLDER"
IMAGE_NAME="PLACEHOLDER"

$k -n $NAMESPACE patch deployment $DEPLOYMENT_NAME -p "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"$DEPLOYMENT_NAME\",\"image\":\"$IMAGE_NAME\"}]}}}}"

Reserve Memory and CPU Resources - Patch

Request a set reservation of memory and CPU. This helps Kubernetes properly schedule pods across the cluster's nodes.

echo "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"
NAMESPACE="PLACEHOLDER"
DEPLOYMENT_NAME="PLACEHOLDER"
RAM="128Mi"
CPU="500m"

$k -n $NAMESPACE patch deployment $DEPLOYMENT_NAME -p "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"$DEPLOYMENT_NAME\",\"resources\":{\"requests\": {\"memory\":\"$RAM\", \"cpu\": \"$CPU\"}}}]}}}}"
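For reference on the units: 500m means 500 millicores (half a core) and 128Mi means 128 mebibytes. A quick conversion of the request values used above:

```shell
# Convert the Kubernetes resource units used in the patch above.
CPU_MILLI=500      # "500m" = 500 millicores
RAM_MI=128         # "128Mi" = 128 mebibytes
CPU_CORES=$(awk "BEGIN { print $CPU_MILLI / 1000 }")
RAM_BYTES=$((RAM_MI * 1024 * 1024))
echo "cpu request: ${CPU_CORES} cores"
echo "memory request: ${RAM_BYTES} bytes"
```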

Limit Memory and CPU Resources - Patch

Set a limit on memory and CPU.

echo "https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/"
NAMESPACE="PLACEHOLDER"
DEPLOYMENT_NAME="PLACEHOLDER"
RAM="2048Mi"
CPU="2500m"

$k -n $NAMESPACE patch deployment $DEPLOYMENT_NAME -p "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"$DEPLOYMENT_NAME\",\"resources\":{\"limits\": {\"memory\":\"$RAM\", \"cpu\": \"$CPU\"}}}]}}}}"
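One sanity check worth doing when setting limits: the limit must be at least as large as the request for the same container, or Kubernetes will reject the pod spec. Comparing the memory values used in this post (128Mi request, 2048Mi limit):

```shell
# Compare the memory request (previous section) against the limit (this one).
REQUEST_MI=128   # from the requests patch above
LIMIT_MI=2048    # from the limits patch above
if [ "$LIMIT_MI" -ge "$REQUEST_MI" ]; then
  echo "ok: limit ${LIMIT_MI}Mi >= request ${REQUEST_MI}Mi"
else
  echo "error: limit is below request"
fi
```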
