Only one GUI session can be logged in at a time. If you are already logged in to your desktop locally, remoting in will cause major issues.

Create a second local account for remote sessions so that you can log out of your primary account before remoting in.
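A minimal sketch of creating that secondary account from a terminal; the username remote is just a placeholder:

# create a spare local account for RDP sessions (placeholder name)
sudo useradd -m remote
sudo passwd remote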

Note to self: research if there is a better way.

Fedora

# install
sudo dnf install xrdp
sudo systemctl enable xrdp 
sudo systemctl start xrdp 

# check the status if you so desire
sudo systemctl status xrdp 

# firewall rules
sudo firewall-cmd --permanent --add-port=3389/tcp 
sudo firewall-cmd --reload 

# SELinux.  Is this actually required?
sudo chcon --type=bin_t /usr/sbin/xrdp 
sudo chcon --type=bin_t /usr/sbin/xrdp-sesman 

# find the machine's IP address
ip addr show
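With xrdp running and the port open, you can connect from another machine with any RDP client. From a Windows client, for example (the address below is a placeholder):

mstsc /v:192.168.1.50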

Ubuntu

sudo apt update
sudo apt install xrdp 

# Confirm xrdp is running if you so desire
sudo systemctl status xrdp

# Add the xrdp user to the ssl-cert group.  Is this actually required?
sudo adduser xrdp ssl-cert
sudo systemctl restart xrdp

# firewall rules
sudo ufw allow 3389

# find the machine's IP address
ip addr show

External References

  • https://tecadmin.net/how-to-install-xrdp-on-fedora/
  • https://linuxize.com/post/how-to-install-xrdp-on-ubuntu-20-04/

Local storage

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
 name: standard
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

If this new storage class is the only one configured in the cluster, mark it as the default.

kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
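To confirm it took, list the storage classes; the default one shows (default) next to its name.

kubectl get storageclass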

This storage class does not support auto-provisioning of persistent volumes. Each persistent volume must be created manually before the PVC can claim it.

Persistent Volumes

The storageClassName in the PV must match the storageClassName in the PVC.

pv-example.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test-vol1
  labels:
      type: local
spec:
  storageClassName: standard
  capacity:
      storage: 10Gi
  accessModes:
      - ReadWriteOnce
  hostPath:
      path: "/opt/storage/test_pv"

Create the hostPath directory on the node, then create the persistent volume:

mkdir -p "/opt/storage/test_pv"

kubectl create -f pv-example.yaml
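A matching claim could look like the sketch below; the claim name is arbitrary, but the storageClassName has to line up with the PV above. With volumeBindingMode set to WaitForFirstConsumer the claim stays Pending until a pod actually uses it.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test-vol1
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi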

OpenEBS

Jiva is preferred if your application is small, requires storage-level replication, but does not need snapshots or clones. Mayastor is preferred if your application needs low latency and near-disk throughput, requires storage-level replication, and your nodes have high CPU, RAM, and NVMe capabilities. See the OpenEBS Data Engines documentation for the full comparison.

Jiva is simple to set up and run. cStor and Mayastor are options I need to investigate more.

Replicated - Jiva

With OpenEBS, both local hostpath and replicated volumes are options. Replicated Jiva is best left for apps that don't handle their own replication, such as SQL Server or PostgreSQL.

https://github.com/openebs/jiva-operator/blob/develop/docs/quickstart.md

Install Jiva Operators

kubectl apply -f https://openebs.github.io/charts/openebs-operator-lite.yaml
kubectl apply -f https://openebs.github.io/charts/jiva-operator.yaml
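It's worth checking that the operator pods came up before continuing; per the quickstart they land in the openebs namespace.

kubectl get pods -n openebs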

Jiva volume policy

apiVersion: openebs.io/v1alpha1
kind: JivaVolumePolicy
metadata:
  name: example-jivavolumepolicy
  namespace: openebs
spec:
  replicaSC: openebs-hostpath
  target:
    replicationFactor: 1
    # disableMonitor: false
    # auxResources:
    # tolerations:
    # resources:
    # affinity:
    # nodeSelector:
    # priorityClassName:
  # replica:
    # tolerations:
    # resources:
    # affinity:
    # nodeSelector:
    # priorityClassName:

Storage Class

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
 name: openebs-jiva-csi-sc
provisioner: jiva.csi.openebs.io
allowVolumeExpansion: true
parameters:
 cas-type: "jiva"
 policy: "example-jivavolumepolicy"

Persistent Volume Claim

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
 name: example-jiva-csi-pvc
spec:
 storageClassName: openebs-jiva-csi-sc
 accessModes:
   - ReadWriteOnce
 resources:
   requests:
     storage: 4Gi
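As a quick sanity check, a throwaway pod can mount the claim. This is only a sketch; the busybox image, pod name, and mount path are arbitrary.

apiVersion: v1
kind: Pod
metadata:
  name: jiva-csi-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /data
          name: demo-vol
  volumes:
    - name: demo-vol
      persistentVolumeClaim:
        claimName: example-jiva-csi-pvc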

Local PVs

See https://openebs.io/docs/concepts/localpv.

Draft notes as of Jan 30, 2022.

This post is a short walkthrough of installing a YugabyteDB demo with local storage in a Kubernetes cluster.

If your Kubernetes cluster does not already have a storage class set up, jump to the Storage Setup - Prerequisites section and complete it before continuing with the following commands.

kubectl create namespace yb-demo

Install YugabyteDB
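The install below pulls the chart from the yugabytedb Helm repo, so register it first if it isn't already added (repo URL per the YugabyteDB Helm docs):

helm repo add yugabytedb https://charts.yugabyte.com
helm repo update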

Notice the overrides.

helm install yb-demo yugabytedb/yugabyte --set resource.master.requests.cpu=0.5,resource.master.requests.memory=0.5Gi,resource.tserver.requests.cpu=0.5,resource.tserver.requests.memory=0.5Gi,replicas.master=1,replicas.tserver=1,storage.tserver.size=1Gi --namespace yb-demo
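The same overrides can also live in a values file instead of one long --set string, which is easier to tweak between runs. A sketch, with an arbitrary file name, that should be equivalent to the --set flags above:

# yb-demo-values.yaml
resource:
  master:
    requests:
      cpu: 0.5
      memory: 0.5Gi
  tserver:
    requests:
      cpu: 0.5
      memory: 0.5Gi
replicas:
  master: 1
  tserver: 1
storage:
  tserver:
    size: 1Gi

helm install yb-demo yugabytedb/yugabyte -f yb-demo-values.yaml --namespace yb-demo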

Yugabyte k8s info

NAMESPACE: yb-demo
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:

  1. Get YugabyteDB Pods by running this command:
     kubectl --namespace yb-demo get pods

  2. Get list of YugabyteDB services that are running:
     kubectl --namespace yb-demo get services

  3. Get information about the load balancer services:
     kubectl get svc --namespace yb-demo

  4. Connect to one of the tablet server:
     kubectl exec --namespace yb-demo -it yb-tserver-0 bash

  5. Run YSQL shell from inside of a tablet server:
     kubectl exec --namespace yb-demo -it yb-tserver-0 -- /home/yugabyte/bin/ysqlsh -h yb-tserver-0.yb-tservers.yb-demo

  6. Cleanup YugabyteDB Pods
     For helm 2: helm delete yb-demo --purge
     For helm 3: helm delete yb-demo -n yb-demo
     NOTE: You need to manually delete the persistent volume
     kubectl delete pvc --namespace yb-demo -l app=yb-master
     kubectl delete pvc --namespace yb-demo -l app=yb-tserver

Storage Setup - Prerequisites

Local hostPath storage works well with YugabyteDB.

Local storage

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
 name: standard
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

If this new storage class is the only one configured in the cluster, mark it as the default.

kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

This storage class does not support auto-provisioning of persistent volumes. Each persistent volume must be created manually before the PVC can claim it.

Persistent Volumes

The storageClassName in the PV must match the storageClassName in the PVC.

pv-example.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test-vol1
  labels:
      type: local
spec:
  storageClassName: standard
  capacity:
      storage: 10Gi
  accessModes:
      - ReadWriteOnce
  hostPath:
      path: "/opt/storage/test_pv"

Create the hostPath directory on the node, then create the persistent volume:

mkdir -p "/opt/storage/test_pv"

kubectl create -f pv-example.yaml

OpenEBS Replicated

OpenEBS local hostpath should also work well. Replicated Jiva is best left for apps that don't handle their own replication, such as SQL Server or PostgreSQL.

https://github.com/openebs/jiva-operator/blob/develop/docs/quickstart.md

kubectl apply -f https://openebs.github.io/charts/openebs-operator-lite.yaml
kubectl apply -f https://openebs.github.io/charts/jiva-operator.yaml

Jiva volume policy

apiVersion: openebs.io/v1alpha1
kind: JivaVolumePolicy
metadata:
  name: example-jivavolumepolicy
  namespace: openebs
spec:
  replicaSC: openebs-hostpath
  target:
    replicationFactor: 1
    # disableMonitor: false
    # auxResources:
    # tolerations:
    # resources:
    # affinity:
    # nodeSelector:
    # priorityClassName:
  # replica:
    # tolerations:
    # resources:
    # affinity:
    # nodeSelector:
    # priorityClassName:

Storage Class

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
 name: openebs-jiva-csi-sc
provisioner: jiva.csi.openebs.io
allowVolumeExpansion: true
parameters:
 cas-type: "jiva"
 policy: "example-jivavolumepolicy"

Persistent Volume Claim

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
 name: example-jiva-csi-pvc
spec:
 storageClassName: openebs-jiva-csi-sc
 accessModes:
   - ReadWriteOnce
 resources:
   requests:
     storage: 4Gi


Prep work

The example below will use the OpenEBS hostpath storage class and operators. Installing OpenEBS is described in my Kubernetes storage post.

Determine the storageclass name.

 kubectl get storageclasses.storage.k8s.io

As I'm currently testing OpenEBS in microk8s, I will use the storage class named openebs-hostpath in the examples below. Since a Redis cluster handles its own data replication, plain local hostpath storage is a good fit here.

If you want any affinity rules, set the node labels now. This example sets a label on a node and configures the affinity to look for it as a soft (preferredDuringSchedulingIgnoredDuringExecution) target.

kubectl label nodes [NodeName] workertype=database
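To confirm the label landed on the right node:

kubectl get nodes -l workertype=database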

Install

helm repo add bitnami https://charts.bitnami.com/bitnami

kubectl create namespace redis-demo

helm install redis --set "global.redis.password=HiThere,global.storageClass=openebs-hostpath,redis.nodeAffinityPreset.type=soft,redis.nodeAffinityPreset.key=workertype,redis.nodeAffinityPreset.values[0]=database" bitnami/redis-cluster --namespace redis-demo

kubectl -n redis-demo get pods

If external access is desired, it should be enabled at deployment time by setting cluster.externalAccess.enabled to true as part of the above --set command.

cluster.externalAccess.enabled=true

See the redis-cluster chart docs for externalAccess options.

Output

With those commands run there should be some output that looks like the following.

NAME: redis
LAST DEPLOYED: Sat Feb  5 23:08:17 2022
NAMESPACE: redis-demo
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: redis-cluster
CHART VERSION: 7.2.1
APP VERSION: 6.2.6

** Please be patient while the chart is being deployed **

To get your password run:

export REDIS_PASSWORD=$(kubectl get secret --namespace "redis-demo" redis-redis-cluster -o jsonpath="{.data.redis-password}" | base64 --decode)

You have deployed a Redis™ Cluster accessible only from within your Kubernetes Cluster.

INFO: The Job to create the cluster will be created.

To connect to your Redis™ cluster:

  1. Run a Redis™ pod that you can use as a client:
     kubectl run --namespace redis-demo redis-redis-cluster-client --rm --tty -i --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image docker.io/bitnami/redis-cluster:6.2.6-debian-10-r95 -- bash

  2. Connect using the Redis™ CLI:
     redis-cli -c -h redis-redis-cluster -a $REDIS_PASSWORD
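Once connected, a couple of throwaway commands confirm the cluster is answering; the key and value here are placeholders.

set hello world
get hello
cluster info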

Remove redis cluster

helm delete redis --namespace redis-demo
kubectl delete namespace redis-demo


This post covers using a virtual machine to host a VPN client, keeping the VPN connection separate from the host machine. The example uses Hyper-V, OpenVPN, Ubuntu 18.04, and GNOME Network Manager.

Why? Why not? Also, in the world of working from home it is nice to keep the host device separate from the work network in the case of bringing your own device.

Anyway, using Ubuntu with Network Manager makes it easy to switch the type of VPN. I'm using Ubuntu 18.04 because it has the best default integration with Windows when using Hyper-V. The other reason for using Ubuntu is that it is a 2 GB or smaller download, while a Windows VM is closer to 20 GB. I don't want to download that much or waste that much disk space. I can also run Ubuntu with a low CPU and memory impact as an interface to a remote machine. Hyper-V is being used because that is what I have on my Windows machine at the moment. Feel free to use VMware, VirtualBox, Parallels, KVM, whatever.

This is mostly going to be a visual guide. I am making the assumption everyone knows how to use google or some similar search engine to fill in the blanks.

I’ve included more screenshots than I normally would in case people are not familiar with Ubuntu. Search Ubuntu, it is easy to use. However, I don’t have screenshots for everything and I’ve not annotated the screenshots.

Hyper-v, VM Install

Hyper-v pick an operating system

Hyper-V Quick Create menu option

Ubuntu 18.04 has, at the moment of writing, the best Linux integration among the Hyper-V quick create options. hyper v quick create vm options

The Windows download is huge, and I advise against choosing Windows as the VM. See below and use the Ubuntu option (or really whatever you like). Large windows vm download.

Ubuntu download, much smaller. ubuntu download

Ubuntu downloaded. Connect to the machine to finish the install. Ubuntu downloaded

Ubuntu install

Start Ubuntu. If you see any error messages about missing drives or anything, press enter or the space bar a couple of times, or do both, until it starts (a one-time setup problem). Start ubuntu

Follow the ubuntu install prompts until you are at the login screen. pick a language

Machine name and credentials

Once the install is done you can pick a screen resolution. I like full screen. The nice thing with ubuntu 18.04 quick create is it comes with hyper-v enhanced sessions enabled so resolution changes apply easily at any time. Pick a screen resolution

If you see an xrdp session login, you are running in enhanced mode. Ubuntu login

The default ubuntu desktop. You’ll notice you are connected using ‘remote desktop’. The bottom left hand side with 9 circles in a square is the application searcher. You will use it to open applications. The default ubuntu desktop

OpenVPN Client

Open a terminal to install OpenVPN and import your .ovpn file. This assumes you are using an .ovpn file that has the certs inlined. search and open terminal

Update and restart the machine and then log back in.

sudo apt update
sudo apt upgrade
sudo reboot

Log back in and install openvpn.

sudo apt install network-manager-openvpn-gnome

With OpenVPN support added to Network Manager, let's import the .ovpn file.

sudo nmcli connection import type openvpn file FILE_PATH_NAME

FILE_PATH_NAME is the full path to the .ovpn file if you are not in the same directory.

For example, on my machine it might look like this:

sudo nmcli connection import type openvpn file /home/peter/downloads/[Whatever-The-File-Is-Named].ovpn
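The imported connection can also be brought up straight from the terminal; nmcli will prompt for the VPN credentials when run with --ask (the connection name is whatever the import created):

# list connections to find the name of the imported profile
nmcli connection show

# bring the VPN up, prompting for any missing secrets
nmcli --ask connection up [Connection-Name]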

At this point you can enable the VPN in the GUI. You'll want to enter your VPN username and password. Ubuntu will also prompt you for your login password and a password for a new keyring.

Install network-manager-openvpn-gnome

network-manager-openvpn-gnome installed

Network Manager GUI options

In the upper right-hand corner of the Ubuntu desktop there is a network icon that can be clicked to show the network settings, including the VPN. From here you can turn on the VPN or modify the credentials used to connect.

Modify the VPN settings to add your credentials

Remote desktop in Ubuntu

Use the Remmina client that is already installed as part of the quick create ubuntu 18.04 vm. It can be found in the application search. If for some reason remmina is not installed it can be installed from the terminal.

sudo apt install remmina

Find and open remmina

Once remmina is open click the green + button to configure a new connection.
Remmina front screen

  1. Give the profile an easy-to-understand name, maybe the machine name, plus a nice description.
  2. Then enter the server name. You will probably need to use the IP address; I almost always have to unless it is a public DNS name.
  3. Set the color depth, as the default probably will not work from inside the VM. I found true color (24 bpp) worked fine. remmina connection settings

At this point click Save and Connect.

At this point you should be remoted into the remote machine from within the VM running on your local machine. The local machine keeps using your local internet connection without touching the VPN, and the VPN never needs to know about your local machine.