K3s Single Server Deployment
IVAAP can be deployed on a single-server VM host using K3s. This deployment architecture is beneficial if you do not have an existing Kubernetes cluster, want to save on infrastructure costs, or want a self-hosted solution. K3s is a lightweight Kubernetes distribution designed for resource-constrained environments.
Note that this guide does not cover full deployment and configuration details; it only contains details specific to K3s deployments. Throughout this guide, there are links referencing other guides. The primary guide for deployment configuration is the General Helm Configuration Guide.
IVAAP K3s Deployment Dependencies¶
K3s Installation¶
Installation of K3s may vary depending on your operating system. It is best to follow the official K3s documentation for installation in your environment.
Ubuntu¶
To demonstrate installation of K3s, the following script can be used on Ubuntu instances:
#!/bin/bash
# Install k3s
curl -sfL https://get.k3s.io | sh -
# Checks nodes are available
sudo k3s kubectl get nodes
# Create local kube config
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> ~/.bashrc
source ~/.bashrc
RedHat Enterprise Linux¶
When setting up a VM on RHEL 9, be mindful that K3s stores its data inside /var. RedHat partitions this directory separately from the root directory, so it is possible for the /var partition to be too small for K3s to run efficiently.
If the partition is too small, or you have no control over sizing it during OS install, it is possible to move the K3s-related directories onto the root partition and create a symbolic link back to their original location.
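Before relocating the data, it can help to confirm how the disk is actually partitioned. df reports the filesystem containing each path even when the path is not a separate mount point, so this check is safe to run on any layout:

```shell
# Compare available space on the root, /var, and /opt filesystems.
# If /var appears as its own small partition, the relocation steps
# below may be needed.
df -h / /var /opt
```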
# Stop K3s systemd service
sudo systemctl stop k3s
# Move K3s rancher directory to /opt/
sudo mv /var/lib/rancher /opt/.
# Create sym-link
sudo ln -s /opt/rancher /var/lib/rancher
# Start K3s systemd service
sudo systemctl start k3s
Helm Installation¶
Helm is required for IVAAP deployments. Helm can be installed in many ways, depending on your operating system. Refer to Helm's official documentation for installation in your environment.
Preparing the Deployment Directory¶
File System Structure¶
The IVAAP deployment should be located in /opt/ivaap. Below is the basic, condensed file system structure of IVAAP deployed in K3s:
user@linux:/opt/ivaap$ tree
.
├── certs
│ ├── ivaap-ssl.crt
│ └── ivaap-ssl.key
├── IVAAPHelmTemplate
│ ├── Chart.yaml
│ ├── collect-images.sh
│ ├── example-deployment-yamls
│ ├── local.values.yaml
│ ├── scripts
│ ├── templates
│ │ ├── aws
│ │ ├── azure
│ │ ├── common
│ │ ├── deployment.yaml
│ │ ├── _helpers.tpl
│ │ ├── k3s
│ │ └── openshift
│ └── values.yaml
└── ivaap-volumes
├── geofiles
│ └── data
├── postgres-operator
└── logs
├── activemq
├── adminserver
├── backend
└── proxy
Enable K3s Deployment Type¶
To set K3s as the deployment type for IVAAP, simply set .Values.environment.type.k3s.enabled to "true":
environment:
  type:
    k3s:
      enabled: "true"
File Permissions¶
All files from the delivery package should be placed in /opt/ivaap. Ensure that /opt/ivaap is owned by the user deploying IVAAP. We do not recommend deploying as root; IVAAP and K3s should always use a dedicated Linux user. We recommend this user have UID and GID 1000 for simplicity, as this UID and GID matches the user inside most IVAAP component containers.
The directory /opt/ivaap/ivaap-volumes should contain any persisted data, or data used by pods/containers. This is where logs for IVAAP will be stored, as well as geofiles data if applicable. The cloned repository and configuration file for Zalando Postgres Operator will also live in ivaap-volumes.
In K3s deployments, there is a configuration option in values.yaml to enable the container logs to be mounted on the file system.
environment:
  type:
    k3s:
      enabled: "true"
      logging:
        persistLogs: "true"
        localLogsPath: "/opt/ivaap/ivaap-volumes/logs"
        storageCapacity: "" # ----- Default 10Gb, but can be adjusted. Leave blank for default
Logs can be stored anywhere on the filesystem by changing localLogsPath, but we recommend saving them to /opt/ivaap/ivaap-volumes/logs for simplicity.
The following bash script can be used to create the required directories and permissions. This script requires the user deploying IVAAP to be a sudoer.
#!/bin/bash
echo 'Creating log directories'
sudo mkdir -p /opt/ivaap/ivaap-volumes/logs/{activemq,adminserver,backend,proxy,scheduledtasks}
echo 'Modifying ivaap-volumes dir permissions'
sudo chown $(id -u):$(id -g) /opt/ivaap/ivaap-volumes
echo 'Modifying logs dirs permissions'
sudo chown -R 1000:1000 /opt/ivaap/ivaap-volumes/logs
echo 'Modifying proxy logs dir permissions'
sudo chown -R 101:101 /opt/ivaap/ivaap-volumes/logs/proxy
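The mkdir call in the script above relies on bash brace expansion to create all five log directories in one command; echo can preview the expansion without touching the filesystem:

```shell
# Preview the paths produced by the brace expansion (nothing is created).
echo /opt/ivaap/ivaap-volumes/logs/{activemq,adminserver,backend,proxy,scheduledtasks}
```

Note that brace expansion is a bash feature; running the script with plain sh would create a single literal directory instead.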
Deployment¶
From this point, your VM is ready for deployment. This K3s-specific guide does not go into detail on creating the deployment yaml file, but it does highlight K3s-specific options.
Before proceeding in this guide, refer to the General Helm Configuration Guide for creating pre-deployment secrets for TLS and container registry authentication, as well as for determining which Deployment Method to choose. For K3s deployments, we recommend the Helm with Multiple Values Files method of deployment, but all other options are possible.
PostgreSQL Database Setup¶
Zalando Postgres Operator¶
IVAAP on K3s supports the Zalando Postgres Operator as a same-host solution for the IVAAP database. Zalando's operator is open source and MIT-licensed. In ivaap-helpers, we have included a script to make installing this operator simple: ivaap-helpers/scripts/ivaap-helm-template/deploy-k3s-postgres.sh
Also included in the package should be a starting database schema. The script will ask for the full path of this schema so that it can be loaded into Zalando Postgres. This will be a .sql file. Before running the script, copy the full path of this schema so it can be pasted into the script later.
user@linux:/opt/ivaap$ ls
docker-images.tar.gz IVAAPHelmTemplate ivaap-helpers ivaap-postgres-2024.1-2024-12-06.sql
user@linux:/opt/ivaap$ readlink -f ivaap-postgres-2024.1-2024-12-06.sql
/opt/ivaap/ivaap-postgres-2024.1-2024-12-06.sql
Now simply run the script, and paste the database schema path when prompted:
user@linux:/opt/ivaap/ivaap-helpers/ivaap-helm-template$ ./deploy-k3s-postgres.sh
This script will install Zalando PostgreSQL Operator. This is intended for use with IVAAP 2025.1+ K3s single server VM deployments only.
Ensure that K3s is installed and running, and that /opt/ivaap/ivaap-volumes directory has been created.
The script will require user input of the full path to the provided database schema to load.
get IVAAP running on your system. Please refer the IVAAP Deployment Operations Guide for full deployment steps.
Proceed with Zalando Operator installation? (y/n) y
Enter full path for postgres dump to load into the database: /opt/ivaap/ivaap-postgres-2024.1-2024-12-06.sql
The script will now do the following:
- Clone Zalando's GitHub repo into /opt/ivaap/ivaap-volumes/postgres-operator
- Deploy from this repo:
  - Configmap
  - RBAC Service Account
  - The Postgres Operator Server
- Create and apply a new file, /opt/ivaap/ivaap-volumes/postgres-operator/ivaap-postgres.yaml. This file contains the config for the IVAAP database, such as user, db name, PostgreSQL version, and more.
- Wait for the postgres pod to be ready and available
- Load the IVAAP schema
Here is an example of successful output, condensed (the schema-loading portion of the output will be very long):
success
Cloning into 'postgres-operator'...
remote: Enumerating objects: 29433, done.
remote: Counting objects: 100% (131/131), done.
remote: Compressing objects: 100% (96/96), done.
remote: Total 29433 (delta 91), reused 35 (delta 35), pack-reused 29302 (from 2)
Receiving objects: 100% (29433/29433), 33.94 MiB | 39.41 MiB/s, done.
Resolving deltas: 100% (21386/21386), done.
Deploying Postgres Operator to 'default' namespace...
configmap/postgres-operator created
serviceaccount/postgres-operator created
clusterrole.rbac.authorization.k8s.io/postgres-operator created
clusterrolebinding.rbac.authorization.k8s.io/postgres-operator created
clusterrole.rbac.authorization.k8s.io/postgres-pod created
deployment.apps/postgres-operator created
Waiting for postgres-operator pod to be ready...
deployment.apps/postgres-operator condition met
Deploying PostgreSQL cluster...
postgresql.acid.zalan.do/ivaap-postgres-cluster created
Waiting for Postgres pod to be ready...
Waiting for Postgres pod to become Ready...
Waiting for Postgres pod to become Ready...
Postgres pod is Ready.
SET
SET
SET
SET
SET
set_config
------------
(1 row)
SET
SET
SET
SET
CREATE SCHEMA
ALTER SCHEMA
CREATE SCHEMA
ALTER SCHEMA
CREATE SCHEMA
ALTER SCHEMA
CREATE SCHEMA
ALTER SCHEMA
COMMENT
CREATE EXTENSION
COMMENT
CREATE EXTENSION
COMMENT
CREATE EXTENSION
COMMENT
CREATE TYPE
ALTER TYPE
CREATE TYPE
ALTER TYPE
SET
SET
CREATE TABLE
ALTER TABLE
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
Success!
Zalando PostgreSQL Operator installed successfully. Please ensure no errors are observed in the output while the schema was loaded.
Zalando Postgres creates and stores the user password as a Kubernetes secret. To collect this password, run the following command:
kubectl get secret ivaapserver.ivaap-postgres-cluster.credentials.postgresql.acid.zalan.do -n default -o jsonpath='{.data.password}' | base64 -d && echo
It is important to ensure no major errors appear in the postgres schema-loading section of the output. Certain errors, such as an object that 'already exists', can be ignored.
As seen at the bottom of the output, Zalando creates and stores the user password as a Kubernetes secret. This password needs to be retrieved so that it can be configured in the deployment yaml for the database connection. Below is example output from the command used to retrieve this password:
user@linux:/opt/ivaap$ kubectl get secret ivaapserver.ivaap-postgres-cluster.credentials.postgresql.acid.zalan.do -n default -o jsonpath='{.data.password}' | base64 -d && echo
KZYuNL31HDWccCAeE5MEeiEeaghRItIQfz7rA9wphwLsZ97t6tBYwV8UwEDG5pbi
Later, this password will need to be encrypted, base64 encoded, and set for the environment variable IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTED_PASSWORD. For the steps to do this, refer to Encrypting Sensitive Passwords for IVAAP Java Components in the IVAAP Configuration Guide.
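As a minimal sketch of the base64 step only (the encryption itself is covered in the Configuration Guide; 'ciphertext-from-encryption-tool' below is a placeholder, not a real encrypted value):

```shell
# Placeholder standing in for the output of the encryption step.
encrypted='ciphertext-from-encryption-tool'
# Base64-encode it for use as IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTED_PASSWORD.
encoded="$(printf '%s' "$encrypted" | base64)"
echo "$encoded"
# Sanity check: decoding returns the original value.
printf '%s' "$encoded" | base64 -d && echo
```

Using printf rather than echo avoids accidentally encoding a trailing newline into the value.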
Database connection configuration for Zalando can be found under .Values.secrets.type.k8sSecrets.adminserver-conf-secrets. All secrets beginning with IVAAP_SERVER_ADMIN_DATABASE will need to be configured for Zalando connection. The below checklist can be used for this setup:
- [ ] IVAAP_SERVER_ADMIN_DATABASE_HOST = ivaap-postgres-cluster.default.svc.cluster.local
- [ ] IVAAP_SERVER_ADMIN_DATABASE_NAME = ivaapdb
- [ ] IVAAP_SERVER_ADMIN_DATABASE_PORT = 5432
- [ ] IVAAP_SERVER_ADMIN_DATABASE_USERNAME = ivaapserver
- [ ] IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTION_KEY = Encryption key used to encrypt the password
- [ ] IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTED_PASSWORD = Encrypted password
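As a hedged sketch, the checklist above might translate into a deployment yaml fragment like the following. The exact nesting should be confirmed against the values.yaml shipped in your delivery, and the encryption key and password values here are placeholders:

```yaml
secrets:
  type:
    k8sSecrets:
      adminserver-conf-secrets:
        IVAAP_SERVER_ADMIN_DATABASE_HOST: "ivaap-postgres-cluster.default.svc.cluster.local"
        IVAAP_SERVER_ADMIN_DATABASE_NAME: "ivaapdb"
        IVAAP_SERVER_ADMIN_DATABASE_PORT: "5432"
        IVAAP_SERVER_ADMIN_DATABASE_USERNAME: "ivaapserver"
        IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTION_KEY: "<your-encryption-key>"          # placeholder
        IVAAP_SERVER_ADMIN_DATABASE_ENCRYPTED_PASSWORD: "<base64-encrypted-password>" # placeholder
```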
External PostgreSQL¶
In K3s deployments, you are not required to use the Zalando postgres database. IVAAP can connect to any PostgreSQL database, as long as the VM has access to it and the database meets all requirements per the IVAAP Technical Datasheet.
This could be an RDS instance, Azure managed postgres service, self-hosted external database, or even a database running directly on the host as a container or systemd service.
For in-depth details, refer to IVAAP Deployment Operations Guide - Database Administration. Additionally, refer to the PostgreSQL Connection and Configmap section of the IVAAP Configuration guide.
Geofiles¶
YAML Configuration¶
The Geofiles connector consists of three sharded nodes:
- geofilesmasternode
- geofilesseismicnode
- geofilesreservoirsnode
These nodes require the geofiles data to be mounted into each of the geofilesnode containers. By default, geofiles and other nodes using persistent volume claims default to hostPath when deployed on K3s, using data on the local host's file system. However, these nodes are also compatible with other deployment architectures if using NFS or SMB to mount data. For more information, see the Data Node Persistent Volumes (datanodePVCs) section of the General Helm Configuration guide.
In the helm template, all geofiles configuration is located in values.yaml:
ivaapBackendNodes:
  dataNodes:
    # ----- Geofiles Data Nodes
    geofilesmasternode:
      enabled: false
      repoName: ivaap/backend/geofilesmasternode
      tag: geofilesmasternode-3.0-15-53c7ad8-20260313T170919Z
      pvcs:
        - geofiles
    geofilesseismicnode:
      enabled: false
      repoName: ivaap/backend/geofilesseismicnode
      tag: geofilesseismicnode-3.0-15-53c7ad8-20260313T170919Z
      pvcs:
        - geofiles
    geofilesreservoirsnode:
      enabled: false
      repoName: ivaap/backend/geofilesreservoirsnode
      tag: geofilesreservoirsnode-3.0-15-53c7ad8-20260313T170919Z
      pvcs:
        - geofiles
  datanodePVCs:
    geofiles:
      localPath: /opt/ivaap/ivaap-volumes/geofiles
      mountPath: /opt/ivaap/ivaap-volumes/geofiles
For your deployment yaml file, if using the default images, the geofiles configuration can be condensed like so:
ivaapBackendNodes:
  dataNodes:
    geofilesmasternode:
      enabled: true
    geofilesseismicnode:
      enabled: true
    geofilesreservoirsnode:
      enabled: true
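If your geofiles data lives somewhere other than the default location, the datanodePVCs values can be overridden in the deployment yaml as well. A hedged sketch (the /data/geofiles host path below is hypothetical; the container-side mountPath is left at its default):

```yaml
ivaapBackendNodes:
  datanodePVCs:
    geofiles:
      localPath: /data/geofiles                      # hypothetical alternate host path
      mountPath: /opt/ivaap/ivaap-volumes/geofiles   # path seen inside the containers
```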
Data Permissions¶
Since the backend node containers all use UID and GID 1000, ensure these same permissions are set on the geofiles directory.
sudo chown -R 1000:1000 /opt/ivaap/ivaap-volumes/geofiles
After deployment, check inside one of the geofilesnode containers in the backend pod to ensure the data is mounted:
# This command shown in this code block uses a function from ivaap-helpers to exec into the container.
# It is equivalent to the following command:
# kubectl exec -it <backend-pod-name> -n ivaap -c geofilesmasternode -- /bin/bash
user@linux:/opt/ivaap$ ki exec ivaap backend geofilesmasternode
ubuntu@ivaap-backend-deployment-765b4dc65-4xvjq:/opt/ivaap/ivaap-playserver/deployment$ ls /opt/ivaap/ivaap-volumes/geofiles/
34_10_A seismic data
IVAAP Kubernetes Secrets¶
In K3s IVAAP deployments, only native Kubernetes secrets are supported at this time. In the future, we hope to support Azure Key Vault or AWS Secrets Manager in K3s deployments.
For steps on how to configure kubernetes secrets, refer to General Helm Configuration Guide's section on native kubernetes secrets.