This page describes the installation of K3s using the OpsRamp ISO/OVA for both single-node and multi-node setups.

Overview

The OpsRamp-provided ISO and OVA images come pre-bundled with K3s, a lightweight Kubernetes distribution. This integration simplifies the Kubernetes installation process, allowing you to bring up a functional cluster using a single command with no complex configuration required.

About K3s in OpsRamp Gateway

K3s is used to host and run the NextGen Gateway as a collection of Kubernetes-native applications. These applications are deployed using Helm charts and rely on container images, so access to a container registry is essential during installation and runtime.

  • By default, the Gateway is configured to use the OpsRamp cloud-hosted container registry to pull the required Helm charts and images.
  • Alternatively, organizations can configure their own private or public registry to meet internal security or compliance requirements.

Registry Configuration (Optional, Based on Use Case)

Before installing K3s, it is recommended to review your container registry configuration. This step is optional if you plan to use the default OpsRamp registry.

When You Can Skip Registry Configuration

If you are using the OpsRamp registry to pull Helm charts and Docker images, no additional configuration is required. The Gateway will automatically connect to OpsRamp’s registry during deployment.

When You Must Configure a Custom Registry

If you choose not to use the OpsRamp registry and instead prefer to use your own registry (public or private), you must configure it before installing K3s.

Steps to Configure a Custom Registry

  1. Open the registry configuration template:
    vi /var/cgw/asserts_k3s/registries.yaml.template
  2. Uncomment the configs section and update it with your registry credentials and endpoint:
    mirrors:
      artifact.registry:
        endpoint:
          - "https://{your-private-repo}"
    
    configs:
      "{your-private-repo}":
        auth:
          username: "{your-username}"
          password: "{your-password}"
    Ensure proper YAML indentation to avoid syntax issues during deployment.
    Example:
    mirrors:
      artifact.registry:
        endpoint:
          - "https://hub.testrepo.com"
    
    configs:
      "hub.testrepo.com":
        auth:
          username: "test-user"
          password: "test-password"
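The structure of the filled-in file can be sanity-checked programmatically before you install K3s. Below is a minimal Python sketch that renders the same structure from an endpoint and credentials; the `render` helper and its values are illustrative and not part of the OpsRamp tooling — in practice you edit the template file directly:

```python
# Illustrative helper: render the registries.yaml structure from values.
# In a real deployment you edit /var/cgw/asserts_k3s/registries.yaml.template.
TEMPLATE = """\
mirrors:
  artifact.registry:
    endpoint:
      - "https://{repo}"

configs:
  "{repo}":
    auth:
      username: "{user}"
      password: "{password}"
"""

def render(repo: str, user: str, password: str) -> str:
    """Fill the registry template with a repo endpoint and credentials."""
    return TEMPLATE.format(repo=repo, user=user, password=password)

print(render("hub.testrepo.com", "test-user", "test-password"))
```

Printing the rendered text makes it easy to eyeball the indentation, which is the most common source of deployment-time YAML errors.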

Installation Steps

The steps below cover two deployment types: Single Node first, then Multi-Node (HA).

Single Node

Step 1: Installing K3s

Once registry configuration is complete (if applicable), install K3s using the following command:

opsramp-collector-start setup init

Optional: Custom Pod and Service CIDR Ranges

If your environment requires custom network ranges for pods or services, use the following command with CIDR flags:
opsramp-collector-start setup init --cluster-cidr <pod-cidr> --service-cidr <service-cidr>

Step 2: Verifying K3s Installation

Upon successful installation, you will see output similar to:

root@node1:/home/gateway-admin# opsramp-collector-start setup init
Installing K3s
Installed K3s

Check the K3s service status:

service k3s status

Expected output:

● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2025-04-24 11:12:21 UTC; 1min 10s ago
   Docs: https://k3s.io
   Process: 1931925 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
   Process: 1931926 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 1931927 (k3s-server)

Troubleshooting

If K3s fails to install or the service is not in a running state:

  • Review the installation logs.
  • Check for registry access errors or YAML syntax issues in the registry configuration.
  • Review the k3s service logs to identify the underlying errors (for example, etcd timeouts, API server errors, or networking/CNI failures):
    sudo journalctl -u k3s

Multi-Node

Step 1: Installing K3s

Step 1a: Initialize the First Node

To install K3s and initialize the NextGen Gateway on the first node in an HA cluster, use the following command:

opsramp-collector-start setup init --enable-ha=true --loadbalancer-ip {loadbalancerIp}

Load Balancer IP Configuration ({loadbalancerIp})

The {loadbalancerIp} parameter supports specifying single IPs, IP ranges, or CIDR blocks. Below are the supported formats and examples:

IP range                         Description                            Result
192.25.254.45/32                 Adds a single MetalLB IP               192.25.254.45
192.25.254.45/30                 Adds the 4 IPs in the /30 block        192.25.254.44 - 192.25.254.47
192.25.254.44 - 192.25.254.49    Adds a custom IP range                 192.25.254.44 - 192.25.254.49

Examples of {loadbalancerIp} Usage

  • Single IP:
    --loadbalancer-ip 192.25.254.45/32
  • CIDR block of 4 IPs:
    --loadbalancer-ip 192.25.254.45/30
  • Custom IP range:
    --loadbalancer-ip 192.25.254.44-192.25.254.49
  • Multiple IPs and ranges (comma-separated):
    --loadbalancer-ip 192.25.251.12/32,192.25.251.23/28,192.25.251.50-192.25.251.56
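The formats above can be checked locally before running the command. The sketch below uses Python's standard ipaddress module to expand a {loadbalancerIp} spec into the concrete IPs MetalLB would receive; the `expand` helper is illustrative and not part of the OpsRamp tooling:

```python
import ipaddress

def expand(spec: str) -> list:
    """Expand a comma-separated {loadbalancerIp} spec into individual IPs."""
    ips = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            # Custom range, e.g. "192.25.254.44-192.25.254.49"
            start, end = (ipaddress.ip_address(p.strip()) for p in part.split("-"))
            ips += [str(ipaddress.ip_address(i)) for i in range(int(start), int(end) + 1)]
        else:
            # CIDR block; strict=False accepts a host address such as .45/30
            net = ipaddress.ip_network(part, strict=False)
            ips += [str(ip) for ip in net]
    return ips

print(expand("192.25.254.45/32"))  # ['192.25.254.45']
print(expand("192.25.254.45/30"))  # the 4 IPs 192.25.254.44 - 192.25.254.47
```

This mirrors the table above: a /32 yields one IP, a /30 yields the 4 addresses of its block, and a dash range yields every address between the endpoints inclusive.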

CIDR Range Configuration (Optional)

If CIDR ranges are not specified during K3s cluster setup, the following defaults will be applied:

  • Cluster CIDR: 10.42.0.0/16
  • Service CIDR: 10.43.0.0/16

To use custom CIDR ranges, include the --cluster-cidr and --service-cidr flags, ensuring each CIDR falls within a /16 subnet:

opsramp-collector-start setup init --enable-ha=true --loadbalancer-ip {loadbalancerIp} --cluster-cidr <cluster-cidr-ip> --service-cidr <service-cidr-ip>
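Custom ranges can be validated before running the command. Below is a minimal Python sketch using the standard ipaddress module; the `check_cidrs` helper is illustrative, and "fall within a /16 subnet" is read here as "no wider than /16" — an assumption about the requirement above:

```python
import ipaddress

def check_cidrs(cluster_cidr: str, service_cidr: str) -> None:
    """Basic sanity checks for custom --cluster-cidr / --service-cidr values."""
    cluster = ipaddress.ip_network(cluster_cidr)
    service = ipaddress.ip_network(service_cidr)
    # Assumption: "fall within a /16 subnet" means no wider than /16.
    for net in (cluster, service):
        if net.prefixlen < 16:
            raise ValueError(f"{net} is wider than /16")
    # The pod and service ranges must not overlap each other.
    if cluster.overlaps(service):
        raise ValueError(f"{cluster} overlaps {service}")

check_cidrs("10.52.0.0/16", "10.53.0.0/16")  # valid: passes silently
```

An invalid pair (for example, a /8 cluster CIDR, or service and cluster ranges that overlap) raises ValueError before you commit the values to the cluster.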

Step 1b: Adding Additional Nodes to the Cluster

After completing the setup of the first node, the next step is to add additional nodes to the existing Kubernetes cluster to achieve High Availability (HA).

Step i: Generate the K3s Token on the First Node

Run the following command on the first node to generate the token required for joining new nodes to the cluster:

opsramp-collector-start setup node token

This command outputs a token used to authenticate new nodes when they securely join the cluster.

Step ii: Add a New Node to the Cluster

On each new node that you want to add, run the following command to join it to the existing cluster:

opsramp-collector-start setup node add -u https://{NodeIp}:6443 -t {token}

  • Replace {NodeIp} with the IP address of the first node (the initial control-plane node).
  • Replace {token} with the token generated in Step i.

This command installs K3s on the new node and joins it to the cluster.

Step iii: Verify Node Addition

After joining the new node, verify that it has successfully been added by running this command on the first node or any node with access to the cluster:
kubectl get nodes

Sample Output:

NAME    STATUS   ROLES                       AGE     VERSION
node1   Ready    control-plane,etcd,master   8m3s    v1.23.5+k3s1
node2   Ready    control-plane,etcd,master   5m13s   v1.23.5+k3s1

Repeat Steps i, ii, and iii for each additional node to build your HA cluster. For example:

  • For a 3-node cluster, add the third node following the above steps.
  • For a 5-node cluster, repeat the process on the third, fourth, and fifth nodes.
  • Continue as needed for larger clusters.
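Odd node counts are recommended because K3s HA uses embedded etcd, which keeps quorum only while a majority of server nodes are reachable. The short illustration below computes how many node failures each cluster size tolerates (standard etcd majority math, not OpsRamp-specific):

```python
def etcd_fault_tolerance(nodes: int) -> int:
    """Server nodes that can fail while etcd still holds a majority."""
    return (nodes - 1) // 2

for n in (1, 3, 4, 5):
    print(f"{n} nodes -> tolerates {etcd_fault_tolerance(n)} failure(s)")
```

Note that a 4-node cluster tolerates no more failures than a 3-node cluster, which is why even node counts add cost without adding resilience.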

Step 2: Verify the Cluster Status

After adding all nodes to your High Availability (HA) cluster, verify that the Kubernetes environment is healthy and all essential components are running correctly.

Step 2a: Confirm Node Readiness

Ensure all nodes in the cluster are in a Ready state:

kubectl get nodes

Sample Output:

NAME    STATUS   ROLES                       AGE     VERSION
nodea   Ready    control-plane,etcd,master   8m3s    v1.23.5+k3s1
nodeb   Ready    control-plane,etcd,master   5m13s   v1.23.5+k3s1
nodec   Ready    control-plane,etcd,master   4m5s    v1.23.5+k3s1

Step 2b: Verify Helm Chart Deployments

Ensure required Helm-based services like Longhorn (for storage) and MetalLB (for load balancing) have been deployed successfully:

helm list -A

Sample Output:

NAME              NAMESPACE           REVISION    UPDATED                                    STATUS      CHART                  APP VERSION
ipaddresspool     kube-system         1           2025-07-09 07:10:34 +0000 UTC              deployed    ipaddresspool-2.0.0    1.16.0
longhorn          longhorn-system     1           2025-07-09 07:08:04 +0000 UTC              deployed    longhorn-1.6.2         v1.6.1
metallb           kube-system         1           2025-07-09 07:07:59 +0000 UTC              deployed    metallb-6.3.6          0.14.5

Step 2c: Check Pod Status Across All Namespaces

Verify that all pods are in a Running state and all containers within them are Ready:

kubectl get pods -A

Sample Output:

NAMESPACE         NAME                                        READY   STATUS    RESTARTS   AGE
default           nextgen-gw-0                                4/4     Running   0          23h
default           stan-0                                      2/2     Running   0          23h
kube-system       coredns-d76bd69b-n8jhh                      1/1     Running   0          23h
kube-system       metallb-controller-7954c9c84d-pm89k         1/1     Running   0          23h
kube-system       metallb-speaker-j69tp                       1/1     Running   0          23h
kube-system       metallb-speaker-mddqj                       1/1     Running   0          23h
kube-system       metallb-speaker-n45g4                       1/1     Running   0          23h
kube-system       metrics-server-7cd5fcb6b7-tvnps             1/1     Running   0          23h
longhorn-system   csi-attacher-76c9f797d7-2jg5w               1/1     Running   0          23h
longhorn-system   csi-attacher-76c9f797d7-qhs85               1/1     Running   0          23h
longhorn-system   csi-attacher-76c9f797d7-qn9pr               1/1     Running   0          23h
longhorn-system   csi-provisioner-b749dbdf9-chjs9             1/1     Running   0          23h
longhorn-system   csi-provisioner-b749dbdf9-hbwx2             1/1     Running   0          23h