kubeopsctl - Seamlessly Master Kubernetes Clusters without Downtime

Kubernetes has taken the cloud computing world by storm, but managing a Kubernetes cluster can be a daunting task. That's why we developed kubeopsctl, a CLI tool aimed at simplifying the process of setting up and maintaining your Kubernetes clusters. With kubeopsctl, you can bring up an entire Kubernetes cluster, complete with master and worker nodes, monitoring, a storage system, a logging stack, a private image registry, and common tools such as the NGINX Ingress Controller and Keycloak, all through a single YAML configuration file.


Introduction

In the world of cloud computing, Kubernetes has become crucial, but managing a Kubernetes cluster is not always easy. That's where kubeopsctl comes in. This CLI tool makes it simple to set up and manage your Kubernetes clusters. With just one YAML file, kubeopsctl lets you create a complete cluster, including master and worker nodes, a monitoring stack, storage, logging, a private image registry, OPA (Open Policy Agent), and common tools like the NGINX Ingress Controller, Keycloak, and more.

What is kubeopsctl?

kubeopsctl is a command-line interface (CLI) tool designed to simplify the process of deploying and managing Kubernetes clusters. Whether you are new to Kubernetes or an experienced operator, kubeopsctl is designed to help.

Main Features

  • Single Configuration: With just one YAML file, you can set up your whole cluster. This includes master and worker nodes, extra tools, ports, storage sizes, and more.

  • Upgrade without outage: The zone configuration feature allows you to upgrade your cluster without any downtime. You can distribute master and worker nodes across different zones and perform operations at the zone level. In this way, you can carry out cluster operations with zero outage.

  • Personalization: You can easily set up both master and worker nodes, and specify requirements for each.

  • Extra Tools: You can also add software like Prometheus for monitoring, Keycloak for added security, Certman for certificate management, and more. When you deploy these tools, you can configure their values in the same YAML file.

  • Security: kubeopsctl ships with built-in security settings, such as pre-defined port numbers, SSL-enforcement options, and password and secret configuration.
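To illustrate the zone mechanism, the fragment below is condensed from the full example at the end of this page. One zone stays active and keeps serving while the other is drained for the upgrade, which is what makes zero-outage operations possible. The node names, IPs, and versions are illustrative:

```yaml
# Condensed zone layout: upgrades proceed zone by zone.
# Setting a node's status to "drained" evacuates its workloads to the
# remaining active zones before the node itself is touched.
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11      # key spelling as used in the kubeopsctl file format
          user: myuser
          status: active            # keeps serving during the upgrade
          kubeversion: 1.27.2       # already on the new version
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          status: drained           # workloads evacuated; safe to upgrade
          kubeversion: 1.26.4       # still on the old version
```

Once zone2 is upgraded, you would flip the statuses and repeat for the next zone, so at least one zone is always serving.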


Sample Configuration: A Detailed Look

The YAML file is the cornerstone of kubeopsctl's functionality. A complete annotated example appears at the end of this page; the sections below walk through its components:

Cluster and User Information: At the very beginning of the YAML file, basic but critical information about your Kubernetes cluster is defined. This includes the desired name of the cluster, the version of Kubernetes you're planning to install, and the credentials for accessing the cluster.

Master and Worker Nodes: This is where the topology of your cluster takes shape. Within the designated zones, you can specify whether each node is a master or a worker, and further provide details about its IP address, system configuration, and even resource quotas.

Optional Components: kubeopsctl lets you pick and choose additional services such as rook-ceph for storage, harbor as a container registry, and opensearch for distributed search and analytics. You can also enable the NGINX Ingress Controller, Keycloak, and more. These can be toggled on or off based on the project's needs.

Advanced Settings: For those looking to delve deeper, kubeopsctl permits advanced configurations such as custom storage classes, specific volume sizes, reuse of existing secrets, and predefined passwords. This provides a granular level of control for those who need it.

Security Features: Beyond basic firewall settings, kubeopsctl offers options for configuring nftables, container runtimes like containerd, and predefined port numbers, giving you a solid baseline for securing the cluster.
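Taken together, the five areas above map onto the configuration file in roughly this shape. This is only a condensed skeleton with placeholder values; the full annotated example follows below:

```yaml
# 1. Cluster and user information
clusterName: "testkubeopsctl"
clusterUser: "myuser"
kubernetesVersion: "1.26.4"
# 2. Topology: master and worker nodes, grouped into zones
zones:
  - name: zone1
    nodes:
      master: []   # node entries as in the full example below
      worker: []
# 3. Optional components, toggled on or off
rook-ceph: false
harbor: false
ingress: false
keycloak: false
# 4. Advanced settings, e.g. storage classes and volume sizes
storageClass: "rook-cephfs"
# 5. Security: firewall and container runtime
firewall: "nftables"
containerRuntime: "containerd"
```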


Code example:

apiVersion: kubeops/kubeopsctl/alpha/v1  # mandatory
kubeOpsUser: "demo" # mandatory,  change to your username
kubeOpsUserPassword: "Password" # mandatory,  change to your password
kubeOpsUserMail: "mail@example.com" # change to your email
imagePullRegistry: "registry1.kubernative.net/lima" # mandatory
localRegistry: false # mandatory
### Values for setup configuration ###
clusterName: "testkubeopsctl"  # mandatory
clusterUser: "mnyuser"  # mandatory
kubernetesVersion: "1.26.4" # mandatory, check lima documentation
#masterHost: optional if you have a hostname, default value in "masterIP"
masterIP: 10.2.10.11 # mandatory
firewall: "nftables" # mandatory, default "nftables"
pluginNetwork: "calico" # mandatory, default "calico"
containerRuntime: "containerd" # mandatory, default "containerd"
limaRoot: "/home/myuser/kubeops/lima" # optional, default: "/var/lima" 
clusterOS: "Red Hat Enterprise Linux" # optional, can be "Red Hat Enterprise Linux" or "openSUSE Leap"; remove this line to use the OS already installed on the admin machine (it must be one of these two)
### Additional values for cluster configuration
useInsecureRegistry: false # optional, default is false
ignoreFirewallError: false # optional, default is false
serviceSubnet: 192.168.128.0/17 # optional, default "192.168.128.0/17"
podSubnet: 192.168.0.0/17 # optional, default "192.168.0.0/17"
debug: true # optional, default is true
logLevel: vvvvv # optional, default "vvvvv"
systemCpu: "1" # optional, default "1"
systemMemory: "2G" # optional, default "2G"
sudo: true # optional, default is true
# at least 3 masters and 3 workers are needed
zones:
  - name: zone1
    nodes:
      master:
        - name: cluster1master1
          ipAdress: 10.2.10.11
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.26.4
        - name: cluster1master2
          ipAdress: 10.2.10.12
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.27.2
      worker:
        - name: cluster1worker1
          ipAdress: 10.2.10.14
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.26.4
        - name: cluster1worker2
          ipAdress: 10.2.10.15
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.27.2
  - name: zone2
    nodes:
      master:
        - name: cluster1master3
          ipAdress: 10.2.10.13
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: drained
          kubeversion: 1.26.4  
      worker:
        - name: cluster1worker3
          ipAdress: 10.2.10.16
          user: myuser
          systemCpu: 100m
          systemMemory: 100Mi 
          status: active
          kubeversion: 1.26.4

# set to true if you want to install it into your cluster
rook-ceph: false # mandatory
harbor: false # mandatory
opensearch: false # mandatory
opensearch-dashboards: false # mandatory
logstash: false # mandatory
filebeat: false # mandatory
prometheus: false # mandatory
opa: false # mandatory
headlamp: false # mandatory
certman: false # mandatory
ingress: false # mandatory
keycloak: false # mandatory

nameSpace: "kubeops" #optional, the default value is different for each application
storageClass: "rook-cephfs" # optional, default value is "rook-cephfs"

###Values for Rook-Ceph###
rookValues:
  namespace: kubeops
  nodePort: 31931 # optional, default: 31931
  cluster:
    spec:
      dataDirHostPath: "/var/lib/rook" # optional, The path on the host where configuration files will be persisted. Default value: "/var/lib/rook"
    removeOSDsIfOutAndSafeToRemove: true # optional, default is true
    storage:
      useAllNodes: true # optional, default value: true
      useAllDevices: true # optional, default value: true
      # Global filter to select only certain device names. This example matches names starting with sda or sdb.
      # Only used if useAllDevices is set to false; ignored if individual devices have been specified on a node.
      deviceFilter: "^sd[a-b]"
      # This setting can be used to store metadata on a different device. Only recommended if an additional metadata device is available.
      # Optional, will be overwritten by the corresponding node-level setting.
      config:
        metadataDevice: "sda"
      # Names of individual nodes in the cluster that should have their storage included.
      # Will only be used if useAllNodes is set to false.
      nodes:
        - name: "<ip-address of node_1>"
          devices:
            - name: "sdb" 
        - name: "<ip-address of node_2>"
          deviceFilter: "^sd[a-b]"
          config:
            metadataDevice: "sda" # optional
    # This setting allows requesting resources for the mgr, mon and osd pods.
    resources:
      mgr:
        requests:
          cpu: "500m" # optional, default is 500m
          memory: "1Gi" # optional, default is 1Gi
      mon:
        requests:
          cpu: "2" # optional, default is 2
          memory: "4Gi" # optional, default is 4Gi
      osd:
        requests:
          cpu: "2" # optional, default is 2
          memory: "4Gi" # optional, default is 4Gi
  operator:
    data:
      rookLogLevel: "DEBUG" # optional, default is DEBUG
  # Specify the filesystem type of the volume. Default value: "ext4"
  blockStorageClass:
    parameters:
      fstype: "ext4" # optional, default value is ext4
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Postgres ###
postgrespass: "password" # mandatory, set password for harbor postgres access 
postgres:
  storageClassName: "rook-cephfs" # optional, default is rook-cephfs
  volumeMode: "Filesystem" # optional, default is  Filesystem
  accessModes: ["ReadWriteMany"] #optional, default is [ReadWriteMany]
  resources:
    requests:
      storage: 2Gi # mandatory, depending on storage capacity
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Redis ###
redispass: "password" # mandatory, set password for harbor redis access 
redis:
  storageClassName: "rook-cephfs" # optional, default is rook-cephfs
  volumeMode: "Filesystem" # optional, default is  Filesystem
  accessModes: ["ReadWriteMany"] #optional, default is [ReadWriteMany]
  resources:
    requests:
      storage: 2Gi # mandatory depending on storage capacity
#-------------------------------------------------------------------------------------------------------------------------------
### Values for Harbor deployment ###
## For a detailed explanation of each key see: https://artifacthub.io/packages/helm/harbor/harbor/1.8.1#configuration ##
harborValues: 
  namespace: kubeops # optional, default is kubeops
  harborpass: "password" # mandatory: set password for harbor access 
  externalURL: https://10.2.10.13 # mandatory, the IP address from which harbor is accessible outside of the cluster
  nodePort: 30003
  harborPersistence:
    persistentVolumeClaim:
      registry:
        size: 5Gi # mandatory, depending on storage capacity
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      chartmuseum:
        size: 5Gi # mandatory, depending on storage capacity
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      jobservice:
        jobLog:
          size: 1Gi # mandatory: Depending on storage capacity
          storageClass: "rook-cephfs" #optional, default is rook-cephfs
        scanDataExports:
          size: 1Gi # mandatory: Depending on storage capacity
          storageClass: "rook-cephfs" #optional, default is rook-cephfs
      database:
        size: 1Gi # mandatory, depending on storage capacity
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      redis:
        size: 1Gi # mandatory, depending on storage capacity
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
      trivy: 
        size: 5Gi # mandatory, depending on storage capacity
        storageClass: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for filebeat deployment###
filebeatValues:
  namespace: kubeops # optional, default is kubeops   
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Logstash deployment###
##For a detailed explanation of each key see: https://github.com/elastic/helm-charts/releases/tag/v7.16.3###
logstashValues:
  namespace: kubeops
  volumeClaimTemplate:
    resources:
      requests:
        storage: 1Gi # mandatory, depending on storage capacity
    accessModes: 
      - ReadWriteMany #optional, default is [ReadWriteMany]
    storageClassName: "rook-cephfs" #optional, default is rook-cephfs
#--------------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch-Dashboards deployment###
##For a detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch-dashboards###
openSearchDashboardValues:
  namespace: kubeops
  nodePort: 30050
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OpenSearch deployment###
##For a detailed explanation of each key see: https://github.com/opensearch-project/helm-charts/tree/main/charts/opensearch###
openSearchValues:
  namespace: kubeops
  opensearchJavaOpts: "-Xmx512M -Xms512M" # optional, default is -Xmx512M -Xms512M
  resources:
    requests:
      cpu: "250m" # optional, default is 250m
      memory: "1024Mi" # optional, default is 1024Mi
    limits:
      cpu: "300m" # optional, default is 300m
      memory: "3072Mi" # optional, default is 3072Mi
  persistence:
    size: 4Gi # mandatory
    enabled: "true" # optional, default is true
    enableInitChown: "false" # optional, default is false
    labels:
      enabled: "false" # optional, default is false
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    accessModes:
      - "ReadWriteMany" # optional, default is [ReadWriteMany]
  securityConfig:
    enabled: false # optional, default value: false
    ### Additional values can be set, if securityConfig is enabled:
    # path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
    # actionGroupsSecret:
    # configSecret:
    # internalUsersSecret: internal-users-config-secret
    # rolesSecret:
    # rolesMappingSecret:
    # tenantsSecret:
    # config:
    #   securityConfigSecret: ""
    #   dataComplete: true
    #   data: {}
  replicas: "3" # optional, default is 3
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Prometheus deployment###
prometheusValues:
  namespace: kubeops # optional, default is kubeops
  privateRegistry: false # optional, default is false
  grafanaUsername: "user" # optional, default is user
  grafanaPassword: "password" # optional, default is password
  grafanaResources:
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 5Gi # optional, default is 5Gi
    nodePort: 30211 # optional, default is 30211

  prometheusResources:
    storageClass: "rook-cephfs" # optional, default is rook-cephfs
    storage: 25Gi # optional, default is 25Gi
    retention: 10d # optional, default is 10d
    retentionSize: "24GB" # optional, default is 24GB
    nodePort: 32090
#--------------------------------------------------------------------------------------------------------------------------------
###Values for OPA deployment###
opaValues:
  namespace: kubeops

#--------------------------------------------------------------------------------------------------------------------------------
###Values for Headlamp deployment###
headlampValues:
  service:
    nodePort: 30007
#--------------------------------------------------------------------------------------------------------------------------------
###Values for cert-manager deployment###
certmanValues:
  namespace: kubeops
  replicaCount: 3
  logLevel: 2
#--------------------------------------------------------------------------------------------------------------------------------
###Values for ingress-nginx deployment###
ingressValues:
  namespace: kubeops
#--------------------------------------------------------------------------------------------------------------------------------
###Values for Keycloak deployment###
keycloakValues:
  namespace: "kubeops" # Optional, default is "keycloak"
  storageClass: "rook-cephfs" # Optional, default is "rook-cephfs"
  keycloak:
    auth:
      adminUser: admin # Optional, default is admin
      adminPassword: admin # Optional, default is admin
      existingSecret: "" # Optional, default is ""
  postgresql:
    auth:
      postgresPassword: "" # Optional, default is ""
      username: bn_keycloak # Optional, default is "bn_keycloak"
      password: "" # Optional, default is ""
      database: bitnami_keycloak # Optional, default is "bitnami_keycloak"
      existingSecret: "" # Optional, default is ""
