Deploy Cyberwatch on an existing Kubernetes cluster

This page describes the steps to deploy Cyberwatch on an existing Kubernetes cluster. The procedure assumes basic working knowledge of Kubernetes and Helm.

Technical prerequisites

The following prerequisites are necessary to deploy and run Cyberwatch on a Kubernetes cluster.

  1. Have a valid and working DNS entry to access the Cyberwatch application. The configured DNS entry must point to the IP address of the Kubernetes load balancer or of the Ingress Controller. This DNS entry is required, as the application cannot be accessed through its IP address.
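
    For example, you can check that the entry resolves to the load balancer address (the domain below is the placeholder used in the rest of this guide):

    dig +short cyberwatch.example.com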

  2. Have an environment with Helm (version > 3.8.0) and kubectl installed, configured to access the Kubernetes cluster.

  3. The Kubernetes cluster must have the following installed:

    • a DNS resolver such as CoreDNS;
    • a working Ingress Controller (the OVH documentation provides an example of how to set one up);
    • a StorageClass able to dynamically provision PersistentVolumes.

    To make sure the cluster has a DNS resolver:

    kubectl cluster-info
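
    If a DNS resolver is deployed, the output includes a line for it; for example (the URLs will differ on your cluster):

    Kubernetes control plane is running at https://203.0.113.10:6443
    CoreDNS is running at https://203.0.113.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy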
    

Deployment steps

  1. Log in to the Helm repository:

    helm registry login harbor.cyberwatch.fr
    

    Enter the username, prefixed with cbw$, then enter the password.

    These credentials are the ones provided with your Cyberwatch license. If you do not have them, please contact us at support@cyberwatch.fr.
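
    The login can also be performed non-interactively by passing the credentials as flags; the values below are placeholders, and the single quotes prevent the shell from expanding the $ in the username:

    helm registry login harbor.cyberwatch.fr --username 'cbw$changeme' --password 'changeme'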

  2. Create and edit the configuration file values.yml.

    Store the values.yml file securely: it is required when updating the Docker images or the Helm chart.

    The following steps describe how to set up a minimal configuration file for deploying the Cyberwatch application.

    Here is an example of the values.yml in its minimal configuration:

    global:
      # storageClass:
    
      image:
        registryCredentials:
          - name: cyberwatch-credentials
            registry: harbor.cyberwatch.fr/cbw-on-premise
            username: changeme
            password: changeme
    
    nginx:
      resolver: changeme
    
    ingress:
      enabled: true
      ingressClassName: nginx
      host: cyberwatch.example.com
      tls:
        selfSigned: true
    
    thirdParties:
      enabled: false
    
    database:
      password: "changeme"
      root_password: "changeme"
    
    redis:
      password: "changeme"
    
    key:
      base: "changeme"
      credential: "changeme"
    
    node:
      name: cyberwatch-node-name
      type: single
    
  3. Set the credentials used to pull the Docker images. The username and password are the same as those used to log in to the Helm chart repository.

    global:
      image:
        registryCredentials:
          - name: cyberwatch-credentials
            registry: harbor.cyberwatch.fr/cbw-on-premise
            username: changeme
            password: changeme
    
  4. Configure the global.storageClass field, which defines the type of storage used by the PersistentVolumeClaims that hold the persistent data.

    By default, the Helm chart configures the application to store its data on the machine that runs the containers, using hostPath volumes. This behavior is only suitable when the Kubernetes cluster consists of a single node. On a Kubernetes cluster with multiple nodes, Cyberwatch recommends using a StorageClass:

    global:
      # storageClass:
    
    1. List the StorageClasses available on the cluster:

      kubectl get sc
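
      The output lists the available classes and their provisioners; for example, on an OVHcloud managed cluster (the names will differ on your cluster):

      NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
      csi-cinder-classic   cinder.csi.openstack.org   Delete          Immediate           true                   42d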
      
    2. Uncomment the global.storageClass field and assign it the name of a StorageClass available on the cluster.

      For example:

      global:
        storageClass: csi-cinder-classic
      

    If necessary, further information is available in the comments of the default Helm chart configuration file.

  5. Set the nginx.resolver field to the IP address of the DNS service of the Kubernetes cluster.

    1. Get the IP address of the kube-dns DNS resolver:

      kubectl -n kube-system get svc kube-dns
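
      The address to use is shown in the CLUSTER-IP column; for example:

      NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
      kube-dns   ClusterIP   10.3.0.10    <none>        53/UDP,53/TCP,9153/TCP   42d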
      
    2. Assign the IP address of the DNS resolver to the nginx.resolver field.

      Example:

      nginx:
        resolver: 10.3.0.10
      
  6. Configure the ingress.

    1. The ingress.ingressClassName field defines which IngressClass will be used to implement the Ingress. This value must point to a valid IngressClass name. The IngressClasses available on the cluster can be listed with the command below:

      kubectl get ingressclasses
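
      For example, with the NGINX Ingress Controller installed (your cluster may list different classes):

      NAME    CONTROLLER             PARAMETERS   AGE
      nginx   k8s.io/ingress-nginx   <none>       42d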
      
    2. Assign the selected value to the ingress.ingressClassName field, and set the domain name that will accept requests in the ingress.host field.

      Example:

      ingress:
        enabled: true
        ingressClassName: nginx
        host: cyberwatch.example.com
        tls:
          selfSigned: true
      

      The IP address that corresponds to the domain name must be the IP address of the cluster load balancer.

    If necessary, further information is available in the comments of the default Helm chart configuration file.

  7. Disable the thirdParties container by setting the thirdParties.enabled parameter to false:

    thirdParties:
      enabled: false
    
  8. Configure the secrets for the application, the database, and Redis.

    To generate these secrets, use the following command, then copy its output into the corresponding sections of values.yml:

    cat <<-EOF
    database:
      password: "$(openssl rand -hex 16)"
      root_password: "$(openssl rand -hex 16)"
    
    redis:
      password: "$(openssl rand -hex 16)"
    
    key:
      base: "$(openssl rand -hex 64)"
      credential: "$(openssl rand -hex 64)"
    EOF
    
  9. Configure the name of the node in the Cyberwatch application with the node.name parameter:

    node:
      name: cyberwatch-node-name
      type: single
    
  10. Create the cyberwatch namespace on the cluster.

     kubectl create namespace cyberwatch
    
  11. Generate an SSH key pair and save the keys as secrets.

     ssh-keygen -q -N '' -f ./id_ed25519 -t ed25519
     kubectl -n cyberwatch create secret generic web-scanner-ssh-authorized-keys --from-file=authorized_keys="./id_ed25519.pub"
     kubectl -n cyberwatch create secret generic ssh-private-key --from-file="./id_ed25519"
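
     To verify that both secrets were created:

     kubectl -n cyberwatch get secret web-scanner-ssh-authorized-keys ssh-private-key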
    
  12. Deploy the Helm chart to your cluster:

    helm -n cyberwatch install cyberwatch oci://harbor.cyberwatch.fr/cbw-on-premise/cyberwatch-chart -f values.yml
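
    On success, Helm prints a release summary similar to the following (the date and notes will vary):

    NAME: cyberwatch
    LAST DEPLOYED: ...
    NAMESPACE: cyberwatch
    STATUS: deployed
    REVISION: 1

    The same summary can be displayed later with helm -n cyberwatch status cyberwatch.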
    

    The Helm chart deployment uses the values.yml file to configure the application.

  13. Verify the status of all the pods:

    kubectl -n cyberwatch get pods
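
    The pods may take a few minutes to start. To watch them until they all reach the Running status:

    kubectl -n cyberwatch get pods -w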
    
  14. When all the pods are running, register the Administrator account from the web interface.

    Accessing the Cyberwatch instance through its IP address returns a 404 error; you must use the domain name defined above.

(Optional) Retrieve the default Helm chart configuration file

The documentation above shows the steps to set up a minimal configuration of Cyberwatch.

It is possible to download the default Helm chart configuration file of Cyberwatch in order to start from a complete file that indicates which default values can be changed.

Using this file is recommended if you wish to deviate from the minimal configuration described in this documentation, for example to set up a TLS certificate.
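
As an illustration, a custom certificate can be provided to the cluster as a standard Kubernetes TLS secret (the secret name below is illustrative; refer to the comments of the default configuration file for the corresponding chart settings):

kubectl -n cyberwatch create secret tls cyberwatch-tls --cert=tls.crt --key=tls.key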

To retrieve the default Helm chart configuration file:

helm show values oci://harbor.cyberwatch.fr/cbw-on-premise/cyberwatch-chart > values.yml

This file can then be modified according to your needs, and the Helm chart deployed from this configuration.
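
For example, after editing the file, the changes can be applied to an existing release with:

helm -n cyberwatch upgrade cyberwatch oci://harbor.cyberwatch.fr/cbw-on-premise/cyberwatch-chart -f values.yml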

