Deploying a Cyberwatch satellite node on an existing Kubernetes cluster

This page describes the steps involved in deploying a Cyberwatch satellite node on an existing Kubernetes cluster. This procedure assumes that the user has basic knowledge of the Kubernetes orchestrator and of Helm.

Requirements

  1. Have a Cyberwatch master node configured to allow the connection of a satellite node, and have SSH access to this master node.

    If the master node uses external databases, these databases must be accessible from the Kubernetes cluster.

  2. Have an environment with Helm (version > 3.8.0) and kubectl installed, configured to access the Kubernetes cluster.

  3. The Kubernetes cluster must have the following installed:

    • a DNS resolver such as CoreDNS;
    • a working ingress controller (the OVH documentation provides an example of how to set one up);
    • a StorageClass able to dynamically provision PersistentVolumes.

    To make sure the cluster has a DNS resolver:

    kubectl cluster-info
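
    The output should mention the cluster's DNS add-on; with CoreDNS, for example, it looks similar to this (addresses will differ):

    Kubernetes control plane is running at https://203.0.113.10:6443
    CoreDNS is running at https://203.0.113.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy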
    

Deployment steps

  1. Log in to the Helm repository:

    helm registry login harbor.cyberwatch.fr
    

    Enter the username, prefixed with cbw$, then enter the password.

    These credentials are the ones in your Cyberwatch license. If you do not have it, please contact us at support@cyberwatch.fr.
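
    To avoid the interactive prompts, the credentials can also be passed directly (a sketch; cbw$changeme and the CBW_REGISTRY_PASSWORD variable are placeholders for your license credentials):

    echo "$CBW_REGISTRY_PASSWORD" | helm registry login harbor.cyberwatch.fr --username 'cbw$changeme' --password-stdin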

  2. Create and edit the configuration file values.yml.

    Store the values.yml file securely: it is required later on to update the Docker images or the Helm chart.

    The following steps describe how to set up a minimal configuration file for deploying the Cyberwatch satellite.

    Here is an example of the values.yml in its minimal configuration:

    global:
      pki:
        root_ca: cbw-root-ca-cert
      image:
        registryCredentials:
          - name: cyberwatch-credentials
            registry: harbor.cyberwatch.fr/cbw-on-premise
            username: "changeme"
            password: "changeme"
    
    node:
      name: cyberwatch-node-name
      type: satellite
    
    nginx:
      resolver: "changeme"
    
    ingress:
      enabled: true
      ingressClassName: "changeme"
      host: "changeme"
      tls:
        selfSigned: true
    
    thirdParties:
      enabled: false
    
    database:
      external: true
      host: "changeme"
      password: "changeme"
      root_password: "changeme"
    
    redis:
      external: true
      host: "changeme"
      password: "changeme"
    
    key:
      base: "changeme"
      credential: "changeme"
    
  3. Set the credentials used to pull the Docker images. The username and password are the same as those used to log in to the Helm chart repository.

    global:
      image:
        registryCredentials:
          - name: cyberwatch-credentials
            registry: harbor.cyberwatch.fr/cbw-on-premise
            username: "changeme"
            password: "changeme"
    
  4. Configure the name of the node in the Cyberwatch application with the node.name parameter:

    node:
      name: cyberwatch-node-name
      type: satellite
    
  5. Set the nginx.resolver field to the IP address of the DNS service of the Kubernetes cluster.

    1. Get the IP address of the kube-dns DNS resolver:

      kubectl -n kube-system get svc kube-dns
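
      The IP address appears in the CLUSTER-IP column. To print only that value, a jsonpath query can be used (a sketch):

      kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'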
      
    2. Assign the IP address of the DNS resolver of the Kubernetes cluster to the field nginx.resolver.

      Example:

      nginx:
        resolver: 10.3.0.10
      
  6. Configure the ingress.

    1. The ingress.ingressClassName field defines which IngressClass will be used to implement the Ingress. This value must point to a valid IngressClass name. The IngressClasses available on the cluster can be listed with the command below:

      kubectl get ingressclasses
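
      For example, on a cluster running ingress-nginx, the output looks similar to this (names will differ):

      NAME    CONTROLLER             PARAMETERS   AGE
      nginx   k8s.io/ingress-nginx   <none>       24d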
      
    2. Assign the selected value to the ingress.ingressClassName field, and set the domain name that will accept requests in the ingress.host field.

      Example:

      ingress:
        enabled: true
        ingressClassName: nginx
        host: cyberwatch.example.com
        tls:
          selfSigned: true
      

      The IP address that corresponds to the domain name must be the IP address of the cluster load balancer.
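
      One way to verify this (a sketch; the ingress-nginx namespace and service name are assumptions that depend on how the ingress controller was installed):

      # External IP of the ingress controller's LoadBalancer service
      kubectl -n ingress-nginx get svc ingress-nginx-controller
      # IP address the domain name currently resolves to
      dig +short cyberwatch.example.com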

  7. Configure access to the databases and to the Cyberwatch application.

    1. Assign the IP addresses used to connect to the databases in the database.host and redis.host fields.

      database:
        external: true
        host: "changeme"
      
      redis:
        external: true
        host: "changeme"
      
    2. Connect to the master node via SSH and display the passwords:

      sudo cyberwatch show-secrets
      
      MYSQL_ROOT_PASSWORD=...
      MYSQL_PASSWORD=...
      REDIS_PASSWORD=...
      SECRET_KEY_BASE=...
      SECRET_KEY_CREDENTIAL=...
      
    3. Enter the database passwords obtained above in the corresponding fields:

      database:
        external: true
        host: "changeme"
        password: "MYSQL_PASSWORD"
        root_password: "MYSQL_ROOT_PASSWORD"
      
      redis:
        external: true
        host: "changeme"
        password: "REDIS_PASSWORD"
      
    4. Enter the Cyberwatch application secret keys:

      key:
        base: "SECRET_KEY_BASE"
        credential: "SECRET_KEY_CREDENTIAL"
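
      At this point you can optionally check that both databases are reachable from inside the cluster (a sketch; the mysql:8 and redis:7 images and the <...> placeholders are assumptions to adapt to your setup):

      kubectl run -it --rm mysql-check --image=mysql:8 --restart=Never -- mysql -h '<database.host>' -u root -p'<MYSQL_ROOT_PASSWORD>' -e 'SELECT 1'
      kubectl run -it --rm redis-check --image=redis:7 --restart=Never -- redis-cli -h '<redis.host>' -a '<REDIS_PASSWORD>' ping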
      
  8. Disable the cron and thirdParties containers by setting the following parameters:

    cron:
      enabled: false
    
    thirdParties:
      enabled: false
    
  9. Create the cyberwatch namespace on the cluster.

    kubectl create namespace cyberwatch
    
  10. Configure the root certificate allowing connection to the Cyberwatch master node.

    1. Connect to the master node via SSH and display the root certificate:

      sudo cyberwatch show-root-cert
      
    2. Store the root certificate in a file named ./cbw-root-ca-cert.pem:

      cat <<EOF > ./cbw-root-ca-cert.pem
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
      EOF
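
      You can check that the certificate was copied correctly before importing it (a sketch):

      openssl x509 -in ./cbw-root-ca-cert.pem -noout -subject -enddate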
      
    3. Import the root certificate as a secret on the Kubernetes cluster:

      kubectl -n cyberwatch create secret generic cbw-root-ca-cert --from-file=./cbw-root-ca-cert.pem
      
  11. Generate an SSH key pair and save the keys as secrets.

    ssh-keygen -q -N '' -f ./id_ed25519 -t ed25519
    kubectl -n cyberwatch create secret generic web-scanner-ssh-authorized-keys --from-file=authorized_keys="./id_ed25519.pub"
    kubectl -n cyberwatch create secret generic ssh-private-key --from-file="./id_ed25519"
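
    To confirm that the three secrets were created:

    kubectl -n cyberwatch get secrets cbw-root-ca-cert web-scanner-ssh-authorized-keys ssh-private-key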
    
  12. Deploy the Helm chart to your cluster:

    helm -n cyberwatch install cyberwatch oci://harbor.cyberwatch.fr/cbw-on-premise/cyberwatch-chart -f values.yml
    

    The deployment of the Helm chart uses the values.yml file to configure the application.
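
    The same values.yml file is used later to apply configuration changes or to update the chart, for example (a sketch):

    helm -n cyberwatch upgrade cyberwatch oci://harbor.cyberwatch.fr/cbw-on-premise/cyberwatch-chart -f values.yml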

  13. Verify the status of all the pods:

    kubectl -n cyberwatch get pods
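
    To block until every pod is ready instead of polling manually (a sketch; adjust the timeout to your environment):

    kubectl -n cyberwatch wait --for=condition=Ready pods --all --timeout=10m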
    

When all the pods are running, connect to the master node’s web interface to check the link with the satellite node. You can also check that Sidekiq is communicating with the master node:

   kubectl -n cyberwatch logs $(kubectl -n cyberwatch get pods -l app=sidekiq -o jsonpath='{.items[*].metadata.name}')
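
   Equivalently, the pods can be selected by label directly (a sketch):

   kubectl -n cyberwatch logs -l app=sidekiq --tail=100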
