# Deploying a Cyberwatch satellite node on an existing Kubernetes cluster
This page describes the steps involved in deploying a Cyberwatch satellite node on an existing Kubernetes cluster. The procedure assumes basic knowledge of the Kubernetes orchestrator and Helm.
## Requirements
- Have a Cyberwatch master node configured to allow the connection of a satellite node, and SSH access to this master node.
- If the master node uses external databases, these databases must be reachable from the Kubernetes cluster.
- Have an environment with Helm (version > 3.8.0) and kubectl installed and configured to access the Kubernetes cluster.
The Kubernetes cluster must have:

- a DNS resolver such as `core-dns`;
- a working `ingress-controller`; the OVH documentation provides an example of how to set one up;
- a storage class able to dynamically provision `PersistentVolumes` through a `StorageClass`.

To make sure the cluster has a DNS resolver:

```bash
kubectl cluster-info
```
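If the DNS endpoint is not listed, you can also check the DNS pods directly. A quick check, assuming the cluster follows the common convention of labelling CoreDNS pods with `k8s-app=kube-dns` (adjust the label if your distribution differs):

```bash
# List the cluster DNS pods; most distributions keep the
# k8s-app=kube-dns label even when CoreDNS is the resolver.
kubectl -n kube-system get pods -l k8s-app=kube-dns
```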
## Deployment steps
Log in to the Helm registry:

```bash
helm registry login harbor.cyberwatch.fr
```
Fill in the username prefixed with `cbw$`, then fill in the password. These credentials are the ones provided with your Cyberwatch license; if you do not have them, please contact us at support@cyberwatch.fr.
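For scripted deployments, the login can also be done non-interactively. A minimal sketch, assuming the password is stored in a local file named `./helm-password.txt` (a hypothetical path) and the username placeholder is replaced with your `cbw$`-prefixed account:

```bash
# Non-interactive registry login; --password-stdin keeps the password
# out of the shell history and the process list.
cat ./helm-password.txt | helm registry login harbor.cyberwatch.fr \
  --username 'cbw$changeme' --password-stdin
```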
Create and edit the configuration file `values.yml`. Store this file securely: it is required later to update the Docker images or the Helm chart. The following steps describe how to set up a minimal configuration file for deploying the Cyberwatch satellite.
Here is an example of the `values.yml` file in its minimal configuration:

```yaml
global:
  pki:
    root_ca: cbw-root-ca-cert
  image:
    registryCredentials:
      - name: cyberwatch-credentials
        registry: harbor.cyberwatch.fr/cbw-on-premise
        username: "changeme"
        password: "changeme"

node:
  name: cyberwatch-node-name
  type: satellite

nginx:
  resolver: "changeme"

ingress:
  enabled: true
  ingressClassName: "changeme"
  host: "changeme"
  tls:
    selfSigned: true

thirdParties:
  enabled: false

database:
  external: true
  host: "changeme"
  password: "changeme"
  root_password: "changeme"

redis:
  external: true
  host: "changeme"
  password: "changeme"

key:
  base: "changeme"
  credential: "changeme"
```
Set the credentials used to pull the Docker images. The username and password are the same as those used to log in to the Helm chart repository:

```yaml
global:
  image:
    registryCredentials:
      - name: cyberwatch-credentials
        registry: harbor.cyberwatch.fr/cbw-on-premise
        username: "changeme"
        password: "changeme"
```
Configure the name of the node in the Cyberwatch application with the `node.name` parameter:

```yaml
node:
  name: cyberwatch-node-name
  type: satellite
```
Set the `nginx.resolver` field to the IP address of the DNS service of the Kubernetes cluster. Get the IP address of the `kube-dns` DNS resolver:

```bash
kubectl -n kube-system get svc kube-dns
```
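To extract just the ClusterIP, for example from a provisioning script, a jsonpath query can be used (a sketch; the service is assumed to keep the standard `kube-dns` name, which also covers most CoreDNS installations):

```bash
# Print only the ClusterIP of the cluster DNS service.
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'
```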
Assign the IP address of the DNS resolver of the Kubernetes cluster to the `nginx.resolver` field. Example:

```yaml
nginx:
  resolver: 10.3.0.10
```
Configure the ingress.
The `ingress.ingressClassName` field defines which `IngressClass` will be used to implement the `Ingress`. This value must point to a valid `IngressClass` name. The `IngressClass` resources available on the cluster can be listed with the command below:

```bash
kubectl get ingressclasses
```
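The output will look something like the following (the class name and controller shown here are illustrative and will differ on your cluster):

```
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       42d
```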
Assign the selected value to the `ingress.ingressClassName` field, and the domain name that will accept requests to the `ingress.host` field. Example:

```yaml
ingress:
  enabled: true
  ingressClassName: nginx
  host: cyberwatch.example.com
  tls:
    selfSigned: true
```
The domain name must resolve to the IP address of the cluster's load balancer.
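One way to verify this, assuming an ingress-nginx controller exposed through a `LoadBalancer` service (the namespace and service name below are common defaults, not guaranteed on your cluster):

```bash
# External IP of the ingress controller's load balancer.
kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# The ingress host should resolve to the same address.
dig +short cyberwatch.example.com
```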
Configure access to the databases and to the Cyberwatch application.
Assign the IP addresses used for the connections to the databases in the `database.host` and `redis.host` fields:

```yaml
database:
  external: true
  host: "changeme"

redis:
  external: true
  host: "changeme"
```
Connect to the master node via SSH and display the passwords:

```
sudo cyberwatch show-secrets
MYSQL_ROOT_PASSWORD=...
MYSQL_PASSWORD=...
REDIS_PASSWORD=...
SECRET_KEY_BASE=...
SECRET_KEY_CREDENTIAL=...
```
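If you prefer to read the secrets without opening an interactive session, the same command can be run directly over SSH (a sketch; `admin@master.example.com` is a placeholder for your own user and master node address):

```bash
# Run show-secrets remotely; -t allocates a TTY so sudo can prompt if needed.
ssh -t admin@master.example.com 'sudo cyberwatch show-secrets'
```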
Enter the database passwords obtained in the corresponding fields:

```yaml
database:
  external: true
  host: "changeme"
  password: "MYSQL_PASSWORD"
  root_password: "MYSQL_ROOT_PASSWORD"

redis:
  external: true
  host: "changeme"
  password: "REDIS_PASSWORD"
```
Enter the Cyberwatch application secret keys:

```yaml
key:
  base: "SECRET_KEY_BASE"
  credential: "SECRET_KEY_CREDENTIAL"
```
Disable the `thirdParties` container by setting the following parameters:

```yaml
cron:
  enabled: false

thirdParties:
  enabled: false
```
Create the `cyberwatch` namespace on the cluster:

```bash
kubectl create namespace cyberwatch
```
Configure the root certificate allowing connection to the Cyberwatch master node. Connect to the master node via SSH and display the root certificate:

```bash
sudo cyberwatch show-root-cert
```
Store the root certificate in a file named `./cbw-root-ca-cert.pem`:

```bash
cat <<EOF > ./cbw-root-ca-cert.pem
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
EOF
```
Import the root certificate as a secret on the Kubernetes cluster:

```bash
kubectl -n cyberwatch create secret generic cbw-root-ca-cert --from-file=./cbw-root-ca-cert.pem
```
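To confirm the secret was created with the expected content, the certificate can be read back and decoded (a sketch; the key name inside the secret matches the file name used above):

```bash
# Decode the stored certificate and show its first line,
# which should read -----BEGIN CERTIFICATE-----.
kubectl -n cyberwatch get secret cbw-root-ca-cert \
  -o jsonpath='{.data.cbw-root-ca-cert\.pem}' | base64 -d | head -n 1
```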
Generate an SSH key pair and save the keys as secrets:

```bash
ssh-keygen -q -N '' -f ./id_ed25519 -t ed25519
kubectl -n cyberwatch create secret generic web-scanner-ssh-authorized-keys --from-file=authorized_keys="./id_ed25519.pub"
kubectl -n cyberwatch create secret generic ssh-private-key --from-file="./id_ed25519"
```
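As a quick sanity check, confirm that both secrets now exist in the namespace:

```bash
# Both secrets should be listed (created as type Opaque).
kubectl -n cyberwatch get secrets web-scanner-ssh-authorized-keys ssh-private-key
```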
Deploy the Helm chart to your cluster:

```bash
helm -n cyberwatch install cyberwatch oci://harbor.cyberwatch.fr/cbw-on-premise/cyberwatch-chart -f values.yml
```
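This is also why `values.yml` must be kept: later updates of the chart or of the Docker images reuse it. A minimal sketch of such an update, assuming no values need to change:

```bash
# Upgrade the release in place, reusing the stored configuration file.
helm -n cyberwatch upgrade cyberwatch \
  oci://harbor.cyberwatch.fr/cbw-on-premise/cyberwatch-chart -f values.yml
```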
The Helm chart deployment uses the settings from the `values.yml` file to configure the application. Verify the status of all the pods:

```bash
kubectl -n cyberwatch get pods
```
When all the pods are running, connect to the master node's web interface to check the link with the satellite node. You can also check that Sidekiq is communicating with the master node:

```bash
kubectl -n cyberwatch logs $(kubectl -n cyberwatch get pods -l app=sidekiq -o jsonpath='{.items[*].metadata.name}')
```
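To watch the Sidekiq logs continuously rather than taking a one-off snapshot, a label selector can be passed to `kubectl logs` directly (a sketch, assuming the pods keep the `app=sidekiq` label used above):

```bash
# Stream logs from all pods matching the label.
kubectl -n cyberwatch logs -f -l app=sidekiq
```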