Add a network target or a website

  1. Click on Assets management > Network targets and websites
  2. Click on the Add button
  3. Fill in the form

    • “Source” refers to the Cyberwatch node that should perform the analysis
    • “Target” refers to the network target or the website to analyze. A network target can be specified as a domain name or an IP address, and a website target as a URL. In either case, a port scan is performed on the target, along with a discovery of the services in use to determine the versions of the exposed services
    • “Scan scope” defines the scan’s amplitude from the target:
      • Default: use the default scope (Folder)
      • URL: only scan the exact base URL given
      • Page: scan every URL matching the path of the base URL (every query string variation)
      • Folder: scan every URL starting with the base URL value. This base URL should have a trailing slash (no filename)
      • Domain: scan every URL whose domain name matches the one from the base URL
      • Subdomain: scan every URL whose subdomain matches that of the base URL
      • Punk: scan every URL found, regardless of the domain
    • “Swagger URL” specifies the API description file (YAML or JSON) in order to add the described URLs to the scan
    • “URLs excluded from the scan” specifies the list of comma-separated URLs to exclude from the scan; URLs may contain wildcards
    • “Authentication type” specifies the method used to authenticate with the target web application:
      • None: No authentication is performed
      • Basic: Uses HTTP Basic authentication, where the username and password are sent in clear text, base64 encoded, in the HTTP request headers
      • Digest: More secure than Basic authentication, it uses an encrypted response to authenticate the request
      • NTLM: Used mainly in Windows environments, NTLM is a security protocol for authentication
      • POST: Submits authentication data to the server via an HTTP POST request, allowing server-side authentication
      • HTTP Bearer: Allows a JWT Bearer token to be included in each HTTP request via the Authorization header, in the format Authorization: Bearer <token>. Only the token itself should be filled in the associated field in Cyberwatch
      • Selenium: Used to perform authentication by executing automated actions specified in a .side file. Once these actions complete, the collected cookies are used for the subsequent stages of the scan. The supported actions are open, type, and click. For each action, the identifier of the target element must begin with id=, css=, or xpath=
      • Cookie: Specifies one or more cookies to add in each HTTP request to enable authentication on the target website. If multiple cookies are specified, they can be separated by a semicolon, following the format cookie=value;cookie=value

    Given the regular expiration of cookies, it is recommended to configure a manual scanning policy on assets using this authentication method, to ensure the validity of the cookies before each scan.
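For the two header-based methods above, the values stored in Cyberwatch translate into plain HTTP request headers. A minimal sketch of that mapping, assuming illustrative token and cookie values (this is not Cyberwatch's actual code):

```python
# Sketch of how the Bearer and Cookie authentication types map to
# HTTP request headers (illustrative; not Cyberwatch's implementation).
def auth_headers(auth_type: str, secret: str) -> dict:
    if auth_type == "HTTP Bearer":
        # Only the raw token is stored; the "Bearer " prefix is added here.
        return {"Authorization": f"Bearer {secret}"}
    if auth_type == "Cookie":
        # One or more cookies in the form cookie=value;cookie=value
        return {"Cookie": secret}
    raise ValueError(f"unsupported authentication type: {auth_type}")

print(auth_headers("HTTP Bearer", "eyJhbGciOiJIUzI1NiJ9"))
print(auth_headers("Cookie", "session=abc123;csrf=xyz"))
```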

  4. Confirm with the “Save” button
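The scan scopes described above can be illustrated with a small URL-matching sketch. The logic mirrors the descriptions in this section, not Cyberwatch's internals; the Subdomain scope is omitted for brevity:

```python
from urllib.parse import urlsplit

# Illustrative sketch of how each scan scope restricts crawled URLs,
# based on the scope descriptions above (not Cyberwatch's actual code).
def in_scope(scope: str, base: str, url: str) -> bool:
    """Return True when `url` falls inside the given scan scope of `base`."""
    b, u = urlsplit(base), urlsplit(url)
    if scope == "URL":      # only the exact base URL
        return url == base
    if scope == "Page":     # same path, any query string variation
        return (u.scheme, u.netloc, u.path) == (b.scheme, b.netloc, b.path)
    if scope == "Folder":   # base URL must end with a trailing slash
        return url.startswith(base)
    if scope == "Domain":   # same domain name as the base URL
        return u.hostname == b.hostname
    if scope == "Punk":     # any URL found
        return True
    raise ValueError(f"unknown scope: {scope}")

base = "https://example.com/app/"
print(in_scope("Folder", base, "https://example.com/app/login"))  # True
print(in_scope("Page", base, "https://example.com/app/?id=1"))    # True
print(in_scope("URL", base, "https://example.com/app/login"))     # False
```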

Configure the scanning parameters

Some advanced scanning parameters are configurable through the scanning policy, including the maximum web scanning duration and the maximum crawling duration. To edit them, go to Settings > Scanning policies, and edit the policies of the relevant network targets and websites. The web scanning parameters are located inside the “Advanced settings” section.

Regarding web application scanning, several options can be configured:

  • Maximum crawling time: This is the maximum time allocated for exploring the attack surface, such as discovering pages, parameters, and existing forms

  • Maximum scanning time per module: Once the exploration phase is completed, a set of specific modules testing potential vulnerabilities will be executed. This value controls the maximum execution time allocated to each of these modules

  • Headless: Headless mode will use a browser without an interface to test security vulnerabilities by simulating user actions in web applications, especially those built with JavaScript frameworks like Angular, React, or Vue

While scanning some web applications, headless mode may require more time and yield partial results with the default maximum durations. In such cases, the recommended values to configure in the scan policy are 600 seconds for the maximum crawling time and 300 seconds for the maximum scanning time per module.

Authenticated web scans

When adding or editing a network target, you can specify a set of credentials and an authentication method. These settings apply only when the target is a website, or at least has an open HTTP port.

To enable authentication, you first need to create a credentials set of type Web scan from menu Settings > Stored credentials.

The authentication methods match the types supported by the HTTP Authorization header, except for POST authentication, which simulates a user entering their credentials into a login form. The login form URL only applies to the POST authentication method.
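As a sketch of what the POST method submits, the stored credentials are sent as URL-encoded form fields to the login form URL. The field names and URL below are hypothetical; real login forms vary:

```python
from urllib.parse import urlencode

# Hypothetical example of the form body sent by POST authentication;
# the "username"/"password" field names and the URL are illustrative.
login_form_url = "https://example.com/login"
body = urlencode({"username": "scan-account", "password": "s3cret"})
print(body)  # username=scan-account&password=s3cret
```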

Supported products for network targets and websites

Cyberwatch supports any IP address, as well as any valid, resolvable URL or domain name.

In any case, Cyberwatch checks which ports are open among the 3,000 most commonly used ports. On the open ports, two scanning strategies are used:

  • A passive scan, which detects the versions of the services exposed on each target port and identifies the vulnerabilities associated with them. If the port is a web port, an additional OWASP scan is executed, which identifies the libraries used and configuration defects from the OWASP Top Ten (such as header misconfigurations)
  • An active scan, which is a more detailed review based on the port's specificities

If the port supports TLS, a TLS audit is performed, aiming to identify whether weak cipher suites are accepted, whether deprecated protocols are used, or whether the certificate is invalid. If the port responds to HTTP/HTTPS requests, Cyberwatch executes a second OWASP scan covering other configuration defects from the OWASP Top Ten, such as possible XSS or SQL injections.

The two OWASP analyses are described below.

The OWASP analyses (Wapiti)

Wapiti lets you audit the security of web applications. It performs “black box” scans by exploring the deployed application's web pages, looking for scripts and forms where it can inject data. Once it has a list of URLs, forms, and their inputs, Wapiti acts as a fuzzer, injecting payloads and evaluating the application's responses.

In order to perform the Log4j scan, Cyberwatch must have access to this DNS server: dns.cyberwatch.app

For more information about this project, you can visit this GitHub page.

Global information

The Wapiti scan can be divided into two consecutive main parts:

  • “Passive scan”: this step is composed of modules that perform a static analysis of potential security issues
  • “Active scan”: this part is made of attack modules that actively test the exposed surface to detect security issues

Wapiti provides two different types of scan:

  • “Normal mode”: In this mode, Wapiti works directly on the target website; its requests are similar to cURL requests
  • “Headless mode”: With this mode, Wapiti will launch a GUI-free browser for analyzing web applications, particularly those based on JavaScript. It enables better interaction with Single Page Applications (SPAs) and optimal detection of vulnerabilities linked to dynamic JavaScript execution
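For reference, a standalone Wapiti invocation exercising the same notions (scan scope and headless mode) might look like the following. Flag names should be verified against `wapiti --help` for your installed version; the target URL is illustrative:

```shell
# Hypothetical standalone Wapiti run (not how Cyberwatch invokes it):
# crawl within the folder scope, using the headless browser mode.
wapiti -u https://example.com/app/ --scope folder --headless hidden
```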

Passive scan

Non-exhaustive list of passive scan actions:
  • Analyze cookie flags to detect insecure configurations
  • Analyze content security policy (CSP) to detect any weaknesses
  • Analyze HTTP headers to detect insecure configurations
  • Detect web technologies used by the site
  • Enumerate WordPress modules

For more information, you will find the module list here.

Active scan

Non-exhaustive list of active scan actions:
  • Search for CRLF (Carriage Return Line Feed) injection security issues
  • Search for CSRF (Cross-Site Request Forgery) security defects
  • Identify potential injection points
  • Search for SQL injection security issues
  • Discover backup files that could disclose sensitive information
  • Attempt to brute-force login forms
  • Search for misconfigurations or accessible .htaccess files that could be exploited
  • Test SQL injections
  • Evaluate which HTTP methods are allowed by the server
  • Search for open redirection security issues
  • Test for Server-Side Request Forgery (SSRF) security issues
  • Search for Cross-Site Scripting (XSS) security issues
  • Test for the presence of the Log4Shell vulnerability

For more information, you will find the module list here.

Metadata

Metadata can provide information about the status of the performed scan. It is stored in two fields called “Key” and “Value”.

Crawled pages

The key has the following form: crawled_pages.[active|passive]. The “Value” column indicates how many pages were browsed, in active and passive mode, on the indicated port.
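For instance, a metadata pair with the key crawled_pages.active and a value of 37 splits as follows (a trivial illustration of the key format; the value 37 is made up):

```python
# Splitting a metadata key of the form crawled_pages.[active|passive]
key, value = "crawled_pages.active", "37"
metric, mode = key.split(".")
print(metric, mode, int(value))  # crawled_pages active 37
```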

To know which URLs were crawled by the active and passive scans, open the “Analysis” tab of the relevant asset, then show the details of the “Web Application Scan (OWASP) - [Passive/Active] Mode” scan.

Value of 1 or 0

  • If the “Value” column indicates “0”, it could mean that there was an authentication problem with the requested page
  • If the “Value” column indicates “1” while you have selected a wider scan scope, several interpretations are possible

The target address might go through several redirections, which can reduce the scan's efficiency.

For example, if the specified target starts with http while the host uses https, the redirection from port 80 to 443 will interfere with the scan. As another common example, if you target https://example.com but the website is actually https://www.example.com, there will also be a redirect.

The target address must be the address to be scanned, without redirection. Using an address that responds with a redirect will cause the scan to fail.
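A quick way to reason about this pitfall: if the scheme or host of the registered target differs from the final URL the server actually serves, a redirect is in play. A hypothetical pre-flight check, assuming you already know the final URL:

```python
from urllib.parse import urlsplit

# Hypothetical check, not a Cyberwatch feature: flag targets whose
# scheme or host differs from the URL the server finally serves.
def likely_redirect(target: str, final_url: str) -> bool:
    """True when the registered target and the served URL differ in
    scheme or host, i.e. the server will redirect the scanner."""
    t, f = urlsplit(target), urlsplit(final_url)
    return (t.scheme, t.netloc) != (f.scheme, f.netloc)

print(likely_redirect("http://example.com", "https://www.example.com"))      # True
print(likely_redirect("https://www.example.com", "https://www.example.com")) # False
```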

