Cortex Xpanse Docker installation, configuration, security, and troubleshooting guides.
Docker is a software framework for building, running, and managing containers.
Note
This section is relevant when installing an engine.
Cortex Xpanse maintains a repository of Docker images, available on Docker Hub under the Cortex organization. You can also access the Docker images through the Cortex Container Registry. For Cortex XSOAR servers without an internet connection, you can download Docker images to another machine and copy them to the server.
Each Python/PowerShell script or integration has a specific Docker image listed in the YAML file. When the script or integration runs, if the specified Docker image is not available locally, it is downloaded from Docker Hub or the Cortex Container Registry. The script or integration then runs inside the Docker container. For more information on Docker, see the Docker documentation and Using Docker.
Note
Docker images can be downloaded together with their relevant content packs, for offline installation.
Install Docker
Install Docker on engines and troubleshoot the installation.
Docker is required for engines to run Python/PowerShell scripts and integrations in a controlled environment.
If you use the Shell installer to install an engine, Docker is automatically installed. If using DEB and RPM installations, you need to install Docker or Podman before installing an engine. The engine uses Docker to run Python scripts, PowerShell scripts, and integrations in a controlled environment. By packaging libraries and dependencies together, the environment remains the same and scripts and integrations are not affected by different server configurations.
Cortex Xpanse supports the latest Docker Engine release from Docker and the following corresponding supported Linux distributions:
5.3.15 and later
5.4.2 and later
5.5 and later
These Linux distributions include their own Docker Engine package. In addition, older versions of Docker Engine released within the last 12 months are supported unless there is a known compatibility issue with a specific Docker Engine version. In case of a compatibility issue, Cortex Xpanse will publish an advisory notifying customers to upgrade their Docker Engine version.
You can use a version that is not supported. However, if you encounter an issue that requires Customer Support involvement, you may be asked to upgrade to a supported version before assistance can be provided.
Docker Installation by Operating System
If you need to install Docker before installing an engine, use the following procedures.
There are two options for CentOS: Docker CE or the CentOS Docker distribution. To use Docker CE, install Docker CE, configure Docker to start on boot, and update container-selinux. To use the CentOS Docker distribution, follow the instructions in Install Docker Distribution for Red Hat on an Engine.
Note
For RHEL v7 or CentOS v7, you need Mirantis Container Runtime (formerly Docker Engine - Enterprise) or Red Hat's Docker distribution to run specific Docker-dependent integrations and scripts. For more information, see Install Docker Distribution for Red Hat on an Engine.
If you wish to use Mirantis Container Runtime (formerly Docker Engine - Enterprise), follow the deployment guide for your operating system distribution.
Change the Docker Installation Folder
Instructions for changing the default Docker folder.
The /var/lib/docker/ folder is the default Docker folder for Ubuntu, Fedora, and Debian in a standard engine installation.
To change the Docker folder:
Stop the Docker daemon.
sudo service docker stop
Create a file called daemon.json under the /etc/docker directory with the following content:
{ "data-root": "<path to your Docker folder>" }
Copy the current data directory to the new one.
sudo rsync -aP /var/lib/docker/ <path to your Docker folder>
Rename the old docker directory.
sudo mv /var/lib/docker /var/lib/docker.bkp
After confirming that the change was successful, you can remove the backup directory.
sudo rm -rf /var/lib/docker.bkp
Start the Docker daemon.
sudo service docker start
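The steps above can be sketched as one script. DOCKER_ROOT is a hypothetical example path, and the sketch writes daemon.json to a scratch location so it can run without root; on a real engine, write to /etc/docker/daemon.json and uncomment the privileged commands.

```shell
# Sketch: switch Docker's data directory to a new location.
DOCKER_ROOT="/mnt/docker-data"   # example target folder (assumption)
DAEMON_JSON="./daemon.json"      # scratch path; use /etc/docker/daemon.json on a real engine

# 1. Stop the daemon first (uncomment on a real engine):
# sudo service docker stop

# 2. Point "data-root" at the new folder:
printf '{ "data-root": "%s" }\n' "$DOCKER_ROOT" > "$DAEMON_JSON"
cat "$DAEMON_JSON"

# 3. Copy the data, keep a backup, restart, and remove the backup once verified:
# sudo rsync -aP /var/lib/docker/ "$DOCKER_ROOT"
# sudo mv /var/lib/docker /var/lib/docker.bkp
# sudo service docker start
# sudo rm -rf /var/lib/docker.bkp
```

The backup rename makes the change easy to roll back: if containers fail to start with the new data root, restore /var/lib/docker.bkp and remove the daemon.json entry.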
Update Container-Selinux
Update your container-selinux version.
When installing Docker, if you receive the message Requires: container-selinux >= 2.9, you need to install a newer version of container-selinux.
Go to CentOS Packages.
Find the latest version of container-selinux and copy the package URL.
Run the following command:
sudo yum install -y <copied container-selinux URL>
For example, assuming the latest version is 2.74-1:
sudo yum install -y http://mirror.centos.org/centos/7/extras/x86_64/Packages/container-selinux-2.74-1.el7.noarch.rpm
Install Docker Distribution for Red Hat on an Engine
Install Docker distribution for Red Hat on CentOS v7 and RHEL v7.
Red Hat maintains its own package of Docker, which is the version used in OpenShift Container Platform environments, and is available in the RHEL Extras repository. This procedure is relevant for CentOS v7 and RHEL v7 and below.
Note
CentOS v7 provides a similar docker distribution package as part of the CentOS Extras repository.
If running RHEL v8 or higher, the engine installs Podman packages and configures the operating system to enable Podman in rootless mode.
For more information about the different packages available to install on Red Hat, see the Red Hat Knowledge Base Article (requires a Red Hat subscription to access).
Install Red Hat’s Docker package.
Run the following commands.
systemctl enable docker.service
systemctl restart docker.service
Change ownership of the Docker daemon socket so members of the dockerroot user group have access.
Edit or create the file /etc/docker/daemon.json.
Enable OS group dockerroot access to Docker by adding the following entry to the /etc/docker/daemon.json file:
{ "group": "dockerroot" }
Restart the Docker service by running the following command.
systemctl restart docker.service
After the engine is installed, run the following command to add the demisto OS user to the dockerroot OS group (Red Hat uses the dockerroot group instead of docker):
usermod -aG dockerroot demisto
Restart the engine.
Set the required SELinux permissions.
The Cortex Xpanse engine uses the /var/lib/demisto/temp directory (with its subdirectories) to copy files to and receive files from running Docker containers. By default, when SELinux is in enforcing mode, directories under /var/lib/ cannot be accessed by Docker containers. To allow containers access to the /var/lib/demisto/temp directory, set the correct SELinux policy type by running the following command:
chcon -Rt svirt_sandbox_file_t /var/lib/demisto/temp
(Optional) Verify that the directory has the container_file_t SELinux type attached by running the following command:
ls -d -Z /var/lib/demisto/temp
Configure label confinement to allow Python and PowerShell containers to access other script folders.
In the d1.conf file, set the following parameters:
For Python containers, set the key python.pass.extra.keys to the value --security-opt=label=level:s0:c100,c200
For PowerShell containers, set the key powershell.pass.extra.keys to the value --security-opt=label=level:s0:c100,c200
Open any incident and, in the incident War Room CLI, run the /reset_containers command.
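The label-confinement parameters above can be expressed as a configuration fragment like the following; merge these keys into your existing d1.conf (the values are taken from the parameters above, and the rest of the file is unchanged):

```json
{
  "python.pass.extra.keys": "--security-opt=label=level:s0:c100,c200",
  "powershell.pass.extra.keys": "--security-opt=label=level:s0:c100,c200"
}
```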
Docker Image Security
Information about Cortex Xpanse Docker image security practices.
The build process for Cortex Xpanse Docker images is fully open source and available for review. The project contains the source Dockerfiles used to build the images and the accompanying files. Cortex Xpanse uses only the secure Docker Hub registry for its Docker images. You can view the Docker trust information for each image at the image info branch.
Note
We automatically update our open source Docker images and their accompanying dependencies (OS and Python). Examples of automatic updates can be viewed on GitHub.
We maintain Docker image information, which includes information on Python packages, OS packages, and image metadata for all our Docker images. Docker image information is updated nightly.
All of our images are continuously scanned using Prisma Cloud and an additional third-party scanner. We evaluate all critical/high findings and actively work to prevent and mitigate security vulnerabilities.
Cortex Xpanse ensures container images are fully patched and do not contain unnecessary packages. Patches and dependencies are applied automatically via our open source Dockerfiles build project.
Configure Docker Pull Rate Limit
Configure the Docker pull rate limit on public images. Create a Docker user account and receive higher pull limit.
Docker enforces a pull rate limit on public images. The limit is based on an IP address or a logged-in Docker Hub user. The default limit (100 pulls per 6 hours) is usually high enough for Cortex Xpanse's use of Docker images, but the rate limit may be reached if a single IP address is used for a large organization (behind a NAT). If the rate limit is reached, the following error message is issued:
Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit.
To increase the limit, take the following steps.
Sign up for a free user account on Docker Hub.
The pull limit is higher for a registered user (200 pulls per 6 hours).
Authenticate the user on the engine machine by running the following command.
sudo -u demisto docker login
(Optional) Instead of manually logging in to Docker to pull images, you can edit the Docker config file to use credentials from the file or from a credential store.
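For example, a minimal Docker client config that delegates to a credential helper might look like the following (the helper name pass is just one example; use whichever docker-credential helper is installed on the engine machine):

```json
{
  "credsStore": "pass"
}
```

This file lives at ~/.docker/config.json for the user that runs Docker (here, the demisto user).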
Docker FAQs
Frequently asked questions (FAQ) about Docker installation, configuration, and security for Cortex Xpanse.
Does Cortex Xpanse use COPY or ADD for building images?
Cortex Xpanse uses COPY for building images. The COPY instruction copies files from the local host machine to the container file system. Cortex Xpanse does not use the ADD instruction, which could potentially retrieve files from remote URLs and perform operations such as unpacking, introducing potential security vulnerabilities.
Should the --restart flag be used?
The --restart flag should not be used. Cortex Xpanse manages the lifecycle of Docker images and restarts images as needed.
Can we restrict containers from acquiring additional privileges by setting the no-new-privileges option?
Cortex Xpanse does not support the no-new-privileges option. Some integrations and scripts may need to change privileges when running as a non-root user (such as Ping).
Can we apply a daemon-wide custom seccomp profile?
The default seccomp profile from Docker is strongly recommended. The default seccomp profile provides protection as well as wide application compatibility. While you can apply a custom seccomp profile, Cortex Xpanse cannot guarantee that it won't block system calls used by an integration or script. If you apply a custom seccomp profile, you need to verify and test the profile with any integrations or scripts you plan to use.
Can we use TLS authentication for docker daemon configuration?
TLS authentication is not used, because Cortex Xpanse does not use docker remote connections. All communication is done via the local docker IPC socket.
How do we set the logging level to info?
Set the log level in the Docker daemon configuration file.
Can we restrict Linux kernel capabilities within containers?
The default Docker settings (recommended) include 14 kernel capabilities and exclude 23 kernel capabilities. Refer to Docker’s full list of runtime privileges and Linux capabilities.
You can further exclude capabilities via advanced configuration, but will first need to verify that you are not using a script that requires the capability. For example, Ping requires
NET_RAW
capability.
Is the Docker health check option implemented at runtime?
The Cortex Xpanse tenant monitors the health of the containers and restarts/terminates containers as needed. The Docker health check option is not needed.
Can we enable live restore?
Live restore is not used. Cortex Xpanse uses ephemeral docker containers. Every running container is stateless by design.
Can we restrict network traffic between containers?
Cortex Xpanse does not disable inter-container communication by default, as there are use cases where it might be needed. For example, a script communicating with a long-running integration that listens on a port may require inter-container communication. If inter-container communication is not required, it can be disabled by modifying the Docker daemon configuration.
Can we enable user namespace remapping?
Cortex Xpanse does not support user namespace remapping.
How do we configure auditing for Docker files and directories?
Auditing is an operating system configuration, and can be enabled in the operating system settings. Cortex Xpanse does not change the audit settings of the operating system.
Does Cortex Xpanse map privileged ports?
Cortex Xpanse does not map privileged ports (TCP/IP port numbers below 1024).
Does Cortex Xpanse allow privileged execution?
Cortex Xpanse does not allow privileged execution of Docker commands.
Does Cortex Xpanse run SSH within containers?
Cortex Xpanse does not run SSH within containers.
Does Cortex Xpanse change the ownership of the socket?
Cortex Xpanse does not change the ownership of the socket.
Can we disable the userland proxy?
If the kernel supports hairpin NAT, you can disable docker userland proxy settings by modifying the Docker daemon configuration.
Does Cortex Xpanse support the AppArmor profile?
Cortex Xpanse supports the default AppArmor profile (only relevant for Ubuntu with AppArmor enabled).
Does Cortex Xpanse support the SELinux profile?
Cortex Xpanse supports the default SELinux profile (only relevant for RedHat/CentOS with SELinux enabled).
How does Cortex Xpanse handle secrets management?
For Docker swarm services, a secret is a blob of data, such as a password, SSH private key, SSL certificate, or other piece of data that should not be transmitted over a network or stored unencrypted in a Dockerfile or in your application’s source code. Cortex Xpanse manages integration credentials internally. It also supports using an external credentials service such as CyberArk.
Docker Hardening Guide
Use the Docker Hardening Guide to configure the Cortex Xpanse settings when running Docker containers.
This guide describes the recommended engine settings for securely running Docker containers. For each engine to which you want to apply Docker hardening, edit the engine configuration file to include the Docker hardening parameters.
When editing the configuration file, you can limit container resources, open file descriptors, limit available CPU, etc. For example, add the following keys to the configuration file:
{
  "docker.run.internal.asuser": true,
  "limit.docker.cpu": true,
  "limit.docker.memory": true,
  "python.pass.extra.keys": "--pids-limit=256##--ulimit=nofile=1024:8192"
}
Tip
We recommend reviewing the Docker Network Hardening guide before changing any parameters in the configuration file.
To securely run Docker containers, it is recommended to use the latest Docker version.
You can Check Docker Hardening Configurations to verify that the Docker container has been hardened according to the recommended settings.
In the configuration file, you can update the following:
Fine-tune settings for Docker images according to the Docker image name.
Protect the engine machine from a container using too many system resources.
Limit available memory: we recommend limiting each container to 1 GB of memory.
Limit available CPU: we recommend limiting each container to 1 CPU.
Limit PIDs: we recommend limiting each container to 256 PIDs. This value is sufficient for using threads and sub-processes, and protects against a fork bomb.
Limit open file descriptors: we recommend a soft/hard limit of 1024/8192 file descriptors for each container process.
Note
These settings can also be applied to Podman, with the exception of limiting available memory, limiting available CPU, and limiting PIDs.
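After adding hardening keys to the engine configuration file, it is worth checking that the file is still well-formed JSON before restarting the service. A minimal sketch, assuming python3 is available on the engine; it uses a scratch sample file here rather than the real d1.conf:

```shell
# Write a d1.conf-style sample and validate it is well-formed JSON.
# Point CONF at your real d1.conf when checking an actual engine.
CONF="./d1.conf.sample"
cat > "$CONF" <<'EOF'
{
  "docker.run.internal.asuser": true,
  "limit.docker.cpu": true,
  "limit.docker.memory": true,
  "python.pass.extra.keys": "--pids-limit=256##--ulimit=nofile=1024:8192"
}
EOF

# json.tool exits non-zero on a syntax error (e.g. a missing comma).
python3 -m json.tool "$CONF" > /dev/null && echo "valid JSON"
```

A malformed d1.conf can prevent the engine service from starting, so this check is cheap insurance before a restart.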
Configure Docker Images
Apply more specific settings to Docker images by adding the advanced configuration key to the engine configuration file.
You can apply more specific fine tuned settings to Docker images, according to the Docker image name or the Docker image name including the image tag. To apply settings to a Docker image name, add the advanced configuration key to the engine configuration file.
Note
If you apply Docker image-specific settings, they are used instead of the general python.pass.extra.keys setting, overriding the general memory and CPU settings as needed.
Add the following key to apply settings to a Docker image name:
"python.pass.extra.keys.<image_name>"
For example, "python.pass.extra.keys.demisto/dl". To apply settings to a Docker image name including the image tag, use "python.pass.extra.keys.<image_name>:<image_tag>". For example, "python.pass.extra.keys.demisto/dl:1.4".
To set the Docker image demisto/dl (all tags) to use a higher max memory value of 2g and remain with the recommended PIDs and ulimit, add the following to the configuration file:
"python.pass.extra.keys.demisto/dl": "--memory=2g##--ulimit=nofile=1024:8192##--pids-limit=256"
Save the changes.
Restart the demisto service on the engine machine.
(CentOS/RHEL) sudo systemctl restart d1
(Ubuntu/DEB) sudo service d1 restart
Check Docker Hardening Configurations
Check Docker hardening configurations on an engine by running the !DockerHardeningCheck command in the Incident/Alert War Room CLI.
Check your Docker hardening configurations on an engine by running the !DockerHardeningCheck command in the Incident/Alert War Room CLI. The results show the following:
Non-root User
Memory
File Descriptors
CPUs
PIDs
Before running the command, ensure that your engine is up and running.
Update the DockerHardeningCheck script to run on the engine.
Note
By default, the DockerHardeningCheck script runs on the Cortex XSOAR tenant.
Go to → → → → .
In the Run on field, select Single engine and, from the drop-down list, select the engine on which you want to run the script.
Save the script.
To verify that the Docker container has been hardened according to the recommended settings, run the !DockerHardeningCheck command in the Incident/Alert War Room CLI.
Run Docker with Non-Root Internal Users
Run Docker with non-root internal users and for containers that do not support non-root internal users.
For additional security isolation, it is recommended to run Docker containers as non-root internal users. This follows the principle of least privilege.
Configure the engine to execute containers as non-root internal users.
Add the following key:
"docker.run.internal.asuser": true
For containers that do not support non-root internal users, add the following key:
"docker.run.internal.asuser.ignore": "<A comma-separated list of container names. The engine matches the container names according to the prefixes of the key values>"
For example:
"docker.run.internal.asuser.ignore": "demisto/python3:,demisto/python:"
The engine matches the key values for the following containers:
demisto/python:1.3-alpine
demisto/python:2.7.16.373
demisto/python3:3.7.3.928
demisto/python3:3.7.4.977
The : character should be used to limit the match to the full name of the container. For example, using the : character does not find demisto/python-deb:2.7.16.373.
Save the changes.
Restart the demisto service on the engine machine.
(CentOS/RHEL) sudo systemctl restart d1
(Ubuntu/DEB) sudo service d1 restart
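The prefix matching described above can be sketched as follows. This is an illustration of the matching rule, not the engine's actual implementation:

```shell
# matches_prefix prints "yes" when the container name starts with the given
# prefix, mirroring how the ignore-list values are matched.
matches_prefix() {
  case "$1" in
    "$2"*) echo yes ;;
    *)     echo no ;;
  esac
}

matches_prefix "demisto/python3:3.7.3.928"     "demisto/python3:"
matches_prefix "demisto/python:1.3-alpine"     "demisto/python:"
# The trailing ":" in the prefix keeps demisto/python-deb from matching:
matches_prefix "demisto/python-deb:2.7.16.373" "demisto/python:"
```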
Configure the Memory Limit Support Without Swap Limit Capabilities
Configure the container memory limit support without swap limit capabilities.
When a container exceeds the specified amount of memory, the container starts to swap. Not all Linux distributions have the swap limit support enabled by default.
Red Hat and CentOS distributions usually have swap limit support enabled by default.
Debian and Ubuntu distributions usually have swap limit support disabled by default.
To check if your system supports swap limit capabilities, in the engine machine run the following command:
sudo docker run --rm -it --memory=1g demisto/python:1.3-alpine true
If swap limit capabilities are enabled, see Configure the Memory Limitation. To test the memory, see Test the Memory Limit.
If you see the message WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap. in the output (the message may vary between Docker versions), you have two options:
Configure swap limit capabilities by following the Docker documentation.
Follow the procedure set out below.
To protect the host from a container using too many system resources (because of either a software bug or a DoS attack), limit the resources available for each container. In the engine configuration file, some of these settings are set using the advanced parameter python.pass.extra.keys. This key receives full docker run options as its value, separated with the ## string.
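For example, a value carrying two options separated by ## expands into two individual docker run options; the split can be illustrated as:

```shell
# Split a python.pass.extra.keys value on the "##" separator to see the
# individual docker run options it carries.
EXTRA_KEYS="--pids-limit=256##--ulimit=nofile=1024:8192"
echo "$EXTRA_KEYS" | awk -F'##' '{ for (i = 1; i <= NF; i++) print $i }'
```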
If you see the WARNING: No swap limit support message, you can configure memory support without swap limit capabilities.
To set the docker run option --memory-swap to -1 (which disables swap memory enforcement):
Add the following key:
"python.pass.extra.keys": "--memory=1g##--memory-swap=-1"
If python.pass.extra.keys is already set up with a value, add the new value after the ## separator.
Save the changes.
Restart the demisto service on the engine machine.
(CentOS/RHEL) sudo systemctl restart d1
(Ubuntu/DEB) sudo service d1 restart
Configure the Memory Limitation
Configure the memory limitation by adding a server configuration in Cortex Xpanse.
It is recommended to limit available memory for each container to 1 GB.
Note
On RHEL and CentOS 7.x distributions with Docker CE or EE with version 17.06 and later, ensure that your kernel fully supports kmem accounting or that it has been compiled to disable kmem accounting. The kmem accounting feature in Red Hat’s Linux kernel has been reported to contain bugs, which cause kernel deadlock or slow kernel memory leaks. This is caused by a patch introduced in runc, which turns on kmem accounting automatically when user memory limitation is configured, even if not requested by the Docker CLI setting --kernel-memory
(see: opencontainers/runc#1350). Users using Red Hat's distribution of Docker based on version 1.13.1 are not affected as this distribution of Docker does not include the runc patch. For more information see Red Hat’s Docker distribution documentation.
If you do not want to apply Docker memory limitations due to the note above, explicitly set the advanced parameter limit.docker.memory to false.
If swap limit capabilities are enabled, configure the memory limitation in Cortex Xpanse using the following advanced parameters.
Add the following keys.
"limit.docker.memory": true, "docker.memory.limit": "1g"
Save the changes.
Restart the demisto service on the engine machine.
(CentOS/RHEL) sudo systemctl restart d1
(Ubuntu/DEB) sudo service d1 restart
Test the Memory Limit
Test the Docker memory limit by running a script in the Playground.
After configuring the memory limitation to the recommended 1 GB, you can test the memory limit in the playground.
Go to Scripts and create a New Script.
In the Script Name field, type Test Memory.
Add the following script:
import os
import sys
from multiprocessing import Process

def big_string(size):
    sys.stdin = os.fdopen(0, "r")
    s = 'a' * 1024
    while len(s) < size:
        s = s * 2
    print('completed creating string of length: {}'.format(len(s)))

size = 1 * 1024 * 1024 * 1024
p = Process(target=big_string, args=(size, ))
p.start()
p.join()
if p.exitcode != 0:
    return_error("Return code from sub process indicates failure: {}".format(p.exitcode))
else:
    print("Success allocating memory of size: {}".format(size))
From the SCRIPT SETTINGS dialog box, in the BASIC section, select the script to run on the Single engine and select the engine where you want to run the script.
Save the script.
To test the memory limit, type !TestMemory.
The command returns an error when it fails to allocate 1 GB of memory.
Limit Available CPU on Your System
Limit the available CPU on your system for Docker.
Follow these instructions to set the advanced parameters to configure the CPU limit.
It is recommended to limit each container to 1 CPU.
Add the following keys:
"limit.docker.cpu": true, "docker.cpu.limit": "<CPU Limit>"
For example, 1.0. The default is 1.0.
Save the changes.
Restart the demisto service on the engine machine.
(CentOS/RHEL) sudo systemctl restart d1
(Ubuntu/DEB) sudo service d1 restart
Configure the PIDs Limit
Configure the PIDs limit by adding a server configuration for a Cortex Xpanse engine.
Configure the PIDs limit by setting the python.pass.extra.keys
advanced parameter.
Add the following key:
"python.pass.extra.keys": "--pids-limit=256"
Save the changes.
Restart the demisto service on the engine machine.
(CentOS/RHEL) sudo systemctl restart d1
(Ubuntu/DEB) sudo service d1 restart
Configure the Open File Descriptors Limit
Configure the open file descriptors limit by adding a server configuration in an engine.
You need to set the python.pass.extra.keys
advanced parameter to configure the open file descriptors limit.
Type the following key.
"python.pass.extra.keys": "--ulimit=nofile=1024:8192"
Save the changes.
Restart the demisto service on the engine machine.
(CentOS/RHEL) sudo systemctl restart d1
(Ubuntu/DEB) sudo service d1 restart
Docker Network Hardening
Use the Docker network hardening guide to control network access.
Docker creates its own networking stack that enables containers to communicate with other networking endpoints. You can use iptables rules to restrict which networking sources the containers communicate with. By default, Docker uses a networking configuration that allows unrestricted communication for containers, so containers can communicate with all IP addresses.
Block Network Access to the Host Machine
Integrations and scripts running within containers do not usually require access to the host network. For added security, you can block network access from containers to services running on the engine machine.
Add the following iptables rule for each private IP on the engine machine:
sudo iptables -I INPUT -s <IP address range> -d <host private IP address> -j DROP
For example, to limit all source IPs from containers that use the IP range 172.16.0.0/12, run:
sudo iptables -I INPUT -s 172.16.0.0/12 -d 10.18.18.246 -j DROP
This also ensures that new Docker networks that use addresses in the 172.16.0.0/12 range are blocked from accessing the host private IP. The default IP range used by Docker is 172.16.0.0/12. If you have configured a different range in Docker's daemon.json config file, use the configured range. Alternatively, you can limit specific interfaces by using the interface name, such as docker0, as a source.
(Optional) To view a list of all private IP addresses on the host machine, run:
sudo ifconfig -a
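It can help to compose and review the rule before applying it. The helper below only prints the command (the IP range and host IP are the example values from this section); run the printed command with sudo once you have verified it:

```shell
# Compose the iptables DROP rule from a container source range and a host
# private IP. The function prints the rule for review instead of applying it.
build_host_block_rule() {
  printf 'iptables -I INPUT -s %s -d %s -j DROP\n' "$1" "$2"
}

build_host_block_rule "172.16.0.0/12" "10.18.18.246"
```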
Assign a Docker Network for a Docker Image
If your engine is installed on a cloud provider such as AWS or GCP, it is a best practice to block containers from accessing the cloud provider’s instance metadata service. The metadata service is accessed via IP address 169.254.169.254. For more information about the metadata service and the data exposed, see the AWS and GCP documentation.
There are cases where you might need to provide access to the metadata service. For example, access is required when using an AWS integration that authenticates via the available role from the instance metadata service. You can create a separate Docker network, without the blocked iptables rule, to be used by the AWS integration’s Docker container. For most AWS integrations, the relevant Docker image is demisto/boto3py3.
Create a new Docker network by running the following command:
sudo docker network create -d bridge -o com.docker.network.bridge.name=docker-metadata aws-metadata
Add the following key.
"python.pass.extra.keys.demisto/boto3py3": "--network=aws-metadata"
Save the changes.
Restart the demisto service on the engine machine.
sudo systemctl start d1
(Ubuntu/DEB)
sudo service d1 restart
Verify the configuration of your new Docker network:
sudo docker network inspect aws-metadata
Block Internal Network Access
In some cases, you might need to block specific integrations from accessing internal network resources and allow the integrations to access only external IP addresses. This setting is recommended for the Rasterize integration when used to rasterize untrusted URLs or HTML content, such as those obtained via external emails. With internal network access blocked, a rendered page in the Rasterize integration cannot perform an SSRF or DNS rebinding attack to access internal network resources.
Create a new Docker network by running the following command:
sudo docker network create -d bridge -o com.docker.network.bridge.name=docker-external external
Block network access to the host machine for the new Docker network:
sudo iptables -I INPUT -i docker-external -d <host private IP> -j DROP
Block network access to cloud provider instance metadata:
sudo iptables -I DOCKER-USER -i docker-external -d 169.254.169.254/32 -j DROP
Block internal network access:
sudo iptables -I DOCKER-USER -i docker-external -d 10.0.0.0/8 -j DROP
sudo iptables -I DOCKER-USER -i docker-external -d 172.16.0.0/12 -j DROP
sudo iptables -I DOCKER-USER -i docker-external -d 192.168.0.0/16 -j DROP
Add the following key to run integrations that use the demisto/chromium Docker image with the Docker network external:
"python.pass.extra.keys.demisto/chromium": "--network=external"
Save the changes.
Restart the demisto service on the engine machine.
(CentOS/RHEL) sudo systemctl restart d1
(Ubuntu/DEB) sudo service d1 restart
Verify the configuration of your new Docker network:
sudo docker network inspect external
Persist Iptables Rules
By default, iptables rules are not persistent after a reboot. To ensure your changes persist, save the iptables rules using the recommended method for your Linux distribution, for example the iptables-persistent package on Debian/Ubuntu or the iptables-services package on CentOS/RHEL.
Troubleshoot Docker Performance Issues
Troubleshoot Docker performance issues in Cortex Xpanse. Update Docker package and dependencies.
This information is intended to help resolve the following Docker performance issues.
Containers are getting stuck.
The Docker process consumes a lot of resources.
Time synchronization issues between the container and the OS.
Cause
The installed Docker package and its dependencies are not up to date.
Workaround
Update the package manager cache.
CentOS: yum check-update
Debian: apt-get update
(Optional) Check for a newer version of the Docker package.
CentOS: yum check-update docker
Debian: apt-cache policy docker
Update the Docker package.
CentOS: yum update docker
Debian: apt-get install --only-upgrade docker
Troubleshoot Docker Networking Issues
Troubleshoot Docker networking issues in Cortex Xpanse.
In Cortex Xpanse, integrations and scripts run either on the tenant or on an engine. If you have Docker networking issues when using an engine, you need to modify the d1.conf file.
On the machine where the engine is installed, open the d1.conf file.
Add the following to the d1.conf file:
{
  "LogLevel": "info",
  "LogFile": "/var/log/demisto/d1.log",
  "EngineURLs": [
    "wss://1234.demisto.live/d1ws"
  ],
  "BindAddress": ":443",
  "EngineID": "XYZ",
  "ServerPublic": "ABC",
  "ArtifactsFolder": "",
  "TempFolder": "",
  "python.pass.extra.keys": "--network=host"
}
Save the file.
Restart the engine using systemctl restart d1 or service d1 restart.
.