The following procedure describes the steps for deploying an OVA image on OCI and then installing Cortex XSOAR on your deployed virtual machines.
Important
The IPs of all VMs (nodes) in a cluster, as well as the virtual IP, must be on the same subnet; they currently cannot be split across subnets.
To install a Cortex XSOAR 8 tenant, you need to log into Cortex Gateway, which is a portal for downloading the relevant image file and license. Downloading a file image from Cortex Gateway ensures you have the latest pre-configured software package for easy deployment and updates. If you have multiple or development tenants, you must repeat these tasks for each tenant.
Prerequisite
A Customer Support Portal (CSP) account.
You need to set up your CSP account. For more information, see How to Create Your CSP User Account.
When you create a CSP account you can set up two-factor authentication (2FA) to log into the CSP, by using an Email, Okta Verify, or Google Authenticator (non-FedRAMP accounts). For more information, see How to Enable a Third Party IdP.
Have one of the following roles assigned:
| Role | Details |
|---|---|
| CSP role | The Super User role is assigned to your CSP account. The user who creates the CSP account is granted the Super User role. |
| Cortex role | You must have the Account Admin role. If you are the first user to access Cortex Gateway with the CSP Super User role, you are automatically granted Account Admin permissions for the Cortex Gateway. You can also add Account Admin users as required. |
To download the Cortex XSOAR 8 images from Cortex Gateway, you need a license (or evaluation license via sales) assigned to your CSP account.
Review the System requirements for deploying a Cortex XSOAR tenant.
Have a basic understanding of how to deploy the OVA file format.
For VMware ESXi 6.5 and later, you need hardware version 13.
Log in to Cortex Gateway.
In the Available for Activation section, use the serial number to locate the tenant to download.
By default, the Production-Standalone license is selected. You can also select Dev.
Production and development are separate Kubernetes clusters with no dependency between them. For example, you can deploy a three-node cluster for production and a standalone node for development, or run a small-scale deployment for development and a large-scale deployment for production.
If you want to use a production and a development tenant with a private remote repository, select Dev. If you don't select it now, you can install a development tenant later.
Select Download On Prem.
Click Next.
Select the OVA image format to download.
OVA is supported by AWS, Oracle Cloud Infrastructure (OCI), and VMware (for example, vSphere).
Select the checkbox to agree to the terms and conditions of the license and click Download.
Tip
In Google Chrome, to download the image and license files together, you may need to set the browser Settings → Privacy and security → Site settings → Additional permissions → Automatic downloads to the default behavior Sites can ask to automatically download multiple files.
Two files download: a zipped license file containing one or more JSON license files with instructions, and a zipped image file of the type you selected (.ova or .vhd).
Extract (unzip) the license and image files.
Currently, only Oracle Public Cloud is supported (not Government Cloud).
If you set your Cortex XSOAR environment as a standalone (single node), you cannot add nodes to it and switch to a cluster. If you deploy three nodes, you can later add nodes and expand the cluster. For more information, see Manage nodes in a cluster.
Important
To implement built-in High Availability, deploy a cluster with three nodes (VMs), with each VM on a different hypervisor. This ensures that if one hypervisor fails, the other VMs continue to operate.
You then need to:
Establish trust between all nodes in the cluster (Task 5).
Set the Cluster FQDN to the reverse proxy/ingress controller IP address (Task 6). The reverse proxy/ingress controller serves as a single entry point to distribute traffic across the nodes in the cluster.
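The reverse proxy/ingress controller itself is not part of Cortex XSOAR; you provide it. As one possible sketch (not the official configuration), an HAProxy front end could distribute TCP 443 across the nodes and probe the HTTP health endpoint on port 10254 described in Task 6. The node IPs and names below are placeholders.

```
# Hypothetical HAProxy fragment; replace node IPs with your own.
frontend xsoar_front
    bind *:443
    mode tcp
    default_backend xsoar_nodes

backend xsoar_nodes
    mode tcp
    balance roundrobin
    # Layer-7 health check against each node's health endpoint
    option httpchk GET /healthz
    server node1 10.196.37.10:443 check port 10254
    server node2 10.196.37.11:443 check port 10254
    server node3 10.196.37.12:443 check port 10254
```

Any equivalent load balancer that forwards port 443 and honors the /healthz endpoint on port 10254 would serve the same purpose.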
In OCI, upload the OVA image file to a private bucket.
Note
Make sure the bucket is private and secure.
Import the image from the bucket into the OCI environment.
Disable CPU logging and performance (DRS). For more information, see Oracle Define or Edit Server Pool Policies.
Create the instance.
Create a block volume.
The block volume size depends on the scale you want to use. For example, 1024 GB (1 TB) corresponds to the hardware requirements for a small-scale deployment with a 256 GB boot disk plus an additional separate 775 GB data disk.
Important
Every virtual machine is provided with a 256 GB hard disk to run the OS. However, you also need to add an extra hard disk for each virtual machine instance you want to deploy to run the application.
All virtual machines in a cluster must have the same storage size.
To ensure successful deployment, make sure the hard disks meet performance requirements detailed in the System requirements.
Attach the block volume to the running instance.
Note
The attachment type needs to be Paravirtualized (and not iSCSI).
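As an illustration, the volume creation and paravirtualized attachment above can be scripted with the OCI CLI. This is a sketch only; the OCIDs, availability domain, display name, and size are placeholders you must replace with your own values.

```shell
# Sketch: create a 1024 GB block volume and attach it to a running instance.
# All OCIDs and names below are placeholders.
COMPARTMENT_OCID="ocid1.compartment.oc1..example"
AD_NAME="Uocm:PHX-AD-1"
INSTANCE_OCID="ocid1.instance.oc1..example"

VOLUME_OCID=$(oci bv volume create \
  --compartment-id "$COMPARTMENT_OCID" \
  --availability-domain "$AD_NAME" \
  --size-in-gbs 1024 \
  --display-name xsoar-data \
  --wait-for-state AVAILABLE \
  --query 'data.id' --raw-output)

# The attachment type must be paravirtualized, not iSCSI
oci compute volume-attachment attach \
  --type paravirtualized \
  --instance-id "$INSTANCE_OCID" \
  --volume-id "$VOLUME_OCID"
```

The same operations can of course be performed from the OCI Console, as described in the steps above.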
Repeat these steps for each VM in a cluster.
For first-time login, open an external terminal and use the ssh admin@<server ip address> command to log in over SSH. The default user name and password is admin. Give the admin user a new password as follows.
Important
Save the SSH password securely. If you lose this password you cannot recover or change it, and to use SSH you will need to redeploy the cluster.
The password must be at least eight characters long and contain at least:
One lower case letter
One upper case letter
One number, or one of the following special characters: !@#%
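As a convenience, the password policy above can be pre-checked before you type it into the textual UI. The helper below is a minimal sketch, not part of the product:

```shell
# Sketch: check a candidate password against the documented policy:
# at least eight characters, one lowercase letter, one uppercase letter,
# and one number or one of the special characters !@#%
check_password() {
  pw="$1"
  [ "${#pw}" -ge 8 ] || return 1                    # minimum length
  case "$pw" in *[a-z]*) ;; *) return 1 ;; esac     # lowercase letter
  case "$pw" in *[A-Z]*) ;; *) return 1 ;; esac     # uppercase letter
  case "$pw" in *[0-9!@#%]*) ;; *) return 1 ;; esac # digit or special char
}

check_password 'Example1pw' && echo "meets policy" || echo "rejected"
# prints "meets policy"
```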
If this is not a first-time login, you can log in from the web console or from a terminal using the ssh admin@<server ip address> command. The textual UI menu opens with all the configuration and installation options.
Tip
To start using the textual UI, click anywhere on the screen.
To navigate between the menu items, use the up and down arrow keys. To select a menu item, press the Enter key.
To navigate between fields within a menu item, use the Tab key. To save settings, tab to the Save button and press the Enter key.
To go back to the menu from a specific menu item field, press the esc key.
Note
Since the Cloud platform handles network and IP settings, you can skip the Host Configuration menu in the textual UI.
Confirm the following network and IP settings are added to the rules of the security group or the firewall rules for each node in a cluster (for standalone there is just a single node). If they are not added to the rules, the installation may fail.
Port configurations
For standalone (one VM) and a three-node cluster (three VMs):
| Port | Protocol | Purpose |
|---|---|---|
| 22 | TCP | SSH communication |
| 8880 | TCP | Node communication |
A Kubernetes cluster consists of a control plane and one or more worker nodes. For Cortex XSOAR, in standalone (one VM), the VM acts as both control plane and as a worker node. In multi-node clusters, the first three nodes act as both control plane and as worker nodes, and any additional node added acts as a worker node.
| Name | Port | Protocol |
|---|---|---|
| etcd client port | 2379 | TCP |
| etcd peer port | 2380 | TCP |
| Kubernetes API | 6443 | TCP |
| Kubelet API | 10250 | TCP |
| kube-scheduler | 10257 | TCP |
| kube-controller-manager | 10259 | TCP |
| Name | Port | Protocol |
|---|---|---|
| kube nodeport range | 30000:32767 | TCP |
For a multi-node cluster (three VMs):
| Name | Port | Protocol |
|---|---|---|
| Calico with IPv4 WireGuard | 51820 | UDP |
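After opening these ports, you can sanity-check reachability between nodes with a small script. This is a sketch only: the node IPs are placeholders, and it covers only the TCP ports above (51820 is UDP, which nc probes with -u and which is less reliable to test).

```shell
# Sketch: probe the TCP ports listed above from one node to the others.
# Replace NODES with your actual node IPs. Requires nc (netcat).
NODES="10.196.37.11 10.196.37.12"
TCP_PORTS="22 8880 2379 2380 6443 10250 10257 10259"
for node in $NODES; do
  for port in $TCP_PORTS; do
    if nc -z -w 2 "$node" "$port"; then
      echo "$node:$port reachable"
    else
      echo "$node:$port BLOCKED"
    fi
  done
done
```

Any port reported as BLOCKED indicates a missing security group or firewall rule that could cause the installation to fail.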
URLs
Check the following URLs to ensure Cortex XSOAR operates properly.
| Function | Service | Port | Direction |
|---|---|---|---|
| Web interface | HTTPS | 443 | Inbound |
| Engine connectivity | HTTPS | 443 (configurable) | Inbound |
| Integrations | | Integration-specific ports | Outbound |
| Unit42 Intel Inventory (TIM license) | https://unit42intel.xsoar.paloaltonetworks.com | 443 | Outbound |
| Marketplace | | 443 | Outbound |
| On-prem Gateway | onpremgw.crtx.[region].paloaltonetworks.com (Cortex XSOAR accesses new versions from, and uploads licenses to, this repository) | 443 | Outbound |
| Download packages required for installation | | 80 | Outbound |
NTP
Ensure all nodes are synchronized with no NTP offset to prevent degraded storage performance.
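For example, on a systemd-based node you can verify clock synchronization with standard tools. The commands below assume timedatectl and/or chrony are present on the image; adjust if your image differs.

```shell
# Check clock synchronization on each node; an unsynchronized clock
# can degrade storage performance.
timedatectl show -p NTPSynchronized   # expect NTPSynchronized=yes
chronyc tracking 2>/dev/null | grep -E 'Leap status|System time'
```

Run the same check on every node in the cluster, not just one.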
When a proxy is configured in Cortex XSOAR, the system by default routes internal node-to-node communication and other internal traffic through that configured proxy. If you do not configure a proxy, the system uses standard network routing and DNS resolution.
If you want to use a proxy, define the proxy address and port settings. The proxy can be set at any point, during Cortex XSOAR deployment or at a later stage.
From the textual UI menu, select Proxy Configuration.
Configure the following settings.
Proxy Address
Note
You can either enter the address as IP:port without an http:// or https:// prefix, or enter the host name.

Proxy Port
Select Save.
This task is not relevant for a standalone deployment (single node).
All nodes (VMs) in a cluster must have SSH connections between them, with every node trusting the others. To establish trusted connections in a cluster, one node is designated as the signing server host, generating a token for secure communication and authentication. The other nodes connect to the host using the token displayed on the host's screen.
The IPs of all VMs (nodes) in a cluster, as well as the virtual IP, must be on the same subnet; they currently cannot be split across subnets.
Important
To implement built-in High Availability, after establishing trust between all nodes in a cluster, in the cluster installation step (Task 6) you need to set a single entry point to distribute traffic across the nodes in the cluster. Do this by setting the Cluster FQDN to either the virtual IP address or to the reverse proxy/ingress controller IP address.
In the textual UI menu for the VM you want to be the host, select Connect Nodes.
Select Host.
A message displays that this action cancels prior trust established with other nodes. Select Yes to continue.
This node becomes the host, and a token is generated and displayed on the screen. Copy the token.
Note
Keep this window open (do not select Stop) until trust is established between all nodes to enable the host to listen for the token from the other nodes.
In the textual UI for each additional node (VM) in the cluster:
Select Connect Nodes.
Select Join.
Paste the Token generated for the host.
Enter the Host IP Address.
Select Submit.
A message displays that this action cancels prior trust established with other nodes. Select Yes to continue.
Select OK.
After trust is established between all the nodes in the cluster, go back to the host node and select Stop to close the listening window.
Prerequisite
Ensure the following DNS records were added to your DNS server to resolve hostnames to the cluster IP address (only static, DHCP is not supported). These DNS records (for a given tenant) should all point to the same cluster IP address to ensure a single entry point.
xsoar.<hostname>.<domain>: The Cortex XSOAR DNS name for accessing the UI. For example, xsoar.mycompany.com.
api-<hostname>.<domain>: The Cortex XSOAR DNS name mapped for API access. For example, api-xsoar.mycompany.com. This should be a CNAME entry pointing to the same cluster IP address.
ext-<hostname>.<domain>: The Cortex XSOAR DNS name mapped for access to long running integrations. For example, ext-xsoar.mycompany.com. This should be a CNAME entry pointing to the same cluster IP address.
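You can verify that the records resolve to the same cluster IP before installing. This is a sketch only; the host names are the examples from the prerequisite and must be replaced with your own.

```shell
# Sketch: confirm all three DNS records resolve to one cluster IP.
for name in xsoar.mycompany.com api-xsoar.mycompany.com ext-xsoar.mycompany.com; do
  getent hosts "$name" | awk '{print $2, "->", $1}'
done
```

All three names should print the same IP address; a mismatch means the DNS records do not share a single entry point.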
From the textual UI menu, select Cluster Installation.
The virtual machine you use to run the installer will deploy Cortex XSOAR on all virtual machines in a cluster.
For a single virtual machine (standalone), configure the settings for a single node.
Configure the following settings.
Important
The IPs of all VMs (nodes) in a cluster, as well as the virtual IP, must be on the same subnet; they currently cannot be split across subnets.
You can only change these field values in the textual UI menu before installing. To change these values after installing, you need to redeploy your cluster and then reinstall. Contact support or engineering for assistance.
Field
Description
Cluster Nodes
A list of IPs of all virtual machines/nodes in the cluster, separated by a space. For example:
10.196.37.10 10.196.37.11 10.196.37.12
Copy the IP of each VM from the Private IPv4 address in the OCI Instance information tab and paste it in this field, separated by a space.
Cluster FQDN
The Cortex XSOAR environment DNS name. For example:
<subdomain>.<domain name>.<top level domain>
Copy the FQDN from the Internal FQDN field in the OCI Instance information tab and paste it in this field.
For a single node: This field value must be registered in your DNS server so the FQDN will be resolved to the IP of the node.
For a multi-node cluster: To implement built-in HA using a reverse proxy/ingress controller, you need to set this field value to match the IP of the reverse proxy/ingress controller, and it must be registered in your DNS server so the FQDN will be resolved to the IP of the reverse proxy/ingress controller.
The reverse proxy/ingress controller IP address serves as a single entry point for the entire Cortex XSOAR cluster. The reverse proxy/ingress controller checks the health endpoint of the node for any issues. If the node is healthy it can be used to process requests. To use a reverse proxy/ingress controller IP address:
Set access to the cluster nodes through port 443.
Use HTTP port 10254 with the path /healthz as the health endpoint.
Note
Cortex XSOAR supports only static IP addresses for each virtual machine in the cluster; it does not support a DHCP (dynamic IP) network interface.
Virtual IP (optional)
The Cortex XSOAR environment virtual IP for the multi-node cluster. It is a virtual interface assigned to one of the nodes to provide a single access point to the cluster. The virtual IP address must be a dedicated, available IP address that is not assigned to any nodes in the cluster.
Important
Do not fill in this field (Cortex XSOAR does not support virtual IPs in Cloud deployments).
Cluster Region
The region the cluster is located in. For example, US.
Cortex XSOAR Admin Email, Password, and Confirm Password
Credentials for the first user to log in to Cortex XSOAR.
Important
These fields can only be changed before installation, so it is important to keep this information secure. To change values like username or password after installation, you will need to redeploy your cluster and reinstall. Contact support or engineering for assistance.
For the Cortex XSOAR Admin Email, we recommend using a service account rather than a specific user email address since this cannot be changed after installation.
Note
The password must be at least eight characters long and contain at least:
One lower case letter
One upper case letter
One number, or one of the following special characters: !@#%
Select Install.
Verify that all nodes meet the hardware and network requirements, and select Install again.
After the installation tasks run, an Installation completed successfully message displays in the textual UI. However, you need to wait until the installation process fully completes (approximately 30 minutes) and then check that you can log in to Cortex XSOAR. You then need to upload your license to enable all Cortex XSOAR pages.
Log in to Cortex XSOAR.
When you log in for the first time, use the Admin password and email you set during installation.
Upload your license to Cortex XSOAR.
For more information, see Add the Cortex XSOAR license.