There are several options for implementing Cortex XSOAR with Elasticsearch, each with specific sizing requirements.
This topic provides information about the system requirements for implementing Cortex XSOAR with Elasticsearch.
The information in the following table is per Elasticsearch node, and assumes that the node is assigned all Elasticsearch node roles (such as master, data, and ingest).
Development:              8 CPU cores    16 GB RAM    250 GB SSD
Production (minimum):     16 CPU cores   32 GB RAM    500 GB SSD with minimum 3k dedicated IOPS
Production (recommended): 36 CPU cores   64 GB RAM    1 TB SSD
We recommend starting with the production minimum. If the production minimum is not sufficient for your needs (for example, when managing many tenants in a multi-tenant environment, experiencing high memory use, or running a large number of processes), we recommend upgrading to the production recommended specifications.
Ensure that latency between the Elasticsearch and Cortex XSOAR servers, and between Elasticsearch servers, does not exceed 100 ms. Latency above 100 ms can cause serious performance degradation. For optimal performance, we recommend 10 ms or lower.
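As a quick way to verify the latency requirement, the round-trip time of a TCP handshake to an Elasticsearch node's HTTP port is a reasonable proxy. This is a sketch (the host name is hypothetical; 9200 is Elasticsearch's default HTTP port):

```python
import socket
import time

def tcp_round_trip_ms(host: str, port: int, attempts: int = 5) -> float:
    """Measure the average TCP connect round-trip time in milliseconds.

    Each attempt performs a full TCP handshake and immediately closes
    the connection, approximating network latency to the node.
    """
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        samples.append((time.perf_counter() - start) * 1000.0)
    return sum(samples) / len(samples)

# Example (hypothetical host): flag nodes above the 100 ms hard limit
# or the 10 ms recommendation.
# latency = tcp_round_trip_ms("es-node-1.example.com", 9200)
# print(f"{latency:.1f} ms", "OK" if latency <= 10 else "check network path")
```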
Elasticsearch user permissions are available in the security guidelines.
Ensure the latest Cortex XSOAR version is installed.
Elasticsearch: 7.4 to 7.17, including minor versions
OpenSearch: 1.0 to 1.2, including minor versions
Elasticsearch in the Cloud
Cortex XSOAR supports using Elasticsearch with all the major cloud service providers: Amazon Web Services, Azure, and Google Cloud Platform.
For OpenSearch, ensure that the AWS instance type supports a maximum HTTP payload of 100 MB, which is sufficient for production use. For more information, see Amazon OpenSearch Instance Limits.
You can use Elasticsearch as a service provided by your cloud provider, or install Elasticsearch on a server in the cloud.
The hardware requirements for Elasticsearch in the cloud are similar to those listed above. To meet them with your cloud provider, Cortex XSOAR recommends choosing machine types based on each node's intended role. For example:
When the Elasticsearch server functions as a data node, we recommend a storage-optimized machine, such as the AWS i3.2xlarge. Alternatively, you can use a memory-optimized machine, such as the AWS r3.2xlarge. On managed OpenSearch, we recommend c5.4xlarge.search as a minimum, or c5.9xlarge.search for higher scale.
When the Elasticsearch server serves any other function (such as a master node), we recommend a compute-optimized machine, such as the AWS c4.2xlarge. On managed OpenSearch, we recommend m5.xlarge.search.
You can configure your cloud environment to work with different regions provided that you can maintain the minimum latency requirements noted above.
We recommend that you implement the following Elasticsearch configurations in Cortex XSOAR. The number of shards and replica shards should match the total number of Elasticsearch nodes that you have.
Set the number of shards for an index
This server configuration enables you to set the number of shards for a specific index upon creation, where <common-indicator> is the name of the index. The default is 1.
To improve write performance, you can increase the number of shards and decrease the number of replica shards.
Set the number of replica shards for an index
This server configuration enables you to set the number of replica shards for a specific index upon creation, where <common-indicator> is the name of the index. To increase search performance and data redundancy, set the value to the number of Elasticsearch nodes that you have. The default is 1.
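These Cortex XSOAR server configurations take effect when the index is created. For reference, the equivalent Elasticsearch index settings look like the following sketch, which only builds the JSON body that a standard create-index request (`PUT /<index-name>`) would carry; both values follow the guidance above of matching the node count:

```python
import json

def index_settings(num_nodes: int) -> str:
    """Build the JSON body for creating an index whose shard and replica
    counts both equal the total number of Elasticsearch nodes, per the
    guidance above (the Elasticsearch defaults are 1 and 1).
    """
    return json.dumps({
        "settings": {
            # More shards spread writes across nodes (write performance).
            "number_of_shards": num_nodes,
            # More replicas improve search performance and data redundancy.
            "number_of_replicas": num_nodes,
        }
    })

# Body for creating an index on a three-node cluster.
body = index_settings(3)
```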
Maximum indicator capacity and disk usage comparison
The following table compares the maximum total indicator capacity and disk usage for BoltDB and Elasticsearch. The maximum indicator capacity value was determined when testing the system.
We recommend using Elasticsearch if you plan to exceed at least one of the following maximum capacities for BoltDB.
The Cortex XSOAR indicators used to test the sizing requirements did not contain a significant number of additional or custom fields. The largest indicators we tested had 20 additional or custom fields, each containing a random string of 1 to 16 characters, so the tested indicators were approximately 0.5 KB in size. If you plan to have additional or custom fields for indicators, reduce the maximum numbers accordingly.
Maximum indicator capacity (total):
BoltDB: 5 million (~ 30 GB); requires up to 10 seconds for a complex query
Elasticsearch: 100 million (~ 70 GB); requires approximately 40 seconds for a complex query
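The totals above imply a rough per-indicator disk cost, which can be used to extrapolate storage for a planned indicator count. This is simple arithmetic on the table's figures, not an official sizing formula:

```python
def kb_per_indicator(total_gb: float, indicators: int) -> float:
    """Derive the approximate on-disk cost per indicator, in KB,
    from a total disk usage (GB) and an indicator count."""
    return total_gb * 1024 * 1024 / indicators

# Figures from the table above.
boltdb_kb = kb_per_indicator(30, 5_000_000)       # ~6.3 KB per indicator
elastic_kb = kb_per_indicator(70, 100_000_000)    # ~0.7 KB per indicator
```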
If performance is poor, or you know in advance that you will need more than the maximum number of indicators, consider scaling BoltDB or moving to Elasticsearch. If you are already on Elasticsearch, you can scale it as well. For both BoltDB and Elasticsearch, you can scale by adding engines for one or more feed integrations, or by increasing the resources (CPU, RAM, disk IOPS) of the Cortex XSOAR server. For Elasticsearch, you can also increase the cluster size from one server to two or more servers.
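As a quick planning check, a sketch comparing a planned indicator count against the tested maximums above:

```python
# Tested maximum indicator capacities from the comparison above.
MAX_INDICATORS = {"boltdb": 5_000_000, "elasticsearch": 100_000_000}

def needs_scaling(planned: int, backend: str) -> bool:
    """Return True when the planned indicator count exceeds the tested
    maximum for the given backend, meaning you should scale (engines,
    server resources, or, for Elasticsearch, cluster size)."""
    return planned > MAX_INDICATORS[backend]
```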
Incident disk usage comparison
The following table compares the disk usage for BoltDB and Elasticsearch.
Number of Incidents
Single feed fetch comparison
The following table compares the number of indicators, time to ingestion, and disk usage for BoltDB and Elasticsearch.
Number of Indicators
Time to Ingestion
1.08 GB + 161 MB (Elasticsearch index)
1.08 GB + 26.7 MB (Elasticsearch index)
1.08 GB + 53 MB (Elasticsearch index)
1.08 GB + 570 MB (Elasticsearch index)
1.23 GB + 1 GB (Elasticsearch index)