Troubleshoot common issues in Cortex XSOAR Elasticsearch deployments.
Your Elasticsearch deployment can have issues with feed ingestion, memory, or general functionality.
Note
PANW support will help to the best of their ability. Consider engaging vendor support as needed.
After reviewing the troubleshooting items in the tables below, if you need to create a support ticket:

1. Set the log level to debug by going to → → .
2. Reproduce the issue and download the server log bundles.
3. (High Availability) For High Availability deployments, you only need to download the server log bundle once, as it gathers logs from all online application servers. If your High Availability servers are behind a load balancer and the log bundle times out, increase the timeout on your load balancer to five minutes.
4. Attach the logs to the support ticket.
General issues
Issue | Description | Recommendation |
---|---|---|
limit of total fields (X) in index has been exceeded | The mapping in Elasticsearch exceeded the maximum configured field capacity. | Use |
field expansion matches too many fields, limit: X, got: X+Y | The number of fields a query can target exceeded the limit of X. | Set |
request to elasticsearch exceeded maximum size, increase 'http.max_content_length' in your elasticsearch.yml to allow larger requests or (413) request entity too large | A request to Elasticsearch exceeded the maximum size of | If the request is a bulk save, decrease the |
too many requests to elasticsearch | | |
unable to search on entries | By default, indexing entry content is disabled for performance reasons. | Set the . You can also index notes, changes, and evidence for searches only using the . Note: After making these changes, you must wait for a new month for the new mapping to apply. To apply the change to existing entries, reindex the common-entry_* indices. |
too many open files | By default, most Linux distributions ship with 1,024 file descriptors allowed per process. This is too low for even an Elasticsearch node that needs to handle hundreds of indices. | Increase your file descriptor count to 64,000. |
[400] Failed with error [1:417] [bool] failed to parse field [must]. Other reasons: [[{x_content_parse_exception [1:417] [bool] failed to parse field [must]}]] | Queries with multiple | Avoid using |
cannot restore index [.geoip_databases] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name | When using Elasticsearch v7.14 or later, you may encounter failures when restoring from snapshots. | Add |
ReleasableBytesStreamOutput cannot hold more than 2GB of data. Other reasons: [[{illegal_argument_exception ReleasableBytesStreamOutput cannot hold more than 2GB of data}]] [error '[400] Failed with error: ReleasableBytesStreamOutput cannot hold more than 2GB of data | If there are insufficient shards, Elasticsearch’s circuit breaker limit may be reached due to the search load. | Increase the number of shards. For example, if you have a three-data-node cluster, you should have at least two replicas for each active shard, making the data available across all nodes. We also recommend using the nodes stats API to verify the nodes are balanced for read and write operations. |
Data too large, data for [<xxx>] would be [xxxx/xxxx], which is larger than the limit of [xxxxx/xxxxx] | Elasticsearch's circuit breaker limit is reached due to the load of indexing operations. | |
HTTP 504 gateway request timeouts | HTTP requests are timing out and preventing playbook data from loading in the browser. | If your high availability servers are behind a load balancer, increase the timeout on your load balancer to 300s. |
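Several of the errors in this table correspond to standard Elasticsearch settings. The following is a hedged sketch using stock Elasticsearch keys and illustrative values, not confirmed Cortex XSOAR guidance; verify the keys and values against your Elasticsearch version before applying them.

```yaml
# elasticsearch.yml -- illustrative values only
# "(413) request entity too large": raise the HTTP request-size ceiling
# (default 100mb); this is a static setting and requires a node restart.
http.max_content_length: 500mb

# "too many open files": also raise the OS file-descriptor limit for the
# Elasticsearch user, e.g. in /etc/security/limits.conf:
#   elasticsearch - nofile 65536
```

The per-index field limit behind "limit of total fields (X) in index has been exceeded" is a dynamic index setting (`index.mapping.total_fields.limit`, default 1,000) and can be raised without a restart through the index settings API, for example with `PUT <index>/_settings` and the body `{"index.mapping.total_fields.limit": 2000}`.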
Memory issues
Issue | Description | Recommendation |
---|---|---|
Insufficient JVM memory | The default JVM memory is 1 GB. In production environments, this might be insufficient. | Increase the JVM memory. Note: Set the JVM heap to no more than 50% of total machine memory and no more than 32 GB. |
Insufficient term query size | The term query size is used by bulk edit operations. The default term query size is 65,536 and may be insufficient. | Increase the term query size. |
Insufficient bulk size | The bulk size depends on the available JVM memory and affects the amount of data that Cortex XSOAR can send and process in Elasticsearch. | |
Heap size | The recommended maximum heap size is 50% of total server memory, provided the remaining 50% stays free for the operating system. | |
Performance issues due to swapping enabled | | Disable swapping in Elasticsearch to improve performance. |
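The JVM heap and swapping recommendations above map to standard Elasticsearch configuration files. A minimal sketch, assuming default package locations; the paths and the 16 GB heap are illustrative, and the heap should stay at or below 50% of machine memory and under 32 GB:

```
# /etc/elasticsearch/jvm.options.d/heap.options
# Pin min and max heap to the same value so the heap never resizes at runtime.
-Xms16g
-Xmx16g
```

To disable swapping for the Elasticsearch process, one standard approach is setting `bootstrap.memory_lock: true` in `elasticsearch.yml` (and allowing memory locking at the OS or service level); alternatively, disable swap on the host entirely with `swapoff -a`.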
Feed ingestion issues
Issue | Description | Recommendation |
---|---|---|
Stack overflow | In some cases, complex search queries cause Elasticsearch to fail with a stack overflow. | Use the following search query syntax: . To determine how many clauses a query can contain, set the maximum clause count and the maximum total field count. Maximum clause count: for Elasticsearch 6.0 and later the key is ; for Elasticsearch 5.x and earlier the key is . Default: 1,024; you can increase the value. Maximum total field count: the key is . Default: 1,000; you can increase the value. |
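For reference, the two limits described above exist as standard Elasticsearch settings. This is a hedged sketch: the key names below are from stock Elasticsearch, moved between major versions, and may not match the keys the Cortex XSOAR documentation intends, so verify them against your release.

```yaml
# elasticsearch.yml -- illustrative value
# Maximum boolean clauses per query (Elasticsearch 6.x/7.x spelling;
# static setting, requires a node restart). Default 1024.
indices.query.bool.max_clause_count: 4096
```

The maximum total field count (`index.mapping.total_fields.limit`, default 1,000) is a dynamic per-index setting and can be updated through the index settings API without restarting the node.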