Best practices for working with playbooks.
This guide provides best practices for building playbooks and reviews common mistakes and how to avoid them. Whether you are just starting or are creating advanced workflows, we recommend you review this guide carefully and implement these guidelines to make your playbooks efficient and easy to use.
Best Practices
Describe tasks clearly. Tasks should be clear to someone not familiar with the playbook workflow. This applies to task names, task descriptions, and the playbook description. When naming tasks, the guideline should be that users can understand what the playbook does by reading the task names, without having to open individual tasks to view the details.
Example of a good task name: Check if the IP is Private
Example of a bad task name: IP Check
Tasks take their inputs from the context, not directly from the previous tasks (even if the input source says from previous tasks). As an example of a task not receiving the right context, consider this bug (since fixed) in a playbook:
The playbook begins by classifying the email addresses as internal or external. It then checks the reputation of external email addresses, if any were found, in a dedicated branch. We expect that branch to run only if external addresses are found.
However, we did not apply a filter to the last task in that branch, the one that gets the reputation. This means that if both internal and external email addresses are found, both branches (internal and external) of the playbook run, and the reputation task runs without a filter, effectively taking all the email addresses in the inputs. The correct task should have filtered its inputs down to the external addresses only.
Use the CamelCase convention for inputs, keeping in mind that inherently capitalized terms should be kept in upper case. For example, the Entity ID input should be named EntityID, and MITRE Technique should be MITRETechnique.
Use the ignore-case option where possible, especially when checking Boolean playbook inputs such as True, which users may end up configuring as true with a lowercase t.
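The comparison itself is configured in a playbook filter, not in code, but the pitfall can be sketched in a short Python snippet. The is_true helper is hypothetical and simply mirrors what the ignore-case option does:

```python
def is_true(value):
    """Case-insensitive truthiness check, mirroring the ignore-case filter option."""
    return str(value).strip().lower() == "true"

# Users may configure a Boolean input as "True", "true", or "TRUE"
for configured in ("True", "true", "TRUE"):
    assert is_true(configured)

# A case-sensitive comparison would accept only one of the three spellings
assert sum(c == "True" for c in ("True", "true", "TRUE")) == 1
```

Without ignore-case, two of the three spellings above would silently fail the condition.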
Refrain from using ${ } (DT) when getting context values, if possible. This minimizes mistakes such as choosing as value instead of from previous tasks, and is more user-friendly.
However, if using DT can drastically simplify the playbook or improve performance, carefully weigh the pros and cons of DT before using it in your playbook tasks.
Note
If you need to use DT for complex processing and you think a new filter or transformer would provide a better alternative to your DT solution, you can request the feature or contribute it.
When returning to work on a playbook after a break, verify you’re working on the latest version. Reattach the playbook if it’s detached, and update it, to ensure you’re not editing an older version and introducing regressions. If you don’t want to reattach your playbook, or you’re still working on your custom version, we recommend reviewing the release notes to see what changes were made to the out-of-the-box playbook and copying those changes to your version.
When configuring playbook outputs, configure sub-keys as much as possible; do not limit configuration to only the root keys. For example, instead of outputting File, output File.Name, File.Size, etc. This helps when viewing the outputs of the playbook within another playbook.
When building playbooks, verify you are minimizing disk usage, CPU usage, and API calls:
Do I need to do this action in multiple tasks?
Can these tasks run in parallel instead of synchronously?
Where applicable, am I setting realistic timeouts, search windows, intervals?
Can I consolidate the API calls into one call? If not, can an integration enhancement solve this by accepting arrays as input instead of running multiple times for each input?
Am I unnecessarily storing the same data twice? Do I have the data I need already stored?
Where applicable, can I run this playbook without a loop?
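The API-consolidation question in the checklist above can be sketched in Python. Both lookup functions are hypothetical stand-ins for an integration command; the counter shows why a batch endpoint that accepts an array is cheaper than calling once per input:

```python
# Count round trips to a hypothetical reputation service
calls = {"count": 0}

def lookup(ip):
    """Stand-in for a single-IP API call (hypothetical endpoint)."""
    calls["count"] += 1
    return {"ip": ip, "score": 1}

def lookup_batch(ips):
    """Stand-in for a batch endpoint accepting an array (hypothetical)."""
    calls["count"] += 1
    return [{"ip": ip, "score": 1} for ip in ips]

ips = ["10.0.0.1", "203.0.113.5", "198.51.100.7"]

per_item = [lookup(ip) for ip in ips]   # 3 API calls
batched = lookup_batch(ips)             # 1 API call

assert per_item == batched              # same results...
assert calls["count"] == 4              # ...3 calls vs. 1
```

If the integration only accepts a single value, this is exactly the kind of enhancement (accepting arrays) worth requesting.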
When adding new fields that don't need to be searched, clear the checkbox so that they are not searchable by default. Examples of fields that should be searchable: Endpoint ID, Is Admin. Examples of fields that should not be searchable: Additional Notes, Alert Summary.
Playbooks that are triggered by an alert/job are considered parent playbooks. Sub-playbooks are playbooks that are used from within a parent playbook, as building blocks. The parent playbook is the main playbook that runs on the investigation, and each sub-playbook has a specific goal/responsibility.
Note
If a playbook has more than thirty tasks, consider breaking the tasks into multiple sub-playbooks. Sub-playbooks can be reused, can be managed easily when upgrading, and make it easier to follow the main playbook.
Parent playbooks usually have a closeInvestigation task at the end because they are the main playbook for that alert. Parent playbooks usually contain inputs that are passed down to sub-playbooks. Certain True/False flags may come from the parent playbook inputs.
Run playbooks in quiet mode to reduce the alert size and execute playbooks faster. For playbooks running in jobs, indicator enrichment should be done in quiet mode.
When configuring your integration, set indicator extraction to none and extract indicators only in specific tasks where required.
Update scripts and integration commands in playbook tasks to their most current version. Scripts that have updates are designated by a yellow triangle.
Note
When a script is deprecated, it is not removed from Cortex XSIAM. Playbooks that use the script still run, but show an error.
For production playbooks, remove playbook tasks that are not connected to the playbook workflow.
Avoid Common Mistakes
Minimize integration check mistakes by using the IsIntegrationEnabled script.
Verify whether the data you're getting is As value (simple value) or From Previous Tasks (from context).
When working with two lists, if you need multiple items from list A that are also in list B, use the in filter instead of the equals or contains filters.
The correct method is to get the IP addresses that are in our list of inputs. The incorrect method is to get the IP addresses where the addresses contain our list. This is incorrect because they don't contain the list; they contain individual items from it.
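The filter semantics above can be sketched in Python (the filter itself is configured in the playbook UI; the lists here are illustrative):

```python
input_ips = ["10.0.0.1", "203.0.113.5"]          # list A: playbook inputs
context_ips = ["203.0.113.5", "198.51.100.7"]    # list B: IPs found in context

# "in" semantics: keep the context IPs that appear in our list of inputs
matched = [ip for ip in context_ips if ip in input_ips]
assert matched == ["203.0.113.5"]

# "contains" on strings is substring matching; an individual address
# never contains the whole list, so nothing matches
wrong = [ip for ip in context_ips if str(input_ips) in ip]
assert wrong == []
```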
Differentiate between checking if a specific element exists versus checking if an element equals something. This is a common mistake that can lead to tests working in some situations, but not all.
The correct method is to check if any object where the NetworkType is External exists. The incorrect method is to check if the NetworkType of the IP object is External. This is incorrect because the IP object may contain multiple IPs, some internal and some external.
Run one or more tasks based on the object types found, rather than running either one task or the other based on the type of one object. The correct method is to check the existence of both object types and run tasks for the types found.
The incorrect method is to check if there is either an internal or an external IP, and take only one path even if both types exist.
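In Python terms (illustrative; in the playbook these are conditional tasks and branches):

```python
ips = [
    {"Address": "10.0.0.1", "NetworkType": "Internal"},
    {"Address": "203.0.113.5", "NetworkType": "External"},
]
internal = [ip for ip in ips if ip["NetworkType"] == "Internal"]
external = [ip for ip in ips if ip["NetworkType"] == "External"]

# Correct: test each object type independently; both branches may run
branches = []
if internal:
    branches.append("internal-branch")
if external:
    branches.append("external-branch")
assert branches == ["internal-branch", "external-branch"]

# Incorrect: an if/else takes exactly one path, so the external IPs
# are silently skipped whenever any internal IP is present
taken = "internal-branch" if internal else "external-branch"
assert taken == "internal-branch"
```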
Use playbook loops only where needed. Loops are needed when certain actions have to be performed on specific pairs of data.
Example of incorrect usage:
A user has a playbook that creates relationships for multiple indicator types. All indicator types and malware families are in their ${inputs.Domain} and ${inputs.MFam} playbook inputs.
The user wrongly assumes that when creating the relationships, the correct malware families in ${inputs.MFam} correspond to the correct domains in ${inputs.Domain}.
Instead, the user should either use filters and transformers, or loop through each separate indicator, to verify they're creating the correct relationships.
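The pairing assumption can be shown with two illustrative lists (the input names mirror the example above; the pairing logic in the actual playbook is a loop or filter, not code):

```python
domains = ["a.example", "b.example"]     # e.g. ${inputs.Domain}
families = ["FamA", "FamB"]              # e.g. ${inputs.MFam}

# Wrong assumption: handing both lists to one task does not guarantee
# element-wise pairing; a cross join creates 4 relationships, not 2
cross = [(d, f) for d in domains for f in families]
assert len(cross) == 4

# Looping over explicit pairs keeps each domain with its own family
pairs = list(zip(domains, families))
assert pairs == [("a.example", "FamA"), ("b.example", "FamB")]
```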
Be aware of potential race conditions. When you want to add multiple values to the same key, do not use multiple tasks that run Set, SetAndHandleEmpty, or any other automation that sets data in context at the same time, because a race condition can cause your data to be overwritten. This is especially problematic when trying to append data. Instead, run the tasks one after the other, or use an automation that appends the data instead of setting a new value to the key.