Best practices for building and working with playbooks.
The following guidelines are best practices for building playbooks as well as optimizing playbook design and performance. Whether you are just starting or are creating advanced workflows, we recommend reviewing these recommendations carefully so your playbooks have a clear logical flow and run correctly and efficiently.
Best Practices for Building Your Playbook
The Use Case Builder content pack helps you streamline the use case design process, including building your playbook. It contains tools to help you measure and track use cases through your automation journey and quickly autogenerate OOTB playbooks and custom workflows.
For a detailed example of designing and building a use case, watch this video series.
Describe tasks clearly. Tasks should be clear to someone not familiar with the playbook workflow. This applies to task names, task descriptions, and the playbook description. When naming tasks, the guideline should be that users can understand what the playbook does by reading the task names, without having to open individual tasks to view the details.
Good task name: Check if the IP is Private
Poor task name: IP Check
Group related input fields.
Grouping inputs organizes the input fields and provides clarity and context to understand which inputs are relevant to which playbook flow.
Use camel case for input names.
Use the CamelCase convention for inputs, keeping in mind that inherently capitalized terms should be kept in upper case. For example, the Entity ID input should be named EntityID, and the MITRE Technique input should be named MITRETechnique.
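The convention above can be sketched as a small helper. This is an illustration only, not an XSOAR utility; the acronym list is an assumption for the sketch.

```python
# Hypothetical helper illustrating the CamelCase input-naming convention.
# ACRONYMS is an assumed list of inherently capitalized terms to preserve.
ACRONYMS = {"MITRE", "IP", "URL", "ID", "DNS"}

def to_camel_case(name: str) -> str:
    """Join the words of an input name, capitalizing each word,
    but keep known acronyms fully upper case."""
    words = name.split()
    return "".join(
        w if w.upper() in ACRONYMS and w.isupper() else w.capitalize()
        for w in words
    )
```

For example, `to_camel_case("Entity ID")` yields `EntityID`, and `to_camel_case("MITRE Technique")` yields `MITRETechnique`.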
Define outputs properly.
When configuring playbook outputs, configure sub-keys as much as possible; do not limit configuration to only the root keys. For example, instead of outputting only the File root key, output sub-keys such as File.Name, File.Size, and so on. This helps when viewing the outputs of the playbook within another playbook.
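As a sketch of what sub-key outputs might look like in a playbook definition (the descriptions here are illustrative assumptions):

```yaml
# Sketch: expose sub-keys, not just the File root key.
outputs:
- contextPath: File.Name
  description: The name of the file.
  type: string
- contextPath: File.Size
  description: The file size in bytes.
  type: number
```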
Avoid using Cortex XSOAR Transform Language (DT) in the Get input field definition.
If you need to use DT for complex processing and you think a new filter or transformer would provide a better alternative to your DT solution, you can request the feature or contribute it. Consider using DT only if it can drastically simplify the playbook or improve performance.
In each task, make sure appropriate logical operations are performed on input data, as described in the following guidelines.
Avoid race conditions.
Be aware of potential race conditions. When you want to add multiple values to the same key, do not use multiple tasks that run SetAndHandleEmpty, or any other script that sets data in context, at the same time, because a race condition can cause your data to be overwritten. This is especially problematic when trying to append data. Instead, run the tasks one after the other, or use scripts that append the data instead of setting a new value to the key.
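The overwrite problem can be shown with a minimal simulation (plain Python, not XSOAR code; the key name is hypothetical):

```python
# Minimal simulation of the context "set" race.
context = {}

def set_key(key, value):
    # Each "Set"-style task overwrites the whole key.
    context[key] = value

def append_key(key, value):
    # Appending preserves values written by earlier tasks.
    context.setdefault(key, []).append(value)

# Two tasks that both "set" the key: the second overwrites the first,
# so ["1.1.1.1"] is lost.
set_key("MaliciousIPs", ["1.1.1.1"])
set_key("MaliciousIPs", ["2.2.2.2"])

# Running the tasks one after the other and appending keeps both values.
context.clear()
append_key("MaliciousIPs", "1.1.1.1")
append_key("MaliciousIPs", "2.2.2.2")
```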
Determine where inputs are coming from.
Verify whether the data you're getting is As value (a simple value) or From Previous Tasks (from context).
Filter your inputs correctly so the task runs efficiently.
Tasks take their inputs from the context, not directly from the previous tasks (even if the input says From Previous Tasks). For an example of a task not receiving the right context, consider this bug (since fixed) in a playbook:
The playbook begins by classifying the email addresses as internal or external. It then checks the reputation of external email addresses if any were found, and we expect that branch to run only if external addresses are found. However, no filter was applied to the task that gets the reputation. This means that if both internal and external email addresses are found, both branches of the playbook (internal and external) proceed, and the reputation task runs without an applied filter, effectively taking all the email addresses in the inputs. The correct task input should have filtered the context to external addresses only.
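The fix can be sketched as filtering before the reputation step (the domain and the `check_reputation` placeholder are hypothetical):

```python
# Sketch: filter to external addresses BEFORE the reputation task,
# instead of reading all email addresses from context.
INTERNAL_DOMAIN = "example.com"  # hypothetical internal domain

emails = ["alice@example.com", "mallory@evil.test", "bob@example.com"]

# Equivalent of applying a filter on the task input.
external = [e for e in emails if not e.endswith("@" + INTERNAL_DOMAIN)]

def check_reputation(addresses):
    # Placeholder for the reputation task; returns what it would query.
    return addresses

# Only external addresses reach the reputation check.
queried = check_reputation(external)
```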
Select Ignore case when checking input values.
Use the Ignore case option where possible, especially when checking Boolean playbook inputs such as True, which users may end up configuring as true with a lowercase t.
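The effect of the ignore-case comparison is equivalent to this sketch:

```python
# Sketch of what an ignore-case Boolean check does.
def is_true(value) -> bool:
    # Equivalent of comparing the input to "True" with Ignore case selected.
    return str(value).strip().lower() == "true"
```

With this check, `True`, `true`, and even `" TRUE "` all pass, while `False` does not.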
When working with two lists, if you need the items from list A that are also in list B, use the in filter rather than the contains filter.
Correct: Get the IP addresses that are in the list of inputs.
Incorrect: Get the IP addresses where the addresses contain the list. This is incorrect because the addresses don't contain the list; they contain individual items from it.
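The in logic is equivalent to this sketch (the list contents are hypothetical):

```python
# Keep the items from list A (observed IPs) that are also in list B.
allow_list = ["10.0.0.1", "10.0.0.2"]
observed_ips = ["10.0.0.2", "8.8.8.8"]

# Correct: each observed IP is checked for membership IN the list.
matches = [ip for ip in observed_ips if ip in allow_list]

# Incorrect reasoning: asking whether an address "contains" the whole list
# can never match, because each element holds one IP, not the list itself.
```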
Differentiate between checking if a specific element exists versus checking if an element equals something. This is a common mistake that can lead to tests working in some situations, but not all.
Correct: Check if there is any object where the NetworkType is External.
Incorrect: Check if the NetworkType of the IP object is External. This is incorrect because the IP object may contain multiple IPs, some internal and some external.
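The difference is equivalent to an any-element check versus a single-object comparison (sample data is hypothetical):

```python
# The context may hold MULTIPLE IP objects, some internal and some external.
ips = [
    {"Address": "10.0.0.5", "NetworkType": "Internal"},
    {"Address": "203.0.113.7", "NetworkType": "External"},
]

# Correct: does ANY object have NetworkType External?
has_external = any(ip["NetworkType"] == "External" for ip in ips)

# Incorrect: treating the list as if it were one object and expecting a
# single answer; here the "equals" check fails because not every IP matches.
all_external = all(ip["NetworkType"] == "External" for ip in ips)
```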
Differentiate between running one or more tasks based on the object types found versus running either one task or the other based on the type of one object.
Correct: Check the existence of both object types and run tasks for the types found.
Incorrect: Check if there is either an internal or an external IP, and take only one path even if both types exist.
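The two approaches can be sketched as follows (sample data is hypothetical):

```python
ips = [
    {"Address": "10.0.0.5", "NetworkType": "Internal"},
    {"Address": "203.0.113.7", "NetworkType": "External"},
]

# Correct: find every type present and run the matching branch for EACH.
types_found = {ip["NetworkType"] for ip in ips}
branches_run = sorted(types_found)  # both branches run here

# Incorrect: an if/else that takes only one path even when both types exist,
# so the Internal branch is silently skipped.
single_branch = "External" if "External" in types_found else "Internal"
```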
Use playbook loops only where needed. Loops are needed when certain actions have to be performed on specific pairs of data.
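The kind of paired data that genuinely needs a loop can be sketched like this (the names and the per-pair action are hypothetical):

```python
# A loop is needed when an action must run on specific PAIRS of data:
# here, each username together with its own host.
usernames = ["alice", "bob"]
hosts = ["host-a", "host-b"]

def disable_user_on_host(user, host):
    # Placeholder for the action one loop iteration would run.
    return f"disabled {user} on {host}"

# Each iteration of the loop handles exactly one (user, host) pair.
results = [disable_user_on_host(u, h) for u, h in zip(usernames, hosts)]
```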
Either use filters and transformers, or loop through each separate indicator, to verify that the correct relationships are created. For example, a user has a playbook that creates relationships for multiple indicator types, with all the indicator types and malware families stored in context. The user wrongly assumes that when the relationships are created, each indicator is matched with the correct malware families, which is not guaranteed unless the inputs are filtered or looped over per indicator.
Use the IsIntegrationEnabled script in your playbook to make sure any integrations you need to run are enabled.
Best Practices for Optimizing Playbook Design and Performance
In order to minimize your incident response time and make sure the system runs optimally, it's important to follow design and performance guidelines.
When returning to work on a playbook after a break, verify you’re working on the latest version. Reattach the playbook if it’s detached, and update it to ensure you’re not editing an older version and introducing regressions. If you don’t want to reattach your playbook, or you’re still working on your custom version, we recommend reviewing the release notes to see what changes were made to the out-of-the-box playbook and copying those changes to your version.
Update scripts and integration commands in playbook tasks to their most current version. Scripts that have updates or are deprecated are designated by a yellow triangle.
If a playbook has more than thirty tasks, consider breaking the tasks into multiple sub-playbooks. Sub-playbooks can be reused, managed easily when upgrading, and they make it easier to follow the main playbook.
Playbooks that are triggered by an incident/job are considered a parent playbook. Sub-playbooks are playbooks that are used from within a parent playbook, as building blocks. The parent playbook is the main playbook that runs on the investigation, and each sub-playbook has a specific goal/responsibility.
Parent playbooks usually have a closeInvestigation task at the end because they are the main playbook for that incident.
Parent playbooks usually contain inputs that are passed down to sub-playbooks. Certain True/False flags may come from the parent playbook inputs.
For production playbooks, remove playbook tasks that are not connected to the playbook workflow.
Run playbooks in quiet mode to reduce the incident size and execute playbooks faster. For playbooks running in jobs, indicator enrichment should be done in quiet mode.
Playbook tasks by default try to extract all indicator types from the task Results. (The Results entry is the information printed to the War Room, not the outputs of the task). Extracting all indicator types can slow down the playbook, so it is important to only extract indicators as needed. For example, for the ParseEmailFilesV2 script which prints email information to the War Room, extraction should be enabled in order to extract email addresses, URLs, and other indicators. However, if your task runs the Sleep script, there is no point in extracting indicators.
Set the Indicator Extraction mode to None in the playbook task Advanced tab.
Consider the following:
Do I need to do this action in multiple tasks?
Can these tasks run in parallel instead of synchronously?
Where applicable, am I setting realistic timeouts, search windows, and intervals?
Can I consolidate the API calls into one call? If not, can an integration enhancement solve this by accepting arrays as input instead of running multiple times for each input?
Am I unnecessarily storing the same data twice? Do I have the data I need already stored?
Where applicable, can I run this playbook without a loop?
What extractions are running in my incident?
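The call-consolidation question above can be sketched as follows. The `enrich_one`/`enrich_many` functions are hypothetical stand-ins for an integration command that accepts a single value versus an array:

```python
# Sketch: consolidate per-item API calls into one array-accepting call.
calls_made = []  # records one entry per simulated API call

def enrich_one(ip):
    calls_made.append([ip])       # one API call per IP
    return {ip: "clean"}

def enrich_many(ips):
    calls_made.append(list(ips))  # one API call for the whole batch
    return {ip: "clean" for ip in ips}

ips = ["1.1.1.1", "2.2.2.2", "3.3.3.3"]

# Instead of len(ips) separate calls...
for ip in ips:
    enrich_one(ip)

# ...a single batched call does the same work with one round trip.
calls_made.clear()
result = enrich_many(ips)
```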
Change the indicator extraction mode to inline in the playbook task only if your task requires extracted indicators. Use this mode carefully because it can affect performance. In addition, it is important to customize and limit the indicators extracted from incident fields of the incident type you are ingesting, in the incident type's Indicator Extraction Rules settings.
When creating new incident fields, double-check whether they need to be searched, and mark the relevant Searchable checkbox accordingly; fields that do not need to be searched should not be marked searchable. An example of a field that should be searchable: Is Admin.