Admin guide
3.8. Post-installation
3.8.1. Add the Cortex XSOAR license
3.8.2. HTTPS with a signed certificate
3.8.3. Use a signed certificate instead of SSL
3.8.4. Optimize performance from the textual UI
3.8.4.1. Manage nodes in a cluster
3.8.4.2. Scale up hardware resources
5. Engines
5.1. What is an engine?
5.3.3. Podman
5.3.3.1. Change container storage directory
5.3.3.2. Install Podman
5.3.3.3. Migrate From Docker to Podman
5.3.3.4. Troubleshoot Podman
8. Marketplace
8.1. Cortex Marketplace
9. Integrations
9.1. Integration use cases
11. Playbooks
11.1. What is a playbook?
11.6. Scripts
11.6.1. Create a script
12. Lists
12.1. What is a list?
13. Jobs
13.1. Manage jobs
14. SLAs
14.1. SLAs in Cortex XSOAR
14.7. Use SLA and Timer field commands manually in the CLI
15.2. Reports
15.2.1. Manage reports
15.2.1.1. Report scheduling examples
15.3. Widgets
15.3.1. Widget customization
15.3.2. Create a widget using the widget builder
15.3.2.1. Create a widget using the widget builder examples
18. Troubleshoot
18.1. View system status in the System Diagnostics page
18.2. View service limit errors and warnings in the Guard Rails page
19. Reference
19.1. Cortex XSOAR concepts
20. Multi-tenant
20.1. What is Cortex XSOAR multi-tenant?
View information about how to get started with Cortex XSOAR On-prem such as architecture, roles and responsibilities, and licenses.
Before diving in, understand Cortex XSOAR functionality and how it integrates with your needs. Review the available licenses, service limits, and
other key details to optimize your Cortex XSOAR experience from the start.
Cortex XSOAR is the industry’s first extended security orchestration and automation platform that simplifies security operations by unifying
automation, case management, real-time collaboration, and threat intel management.
Cortex XSOAR ingests aggregated alerts and indicators of compromise (IOCs) from detection sources, such as security information and event
management (SIEM) solutions, network security tools, threat intelligence feeds, and mailboxes, before executing automatable, process-driven
playbooks to enrich and respond to these incidents. These playbooks coordinate across technologies, security teams, and external users for
centralized data visibility and action.
With a Threat Intel Management license, Cortex XSOAR provides a Threat Intelligence Platform with actionable threat data from Unit 42. You can
identify and discover new Malware families or campaigns and create and disseminate strategic intelligence reports.
For existing Cortex users, XSOAR is easily integrated into other Cortex solutions and is delivered from the same platform.
Automate incident response workflows and repetitive tasks to free up analysts to focus on the most critical incidents with Cortex XSOAR.
Use predefined playbooks or easily customize your own to automate SOC use cases such as indicator enrichment, alert deduplication,
phishing response, ransomware response, threat intelligence feed management, malware investigation, and even IT operations such as
employee onboarding and offboarding.
Cortex XSOAR supports future growth, with rapid deployment to accelerate ROI. Fully integrated into the Cortex platform, Cortex XSOAR is
delivered through a unified user interface for ease of use and consistency in workflow management.
When complex, real-time investigations require analyst intervention, ensure analysts have quick access to investigation data. Cortex
XSOAR accelerates incident response by unifying incident and indicator data from multiple sources on a single easy-to-search platform.
Collaborative investigation features provide a powerful toolkit to help analysts assist each other, run real-time security commands, and learn
from each incident with auto-documentation of all actions. An ML-driven assistant learns from actions taken in the platform and offers
guidance on analyst assignments and commands to execute actions.
Unify aggregation, scoring, and sharing threat intelligence with playbook-driven automation with native threat intelligence management. The
built-in, high-fidelity threat intelligence can be boosted by layering additional third-party threat intel to better reveal and prioritize critical
threats.
Cortex XSOAR ingests alerts from third-party products and threat intel feeds, and by installing content packs, you can automate the investigation and response process.
The following diagram describes the high-level architecture for Cortex XSOAR:
Cortex XSOAR installation is implemented by your IT team or Cortex XSOAR administrators. Cortex XSOAR uses the following:
Cortex XSOAR is provided as a Kubernetes cluster: a set of nodes (VMs) that runs containerized applications packaging Cortex XSOAR with its dependencies and some necessary services. You decide how many nodes (VMs) to include in the cluster when running the administrative tool, choosing between a standalone environment (one node) or a multi-node cluster (three nodes).
Playbooks are executed on dedicated and isolated workers and workloads do not share compute resources.
Accelerate incident response: Replacing low-level manual tasks with automations, security automation can shave off large chunks from
incident response times while improving accuracy and analyst satisfaction.
Standardize and scale processes: Through stepwise, replicable workflows, security automation can help standardize incident enrichment
and response processes that increase the baseline quality of response and is primed for scale.
Unify security infrastructures: A SOAR platform like Cortex XSOAR can act as a connective fabric that runs through hitherto disparate
security products, providing analysts with a central console from which to action incident response.
Increase analyst productivity: Since low-level tasks are automated, and processes are standardized, analysts can spend their time in
more important decision-making and charting future security improvements rather than getting mired in grunt work.
Leverage existing investments: By automating repeatable actions and minimizing console switching, security orchestration enables teams
to coordinate among multiple products easily and extract more value out of existing security investments.
Streamline incident handling: By applying automation to incident ticket management via integrations with key ITSM vendors such as
ServiceNow, Jira, and Remedy, as well as communication tools such as Slack, security teams can speed up incident handling and closure.
Incidents can also be distributed automatically to the respective stakeholders based on predefined incident types.
Improve overall security posture: The sum of all aforementioned benefits is an overall improvement of the organization’s security posture
and a corresponding reduction in security and business risk.
The following examples demonstrate how to automate repetitive tasks and streamline your security incident response processes for maximum
efficiency. These are tried and tested automation use cases that have been leveraged by our own Palo Alto Networks SOC, ITOps, and our
customers to gain operational efficiencies and scale.
Phishing response
Phishing emails are pernicious and one of the most frequent, easily executable, and harmful security attacks organizations still face today.
Responding to a phishing email involves switching between multiple screens to coordinate a response, including responding to end users. These
tasks can easily take around 45 minutes of your time per incident.
In Cortex XSOAR, phishing playbooks can help you execute repeatable tasks at machine speed, identify false positives, and prime your operations
for standardized phishing responses at scale. More importantly, the quick identification and resolution of false positives gives you more time to
deal with genuine phishing attacks and prevents them from slipping through the cracks. Cortex XSOAR has machine learning intelligence built in,
allowing you to “train” the phishing engine to recognize future phishing attacks.
Engage
Cortex XSOAR can ingest suspected phishing emails as incidents from various detection sources such as SIEMs, EDRs, email security, or
phishing services. If you aggregate all suspected phishing emails in a common mailbox, these emails can be ingested as incidents via a mail
listener integration.
When the email is ingested, a playbook is triggered, going through the steps to automate enrichment and response. To keep end users updated,
the playbook sends an automated email to the affected user and lets them know the suspected phishing email is being investigated.
In the triage process, the playbook can perform extraction and enrichment of indicators of compromise (IoCs).
By investigating email attributes, such as the subject, sender address, and attachments, the playbook assigns incident severity by cross-referencing these details
against external threat databases. Following this, the playbook extracts IoCs from the email and checks for any reputational red flags from threat
intelligence tools that your team uses.
When enrichment is finished, the playbook checks if any malicious indicators are found. Based on this check, different response branches can
arise.
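To make the triage flow concrete, here is a minimal sketch of the extraction and enrichment step written as a Cortex XSOAR automation in Python. The extractIndicators and enrichIndicators commands ship with standard content, but the result parsing is illustrative and exact shapes vary by content version.

```python
# Minimal triage sketch: extract IoCs from the ingested phishing email and
# enrich them. In the platform the `demisto` object is injected; demistomock
# is the local development stub from the content repo.
import demistomock as demisto

incident = demisto.incident()
email_body = incident.get('details', '')

# Extract IoCs (URLs, IPs, domains, hashes) from the suspected phishing email.
extracted = demisto.executeCommand('extractIndicators', {'text': email_body})
indicators = extracted[0].get('Contents', {}) if extracted else {}

# Enrich each extracted indicator against the configured reputation sources.
for ioc_type, values in indicators.items():
    values = values if isinstance(values, list) else [values]
    for value in values:
        demisto.executeCommand('enrichIndicators', {'indicatorsValues': value})

demisto.results(f'Extracted and enriched indicators: {indicators}')
```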
Respond
Different playbook branches execute depending on whether malicious indicators were detected in the suspected phishing email.
If malicious indicators are detected, the playbook sends an email to the affected user with further instructions. The playbook also scans all
organizational mailboxes/endpoints to identify other instances of that email and deletes all instances to avoid further damage. Finally, the playbook
adds the malicious IoCs to block lists/watchlists on the SOC’s other tools. If no malicious indicators are detected, there are still precautions to be
taken before confirming that the email is harmless. The playbook checks if there are any attachments in the email that can be sent for detonation
in a sandbox.
Threat intel analyses are then presented in the incident War Room for the analyst to do a final check. Once the analyst is satisfied that the email isn’t
malicious, the playbook sends an email to the affected user apprising them of the false alarm. The incident ticket is marked closed.
You can easily eliminate 10 or more steps your security team would otherwise have to perform manually, saving hours when responding to phishing alerts.
Determining if alerts for unknown activity from your endpoint security tools are malicious often involves coordinating between multiple security
tools. It’s a cross-referencing nightmare with multiple consoles open simultaneously and valuable time spent performing repetitive data collection
tasks. Decreasing the investigation and response time means less dwell time for malicious activity to wreak havoc in your network.
Automation playbooks can unify processes across SIEMs and endpoint tools in a single workflow, performing repetitive steps before bringing
analysts in for important decision-making and investigative activities.
Query
An incoming endpoint security alert triggers a series of playbooks that automatically query for evidence of malice, such as:
Is there evidence of persistence? Did the process create any scheduled jobs? Did it write to the registry? Was Autorun updated?
The findings are presented in the incident for an analyst review, eliminating the need to manually collect and piece the evidence together.
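A sketch of how such a playbook task might fan out those persistence checks to an EDR integration. The EDR command names below are hypothetical placeholders; substitute the commands your EDR integration actually exposes.

```python
# Sketch of the "query" stage: run persistence checks against an EDR and
# return the findings to the incident as a single note entry.
import demistomock as demisto  # local development stub; injected in-platform

PERSISTENCE_CHECKS = {
    'Scheduled jobs created': 'edr-get-scheduled-tasks',   # hypothetical command
    'Registry writes': 'edr-get-registry-changes',         # hypothetical command
    'Autorun modifications': 'edr-get-autoruns',           # hypothetical command
}

def collect_persistence_evidence(endpoint_id: str) -> dict:
    findings = {}
    for label, command in PERSISTENCE_CHECKS.items():
        result = demisto.executeCommand(command, {'endpoint_id': endpoint_id})
        findings[label] = result[0].get('Contents') if result else None
    return findings

evidence = collect_persistence_evidence(demisto.args().get('endpoint_id', ''))
demisto.results({'Type': 1, 'ContentsFormat': 'json', 'Contents': evidence})
```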
Triage
Detonating suspicious files in sandboxes for malware analysis is an ever-present and important investigative step during incident response.
However, it’s taxing for security analysts to coordinate across consoles while executing this repetitive task because malware analysis tools are
isolated from other security products. Transferring results from one console to another for documentation is time-consuming and increases the
chances of errors.
In this scenario, playbooks can be run concurrently to automate the file detonation process as an isolated workflow or with other enrichment
activities. Playbooks can parse through the results of the sandbox detonation and be configured to run specific queries against the EDR tool. As
playbooks document the result of all actions on a central console, the need for manual post-incident documentation is also eliminated.
Another aspect of malware analysis involves gathering forensic data, such as all the processes running on a machine, which can be automated.
During an investigation, it is critical to understand what is happening on the endpoint when the alert is detected. Sometimes it can be minutes or
even hours before an analyst looks at a detected alert, at which point the state of the endpoint is likely different, which makes the re-creation of
what happened more challenging. These playbooks can communicate continuously with the same endpoint tools to run queries on processes,
network connections, browser history, etc. to track incident status.
Respond
If the file is malicious, the playbook updates relevant watchlists/block lists with that information. From here, the playbook can branch into other
actions such as quarantining infected endpoints, killing malicious processes, removing infected files, opening tickets, and reconciling data from
third-party threat feeds.
After the queries have been run, the playbook updates the endpoint tool database with new indicator information, so repeat offenses are
eliminated.
For more information, see the Malware Investigation and Response content pack.
Zero-day threats and ransomware breaches are constantly in the news, such as SolarWinds SUNBURST, HAFNIUM Microsoft zero-day exploit,
Nobelium threat actor, Kaseya supply chain ransomware attack, and Log4j vulnerability.
Every time a critical vulnerability is reported, it’s an all-hands-on-deck effort to ensure that your organization is not exposed to the potential exploits
of the vulnerability. Your executive team likely has heard it in the news and needs an assessment of exposure for the organization. Speed is
essential if potential malicious activity is detected.
Automation can help you quickly collect and hunt for indicators and perform rapid response actions upon finding IoCs.
In the case of a breach alert, the process of retrieving and discovering associated IoCs is as repetitive as it is important. Your analysts risk getting
mired in this work while the attack continues to manifest. Isolated security tools result in a struggle to reconcile threat data across platforms to get
an overall understanding of malicious activity and spread.
By running a rapid breach response playbook at the outset of incident response, your team can query endpoints, firewalls, and other incidents in seconds, avoiding
wasted time that can be used towards locking down defenses.
Respond
The playbook executes initial response actions based on indicator malice. For example, the playbook can block indicators, isolate, or quarantine
infected hosts, or feed malicious indicators back into threat intelligence databases and tool watchlists to avoid future attacks using the same
indicators.
We provide specific rapid breach response playbooks for high-profile breaches to help you speed up your investigation efforts. For more
information, see the Rapid Breach Response content pack.
Remote work has become the norm, and your business is increasingly moving to the cloud, which has increased the threat exposure and attack
surface your team has to account for.
Automation can play a role in many areas, including aiding investigations into unsuccessful login attempts and other access violations, monitoring
the health of VPNs, and updating dynamic allow/deny IP domain lists to ensure business continuity.
Despite the increased sophistication of security measures, it’s possible for attackers to brute-force their way into accounts by obtaining the email
address and resetting the password. This behavior is difficult to preempt, as there are high chances of it being innocuous (a genuine employee
resetting their password). Constant communication between you and end users to separate the anomalies from the usual is critical.
At user-defined triggers (such as five failed login attempts), a playbook can execute and verify whether the case is genuine or malicious.
The playbook sends an automated email to the affected user, notifying them of the five failed login attempts and asking them to confirm that the
behavior was theirs. The email requests the user to reply with “Yes/No,” and spells out the ensuing action for each response.
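As a sketch, the notification step could look like the following automation. In practice this is usually a playbook communication (Ask) task; the send-mail command is provided by whichever mail sender integration you configured, and the argument names shown are the common ones.

```python
# Sketch of the notification step: email the affected user and ask them to
# confirm whether the failed logins were theirs.
import demistomock as demisto  # local development stub; injected in-platform

def notify_user(user_email: str, failed_attempts: int = 5) -> None:
    body = (
        f'We detected {failed_attempts} failed login attempts on your account. '
        'Reply "Yes" if this was you, or "No" if it was not. '
        'A "No" reply triggers an account takeover investigation.'
    )
    demisto.executeCommand('send-mail', {
        'to': user_email,
        'subject': 'Action required: failed login attempts on your account',
        'body': body,
    })

notify_user(demisto.args().get('user_email', ''))
```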
Triage
You can analyze the replies to automated emails and execute different playbook branches.
Respond
If the end-user behavior is genuine, the playbook resets the password on Active Directory and sends a new email to the affected user with revised
login credentials.
If the end users confirm they did not make the failed login attempts, the playbook sends a new email notifying them of the attempted account takeover.
The playbook can also execute investigative actions, such as extracting the IP/location where the failed attempts were made and quarantining the
affected endpoint.
Logins from unusual locations
With the ability to work from anywhere, it’s difficult to spot a malicious access attempt from a genuine case of employee access from multiple
geographical locations. With increased cloud adoption, multiple sources of geographical presence exist to verify, heaping more work on your
security team and presenting a window of opportunity to attackers.
To combat “impossible travel” (simultaneous or near-simultaneous logins from distant locations), a modified failed user login playbook can enrich IP information by checking the IP address reputations against threat intelligence sources, calculate the distance between the IPs, and generate a location map and login time duration. When the analyst decides the activity is malicious, the playbook executes
a series of containment steps, such as disabling user accounts, blocking malicious IPs at the firewall, and notifying IT Support of the actions taken.
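The distance calculation itself is straightforward. A minimal sketch, assuming you already have geolocation for both login IPs from your enrichment sources:

```python
# "Impossible travel" check: given the geolocation of two login IPs and the
# time between logins, flag speeds no human could plausibly achieve.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth, in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_impossible_travel(loc_a, loc_b, hours_between, max_kmh=900):
    # 900 km/h roughly matches commercial flight speed (an assumed threshold).
    distance = haversine_km(*loc_a, *loc_b)
    required_speed = distance / max(hours_between, 0.01)
    return required_speed > max_kmh

# Example: London and Sydney logins one hour apart are flagged.
print(is_impossible_travel((51.5, -0.1), (-33.9, 151.2), hours_between=1))  # True
```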
Multifactor authentication (MFA) is often required when end users connect from untrusted or unknown IPs. Trusted network IPs are defined in
identity and access management (IAM) systems like Okta, so users connecting from a trusted network such as HQ or branch offices do not need
MFA, but if they connect from a coffee shop, they would be required to authenticate with MFA.
However, in SASE solutions such as Prisma Access, due to auto-scaling or provisioning of new locations, the list of assigned IPs for an enterprise
often changes. So, if these egress IPs are not listed or added to their IAM, any user connecting to these new IPs to access their software-as-a-
service (SaaS) applications, even if they are on a trusted network, will be required to use MFA. This can result in an inconsistent end-user
experience.
With the integration between Cortex XSOAR and Prisma Access, an automated playbook can “listen” to auto-scaling and new provisioning events,
immediately pick up the new list of Prisma Access egress IPs, and automatically update the IAM. This provides a seamless login experience for
users connecting from a trusted network.
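Conceptually, the sync looks like the following sketch. The endpoints and payload shapes are illustrative only, not the actual product APIs; in Cortex XSOAR this logic is handled by the Prisma Access and IAM integrations.

```python
# Conceptual egress-IP sync: pull the current Prisma Access egress IP list
# and push it into an IAM trusted-network zone. URLs and payloads below are
# placeholders for illustration.
import requests

def sync_egress_ips(prisma_api: str, iam_zone_api: str, token: str) -> None:
    headers = {'Authorization': f'Bearer {token}'}

    # 1. Fetch the current list of egress IPs (hypothetical endpoint).
    egress_ips = requests.get(f'{prisma_api}/egress-ips', headers=headers).json()

    # 2. Update the IAM trusted-network zone with the fresh list
    #    (hypothetical endpoint and payload).
    requests.put(
        f'{iam_zone_api}/trusted-network',
        headers=headers,
        json={'gateways': egress_ips},
    )
```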
On a security team’s busy day, there is no time to proactively monitor for potential connectivity downtime as the staff is usually busy firefighting
and triaging critical incidents. Among other things, this makes it difficult to keep track of the health status of all VPN tunnels to ensure 100% uptime
for end users.
In this case, an automated VPN tunnel monitoring playbook can be scheduled to poll Prisma Access connection statuses regularly and send a
Slack alert to the security or ITOps team if a tunnel is down.
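A minimal sketch of that monitoring logic, assuming your Prisma Access integration returns tunnel statuses as a name-to-state mapping; Slack incoming webhooks accept a simple JSON POST.

```python
# Scheduled tunnel health check: report any tunnel that is down to Slack.
import requests

SLACK_WEBHOOK = 'https://hooks.slack.com/services/XXX/YYY/ZZZ'  # placeholder

def check_tunnels(statuses: dict) -> None:
    # statuses: tunnel name -> 'up' / 'down', shape assumed for illustration.
    down = [name for name, state in statuses.items() if state != 'up']
    if down:
        requests.post(SLACK_WEBHOOK, json={
            'text': f':rotating_light: VPN tunnels down: {", ".join(down)}'
        })

check_tunnels({'branch-nyc': 'up', 'branch-ldn': 'down'})
```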
With the new normal of remote work, these automation use cases can help streamline operations and help your ITOps and security teams scale to
address remote access security incidents and keep track of remote activity.
For more information, see the Prisma Access - Connection Health Check playbook in the Palo Alto Networks - Strata Cloud Manager content pack.
As you ingest alerts, you can automatically enrich them with the latest threat intel from your feeds. This gives you context for how external and
emerging threats are impacting your environment and also helps you quickly home in on critical threats.
The indicators collected from many different threat feeds need to be aggregated, normalized, scored, and prioritized before they can be pushed to
enforcement points. A threat intel platform can automate these feed management functions, ensuring that your external dynamic lists (EDLs) are
always up to date per the latest threats.
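A minimal sketch of those feed management functions, with deliberately simplistic normalization and scoring:

```python
# Merge indicators from several feeds, deduplicate, and keep only those whose
# aggregated score clears a threshold before publishing to the EDL.
def build_edl(feeds: list, min_score: int = 70) -> list:
    merged = {}
    for feed in feeds:
        for item in feed:
            value = item['value'].strip().lower()             # normalize
            score = item.get('score', 0)
            merged[value] = max(merged.get(value, 0), score)  # dedupe, keep max
    return sorted(v for v, s in merged.items() if s >= min_score)

feeds = [
    [{'value': 'Evil.example.com', 'score': 90}, {'value': '10.0.0.9', 'score': 40}],
    [{'value': 'evil.example.com', 'score': 60}],
]
print(build_edl(feeds))  # ['evil.example.com']
```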
As you investigate incidents, you need threat intel context on associated indicators. Curated threat intelligence, such as those from Unit 42 Intel
threat research that comes packaged with the Threat Intel Management (TIM) module, helps you automate indicator enrichment, giving your analysts immediate context during investigations.
Generating Weekly OSINT (Open Source Intelligence) and Other Threat Reports
Your threat intel team produces and disseminates threat intelligence reports to various business units/stakeholders to keep them up to date on the
latest threats targeting their industry. Most intelligence is still shared via unstructured formats such as email and blogs, so your threat analysts may
go through hours of manual work aggregating and digging for known malware families, curated news, and industry-specific threats, as well as
providing analyses on why each threat is relevant to the business. Cortex XSOAR TIM provides automated workflows and a central repository for
intelligence analysts to create, collaborate, and share curated intelligence reports with stakeholders.
Threat intelligence teams need to understand the details of attacks and how their organizations may be vulnerable. The intel team builds profiles
of threat actors, identifying if there are related attacks and which techniques and tools the threat actor used. This information is shared with
stakeholders, including security operations and leadership.
The MITRE ATT&CK framework was created to organize the real-world industry observations of threat actors into a standardized language of
tactics, techniques, and procedures (TTPs) to help organizations share information and recommendations, which can be used to harden security
programs.
Given the breadth and depth of the framework, understanding, consuming, and mapping the tactics and techniques within the MITRE ATT&CK
framework into reliable and usable remediation steps can be a complicated and time-consuming task.
The set of playbooks in the MITRE ATT&CK - Courses of Action content pack helps you automatically map your incident response to MITRE
ATT&CK techniques and sub-techniques in an organized and automated manner, which ensures your organization not only blocks specific
reported IoCs but also takes a more holistic approach to preventing future attacks. With Cortex XSOAR, you can leverage prebuilt automation
playbooks to cross-reference every incident with the tactics and techniques of the MITRE ATT&CK framework.
This content pack provides manual or automated remediation of MITRE ATT&CK techniques across the kill chain. Security analysts choose the
techniques relevant to their security program and run the prebuilt playbooks that leverage expert remediation workflows. This can be found in the
built-in MITRE ATT&CK dashboard.
When used with Unit 42’s feed ingesting Actionable Threat Objects and Mitigations (ATOMs), your team gets notified when there is a new threat
actor report, with recommendations for immediate remediation action. This allows your security team to apply industry threat response protocols
and best practices to block specific reported IoCs and take a more holistic approach to prevent future attacks.
In cloud security, there are many infrastructures and products to deal with. The security of your cloud is often a shared responsibility between you,
your cloud service provider, and other teams. Cloud SecOps teams report that cloud security incidents are treated on a case-by-case basis, and
the remediation process is high-touch and manual. There is often no correlation between cloud platforms and on-premises security.
Cortex XSOAR can unify processes across multi-cloud and on-premises security infrastructures, providing your security teams with a single
console to execute the incident response. We also integrate with cloud-based identity management tools, enabling role-based and keyless
deployment of services without the need for credential management.
With the move towards digital currency and the acceptance of cryptocurrency for financial transactions, cryptojacking isn't declining anytime soon.
For example, you may automate a response to a cryptomining alert. Cortex XSOAR can ingest cloud security alerts from AWS, Google Cloud,
Microsoft Azure, or Prisma Cloud to fully or partially automate incident response.
Extract
The playbook extracts indicators (IPs, URLs, hashes, and so on) from the incident data. It can also open a ticket for the incident.
Enrich
The playbook enriches indicators with reputation data from threat intelligence tools that the SOC uses. It also enriches the ingested data with
additional context from SIEMs and other non-cloud-based event management tools to identify the full extent of the suspected attack. The playbook
checks if the indicators are identified as malicious.
Respond
The playbook obtains the instance and security group details, takes volume snapshots, and creates a tag for the EC2 instance to be isolated. These steps are classic digital incident response and forensics actions, but carried out in the cloud. What we are doing is
moving the EC2 instance into a separate virtual private cloud (VPC) as we would onto a separate virtual LAN (VLAN) in the on-premises world, getting a list of running
processes, analyzing the results, and also sending an email to the analyst for review.
If the indicators are not identified as malicious, the playbook can ask a security analyst to review the information and verify that it’s not dangerous
before closing the incident as a false positive.
Automation cuts analyst time and speeds up response by eliminating manual tasks, inter-team coordination overhead, and console switching between security products. Also,
you can enforce standard operating procedures across different teams for cloud security incident response. Other automation use cases include
automating incident response for common cloud security incidents like password and security group misconfigurations, access key compromises,
unpatched vulnerabilities, and unusual activity like port scans/port sweeps. View more automation content packs in Marketplace.
Vulnerability Management
Vulnerability management is a strategically important process that covers both the proactive and reactive aspects of security operations. Since
vulnerability management encompasses all computing and internet-facing assets, security teams often grapple unsuccessfully with correlating
data across environments, spending too much time unifying context and not enough time remediating the vulnerability.
Security orchestration playbooks can automate enrichment and context addition for vulnerabilities before passing them to the appropriate teams
for patch remediation. This maintains a balance between automated and manual processes by ensuring that analyst time is not spent executing
repetitive tasks but on making critical decisions and drawing inferences.
Extract
The playbook ingests asset and vulnerability information from a vulnerability management tool such as Tenable or Qualys. The related information
from the incident is extracted, and related indicators are created and enriched.
The playbook then enriches endpoint and CVE data through relevant tools. It also adds custom fields to the incident if the newly gathered data
requires them.
To provide the analyst with a richer vulnerability context, the playbook queries the vulnerability management tool for any diagnoses,
consequences, and remediations tied to the vulnerability. If any vulnerability context is found, it’s added to the incident data. Based on the
gathered context, the playbook then calculates the severity of the incident.
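A minimal sketch of such a severity calculation. The weighting is illustrative, not a documented Cortex XSOAR formula:

```python
# Combine the CVE's CVSS base score with asset context gathered during
# enrichment to derive an incident severity label.
def incident_severity(cvss: float, internet_facing: bool, exploit_available: bool) -> str:
    score = cvss
    if internet_facing:
        score += 1.5   # assumed weighting for exposed assets
    if exploit_available:
        score += 2.0   # assumed weighting for known public exploits
    if score >= 9:
        return 'Critical'
    if score >= 7:
        return 'High'
    return 'Medium' if score >= 4 else 'Low'

print(incident_severity(cvss=7.8, internet_facing=True, exploit_available=False))  # Critical
```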
Remediate
Playbooks can also use vulnerabilities to inform threat priority and initiate the patching process. Response actions can be taken by playbooks,
including:
Checking if assets (IP, domain, or certificate) associated with the issue are on the exclusions list and closing the incident
automatically.
The playbook now hands over control to the security analyst for manual investigation and remediation of the vulnerability.
Vulnerability scanners are great for monitoring your known assets, but what about your unknown assets? To uncover these blind spots, your
organization needs an automated attack surface management (ASM) solution like Cortex Xpanse that continuously discovers and monitors the
entirety of IPv4 space to provide a complete and accurate inventory of your global internet-facing assets and misconfigurations.
Together with Cortex XSOAR, Xpanse enables you to automate the identification and remediation of web-facing exposures to reduce your mean
time to detect and respond (MTTD and MTTR).
The integration enables the fetching and mirroring of Xpanse issues into Cortex XSOAR incidents, as well as the ingestion of indicators (IPs,
domains, and certificates) referring to the corporate network perimeter as discovered by Xpanse. Leveraging both technologies, your security team can:
Discover
Scan the internet and accurately attribute unknown assets using multiple sources to reduce false positives and map your full attack surface.
Enrich
Use automated playbooks to enrich incidents using Xpanse asset information and threat intelligence indicators, helping you reduce MTTD and
MTTR across your cloud native, hybrid, and on-premises environments.
Remediate
Improve your team’s efficiency with a host of integrations and prebuilt scripts to automate attack surface management. For more details, see the
Cortex Xpanse content pack.
In this use case, we will pivot from the SOC to the NOC. A flexible and scalable SOAR platform can be applied to any workflow or process, and our
own Palo Alto Networks operations teams are using Cortex XSOAR internally to automate their manual processes.
One area where we have seen great benefits is network operations, where manual but necessary tasks are a time burden for the ITOps and
NetOps teams.
It’s a tedious and manual process to upgrade and validate all firewalls distributed across your network. There is significant time investment needed
in the process where your team needs to download the firewall update, install, reboot, and verify that the upgrade was successful. For enterprises
with over 100 firewalls distributed across their organization, this process is not scalable and is done infrequently.
“We manage about 450 firewalls. It takes us two hours to upgrade each firewall. We can only do a few at a time to ensure everything upgrades
correctly.” – Insurance industry customer
With Cortex XSOAR, you can onboard and upgrade all your devices within the environment and automatically verify upgrade status. There is still
time required to download and reboot the system, but your NetOps team no longer has to “babysit” the process. Snapshots of the configuration
can be captured to enable rollbacks if necessary. Once the upgrade is complete, verification steps can be performed to ensure the firewall is
functioning properly.
There are many more automation use cases that can be deployed to streamline network operations, from policy and rule change management to
monitoring network health and outages, but your NetOps teams will derive great efficiency benefits just from starting their automation journey with
this key use case.
Cortex XSOAR requires a yearly license per user. Multi-year licenses are available.
License usage
The following describes the Cortex XSOAR license types and what each includes:

Cortex XSOAR (Enterprise): Built for customers who need a complete security automation solution. Includes the SOAR Enterprise Edition and TIM Enterprise licenses.

Cortex XSOAR Threat Intel Management Edition: Built for Threat Intelligence and Security Operations teams who need threat intelligence-based automation. Includes the TIM Enterprise license only.

Cortex XSOAR Starter Edition: Built for Security Operations and Incident Response customers who need case management with collaboration and playbook-driven automation. Includes the SOAR Enterprise license only.
Multi-Tenant
Cortex XSOAR Enterprise, Threat Intel Management, and the Starter editions are all available for multi-tenant deployments, with a multi-tenant
license. Cortex XSOAR multi-tenant deployments are designed for MSSPs (managed security service providers) and enterprises that require strict
data segregation but also need the flexibility to share and manage critical security practices across tenant accounts.
The multi-tenant license (for example PAN-DEMISTO-MSSP) includes a main tenant. You can install as many child tenants as required using the
installer.
Development/Production tenants
In Cortex XSOAR you can use a content management system with a remote repository to develop and test content. If you want a development
tenant, you need to download the installer and add the development tenant license after installation.
License quota
The following table describes the license quotas of each version in Cortex XSOAR.
XSOAR TIM (TIM Only): Threat Intel Library - Unlimited. Unit 42 Intelligence - Unlimited UI access, 5k/day API points.

XSOAR Starter Edition (SOAR Only): Threat Intel Library - Intelligence detail view and relationship data are not included. Unit 42 Intelligence - Not included.

XSOAR (SOAR + TIM): Threat Intel Library - Unlimited. Unit 42 Intelligence - Unlimited UI access, 5k/day API points.
NOTE:
Intel feed quotas are based on the selected Fetches Indicators field in the integration instance settings, not the enabled status. Disabling an
integration instance does not affect the Intel feed quota. For example, if the AWS Feed is enabled and is fetching indicators and you don't want
to include this in your quota, open the integration settings and clear the Fetches Indicators checkbox.
Audit user
Audit users have read-only permission in Cortex XSOAR, meaning they cannot edit system components and data, or run commands, scripts, and
playbooks. Audit users can view incidents, dashboards, and reports.
Full user
Full users have read-write permission in Cortex XSOAR, meaning they can view and edit system components and data. They can investigate
incidents, run scripts and playbooks, chat in the War Room, and more. Full users’ access to Cortex XSOAR is determined by their assigned role.
Learn about the typical core roles that make up a SOC team.
Security Operations Centers (SOCs) were created to facilitate collaboration among security personnel, with a primary focus on security monitoring
and alerting, including the collection and analysis of data to identify suspicious activity and improve the organization's security. A SOC can
streamline the security incident handling process as well as help analysts triage and resolve security incidents more efficiently and effectively. In
today’s digital world, a SOC can be located in-house, in the cloud (a virtual SOC), staffed internally, outsourced, for example, to an MSSP or MDR,
or a mix of these. SOCs can provide continuous protection with uninterrupted monitoring and visibility into critical assets across the attack surface.
They can provide a fast and effective response, decreasing the time between when a compromise first occurs and when it is detected (the mean time to detection).
Typical core roles that make up a SOC team consist of different tiers of SOC analysts and dedicated managers:
Tier 1 ‑ Triage specialist: Monitors incoming alerts, prioritizes them, and performs the initial assessment, escalating higher-priority security incidents to tier 2.
Tier 2 ‑ Incident responder: Reviews the higher-priority security incidents escalated by triage specialists and does a more in-depth
assessment using threat intelligence, such as indicators of compromise and updated rules. Incident responders need to understand the
scope of an attack and be aware of the affected systems. The raw attack telemetry data collected at tier 1 is transformed into actionable
threat intelligence at this second tier. Incident responders are responsible for designing and implementing strategies to contain and recover
from an incident. If a tier 2 analyst faces major issues with identifying or mitigating an attack, additional tier 2 analysts are consulted, or the
incident is escalated to tier 3.
Tier 3 ‑ Threat hunter: Most experienced workforce in a SOC. Threat hunters handle major incidents escalated to them by the incident
responders. They also perform or at least supervise vulnerability assessments and penetration tests to identify possible attack vectors. Their
most important responsibility is to proactively identify possible threats, security gaps, and vulnerabilities that might be unknown. They should
also recommend ways to optimize the deployed security monitoring tools as they gain reasonable knowledge about a possible threat to the
systems. Additionally, any critical security alerts, threat intelligence, and other security data provided by tier 1 and tier 2 analysts need to be
reviewed at this tier.
SOC manager: Supervises the security operations team. SOC managers provide technical guidance if needed, but most importantly, they
are in charge of managing the team. This includes hiring, training, and evaluating team members; creating processes; assessing incident
reports; and developing and implementing necessary crisis communication plans. They also oversee the financial aspects of a SOC, support
security audits, and report to the chief information security officer (CISO) or a respective top-level management position.
Follow the steps to successfully onboard and configure Cortex XSOAR On-prem.
Get up and running quickly. Our intuitive Onboard section guides you through essential setup steps like installation, remote repository
configuration, and content management. Once you're set up, customize Cortex XSOAR to match your requirements.
We recommend that you review the following steps to successfully deploy and onboard Cortex XSOAR:
Step 1: Install Cortex XSOAR. Install Cortex XSOAR by downloading the image file from the Cortex Gateway.
Step 2: Set up an engine. Use an engine for load balancing and proxies.
Step 3: Set up a remote repository. Set up a dev/prod environment with a private remote repository.
Step 4: Set up users and roles. Configure users, roles, and user groups, and set up authentication.
Step 5: Install and configure content. Install content packs and configure integrations for your use case.
Post-deployment: Configure user notifications and customize system emails. Configure system settings.
Learn more about deployment considerations and onboarding steps for Cortex XSOAR.
Before you start your Cortex XSOAR deployment, consider the following:
You may need to create an engine to enable communication or for load balancing.
Currently, if you deploy a single node (standalone), you can't switch to a cluster of three nodes.
If you deploy a cluster of three nodes, you can implement out-of-the-box high availability (HA) by replicating data between the nodes
in the cluster. For more information, see High Availability for Cortex XSOAR.
The remote repository enables developing and testing content in a development environment before using it in a production environment.
Production and development are separate Kubernetes clusters with no dependency between them. For example, you can deploy a three-
node cluster for production and a standalone node for development. Or if you want to implement HA with three nodes for production and for
development, you need a total of six nodes, three for production and three for development.
How do you want users to access Cortex XSOAR? Do you need to set up SSO?
Which mail sender do you use? Do you want to integrate a communication app, such as Slack?
What steps do you currently take in your day-to-day SOC operations? Which integrations will enable you to automate your most important
and time-consuming procedures?
Learn how to install Cortex XSOAR On-prem, including system requirements, and adding a license.
To install a Cortex XSOAR 8 tenant, you need to log into Cortex Gateway, which is a portal for downloading the relevant image file and license. If
you have multiple or development tenants, you must repeat this task for each tenant.
You need to set up your CSP account. For more information, see How to Create Your CSP User Account.
When you create a CSP account you can set up two-factor authentication (2FA) to log into the CSP, by using an Email, Okta Verify, or
Google Authenticator (non-FedRAMP accounts). For more information, see How to Enable a Third Party IdP.
CSP role: The Super User role is assigned to your CSP account. The user who creates the CSP account is granted the Super User role.
Cortex XSOAR supports standalone or cluster installation. Cluster installation is suitable for production environments involving large-scale
data, and offers scalability and High Availability. Standalone is more suitable for small-scale data scenarios. For more information, see
Installation overview.
Add DNS records that point the following host names to the cluster IP address.
Cluster FQDN: The Cortex XSOAR DNS name for accessing the UI. For example, xsoar.mycompany.com.
API-FQDN: The Cortex XSOAR DNS name that is mapped to the API IP address. For example, api-xsoar.mycompany.com.
ext-FQDN: The Cortex XSOAR DNS name that is mapped to the external IP address. For example, ext-xsoar.mycompany.com.
1. From the Cortex Gateway, in the Available for Activation section, use the serial number to locate the tenant to download.
3. If you want to use a production and development tenant with a private remote repository, select Dev.
If you don't select it now, you can install a development tenant at a later stage.
5. Depending on the image file and the platform you want to deploy on, do one of the following:
When you download the image file, you receive two license files, one for each environment. Each must be uploaded separately to the respective
tenant.
b. In the Upload License section, drag and drop your license file.
7. Optionally perform post-installation maintenance, including scaling up hardware resources and using your own X.509 certificate for a secure
HTTP connection.
NOTE:
You are not restricted to using the platform installed on the production tenant. For example, if you have downloaded an OVA file and
installed the VM on AWS in the production tenant, you can install the VM on OCI in the development tenant.
For more information about setting up a remote repository, see Set up a private remote repository.
Engines are installed on a remote machine and used mainly for the following:
Integration instances for on-prem applications. For example, the GitLab v2 integration enables you to run commands on GitLab instances.
To execute scripts and commands that require access to on-prem resources. For example, the Active Directory v2 integration enables you to
run commands in Active Directory.
Generic Export Indicators Service. In Cortex XSOAR, you can configure an EDL to share a list of Cortex XSOAR indicators with other
products in your network, such as a firewall or SIEM. For example, your Palo Alto Networks firewall can add IP address and domain data
from the EDL to block or allow lists (see the sketch after this list).
Load balancing, which enables the distribution of the command execution load.
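To illustrate the EDL mentioned above, here is a sketch of how a downstream product might poll the list. The Generic Export Indicators Service serves indicators as a plaintext, newline-separated list over HTTP(S); the URL and credentials below are placeholders.

```python
# Poll the published EDL and collect its indicators, as a firewall or SIEM
# would. Hostname, path, and credentials are placeholders for illustration.
import requests

resp = requests.get(
    'https://xsoar.mycompany.com/instance/execute/edl',  # placeholder URL
    auth=('edl-user', 'edl-password'),                   # if basic auth is configured
    verify=True,
)
indicators = [line for line in resp.text.splitlines() if line]
print(f'{len(indicators)} indicators fetched for the block list')
```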
Before installation, we recommend you review the engine requirements for hardware and operating systems. Engines can be installed on Linux
machines running a variety of operating systems, including Ubuntu, RHEL, Oracle Linux, and Amazon Linux.
Engine requirements
Install an engine
To learn more about engine architecture, installation, upgrades, and configuration, see Engines.
Set up a content management system with a development environment to create and test content before using it in a production environment.
When you set up a remote repository, you can add any private content repository that is Git-based, including GitHub, GitLab, and Bitbucket.
On-prem repositories are also supported.
Although you can set up multiple development tenants, in a cluster of tenants that includes one production tenant and one or more development
tenants, only one development tenant can push content. The production tenant and any other development tenants pull from the one development
tenant that is configured to push content. After the remote repository is enabled in the production tenant, by default, the first development tenant
that has been installed is set to push content to the remote repository. When you create additional development tenants, they are set to pull
content from the remote repository.
If the content repository option is disabled for the production or development tenant, the tenant becomes standalone and does not push or pull
content.
If you are changing your remote repository settings, back up existing content to your local computer by navigating to Settings & Info →
Settings → System → Server Settings → Custom Content and clicking Export all custom content.
1. If you haven't done so already, download and install the Cortex XSOAR development tenant.
a. From the Cortex Gateway, in the Available for Activation section, use the serial number to locate the development tenant to download.
NOTE:
Although you must select the same image file you downloaded for production, you can use a different platform for the development
tenant. For example, if you have downloaded an OVA file and installed the VM on AWS in the production tenant, you can install the
VM on OCI in the development tenant.
2. After you have installed the development tenant, you can now set up the private remote repository. For more information, see Set up a
private remote repository.
To learn more about remote repositories, requirements, and configuration, see Content management in Cortex XSOAR.
Cortex XSOAR uses role-based access control (RBAC) to manage roles with specific permissions for controlling user access. RBAC helps
manage access to Cortex XSOAR components, so that users, based on their roles, are granted the minimal access required to accomplish their
tasks.
Roles enable you to define permissions for specific components, such as incident data, playbooks, scripts, and jobs. For example, you can create
a role that allows users to edit the properties of incidents, but not delete incidents. You can create new roles or customize out-of-the-box roles.
Roles can also be used to define permissions for integration commands. On the Integration Permissions page, you can assign roles to specific
integration instances (all commands for that instance) or specific integration instance commands. For example, you could assign the Generic
Export Indicators Service integration instance the Account Admin role, or you could restrict certain commands in the Core Rest API to a specific
role. For more information, see Integration Permissions.
2. Create a role.
For more information about out-of-the-box roles, permissions, and how to create roles, see Roles management.
While roles can be assigned directly to users, we recommend instead creating user groups. Each user group has a single role associated with it,
but each user group can contain multiple users and user groups can be nested within each other, enabling you to further refine your RBAC
requirements. Users can belong to multiple user groups.
For more information about user groups and how to create them, see User group management.
After adding users, assign users to user groups or assign users to direct roles.
You can create users locally or by using SAML Single Sign-On (SSO) in the tenant. Users authenticate by doing one of the following:
Users can be authenticated using your identity provider (IdP), such as Okta, Ping, or Azure AD. You can use any IdP that supports SAML 2.0.
You can manage users including resetting passwords, sending invitations, and removing users.
By default, users do not have roles assigned and do not automatically have access to tenant data until you assign them a role or add them as
members of a user group that has an assigned role.
For more information about how to manage users, see User management.
What is content?
Integrations: Third-party tools and services that the Cortex XSOAR platform works with to orchestrate and automate SOC operations. You can trigger events from these integrations that become incidents in Cortex XSOAR. After the incidents are created, you can run playbooks on these incidents to enrich them with information from other products in your system.

Playbooks: You can automate many security processes, including handling investigations and managing tickets and security responses that were previously handled manually. Playbooks enable you to organize and document security monitoring, orchestration, and response activities. When an incident is ingested and classified, the playbook assigned to its incident type can run automatically.

Dashboards, reports, and widgets: Dashboards and reports consist of visualized data powered by fully customizable widgets, which enable you to analyze data from inside or outside Cortex XSOAR in different formats such as graphs, pie charts, or text. Reports allow you to share similar data outside of Cortex XSOAR via email. Reports can be scheduled to run at a specific time to capture data where the start/end time is important.

Classifiers and mappers: Classification determines the type of incident/indicator that is created for events ingested from a specific integration. You create a classifier and define that classifier in an integration. Mappers map the fields from your third-party integration to the fields that you defined in your incident/indicator layouts.

Incident types, fields, and layouts: All incidents that are ingested into Cortex XSOAR are assigned an incident type when they are classified. Each incident type has a unique set of data that is relevant to that specific incident type. Fields and layouts ensure that you see information that is relevant to the incident type.

Indicator types, fields, and layouts: Indicators are categorized by indicator type, which determines the indicator layout and fields that are displayed and which scripts are run on indicators of that type.

Scripts: Perform a specific action and consist of commands associated with an integration. Write scripts in either Python or JavaScript. Scripts are used as part of tasks, which are used in playbooks and commands in the War Room.
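For a feel of what a script looks like, here is a minimal Python automation sketch. In the platform the demisto object is injected automatically; demistomock is the local development stub from the content repo.

```python
# Minimal Cortex XSOAR script sketch: read an argument and return a result to
# the War Room. Automations receive arguments via demisto.args() and return
# output with demisto.results().
import demistomock as demisto

name = demisto.args().get('name', 'analyst')
demisto.results(f'Hello, {name}! This script ran as a playbook task.')
```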
Content is organized into content packs to support specific security orchestration use cases, which are either preinstalled or downloaded from
Marketplace. Content packs are created by Palo Alto Networks, technology partners, contributors, and customers.
After downloading and installing content packs, you can then start customizing the content to suit your use case. For example, although Cortex
XSOAR comes with a Mail Sender integration already configured, you may want to set up your own Mail Sender integration, such as EWS.
For more information about installing and configuring content packs, see Manage content packs.
You can only install one content pack at a time. Cortex XSOAR automatically adds any content that is required to install the content pack. You can
also add any optional content packs that use the content pack you want to install.
If you receive an error message when you try to install a content pack, you need to fix the error before installing. If a warning message is issued,
you can still download the content pack, but you should fix the problem; otherwise, the content may not work correctly.
Before you install a content pack, review it to see what it includes and what its dependencies are. You can view the following
information:
Details: General information about the content pack such as installation, content, version, author, and status.
Dependencies: Details of any required content packs and optional content packs that may need to be installed with your content pack.
Version History: View the currently installed version, earlier versions, available updates, and revert if required.
1. Go to Marketplace → Browse and locate the content pack you want to install.
4. (Optional) If the content pack includes optional content, select the content packs you want to add.
The Cart displays the number of items you are installing including any required content packs. You can log in and out, but the content packs
remain in the Cart until you click either Empty cart or Install.
5. Click Install.
You can now start configuring your content. If you have installed an integration, configure the integration including setting up an integration
instance. For more information, see Configure integrations.
The Deployment Wizard guides you step-by-step to quickly adopt your use case.
The Deployment Wizard can be used to set up your use case for the Malware Investigation and Response content pack and the Phishing
content pack. In order to work with your content pack you need to set up your integrations. The Deployment Wizard guides you through:
Configuring the integrations that will be used to fetch events (fetching integrations). These events will be mapped as incidents.
Configuring the main playbook and its input parameters. For example, the Setup Malware playbook pane opens showing the recommended
primary playbook for the incident type you selected when configuring the fetching integration. The playbook configuration includes all the
input parameters to configure that will change the playbook behavior, for example, whether to use sandbox detonation or whether to perform
isolation response. You can open the playbook by clicking the link at the bottom.
The default fetching integration for your content pack depends on which fetching integration(s) are installed. For example:

Malware Investigation and Response:
1. Palo Alto Networks Cortex XDR - Investigation and Response
2. CrowdStrike Falcon

Phishing:
1. Gmail
2. EWS v2 (Make sure you also install the Microsoft Exchange On-Premise pack)
Prerequisites
To access the Deployment Wizard for the first time, you need to first install or update your Malware Investigation and Response content pack or
your Phishing content pack in Marketplace. The Deployment Wizard tab appears in Marketplace after the content pack installation or update is
completed.
For example:
For the Phishing content pack, you need at least one email gateway content pack (mandatory). You can also optionally install sandbox, EDR
systems, network devices, email security gateways, mail sender, and data enrichment and threat intelligence content packs.
1. In Marketplace, select the content pack for your use case (for example, Malware Investigation and Response or Phishing) and click Install or
Update (if the pack is already installed).
2. In the Select Content Packs window, select one or more content packs from the required categories. You can also install other
supportive content packs from other categories if needed. These items will automatically be added to the cart.
4. When the content pack finishes installing or updating, click Refresh content.
NOTE:
After you start running your use case you can return to this tab and make changes to the configurations, such as your integration’s
credentials or playbook parameters.
5. Click Let’s Start in the small dialog box that appears next to the Deployment Wizard tab.
6. Step 1: Fetching Integration - Click the displayed fetching integration. If the integration is new, select New instance. If you want to use an
existing instance, select it from Update existing instance. The integration will stay disabled until you complete all steps of the wizard.
NOTE:
You must define the incident type in order to set the playbook in the next step.
A list of What needs to be done guides you through the required fetching integration instance settings configurations. Scroll down to see the
complete list.
After you save your settings, the wizard initiates a test connection. If the connection succeeds, the Fetching Integration step turns green and
the wizard moves to the next step (Set Playbook).
NOTE:
The wizard displays the recommended playbook. If, when setting up the fetching integration, you chose an incident type that uses a different
playbook from the recommended one, the incident type will be detached.
8. Click Done.
9. Step 3: Supporting Integrations - Configure any installed supporting integrations in the content pack.
If a supporting integration is already installed and connected, it appears with a green check. Otherwise, click the integration to configure it.
NOTE:
After you save the settings, the integration instance is automatically enabled.
10. Step 4: What’s Next - Select Turn on Use Case to start fetching incidents and running the playbooks and scripts.
User communication: You can choose a mail sender and customize system emails. Users can set which notifications they want to receive
and through which channels, such as email or Slack.
System settings: Cortex XSOAR offers a wide variety of system settings. For example, you can enhance security, set a custom logo and
login message, and choose a timezone.
In Cortex XSOAR, you can configure mail and messaging integrations to send notifications to users and you can customize the subject and body
of system emails.
Users can choose which notifications to receive and whether to receive notifications via email, Slack, Microsoft Teams, or other communication
tools. For more information, see User details and preferences.
Cortex XSOAR can send out notifications and emails to users through the following:
A mail integration enables Cortex XSOAR to send emails and can be used for system notifications and playbooks. For example, when adding
users to Cortex XSOAR, an email invitation is sent to users to log in. When you use the mail integration for playbook tasks, you can pass
arguments such as to, subject, body, etc. to customize the contents of your email.
1. Go to Marketplace.
2. Search for and download a mail sender content pack (such as Microsoft Exchange On-Premise).
4. Locate the mail sender integration (for example, EWS v2) and click Add Instance.
5. Configure your mail sender integration and select Enable to enable your mail sender integration.
6. If you configure multiple email integrations, select the Do not use in CLI by default option in the integration instances that should not be used
to send emails. This ensures that the email will only be sent in the instance you are expecting when running the send-mail command from
the CLI or within a playbook.
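For example, a minimal sketch of sending a test email from the CLI against a specific instance (the instance name EWS v2_instance_1 and the
recipient are placeholders for your own values):

    !send-mail to="analyst@example.com" subject="Test notification" body="Mail sender connectivity check" using="EWS v2_instance_1"

The using argument directs the command to a specific integration instance rather than the default instance.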
When there are multiple instances of a mail sender in Cortex XSOAR, you can choose which email sender should send the notification by
configuring the server.notification.using.sendmail key in the advanced server configuration settings.
If you do not configure the advanced server setting, Cortex XSOAR uses the first email integration it finds to send the system notifications.
1. Navigate to Settings & Info → Settings → Server Settings → Server Configuration → Add Server Configuration.
2. Add the following key and enter the mail sender instance name:
Key Value
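For example, if your mail sender instance is named EWS v2_instance_1 (a placeholder for your actual instance name):

    Key:   server.notification.using.sendmail
    Value: EWS v2_instance_1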
If your organization uses a messaging service, such as Slack or Microsoft Teams, we recommend installing the relevant content pack.
The Slack content pack enables you to send messages and notifications to your Slack team and integrates with Slack's services to execute create,
read, update, and delete operations for employee lifecycle processes. For more information, see Slack content pack. For more information about
Microsoft Teams, see Microsoft Teams content pack.
Abstract
Customize subject and message body for Cortex XSOAR system emails and choose HTML and/or text format.
The default subject for all system messages is "Message from Cortex XSOAR Security Operations Server". The default message body for each
message type is:
mentionNew: {{.username}} added you to investigation {{.invName}}.\nYou were mentioned: {{.parentContent}}.
mentionNewNoContent: {{.username}} added you to investigation {{.invName}}.
assign: {{.username}} assigned task #{{.taskId}} in investigation {{.invName}} to you.
todoAssign: {{.username}} assigned To-Do task {{.title}} in investigation {{.invName}} to you.
taskCompleted: {{.username}} completed task #{{.taskId}} in investigation {{.invName}}.
taskUpdated: {{.username}} updated task #{{.taskId}} in investigation {{.invName}}.
investigationClosed: {{.username}} has closed investigation {{.invName}}.
investigationWaiting: {{.username}}, {{.invName}} has stopped and is waiting your instructions.
investigationError: {{.username}}, {{.invName}} has stopped because of an error.
investigationDeleted: {{.username}} has deleted investigation {{.invName}}.
incidentAssigned: {{.username}} has assigned you {{.incTermArticle}} {{.incTermSingular}} {{.invName}}.
taskCompletedWithNotes: {{.username}} completed task #{{.taskId}} in investigation {{.invName}}.\nCompletion note was: {{.taskComment}}
incidentReminderSLA: FYI, {{.incTermSingular}} #{{.invID}} "{{.reminedOn}}" - SLA expiration is approaching. ({{.SLA}})
MessageTypeTaskSLA: FYI, task "{{.reminedOn}}" (from investigation {{.invName}}) - due date is approaching. ({{.SLA}})
newContentAvailable: A content update: {{.release}} for your Demisto Server is available.\n{{.releaseNotes}}
jobRunning: A previous instance of job {{.invName}} is already running.
2. Add the key messages.subject.formats.<MessageType>, where <MessageType> is the type of message, such as assign or
taskCompleted. For the value, enter your custom subject. You can use any of the default variables, for example .invName in your subject.
Examples:
Key Value
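For example, to customize the subject of task assignment and task completion emails (the subject text is illustrative, not a product default):

    Key:   messages.subject.formats.assign
    Value: [Cortex XSOAR] {{.username}} assigned you a task in {{.invName}}

    Key:   messages.subject.formats.taskCompleted
    Value: [Cortex XSOAR] Task completed in {{.invName}}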
You can customize the content of the system messages and include variables such as .username and .invName in your body content.
You can send HTML or plain text messages. If you have users who can only receive plain text, use the key messages.formats.<MessageType>,
where <MessageType> is the type of message, such as assign or taskCompleted, and enter your custom body text as the value. If you have users
who can receive HTML emails, use the key messages.HTML.formats.<MessageType>, where <MessageType> is the type of message, and enter
your custom body text as the value. To set custom body text for both plain text and HTML messages, add both keys and values for each message
you want to customize.
2. Add the key messages.formats.<MessageType> or messages.HTML.formats.<MessageType>. For the value, enter your custom email
body.
Examples:
Key Value
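For example, a plain text and an HTML body for the same message type (the body text is illustrative, not a product default):

    Key:   messages.formats.taskCompleted
    Value: {{.username}} completed task #{{.taskId}} in investigation {{.invName}}. Please review the results.

    Key:   messages.HTML.formats.taskCompleted
    Value: <p><b>{{.username}}</b> completed task #{{.taskId}} in investigation <i>{{.invName}}</i>. Please review the results.</p>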
You can configure server settings, such as a keyboard shortcut for fast navigation, the timezone and the timestamp format, logo, login message,
and specific server configurations, from the Server Settings page. You can also import and export custom content.
Abstract
You can create a more personalized user experience by defining your server settings. Go to Settings & Info → Settings → Server Settings.
NOTE:
By default, the keyboard shortcuts, timezone, and timestamp format options appear on the Preferences table of the User Details page. To
instead display these settings in the Server Settings page, add the UI.show.timezone.in.server.settings server config, set to true.
Keyboard shortcuts, timezone, and timestamp format are not set universally and only apply to the user who sets them.
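For example:

    Key:   UI.show.timezone.in.server.settings
    Value: true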
Keyboard Shortcuts: Use a shortcut to search, investigate, and initiate actions. To change the shortcut letter, click the letter in the box, type a
letter, and then save.
Timezone: Select the timezone to display your Cortex XSOAR data, which affects the timestamps displayed in Cortex XSOAR, such as auditing
logs and exported files.
Timestamp Format: The timestamp format is displayed in data tables, auditing logs, and exported files. The setting is configured per user and
not per tenant.
Appearance: By default, the full-size Cortex XSOAR logo displays on the sign-in page, on the navigation bar when expanded, on reports, on the
artifact viewer, and on communication task forms and emails. A minimized version of the default Cortex XSOAR logo displays at the top of the
navigation bar when it is collapsed. You can replace the default logos with a custom logo to match your organization's branding in the Cortex
XSOAR platform. Supported file formats are PNG, JPEG, SVG, and GIF. You can add the following:
Full-size logo: Upload your logo (displayed when the navigation bar is not collapsed).
Minimized logo: Upload your logo for the top of the navigation bar when it is collapsed (minimized).
NOTE:
If you define a full-size logo, but not a minimized logo, no logo will display when the navigation bar is collapsed.
Telemetry Collection: Cortex XSOAR uses telemetry to collect specific usage data, which is analyzed and used to improve Cortex XSOAR and
to identify common usage to help drive the product roadmap.
All: Includes data that helps improve operational efficiency, optimizes resource allocation, enhances the overall user experience of Cortex
XSOAR, and data relevant for debugging. For more information, see Telemetry in Cortex XSOAR.
System diagnostics only: Captures data relevant to debugging issues only. Cortex XSOAR sends error logs, stack traces, and infrastructure
metrics (such as CPU and memory) that can help debug technical issues.
None: No telemetry is transmitted, apart from essential information according to your license, such as usage.
NOTE:
Only users with Role → Components → Administration → View/Edit permission can change the telemetry scope, such
as Administrators.
Custom Content: Export and import custom content:
Export all custom content: Exports custom content, such as playbooks and scripts, as a content bundle, which you can import for use in
another Cortex XSOAR tenant.
Upload custom content: Imports custom content created from a Cortex XSOAR tenant.
Login Message: You can display a custom message to users before every login to Cortex XSOAR. For example, you can add a message
that includes terms and conditions specific to your organization to help adhere to National Institute of Standards and Technology (NIST)
security standards and reduce cybersecurity risk. The message is disabled by default.
NOTE:
You must have administration rights to access this feature. The message supports markdown.
Server Configuration: Customize your Cortex XSOAR environment on the tenant level. You can also use custom server configurations when
you experience issues or need to troubleshoot situations in your environment. For a list of server configurations, see Server configurations.
Abstract
Configure security settings such as session expiration, user login expiration, and dashboard expiration.
You can configure security settings such as how long users can be logged into Cortex XSOAR, and from which domains and IP ranges users can
log in.
Session Expiration - User Login Expiration: The number of hours (between 1 and 24) after which the user login session expires. You can also
choose to automatically log users out after a specified period of inactivity.
Allowed Sessions - Approved Domains: The domains from which you want to allow user access (login) to Cortex XSOAR. You can add or
remove domains as necessary.
User Expiration - Deactivate Inactive User: Deactivate an inactive user, and also set the user deactivation trigger period. By default, user
expiration is disabled. When enabled, enter the number of days after which inactive users should be deactivated.
Allowed Domains - Domain Name: Enables you to specify one or more domain names that can be used in your distribution list for audit
forwarding.
Configure engines, playbooks, scripts, dashboards, etc., for your use case.
As soon as you have completed onboarding with Cortex XSOAR, you can start configuring the tenant to match your use cases.
Engines: If you have not done so already, you can configure and manage engines, such as using an engine as a web proxy and setting up
Docker hardening. See Engines.
Marketplace: You may want to install additional content packs; delete, update, or revert packs; and set up notifications. See Cortex
Marketplace.
Integrations: Configure integrations, including fetching incidents, managing credentials, troubleshooting, and more. See Integrations.
Incidents: Customize incident fields, layouts, and types, set up preprocessing and post-processing rules, limit access to an investigation, etc.
See Incidents.
Playbooks: Learn how to customize your playbooks, including creating tasks, sub-playbooks, and polling. See Playbooks.
Jobs: Run playbooks based on certain events or at a specific time and date. See Jobs.
SLAs: Incorporate SLA fields in your investigations so you can view how much time is left before the SLA becomes past due, and configure
actions to take when the SLA is past its due date. See SLAs.
Indicators: Customize indicator fields, layouts, and types, classify and map fields, and delete and exclude indicators. See Indicators.
Dashboards, reports, and widgets: Customize and create widgets to add to your dashboards and reports. See Dashboards.
After you have configured Cortex XSOAR, analysts can start to investigate incidents and indicators.
Install Cortex XSOAR On-prem and complete post-installation steps. Learn how to upgrade Cortex XSOAR.
Download an image file from the Cortex Gateway, configure, install, and complete the post-installation steps. Learn how to upgrade Cortex
XSOAR.
Learn how to install Cortex XSOAR On-prem, including system requirements, and adding a license.
Before installing Cortex XSOAR, ensure your environment meets all requirements to avoid installation issues and enable a smooth setup.
Depending on your needs, decide whether to deploy a standalone node or a cluster of three nodes for optimal performance and scalability.
Standalone Standalone uses a single node, which is more suitable for small-scale data scenarios. A node is a virtual machine (VM)
with a distinct host IP address that runs the Cortex XSOAR application.
Deployment on a standalone environment involves setting up one VM. After deploying the relevant image file, a textual
UI guides you through the installation process, which includes installing the cluster from one node and setting the
node's IP address.
NOTE:
Currently, if you deploy a single node (standalone), you can't switch to a cluster of three nodes.
Cluster A cluster is a group of three nodes that are managed together and participate in workload management. It is suitable for
large-scale data production environments and offers scalability and High Availability.
After deploying the relevant image file, configure the nodes by opening the textual UI:
In the Connect Nodes menu, connect the VMs (establish trust between all nodes in the cluster).
In the Cluster Installation menu, select one node from which to install the cluster, including setting the IP
addresses of each node. To implement High Availability, set the FQDN IP address to either a virtual IP or a
reverse proxy/ingress controller as a single entry point to distribute traffic across the nodes in the cluster.
NOTE:
Each node must meet the minimum specifications, depending on whether you require extra small, small, medium, or large scale. For more
information, see System Requirements.
Cortex XSOAR supports the following image files, which are downloaded from Cortex Gateway:
AWS: For more information, see Install Cortex XSOAR on a VM deployed on AWS.
OCI: For more information, see Install Cortex XSOAR on a VM deployed on OCI.
VMWare: For more information, see Install Cortex XSOAR on a VM deployed on VSphere.
VHD: Deploy on Microsoft Hyper-V. For more information, see Install Cortex XSOAR on a VM deployed on Hyper-V.
Post-installation
After installation, add your license to Cortex XSOAR and set up a secure HTTP connection, if required.
You can optimize system performance, such as adding or removing nodes in a cluster. For more information, see Post-installation and Optimize
performance from the textual UI.
High availability keeps your systems running even if one of your components fails. It provides redundancy for the different components, so if a
problem occurs, it has a minimal effect on your system.
If you deploy a cluster of three nodes and set the Cortex XSOAR IP address access to either a virtual IP or the reverse proxy/ingress controller IP,
the system implements built-in high availability. This enables workload distribution and data replication across the nodes, and continuous operation
in case of node failure.
Tasks and data are distributed across the nodes to balance the load.
If a node goes down, workloads on the failed node are automatically distributed to the other nodes.
NOTE:
There may be several minutes of downtime until the other nodes take over.
Once the failed node is restored, it automatically reintegrates into the cluster and the workloads are automatically rescheduled.
For more information on setting up built-in High Availability for your specific deployment, see Cortex XSOAR Installation.
Once you deploy your cluster, you can deploy a second cluster in a secondary data center to enable High Availability and disaster recovery
functionality using backup and restore operations.
IMPORTANT:
The secondary environment must run the same Cortex XSOAR version with the same resources as the primary production environment to
ensure seamless restoration (the clusters must be the same).
With periodic backups of the cluster in the primary data center to the cluster in a secondary data center, if the primary data center becomes
unavailable, you can easily restore it from the secondary backup. For more information, see Set up backup and restore in Cortex XSOAR.
Once you set up and install your cluster, you can monitor node status and recover from node failure as needed.
1. In Cortex XSOAR, monitor the node health on the System Diagnostics page. For more information, see View system status in the System
Diagnostics page.
2. If there is a node failure, manage the nodes from the textual UI.
For example, if a node fails remove it and then add a new node to replace it. For more information, see Manage nodes in a cluster.
If you want to replace a node in the cluster after completing the installation, you need to set the host again and reestablish trust between all the
nodes.
Verify that your Cortex XSOAR deployment meets the minimum system requirements.
Cortex XSOAR requires the following hardware, ports, URLs, bandwidth, and node synchronization.
URL requirements
Bandwidth requirement
Abstract
The Cortex XSOAR tenant has specific minimum VM hardware requirements depending on the scale.
IMPORTANT:
A hypervisor host running VMs must have enough hardware resources to support all Cortex XSOAR VMs you plan to run on.
Each VM (node) in a cluster must have the same resources. For example, a 3 VM cluster planned to run on a host must have at least 3 times
the listed specs for memory, CPU cores, and storage. Leave additional space for overhead virtualization operations.
To fully leverage High Availability, deploy each VM on a different hypervisor. This ensures that the other VMs continue to operate if one
hypervisor fails.
The following requirements apply for a single node (standalone), or each node in a cluster.
Extra small scale:
CPU per VM: 8 CPU cores
Storage per VM: 256 GB boot disk plus a separate 775 GB data disk. These disks must be SSDs.
Performance benchmark: Up to 100 incidents per day / up to 10 incidents per hour

Small scale:
CPU per VM: 16 CPU cores
Storage per VM: 256 GB boot disk plus a separate 775 GB data disk. These disks must be SSDs.
Performance benchmark: Up to 200 Phishing - Generic V3 playbook runs per hour

Medium scale:
CPU per VM: 32 CPU cores
Storage per VM: 256 GB boot disk plus a separate 1.3 TB data disk (1 TB = 1024 GB). These disks must be SSDs.
Performance benchmark: 200 - 400 Phishing - Generic V3 playbook runs per hour

Large scale:
CPU per VM: 48 CPU cores
Storage per VM: 256 GB boot disk plus a separate 1.8 TB data disk (1 TB = 1024 GB). These disks must be SSDs.
Performance benchmark: Up to 650 Phishing - Generic V3 playbook runs per hour

NOTE:
The benchmark values for running the Phishing - Generic V3 playbook are for a single node. This value can vary according to playbook size and
complexity.
Abstract
The following ports are required for standalone (one VM) and a three-node cluster (three VMs).
A Kubernetes cluster consists of a control plane and one or more worker nodes. For Cortex XSOAR, in standalone (one VM), the VM acts as both
control plane and as a worker node. In multi-node clusters, the first three nodes act as both control plane and as worker nodes, and any additional
node added acts as a worker node.
Intra-node communication
Abstract
You need to allow the following URLs for Cortex XSOAR to operate properly.
NOTE:
If you use SSL inspection and experience difficulty connecting to the required URLs or to integration URLs, exclude the required URLs from SSL
offloading on the firewall/proxy.
Download content packs and view the Marketplace (to view content pack images, the domain should also be reachable from the browser).
storage.googleapis.com: Download content packs and view the Marketplace. This domain stores content pack artifacts (to view content pack
images, the domain should also be reachable from the browser). It is possible to further limit the URL prefix to
https://storage.googleapis.com/marketplace-dist/
api.demisto.com: Download content packs and view the Marketplace (this file maps the Marketplace URL to the Cortex XSOAR version).
xsoar-authentication-proxy.paloaltonetworks.com
xsoar-contrib.pan.dev
Abstract
The required bandwidth and node synchronization for Cortex XSOAR On-prem to operate properly.
You need the following download bandwidth and node synchronization for Cortex XSOAR to operate properly.
Bandwidth requirement
The minimum required download bandwidth is 10Mbit/s for successful Cortex XSOAR upgrades and Marketplace operations.
NTP requirement
Ensure all nodes are synchronized with no NTP offset in order to prevent degraded storage performance.
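For example, you can verify node clock synchronization from a terminal. This is a minimal sketch assuming chrony is the NTP client on your
nodes (use the equivalent ntpq -p if your nodes run ntpd):

    chronyc tracking      # the System time line shows the current offset from NTP; it should be close to zero
    chronyc sources -v    # confirms a reachable time source is selected (marked with ^*)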
IMPORTANT:
The IPs of all VMs (nodes) in a cluster, as well as the virtual IP, must be on the same subnet; they currently cannot be split across subnets.
Task 1. Download the OVA image and license from Cortex Gateway
Abstract
Download an image from Cortex Gateway, deploy the image, and use the textual user interface to configure environment settings, and to install a
Cortex XSOAR tenant.
To install a Cortex XSOAR 8 tenant, you need to log into Cortex Gateway, which is a portal for downloading the relevant image file and license.
Downloading a file image from Cortex Gateway ensures you have the latest pre-configured software package for easy deployment and updates. If
you have multiple or development tenants, you must repeat these tasks for each tenant.
PREREQUISITE:
You need to set up your CSP account. For more information, see How to Create Your CSP User Account.
When you create a CSP account you can set up two-factor authentication (2FA) to log into the CSP, by using an Email, Okta Verify, or
Google Authenticator (non-FedRAMP accounts). For more information, see How to Enable a Third Party IdP.
CSP role: The Super User role is assigned to your CSP account. The user who creates the CSP account is granted the Super User role.
To download the Cortex XSOAR 8 images from Cortex Gateway, you need a license (or evaluation license via sales) assigned to your
CSP account.
For VMWare ESXi 6.5 and later, you need hardware version 13.
2. In the Available for Activation section, use the serial number to locate the tenant to download.
By default, the Production-Standalone license is selected. You can also select Dev.
Production and development are separate Kubernetes clusters with no dependency between them. For example, you can deploy a three-
node cluster for production and a standalone node for development, or you can support small-scale for development and large-scale for
production.
If you want to use a production and a development tenant with a private remote repository, select Dev. If you don't select it now, you can
install a development tenant later.
4. Click Next.
OVA is supported by AWS, Oracle Cloud Infrastructure (OCI), and VMWare (for example, VSphere).
TIP:
In Google Chrome, to download the image and license files together, you may need to set the browser Settings → Privacy and
security → Site settings → Additional permissions → Automatic downloads to the default behavior Sites can ask to automatically
download multiple files.
Two files download: A zipped license file containing one or more JSON license files with instructions, and a zipped image file of the type you
selected (.ova or .vhd).
Abstract
Download an image from Cortex Gateway, deploy a VM on AWS, and use the textual user interface to configure network, IP, and environment
settings, and to install a Cortex XSOAR tenant.
Currently, only AWS Commercial Cloud, also known as AWS Global, is supported (not GovCloud).
If you set your Cortex XSOAR environment as a standalone (single node), you cannot add nodes to it and switch to a cluster. If you deploy three
nodes, you can later add nodes and expand the cluster. For more information, see Manage nodes in a cluster.
IMPORTANT:
To implement built-in High Availability, deploy a cluster with three nodes (VMs), with each VM on a different hypervisor. This ensures that if
one hypervisor fails, the other VMs continue to operate.
Set the Cluster FQDN to the reverse proxy/ingress controller IP address (Task 6). The reverse proxy/ingress controller serves as a single
entry point to distribute traffic across the nodes in the cluster.
To use backup and restore functionality, deploy a second cluster in a secondary data center. For more information, see Set up backup and
restore in Cortex XSOAR.
2. Upload the OVA image file to a private and secure S3 bucket that you set up.
2. Under Amazon S3 → Buckets → <your bucket name> → Objects, select the folder to which you will upload the OVA image file.
3. In the Upload info page, drag and drop the OVA image file and select Upload.
Upload example
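If you prefer the command line, you can upload the image with the AWS CLI instead; a minimal sketch, where the bucket and file names are
placeholders:

    aws s3 cp cortex-xsoar.ova s3://your-private-bucket/images/cortex-xsoar.ova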
3. Create a workflow.
1. From the side menu, select Migration & Transfer → AWS Migration Hub.
2. From the Migration Hub side menu, select Orchestrate → Workflows → Create workflow.
3. Select Import new virtual machine images to AWS, then select Next.
Under Target environment configuration, confirm the Boot mode - optional is set to legacy-bios.
5. Select Next.
3. Select the relevant AMI template from the newly created workflow, then select Launch instance from AMI.
For example:
Network settings: Create a static IP. Conform to any security group requirements, including a subnet.
Key pair (login): Under Key pair name - required, leave the default value Proceed without a key pair.
5. Select Launch instance. Confirm selecting Proceed without key pair and select Launch instance again.
The block volume size depends on the scale you want to use. In this example, 1024 GB (1 TB) corresponds to the hardware requirements
for a small scale deployment with a 256 GB boot disk plus an additional separate 775 GB data disk.
IMPORTANT:
Every virtual machine is provided with a 256 GB hard disk to run the OS. However, you also need to add an extra hard disk for each
virtual machine instance you want to deploy to run the application.
All virtual machines in a cluster must have the same storage size.
To ensure successful deployment, make sure the hard disks meet performance requirements detailed in the System requirements.
4. Select the instance you want to pair it with and select Attach volume.
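The same volume creation and attachment can be sketched with the AWS CLI (the size matches the small-scale 775 GB data disk described
above; the availability zone, IDs, and device name are placeholders):

    aws ec2 create-volume --size 775 --volume-type gp3 --availability-zone us-east-1a
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf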
7. For first time login, open an external terminal and use the ssh admin@<server ip address> command to SSH log in. The default user
name and password are both admin.
IMPORTANT:
Save the SSH password securely. If you lose this password you cannot recover or change it, and to use SSH you will need to redeploy
the cluster.
The password must be at least eight characters long and contain at least:
If this is not a first time login, you can log in from the web console or from a terminal using the ssh admin@<server ip address>
command to SSH log in.
The textual UI menu opens with all the configuration and installation options.
TIP:
To navigate between the menu items, use the up and down arrow keys. To select a menu item, press the Enter key.
To navigate between fields within a menu item, use the Tab key. To save settings, tab to the Save button and press the Enter key.
To go back to the menu from a specific menu item field, press the esc key.
NOTE:
Since the Cloud platform handles network and IP settings, you can skip the Host Configuration menu in the textual UI.
Confirm the following network and IP settings are added to the rules of the security group or the firewall rules for each node in a cluster (for
standalone there is just a single node). If they are not added to the rules, the installation may fail.
Port configurations
Communication ports
A Kubernetes cluster consists of a control plane and one or more worker nodes. For Cortex XSOAR, in standalone (one VM), the VM acts as both
control plane and as a worker node. In multi-node clusters, the first three nodes act as both control plane and as worker nodes, and any additional
node added acts as a worker node.
Intra-node port
URLs
Download content packs and view the Marketplace (to view content pack images, the domain should also be reachable from the browser).
storage.googleapis.com: Download content packs and view the Marketplace. This domain stores content pack artifacts (to view content pack
images, the domain should also be reachable from the browser). It is possible to further limit the URL prefix to
https://storage.googleapis.com/marketplace-dist/
api.demisto.com: Download content packs and view the Marketplace (this file maps the Marketplace URL to the Cortex XSOAR version).
xsoar-authentication-proxy.paloaltonetworks.com
xsoar-contrib.pan.dev
NTP
Ensure all nodes are synchronized with no NTP offset in order to prevent degraded storage performance.
If you want to use a proxy, define the proxy address and port settings. The proxy can be set at any point, during Cortex XSOAR deployment or at a
later stage.
Proxy Address
NOTE:
You can either enter the address as IP:port without an http:// or https:// prefix, or enter the host name.
Proxy Port
3. Select Save.
For each VM (node) in a cluster, the nodes must have SSH connections between them, where all the nodes trust one another. To establish trusted
connections in a cluster, one node is designated as the signing server host, generating a token for secure communication and authentication.
Other nodes connect to the host using the token displayed on the host's screen.
The IPs of all VMs (nodes) in a cluster, as well as the virtual IP, must be on the same subnet; they currently cannot be split across subnets.
IMPORTANT:
To implement built-in High Availability, after establishing trust between all nodes in a cluster, in the cluster installation step (Task 6) you need to
set a single entry point to distribute traffic across the nodes in the cluster. Do this by setting the Cluster FQDN to either the virtual IP address or
to the reverse proxy/ingress controller IP address.
To use backup and restore functionality, deploy a second cluster in a secondary data center. For more information, see Set up backup and
restore in Cortex XSOAR.
1. In the textual UI menu for the VM you want to be the host, select Connect Nodes.
2. Select Host.
A message displays that this action cancels prior trust established with other nodes. Select Yes to continue.
This node becomes the host, and a token is generated on the screen. Copy the token, for example:
NOTE:
Keep this window open (do not select Stop) until trust is established between all nodes to enable the host to listen for the token from the
other nodes.
b. Select Join.
e. Select Submit.
A message displays that this action cancels prior trust established with other nodes. Select Yes to continue.
4. Select OK.
5. After trust is established between all the nodes in the cluster, go back to the host node and select Stop to close the listening window.
The virtual machine you use to run the installer will deploy Cortex XSOAR on all virtual machines in a cluster.
For a single virtual machine (standalone), configure the settings for a single node.
IMPORTANT:
The IPs of all VMs (nodes) in a cluster, as well as the virtual IP, must be on the same subnet; they currently cannot be split across subnets.
You can only change these field values in the textual UI menu before installing. To change these values after installing, you need to
redeploy your cluster and then reinstall. Contact support or engineering for assistance.
Field Description
Cluster Nodes A list of IPs of all virtual machines/nodes in the cluster, separated by a space. For example, 10.196.37.10
10.196.37.11 10.196.37.12
Copy the IP of each VM from the Private IPv4 address field in the AWS EC2 → Instances → Instance summary
page and paste it in this field, separated by a space.
Cluster FQDN The Cortex XSOAR environment DNS name. For example, <subdomain>.<domain name>.<top level
domain>
Copy the FQDN from the Public IPv4 DNS field in the AWS EC2 → Instances → Instance summary page and
paste it in this field.
For a single node: This field value must be registered in your DNS server so the FQDN will be resolved to the
IP of the node.
For a multi-node cluster: To implement built-in HA using a reverse proxy/ingress controller, you need to set this
field value to match the IP of the reverse proxy/ingress controller, and it must be registered in your DNS server
so the FQDN will be resolved to the IP of the reverse proxy/ingress controller.
The reverse proxy/ingress controller IP address serves as a single entry point for the entire Cortex XSOAR
cluster. The reverse proxy/ingress controller checks the health endpoint of the node for any issues. If the node
is healthy it can be used to process requests. To use a reverse proxy/ingress controller IP address:
Use HTTP 10254 with the path /healthz as the health endpoint.
NOTE:
Cortex XSOAR supports only static IP addresses for each virtual machine in the cluster, it does not support a
DHCP (dynamic IP) network interface.
Virtual IP (optional) (Hypervisor deployments only) The virtual IP serves as a single entry point to Cortex XSOAR for the entire
cluster. It is a floating IP, meaning it is dynamically assigned to one of the cluster nodes and moves to another
node if the active node goes offline. The system connects to the virtual IP instead of individual node IPs. The
system then randomly selects a node to handle the session and maintains that connection until the node shuts
down or fails, after which another node is selected.
IMPORTANT:
Do not fill in this field (Cortex XSOAR does not support virtual IPs in Cloud deployments).
Installation Mode The tenant installation type. Options to select from are:
Cluster Region The region the cluster is located in. For example, US.
Cortex XSOAR Admin Email, Password, and Confirm Password: Credentials for the first user to log in to Cortex XSOAR.
IMPORTANT:
These fields can only be changed before installation, so it is important to keep this information secure. To change values like username or
password after installation, you will need to redeploy your cluster and reinstall. Contact support or engineering for assistance.
For the Cortex XSOAR Admin Email, we recommend using a service account rather than a specific user email address since this cannot be
changed after installation.
NOTE:
The password must be at least eight characters long and contain at least:
Migration Mode Relevant for migration from Cortex XSOAR 6. If checked, the migration wizard starts in the Cortex XSOAR 8
tenant. This cannot be changed at a later stage.
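As an illustration of the reverse proxy/ingress controller described for the Cluster FQDN field, the following is a minimal HAProxy sketch, not an
officially supported configuration; the node IPs and names are placeholders. It distributes traffic across the nodes and routes only to nodes
whose health endpoint (HTTP port 10254, path /healthz) responds:

    frontend xsoar_front
        mode tcp
        bind *:443
        default_backend xsoar_nodes

    backend xsoar_nodes
        mode tcp
        option httpchk GET /healthz
        # health checks run against HTTP port 10254; traffic is forwarded to port 443
        server node1 10.196.37.10:443 check port 10254
        server node2 10.196.37.11:443 check port 10254
        server node3 10.196.37.12:443 check port 10254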
3. Select Install.
Verify all nodes meet the required hardware and network requirements, and select Install again.
The virtual machine you use to run the installer will deploy Cortex XSOAR on all virtual machines in a cluster.
After the installation tasks run, an Installation completed successfully message displays in the textual UI. However, you need to wait until the
installation process fully completes (approximately 30 minutes) and then check that you can log in to Cortex XSOAR. You then need to upload
your license to enable all Cortex XSOAR pages.
When you log in for the first time, use the Admin password and email you set during installation.
IMPORTANT:
The IPs of all VMs (nodes) in a cluster, as well as the virtual IP, must be on the same subnet; they currently cannot be split across subnets.
Task 1. Download the OVA image and license from Cortex Gateway
Abstract
Download an image from Cortex Gateway, deploy the image, and use the textual user interface to configure environment settings, and to install a
Cortex XSOAR tenant.
To install a Cortex XSOAR 8 tenant, you need to log into Cortex Gateway, which is a portal for downloading the relevant image file and license.
Downloading a file image from Cortex Gateway ensures you have the latest pre-configured software package for easy deployment and updates. If
you have multiple or development tenants, you must repeat these tasks for each tenant.
PREREQUISITE:
You need to set up your CSP account. For more information, see How to Create Your CSP User Account.
When you create a CSP account you can set up two-factor authentication (2FA) to log into the CSP, by using an Email, Okta Verify, or
Google Authenticator (non-FedRAMP accounts). For more information, see How to Enable a Third Party IdP.
CSP role: The Super User role is assigned to your CSP account. The user who creates the CSP account is granted the Super User role.
To download the Cortex XSOAR 8 images from Cortex Gateway, you need a license (or evaluation license via sales) assigned to your
CSP account.
For VMWare ESXi 6.5 and later, you need hardware version 13.
2. In the Available for Activation section, use the serial number to locate the tenant to download.
By default, the Production-Standalone license is selected. You can also select Dev.
Production and development are separate Kubernetes clusters with no dependency between them. For example, you can deploy a three-
node cluster for production and a standalone node for development, or you can support small-scale for development and large-scale for
production.
If you want to use a production and a development tenant with a private remote repository, select Dev. If you don't select it now, you can
install a development tenant later.
OVA is supported by AWS, Oracle Cloud Infrastructure (OCI), and VMWare (for example, VSphere).
6. Select the checkbox to agree to the terms and conditions of the license and click Download.
TIP:
In Google Chrome, to download the image and license files together, you may need to set the browser Settings → Privacy and
security → Site settings → Additional permissions → Automatic downloads to the default behavior Sites can ask to automatically
download multiple files.
Two files download: A zipped license file containing one or more JSON license files with instructions, and a zipped image file of the type you
selected (.ova or .vhd).
Abstract
Download an image from Cortex Gateway, deploy a VM on OCI, and use the textual user interface to configure network, IP, and environment
settings, and to install a Cortex XSOAR tenant.
If you set your Cortex XSOAR environment as a standalone (single node), you cannot add nodes to it and switch to a cluster. If you deploy three
nodes, you can later add nodes and expand the cluster. For more information, see Manage nodes in a cluster.
IMPORTANT:
To implement built-in High Availability, deploy a cluster with three nodes (VMs), with each VM on a different hypervisor. This ensures that if
one hypervisor fails, the other VMs continue to operate.
Set the Cluster FQDN to the reverse proxy/ingress controller IP address (Task 6). The reverse proxy/ingress controller serves as a single
entry point to distribute traffic across the nodes in the cluster.
To use backup and restore functionality, deploy a second cluster in a secondary data center. For more information, see Set up backup and
restore in Cortex XSOAR.
2. Import the image from the bucket into the OCI environment.
3. Disable CPU logging and performance (DRS). For more information, see Oracle Define or Edit Server Pool Policies.
The block volume size depends on the scale you want to use. For example, 1024 GB (1 TB) corresponds to the hardware requirements for a
small scale deployment with a 256 GB boot disk plus an additional separate 775 GB data disk.
IMPORTANT:
Every virtual machine is provided with a 256 GB hard disk to run the OS. However, you also need to add an extra hard disk for each
virtual machine instance you want to deploy to run the application.
All virtual machines in a cluster must have the same storage size.
To ensure successful deployment, make sure the hard disks meet performance requirements detailed in the System requirements.
8. For first time login, open an external terminal and use the ssh admin@<server ip address> command to SSH log in. The default user
name and password are both admin.
IMPORTANT:
Save the SSH password securely. If you lose this password you cannot recover or change it, and to use SSH you will need to redeploy
the cluster.
The password must be at least eight characters long and contain at least:
If this is not a first time login, you can log in from the web console or from a terminal using the ssh admin@<server ip address>
command to SSH log in.
The textual UI menu opens with all the configuration and installation options.
TIP:
To navigate between the menu items, use the up and down arrow keys. To select a menu item, press the Enter key.
To navigate between fields within a menu item, use the Tab key. To save settings, tab to the Save button and press the Enter key.
To go back to the menu from a specific menu item field, press the esc key.
NOTE:
Since the Cloud platform handles network and IP settings, you can skip the Host Configuration menu in the textual UI.
Confirm the following network and IP settings are added to the rules of the security group or the firewall rules for each node in a cluster (for
standalone there is just a single node). If they are not added to the rules, the installation may fail.
Port configurations
Communication ports
A Kubernetes cluster consists of a control plane and one or more worker nodes. For Cortex XSOAR, in standalone (one VM), the VM acts as both
control plane and as a worker node. In multi-node clusters, the first three nodes act as both control plane and as worker nodes, and any additional
node added acts as a worker node.
Intra-node port
URLs
Download content packs and view the Marketplace (to view content pack images, the domain should also be reachable from the browser).
storage.googleapis.com: Download content packs and view the Marketplace. This domain stores content pack artifacts (to view content pack
images, the domain should also be reachable from the browser). It is possible to further limit the URL prefix to
https://storage.googleapis.com/marketplace-dist/
api.demisto.com: Download content packs and view the Marketplace (this file maps the Marketplace URL to the Cortex XSOAR version).
xsoar-authentication-proxy.paloaltonetworks.com
xsoar-contrib.pan.dev
NTP
Ensure all nodes are synchronized with no NTP offset in order to prevent degraded storage performance.
If you want to use a proxy, define the proxy address and port settings. The proxy can be set at any point, during Cortex XSOAR deployment or at a
later stage.
Proxy Address
NOTE:
You can either enter the address as IP:port without an http:// or https:// prefix, or enter the host name.
Proxy Port
3. Select Save.
For each VM (node) in a cluster, the nodes must have SSH connections between them, where all the nodes trust one another. To establish trusted
connections in a cluster, one node is designated as the signing server host, generating a token for secure communication and authentication.
Other nodes connect to the host using the token displayed on the host's screen.
The IPs of all VMs (nodes) in a cluster, as well as the virtual IP, must be on the same subnet; they currently cannot be split across subnets.
IMPORTANT:
To implement built-in High Availability, after establishing trust between all nodes in a cluster, in the cluster installation step (Task 6) you need to
set a single entry point to distribute traffic across the nodes in the cluster. Do this by setting the Cluster FQDN to either the virtual IP address or
to the reverse proxy/ingress controller IP address.
To use backup and restore functionality, deploy a second cluster in a secondary data center. For more information, see Set up backup and
restore in Cortex XSOAR.
1. In the textual UI menu for the VM you want to be the host, select Connect Nodes.
2. Select Host.
A message displays that this action cancels prior trust established with other nodes. Select Yes to continue.
This node becomes the host, and a token is generated on the screen. Copy the token, for example:
NOTE:
Keep this window open (do not select Stop) until trust is established between all nodes to enable the host to listen for the token from the
other nodes.
b. Select Join.
e. Select Submit.
A message displays that this action cancels prior trust established with other nodes. Select Yes to continue.
4. Select OK.
5. After trust is established between all the nodes in the cluster, go back to the host node and select Stop to close the listening window.
The virtual machine you use to run the installer will deploy Cortex XSOAR on all virtual machines in a cluster.
For a single virtual machine (standalone), configure the settings for a single node.
IMPORTANT:
The IPs of all VMs (nodes) in a cluster, as well as the virtual IP, must be on the same subnet; they currently cannot be split across subnets.
You can only change these field values in the textual UI menu before installing. To change these values after installing, you need to
redeploy your cluster and then reinstall. Contact support or engineering for assistance.
Field Description
Cluster Nodes A list of IPs of all virtual machines/nodes in the cluster, separated by a space. For example,
10.196.37.10 10.196.37.11 10.196.37.12
Copy the IP of each VM from the Private IPv4 address in the OCI Instance information tab and paste
it in this field, separated by a space.
Cluster FQDN The Cortex XSOAR environment DNS name. For example, <subdomain>.<domain name>.<top
level domain>
Copy the FQDN from the Internal FQDN field in the OCI Instance information tab and paste it in this
field.
NOTE:
For a single node: This field value must be registered in your DNS server so the FQDN will be
resolved to the IP of the node.
Cortex XSOAR supports only static IP addresses for each virtual machine in the cluster, it does not
support a DHCP (dynamic IP) network interface.
Virtual IP (optional) The Cortex XSOAR environment virtual IP for the multi-node cluster. It must be an available IP
address that is not used for anything else. It is a virtual interface assigned to one of the nodes to
provide a single access point to the cluster.
IMPORTANT:
Do not fill in this field (Cortex XSOAR does not support virtual IPs in Cloud deployments).
Installation Mode The tenant installation type. Options to select from are:
Cluster Region The region the cluster is located in. For example, US.
Cortex XSOAR Admin Email, Password, and Confirm Password: Credentials for the first user to log in to Cortex XSOAR.
NOTE:
The password must be at least eight characters long and contain at least:
Migration Mode Relevant for migration from Cortex XSOAR 6. If checked, the migration wizard starts in the Cortex
XSOAR 8 tenant. This cannot be changed at a later stage.
3. Select Install.
Verify all nodes meet the required hardware and network requirements, and select Install again.
The virtual machine you use to run the installer will deploy Cortex XSOAR on all virtual machines in a cluster.
After the installation tasks run, an Installation completed successfully message displays in the textual UI. However, you need to wait until the
installation process fully completes (approximately 30 minutes) and then check that you can log in to Cortex XSOAR. You then need to upload
your license to enable all Cortex XSOAR pages.
When you log in for the first time, use the Admin password and email you set during installation.
IMPORTANT:
The IPs of all VMs (nodes) in a cluster, as well as the virtual IP, must be on the same subnet; they currently cannot be split across subnets.
Task 1. Download the VHD image and license from Cortex Gateway
Abstract
Download an image from Cortex Gateway, deploy the image, and use the textual user interface to configure environment settings, and to install a
Cortex XSOAR tenant.
To install a Cortex XSOAR 8 tenant, you need to log into Cortex Gateway, which is a portal for downloading the relevant image file and license.
Downloading a file image from Cortex Gateway ensures you have the latest pre-configured software package for easy deployment and updates. If
you have multiple or development tenants, you must repeat these tasks for each tenant.
PREREQUISITE:
You need to set up your CSP account. For more information, see How to Create Your CSP User Account.
When you create a CSP account you can set up two-factor authentication (2FA) to log into the CSP, by using an Email, Okta Verify, or
Google Authenticator (non-FedRAMP accounts). For more information, see How to Enable a Third Party IdP.
CSP role: The Super User role is assigned to your CSP account. The user who creates the CSP account is granted the Super User role.
To download the Cortex XSOAR 8 images from Cortex Gateway, you need a license (or evaluation license via sales) assigned to your
CSP account.
For VMWare ESXi 6.5 and later, you need hardware version 13.
How to download the image and license
2. In the Available for Activation section, use the serial number to locate the tenant to download.
By default, the Production-Standalone license is selected. You can also select Dev.
Production and development are separate Kubernetes clusters with no dependency between them. For example, you can deploy a three-
node cluster for production and a standalone node for development. Or you can support small scale for development and large scale for
production.
If you want to use a production and a development tenant with a private remote repository, select Dev. If you don't select it now, you can
install a development tenant later.
4. Click Next.
6. Select the checkbox to agree to the terms and conditions of the license and click Download.
TIP:
In Google Chrome, to download the image and license files together, you may need to set the browser Settings → Privacy and
security → Site settings → Additional permissions → Automatic downloads to the default behavior Sites can ask to automatically
download multiple files.
Two files download: A zipped license file containing one or more JSON license files with instructions, and a zipped image file of the type you
selected (.ova or .vhd).
Abstract
Download a VHD image from Cortex Gateway, deploy the image on Hyper-V, and use the textual user interface to configure network, IP, and
environment settings, and to install a Cortex XSOAR tenant.
IMPORTANT:
To implement built-in High Availability, deploy a cluster with three nodes (VMs), with each VM on a different hypervisor. This ensures that if
one hypervisor fails, the other VMs continue to operate.
Set the Cluster FQDN to either the virtual IP address or the reverse proxy/ingress controller IP address (Task 6). The virtual IP or
the reverse proxy/ingress controller serves as a single entry point to distribute traffic across the nodes in the cluster.
To use backup and restore functionality, deploy a second cluster in a secondary data center. For more information, see Set up backup and
restore in Cortex XSOAR.
2. Create a new hard disk. This additional hard drive will contain the application data.
a. In the Hyper-V manager menu select Action → New → Hard disk → Next.
c. Name the new drive and set its location to the dedicated hard disk you prepared to contain the application data.
d. Select Create a new blank virtual hard disk and set its size. For more details, see the System requirements.
a. In the Hyper-V manager menu select Action → New → Virtual Machine and follow the instructions.
c. Choose Generation 1.
e. Set the memory size. For more details, see the System requirements.
IMPORTANT:
Every virtual machine is provided with a 256 GB hard disk to run the OS. However, you also need to add an extra hard disk for each
virtual machine instance you want to deploy to run the application.
All virtual machines in a cluster must have the same storage size.
To ensure successful deployment, make sure the hard disks meet performance requirements detailed in the System requirements.
g. Choose Use an existing virtual hard disk and browse to the location of the VHD image.
h. Click Finish.
b. Under Processor, set the number of processors. For more details, see the System requirements.
c. Under IDE controller 0 → Hard drive, click Add → Virtual hard disk. Choose the hard disk created in Step 2.
6. Repeat this procedure from Step 2 for each additional virtual machine in the cluster.
7. For first time login, the default user name and password are both admin.
If you log in from an external terminal, use the ssh admin@<server ip address> command to SSH log in.
IMPORTANT:
Save the SSH password securely. If you lose this password you cannot recover or change it, and to use SSH you will need to redeploy
the cluster.
The password must be at least eight characters long and contain at least:
If this is not a first time login, you can log in from the web console or from a terminal using the ssh admin@<server ip address>
command to SSH log in.
The textual UI menu opens with all the configuration and installation options.
TIP:
To navigate between the menu items, use the up and down arrow keys. To select a menu item, press the Enter key.
To navigate between fields within a menu item, use the Tab key. To save settings, tab to the Save button and press the Enter key.
To go back to the menu from a specific menu item field, press the esc key.
You need to configure network and IP settings in each node in a cluster. For standalone, there is just a single node.
2. Configure the following network and IP settings for each node/virtual machine.
NOTE:
When choosing the network settings, either use private IPs or a public IP covered by an access policy defined in a security group.
IP Address: IP address for this node. After deployment, this field will not be editable. For example, 10.196.37.10
Default Gateway: IP address of the default gateway for this interface. For example, 10.196.37.1
DNS Server 2 (optional): IP address of a secondary DNS server. For example, 10.196.4.11
NTP Servers: The IP address of the NTP server that the node will be synced with. By default, the nodes get an out-of-the-box NTP server;
you can override the value.
3. Select Save.
If you want to use a proxy, define the proxy address and port settings. The proxy can be set at any point, either during Cortex XSOAR deployment or at a later stage.
Proxy Address
NOTE:
You can either enter the address as IP:port without an http:// or https:// prefix, or enter the host name.
Proxy Port
3. Select Save.
All VMs (nodes) in a cluster must have SSH connections between them, where all the nodes trust one another. To establish trusted
connections in a cluster, one node is designated as the signing server host, generating a token for secure communication and authentication.
Other nodes connect to the host using the token displayed on the host's screen.
The IPs of all VMs (nodes) in a cluster as well as the virtual IP must be on the same subnet; they currently cannot be split across subnets.
IMPORTANT:
To implement built-in High Availability, after establishing trust between all nodes in a cluster, in the cluster installation step (Task 6) you need to
set a single entry point to distribute traffic across the nodes in the cluster. Do this by setting the Cluster FQDN to either the virtual IP address or
to the reverse proxy/ingress controller IP address.
To use backup and restore functionality, deploy a second cluster in a secondary data center. For more information, see Set up backup and
restore in Cortex XSOAR.
2. Select Host.
A message displays that this action cancels prior trust established with other nodes. Select Yes to continue.
This node becomes the host, and a token is generated on the screen. Copy the token.
NOTE:
Keep this window open (do not select Stop) until trust is established between all nodes to enable the host to listen for the token from the
other nodes.
b. Select Join.
e. Select Submit.
A message displays that this action cancels prior trust established with other nodes. Select Yes to continue.
4. Select OK.
5. After trust is established between all the nodes in the cluster, go back to the host node and select Stop to close the listening window.
PREREQUISITE:
Expose the following DNS records for the same cluster IP address.
Cluster FQDN - The Cortex XSOAR DNS name for accessing the UI. For example, xsoar.mycompany.com.
api-FQDN - The Cortex XSOAR DNS name that is mapped for API access. For example, api-xsoar.mycompany.com. This should be a
CNAME entry pointing to the same cluster IP address.
ext-FQDN - The Cortex XSOAR DNS name that is mapped to access long running integrations. For example, ext-
xsoar.mycompany.com. This should be a CNAME entry pointing to the same cluster IP address.
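For illustration, the corresponding DNS zone entries might look like the following sketch (the A record IP is a placeholder; use your cluster virtual IP or entry point):
xsoar.mycompany.com.      IN A      10.196.37.100
api-xsoar.mycompany.com.  IN CNAME  xsoar.mycompany.com.
ext-xsoar.mycompany.com.  IN CNAME  xsoar.mycompany.com.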
The virtual machine you use to run the installer will deploy Cortex XSOAR on all virtual machines in a cluster.
For a single virtual machine (standalone), configure the settings for a single node.
IMPORTANT:
The IPs of all VMs (nodes) in a cluster as well as the virtual IP must be on the same subnet; they currently cannot be split across subnets.
You can only change these field values in the textual UI menu before installing. To change these values after installing, you need to
redeploy your cluster and then reinstall. Contact support or engineering for assistance.
Field Description
Cluster Nodes A list of IPs of all virtual machines/nodes in the cluster, separated by a space. For example,
10.196.37.10 10.196.37.11 10.196.37.12
Cluster FQDN The Cortex XSOAR environment DNS name. For example, <subdomain>.<domain name>.<top
level domain>
NOTE:
For a single node: This field value must be registered in your DNS server so the FQDN will be
resolved to the IP of the node.
Cortex XSOAR supports only static IP addresses for each virtual machine in the cluster; it does not support a DHCP (dynamic IP) network interface.
Virtual IP (optional) The Cortex XSOAR environment virtual IP for the multi-node cluster. It must be an available IP
address that is not used for anything else. It is a virtual interface assigned to one of the nodes to
provide a single access point to the cluster.
Installation Mode The tenant installation type. Options to select from are:
Cluster Region The region the cluster is located in. For example, US.
Cortex XSOAR Admin Email, Password, and Confirm Password Credentials for the first user to log in to Cortex XSOAR.
NOTE:
The password must be at least eight characters long and contain at least:
Migration Mode Relevant for migration from Cortex XSOAR 6. If checked, the migration wizard starts in the Cortex
XSOAR 8 tenant. This cannot be changed at a later stage.
3. Select Install.
Verify all nodes meet the required hardware and network requirements, and select Install again.
The virtual machine you use to run the installer will deploy Cortex XSOAR on all virtual machines in a cluster.
After the installation tasks run, an Installation completed successfully message displays in the textual UI. However, you need to wait until the
installation process fully completes (approximately 30 minutes) and then check that you can log in to Cortex XSOAR. You then need to upload
your license to enable all Cortex XSOAR pages.
When you log in for the first time, use the Admin password and email you set during installation.
IMPORTANT:
The IPs of all VMs (nodes) in a cluster as well as the virtual IP must be on the same subnet; they currently cannot be split across subnets.
Task 1. Download the OVA image and license from Cortex Gateway
Abstract
Download an image from Cortex Gateway, deploy the image, and use the textual user interface to configure environment settings, and to install a
Cortex XSOAR tenant.
To install a Cortex XSOAR 8 tenant, you need to log in to Cortex Gateway, which is a portal for downloading the relevant image file and license.
Downloading an image file from Cortex Gateway ensures you have the latest pre-configured software package for easy deployment and updates. If you have multiple or development tenants, you must repeat these tasks for each tenant.
PREREQUISITE:
You need to set up your CSP account. For more information, see How to Create Your CSP User Account.
When you create a CSP account, you can set up two-factor authentication (2FA) to log in to the CSP by using email, Okta Verify, or
Google Authenticator (non-FedRAMP accounts). For more information, see How to Enable a Third Party IdP.
Role Details
CSP role The Super User role is assigned to your CSP account. The user who creates the CSP account is granted the Super User role.
To download the Cortex XSOAR 8 images from Cortex Gateway, you need a license (or evaluation license via sales) assigned to your
CSP account.
For VMWare ESXi 6.5 and later, you need hardware version 13.
2. In the Available for Activation section, use the serial number to locate the tenant to download.
Production and development are separate Kubernetes clusters with no dependency between them. For example, you can deploy a three-
node cluster for production and a standalone node for development, or you can support small-scale for development and large-scale for
production.
If you want to use a production and a development tenant with a private remote repository, select Dev. If you don't select it now, you can
install a development tenant later.
4. Click Next.
OVA is supported by AWS, Oracle Cloud Infrastructure (OCI), and VMWare (for example, VSphere).
6. Select the checkbox to agree to the terms and conditions of the license and click Download.
TIP:
In Google Chrome, to download the image and license files together, you may need to set the browser Settings → Privacy and
security → Site settings → Additional permissions → Automatic downloads to the default behavior Sites can ask to automatically
download multiple files.
Two files are downloaded: a zipped license file containing one or more JSON license files with instructions, and a zipped image file of the type you selected (.ova or .vhd).
Abstract
Download an OVA image from Cortex Gateway, deploy the image, and use the textual user interface to configure network, IP, and environment
settings, and to install a Cortex XSOAR tenant.
The following is an example of deploying your VM on VSphere from an OVA image. For more details, see Deploying OVF Templates.
If you set up your Cortex XSOAR environment as a standalone (single node), you cannot later add nodes or switch to a cluster. If you deploy three
nodes, you can later add nodes and expand the cluster. For more information, see Manage nodes in a cluster.
IMPORTANT:
To implement built-in High Availability, deploy a cluster with three nodes (VMs), with each VM on a different hypervisor. This ensures that if
one hypervisor fails, the other VMs continue to operate.
Set the Cluster FQDN to either the virtual IP address or to the reverse proxy/ingress controller IP address (Task 6). The virtual IP or
the reverse proxy/ingress controller serves as a single entry point to distribute traffic across the nodes in the cluster.
To use backup and restore functionality, deploy a second cluster in a secondary data center. For more information, see Set up backup and
restore in Cortex XSOAR.
2. Wherever the templates are located, right-click one of the templates and choose to deploy a new virtual machine from the template.
NOTE:
Although you can create a virtual machine directly from the OVA image file, deploying from a template enables creating multiple
configured virtual machines from one downloaded OVA instead of downloading the same OVA for each virtual machine, which can be time-consuming.
1. Select a location for the virtual machine and the destination compute resource.
2. Set the storage for the virtual machine configuration and disk files.
3. Select Customize this virtual machine's hardware and Power on virtual machine after creation from the clone options and go to the
Customize hardware step.
IMPORTANT:
Every virtual machine is provided with a 256 GB hard disk to run the OS. However, you also need to add an extra hard disk for each
virtual machine instance you want to deploy to run the application.
All virtual machines in a cluster must have the same storage size.
To ensure successful deployment, make sure the hard disks meet performance requirements detailed in the System requirements.
1. Set the CPU and memory according to your preferred scale size. For example, for small scale, set CPU to 16 CPU cores and
memory to 64 GB.
3. Set the disk space for the extra hard disk according to your preferred scale size. For example, for small scale, set it to 775 GB.
4. Click FINISH.
5. Go to the folder the virtual machine was deployed to and select the virtual machine name you defined.
4. Repeat from Step 2 for each additional virtual machine in the cluster.
5. For first-time login, the default user name and password are admin.
If you log in from the web console, you are prompted for your username and password.
If you log in from an external terminal, use the ssh admin@<server ip address> command to SSH log in.
IMPORTANT:
The password must be at least eight characters long and contain at least:
If this is not a first-time login, you can log in from the web console or from a terminal using the ssh admin@<server ip address>
command to SSH log in.
The textual UI menu opens with all the configuration and installation options.
TIP:
To navigate between the menu items, use the up and down arrow keys. To select a menu item, press the Enter key.
To navigate between fields within a menu item, use the Tab key. To save settings, tab to the Save button and press the Enter key.
To go back to the menu from a specific menu item field, press the Esc key.
You need to configure network and IP settings in each node in a cluster. For standalone, there is just a single node.
2. Configure the following network and IP settings for each node/virtual machine.
NOTE:
When choosing the network settings, either use private IPs or a public IP covered by an access policy defined in a security group.
Network Interface: A list of available interfaces on the node that the textual UI runs on. For example, ens160
IP Address: IP address for this node. After deployment, this field will not be editable. For example, 10.196.37.10
Default Gateway: IP address of the default gateway for this interface. For example, 10.196.37.1
DNS Server 2 (optional): IP address of a secondary DNS server. For example, 10.196.4.11
NTP Servers: The IP address of the NTP server that the node syncs with. By default, the nodes get an out-of-the-box NTP server; you can override this value.
3. Select Save.
If you want to use a proxy, define the proxy address and port settings. The proxy can be set at any point, either during Cortex XSOAR deployment or at a later stage.
Proxy Address
NOTE:
You can either enter the address as IP:port without an http:// or https:// prefix, or enter the host name.
Proxy Port
3. Select Save.
All VMs (nodes) in a cluster must have SSH connections between them, where all the nodes trust one another. To establish trusted
connections in a cluster, one node is designated as the signing server host, generating a token for secure communication and authentication.
Other nodes connect to the host using the token displayed on the host's screen.
The IPs of all VMs (nodes) in a cluster as well as the virtual IP must be on the same subnet; they currently cannot be split across subnets.
IMPORTANT:
To implement built-in High Availability, after establishing trust between all nodes in a cluster, in the cluster installation step (Task 6) you need to
set a single entry point to distribute traffic across the nodes in the cluster. Do this by setting the Cluster FQDN to either the virtual IP address or
to the reverse proxy/ingress controller IP address.
To use backup and restore functionality, deploy a second cluster in a secondary data center. For more information, see Set up backup and
restore in Cortex XSOAR.
1. In the textual UI menu for the VM you want to be the host, select Connect Nodes.
2. Select Host.
A message displays that this action cancels prior trust established with other nodes. Select Yes to continue.
This node becomes the host, and a token is generated on the screen. Copy the token.
NOTE:
Keep this window open (do not select Stop) until trust is established between all nodes to enable the host to listen for the token from the other nodes.
b. Select Join.
e. Select Submit.
A message displays that this action cancels prior trust established with other nodes. Select Yes to continue.
4. Select OK.
5. After trust is established between all the nodes in the cluster, go back to the host node and select Stop to close the listening window.
PREREQUISITE:
Expose the following DNS records for the same cluster IP address.
Cluster FQDN - The Cortex XSOAR DNS name for accessing the UI. For example, xsoar.mycompany.com.
api-FQDN - The Cortex XSOAR DNS name that is mapped for API access. For example, api-xsoar.mycompany.com. This should be a
CNAME entry pointing to the same cluster IP address.
ext-FQDN - The Cortex XSOAR DNS name that is mapped to access long running integrations. For example, ext-
xsoar.mycompany.com. This should be a CNAME entry pointing to the same cluster IP address.
The virtual machine you use to run the installer will deploy Cortex XSOAR on all virtual machines in a cluster.
For a single virtual machine (standalone), configure the settings for a single node.
IMPORTANT:
The IPs of all VMs (nodes) in a cluster as well as the virtual IP must be on the same subnet; they currently cannot be split across subnets.
You can only change these field values in the textual UI menu before installing. To change these values after installing, you need to
redeploy your cluster and then reinstall. Contact support or engineering for assistance.
Field Description
Cluster Nodes A list of IPs of all virtual machines/nodes in the cluster, separated by a space. For example,
10.196.37.10 10.196.37.11 10.196.37.12
Cluster FQDN The Cortex XSOAR environment DNS name. For example, <subdomain>.<domain name>.<top
level domain>
NOTE:
For a single node: This field value must be registered in your DNS server so the FQDN will be
resolved to the IP of the node.
Cortex XSOAR supports only static IP addresses for each virtual machine in the cluster; it does not support a DHCP (dynamic IP) network interface.
Virtual IP (optional) The Cortex XSOAR environment virtual IP for the multi-node cluster. It must be an available IP
address that is not used for anything else. It is a virtual interface assigned to one of the nodes to
provide a single access point to the cluster.
Installation Mode The tenant installation type. Options to select from are:
Cluster Region The region the cluster is located in. For example, US.
Cortex XSOAR Admin Email, Password, and Confirm Password Credentials for the first user to log in to Cortex XSOAR.
NOTE:
The password must be at least eight characters long and contain at least:
Migration Mode Relevant for migration from Cortex XSOAR 6. If checked, the migration wizard starts in the Cortex
XSOAR 8 tenant. This cannot be changed at a later stage.
3. Select Install.
Verify all nodes meet the required hardware and network requirements, and select Install again.
The virtual machine you use to run the installer will deploy Cortex XSOAR on all virtual machines in a cluster.
After the installation tasks run, an Installation completed successfully message displays in the textual UI. However, you need to wait until the
installation process fully completes (approximately 30 minutes) and then check that you can log in to Cortex XSOAR. You then need to upload
your license to enable all Cortex XSOAR pages.
When you log in for the first time, use the Admin password and email you set during installation.
3.8 | Post-installation
Abstract
After installation, add your license to Cortex XSOAR, set up a signed certificate, and perform optional post-installation maintenance activities from
the VM textual UI menu.
Abstract
Download the Cortex XSOAR license from Cortex Gateway. The license determines which components users can use and how many users can
access the tenant.
Cortex XSOAR requires a yearly license per user. Multi-year licenses are available.
After purchasing a license, the activation card for the license is visible in Cortex Gateway. When you install Cortex XSOAR from Cortex Gateway, download both the image file and the license. After installing Cortex XSOAR, you must upload the license to Cortex XSOAR. Until you upload a
valid license, you will be unable to use Cortex XSOAR.
In the License page (Settings & Info → Cortex XSOAR License) you can see the following:
Expiration date
1. Locate the license file you downloaded from the Cortex Gateway.
If you selected Dev/Prod, you should have two license files, one for each environment. Each should be uploaded separately to the
corresponding environment.
2. In the Upload License section, either drag and drop your license file or browse your files to select the license file. The license file is
in JSON format.
NOTE:
If you upload a new license while your current license is still valid, the new license will override your existing license for the same product. Other
products' licenses will not be affected.
Abstract
Use HTTPS with a signed certificate in Cortex XSOAR. Concatenate the certificate chain.
By default, the tenant uses a self-signed certificate for a secure HTTP connection. TLS versions 1.2 and 1.3 are supported.
We recommend using a self-signed certificate only for development environments. Follow these steps to create a self-signed certificate.
openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -out example.crt -keyout example.key
NOTE:
While the example is generic, you might need to create your certificates and keys with different parameters according to your
internal company policies or compliance with regulations.
If you prefer to create a key without a passphrase, add the -nodes flag.
Flag Description
-newkey rsa:4096 Generates a 4096-bit RSA new private key. The default RSA key is 2048 bits.
-days 3650 The number of days for which to certify the certificate. 3650 is ten years. You can use any
positive integer.
-out example.crt Specifies the file name for the newly created certificate. You can specify any file name.
-keyout example.key Specifies the file name for the newly created private key. You can specify any file name.
2. Apply the key and certificate files that should be used as the HTTPS certificate for the tenant.
If you want to use your own certificate (X.509 certificates), you can install or renew a custom certificate. For security reasons, the default certificate
for a production environment must be replaced with your private key and a certificate from a Certificate Authority (CA). For development
environments, you can use either a self-signed certificate or a certificate from a CA.
The following example is one way to create a private key and a CSR on a Linux-based system.
NOTE:
While this example is generic, you might need to create your certificates and keys with different parameters according to your internal company
policies or compliance with regulations.
2. Generate the certificate signing request and the private key. The certificate signing request is for the URL that will be publicly available and also includes all public-facing aliases.
NOTE:
The FQDN must be provided as the Common Name (CN) when generating the CSR and private key.
Flag Description
-newkey rsa:4096 Creates a new certificate request and a 4096 bit RSA key. The default RSA key is 2048
bits.
-out example.csr Specifies the file name for the newly created certificate signing request. You can specify
any file name.
-keyout example.key Specifies the file name for the newly created private key. You can specify any file name.
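Putting the flags together, a representative CSR command might look like this (the file names and CN value are placeholders; adjust them to your environment and internal policies):
openssl req -newkey rsa:4096 -out example.csr -keyout example.key -subj "/CN=xsoar.mycompany.com"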
4. Send the CSR to the Certificate Authority (CA). The CA should send the certificate by email in multiple formats. For example, example.crt.
CAUTION:
The Cortex XSOAR tenant does not support PKCS#8 encrypted PEM files. To validate that the file is in a supported format, view the encrypted .key file (using vi, less, or cat) and check that the DEK-Info header exists.
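For reference, a traditionally encrypted PEM key begins with headers similar to the following (the cipher and the hexadecimal IV vary):
-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-256-CBC,4A1B9C0D2E3F...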
If the DEK-Info header is not similar to the example above, the file is likely in the wrong format (PKCS#8).
You can convert the .key file to the proper format by running the following command:
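A conversion along these lines re-encrypts the key in the traditional PEM format (file names are placeholders; you are prompted for the passphrase):
openssl rsa -aes256 -in example.key -out example-converted.key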
You don't have to use aes256; you can use des3 or whichever encryption method you prefer.
After you run this command, view the .key file and verify that the DEK-Info header is similar to the example above. This should allow the
.key file to be read.
5. For the certificate PEM file, you must concatenate the certificate chain one after the other in the file.
NOTE:
If there is an intermediate certificate, concatenate the certificates in the following order:
1. SSL certificate
2. Intermediate certificate
3. CA certificate
If there is no intermediate certificate, concatenate the certificates in the following order:
1. SSL certificate
2. CA certificate
Only the certificate itself is needed, for example the text between and including "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----".
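On a Linux-based system, one way to concatenate the chain is with cat (file names are placeholders):
cat example.crt intermediate.crt ca.crt > example-chained.crt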
Task 2. Apply the certificate to Cortex XSOAR
Replace the default internal certificate with a private key and a certificate from a CA.
IMPORTANT:
If the custom certificate has a password, it will cause an error. To resolve this, remove the password from the key file using the following
command:
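The standard openssl form for this is as follows (file names are placeholders; you are prompted for the current passphrase):
openssl rsa -in example.key -out example-nopass.key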
2. Apply the key and certificate files that should be used as the HTTPS certificate for the server.
Check whether the FQDN of the Cortex XSOAR tenant is the same as the CN field of the certificate, or any of the DNS fields in the certificate Subject Alternative Name (SAN).
On your browser on which you are trying to load Cortex XSOAR, clear cookies and other data. For example, in Chrome, go to Settings →
Advanced → Clear Browsing data → Clear data.
If the Cortex XSOAR tenant is behind a load balancer, reupload the certificate on the load balancer. For example, if the Cortex XSOAR
tenant is behind ELB (Elastic Load Balancing), re-import the certificate on ELB on the Amazon Certificate Manager AWS console.
An EDL is a text file that you or another source hosts on an external web server so that a firewall can import objects (IP addresses, URLs, and
domains) to enforce policy on the entries in the list. As the list is updated, the firewall dynamically imports the list at a configured interval and
enforces policy without making a configuration change or a commit on the firewall.
To export a secure EDL to your firewall, you need to replace the out-of-the-box certificate and set up the certificate for the firewall to be able to
access the EDL. For more information on setting up a PAN-OS firewall, see Configure the Firewall to Access an External Dynamic List. For more
information on importing a certificate to a PAN-OS firewall, see Import a Certificate and Private Key.
Abstract
Self-signed certificates are customizable and cost-effective for internal systems or development environments where external trust is not required,
offering independence from third-party Certificate Authorities.
In a multi-tenant setup, if a child tenant has a self-signed certificate, you can disable SSL verification for requests between the parent and child
tenant.
3. Click Apply.
Abstract
Configure system performance optimization from the textual UI menu by launching the web console from your VM or by SSH login from an
external terminal.
Once Cortex XSOAR is installed, you can perform system performance optimization tasks from the textual UI.
To access the textual UI menu, log in from the VM web console or from an external terminal using the ssh admin@<server ip address>
command to SSH log in.
IMPORTANT:
If you lose the SSH password, you cannot recover or change it. To use SSH you will need to redeploy the VM.
The textual UI menu opens with all the configuration and maintenance options.
TIP:
To navigate between the menu items, use the up and down arrow keys. To select a menu item, press the Enter key.
To navigate between fields within a menu item, use the Tab key. To save settings, tab to the Save button and press the Enter key.
To go back to the menu from a specific menu item field, press the Esc key.
Abstract
Add, drain, remove, taint, or uncordon a node in a cluster under the Cluster Administration textual UI menu item.
If you deployed your Cortex XSOAR environment starting with three nodes, using the textual UI menu in your VM you can add a node, taint a
node, remove a node, drain a node, and uncordon a node.
IMPORTANT:
If you deployed your Cortex XSOAR environment as a standalone (single node), you cannot add nodes to it or switch to a cluster.
A Kubernetes cluster consists of a control plane and one or more worker nodes. For Cortex XSOAR, in standalone (one VM), the VM acts
as both control plane and as a worker node. In multi-node clusters, the first three nodes act as both control plane and as worker nodes,
and any additional node added acts as a worker node.
If you remove one of the original three nodes in the cluster (one of the control planes), you cannot perform actions such as upgrading or scaling up. When you add a new node, Cortex XSOAR automatically assigns the new node as a control plane with the same IP address
as the node that was removed.
You need to set the host again and reestablish trust between all the nodes if you want to add more nodes to the cluster after completing
installation.
Add a node
Add a node to a cluster to increase its capacity, improve performance, or enhance redundancy for better load distribution.
2. Set the host again and reestablish trust between all nodes in the cluster, including the new node (see Task 5. Establish trust between all
nodes in a cluster).
Taint a node
Tainting a node marks the node as out of service for internal K8s functions. Taint a node to stop applications from running on it.
3. Select Taint.
In the list of nodes in the Cluster Administration menu, the node IP will display as Ready,SchedulingDisabled.
Remove a node
Remove a node from a cluster to reduce resources, perform maintenance, or decommission the node, ensuring the cluster operates efficiently
without unnecessary or malfunctioning components.
b. Select Drain.
In the list of nodes in the Cluster Administration menu, the node IP will display as Ready,SchedulingDisabled.
b. Select Remove.
In the list of nodes in the Cluster Administration menu, the node IP will display as Ready.
Drain a node
Draining a node pauses the node activity in the cluster and marks it as unschedulable. Draining a node safely removes workloads from it, ensuring
that running applications are gracefully terminated or moved to other nodes without disrupting service availability before you perform maintenance
on the node.
3. Select Drain.
In the list of nodes in the Cluster Administration menu, the node IP will display as Ready,SchedulingDisabled.
Uncordon a node
Uncordon a node in a cluster to make it available again for scheduling new workloads, for example after maintenance or troubleshooting is
complete.
3. Select Uncordon.
In the list of nodes in the Cluster Administration menu, the node IP will display as Ready.
Abstract
The Scale Settings textual UI menu item enables scaling up resources for CPU, memory, and disk size.
Using the textual UI menu in your VM, you can easily scale up your Cortex XSOAR hardware environment based on your organizational and
usage growth.
1. Choose the scale you want and make sure your hardware resources meet the system requirements. For more information, see System
requirements.
If you are working with more than one node, all the nodes in the cluster must meet the same hardware requirements.
3. Select Scan scale options to run a scan to evaluate the cluster's recommended scale.
Based on the results, the system indicates the current scale size and gives you the option to increase the scale size.
NOTE:
The recommended scale is determined by the node with the least hardware resources.
Troubleshoot the installation from the textual UI menu by launching the web console from your VM or by SSH login from an external terminal.
Once Cortex XSOAR is installed, if you have issues with the installation you can perform troubleshooting tasks from the textual UI.
To access the textual UI menu, log in from the VM web console or from an external terminal using the ssh admin@<server ip address>
command to SSH log in.
IMPORTANT:
If you lose the SSH password, you cannot recover or change it. To use SSH you will need to redeploy the VM.
For example:
ssh admin@10.196.37.10
The textual UI menu opens with all the configuration and maintenance options.
TIP:
To navigate between the menu items, use the up and down arrow keys. To select a menu item, press the Enter key.
To navigate between fields within a menu item, use the Tab key. To save settings, tab to the Save button and press the Enter key.
To go back to the menu from a specific menu item field, press the Esc key.
Abstract
The following provides guidance on avoiding or resolving common issues encountered at various installation stages of Cortex XSOAR On-prem to ensure your system is ready for operation.
Cluster Installation fields in the textual UI menu cannot be changed after installation
After the installation completes, you cannot change any field values. Any changes need to be made before installing Cortex XSOAR.
To change an installation field value after installing, you must redeploy the cluster and reinstall Cortex XSOAR. For more information, see Task 6.
Install Cortex XSOAR on your VM under Cortex XSOAR installation. Contact engineering or support for assistance.
Unable to access the Cortex XSOAR web page immediately after installation
When you install or upgrade in the textual UI, after all the tasks run a successful installation or upgrade message displays. However, the system
may not yet have fully completed the installation process.
Wait until the installation process fully completes (approximately 30 minutes) and then check that you can log in to Cortex XSOAR. For more
information, see the Task 7. Verify you can log in to Cortex XSOAR under Cortex XSOAR installation.
When you set the SSH password after deploying your cluster, you need to save it securely. If you lose this password you cannot recover or
change it.
If you lose the SSH password, you must redeploy the cluster and reinstall Cortex XSOAR. For more information, see Cortex XSOAR installation.
Contact engineering or support for assistance.
For Cortex XSOAR to successfully communicate with integrations and services and for High Availability to work, the IPs of all VMs (nodes) in a
cluster as well as the virtual IP must be on the same subnet; they currently cannot be split across subnets.
To move the IPs in your cluster to the same subnet, you must redeploy the cluster and reinstall Cortex XSOAR. For more information, see Cortex
XSOAR installation. Contact engineering or support for assistance.
After a reboot, hard shutdown, or hypervisor snapshot, Cortex XSOAR is not running properly
A reboot, a hard shutdown, or taking a snapshot in your hypervisor (which performs a hard shutdown) can cause issues in Cortex XSOAR, including:
Service failures: Core services or integrations may fail to start due to corrupted files or improper shutdown sequences.
Database errors: Incident data, playbooks, or audit logs may become inaccessible due to database corruption, causing errors when loading
or querying data.
Delayed or failed login: Users may experience delays or failures when trying to log in because authentication or session services were not
properly restored.
Broken playbooks and scripts: Active or scheduled playbooks and scripts may fail to execute, resulting in incomplete or disrupted workflows.
If you experience issues, download a log bundle from the textual UI menu. Contact support or engineering for assistance. Do not reboot or perform
a hard shutdown of Cortex XSOAR. For more information, see Shut down Cortex XSOAR.
For a hypervisor snapshot, either perform a graceful shutdown for the VM and then take the snapshot, or instead of taking a hypervisor snapshot
use the backup and restore feature. For more information, see Backup and restore Cortex XSOAR.
Abstract
Logs provide information about events that occur in the system. They are a valuable tool in troubleshooting issues that might arise in your Cortex
XSOAR environment. If you need additional help to find the source of an issue, you can download the log bundle to send to support or engineering
or to attach to a support ticket to facilitate the troubleshooting process.
NOTE:
You need viewer SSH user permissions to view and download logs.
Once Cortex XSOAR is installed and running, you can view system status and download log bundles from the Cortex XSOAR UI. If you encounter
issues during installation or if Cortex XSOAR is not running, you can access logs and log bundles from the textual UI menu.
1. If the textual UI is not already open, either launch the web console from your VM or SSH log in from an external terminal. For more
information, see Troubleshoot your installation.
These logs are not related to any user session in Cortex XSOAR.
The viewer user can use scp/sftp to download the log bundle to their home directory.
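For example, from an external machine, a download along these lines would work (the node IP and bundle file name are placeholders; use the actual file name shown in the textual UI):
scp viewer@10.196.37.10:/home/viewer/log-bundle.tar.gz .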
View system status and download log bundles from Cortex XSOAR
The System Diagnostics page provides system status data over time in the form of graph and table widgets.
The graphs and tables in the System Diagnostics page show the following data. This information can help troubleshoot system performance
issues. If you need additional help to find the source of the issue, you can download the log bundle to send to support or engineering or to
attach to a support ticket.
Widget Description
Nodes - CPU A trend graph showing CPU consumption. The graph shows an increase as system usage increases. Temporary
peaks may indicate system delays or slowness. We recommend increasing CPU resources when you reach
system limits.
Active Nodes Snapshot A list of all active nodes and their statuses. Possible values are Connected or Disconnected.
Storage Groups A trend graph showing storage group utilization. The graph shows an increase as storage usage increases. A
rapid surge in storage utilization may indicate a change in system usage. We recommend either increasing
storage capacity or performing a data cleanup when utilization reaches 80%.
Nodes Memory A trend graph showing memory consumption. The graph shows an increase as memory usage increases.
Temporary peaks may indicate system delays or slowness. We recommend increasing memory resources when
you reach system limits.
XSOAR Components Snapshot A table showing the status of Cortex XSOAR components. Possible values are Healthy, Warning, or Error.
If you see a warning or error for a Cortex XSOAR component, we recommend you:
Check cluster health graphs for temporary peaks or high resource utilization.
If you have recently made changes to your system, verify if these changes have impacted system components.
Open a support case if you cannot find the source of the issue.
Playbooks in Queue A graph that includes both manually triggered and automatically triggered playbooks and displays how many playbooks were waiting in the queue over the displayed time period.
Playbook queues manage playbook executions efficiently and prevent system overload. A rapid surge in the graph values may indicate a temporary peak of triggered playbooks that can cause playbooks to take longer to execute and/or slow UI performance.
If the queue count is consistently higher than 0, we recommend contacting customer support to discuss scaling
options.
Nodes - Storage A trend graph showing storage usage. The graph shows an increase as storage usage increases. Temporary
peaks may indicate system delays or slowness. We recommend increasing storage resources when you reach
system limits.
Cortex Connectivity Snapshot A table showing the status of the connection between your Cortex XSOAR local tenant and the external gateway.
If the status is Disconnected, you cannot upgrade Cortex XSOAR, access Marketplace, or update Docker images.
You can choose the following time frames to display the system status data:
Last hour
Last 6 hours
Last 12 hours
Last 3 days
Last 7 days
Abstract
There may be issues related to your Cortex XSOAR deployment that cannot be resolved by the information provided by the log bundles. You can
establish a support session with support or engineering to troubleshoot and resolve these issues. Opening a support session temporarily allows
the support/engineering person access to your system as a user with elevated permissions in order to troubleshoot or debug issues.
3. If the Tenant ID field is not populated, upload the tenant license to Cortex XSOAR and then select Refresh. For more information, see Add
the Cortex XSOAR license.
If your Cortex XSOAR UI is not available, upload your license to the textual UI.
The support/engineering person now has access to your system in a secure support shell as a user with elevated permissions.
When support/engineering finishes their troubleshooting or debugging activities, they log out of the system. The shell closes and you return to the
textual UI menu.
Abstract
There may be issues related to your Cortex XSOAR deployment in which your tenant is not available, and you need to provide your license for a
support session with support or engineering to troubleshoot and resolve the issues. You may also need your license to perform an upgrade. You
can copy your license details into the textual UI for support or engineering to access.
2. Open the license JSON file you downloaded from Cortex Gateway and copy the content into the License JSON field in the textual UI.
3. Click Apply.
The license details are now available for support or engineering to access.
Abstract
You may need to shut down Cortex XSOAR in order to perform maintenance or troubleshooting activities.
You can gracefully shut down or reboot a cluster by selecting the Shutdown/Reboot menu item from the textual UI.
IMPORTANT:
Do not reboot Cortex XSOAR or do a hard shutdown if it is not working properly. Instead, download a log bundle and perform a graceful
shutdown from the textual UI menu. Contact support or engineering for assistance.
Cortex XSOAR checks several times a day to determine whether there is a new version available and downloads an update file if it is available. If
the update file of the new version of the tenant was downloaded successfully to the tenant, an update notification appears in the left menu pane for
all admin users. Only administrators receive the update notifications and can perform an update.
The notification indicates the new version number that is available and contains a link that opens the About page.
You can hide the notification in the main pane for your user so that the next time you log in to the tenant, the notification in the menu pane will not
appear until a different new version is available. If a different user logs in to the tenant, the notification will appear. However, the update icon will
still appear next to the user's About option providing you with the ability to update the version.
NOTE:
When upgrading Cortex XSOAR, you must update to each version consecutively. For example, if you have installed version 8.5, you update to 8.6 and then 8.7. After you update to version 8.6, you will receive a notification for the next version.
1. In the Cortex XSOAR left menu pane, click Details in the update notification, or go to <Your Username> → About.
The About page appears with a new banner that indicates that an update is available with the new version number.
2. Click Continue.
3. If you are not going to update now, you can click the Hide the notification from the menu for this release to hide the update notification for
this version. The next time you log in to the tenant, the notification will not appear in the main menu of the UI, but an icon will appear in the
About option.
A message appears indicating that the system will be down while the data and settings are updated.
5. Click Update Now. When the update is done, if this is a major release, a new version message appears.
1. If the textual UI is not already open, either launch the web console from your VM or SSH log in from an external terminal.
IMPORTANT:
Ensure the version syntax and numbering are correct. For example, master-8.7.0-8.7.0.62-dadf65c3.
4. Select Upgrade.
When you select Upgrade, after the upgrade tasks run, an Upgrade completed successfully message displays. However, you need to wait until the upgrade process fully completes (approximately 30 minutes) and then check that you can open the Cortex XSOAR UI.
Schedule recurring backups of the Cortex XSOAR cluster and then restore the cluster from a specific backup.
Periodically backing up your environment enables you to back up and recover data such as incidents, playbooks, integrations, users,
configurations, and settings. This is an efficient way to recover in case of a failure or data corruption or if you need to roll back to a previous minor
version.
NOTE:
Set up Cortex XSOAR backup and run the command to connect to the server.
Backup saves the state of the cluster with all of its data at the time the backup is taken. To save storage space, backups are saved with de-
duplication and incremental backup methods. Currently, Cortex XSOAR supports backup to a Network File System (NFS) server.
NOTE:
Backups can be restored only if the current version of Cortex XSOAR is the same major version as the backup. For example, if the current
version is 8.8.2, you can restore to a backup of 8.8.1.
A second cluster with the same configurations and hardware specifications as the original cluster. For more information, see Restore
backups between clusters.
A dedicated disk on the NFS server with the minimum disk space required to store backups. For more information, see Hardware
requirements.
Access to the NFS server from the Cortex XSOAR cluster via ports 2049 and 111.
When an NFS server is installed with a newly formatted disk dedicated for backup and restore, run the backup-cli install command to connect
to the server.
NOTE:
For multi-tenant/MSSP, you need to configure the backup feature for each tenant, regardless of whether the tenant is a parent or a child tenant.
The path of the directory to store backups on the NFS server (ending with a forward slash)
2. Log in using SSH to the Cortex XSOAR cluster with the viewer user.
3. Run the backup-cli install command to connect the NFS server to the cluster.
NOTE:
For High Availability, you can log in as a viewer to any node in the cluster and run the backup commands from that node.
sudo /home/viewer/sbin/backup-cli install [nfs-server-ip] [nfs-path] [size-limit (GB)]
# Example:
sudo /home/viewer/sbin/backup-cli install 1.1.1.1 /some/path/to/nfs/ 1024
The CLI shows a list of tasks performed to connect the NFS server to the cluster. This can take a few minutes.
Once the connection is established, perform backup and restore actions, including:
See details of existing backup schedules with the schedule show command.
See a list of backups and their statuses with the backup list command.
For more information about the backup and restore options, see Run backup and restore operations from the CLI.
From the CLI, schedule recurring backups of the Cortex XSOAR cluster and subsequently restore the cluster from a specific backup.
Run the following commands from your CLI to perform backup and restore operations. Backup can take several minutes.
Command Description
help Shows a help menu on all available subcommands. You can also apply the --help flag on the subcommands to get more
detailed specific information.
Output:
Usage:
cortex-cli backup-restore [command]
Available Commands:
backup Backup operations
install Install backup components
nfs-check Run NFS connectivity test
restore Restore operations
schedule Schedule operations
Flags:
-h, --help help for backup-restore
Global Flags:
--config string Path to configuration file
--verbose Display verbose output
nfs-check Checks if an NFS share can be mounted and verifies storage requirements for backup.
schedule create Configures an automated backup schedule for the creation and removal of old backups. Only one (or zero) backup schedule can run at any given time.
The schedule syntax is in crontab notation (https://crontab.guru/), which must be in quotes (see the example). Backup retention time is provided in units of days. For example, 5 for keeping each backup for five days before it is automatically deleted.
sudo /home/viewer/sbin/backup-cli schedule create [schedule cron expression] [ttl in day units]
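For instance, the following would create a daily backup at 02:00 that is kept for five days before automatic deletion (the cron expression is illustrative):
sudo /home/viewer/sbin/backup-cli schedule create "0 2 * * *" 5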
backup list Prints a list of all backup attempts and their statuses.
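For example:
sudo /home/viewer/sbin/backup-cli backup list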
restore Restores the cluster from a specific backup. Backups can be restored only if the current version of Cortex XSOAR is the same version as the backup. You can copy the backup name from the backup list.
CAUTION:
Rolling back to an existing restore point will shut down Cortex XSOAR for several minutes.
sudo /home/viewer/sbin/backup-cli restore [backup-name]
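For instance (the backup name is a placeholder; copy the actual name from the backup list output):
sudo /home/viewer/sbin/backup-cli restore backup-2025-01-15-0200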
uninstall Removes all backup components and disconnects from the configured NFS server.
WARNING:
To reconfigure NFS access details on the Cortex XSOAR cluster side, run the uninstall command and then run the
install command.
Restoring a backup from one cluster to another enables setting a clean or standby Cortex XSOAR environment to the same state as the original
backed-up Cortex XSOAR environment.
PREREQUISITE:
Verify you have an existing backup. View the available list of backups by running the sudo /home/viewer/sbin/backup-cli backup
list command in your CLI.
Ensure the new cluster has the following configurations and hardware specifications.
The FQDN for the new cluster must be the same as the previous cluster.
The new cluster must have the same number of nodes as the previous cluster.
The new cluster must have the same scale size (CPU, memory, and disk space) as the previous cluster.
The UI Account Admin user for the new cluster must have the same password as the UI Account Admin user in the previous cluster.
NOTE:
The node IP addresses in the new cluster can differ from the previous one.
Ensure the NFS server is accessible from both deployments. The subnet should be the same, or another entry should exist for the new
cluster but with the same path on the NFS server.
Verify the new cluster was installed successfully and the UI is accessible.
1. From the new cluster CLI, run the backup-cli install command to connect the NFS server to the new cluster.
NOTE:
Use the same NFS server IP, path, and size limit from the previous deployment.
sudo /home/viewer/sbin/backup-cli install [nfs-server-ip] [nfs-path] [size-limit (GB)]
Example:
sudo /home/viewer/sbin/backup-cli install 1.1.1.1 /some/path/to/nfs/ 1024
The CLI shows a list of tasks performed to connect the NFS server to the cluster. This can take a few minutes.
2. Run the backup list command to verify you can see the backups from the previous cluster.
Example:
sudo /home/viewer/sbin/backup-cli backup list
For more information on the backup and restore commands, see Run backup and restore operations from the CLI.
5 | Engines
Abstract
Install an engine in your remote network, enabling effortless communication with Cortex XSOAR. Easily configure and manage the engine to fit
your specific needs, and explore how to leverage it for seamless integrations.
While the Cortex XSOAR tenant includes a user interface that allows security analysts to create and manage playbooks, investigate incidents, and
perform other tasks, the engine operates behind the scenes to execute these playbooks and automate security actions. The separation between
the user interface and the engine allows for the scalable and efficient execution of security automation and orchestration.
You can install multiple engines on the same machine (Shell installation only), which is useful in a dev-prod environment where you do not want to maintain numerous engine machines across different environments.
NOTE:
Engine architecture
Engine proxy
Cortex XSOAR engines enable you to access internal or external services that are otherwise blocked by a firewall or a proxy. For example, if
a firewall blocks external communication and you want to run the Rasterize integration, you need to install an engine to access the Internet.
Engine load-balancing
Engines can be part of a load-balancing group, which enables the distribution of the command execution load. The load-balancing group
uses an algorithm to efficiently share the workload for integrations that the group is assigned to, thereby speeding up execution time. In
general, heavy workloads are caused by playbooks that run a high number of commands.
NOTE:
When you add an engine to a load-balancing group, you cannot use that engine separately. The engine does not appear in the engines
menu when configuring an integration instance but you can choose the load-balancing group.
You can install engines on all Linux machines. Docker/Podman needs to be installed before installing an engine. If you are using the shell installer
for an engine, Docker/Podman is installed automatically.
NOTE:
If your hard drive is partitioned, we recommend a minimum of 50 GB for the /var partition.
NOTE:
If using Podman, we recommend reserving 150 GB for container storage, either in the /home partition or a different storage directory that you
have set using the rootless_storage_path key. For more information, see Change container storage directory.
You can deploy a Cortex XSOAR engine on the following operating systems:
RHEL 8.0, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 8.9, 8.10, 9.0, 9.1, 9.2, 9.3, 9.4
Amazon Linux 2
NOTE:
CentOS 8.x reached End of Life (EOL) on December 31, 2021, and is no longer a supported operating system.
CentOS 7.x reached End of Life (EOL) on June 30, 2024, and is no longer a supported operating system.
You need to allow the following URLs for Cortex XSOAR engines to operate properly. The URLs are needed to pull container images from public
Docker registries.
https://registry.fedoraproject.org
https://registry.access.redhat.com
https://docker.io
https://registry.docker.io
https://docker-images-prod.6aa30f8b08e16409b46e0173d6de2f56.r2.cloudflarestorage.com
https://auth.docker.io
https://production.cloudflare.docker.com
NOTE:
When you install the engine, the d1.conf file is installed on the engine machine; it contains engine properties such as proxy, log level, and log files. If Docker/Podman is already installed, the python.engine.docker and powershell.engine.docker keys are set to true. If Docker or Podman is not available when the engine is installed, the keys are set to false; in that case, you need to set the keys to true after installing Docker or Podman. Verify that the python.engine.docker and powershell.engine.docker configuration keys are present in the d1.conf file.
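As a rough sketch, assuming the JSON layout of d1.conf and omitting all other properties, the relevant keys would appear as follows:
{
  "python.engine.docker": true,
  "powershell.engine.docker": true
}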
NOTE:
If you are using DEB, RPM, or Zip installation, install Docker or Podman.
Installation types
Cortex XSOAR supports the following file types for installation on the engine machine:
The installation file is selected for you. Shell installation supports the purge flag, which by default is false. To uninstall an engine, run the
installer with the purge flag enabled.
NOTE:
When upgrading an engine that was installed using the Shell installation, you can use the Upgrade Engine feature in the Engines page.
For Amazon Linux 2 engines, you need to upgrade using a zip-type engine rather than the Upgrade Engine feature.
If you use the shell installer, Docker/Podman is automatically installed. We recommend using Linux rather than Windows so that you can use the shell installer, which installs all dependencies.
NOTE:
Use DEB and RPM installation when shell installation is not available. You need to manually install Docker or Podman and any
dependencies.
When you install an engine, the configuration file d1.conf is installed on the engine machine.
IMPORTANT:
For DEB/RPM engines, Python (including 3.x) and the containerization platform (Docker/Podman) must be installed and configured. For Docker
or Podman to work correctly on an engine, IPv4 forwarding must be enabled.
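IPv4 forwarding can be enabled with standard sysctl commands, for example:
# Enable immediately (does not persist across reboots)
sudo sysctl -w net.ipv4.ip_forward=1
# Persist the setting across reboots
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ipforward.conf
sudo sysctl --system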
1. Create an engine.
a. Select Settings & Info → Settings → Integrations → Engines → Create New Engine.
b. In the Engine Name field, add a meaningful name for the engine.
d. (Optional) (Shell only) Select the checkbox to enable multiple engines to run on the same machine.
If you have an existing engine for which you did not select the checkbox, and you now want to install another engine on the same machine, you need to delete the existing engine.
TIP:
For Amazon Linux 2, use the zip installer (see step 4). For other Linux systems, we recommend using the shell installer.
a. Move the .sh file to the engine machine using a tool such as SSH or PuTTY.
b. On the engine machine, grant execution permission by running the following command:
chmod +x /<engine-file-path>
If you receive a permissions denied error, it is likely that you do not have permission to access the /tmp directory.
a. Move the file to the required machine using a tool such as SSH or PuTTY.
mkdir /usr/local/demisto
b. Unzip the engine files to the folder created in the previous step.
e. In /etc/systemd/system, edit the d1.service file as follows (adjust the directory and the name of the binaries file if needed).
[Unit]
Description=Demisto Engine Service
After=network.target
[Service]
Type=simple
User=demisto
WorkingDirectory=/usr/local/demisto
ExecStart=/usr/local/demisto/d1_linux_amd64
EnvironmentFile=/etc/environment
Restart=always
[Install]
WantedBy=multi-user.target
f. Give the service execution permissions and change the owner to demisto.
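The exact commands depend on your layout; with the directory and binary name used above, a sketch would be (assuming a dedicated demisto user):
sudo chown -R demisto:demisto /usr/local/demisto
sudo chmod +x /usr/local/demisto/d1_linux_amd64
sudo systemctl daemon-reload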
systemctl start d1
systemctl status d1
6. When the engine is connected, you can add the engine to a load-balancing group by clicking Load-Balancing Group on the Engines page.
If you want to add the engine to a new group, click Add to new group from the list.
When the engine is in the load-balancing group, it cannot be used as an individual engine and does not appear in the engines list when configuring an integration instance.
7. (Optional) After installing the engine, you may want to set up a proxy, set up Docker hardening, configure the number of workers for the
engine, or perform other related engine configurations. For more information, see the Configure Engines section. You can also configure an
integration instance to run on the engine you created.
NOTE:
If the installer fails to start due to a permissions issue, even if running as root, add one of the following two arguments when running the
installer:
--target <path> - Extracts the installer files into the specified custom path.
--keep - Extracts the installer files into the current working directory (without cleaning at the end).
If using installer options such as -- -tools=false, the option should come after the --target or --keep arguments. For example:
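(A sketch; the installer file name d1.sh is an assumption.)
sudo ./d1.sh --target /opt/d1-installer -- -tools=false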
Abstract
Install a Cortex XSOAR engine offline when you don’t have access to the Internet (tested on RHEL v8).
An air gap is a security measure that involves isolating a computer or network and preventing it from establishing an external connection. An air-
gapped computer is physically segregated and incapable of connecting wirelessly or physically with other computers or network devices.
On a machine that has internet access, you need to download dependencies, Docker images, and from the Cortex XSOAR tenant, the engine
installation files. You then need to transfer and install the files to the machine without internet access.
Install the following top-level dependencies according to your operating system. These dependencies may depend on other OS libraries. Example install commands are sketched after the dependency lists below.
NOTE:
Always verify that your dependencies are updated and take into account that they might change across releases.
RPM dependencies
xmlsec1
xmlsec1-openssl
rpm-build
libcap
dnf-utils
file
fontconfig
expat
libpng
freetype
git
makeself
The following dependencies are required for Debian and Ubuntu deployments:
systemd
xmlsec1
rpm
libcap2-bin
file
libfontconfig1
libfreetype6
git
makeself
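For example (a sketch; the package manager invocations assume standard dnf and apt-get usage, and some packages may come from additional repositories):
# RPM-based (RHEL/Amazon Linux):
sudo dnf install -y xmlsec1 xmlsec1-openssl rpm-build libcap dnf-utils file fontconfig expat libpng freetype git makeself
# Debian/Ubuntu:
sudo apt-get install -y systemd xmlsec1 rpm libcap2-bin file libfontconfig1 libfreetype6 git makeself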
To download Docker images, use the download_packs_and_docker_images script to download the Docker images that match the
content pack integrations you want to use, such as AWS-ILM, Cybereason, and EWS.
The download_packs_and_docker_images script enables you to download the latest content packs Docker images in a zip folder to your
machine. The script is located in the Utils folder in the GIT Content repository. If you do not have access to the GIT Content repository, you can
download the script from here. For detailed information and how to download the Docker images, see download packs offline.
a. Select Settings & Info → Settings → Integrations → Engines → Create New Engine.
b. In the Engine Name field, add a meaningful name for the engine.
d. (Optional) If you want to add the engine to a load balancing group, from the list, select the group.
The list only appears after you have created and connected an engine and created a load balancing group. To add the engine to a
new group, select Add new group from the list.
The engine cannot be used as an individual engine and does not appear when configuring an engine from the list.
e. (Optional) (Shell only) Select the checkbox to enable multiple engines to run on the same machine.
If you have an existing engine that was installed without the checkbox selected, and you want to install another engine on the same
machine, you must first delete the existing engine.
b. Verify that the required dependencies from step 1a are installed successfully by running one of the following commands.
chmod +x /<engine-file-path>
If you receive a permissions denied error, it is likely that you do not have permission to access the /tmp directory.
d. (Red Hat v8 & above) If you have not already done so, install and configure Podman, by following the steps in Migrate From Docker to
Podman (from step 2 onwards).
e. Load the Docker images that you downloaded in step 1b, by doing one of the following:
4. (Optional) To verify that the images were loaded, use the podman images command. You can also run the podman images
-q "demisto/python:1.3-alpine" command to validate that a specific image is present and identify any issues.
NOTE:
a. Confirm that the engine status is active, by running the systemctl status d1 command.
b. Validate that the engine is connected and running by going to Settings & Info → Settings → Integrations → Engines.
c. Run the engine on a sample integration. For example, go to Settings & Info → Settings → Integrations → Instances and search for the
Hello World (Community Contribution) integration. Add or edit the instance and in the Run on field, select the engine.
d. Run a simple command to test that the engine is working properly using the integration.
5.3.2 | Docker
Abstract
NOTE:
Cortex XSOAR maintains a repository of Docker images, available in the Docker hub under the Cortex organization.
Each Python/PowerShell script or integration has a specific Docker image listed in the YAML file. When the script or integration runs, if the
specified Docker image is not available locally, it is downloaded from the Docker hub or the Cortex Container Registry. The script or integration
then runs inside the Docker container. For more information on Docker, see the Docker documentation and Using Docker.
NOTE:
Docker images can be downloaded together with their relevant content packs, for offline installation.
Abstract
Docker is required for engines to run Python/Powershell scripts and integrations in a controlled environment.
If you use the Shell installer to install an engine, Docker is automatically installed. If using DEB and RPM installations, you need to install Docker
or Podman before installing an engine. The engine uses Docker to run Python scripts, PowerShell scripts, and integrations in a controlled
environment.
Cortex XSOAR supports the latest Docker Engine release from Docker and the following corresponding supported Linux distributions:
These Linux distributions include their own Docker Engine package. In addition, older versions of Docker Engine released within the last 12
months are supported unless there is a known compatibility issue with a specific Docker Engine version. In case of a compatibility issue, Cortex
XSOAR will publish an advisory notifying customers to upgrade their Docker Engine version.
You can use a version that is not supported. However, when encountering an issue that requires Customer Support involvement, you may be
asked to upgrade to a supported version before assistance can be provided.
If you need to install Docker before installing an engine, use the following procedures:
Red Hat
Ubuntu
Amazon Linux
Oracle Linux
NOTE:
For Red Hat's Docker distribution, you need Mirantis Container Runtime (formerly Docker Engine - Enterprise) to run specific Docker-dependent
integrations and scripts. For more information, see Install Docker distribution for Red Hat on an engine server.
To use the Mirantis Container Runtime (formerly Docker Engine - Enterprise) follow the deployment guide for your operating system distribution.
If you installed an engine before installing Docker, verify the demisto operating system user is part of the docker operating system group.
id demisto
uid=997(demisto) gid=997(demisto) groups=997(demisto),998(docker)
python.executable
python.executable.no.docker
To verify that the operating system user (demisto) has the necessary permissions and can run Docker containers, run the following command from the
OS command line.
If everything is configured properly, you will receive output such as: Python 2.7.14.
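A minimal sketch of such a verification command, assuming the demisto/python:1.3-alpine image referenced elsewhere in this guide:
sudo -u demisto docker run --rm demisto/python:1.3-alpine python --version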
Abstract
Red Hat maintains its own package of Docker, which is the version used in OpenShift Container Platform environments, and is available in the
RHEL Extras repository.
If running RHEL v8 or higher, the engine installs Podman packages and configures the operating system to enable Podman in rootless mode.
For more information about the different packages available to install on Red Hat, see the Red Hat Knowledge Base Article (requires a Red Hat
subscription to access).
3. Change ownership of the Docker daemon socket so members of the dockerroot user group have access.
b. Enable OS group dockerroot access to Docker by adding the "group": "dockerroot" entry to the /etc/docker/daemon.json file. For
example:
{ "group": "dockerroot" }
d. Install an engine.
e. After the engine is installed, run the following command to add the demisto os user to the dockerroot os group (Red Hat uses
dockerroot group instead of docker).
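A sketch of such a command, assuming the standard usermod utility:
sudo usermod -aG dockerroot demisto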
The Cortex XSOAR engine uses the /var/lib/demisto/temp directory (with subdirectories) to copy files to and receive files from running Docker
containers. By default, when SELinux is in enforcing mode, directories under /var/lib/ cannot be accessed by Docker containers.
a. To allow containers access to the /var/lib/demisto/temp directory, you need to set the correct SELinux policy type, by typing the
following command.
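A sketch, assuming chcon is used to set the container_file_t type verified in the next step:
sudo chcon -R -t container_file_t /var/lib/demisto/temp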
b. ( Optional) Verify that the directory has the container_file_t SELinux type attached by running the following command.
ls -d -Z /var/lib/demisto/temp
c. Configure label confinement to allow Python and PowerShell containers to access other script folders.
d. Open any incident and in the incident War Room CLI, run the /reset_containers command.
Abstract
The project that contains the source Dockerfiles used to build the images and the accompanying files is fully open source and available for review.
Cortex XSOAR uses the secure Docker Hub registry for its Docker images. However, in an Engine environment, you can also use the PANW
registry. You can view the Docker trust information for each image at the image info branch.
We automatically update our open source Docker images and their accompanying dependencies (OS and Python). Examples of automatic
updates can be viewed on GitHub.
We maintain Docker image information, which includes information on Python packages, OS packages, and image metadata for all our Docker
images. The Docker image information is updated nightly.
All of our images are continuously scanned using Prisma Cloud for known and newly published vulnerabilities, in two scenarios:
Every new image, and every new version of an image, are scanned before publishing to our public registries, as part of our CI/CD process.
All existing images are continuously scanned to check whether new vulnerabilities were published and now exist in those images.
We evaluate all critical/high findings and actively work to prevent and mitigate security vulnerabilities.
Cortex XSOAR ensures container images are fully patched and do not contain unnecessary packages. Patches and dependencies are applied
automatically via our open source docker files build project.
Response Prioritization
We remediate any critical and high level vulnerabilities, irrespective of who found them. Issues may be discovered by external researchers, found
during internal testing, encountered by customers or reported by other organizations and vendors.
We respond with utmost urgency to any vulnerability that could be exploited against our images. If we conclude that there is a risk to
our customers, we issue an advisory with recommended actions and mitigations. Advisories are published at:
https://security.paloaltonetworks.com/.
In each version release (every 3 months), we publish a new version of our content that uses the latest, secure versions of our images.
Troubleshooting
If you scan the Docker images locally and find critical CVEs, make sure you are using the latest version of the pack, as it should
include the latest version of the image. In addition, purge the old, unused images that contain the vulnerabilities.
Abstract
By default, Cortex XSOAR uses Docker Hub's public container registry. As an alternative to using Docker Hub, you can use the Cortex XSOAR
private container registry (XSOAR Registry), which contains all Docker images that Cortex XSOAR uses in integrations and automations. When
you use the XSOAR Container Registry, you can avoid limitations that Docker imposes on Docker Hub, for example rate limits. The registry is
available at: xsoar-registry.pan.dev.
Username: Use the license Customer Name, by going to Settings → About → License.
Password: Use the License ID. To obtain the License ID, go to the License file and copy the ID from the id property, or run the
!GetLicenseID command, included in the Common Scripts content pack.
To pull Docker images from the XSOAR Registry, verify the following base URLs are allowed in your firewall/proxy:
https://xsoar-registry.pan.dev
https://storage.googleapis.com
NOTE:
When using a custom Docker registry, including the Cortex XSOAR Container Registry, you must include localhost when you create a custom
Docker image. Examples:
localhost.local/directory/container_name
localhost/directory/container_name
For Docker: sudo -u demisto docker login -u <license customer name> -p <license id> xsoar-registry.pan.dev
For Podman (Red Hat 8.x): sudo su -s /bin/bash - demisto -c 'podman login -u "<license customer name>" -p "
<license id>" xsoar-registry.pan.dev'
If you see an error such as Error saving credentials: mkdir /home/demisto: permission denied, the demisto user is either
missing the home directory or the permissions on the directory are not valid.
1. To verify the home directory assigned to the demisto user, run echo ~demisto to display the home directory, such as:
/home/demisto.
2. To ensure the directory exists and has the correct permissions, run the following commands, using the directory from echo ~demisto:
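For example (a sketch, assuming the home directory is /home/demisto as shown above; the 0700 permissions are an assumption):
sudo mkdir -p /home/demisto
sudo chown demisto:demisto /home/demisto
sudo chmod 0700 /home/demisto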
For Podman (Red Hat 8.x): sudo su -s /bin/bash - demisto -c 'podman pull xsoar-
registry.pan.dev/demisto/python3:3.10.4.27798'
Key: python.docker.registry
Value: xsoar-registry.pan.dev
If you are using an engine, apply the same server configuration to the engine machine by adding to the JSON file:
{
"python.docker.registry": "xsoar-registry.pan.dev"
}
4. Reset containers by running the /reset_containers command in the Cortex XSOAR Server Playground.
5. Test a Docker based automation or integration. For example, from the Cortex XSOAR Server Playground, run the following command.
!py script="print('test')"
6. In the Cortex XSOAR Server Playground, verify that Docker images from xsoar-registry.pan.dev have been pulled, by running the following
command.
/docker_images
The command /docker_images may also display Docker images pulled before enabling the XSOAR Registry or Docker images that were
shipped as part of the server installer.
If you need to use external Docker images (images not available in the XSOAR Registry and not part of the demisto org) in custom content,
specify the full image name with the registry prefix in the automation or integration configuration. For example:
docker.io/frolvlad/alpine-python2:latest
registry.access.redhat.com/ubi8/python-38:latest
myregistryhost:5000/myorg/myimage:version1.0
Abstract
Cortex XSOAR uses COPY for building images. The COPY instruction copies files from the local host machine to the container file system.
Cortex XSOAR does not use the ADD instruction, which could potentially retrieve files from remote URLs and perform operations such as
unpacking, introducing potential security vulnerabilities.
The --restart flag should not be used. Cortex XSOAR manages the lifecycle of Docker images and restarts images as needed.
Can we restrict containers from acquiring additional privileges by setting the no-new-privileges option?
Cortex XSOAR does not support the no-new-privileges option. Some integrations and scripts may need to change privileges when running
as a non-root user (such as Ping).
The default seccomp profile from Docker is strongly recommended. The default seccomp profile provides protection as well as wide
application compatibility. While you can apply a custom seccomp profile, Cortex XSOAR cannot guarantee that it won't block system calls
used by an integration or script. If you apply a custom seccomp profile, you need to verify and test the profile with any integrations or scripts
you plan to use.
TLS authentication is not used, because Cortex XSOAR does not use Docker remote connections. All communication is done via the local
Docker IPC socket.
The default Docker settings (recommended) include 14 kernel capabilities and exclude 23 kernel capabilities. Refer to Docker’s full list of
runtime privileges and Linux capabilities.
You can further exclude capabilities via advanced configuration, but will first need to verify that you are not using a script that requires the
capability. For example, Ping requires NET_RAW capability.
The Cortex XSOAR tenant monitors the health of the containers and restarts/terminates containers as needed. The Docker health check
option is not needed.
Live restore is not used. Cortex XSOAR uses ephemeral Docker containers. Every running container is stateless by design.
Cortex XSOAR does not disable inter-container communication by default, as there are use cases where this might be needed. For
example, a script communicating with a long running integration which listens on a port may require inter-container communication. If
inter-container communication is not required, it can be disabled by modifying the Docker daemon configuration.
Auditing is an operating system configuration, and can be enabled in the operating system settings. Cortex XSOAR does not change the
audit settings of the operating system.
Cortex XSOAR does not map privileged ports (TCP/IP port numbers below 1024).
If the kernel supports hairpin NAT, you can disable docker userland proxy settings by modifying the Docker daemon configuration.
Cortex XSOAR supports the default AppArmor profile (only relevant for Ubuntu with AppArmor enabled).
Cortex XSOAR supports the default SELinux profile (only relevant for RedHat with SELinux enabled).
For Docker swarm services, a secret is a blob of data, such as password, SSH private keys, SSL certificates, or other piece of data that
should not be transmitted over a network or stored unencrypted in a Docker file or in your application’s source code. Cortex XSOAR
manages integration credentials internally. It also supports using an external credentials service such as CyberArk.
The following provides troubleshooting solutions for Docker networking and performance issues.
In Cortex XSOAR, integrations and scripts run either on the tenant, or on an engine.
If you have Docker networking issues when using an engine, you need to modify the d1.conf file.
1. On the machine where the Engine is installed, open the d1.conf file.
This information is intended to help resolve the following Docker performance issues.
Time synchronization issues between the container and the operating system.
Cause
The installed Docker package and its dependencies are not up to date.
Workaround
Abstract
Configure the Docker pull rate limit on public images. Create a Docker user account and receive a higher pull limit.
Docker enforces a pull rate limit on public images. The limit is based on an IP address or as a logged-in Docker hub user. The default limit (100
pulls per 6 hours) is usually high enough for Cortex XSOAR's use of Docker images, but the rate limit may be reached if using a single IP address
for a large organization (behind a NAT). If the rate limit is reached, the following error message is issued:
The pull limit is higher for a registered user (200 pulls per 6 hours).
2. Authenticate the user on the engine machine by running the following command.
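For example (a sketch, following the login pattern used elsewhere in this guide; the Docker Hub username is a placeholder, and you are prompted for the password):
sudo -u demisto docker login -u <docker hub username>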
3. (Optional) Instead of manually logging in to Docker to pull images, you can edit the Docker config file to use credentials from the file or from
a credential store.
Abstract
The /var/lib/docker/ folder is the default Docker folder for Ubuntu, Fedora, and Debian in a standard engine installation.
2. Create a file called daemon.json under the /etc/docker directory with the following content:
{
"data-root": "<path to your Docker folder>"
}
5. After confirming that the change was successful, you can remove the backup file.
Abstract
Configure CA signed and custom certificates for Docker. Trust custom certificates for python integrations in Cortex XSOAR.
Python, JavaScript, and native integrations running in Docker use an engine's built-in set of CA-signed certificates to validate TLS communication.
If you need to change the certificate bundle of the operating system you are working on: for JavaScript and native integrations, add
custom trusted certificates to the engine's built-in set; for Python Docker integrations, create a certificate file that includes the
custom certificates and add it to the engine. This is relevant, for example, if you work with a proxy that performs SSL traffic inspection or use a
service that has a self-signed certificate.
1. Add the certificate to the machine’s trusted root CA bundle. The location of the CA bundle depends on the operating system version and the
operating configuration.
"/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL 6
"/etc/ssl/ca-bundle.pem", // OpenSUSE
"/etc/pki/tls/cacert.pem", // OpenELEC
"/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem", //RHEL 7
"/etc/pki/tls/certs", // Fedora/RHEL
This procedure assumes that the Cortex XSOAR lib dir is configured to the default location /var/lib/demisto.
NOTE:
/var/lib/demisto requires root access. This is relevant for Docker and Podman.
a. Create a certificate PEM file that includes all of the required custom certificates.
To examine the certificate chain used by a specific endpoint, run the following command on the engine machine (requires
openssl client):
openssl s_client -servername <host_name> -host <host_name> -port 443 -showcerts < /dev/null
For example, openssl s_client -servername api.github.com -host api.github.com -port 443 -showcerts <
/dev/null
This prints certificate information including the PEM representation of the certificates. After examining the output, if you see
Verification error: unable to get issuer certificate, one or more certificates in the certificate chain is not available
and you need to obtain these certificates from your IT administrator.
openssl s_client -servername api.github.com -host api.github.com -port 443 -showcerts < /dev/null
2>/dev/null | sed -n '/^-----BEGIN CERT/,/^-----END CERT/p' > certs.pem
To verify that certs.pem has all needed certificates as part of the certificate chain, run openssl verify -CAfile
certs.pem site.pem, where site.pem contains the certificate of a specific site you want to trust. To get the certificate of a site, run
openssl s_client -servername <site_host> -host <site_host> -port 443 and copy the base64 content, including
-----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----.
After saving the certs.pem file, add its content to /var/lib/demisto/python-ssl-certs.pem, by running the following
command:
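For example (a sketch; the copy method is an assumption, and any method that writes the file as root works):
sudo cp certs.pem /var/lib/demisto/python-ssl-certs.pem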
(Optional) Verify that the file has the container_file_t SELinux type attached by running the following command:
ls -d -Z /var/lib/demisto/python-ssl-certs.pem
c. (Optional) If you require the standard set of certificates trusted by browsers, you can append the CA certificates provided by your
operating system. For example, on Ubuntu, these certificates are located at the following path: /etc/ssl/certs/ca-
certificates.crt. Alternatively, you can download the PEM certificates file provided by the Certifi Project and add your custom
certificates to the file that contains the standard set of certificates. For more details, see the cacert.pem file.
This example adds the proxy-ca.pem file (custom certificate) to the cacert.pem file (standard certificates): cat proxy-ca.pem >>
cacert.pem
/var/lib/demisto/python-ssl-certs.pem
(Multi-tenant) In a multi-tenant deployment, the certificate is copied to the following path on the host machine:
/var/lib/demisto/tenants/acc_TENANT/python-ssl-certs.pem
/var/lib/demisto
ii. Set the demisto user as the directory owner with 0700 permissions.
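For example (a sketch of this step, assuming standard chown/chmod):
sudo chown demisto /var/lib/demisto
sudo chmod 0700 /var/lib/demisto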
iv. Add the following configuration to either the engine configuration file (in the UI) or to the d1.conf file.
"python.docker.use_custom_certs": true
After saving the python.docker.use_custom_certs configuration on your engine, Docker images that are launched by the engine will
contain the certificates file mounted in the following path:
/etc/custom-python-ssl/certs.pem
Additionally, the following environment variables will be set with the value of the certificates file path, which enables standard Python HTTP
libraries to automatically trust the certificates (without code modifications):
REQUESTS_CA_BUNDLE
SSL_CERT_FILE
The Python SSL library checks the SSL_CERT_FILE environment variable only when using OpenSSL. If you use a Docker image that
uses LibreSSL, the SSL_CERT_FILE environment variable will be ignored. For more details, see LibreSSL support.
NOTE:
If you are developing your own integration (BYOI) and using non-standard HTTP libraries, you might need to include specific code that will
trust the passed certificates file when the environment variable SSL_CERT_FILE is set. In this case, always use the value in the
environment variable as the path for the certificates file, and do not hard code the mounted path specified above. For example:
import os

certs_file = os.environ.get('SSL_CERT_FILE')
if certs_file:
    pass  # perform custom logic to trust certificates...
Abstract
Use the Docker Hardening Guide to configure the Cortex XSOAR settings when running Docker containers.
The following describes the engine settings we recommend for securely running Docker containers on Ubuntu, using iptables to restrict IP access.
When editing the configuration file, you can limit container resources such as memory, open file descriptors, available CPU, and more. For example, add the
following keys to the configuration file:
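A sketch drawing on the keys shown later in this section; the exact values are assumptions:
"python.pass.extra.keys": "--memory=1g##--pids-limit=256##--ulimit=nofile=1024:8192"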
TIP:
We recommend reviewing Docker network hardening below, before changing any parameters in the configuration file.
To securely run Docker containers, we recommend using the latest Docker version.
You can Check Docker Hardening Configurations to verify that the Docker container has been hardened according to the settings we recommend.
NOTE:
The settings below can also be applied to Podman, with the exception of limiting available memory, limiting available CPU, and limiting PIDS.
Docker creates its own networking stack that enables containers to communicate with other networking endpoints. You can use iptables rules to
restrict which networking sources the containers communicate with. By default, Docker uses a networking configuration that allows unrestricted
communication for containers, so that containers can communicate with all IP addresses.
Integrations and scripts running within containers do not usually require access to the host network. For added security, you can block
network access from containers to services running on the engine machine.
For example, to limit all source IPs from containers that use the IP range 172.16.0.0/12, run sudo iptables -I INPUT -s
172.16.0.0/12 -d 10.18.18.246 -j DROP. This also ensures that new Docker networks which use addresses in the IP address range of
172.16.0.0/12 are blocked from access to the host private IP. The default IP range used by Docker is 172.16.0.0/12. If you configured a
different range in Docker's daemon.json config file, use the configured range. Alternatively, you can limit specific interfaces by using the
interface name, such as docker0, as a source.
1. Add the following iptables rule for each private IP on the tenant machine:
sudo iptables -I INPUT -s <IP address range> -d <host private ip address> -j DROP
2. (Optional) To view a list of all private IP addresses on the host machine, run sudo ifconfig -a
If your engine is installed on a cloud provider such as AWS or GCP, it is a best practice to block containers from accessing the cloud
provider's instance metadata service. The metadata service is accessed via IP address 169.254.169.254. For more information about the
metadata service and the data exposed, see the AWS and GCP documentation.
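For example, a sketch using Docker's DOCKER-USER chain (the chain name assumes a standard Docker iptables setup):
sudo iptables -I DOCKER-USER -d 169.254.169.254 -j DROP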
There are cases where you might need to provide access to the metadata service. For example, access is required when using an AWS
integration that authenticates via the available role from the instance metadata service. You can create a separate Docker network, without
the blocked iptable rule, to be used by the AWS integration’s Docker container. For most AWS integrations the relevant Docker image is:
demisto/boto3py3
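A sketch of creating such a network (the network name aws-metadata matches the key in the next step):
sudo -u demisto docker network create aws-metadata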
2. Edit the engine configuration file either by editing the d1.conf file directly or, if you installed via Shell, by editing the configuration
in the UI. For details, see Configure engines.
"python.pass.extra.keys.demisto/boto3py3": "--network=aws-metadata"
In some cases, you might need to block specific integrations from accessing internal network resources and allow the integrations to access
only external IP addresses. We recommend this setting for the Rasterize integration when used to Rasterize untrusted URLs or HTML
content, such as those obtained via external emails. With internal network access blocked, a rendered page in the Rasterize integration
cannot perform a SSRF or DNS rebind attack to access internal network resources.
2. Block network access to the host machine for the new Docker network:
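A sketch covering the network creation and this step (the network name external matches the key in step 6, the blocking rule follows the INPUT-rule pattern shown earlier, and the subnet is a placeholder):
sudo -u demisto docker network create external
sudo iptables -I INPUT -s <external network subnet> -d <host private ip address> -j DROP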
5. Edit the engine configuration file either by editing the d1.conf file directly or, if you installed via Shell, by editing the configuration
in the UI. For details, see Configure engines.
6. Add the following key to run integrations that use the demisto/chromium Docker image with the Docker network external.
"python.pass.extra.keys.demisto/chromium": "--network=external"
By default, iptables rules are not persistent after a reboot. To ensure your changes are persistent, save the iptables rules by following the
recommended configuration for your Linux operating system:
Ubuntu
You can apply more specific fine tuned settings to Docker images, according to the Docker image name or the Docker image name including the
image tag. To apply settings to a Docker image name, add the advanced configuration key to the engine configuration file. If you apply Docker
image specific settings, they will be used instead of the general python.pass.extra.keys setting. This overrides the general memory and CPU
settings, as needed.
1. Edit the engine configuration file either by editing the d1.conf file directly or, if you installed via Shell, by editing the configuration in the UI.
For details, see Configure engines.
"python.pass.extra.keys.<image_name>"
To set the Docker images demisto/dl (all tags) to use a higher max memory value of 2g and to remain with the recommended PIDs
and ulimit, add the following to the configuration file: "python.pass.extra.keys.demisto/dl": "--memory=2g##--ulimit=nofile=1024:8192##--pids-limit=256"
For additional security isolation, we recommend to run Docker containers as non-root internal users. This follows the principle of least privilege.
1. Edit the engine configuration file either by editing the d1.conf file directly or, if you installed via Shell, by editing the configuration in the UI.
For details, see Configure engines.
"docker.run.internal.asuser": true
3. For containers that do not support non-root internal users, add the following key:
"docker.run.internal.asuser.ignore" : "A comma separated list of container names. The engine matches the
container names according to the prefixes of the key values>"
The engine matches the key values for the following containers:
demisto/python:1.3-alpine
demisto/python:2.7.16.373
demisto/python3:3.7.3.928
demisto/python3:3.7.4.977
The : character should be used to limit the match to the full name of the container. For example, a value of demisto/python: matches the
demisto/python containers listed above but does not match demisto/python-ubuntu:2.7.16.373.
When a container exceeds the specified amount of memory, the container starts to swap. Not all Linux distributions have the swap limit support
enabled by default.
Red Hat distributions usually have swap limit support enabled by default.
To protect the host from a container using too many system resources (either because of a software bug or a DoS attack), limit the resources
available for each container. In the engine configuration file, some of these settings are set using the advanced parameter:
python.pass.extra.keys. This key receives as a parameter full docker run options, separated with the ## string.
2. If swap limit capabilities are enabled, configure the memory limitation. (To test the memory limit, see Step 5. Test the memory limit in
Configure the memory limitation.)
If your system does not support swap limit capabilities, you will see a warning such as:
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
WARNING: No swap limit support
In that case, you can configure memory support without swap limit capabilities. See How to configure the memory limit support without swap limit capabilities below.
How to configure the memory limit support without swap limit capabilities
1. Edit the engine configuration file either by editing the d1.conf file directly or, if you installed via Shell, by editing the configuration in the UI.
For details, see Configure engines.
"python.pass.extra.keys": "--memory=1g##--memory-swap=-1"
If you have the python.pass.extra.keys already set up with a value, add the value after the ## separator.
If swap limit capabilities are enabled (see How to check if your system supports swap limit capabilities above), configure
the memory limitation in Cortex XSOAR using the following advanced parameters.
1. Edit the engine configuration file either by editing the d1.conf file directly or, if you installed via Shell, by editing the configuration in the UI.
For details, see Configure engines.
NOTE:
If you do not want to apply Docker memory limitations, explicitly set the advanced parameter limit.docker.memory to
false.
import os, sys

def big_string(size):
    sys.stdin = os.fdopen(0, "r")
    s = 'a' * 1024
    while len(s) < size:
        s = s * 2
    print('completed creating string of length: {}'.format(len(s)))

big_string(1024 * 1024 * 1024)  # attempt to allocate a 1 GB string
4. In the SCRIPT SETTINGS section, select the script to run on the Single engine and select the engine where you want to run the
script.
6. To test the memory limit, type !TestMemory. The command returns an error when it fails to allocate 1 GB of memory.
Configure the CPU, PIDs, and open file descriptors limit
Set the advanced parameters to configure the CPU limit, PIDs limit and the open file descriptor limit.
1. Edit the engine configuration file either by editing the d1.conf file directly or, if you installed via Shell, by editing the configuration in the UI.
For details, see Configure engines.
Parameter: Available CPU limit
Key: "limit.docker.cpu": true, "docker.cpu.limit": "<CPU limit>"
We recommend limiting each container to 1 CPU (for example, 1.0; the default is 1.0).
Check your Docker hardening configurations on an engine by running the !DockerHardeningCheck command in the CLI. The results show the
following:
Non-root User
Memory
File descriptors
CPUs
PIDs
Before running the command, ensure that your engine is up and running.
NOTE:
2. To verify that the Docker container has been hardened according to the recommended settings, run the !DockerHardeningCheck command in the CLI.
5.3.3 | Podman
Abstract
Podman is a daemonless container engine for developing, managing, and running OCI Containers on the Linux System. Containers can either be
run as root or in rootless mode.
If you use the Shell installer to install an engine, Cortex XSOAR automatically detects the container management type based on the operating
system. For example, if your operating system is running RHEL v8 and higher, Cortex XSOAR installs Podman packages and configures the
operating system to enable Podman in rootless mode.
NOTE:
When upgrading an engine, the engine keeps the previously used container management type (regardless of distribution version).
If using PowerShell integrations, you may need to configure the default SELinux policy, as Podman can affect processes that mmap
/dev/zero.
Docker hardening guidelines can be applied to Podman, with the exception of Limit Available Memory, Limit Available CPU, and Limit PIDS.
By default, Podman uses the $HOME/.local/share/containers/storage directory. To use a different directory for container storage, edit the
Podman config file located at /home/demisto/.config/containers/storage.conf. If the Podman config file does not exist, you need to create
it and change the ownership.
The new storage directory needs to be owned by the demisto user; otherwise, the demisto user will be denied access to it.
WARNING:
Do not use NAS storage or a temporary (tmpfs) directory for the graphroot setting. The graphroot needs to be a local, non-temporary
directory for Podman to work. For more information, see https://en.wikipedia.org/wiki/Network-attached_storage.
TIP:
We recommend reserving 150 GB for container storage, either in the /home partition or a different storage directory that you have set using the
graphroot key.
cp /etc/containers/storage.conf /home/demisto/.config/containers
2. To set a different directory for container storage, change the key: graphroot in the storage.conf file. For example:
graphroot = "/var/lib/containers/xsoar-storage"
3. Some additional changes are required in the storage.conf file. Comment out the runroot setting by adding a # (hash) before it. For
example:
#runroot = "/run/containers/storage"
NOTE:
4. Under [storage.options.overlay], uncomment the following line (remove the # from the start):
mount_program = "/usr/bin/fuse-overlayfs"
5. If the engine has already been installed, apply your changes to any existing containers:
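For example (a sketch, assuming the podman system migrate command referenced later in this guide, run as the demisto user using the pattern shown elsewhere in this section):
sudo su -s /bin/bash - demisto -c 'podman system migrate'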
Abstract
When installing a new engine on RHEL 8 or later, the shell installer configures Podman automatically. There are some cases, however, where you
might need to install Podman manually:
When using an installation method other than the shell installer (e.g. an RPM package) on RHEL 8 or later.
When you want to migrate from Docker to Podman, for an existing Cortex XSOAR engine.
NOTE:
This procedure is intended for RHEL 8 or later. It may not work for other operating system types.
Do not use NAS storage for the $HOME directory. The directory needs to be a local directory for Podman to work.
Podman by default uses the fedoraproject.org, redhat.com, and docker.io unqualified search registries. Since Cortex XSOAR images use
only the docker.io registry, you can speed up download times for container images by setting unqualified-search-registries to just
docker.io.
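For example, in /home/demisto/.config/containers/registries.conf (a sketch of the setting named above):
unqualified-search-registries = ["docker.io"]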
NOTE:
If you edit the file with the root user, make sure to set the demisto user as file owner by running chown demisto:demisto
/home/demisto/.config/containers/registries.conf
/usr/local/demisto/d1.conf
If this line does not exist, add the following line to the file:
"container.engine.type": "podman"
"Server": {
"HttpsPort": "443",
"ProxyMode": true
},
"container": {
"engine": {
"type": "podman"
}
},
"db": {
"index": {
"entry": {
"disable": true
NOTE:
If the Allow running multiple engines on the same machine option is selected, run the command:
Abstract
Switch from Docker to Podman when installing an engine for RHEL 8 or later.
Although Podman is set up automatically in an engine installation, it is possible to migrate from Docker to Podman in an existing engine. Follow
the Podman installation instructions to migrate.
Abstract
Podman version 3.4.1 and lower has a known issue where dbus-daemon processes may leak when running in an environment containing the dbus-
x11 OS package. The issue occurs when the dbus-x11 OS package is installed, for example when installing an X11 desktop environment such as
GNOME desktop on the host machine. If you experience this issue, you see a large number of dbus-daemon processes owned by the demisto OS
user. To check if you are affected by the issue, run the following command:
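A sketch of such a check (the exact command is an assumption; any process listing that counts dbus-daemon processes works):
pgrep -c -u demisto dbus-daemon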
1. Remove the dbus-x11 OS package and dependent packages by running the following command:
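For example (a sketch, assuming dnf on RHEL 8):
sudo dnf remove dbus-x11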
2. After removal you can kill the leaked dbus-daemon processes by running the following OS command:
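For example (a sketch, assuming the standard pkill utility):
sudo pkill -u demisto dbus-daemon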
When Podman fails to run with an “Invalid argument” error, such as:
ERRO[0000] running `/usr/bin/newuidmap 15936 0 1029 1 1 165536 65536 65537 200000 65536`: newuidmap: write to uid_map failed: Invalid argument
Error: cannot set up namespace using "/usr/bin/newuidmap": exit status 1
This can be caused by duplicate lines for Cortex XSOAR in /etc/subuid and /etc/subgid.
1. Check if the /etc/subuid file contains multiple lines that start with the Cortex XSOAR username (usually demisto). For example:
alice:100000:65536
demisto:165536:65536
demisto:200000:65536
splunk:331072:65536
2. If this is the case, edit the file as root, and remove the extra line(s) for Cortex XSOAR. The line you should keep is the one that ends
with 200000:65536. Continuing with the above example, here is the end result:
alice:100000:65536
demisto:200000:65536
splunk:331072:65536
When encountering errors in Cortex XSOAR that are Podman related, such as:
failed to run "docker ps". stderr: [], err: [Timeout. Process killed (1400)
Timeout while waiting for pong response [error 'Read timed out (15s)
1. Verify that Podman is running properly with the demisto OS user, by performing the following steps:
Check that your system complies with the minimum requirements, and view general system information such as host architecture,
CPU, OS, registries, container storage path, etc., by running the following command:
podman info
Check all active running containers, container names and IDs, by running the following command:
podman ps
Check that Podman is able to run a container, by running the following command:
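For example (a sketch, assuming the demisto/python:1.3-alpine image referenced elsewhere in this guide, run as the demisto user):
podman run --rm demisto/python:1.3-alpine python --version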
If any of the Podman commands are not working, try running with the --log-level=debug to receive additional details as to why it is failing.
For example:
podman --log-level=debug ps
NOTE:
This step removes all Podman images including any custom images you may have created.
b. Ensure that all Podman containers of the demisto user are stopped, by running the following command:
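For example (a sketch, run as the demisto user):
podman stop --all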
c. Delete the following directories (assuming the demisto OS user's home directory is at: /home/demisto)
NOTE:
$(id -u demisto) is used to get the demisto user ID, which is part of the directory name. For example, /tmp/podman-run-993
e. Verify that Podman is working properly with the demisto OS user by following step 1.
In some cases, if the Podman process crashes or is killed abruptly it can leave containers on disk. You might see errors such as error
allocating lock for new container: allocation failed; exceeded num_lock when the maximum number of locks used to manage
containers is exhausted due to the unused containers that remain.
NOTE:
When you run podman container cleanup --rm -a, you might see a message such as running or paused containers cannot
be moved without force. The message can be safely ignored, as it only pertains to current running containers, which are not removed.
4. After cleanup, verify there are no remaining unused containers by running podman ps -a -f status=exited.
Script failed to run: Docker code runner got container error: [Docker code script is in inconsistent state, ...
error: [exit status 126] stderr: [Error: OCI runtime error: crun: create keyring ...: Disk quota exceeded]
By default, Podman creates a keyring that is used by each container. The limit per user on the machine might be low and Podman can reach the
limit when running more containers than the keyring limit. To check the keyring usage, run the sudo cat /proc/key-users operating system
command.
The command returns the usage for each UID (to retrieve the demisto user UID, run id demisto ). The fourth column shows the number of keys
used out of the total number available. For more information about keys, see Kernel Key Retention Service.
error "exit status 125" and output "Error: chown ... operation not permitted "
If the container storage directory is not owned AND exclusively used by the demisto user, scripts will fail to run. See the Podman section for more
information about assigning ownership of the storage directory.
If the procedure set out in the Verify Podman installation section above does not solve the Podman issue and you require assistance from
Support, do the following:
/etc/containers/storage.conf
/home/demisto/.config/containers/storage.conf
If the file does not exist, indicate that there is no such file.
/home/demisto/.config/containers/registries.conf
If the file does not exist, indicate that there is no such file.
NOTE:
podman info
podman images
podman --log-level=debug ps
When installing a Cortex XSOAR engine on a RHEL system (version 8 or later), or when running an integration on such an engine, you get a
permission error for a path under /run (for example /run/user/0 or /run/libpod).
cp /etc/containers/storage.conf /home/demisto/.config/containers/storage.conf
3. Edit /home/demisto/.config/containers/storage.conf.
IMPORTANT:
The runroot must be located under the tmpfs file system type. This is required to clean Podman's run state on reboot and for
performance reasons.
Also under [storage], change graphroot (where container images are stored) to any location that is owned and accessible by user
demisto. We recommend using this standard path:
graphroot = "/home/demisto/.local/share/containers/storage"
CAUTION:
Unlike the runroot, the graphroot must NOT be located under the tmpfs file system type. Using tmpfs for the graphroot might
corrupt container images, causing command executions to fail. It also degrades performance by forcing Podman to needlessly re-
pull images.
Under [storage.options.overlay], uncomment the following line (remove the # from the start):
mount_program = "/usr/bin/fuse-overlayfs"
NOTE:
You must switch to user demisto before running the "system migrate" (running it as root will have no effect).
su - demisto
5. Also as user demisto, run the following to ensure the path changes were applied:
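For example (a sketch; podman info reports the configured graph root and run root paths):
podman info | grep -iE 'graphroot|runroot'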
NOTE:
rm -rf /home/demisto/.local/share/containers/*
You can manage your engines and load-balancing groups by going to Settings & Info → Settings → Integrations → Engines.
You can view engine names, hosts, status, connection, and other engine information.
NOTE:
In the Name column, if the service name starts with a d1 prefix, it is a multiple engine.
Option Description
Managed Security Service Providers may want to split internal engines and SaaS product engines.
If you have multiple AWS accounts that are not connected and do not want a single point of failure for AWS
integrations that use STS.
You can only add the engine to the load-balancing group after you have connected the engine.
If you want to remove the last engine from a specific load-balancing group and one or more integration instances
use that engine, you will get an error. Before moving the engine, in the integration instance settings, you need to
update the Run on field to a different engine or no engine.
When selecting Load-Balancing Group → Add to new group, you can create multiple load-balancing groups and
decide which engines are part of each group.
Users can move an engine from one group to another. A group will be deleted when the last engine is removed
from it.
Upgrade Engine: Relevant for Shell installation only. If you didn't install an engine using the Shell installation, you will need to remove the
engine and do a fresh installation.
Get Logs: Logs are located in /var/log/demisto. For multiple engines, logs are located in /var/log/demisto/<name of the
engine>. For example, var/log/demisto.d1_e1.
Edit Configuration: Relevant for Shell installation only. Enables you to edit the d1.conf file without having to access the file on your remote
machine. For more information, see Configure engines.
Download Configuration: Download the d1.conf file to view the attribute values. Useful when migrating from Cortex XSOAR 6 to Cortex XSOAR 8.
Delete Engine: Deletes an engine from Cortex XSOAR. To remove the engine from your remote machine, see Remove an engine.
Whenever there is a Cortex XSOAR major version change or a change in tenant-engine protocol version, your engines require an upgrade. On the
Engines page, the Status column shows those engines that require upgrades. You can upgrade an engine by doing the following:
If you installed the engine using the Shell installer, you can upgrade the engine on the Engines page.
If you didn't install the engine using the Shell installer, you need to remove the engine and do a fresh install.
You can upgrade the engine on the Engines page if you have installed the engine using the Shell installer.
NOTE:
1. On the Engines page, select the checkbox for the engine that requires an upgrade.
When the upgrade finishes, the version appears in the Cortex XSOAR Version column. The upgrade procedure can take several minutes.
If you didn't use the Shell installer, you need to remove the engine and do a fresh install.
Remove the existing engine. For more information, see Remove an engine.
Install the engine you downloaded in step 2. For more information, see Install an engine.
When the upgrade finishes, the version appears in the Cortex XSOAR Version column. The upgrade procedure can take several minutes.
NOTE:
By default, auto-upgrade extracts the files to the /tmp directory. In some cases, you might need to use a different directory. For example, a
common use case is if your /tmp directory is mounted as a non-executable directory. To use a different directory, edit the
XSOAR_ENGINE_AUTO_UPGRADE_TMP_DIR env variable. The env variable can be specified as a global variable or can be edited in the crontab of
the root user that runs the engine upgrade script. To edit the crontab of root, run sudo crontab -e. For example:
# d1 engine
XSOAR_ENGINE_AUTO_UPGRADE_TMP_DIR=/root/tmp
PATH=/sbin:/bin:/usr/sbin:/usr/bin
* * * * * /usr/local/demisto/upgrade_engine.sh >> /var/log/demisto/demisto_install.log
Remove an engine by running the relevant command, depending on your operating system.
Configure Cortex XSOAR engines to change the number of workers, access communication tasks, notify users if engine disconnects, and remove
server from group.
When installing an engine, a d1.conf file is installed on your machine. Some configurations can only be done by editing the d1.conf file. If you
install via Shell, you can edit the configuration in the UI as well as editing the file directly.
A use case for modifying the engine configuration is if you want to generate engine logs for a specific log level.
1. On the machine on which you installed the engine, navigate to the d1.conf file:
2. Modify the file as required. See Common properties when editing an engine configuration
Ensure that the data is in JSON format. The properties that you specify override the values defined in the d1.conf file.
1. From the engines table, select the engine for which you want to modify the configuration.
3. In the JSON formatted configuration dialog box, modify the properties as required. For more information, see Common properties when
editing an engine configuration.
The following table describes the common properties when editing an engine configuration using the d1.conf file (located by default at
/usr/local/demisto/) or in the JSON formatted configuration dialog box in Cortex XSOAR.
http_proxy (String): The IP address of the HTTP proxy through which the engine communicates. Set in the engine d1.conf file.
https_proxy (String): The IP address of the HTTP/S proxy through which the engine communicates. Set in the engine d1.conf file.
BindAddress (String): The port on which the engine listens for agent connection requests and communication task responses. Set in the engine d1.conf file.
LogFile (String): Path to the d1.log file. If you change the name or location of the d1.log file, you need to update this parameter. Set in the engine d1.conf file.
Abstract
Configure a Cortex XSOAR engine to use a web proxy by editing the d1.conf file.
NOTE:
You need to configure Docker to use a proxy. When using a BlueCoat proxy, ensure you encode the values correctly.
1. On the machine on which you installed the engine, navigate to the d1.conf file and add the following keys.
2. If the environment variables are not set, or you wish to use different settings than those specified in the environment variables, set the
configuration with your specific proxy details in the d1.conf file. For example:
{"http_proxy": "http://proxy.host.local:8080",
"https_proxy": "https://proxy.host.local:8443"}
5.7.2 | Configure the engine to call the server without using a proxy
Abstract
In some cases, due to specific environment architecture, you may need to configure the engine to use a proxy when working with integrations, but
not use a proxy when calling the Cortex XSOAR tenant.
1. On the computer where you have installed the engine, go to the directory containing the d1.conf file.
Abstract
NGINX can act as a reverse proxy that sits between internal applications and external clients, forwarding client requests to the appropriate
application. Using NGINX as a reverse proxy in front of the engine enables you to provide network segmentation where the proxy can be put on a
public subnet (DMZ) while the engine can be on a private subnet, only accepting traffic from the proxy. Additionally, NGINX provides a number of
advanced load balancing and acceleration features that you can utilize.
If you want to use an engine (d1) through the reverse proxy, you need to modify EngineURLs in the d1.conf file to point to the host and port the
NGINX server is listening on.
Install NGINX
You can install NGINX on the Red Hat/Amazon (yum) and Ubuntu Linux distributions. For full instructions and available distributions, see NGINX
documentation.
sudo nginx -v
You should not use self-signed certificates for production systems; use a properly signed certificate instead. These instructions are intended
only for non-production setups.
1. To use OpenSSL to generate a self-signed certificate, on the engine machine run the following command:
sudo openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout /etc/nginx/cert.key -out /etc/nginx/cert.crt
2. When prompted, complete the on-screen instructions to complete the required fields.
Configure NGINX
1. Open the following NGINX configuration file with your preferred editor:
/etc/nginx/conf.d/demisto.conf
# Replace DEMISTO_ENGINE with the appropriate hostname. If needed, change port 443 to the port on which the engine is listening.
upstream demisto {
server DEMISTO_ENGINE:443;
}
server {
# Change the port if you want NGINX to listen on a different port
# Note: the "ssl on;" directive is deprecated in current NGINX releases; the ssl parameter on listen replaces it
listen 443 ssl;
ssl_certificate /etc/nginx/cert.crt;
ssl_certificate_key /etc/nginx/cert.key;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/demisto.access.log;
location / {
proxy_pass https://demisto;
proxy_read_timeout 90;
}
location ~ ^/(websocket|d1ws|d2ws) {
proxy_pass https://demisto;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header Origin "";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
4. Verify you can access the engine by browsing to the NGINX server host.
Abstract
Replace the self-signed certificate for an engine with a valid CA certificate for communication tasks.
For communication tasks that go through an engine, you can replace the default self-signed certificate for the engine with your own certificate.
1. Find the two files created by the engine. The default location is /usr/local/demisto.
d1.key.pem
d1.cert.pem
Use an engine or load-balancing group of engines to fetch alerts and run commands for an integration.
When you create an integration instance, you can select whether to fetch alerts and run commands executed for the integration using the engine
or a load-balancing group of engines. After you add the engine or load-balancing group to an integration instance, you can run commands using
the engine or load-balancing group by specifying the using argument in the alert War Room.
Before configuring an integration to run using multiple engines in a load-balancing group, we recommend that you test the integration using a
single engine in the load-balancing group.
Run a script on an engine or load-balancing group to distribute the workload and improve performance.
You can run a script on an engine or load-balancing group to distribute the workload and improve performance.
2. From the BASIC section, in the Run on field, select either a single engine or a load-balancing group.
The option to select an engine or load-balancing group only appears if at least one engine or load-balancing group is connected.
3. From the list, select the name of the engine or load-balancing group.
4. Click Save.
When troubleshooting engines, access the logs from Settings & Info → Settings → Integrations → Engines and select the engine from which you
want to download the logs.
Debug engines
NOTE:
The d1.log file appears whenever an engine is running. The d1.log file contains the information your customer success team needs to debug any engine-related issue. The log records any errors, as well as noting whether the engine is connected.
If the installer fails to start due to a permissions issue, even if running as root, add one of the following two arguments when running the
installer:
--target <path> - Extracts the installer files into the specified custom path.
--keep - Extracts the installer files into the current working directory (without cleaning at the end).
If using installer options such as -- -tools=false, the option should come after the --target or --keep arguments. For example:
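For instance, a hypothetical invocation (the installer filename and path are placeholders) that extracts to a custom path and disables tools:
sudo ./installer.sh --target /tmp/d1-install -- -tools=false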
After installing the engine, check that the engine is connected to the Cortex XSOAR tenant and that it is running.
1. Go to Settings & Info → Settings → Integrations → Engines and verify that the engine is connected.
2. If the engine is not connected, run the following command on the engine server to check if the engine service is running.
NOTE:
If the Allow running multiple engines on the same machine option is selected, run the command:
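For reference, on a systemd-based host the checks typically look like the following. The service names are assumptions based on the default d1 engine service; multi-engine installations append the engine name:
sudo systemctl status d1
sudo systemctl status d1_<engine name>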
If the engine service is running, review the errors to see if the engine is failing to connect or if there are other issues (ignore all errors related to d2ws, because this is not the same as d1ws). Most often, the server address is incorrect and you will see an error like this:
In this case, navigate to /usr/local/demisto/d1.conf and change the EngineURLs parameter to an address the engine can reach.
Check the addresses at the beginning of the upgrade_engine.sh file and update them to be the same as in the conf file. The
addresses should be a comma-separated list.
NOTE:
You can ignore the following error: Cannot create folder '/var/lib/demisto'
The configurations that might affect the upgrade_engine.sh script are the following variables located at the beginning of the script:
SERVER_URLS
TRUST_ANY_CERT
If you make a change to the baseURLs configuration, you must apply the change in /usr/local/demisto/d1.conf AND in
/usr/local/demisto/upgrade_engine.sh under the SERVER_URLS var.
If you make a change in the engine.connection.trust_any_certificate configuration, you must apply the change in
/usr/local/demisto/upgrade_engine.sh as follows:
If the engine.connection.trust_any_certificate configuration was set to true (trust any certificate), set the
TRUST_ANY_CERT variable to -k.
If the engine.connection.trust_any_certificate configuration was set to false, leave the
TRUST_ANY_CERT variable blank ("").
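For illustration, the variables at the beginning of upgrade_engine.sh might look like this. The values are placeholders; use the same addresses as in d1.conf, comma-separated if there are multiple servers:
SERVER_URLS="https://xsoar.example.com/d1ws"
TRUST_ANY_CERT="-k"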
4. To check the connectivity from the engine to the Cortex XSOAR tenant, see Troubleshoot engine connectivity below.
5. If the installation issue remains, open a support case with logs from the engine.
NOTE:
If the Allow running multiple engines on the same machine option is selected, run the command:
c. Capture a journalctl:
d. On the engine server, tar up the log, conf, journalctl, and install log on the engine.
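For example (the file paths and service name are assumptions based on the default installation locations):
sudo journalctl -u d1 > /tmp/d1_journal.log
sudo tar -czf /tmp/engine-support-bundle.tar.gz /var/log/demisto/d1.log /usr/local/demisto/d1.conf /tmp/d1_journal.log /tmp/demisto_install.log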
During an upgrade, the upgrade file is sent to the engine server. A cron job running on the engine server checks if that file exists. The most
common upgrade error is that the cron job is not running, so the new installer does not run.
NOTE:
If the installer fails to start due to a permissions issue, even if running as root, add one of the following two arguments when running the
installer:
--target <path> - Extracts the installer files into the specified custom path.
--keep - Extracts the installer files into the current working directory (without cleaning at the end).
If using installer options such as -- -tools=false, the option should come after the --target or --keep arguments. For example:
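For instance, a hypothetical invocation (the installer filename is a placeholder) that keeps the extracted files in the current directory and disables tools:
sudo ./installer.sh --keep -- -tools=false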
2. Check the d1 service status on the engine server. It is possible that it stopped or doesn't exist.
NOTE:
If the Allow running multiple engines on the same machine option is selected, run the command:
3. Access the installer log on the engine server and review the error.
sudo vi /tmp/demisto_install.log
4. Rerun the installer on the engine using one of the following options. You can open a second window and run watch df -h to monitor disk
usage. If the problem appears to be disk space, resolve the disk space issue and then rerun the installer.
a. Option 1
i. Download the installer from the user interface and copy it to the engine.
ii. Run the installer:
sudo ./installer.sh -- -y
b. Option 2
i. Run the upgrade script on the engine server:
sudo /usr/local/demisto/d1_upgrade.sh
ii. If d1_upgrade.sh does not exist, check if /usr/local/demisto/archived_d1_upgrade.sh exists and that it was created at
the time of the attempted upgrade.
If the file exists and was created at the time of the attempted upgrade, run the following on the engine server:
sudo /usr/local/demisto/archived_d1_upgrade.sh
Troubleshoot engine connectivity
The following provides instructions for troubleshooting connectivity issues from the engine to the endpoint.
2. Ensure that the engine can reach the endpoint by running the following command on the engine server.
3. If the engine could not reach the endpoint, try curl against the IP address directly, adding the http(s):// prefix, or try using ping.
If this works, add the IP to the /etc/hosts file with the hostname and try to reach the endpoint again by running the following command on the
engine server:
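For example (the endpoint hostname is a placeholder):
curl -v https://endpoint.example.com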
If this still fails, then this is an issue of connectivity between the engine and endpoint and you need to resolve this with your networking
team.
If this succeeds but the integration still fails, it could be an integration credentials issue. In that case, open a support case.
If you see a Docker or SELinux issue, see Troubleshoot Docker networking issues.
5. If the installation issue remains, open a support case with logs from the engine.
NOTE:
If the Allow running multiple engines on the same machine option is selected, run the command:
c. Capture a journalctl:
d. On the engine server, tar up the logs, conf, journalctl, and install log on the engine.
This error might occur when a connection is established between an engine and the Cortex XSOAR tenant because, by default, Linux does not
allow non-root processes to listen on privileged ports (below 1024).
Error Message
Solution
In the d1.conf file, change the port number to a higher one, for example, 8443.
Alternatively, run this command: sudo setcap CAP_NET_BIND_SERVICE=+eip /path/to/binary. After running this command, the server should be
able to bind to low-numbered ports.
This error can occur in the engine logs relating to a bad handshake on the engine trying to connect to a Cortex XSOAR tenant.
Error Message
Solution
Verify that time is synchronized on the engine to a reliable NTP source. When timing is off on the engine, this can cause a failure during the
SSL/TLS handshake process. When time is resynced, connectivity from the engine to the parent server should be restored.
Broken Pipe
Invalid syntax
Script failed to run: exec: “python”: executable file not found in $PATH (2603)
These errors could indicate that the engine is not using Docker.
sudo vi /usr/local/demisto/d1.conf
4. Add the "python.engine.docker": true configuration to the d1.conf file and remove any other configurations related to Python and
Docker, such as "python.executable.no.docker".
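After the change, the relevant portion of d1.conf might look like the following minimal fragment (your file will contain additional keys):
{
  "python.engine.docker": true
}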
NOTE:
If the Allow running multiple engines on the same machine option is selected, run the command:
6. Retest the integration from the user interface. This may take a few minutes because it may need to pull the relevant Docker image.
Troubleshoot permission denied
A common error message you may see when running integrations on engines is something like: Got permission denied while trying to
connect to the Docker daemon socket at unix:///var/run/docker.sock: Get
http://%2Fvar%2Frun%2Fdocker.sock/v1.35/images/json?t.
1. Determine whether you are using a docker group or a dockerroot group by running the following on the engine server:
ls -la /var/run/docker.sock
The output from this command shows which user/group owns docker.sock. For example:
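Sample output (illustrative, not from a live system), where the docker group owns the socket:
srw-rw---- 1 root docker 0 Jan 1 00:00 /var/run/docker.sock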
NOTE:
Docker CE installations typically run Docker, while Docker EE installations typically run Dockerroot.
2. To fix a docker user, run the following commands on the engine server:
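A typical fix, assuming the engine runs as the demisto user (a sketch; verify the user and service name on your system):
sudo usermod -aG docker demisto
sudo systemctl restart d1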
NOTE:
If the Allow running multiple engines on the same machine option is selected, run the command:
3. To fix a dockerroot user, run the following commands on the engine server:
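Under the same assumptions, a sketch for the dockerroot case:
sudo usermod -aG dockerroot demisto
sudo systemctl restart d1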
NOTE:
If the Allow running multiple engines on the same machine option is selected, run the command:
Configure and manage a remote repository in your dev/prod setup in Cortex XSOAR On-prem
Cortex XSOAR seamlessly integrates with private repositories, allowing you to develop and thoroughly test content in a secure environment on
your development machine before pushing it to your production machine.
Overview of how remote repositories work and how to configure a remote repository in Cortex XSOAR.
You can develop and manage content in Cortex XSOAR manually within the production tenant, using a CI/CD pipeline, or between development
and production tenants using a remote repository.
Cortex XSOAR is a self-contained system. The Cortex XSOAR tenant serves as the content repository; content is developed using an IDE and
stored locally.
If you only use a standalone tenant (with no development tenant), you can develop and manage content manually. You can save content versions
and manage revisions locally for scripts, playbooks, integrations, etc. using the Save Version button. For all other content types, changes are
automatically saved locally. You can also manage content by importing/exporting it in Cortex XSOAR.
CI/CD pipelines are implemented using the XSOAR CI/CD content pack, which enables complete autonomy for developing, staging, and deploying
custom content. This feature is intended for more advanced users who have an understanding of CI/CD concepts, with multiple developers
working on different branches on their local machines.
Instead of building and maintaining code in a Cortex XSOAR development environment, you can build content from your private repository, and
utilize third-party tools such as CircleCI and Jenkins. You can also use version control, perform code reviews, do linting and validations, use
automatic testing, and run tests on development machines.
Content from a development instance is pushed to a Git repository. A CI/CD process runs to generate the required pack artifacts which are then
uploaded to an artifact repository. These artifacts are deployed into Cortex XSOAR instances by running the Configuration Setup playbook.
In Cortex XSOAR you can use a content management system with a private remote repository to develop and test content.
The development tenant pushes content to a remote repository and the production tenant or additional development tenants pull content from the
remote repository.
TIP:
You can use a self-signed certificate instead of SSL verification to securely push and pull content. Using a self-signed certificate provides
encryption and secure communication without the cost and complexity of obtaining a trusted SSL certificate from a certificate authority. For more
information, see Use a signed certificate instead of SSL.
If after setting up the remote repository feature you later decide to revert a tenant to standalone, go to Settings & Info → Settings → Advanced →
Content Repository and toggle the Content repository slider to off. If you disable the remote repository feature, content on the tenant is not
deleted. If you enable the remote repository feature again and the remote repository contains content, you need to choose which content to keep,
either the content on the tenant or the content on the remote repository. We recommend backing up any content that you want to keep before
enabling again.
The development tenant provides a safe environment to develop and test the functionality of custom content before using it in a production
environment.
NOTE:
After you develop your content, if you want it to be available as part of a content update for the production tenant or additional development
tenants, you must push content from a development tenant.
The production tenant is the operational environment for investigating real data. It pulls content as updates that you can install after the
development tenant pushes it to the remote repository. For more information, see Install content on a production tenant.
In a system with a single production tenant and several development tenants, only one development tenant can push content. The production
tenant and any other development tenants pull from the one development tenant that is configured to push content. For example, you can have an
additional development tenant for testing that pulls content from the development tenant configured to create and edit content.
All system content, content updates, and custom (user-defined) content are managed (downloaded, installed, edited, created, and updated) only in
the development tenant that pushes content. For example, system content updates from Marketplace are only delivered to the development tenant
that is configured to push. You cannot create or edit content in a production tenant or additional development tenant; they are configured only to
pull content (except for dashboards and lists).
When pushing content from the development tenant, the content is synchronized and pulled into the production or other development tenants as
content updates. For more information, see Push content from a development tenant.
You can decide which updates you want to push from the development tenant to pull tenants through the remote repository.
When you set up a remote repository, you can add any private content repository that is Git-based, including GitHub, GitLab, and Bitbucket.
On-prem repositories are also supported.
Although you can set up multiple development tenants, in a cluster of tenants that includes one production tenant and one or more development
tenants, only one development tenant can push content. The production tenant and any other development tenants pull from the one development
tenant that is configured to push content. After the remote repository is enabled in the production tenant, by default, the first development tenant
that has been installed is set to push content to the remote repository. When you create additional development tenants, they are set to pull
content from the remote repository.
If the content repository option is disabled for the production or development tenant, the tenant becomes standalone and does not push or pull
content.
Once the development tenant is set up, you can only change content repository settings within the tenant.
The following are typical scenarios for setting up a private remote repository for the production and one or more development tenants.
The production tenant is first activated as a standalone (by default), and the private remote repository is then enabled in the production
tenant. Once enabled, the first development tenant becomes the push tenant, the production tenant becomes a pull tenant, and any
additional tenants need to be set as pull tenants.
The production and development tenants were managed in parallel with different sets of content.
Verify that you have network connectivity from Cortex XSOAR to the private remote repository. All communication goes through Cortex
XSOAR, so it must have access to the remote repository. If direct access from Cortex XSOAR is not enabled you can use engines with
access to the repository.
TIP:
Due to security concerns, there is a closed allow list of approved URLs for private repositories. If you want to use a URL that is excluded
from the allow list, use an engine (engine groups are not supported).
If you are changing your remote repository settings, back up existing content to your local computer by navigating to Settings & Info →
Settings → System → Server Settings → Custom Content and clicking Export all custom content.
Download and install the development image file. For more information, see Step 3. Set up a remote repository.
Perform the following procedures in the order listed below to set up a private remote repository.
NOTE:
When the first tenant (development or production) is enabled for the remote repository, the content from that tenant automatically populates the
repository. When you first enable additional tenants (development or production) to the same remote repository, you will see the Specified
repository is not empty window and have the option to use the content in the remote repository or replace the content with content from the new
tenant.
These instructions describe enabling the production tenant first, so the remote repository will initially contain production tenant content. You can
enable a development tenant first if you want the remote repository to initially contain the content from the development tenant.
1. On the production tenant, go to Settings & Info → Settings → Advanced → Content Repository and toggle the Content repository slider to
enable the content repository.
For repository vendors that use tokens, enter the token type in the username field and the token in the password field. Verify details
with your vendor.
If your private Git remote repository uses personal access tokens instead of usernames and passwords, enter the token type in the
username field and the access token in the password field. The username field value depends on your Git configuration, and it is not
case sensitive. For example, if you use an OAuth2 token, you can enter oauth2 or OAuth2 in the username field.
If using SSH, only RSA private keys are supported. If your SSH connection uses a port other than port 22 (the default SSH port), you
must include the SSH string and port number in the Repository URL field. In the following example, we use port 20017:
ssh://[email protected]:20017/~/my-project.git
4. In the Advanced section, the engine is set by default. You can change the engine by selecting from the list of available engines.
NOTE:
You can't add an engine that has been added to a Load-Balancing Group.
Once enabled, the first development tenant automatically becomes the push tenant.
1. On the development tenant, go to Settings & Info → Settings → Advanced → Content Repository and toggle the Content repository slider to
enable the content repository.
When set to On, the sync direction for the development tenant is Push. Set the sync direction for any additional development tenants to Pull.
If your private Git remote repository uses personal access tokens instead of usernames and passwords, enter the access token in the
password field and leave the username field blank.
For repository vendors that use tokens, the token type is entered in the username field and the token is entered in the password field.
Verify details with your vendor.
If using SSH, only RSA private keys are supported. If your SSH connection uses a port other than port 22 (the default SSH port), you
must include the SSH string and port number in the Repository URL field. In the following example, we use port 20017:
ssh://[email protected]:20017/~/my-project.git
4. (Optional) In the Advanced section, you can add any engines you want to connect.
6. For any additional tenants that are enabled for the remote repository, select which content to keep and which to overwrite.
After the first tenant is enabled for the remote repository, its content automatically populates the remote repository (which in this example
initially contains the production tenant content after it is enabled).
1. Disable the remote repository in any previously enabled tenants. In this example, when enabling the first development tenant, only the
production tenant is already enabled, so only the production tenant must be disabled.
3. Complete synchronization.
4. Re-enable the remote repository in any additional tenants and select Existing content on the specified repository in each
additional tenant.
Existing content on the specified repository: Deletes the existing content on your tenant and replaces it with content from the specified
repository.
7. Click Continue.
After completion, all tenants are now synced. You can start creating and testing content on the development tenant that you can push to
production and additional development tenants when ready.
Push content to a remote repository and control access for pushing content.
Once you develop your content, for it to be available as part of a content update for the production tenant, you must push the changes from the
development tenant.
CAUTION:
You should not manually export content from the development tenant to import to the production tenant. Use only the procedures outlined in the
documentation to ensure that your content is properly updated in the production tenant.
On each page you can decide whether to include or exclude items, which prevents them from being pushed to production, on a temporary or
permanent basis. You can only exclude individual content items, not content packs.
The following types of content can be synchronized between development and production tenants:
Scripts
Playbooks
Integrations: Integration instances are not pushed to the production tenant. Only customized integration YML files are pushed to the
production tenant.
Content packs: When pushing a content pack to the production tenant, we recommend pushing all of the content for the content pack to
work properly.
Evidence fields
Pre-processing rules: If you reorder your pre-processing rules you must push all of the pre-processing changes to the production tenant.
Lists
Reports: When pushing a report to the production tenant, the time range set in the report on the development tenant does not sync with the
production tenant.
Dashboards
Widgets
2. Under the Local Changes section, go to the relevant page according to the items you want to push:
Items
Content that is not related specifically to a content pack. For example, customized scripts or playbooks. When creating custom
content, the content is automatically added here. If you have already pushed a content pack and later edit one of its content items, the
edited items appear in the Content Pack Items page, not the Content Packs page.
Content Packs
All of the content that is specific to the content packs you installed from Marketplace.
If you do not want to install the whole content pack, you can install specific items in the content pack.
3. Select the items you want to push to production, and click Push.
4. If the items have dependencies, review the contents and click Push.
Sometimes you may not want to push all content, content pack dependencies, etc. For example, when a user makes a change in a playbook
that includes a script dependency to which another user is adding a feature, and the change does not require the new feature (version) of
the script, you can push the playbook without the new script.
Install new content that has been pushed from the development tenant to the production tenant.
After you push content from the development push tenant, on the right-hand side of any page in the production tenant you have the option to
install the content. In case of conflicts, you have a choice whether to keep local content or delete and replace.
You can also check for new content that has been pushed.
Replace: Deletes the local content and installs the content from the content repository.
Scenarios that occur when managing content with a remote repository in Cortex XSOAR.
The following scenarios can occur when managing the remote repository.
If you configure a tenant to use a remote repository, you have two options:
Overwrite all content in the tenant with content from the remote repository.
Overwrite all content in the remote repository with content from the tenant.
To overwrite the remote repository with content from the tenant, you must use an empty branch. If the branch is not empty, you will get an
error message prompting you to select an empty branch. Alternatively, you can select the first option and overwrite all content in the tenant
with the content from the remote repository.
If you switch between built-in and private remote repository types, you get a warning that switching between repository types may result in the loss
of all version history.
To keep your content history, select Existing content on your tenant to overwrite all content in the remote repository with content from your tenant.
Configure and manage roles, users, and user groups, and set up authentication in Cortex XSOAR On-prem.
Learn how to configure and manage users, roles, and user groups. Assign roles and set up authentication for users.
Set up and configure roles and user groups in Cortex XSOAR. Configure authentication, and manage and create users.
Cortex XSOAR uses role-based access control (RBAC) to manage roles with specific permissions for controlling user access. RBAC helps manage access
to components, so that users, based on their roles, are granted the minimal access required to accomplish their tasks.
Roles
Roles enable you to define permissions for specific components, such as incident data, playbooks, scripts, and jobs. For example, you can create
a role that allows users to edit the properties of incidents, but not delete incidents. You can create new roles or customize out-of-the-box roles.
If you assign one or more roles to an incident, only users with those roles can view and interact with the incident. For example, you might have an
incident with sensitive data that should only be accessible to Tier-1 analysts and managers.
Roles can also be used to define permissions for integration commands. On the Integration Permissions page, you can assign roles to specific
integration instances (all commands for that instance) or specific integration instance commands. For example, you could assign the Generic
Export Indicators Service integration instance the Account Admin role, or you could restrict certain commands in the Core Rest API to a specific
role. For more information, see Integration Permissions.
User groups
While roles can be assigned directly to users, we recommend instead creating user groups. Each user group has a single role associated with it,
but each user group can contain multiple users and user groups can be nested within each other, enabling you to further refine your RBAC
requirements. Users can belong to multiple user groups.
Nested roles
Cortex XSOAR 8 uses group nesting, where the group with higher permissions is included as a subset of the group with lower permissions. For
example, the Admin user group is included as a subset of the Analyst user group. The Admin role includes the permissions of the Analyst role, the
same as in Cortex XSOAR 6.
For example, Content Developer and Analyst user groups include Employee user group permissions, and are nested in the Employee user group.
Authentication
You can create users locally or by using SAML Single Sign-On (SSO) in the tenant. After you create users, they authenticate by either:
Using their username and password
Using SSO
Manage users
You can manage users including resetting passwords, sending invitations, and removing users.
By default, users do not have roles assigned and do not automatically have access to tenant data until you assign them a role or add them as
members of a user group that has an assigned role.
You can assign the following permissions to various components in Cortex XSOAR:
Permission Description
None The user role cannot view or edit the component.
View The user role can view the component, but cannot take any action.
View/Edit The user role can view and edit the component.
Out-of-the-box roles
Role Type Description
Account Admin Predefined The user who supplied their credentials when installing Cortex XSOAR is assigned the Account Admin role. This user has view/edit permissions for all components and access to all pages in the Cortex XSOAR tenant (the same view/edit permissions as the Instance Administrator). You cannot create additional Account Admin roles in Cortex XSOAR. You cannot edit this role. You can copy the role by saving it as a new role and then change permissions.
Instance Administrator Predefined View/edit permissions for all components and access to all pages in the Cortex XSOAR tenant. The Instance Administrator can also assign the Instance Administrator role to other users on the tenant. If the application has predefined or custom roles, the Instance Administrator can assign those roles to other users. You cannot edit this role. You can copy the role by saving it as a new role and then change permissions.
Analyst Custom A mix of view and view/edit permissions for all components and access to all pages in the Cortex XSOAR tenant.
Read-Only Custom Read permissions for all components and pages in the Cortex XSOAR tenant.
NOTE:
By default, users do not have roles assigned. If no direct or user group role has been assigned, users have no permission to view or edit data in
Cortex XSOAR.
Next steps
Decide whether you want to assign roles to users directly or through membership in user groups (recommended) in the Cortex XSOAR
tenant.
Abstract
When creating or editing a role, you can set permission levels (RBAC) for specific components (such as playbooks, scripts, jobs, etc.), set page
access, define preset role queries, and set up shift management.
In the Cortex XSOAR tenant, you can set permission levels for each role by going to Settings & Info → Settings → Access Management → Roles
and editing or creating a new role.
NOTE:
You can only create, edit, copy, or delete a role if you have administrator (Instance/Account Admin) permissions. You cannot change the
predefined (Instance Administrator or Account Admin) role permissions.
The Components tab includes the following areas where you can define permissions.
Data
NOTE:
You need to select View/Edit to see the permissions for the components.
Component Description
Data Sets the permission level generally for data related to investigations, dashboards, and reports. If you select none, the user role cannot view and edit incidents, indicators, dashboards, and reports.
Execute potentially harmful actions Allows executing integration commands that are marked as Potentially Harmful in the integration code/settings. Users can run these commands from the CLI. Playbook tasks that use these commands would not be affected, as they are run by the DBot user as part of playbook execution.
Edit incident properties Allows editing an incident's fields from the layout or via the Actions menu.
Change the incident status Allows editing an incident's status, which includes closing an incident, or investigating an incident which is in the Pending status.
Delete incidents Allows deleting incidents. We recommend only granting this permission to the default Admin or select Administrators.
Manage incident workplan Allows interacting with the playbook for the incident.
Edit indicators Allows editing indicators either from the Threat Intel pane or when viewing the indicator via its full layout or quick view tab.
Retain incidents Allows marking an incident for permanent retention or disabling retention for an incident. Retained incidents cannot be deleted.
Incidents Table Actions Limits table actions in the Incidents page, such as delete, command line actions, edit, close, and mark as duplicate.
Exclusion list
Component Description
EXCLUSION LIST Limits permissions when editing, creating, or deleting an indicator in an exclusion list.
Playbooks
Component Description
NOTE:
You can also add, change, and remove roles from a playbook by clicking Settings on the Playbooks page.
Scripts
Component Description
Scripts Limits permissions for managing scripts. If the role has read/write permissions, you can enable user roles to create scripts that run as a Super User. On the Scripts page, you can define which roles are permitted to run a script, and according to which role the script executes.
Jobs
Component Description
Jobs Limits permissions for managing jobs. Roles that have read permissions to content items retain partial read access. If you do not want to retain partial read access, set the permission to none.
Marketplace
Component Description
Marketplace View: The user role can view, but not take any action in Marketplace.
View/Edit: The user role can install, upgrade, downgrade, and delete content packs in Marketplace.
Configurations
Section Setting Description
General Settings Auditing Whether a user role can access the Management Audit Logs page.
General Settings Alert Notifications Whether a user role can forward Management Audit Logs to an email distribution list or a syslog server.
Integrations Public API Whether a user role can access the API Keys page. View/Edit enables the user role to manage API keys, including creating, editing, and deleting.
NOTE:
If you select None, the user role can still use the API, but they cannot view API keys in the UI.
Integrations Integrations Whether a user role can view, add, edit, or delete integration instances, pre-process rules, and classify and map incidents and indicators. Roles that have view permissions for content items retain partial read access. If you do not want to retain partial read access, set the permission to none.
Integrations Integrations Permissions Enables you to set the permissions on the Integration Permissions page. Integration permissions enable you to assign different permission levels for the same command in each instance.
Integrations Credentials Whether a user role can add, edit, or delete integration credentials.
Object Setup Fields and Types Whether a user can add, edit, or delete fields and types for indicators, incidents, and Threat Intel Reports.
Object Setup Layouts Whether a user can add, edit, or delete layouts for indicators, incidents, and Threat Intel Reports.
Advanced Administration Limits permissions for administration tasks, such as server configurations, audit trails, and changing logos.
Advanced Propagation Labels (Main tenant) Whether a user can add, edit, or delete propagation labels in Tenant Management.
Advanced Tenant Management (Main tenant) Whether a user can add, edit, or delete child tenants in Tenant Management.
Page Access
Select the pages the user role should have access to.
NOTE:
If you select None in the Data section, even though you allow page access, the user role cannot access those pages. For example, if you allow
page access to Dashboards, but DATA is set to none, the user role cannot access the Dashboards page.
Define access to default dashboards, pre-set role queries, and shifts. For more information, see Manage roles in the Cortex XSOAR tenant.
Component Description
DEFAULT DASHBOARDS Select the default dashboards for each role. If a user has not modified their dashboard, these dashboards are added automatically; otherwise, users can add these dashboards to their existing dashboards.
PRE-SET ROLE QUERIES Select the preset query for each of the available components.
SHIFTS Weekly shifts start on Sunday and are specified in the UTC time zone.
Abstract
On the Roles page, you can view all roles in Cortex XSOAR, including whether they are custom roles, who created each role, when it was created,
the tenant (main or child) where the role was created, and additional information about the roles. When right-clicking a role, you can edit the role
and its permissions.
Predefined roles: Includes Account Admin and Instance Administrator roles. Permissions cannot be changed. You can create a duplicate of
these roles but you cannot remove them.
When right-clicking a role, you can perform several actions, such as editing a role, saving it as a new role, and removing a role (deleting a role that
is not assigned to a user).
Create a role
The roles you create provide more granular access control. You can add as many new roles as you need and combine them with user groups.
When you create or edit a role, you can perform activities such as adding permissions and permission levels, defining shift periods, and setting
default dashboards.
TIP:
Remove the ability to delete incidents in production environments (DATA → Data → Delete incidents).
Remove the ability to install, delete, and contribute to Marketplace which should be reserved for engineers and administrators. We
recommend setting Marketplace permissions for analysts to None or View.
Remove access to API keys. Under CONFIGURATIONS , set the Public API access to None or View. If you select None, the user role can
still use the API, but they cannot view API keys in the UI.
1. In the Cortex XSOAR tenant, select Settings & Info → Settings → Access Management → Roles → New Role.
TIP:
We recommend making a copy of out-of-the-box roles and editing the copies, rather than creating new roles, to avoid missing any
important permissions.
3. In the Components tab, add the permissions as required. For more information, see Role-based permissions.
6. You can create user groups and add roles to them (recommended), assign roles directly to users after they have been added, or both.
Define dashboards
In a production environment, an administrator defines the default dashboard for each user and selects the default dashboards that the user sees
when logging into the tenant, depending on a user's role. If a user has not modified their dashboard, these dashboards are added automatically;
otherwise, users can add these dashboards to their existing dashboards. These default dashboards can be removed but not deleted, and can be
added again if required.
If you select Only allow these dashboards, the current role will only be able to access the designated default dashboards. The role will not be able
to import, edit, create, or duplicate any other dashboards. It will not be possible to share any additional dashboards with this role.
To share additional dashboards with such a role:
The admin unselects Only allow these dashboards for the role.
The user exports the relevant dashboards and shares them with the admin.
The admin then adds the relevant dashboards to the default dashboards list for the role and reselects Only allow these dashboards.
A default query associated with a user’s role is useful for new users who are unsure which query to use when accessing the incident, indicators,
and jobs pages. When accessing the relevant page, the role's preset query is the default query for a new user. Existing users can keep their
default query, but the preset query is available for selection.
When you define or edit a role, in the Advanced tab, you can view or edit a list of queries for incidents, indicators, and jobs, which are based on
your saved queries for these components.
1. On the component page, such as the Incidents page, create the query.
The preset query runs when a user with that role accesses that component page. If you update the preset query for a role, the query is
added to the users’ queries, but not as the preset query. If you delete one of your queries after you configure a role, the role’s list of queries
is unaffected.
Users can view the preset query based on their role when clicking the ellipsis on each component page. The preset role query has (Pre-set)
appended to the name of the query. Although users can change their default query, they cannot delete the preset role query. If a user has
permissions for multiple roles, the user sees multiple queries. The preset role queries appear at the top of the saved queries list.
If a user’s role changes, the user’s preset role query is automatically updated.
Shift management helps you define multiple shifts within Cortex XSOAR. You can create user groups, so each shift can be assigned to a user
group role, and you can assign one or more analysts across different shifts.
Enable incidents to be routed automatically to analysts based on shifts, ensuring full staff coverage for incoming incidents.
Define multiple shifts, which can be added to a role, and in turn assigned to a user group.
NOTE:
To view suggestions for on-call users to assign to an incident, run the getOwnerSuggestions command with the shiftOnly=true argument.
When assigning an incident, you can manually assign it to analysts who are on-call or you can use the AssignAnalystToIncident script with
argument onCall=true to automatically assign it to users who are on call and active.
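For example, from the incident CLI:
!AssignAnalystToIncident onCall=true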
2. In the Advanced tab, Shifts field, click Add Shift and add the required period.
Weekly shifts start on Sunday and are specified in the UTC time zone.
For example, create a role called First Shift and add a shift starting on Sunday and ending Monday.
For more information about how to create a user group, see User group management.
(Optional) We recommend installing the Shift Management content pack. This content pack includes widgets to view Roles Per Shift, Users On-
Call, and more in a dashboard, as well as playbooks and scripts for assigning incidents to on-call users.
Create user groups, and assign roles and users to further refine your requirements.
Users are assigned roles and permissions either by being assigned a role directly or by being assigned membership in one or more user
groups. A user group can only be assigned to a single role, but users can be added to multiple groups if they require multiple roles. You can also
nest groups to achieve the same effect. Users who have multiple roles through either method will receive the highest level of access based on the
combination of their roles.
For example:
Joe has an Analyst role and is a member of the Tier-1 Analyst user group, which is assigned the Triage role. Joe has the permissions of the
Analyst role and the Triage role. Joe is assigned two roles, and has the highest permission based on the combination of both roles.
John is a member of two user groups - Tier-1 Analyst and Tier-2 Analyst. One group is configured to use the Triage role and the other group
is configured to use the Incident Response role. John is assigned both roles and has the highest permissions based on the combination of
all roles.
Jack is a member of the Tier-2 user group which has an Incident response role. This user group is included in a Tier-3 user group (Threat
Hunter role), added as a nested group. Jack is assigned both roles and has the highest permissions based on the combination of all roles.
On the User Groups page, you can create a new user group for several different system users or groups. You can see information including the
details of all user groups, the roles, nested groups, IdP groups (SAML), and when the group was created/updated.
You can also right-click in the table to edit, save as a new group, remove (delete) a group, and copy text to the clipboard.
2. To create a new user group for several different system users or groups, click New Group, and add the following parameters:
Parameter Description
Role Select the group role associated with this user group. You can only have a single role designated per group.
Users Select the users you want to belong to this user group.
NOTE:
If users have been created locally, but you want them to access the tenant through SSO only, skip this field and add
only SAML group mapping after SSO is set up; otherwise, users can access the tenant through their username and
password and through SSO.
If you have not yet created any users, skip this field and add them later. See Set up authentication.
Parameter Description
Nested Groups Lists any nested groups associated with this user group. If you have an existing group you can add a nested group.
User groups can include multiple users and nested groups, which inherit the permissions of parent user groups. The user group will have the highest level of permission.
For example:
If you add Group A as a nested group in Group B, Group A inherits Group B's permissions (Tier-1 and Tier-2 permissions).
SAML Group Mapping Maps the SAML group membership to this user group. For example, you have defined a Cortex XSOAR Admins group. You need to name this group exactly as it appears in Okta.
You can add multiple groups by separating them with a comma.
NOTE:
When using Azure AD for SSO, the SAML group mapping needs to be provided using the group object ID (GUID) and not the group name.
If you have not set up SSO in your tenant, skip this field and add it later. After you have added it, follow the procedure relevant to your IdP. For example, see Task 6. Map SAML Group Memberships to Cortex XSOAR User Groups.
Available Tenants (Only available in Main Tenant) Displays the list of child tenants that are paired with the Main Tenant.
Users and roles in the child tenant are updated from the Main Tenant only when the user group created includes the child tenant and the role and user defined in the Main Tenant.
NOTE:
User groups created on the Main Tenant cannot be edited or deleted from the child tenants.
Decide whether you want to add users locally or through SSO in Cortex XSOAR On-prem.
You can create users locally or by using SSO in the tenant. Users authenticate by doing one of the following:
Authenticate locally
After you create users, they authenticate using their username and password. For more information, see Create users in Cortex XSOAR.
Authenticate using SSO
Users can be authenticated using your IdP, such as Okta, Ping, or Azure AD. You can use any IdP that supports SAML 2.0.
After you have created users, add them to user groups or assign roles directly.
Enforces multi-factor authentication (MFA) and any conditional access policies on the user login at the IdP before granting a user access to
Cortex XSOAR.
Maps SAML group memberships to user groups and roles, allowing you to manage role-based access control.
Removes access to Cortex XSOAR when a user is removed or disabled in the IdP.
Abstract
Create users in Cortex XSOAR on-prem by inviting users to access Cortex XSOAR using their username and password.
To add users locally (not SSO), you can either send an invitation to users by adding their details manually or by uploading a CSV file with multiple
users.
PREREQUISITE:
Add an email integration instance, such as EWS v2, EWS O365, or Gmail.
When you invite users to Cortex XSOAR, an email is sent to their email address, using the integration instance. After inviting users you
can also copy the invitation link and send it to the users.
Review the predefined roles and consider whether you want to create roles and user groups before or after inviting users to Cortex
XSOAR.
When you send invitations to users you can invite them without a role assigned. Although they can log into Cortex XSOAR, they cannot
view/edit any data. This is useful if you want to add multiple users at one time and then define roles at a later stage, rather than users
having access immediately after accepting an invitation.
If you want to add multiple users with different roles, you should split them up according to their roles before inviting them. You can add
them manually or in a CSV format according to their role. Alternatively, leave blank for no role.
After you invite a user, an invitation is valid for seven days from the time it was sent. If not accepted, the invitation expires unless the invite
expiration is reset. Once users accept the invite they have access and permissions within Cortex XSOAR according to their assigned roles. You
have the option to copy the invite link, resend, or cancel the invitation. The invite link cannot be used after the invite has expired.
1. Select Settings & Info → Settings → Access Management → Users → Add User.
3. Repeat the above steps for any other users you want to add, if they have the same role, user group, or no role.
You cannot select different roles and user groups for multiple users.
NOTE:
Users created on a child tenant can’t be assigned to a user group or role that was set up in the main tenant.
Upload a file
NOTE:
At least one row must exist, including email address and first and last names.
You cannot select different roles and user groups for each user. If you want different roles and user groups for each set
of users, upload separate files.
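A minimal illustrative CSV (the exact column headers depend on the template provided in the UI; these are placeholders):
email,first name,last name
jdoe@example.com,Jane,Doe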
If you have set up a mail integration, users will receive a link to access Cortex XSOAR. When accessing the link, users need to set their
password and will then be able to log in.
3. If you have not already done so, add roles and user groups to users.
Abstract
Cortex XSOAR enables you to securely authenticate system users across enterprise-wide applications and websites with one set of credentials
using single sign-on (SSO) with SAML 2.0. System users can authenticate using your organization's Identity Provider (IdP), such as Okta or
PingOne. You can integrate with any IdP that supports SAML 2.0.
Configuring SSO with SAML 2.0 is dependent on your organization’s IdP. Some of the parameter values need to be supplied from your
organization’s IdP and some need to be added to your organization’s IdP. You should have sufficient knowledge about IdPs, how to access your
organization’s IdP, which values to add to Cortex XSOAR, and which values to add to your IdP fields.
NOTE:
To set up SSO authentication in the tenant, you must be assigned an Instance Administrator or Account Admin role.
SAML 2.0 users must log in to Cortex XSOAR using the FQDN (full URL) of the tenant. To allow login directly from the IdP to Cortex
XSOAR, you must set the relay state on the IdP to the FQDN of the tenant.
If you have multiple tenants, you must set up the SSO configuration separately for each tenant, both in the IdP and in Cortex XSOAR.
Create groups in your IdP that correspond to the roles in Cortex XSOAR and assign users to those groups in your IdP. Users can belong
to multiple groups and receive permissions associated with multiple roles. Add the appropriate SAML group mapping from your IdP to
each Cortex XSOAR role.
If you are configuring Okta or Azure, follow the procedure in Okta or Azure AD. You can also adapt these instructions for use with any similar
SAML 2.0 IdP.
The SSO settings are displayed so you can configure them according to your organization's IdP.
3. If you want to add another SSO connection to enable managing user groups with different roles and different IdPs, click Add SSO
Connection.
Additional SSO parameters are displayed, which you configure according to your organization's additional IdP.
NOTE:
The first SSO cannot be deleted; it can only be deactivated by toggling SSO Enabled to off.
If you add additional SSO providers, you must provide the email Domain in the SSO Integration settings for all providers except the
first. Cortex XSOAR uses this domain to determine which identity provider the user should be sent to for authentication.
When mapping IdP user groups to Cortex XSOAR user groups, you must include the group attribute for each IdP you want to use.
For example, if you are using Microsoft Azure and Okta, your Cortex XSOAR user group SAML Group Mapping field must include
the IdP groups for each provider. Each group name is separated by a comma.
Whenever an SSO user logs in to Cortex XSOAR, the following login options are available.
If you have enabled more than one SSO provider, an optional email field appears. If the user does not enter an email address or if the
email address does not match an existing domain, the user is automatically directed to the default IdP provider (the first in the list of
SSO providers in the Authentication Settings). If the user enters an email address and it matches a domain listed in the Domain field
in the SSO Integration settings for one of your IdPs, Sign-In with SSO sends the user to the IdP associated with that email domain.
General parameters
Parameter Description
IdP SSO or Metadata URL Select the option that meets your organization's requirements.
Single Sign-On URL Indicates your SSO URL, which is a fixed, read-only value based on your tenant's URL using the format https://<name of Cortex XSOAR>.crtx.paloaltonetworks.com/idp/saml. For example, https://tenant1.crtx.paloaltonetworks.com/idp/saml.
IdP SSO URL Specify your organization's SSO URL, which is copied from your organization's IdP.
Metadata URL
Parameter Description
Audience URI (SP Entity ID) Indicates your Service Provider Entity ID, also known as the ACS URL. It is a fixed, read-only value using the format https://<name of Cortex XSOAR>.crtx.paloaltonetworks.com. For example, https://tenant1.crtx.paloaltonetworks.com.
Default Role (Optional) Select the default role that you want any user to automatically receive when they are granted access to Cortex XSOAR through SSO. This is an inherited role and is not the same as a direct role assigned to the user.
IdP Issuer ID Specify your organization's IdP Issuer ID, which is copied from your organization's IdP.
X.509 Certificate Specify your X.509 digital certificate, which is copied from your organization's IdP.
Domain Relevant only for multiple SSOs. For one SSO, this is a fixed, read-only value. Associate this IdP with a specific email domain (user@<domain>). When logging in, users are redirected to the IdP associated with their email domain or to the default IdP if no association exists.
Parameter Description
Group Membership Specify the group membership mapping according to your organization's IdP.
NOTE:
Cortex XSOAR requires the IdP to send the group membership as part of the SAML token. Some IdPs send values in a format that includes a comma, which is not compatible with Cortex XSOAR. In that case, you must configure your IdP to send a single value without a comma for each group membership. For example, if your IdP sends the Group DN (a comma-separated list) by default, you must configure the IdP to send the Group CN (Common Name) instead.
First Name Specify the first name mapping according to your organization's IdP.
Last Name Specify the last name mapping according to your organization's IdP.
Advanced settings
The following advanced settings are optional to configure and some are specific for a particular IdP.
Parameter Description
Relay State (Optional) Specify the URL for a specific page that you want users to be directed to after they've been authenticated by your organization's IdP and log in to Cortex XSOAR.
IdP Single Logout URL (Optional) Specify your IdP single logout URL provided by your organization's IdP to ensure that when a user initiates a logout from Cortex XSOAR, the identity provider logs the user out of all applications in the current identity provider login session.
SP Logout URL (Optional) Indicates the Service Provider logout URL that you need to provide when configuring a single logout from your organization's IdP to ensure that when a user initiates a logout from Cortex XSOAR, the identity provider logs the user out of all applications in the current identity provider login session. This field is read-only and uses the following format: https://<name of Cortex XSOAR>.crtx.paloaltonetworks.com/idp/logout, such as https://tenant1.crtx.paloaltonetworks.com/idp/logout.
Service Provider Public Certificate (Optional) Specify your organization's IdP service provider public certificate.
Service Provider Private Key (PEM Format) (Optional) Specify your organization's IdP service provider private key in PEM format.
Remove SAML RequestedAuthnContext (Optional) Requires users to log in to Cortex XSOAR using additional authentication methods, such as biometric authentication. Selecting this removes the error generated when the authentication method used for previous authentication is different from the one currently being requested. See here for more details about the RequestedAuthnContext authentication mismatch error.
Force Authentication (Optional) Requires users to reauthenticate to access the Cortex XSOAR tenant if requested by the IdP, even if they already authenticated to access other applications.
The following list describes the common errors and issues when using SAML 2.0 authentication.
Errors in your IdP could mean the Service Provider Entity ID and/or Service Identifier are not properly configured in the IdP or in the Cortex
XSOAR settings.
SAML attributes from the IdP are not properly mapped in Cortex XSOAR. The attributes are case sensitive and must exactly match in your
IdP and in the Cortex XSOAR IdP Attributes Mapping.
Group memberships from the IdP have not been properly mapped to Cortex XSOAR user groups. Verify the values your identity provider is
sending, to properly map the groups in Cortex XSOAR.
The identity provider is not configured to sign both the SAML response and the assertion on the login token. Your IdP must be configured to
sign both to ensure a secure login.
If you require further troubleshooting, we recommend using your browser's built-in developer tools or additional browser plugins to capture
the login request and SAML token.
This topic provides specific instructions for using Okta to authenticate your Cortex XSOAR users. As Okta is third-party software, specific
procedures and screenshots may change without notice. We encourage you to also review the Okta documentation for app integrations.
To configure SAML SSO in Cortex XSOAR, you must be a user who can access the Cortex XSOAR tenant and have either the Account Admin or
Instance Administrator role assigned.
The following video is a step-by-step guide to configure SSO in Cortex XSOAR (specific Okta instructions begin at minute 3:30).
Within Okta, assign users to groups that match the user groups they will belong to in Cortex XSOAR. Users can be assigned to multiple Okta
groups and receive permissions associated with multiple user groups in Cortex XSOAR. Use an identifying word or phrase, such as Cortex
XSOAR, within the group names. For example, Cortex XSOAR Analysts. This allows you to send only relevant group information to Cortex
XSOAR, based on a filter you will set in the group attribute statement.
Create a list of the Okta groups and their corresponding Cortex XSOAR user groups (or the Cortex XSOAR user groups you intend to create) and
save this list for later use when configuring user groups in Cortex XSOAR.
Task 2. Copy Single Sign-On URL and Audience URI Values from Cortex XSOAR
1. In Cortex XSOAR, go to Settings & Info → Settings → Access Management → Authentication Settings.
4. Copy and save the values for Single Sign-On URL and Audience URI (SP Entity ID).
You cannot save the enabled SSO Integration at this time, as it requires values from your IdP.
1. In Okta, create a Cortex XSOAR application and Edit the SAML Settings.
2. Paste the Single sign-on URL and the Audience URI (SP Entity ID) that you copied from the Cortex XSOAR SSO settings. The Audience
URI should also be pasted in the Default RelayState field, which allows users to log in to Cortex XSOAR directly from the Okta dashboard.
3. Click Show Advanced Settings, verify that Okta is configured to sign both the response and the assertion signature for the SAML token, and
then click Hide Advanced Settings.
4. Cortex XSOAR requires the IdP to send four attributes in the SAML token for the authenticating user.
Email address
Group membership
First Name
Last Name
Configure Okta to send group memberships of the users using the memberOf attribute. Use the word or phrase you selected when
configuring Okta groups (such as Cortex XSOAR) to create a filter for the relevant groups.
5. Copy the exact names of the attribute statements from Okta and save them, as they are required to configure the Cortex XSOAR SSO
integration. In this example, the names are FirstName, LastName, Email, and memberOf. The attribute names are case-sensitive.
1. In Okta, from your Cortex XSOAR application page, click View SAML setup instructions. If you do not see this button, verify you are on the
Sign On tab of the application.
2. Copy and save the values for Identity Provider Single Sign-On URL, Identity Provider Issuer, and the X.509 Certificate. These values are
needed to configure your Cortex XSOAR SSO Integration.
1. In Cortex XSOAR, go to Settings & Info → Settings → Access Management → Authentication Settings.
4. Use the following table to complete the SSO Integration settings, based on the values you saved from Okta.
5. In the IdP Attributes Mapping section, enter the attribute names from Okta. The names are case-sensitive and must match exactly.
3. In the SAML Group Mapping field, add the Okta group(s) that should be associated with this user group. Separate multiple groups
with commas. The Okta group name must match the exact value sent in the token.
2. After authentication to Okta, you are redirected again to the Cortex XSOAR tenant.
3. When logged in, validate that you have been assigned the proper roles.
To view your role and any role assigned to a user group you are a member of, click your name in the bottom left-hand corner, and click
About.
This topic provides specific instructions for using Azure AD to authenticate your Cortex XSOAR users. As Azure AD is third-party software, specific
procedures and screenshots may change without notice. We encourage you to also review the Azure AD documentation.
To configure SAML SSO in Cortex XSOAR, you must be a user who can access the Cortex XSOAR tenant and have either the Account Admin or
Instance Administrator role assigned.
The following video is a step-by-step guide to configuring SSO in Cortex XSOAR (specific Azure AD instructions begin at minute 12:42).
Within Azure AD, assign users to security groups that match the user groups they will belong to in Cortex XSOAR. Users can be assigned to
multiple Azure AD groups and receive permissions associated with multiple user groups in Cortex XSOAR. Use an identifying word or phrase,
such as Cortex XSOAR, within the group names. For example, Cortex XSOAR Analysts. This allows you to send only relevant group information to
Cortex XSOAR, based on a filter you will set in the group attribute statement.
Task 2. Copy Single Sign-On URL and Audience URI Values from Cortex XSOAR
1. In Cortex XSOAR, go to Settings & Info → Settings → Access Management → Authentication Settings.
4. Copy and save the values for Single Sign-On URL and Audience URI (SP Entity ID).
You cannot save the enabled SSO Integration at this time, as it requires values from your IdP.
1. From within Azure AD, create a Cortex XSOAR application and Edit the Basic SAML Configuration.
2. Paste the Single sign-on URL and the Audience URI (SP Entity ID) that you copied from the Cortex XSOAR SSO settings. The Single sign-
on URL from Cortex XSOAR should be pasted in the Reply URL and the Sign on URL fields. The Audience URI (SP Entity ID) value from
Cortex XSOAR should be pasted in the Identifier (Entity ID) field.
3. In the SAML Certificates section, click Edit and verify that Azure is configured to sign both the response and the assertion.
4. To have Azure AD send group membership for the user in the SAML token, you must + Add a group claim in the Attributes & Claims section.
Send the Security groups, using the source attribute Group ID. Use the word or phrase you selected when configuring Azure AD security
groups (such as Cortex XSOAR) to create a filter. Customize the name of the group claim as memberOf.
5. In addition to group membership, verify that there are also claims for:
Email address
First Name
Last Name
1. In Azure, from the Single sign-on page, in the Set up Cortex XSOAR Production section, copy the values for the Login URL and Azure AD
Identifier. You need these values to configure the SSO Integration in Cortex XSOAR.
2. Edit Attributes & Claims and copy the values in the Claim name column. The claim name is case sensitive. You need these values to
configure the SSO Integration in Cortex XSOAR.
NOTE:
The default attributes shown on the main single sign-on page in Azure AD are not the values you need. You must click Edit next to
Attributes and Claims to view and copy the actual values.
From the SAML Certificates section in Azure AD, Download the Certificate (Base64). You need the contents of this file to configure the Cortex
XSOAR SSO Integration.
The claim for the membership attribute that is sent to Cortex XSOAR uses the Object Id of the group. The Object Id is different from the Azure AD
security group name. You can find the Object Id for each of your Azure AD security groups by navigating to Users and groups in Azure AD, clicking
on the group name, and viewing the Object Id. Create a list of the group names and corresponding Object Ids for every Azure AD security group
you want to map to a Cortex XSOAR user group.
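For example, the record might look like the following Python snippet (the group names and Object Ids are hypothetical); the Object Ids, not the group names, are what you later paste into the SAML Group Mapping field:

    # Hypothetical record of Azure AD security groups and their Object Ids.
    azure_ad_groups = {
        "Cortex XSOAR Analysts": "11111111-aaaa-bbbb-cccc-000000000001",
        "Cortex XSOAR Admins": "11111111-aaaa-bbbb-cccc-000000000002",
    }

    # Comma-separated value for a user group's SAML Group Mapping field:
    print(",".join(azure_ad_groups.values()))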
1. In Cortex XSOAR, go to Settings & Info → Settings → Access Management → Authentication Settings.
4. Use the following table to complete the SSO Integration settings, based on the values you saved from Azure AD.
5. In the IdP Attributes Mapping section, enter the attribute claim names from Azure AD. The names are case sensitive and must match
exactly.
NOTE:
The attribute claim name must exactly match the value sent by your IdP. In some cases, this may be the full attribute name/namespace,
depending on the configuration of your IdP.
6. (Optional) Under Advanced Settings, select the checkboxes for ADFS and Compress encode URL (ADFS). In some circumstances, these
fields may be required by your Azure AD configuration.
3. In the SAML Group Mapping field, add the Object Ids of the Azure AD group(s) that should be associated with this user group. Separate
multiple Object Ids with commas. The Azure AD group Object Id must match the exact value sent in the token.
2. After authentication to Azure AD, you are redirected again to the Cortex XSOAR tenant.
To view your role and any role assigned to a user group you are a member of, click your name in the bottom left-hand corner, and
click About.
Invite users to the platform and set user roles and user groups in Cortex XSOAR On-prem.
To access Cortex XSOAR, users must either be added to Cortex XSOAR locally or via SSO. When logging in to Cortex XSOAR, users must have
an assigned role. If no role is assigned either directly or via a user group, users can log in but can't access the tenant.
On the Users page, you can view user information, such as user type, role, and user groups.
User information
Name Description
User Type Indicates whether the user is Local (added in Cortex XSOAR), SSO (single
sign-on via your organization’s IdP), or both (Local/SSO).
Direct Role Name of the role assigned to the user (not inherited from elsewhere, such as
a User Group).
Any group imported from Active Directory has the letters AD added beside the
group name.
Group Roles Lists the different group roles based on the groups to which the user belongs.
When you hover over the group role, the group associated with this role is
displayed.
Last Login Time Last date and time the user accessed Cortex XSOAR.
Phone number Displays the user's phone number. Including the user's phone number
enables playbooks and scripts to trigger direct analyst communication by
phone.
Tenants Displays the main or child tenant the user is allowed to access.
Create users
To add users locally (not SSO), you can either send an invitation to users by adding their details manually or by uploading a CSV file with multiple
users. See Create users in Cortex XSOAR.
You can update user roles for one or multiple users. You can add/update the following user roles:
NOTE:
To update the permissions attributable to each role, you need to change them in the Roles tab.
1. Go to Settings & Info → Settings → Access Management → Users, and do one of the following:
To edit one user, right-click the user's name and select Edit Users Permissions.
To edit multiple users, select multiple users, right-click, and select Edit Users Permissions.
NOTE:
If no role is assigned either directly or via a user group, users do not have view or edit permissions in Cortex XSOAR.
The Show Accumulated Permissions field shows the roles and user groups assigned to the user. You can also select specific roles assigned
to the user, which enables you to compare the available permissions based on the roles selected. This can help you understand how the
permissions for a particular user are built, for example, to isolate which role or user group provides the permissions for a specific
component.
If a user has a role in the tenant (besides Account Admin), you can remove their user permission to access the tenant. If no direct or user group
role has been assigned, the user has no permission to view or edit data in Cortex XSOAR.
1. In the Users tab, right-click the user's name and select Remove User Role.
Unlock users
If the user's account has been locked, for example, due to too many login attempts, you can unlock the user.
NOTE:
The user has up to 10 attempts to log in before being locked. In any event, the user will be unlocked after 15 minutes.
1. Go to Settings & Info → Settings → Access Management → Users and select the user.
Deactivate users
Users should be deactivated to temporarily remove user access to Cortex XSOAR. All user information is maintained for deactivated users. Users
should be permanently removed if they no longer need access to Cortex XSOAR.
NOTE:
When you remove a role, the role associated with the API keys is deleted. When a user is deactivated, the API keys that the user created are
not revoked.
If more than one role was associated with the API key, a yellow warning symbol appears next to the API key in the API key table. When
you hover over the symbol, a message indicates that some of the roles associated with the API key have been deleted.
If all roles associated with the API key are removed, a red warning symbol appears next to the API key in the API key table. When you
hover over that symbol, a message indicates that the key is no longer usable because it does not have a role associated with it. The API
key is still visible in the API table but it cannot be assigned.
If the user is assigned to incidents or tasks or is the owner of a dashboard, these assignments do not automatically change when the user is
removed or deactivated. We recommend changing incident and task assignments manually before removing or deactivating users.
Any reports the user has created remain available. Reports are not owned by specific users and can be edited or deleted by other users.
Go to the Incidents page and search for -status:closed owner:user_name to find any incidents the user is assigned and reassign.
Go to the Incidents page and search for -status:closed investigation.users:user_name and reassign.
When a user is assigned a task in an incident, the user is added to the incident. This search finds all incidents where the user is a
participant.
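If you have many incidents to review, you can run the same query through the Cortex XSOAR REST API. The sketch below is a non-authoritative example that assumes an XSOAR 6-style /incidents/search endpoint, an API key authorized for incident search, and a hypothetical server URL; check your deployment's API reference before relying on it.

    # Minimal sketch: list open incidents owned by a departing user.
    # SERVER and API_KEY are placeholders; the endpoint path and auth
    # header are assumptions based on the XSOAR 6-style REST API.
    import requests

    SERVER = "https://xsoar.example.com"
    API_KEY = "YOUR_API_KEY"

    response = requests.post(
        f"{SERVER}/incidents/search",
        headers={"Authorization": API_KEY, "Content-Type": "application/json"},
        json={"filter": {"query": "-status:closed owner:user_name"}},
        timeout=30,
    )
    response.raise_for_status()

    for incident in response.json().get("data") or []:
        print(incident.get("id"), incident.get("name"))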
1. Go to Settings & Info → Settings → Access Management → Users and select the user.
2. Right-click the user and then select Deactivate User and then Deactivate to confirm.
Delete users
In Cortex XSOAR, you can permanently remove a user or temporarily deactivate a user. Users should be permanently removed if they no longer
need access to the system.
NOTE:
You cannot deactivate or delete a user that has an Account Admin role.
When you delete users, all their personal information is deleted, including email addresses, usernames, phone numbers, and first and last names.
Go to the Incidents page and search for -status:closed owner:user_name to find any incidents the user is assigned and reassign.
Go to the Incidents page and search for -status:closed investigation.users:user_name and reassign.
When a user is assigned a task in an incident, the user is added to the incident. This search finds all incidents where the user is a
participant.
1. Go to Settings & Info → Settings → Access Management → Users and select the user.
2. Right-click the user and then select Delete User and then Delete to confirm.
NOTE:
To define your password policy, go to Settings & Info → Settings → Access Management → Password Policy.
You can define your password policy using the following parameters:
In addition, you can require users to change their password every X number of days or months. By default, this setting is not enabled and
passwords do not automatically expire. You can also prevent users from reusing previous passwords.
The lock settings enable you to lock a user out of Cortex XSOAR after a set number of failed login attempts within one minute. You can either have
the user automatically unlocked after a set number of minutes or hours, or allow the user to be unlocked only by an administrator. To
unlock a user, go to Settings & Info → Settings → Access Management → Users, right-click the username, and select Unlock.
Users can change their passwords by clicking the username at the bottom of the left-hand main menu and selecting User Preferences → Details.
8 | Marketplace
Abstract
In Marketplace, download your content packs to suit your use case in Cortex XSOAR.
Marketplace is a centralized content portal enabling you to manage content in Cortex XSOAR. Content is organized into Content Packs created by
different contributors such as Palo Alto Networks, Partners, and MSSPs. Download a content pack to suit your use case.
You can view Marketplace content packs from within Cortex XSOAR or at https://cortex.marketplace.pan.dev/marketplace/.
Search the Cortex Marketplace and find content. Search by use cases, integrations, and categories.
Cortex XSOAR Marketplace is the premier digital storefront for discovering, exchanging, and contributing security automation playbooks, built into
Cortex XSOAR. Cortex XSOAR content packs are prebuilt bundles of integrations, playbooks, dashboards, fields, subscription services, and all
the dependencies needed to support specific security orchestration use cases.
Leverage content from the largest SOAR community: Continuously extend Cortex XSOAR with proven use cases contributed by
SecOps users and SOAR partners.
Discover top-rated, validated content: Identify the content offerings recommended by your peers and validated by the world’s leading
cybersecurity company. Discover how to increase automation with the tools that you already have.
Solve your toughest security use cases: Deploy turnkey security workflows that span integrations, playbooks, dashboard layouts, and
reports with a single click.
Marketplace enables you to build a strong community with other security professionals by exchanging content. You can explore the latest trends
from Cortex XSOAR and other contributors and test drive use cases all within your Cortex XSOAR platform.
Cortex XSOAR supports free content packs, which are either Cortex XSOAR-supported or partner-supported content packs. You can restrict a user
role from managing content packs in Marketplace when defining or editing user roles.
In Marketplace, you can browse all content packs (including installed content), or view only installed content packs.
You can sort content packs by latest update, best match, recommended, or number of downloads, and filter according to the following criteria:
Use cases: Filter according to high-level use cases, such as Phishing, Malware, Ransomware, Access.
Categories: Filter according to content pack categories, such as Messaging and Forensics & Malware Analysis.
Published: Filter according to whether published by Cortex XSOAR or by Cortex XSOAR technology partners.
General:
Certified: Created and supported by a user and certified by Cortex XSOAR. Cortex XSOAR has tested the content to ensure that it
meets standards and works correctly.
Uses my integrations: Content packs that use integrations that you have added instances for (whether or not they are enabled).
Content Pack Includes: Filter according to the content of the content pack, such as scripts, Integrations, and Playbooks.
When clicking a content pack, you can view detailed information, including the content that it installs (such as scripts, playbooks, and indicator
fields), dependencies (which content packs are required or optional), and version history (including the option to roll back to earlier
versions).
Cortex XSOAR content in Marketplace is organized in packs. Content packs are created by Palo Alto Networks, technology partners, consulting
companies, MSSPs, customers, and individual contributors. Content packs may include a variety of different components, such as integrations,
scripts, playbooks, and widgets, grouped together to address a specific use case. Content packs are free and can be used by all customers.
Cortex XSOAR comes with a number of pre-installed content packs that cover many common use cases. Pre-installed content packs include, but
are not limited to:
Common Scripts, Common Widgets, Common Playbooks, Common Types, Common Reports, Common Dashboards
These content packs provide important tools and building blocks you can use to customize your playbooks and workflows in Cortex XSOAR.
The Common Scripts content pack, for example, includes scripts that convert file formats, fetch indicators from a file, export context data,
send emails, and more.
VirusTotal
Provides integration with the popular VirusTotal service to analyze suspicious files, domains, IPs, and URLs to detect malware and other
security breaches.
The TIM - Indicator Auto-Processing content pack includes playbooks that automate the processing of indicators for multiple use cases, such
as tagging, checking for existence in various lists, running enrichment for specific indicators, and preparing indicators for manual review if
necessary. The content pack also includes incident types and incident layouts for manual review.
In addition, we recommend reviewing the following popular content packs to see if you require them:
Phishing
Cortex XDR by Palo Alto Networks
Automate Cortex XDR incident response. Includes custom Cortex XDR incident views and layouts to aid analyst investigations.
ServiceNow
Manage ServiceNow tickets directly from Cortex XSOAR and enrich them with Cortex XSOAR data.
PAN-OS by Palo Alto Networks
Manage Palo Alto Networks Firewall and Panorama from Cortex XSOAR.
A collaboration integration, such as Microsoft Teams or Slack to send messages and notifications to your team.
Content packs such as the Malware Investigation and Response content pack and the Phishing content pack include a deployment wizard. When
you install the content pack, you are prompted to use a wizard, which sets up your use case. The deployment wizard sets up the fetching
integration, configures the playbook and parameters, and configures supporting integrations, in a user-friendly, step-by-step interface.
Types of content pack support: Cortex XSOAR-supported, partner-supported, developer-supported, and community-supported.
Applies only to content packs published by Palo Alto Networks. These content packs are supported and maintained by Palo Alto Networks
according to the Palo Alto Networks End User Support Agreement.
NOTE:
Palo Alto Networks is not liable for and does not warrant or support any content pack produced by a third-party publisher.
Palo Alto Networks does not support content packs that do not have official available documentation.
Applies to content packs published by Cortex XSOAR Technology Partners. Support and maintenance is provided by the Technology Partner,
whose contact information appears in the content pack details.
Cortex XSOAR Technology Partners are required to join the industry-standard support framework, TSANet, to deliver support to our mutual
customers. Customers engage directly with the partner for support and maintenance of the partner-supported content pack.
Applies to content packs published by third-party developers. Support and maintenance is provided by the publishing developer, whose contact
information appears in the content pack details.
Customers engage directly with the publishing developer. Support and maintenance is provided voluntarily by the publishing developer. Additional
information from the user community may be available at Cortex XSOAR Live Discussions.
Applies to content packs published by Palo Alto Networks or third-party developers. No support or maintenance is provided by the publisher for
these content packs.
Palo Alto Networks ensures that these content packs are updated to use the latest and most secure Docker images through an automated
process. However, functionality may not be fully tested. We recommend fully testing and reviewing Community content packs before updating
production systems.
Customers are encouraged to engage with the user community for questions and guidance. Information from the user community may be
available at Cortex XSOAR Live Discussions.
You can install, delete, update, and revert content packs. Before you install a content pack, you should review it to see what it
includes and what its dependencies are. You can view the following information:
Details: General information about the content pack such as installation, content, version, author, and status.
Dependencies: Details of any required content packs and optional content packs that may need to be installed with your content pack.
Version History: View the currently installed version, earlier versions, available updates, and revert if required.
Dependencies
In Cortex XSOAR content packs, some objects are dependent on other objects. For example, an alert may be dependent on a playbook, an alert
type, and an alert field. A script may be dependent on another script, or an integration. When you place a content pack in your cart, mandatory
dependencies including required content packs are added automatically to ensure that the content pack installs correctly.
Optional content packs are used by the content pack you want to install but are not necessary for installation. When you place a content pack in
your cart, you can choose which optional content pack to install. When you install optional content packs, mandatory dependencies in the optional
content pack are automatically included.
NOTE:
Optional content packs that are already installed are treated like they are required content packs to preserve content integrity.
You can only install one content pack at a time. Cortex XSOAR automatically adds any content that is required to install the content pack. You can
also add any optional content packs that use the content pack you want to install.
If you receive an error message when you try to install a content pack, you need to fix the error before installing. If a warning message is issued,
you can still download the content pack, but you should fix the problem; otherwise, the content may not work correctly.
1. Go to Marketplace → Browse and locate the content pack you want to install.
4. (Optional) If the content pack includes optional content, select the content packs you want to add.
The Cart displays the number of items you are installing including any required content packs. You can log in and out, but the content packs
remain in the Cart until you click either Empty cart or Install.
5. Click Install.
When you delete a content pack, all content is deleted including all detached and customized content.
CAUTION:
If another content pack is dependent on the content pack you want to delete, it may break the other content pack. You can reinstall the content
pack, but you cannot restore detached and customized content.
NOTE:
After you delete a content pack, it is recorded in the audit log. The version appears in the installation/update entry.
2. In the Content Packs Library section, search for the content pack and select the content pack you want to delete.
Content packs are updated for bug fixes, enhancements, and more. Marketplace is updated every 2 hours, and when there is an update available
for a content pack, you will see a notification in the Installed Content Packs tab in Marketplace.
In the Version History tab of a content pack, you can see the currently installed version, earlier versions, and available updates. You can revert to a
previous version of a content pack if required.
If you have made any customizations, these are automatically included in any update. All dependent content packs update automatically with the
content pack.
NOTE:
Third-party product integrations are developed and tested against a specific product version. For products that are on-prem or cloud-based
with specific API versions, the version developed and tested against is included in the integration's documentation. Newer versions of the
product are not always immediately tested, and products are expected to maintain API compatibility upon release of newer product versions.
When upgrading to a newer product version, we highly recommend testing the integration in a dev environment before deploying to
production.
CAUTION:
If you want to downgrade, any content that depends on the content pack, including any customizations, may be deleted if it does not exist in the
target content pack version.
1. In the Show field of the Installed Content Packs tab, select Update available to display the content packs that are available to update.
3. In the Version History tab of the content pack, view the updates that are available.
4. Click Update. If there is more than one update available, click the version to update.
If you choose to install the latest version, it includes the previous versions. If you have made any customizations, these are included in the
update. If any dependencies require updating, these are automatically added.
5. Click Install.
You can revert to an earlier version of an installed content pack. Items that are not included in that version are deleted, such as detached
playbooks, or scripts that use other scripts from the content pack. This may cause other content packs to stop working.
1. In the Installed Content Packs tab, click the content pack you want to revert.
2. In the Version History tab, select the version to which you want to revert.
3. Click Revert to this version. The version will be added to your Cart.
The Deployment Wizard guides you step-by-step to quickly adopt your use case.
The Deployment Wizard can be used to set up your use case for the Malware Investigation and Response content pack and the Phishing
content pack. In order to work with your content pack you need to set up your integrations. The Deployment Wizard guides you through:
Configuring the integrations that will be used to fetch events (fetching integrations). These events will be mapped as incidents.
Configuring the main playbook and its input parameters. For example, the Setup Malware playbook pane opens showing the recommended
primary playbook for the incident type you selected when configuring the fetching integration. The playbook configuration includes all the
input parameters to configure that will change the playbook behavior, for example, whether to use sandbox detonation or whether to perform
isolation response. You can open the playbook by clicking the link on the bottom.
The default fetching integration for your content pack depends on which fetching integration(s) are installed. For example:
Malware Investigation and Response: 1. Palo Alto Networks Cortex XDR - Investigation and Response; 2. CrowdStrike Falcon
Phishing: 1. Gmail; 2. EWS v2 (Make sure you also install the Microsoft Exchange On-Premise pack)
Prerequisites
To access the Deployment Wizard for the first time, you need to first install or update your Malware Investigation and Response content pack or
your Phishing content pack in Marketplace. The Deployment Wizard tab appears in Marketplace after the content pack installation or update is
completed.
For example:
For the Malware Investigation and Response content pack, you need at least one incident fetching content pack (mandatory). You can also
optionally install sandbox, messaging, case management, and data enrichment and threat intelligence content packs.
For the Phishing content pack, you need at least one email gateway content pack (mandatory). You can also optionally install sandbox, EDR
systems, network devices, email security gateways, mail sender, and data enrichment and threat intelligence content packs.
1. In Marketplace, select the content pack for your use case (for example, Malware Investigation and Response or Phishing) and click Install or
Update (if the pack is already installed).
2. In the Select Content Packs window, select one or more content packs from the required categories. You can also install other
supportive content packs from other categories if needed. These items will automatically be added to the cart.
4. When the content pack finishes installing or updating, click Refresh content.
NOTE:
After you start running your use case you can return to this tab and make changes to the configurations, such as your integration’s
credentials or playbook parameters.
5. Click Let’s Start in the small dialog box that appears next to the Deployment Wizard tab.
6. Step 1: Fetching Integration - Click the displayed fetching integration. If the integration is new, select New instance. If you want to use an
existing instance, select it from Update existing instance. The integration will stay disabled until you complete all steps of the wizard.
NOTE:
You must define the incident type in order to set the playbook in the next step.
A list of What needs to be done guides you through the required fetching integration instance settings. Scroll down to see the
complete list.
After you save your settings, the wizard initiates a test connection. If the connection succeeds, the Fetching Integration step turns green and
the wizard moves to the next step (Set Playbook).
NOTE:
The wizard displays the recommended playbook. If for the fetching integration setup you chose an incident type that uses a different
playbook from the recommended one, the incident type will be detached.
8. Click Done.
9. Step 3: Supporting Integrations - Configure any installed supporting integrations in the content pack.
If a supporting integration is already installed and connected, it appears with a green check. Otherwise, click the integration to configure it.
NOTE:
After you save the settings, the integration instance is automatically enabled.
10. Step 4: What’s Next - Select Turn on Use Case to start the fetching process and running the playbooks and scripts.
Marketplace updates are a source for bug fixes and provide new commands for integrations and scripts. It’s a best practice to update content
packs to the newest available version. If you encounter any issue with content updates, you can revert to a previous version with one click.
You can update content while the system is in use. If a playbook, for example, is running on an incident while you update that playbook, the
original version of the playbook will continue to run without issues. If the playbook includes an integration command that has been updated, and
the update occurs before the playbook reaches this task, the new version of the integration command will be used.
To customize Marketplace content, you can do one of the following:
Detach a content item (such as a playbook or automation) and edit it. If you want to receive content updates in the future, you can reattach
the content item, but the modifications you made while the item was detached will be overwritten by the content update.
Duplicate the content item and edit the copy. When a content item is duplicated, it becomes a custom content item and therefore does not
receive updates, but you can view updates to the original content item.
After Marketplace content is installed you can detach or duplicate the content and customize the content as needed. Custom content is, by
definition, detached and does not receive updates.
How can content updates be rolled back? Are dependencies automatically rolled back as well?
You can view all versions of a content pack in Marketplace and revert to earlier versions there. When you revert a content pack, only the content
pack is reverted, not the pack dependencies.
You can receive daily notifications of content packs that have available updates. When you enable notifications, they are sent only to you via
email, Slack, or another notification service, depending on your user settings. Notifications are enabled and disabled on a per content pack basis.
You can enable content pack update notifications only after the content pack is installed.
NOTE:
You can also disable notifications for individual content packs by clicking Stop Notifications in the daily email.
To view a list of all content packs with notifications, go to Marketplace → Installed Content Packs and in the Show field, select the Notifications
active filter.
You can temporarily or permanently opt out of notifications for all content packs, without disabling the notification option for individual content
packs. The content packs still appear in the Notifications active list.
1. Click your username (in the Cortex XSOAR side menu) and click User Preferences.
3. In the Marketplace Content Packs Updates field, select or clear the checkbox to enable or disable notifications.
Abstract
Customize the frequency and time of content pack update notifications and how much information is included.
You can decide whether to enable notification of new content updates and whether to notify users by email.
By default, updates for content packs are not sent out to users. You can change the default and add users as required.
You can create content packs for submission to the Cortex XSOAR Marketplace.
Contributions are content packs that you create for Cortex XSOAR Marketplace, which are submitted to Cortex XSOAR for review and approval.
After approval, these content packs are uploaded to Marketplace, and are shared and installed like any other content pack. When creating new
content such as playbooks, scripts, incident types, and integrations, or when updating content, you can:
Create and submit content directly from Cortex XSOAR. For example, from a playbook, click Contribute. You then have the option to submit
the contribution for review or download the contribution and upload it, for example, to GitHub.
Submit a content pack of one or more items through the Cortex XSOAR Marketplace UI. When you create or edit content in Cortex XSOAR,
that content is added to the Add Content section in the Contributions tab in Marketplace. You can add content from this list to a content pack.
From the Contributions tab in Cortex XSOAR Marketplace, you can create, edit, submit, and delete content that you have submitted through
Marketplace.
Users with the Contribute to Marketplace permission can contribute content packs to Marketplace.
When adding content to the content pack, Cortex XSOAR scans the content and automatically adds dependencies, which ensures that the content
pack installs and runs correctly on all environments.
Although Cortex XSOAR scans and tests the content to ensure it works correctly, you should review the content to confirm that all dependencies
are incorporated and work as they should, in case not all dependencies were added automatically.
Validation
Content validation enables users to improve the quality of the content they develop in Cortex XSOAR by running a script to check for errors before
submission.
Configuration
By default, content validation passes your content item(s) as inputs to the ValidateContent script included in the Base pack. The ValidateContent
script uses the demisto-sdk utility to run validate and lint on the content item(s) and returns the results.
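You can run equivalent checks locally while developing. A minimal sketch, assuming the demisto-sdk CLI is installed (pip install demisto-sdk) and your pack lives at the hypothetical path Packs/MyPack:

    # Run demisto-sdk validation on a content pack from a Python wrapper.
    # "Packs/MyPack" is a hypothetical pack path.
    import subprocess

    result = subprocess.run(
        ["demisto-sdk", "validate", "-i", "Packs/MyPack"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print("Validation failed:")
        print(result.stderr)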
Automatic
When contributing content, either from the Contributions page, the Contribution Pack Editor page, or directly from a content item's menu, the
content goes through content validation before submission. After clicking Contribute, you have the option to Save and submit your contribution or
Save and download your contribution. In both cases, your contribution goes through validation before you submit or download the content.
If the content pack passes validation, the process continues. If you are downloading the content, a download will start automatically. If you are
submitting the content, the content will submit automatically. If the content pack does not pass validation, the validation issues are listed and you
have the option to export a raw JSON file with the error details. You can then make changes to your content items and resubmit for validation.
You also have the option to skip the validation step or to contribute a content pack that does not pass validation. For example, there might be an
issue you are aware of that cannot yet be resolved. For a large content pack, where you have already validated the individual content items, you
may want to skip the validation step.
You can also manually trigger content validation. The Validate button appears in the Contribution page, the Contribution Pack Editor page, as well
as in both the Script and Integration Editors. With manual validation, you can check your content during the development process and make
changes.
Review process
The review process consists of the Cortex XSOAR team checking that your contribution meets code, documentation, naming, and other
standards. You receive a form to complete that asks for more information, such as certification and contact details. The Cortex XSOAR team will
be in touch with you during the review process.
During the review process you may be asked to make changes in the code, or for more data, metadata, dependencies, documentation, support
and certification model, etc. You can anonymize your name if required.
When your contribution is approved it is uploaded to Marketplace where other Cortex XSOAR users can view, download, and rate it. We
encourage you to learn more about the contribution process.
Abstract
Create a content pack and submit it to Cortex XSOAR for approval. Add your content pack to Marketplace.
Any user can add content they have created to a content pack. The content pack is submitted to Cortex XSOAR for review to ensure it
complies with Cortex XSOAR standards. After approval, the content pack can be used in Marketplace.
2. In the Pack Name field type a meaningful name for the content pack.
3. From the list, select the type of content you want, locate the content you want to add, and click Add.
For more information about how to build a content pack, see the Dev Hub documentation.
4. If you want to continue adding content at a later time or to use the Validate Pack option, click the save button.
5. (Optional) Click Validate Pack to check for errors. The pack must be saved before you validate.
6. If you have finished and want to send the content pack to a Cortex XSOAR developer for review, click Save and Contribute to either
contribute or download your contribution.
8. Select Save and submit your contribution, enter your email address and click Contribute.
Instead of submitting the contribution, you can also download the content pack and upload it, for example, to GitHub.
9. After you contribute the content pack, you will receive an email with a link to a form you must complete.
b. Log in to your GitHub account to participate in the review process of the pull request that is automatically opened for your content
pack.
a. Select Update Existing Pack from the Select Contribution Mode menu.
b. Select the pack you want to update from the Select Existing Pack menu.
c. Log in to your GitHub account to participate in the review process of the pull request that is automatically opened for the content pack.
11. After you submit the form, a GitHub branch is created in the xsoar-contrib content repository fork with the changes from your contribution.
12. You will receive an invitation to join the xsoar-contrib organization. Being a member of the organization enables the xsoar-bot to invite
you to a GitHub team and grant you write permissions to the created branch. (Each contributor can only modify files in content packs that
they contributed).
NOTE:
The documentation for new integrations/scripts/playbooks is automatically generated and contains basic information. You need to review
the documentation file, README.md, and modify it according to Cortex XSOAR standards. The files to be reviewed are listed in the pull
request comment.
14. You can now modify the files changed in the pull request as part of the review process.
Abstract
Resubmit an existing content pack with new changes from the Cortex XSOAR UI.
NOTE:
You cannot update JavaScript integrations or scripts in an existing content pack using this method.
1. Create or edit any content items that need to be included in your resubmission.
5. After you submit the content pack, you will receive an email with a link to a form you must complete.
a. Select Update Existing Pack from the Select Contribution Mode menu.
NOTE:
The contribution mode option only appears if content items that are part of your contribution are detected as originating from
existing sources. For example, if you created a new automation in the UI by clicking Duplicate Automation.
b. Select the pack you want to update from the Select Existing Pack menu.
c. Log in to your GitHub account to participate in the review process of the pull request that is automatically opened for the content pack.
NOTE:
Changing the pack name or the email address of the contributor at this stage will result in creating a new pull request on GitHub,
rather than updating an existing pull request.
In the form, you can include notes describing the update, or provide an updated demo video link which will be displayed in a
comment on the pull request after the changes are successfully pushed to GitHub.
6. After the changes are pushed to your branch, you will receive an email notification.
In addition to the resubmission process described above, there are other ways to modify your existing content pack. You can modify the files
directly in the GitHub pull request, or close the pull request and create a new contribution that includes your changes. We do not recommend
closing the pull request and creating a new request.
Configure integrations, manage credentials, run commands, and troubleshoot integrations in Cortex XSOAR On-prem.
Integrations seamlessly connect your security and incident management tools right within Cortex XSOAR. Easily configure them to streamline your
workflow: fetch incidents, execute commands, manage credentials, and handle long-running integrations.
Common integration use cases for Cortex XSOAR, including analytics and SIEM, authentication, case management, data enrichment and threat
intelligence, and forensics and malware analysis.
The following categories are common use cases for Cortex XSOAR integrations. While this list is not meant to be exhaustive, it's a starting point to
understand what use cases are supported by Cortex XSOAR and third-party integrations.
These integrations usually include the Fetch Incidents option for an instance. They can also include list-incidents or get-incident as integration
commands, or important information for an event or incident.
Use credentials from the authentication vault to configure instances in Cortex XSOAR. (Save credentials in: Settings & Info → Settings →
Integrations → Credentials.) Integrations that use credentials from the vault should have the Switch to credentials option.
Lock an external credentials vault - in case of an emergency (if the vault has been compromised), allow the option to lock/unlock the entire
vault via an integration.
Case Management
Enrich information about different IOC types: Upload object for scan and get the scan results. (If there’s an option to upload private/public,
the default should be set to private.) Search for former scan results about an object to get information about a sample without uploading it
yourself. Enrich information and scoring for the object.
Enrich asset – get vulnerability information for an asset (or a group of assets) in the organization.
Get a scan report including vulnerability information for a specified scan and export it.
Data Enrichment & Threat Intelligence integration example: Unit 42 Objects Feed.
Release a held message when a gateway has placed a suspicious message on hold.
Endpoint
Quarantine a file
Update indicators (for example, network and hashes) by policy (can be block, monitor) – deny list
Search for indicators in the system (Seen indicators and related incidents/events)
Update .DAT files for signatures and compare existing .DAT file to the newest one on the Cortex XSOAR tenant
Endpoint integration example: Palo Alto Networks Cortex XDR - Investigation and Response
Network Security
Create block/accept policies (source, destination, port), for IP addresses and domains
Add addresses and ports (services) to predefined groups, create groups, and more
Fetch network logs for a specific address for a configurable time frame
If there is a management firewall, allow the option to manage policy rules through it
Get/fetch alerts
Get network logs filtered by time range, IP addresses, ports, and more
Update signatures from an online source / upload + get last signature update information
Enrich asset – get vulnerability information for an asset (or a group of assets) in the organization.
Get a scan report including vulnerability information for a specified scan and export it
Integrations are mechanisms through which Cortex XSOAR connects and communicates with other products. These integrations can be executed
through REST APIs, webhooks, and other techniques. Integrations enable you to orchestrate and automate SOC operations.
Integrations can be one-way or two-way. Two-way integrations allow both systems to interact directly, making it easier to manage security
operations across multiple tools.
Integrations are included in content packs which you download and install from Marketplace. After you download and install a content pack that
includes an integration, you need to configure the integration by adding an instance. You can have multiple instances of an integration, for
example, to connect to different environments. Additionally, if you are an MSSP and have multiple tenants, you could configure a separate
instance for each tenant.
Cortex XSOAR comes out-of-the-box with several integrations to help you onboard, such as:
Mail Sender
Sends email notifications to users. By default, this integration is configured to send emails. You can change the main sender by configuring
a different mail sender, such as Gmail. For more information, see Configure notifications in Cortex XSOAR.
Generic Export Indicators Service
Provides an endpoint with a list of indicators as a service for the system indicators. For more information about how to set up the integration,
see Export indicators using the Generic Export Indicators Integration.
Palo Alto Networks WildFire Reports
Generates a Palo Alto Networks WildFire PDF report. For more information, see Palo Alto Networks WildFire Reports.
Rasterize
Converts URLs, PDF files, and emails to an image file or PDF file. For more information, see Rasterize.
Create an integration
You can create an integration by adding parameters, commands, arguments, and outputs, as well as writing the necessary integration code. You
should have a working Cortex XSOAR tenant and programming experience with Python.
The Cortex XSOAR IDE and the HelloWorld integration template are loaded by default. For more information about how to create an integration
including an example, see Create an Integration.
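For orientation, the following is a minimal sketch of what a simple integration command looks like in Python. It follows the common pattern used by Cortex XSOAR content (demisto, CommandResults, and return_results are provided by the platform); the command name and argument are illustrative, not part of the HelloWorld template itself.

    import demistomock as demisto  # provided by the Cortex XSOAR dev environment
    from CommonServerPython import CommandResults, return_results


    def say_hello_command(args: dict) -> CommandResults:
        # Read the (illustrative) "name" argument and build a greeting.
        name = args.get("name", "world")
        greeting = f"Hello, {name}!"
        return CommandResults(
            readable_output=greeting,
            outputs_prefix="HelloWorld.Greeting",
            outputs={"name": name, "greeting": greeting},
        )


    def main():
        # Route the command invoked in the War Room or by a playbook task.
        if demisto.command() == "helloworld-say-hello":
            return_results(say_hello_command(demisto.args()))


    if __name__ in ("__main__", "__builtin__", "builtins"):
        main()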
On the integration instance page, after you have either downloaded the integration or created an integration, you can do the following:
Option Description
Add instance Configure an integration instance to connect and communicate with other products. For more information, see Add
an integration instance.
After configuring the instance, you can also enable/disable the integration instance, copy the instance, and view the
integration fetch history.
Edit integration's source Edit the integration settings and source code. For more information about editing the integration's source code, see
Create an Integration.
NOTE:
If the integration was installed from a content pack, you need to duplicate the integration before editing.
Duplicate integration If you want to change the source code and settings, or download the integration, you need to duplicate the
integration.
Delete Although you can't delete an integration installed from a content pack (unless it is a duplicate), you can delete an
integration instance.
Download the integration Download the integration in YAML format. You can also upload an integration.
NOTE:
If the integration was installed from a content pack, you need to duplicate the integration before downloading.
Version History If the integration is a duplicate or you created your own integration, you can see the changes made to the
integration.
Contribute to Marketplace You can send the integration to Palo Alto Networks for review and for it to be added to Marketplace. For more
information, see Content pack contributions.
You can view all the integration changes (the last 100 changes) by clicking the Version History button.
Use Docker to run Python scripts and integrations in a controlled environment in Cortex XSOAR.
Docker enables you to run scripts and integrations from an image in a controlled environment that isolates and safeguards the server. It also
simplifies environment setup by packaging dependencies and configurations within an image, ensuring consistent execution across different
systems. By default, Cortex XSOAR pulls images from the Demisto Docker image registry on GitHub, which are used in scripts and integrations as
needed. Cortex XSOAR integrations and scripts have the relevant Docker image already selected. For example, the Rasterize integration uses
the demisto/python3:3.11.9.1079 Docker image.
NOTE:
You can access publicly available Docker Hub images from the Cortex XSOAR tenant even if there is no external connection to the Demisto
registry, for example, if firewall constraints prevent your engine from accessing the Demisto registry.
Alternatively, instead of pulling publicly available images in the Demisto registry, you can pull images from a private authenticated image registry.
For more information, see Pull images from a private image registry.
You can pull Docker images either directly or through an engine. If using an engine to pull Docker images from a private authenticated registry, you
first need to configure the authentication on the engine machine. For more information, see Connect your engine to an image registry.
2. Under ADVANCED, in the Docker image name field, click X to clear the current selection and then select a Docker image name from the
dropdown menu.
For more information about changing the Docker image for a script, see the Advanced tab in Create a script.
1. Go to Settings & Info → Settings → Integrations → Instances, find your integration, and click the pencil icon to edit the integration’s source.
For an out-of-the-box content pack integration, you first need to duplicate the integration to edit it.
3. Click X to clear the current selection and then select a Docker image name from the dropdown menu.
For more information about changing the Docker image, see the Advanced tab in Create a script.
Abstract
Using an engine to communicate with an image registry streamlines deployment by managing dependencies, ensuring version control, and
facilitating scalability, load balancing, and secure access to private images.
To use an engine, you need to connect the engine to an authenticated Docker image registry and then set it up in the tenant.
NOTE:
This procedure uses the --username and --password command line options to pass the username and password directly. For environments
where command history or logs are visible to others, consider more secure methods like Docker configuration files for handling authentication in
production or CI/CD environments. For more details, see docker login or podman-login.
Log in to the image registry from the engine machine. A typical form of the command is shown below (use podman login if your engine
uses Podman):
docker login --username <your-username> --password <your-password> <registry-url>
Replace <your-username>, <your-password>, and <registry-url> with your Docker registry credentials and the URL of your Docker
image registry.
After logging in successfully, you can optionally validate access to images by searching for an image or pulling an image from the registry to
your local machine using the docker search or docker pull command.
4. In the tenant, set up the engine to pull images from a private image registry.
Abstract
Create your own authenticated Docker image repository for Cortex XSOAR. View all available images.
Pulling images from a private image registry enables securely accessing and deploying Cortex XSOAR content, for example, custom integrations
containing scripts and code packaged into Docker images. You can then run the integrations and scripts in Cortex XSOAR.
Before pulling a custom image, ensure the image does not infringe any licenses.
If using an engine, connect the engine to the private image registry using Docker or Podman. See Connect your engine to an image registry.
3. Configure access to the private image registry and the images to pull.
Select the Engine to use. Authentication is set on the engine machine itself, not in the Cortex XSOAR tenant. For an example,
see Connect your engine to an image registry.
For Direct:
Click Test the connection to make sure the connection to the registry works.
Define the Import images in name:tag format, for example myorg/python/new:2.7.18.24398 or myorg/python:latest
You can add, edit, or remove images. If you don't specify a tag, the default tag latest will be added automatically, specifying the
latest version of the image.
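As an illustration of the default-tag rule described above, the helper below (not part of the product) shows how an entry without an explicit tag resolves:

    # Illustrative helper that applies the documented default-tag rule
    # for "Import images" entries; not part of Cortex XSOAR itself.
    def normalize_image_name(image: str) -> str:
        # A tag is present only if the last path segment contains ":".
        last_segment = image.rsplit("/", 1)[-1]
        return image if ":" in last_segment else f"{image}:latest"

    assert normalize_image_name("myorg/python/new:2.7.18.24398") == "myorg/python/new:2.7.18.24398"
    assert normalize_image_name("myorg/python") == "myorg/python:latest"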
Image synchronization
When you click Save or Update Docker Images, Cortex XSOAR performs synchronization, which involves:
NOTE:
The synchronization process may take time. The Image Registry page displays the synchronization status (for example, in progress, complete,
failure).
If the engine fails to synchronize, it may be offline. When it goes back online, it will pull any new images when running scripts or integrations that
use them.
After you set up a credential, you can configure integration instances to use it instead of entering the name and password manually.
Parameter Description
Credential Name The name of the credential. You select this name when adding the credential to the integration instance. For example, Cortex XDR API Key.
Workgroup The workgroup to associate this credential with. Relevant for third-party services, such as Active Directory, CyberArk, and HashiCorp.
Password The password for the credential. For example, add the API Key when defining the API credential.
a. Go to Integrations → Instances and select the integration instance to which you want to add the credential.
Cortex XSOAR integrates with external credential vaults, which enables you to use them without hard coding or exposing the credentials. The
credentials are not stored in Cortex XSOAR, but the integration fetches the credentials from the external vault when called. The credentials are
passed to the relevant executed integrations as part of the integration parameters.
CyberArk AIM v2
HashiCorp Vault
After the integration is configured to fetch credentials, you can also use them in scripts and playbooks. To use these credentials in an integration,
click Switch to credentials in an integration instance, and select the necessary credential from the drop-down menu.
When you define an integration instance for your third-party security and incident management vendors, events triggered by this integration instance can become incidents in Cortex XSOAR. When incidents are created, you can run playbooks on these incidents to enrich them with information from other products in your system. For indicators, you can enrich those indicators, depending on the integration instance, and add them to an incident if required.
Although you can view the integration documents when adding an instance, the Developer Hub has more detailed information about the
integrations including commands, outputs, and recommended permissions. You can also see more information about content packs, playbooks,
scripts, and Marketplace documentation.
From Marketplace, download and install the relevant content pack, which includes your integration.
Consider whether you want to add credentials, which enable you to save login information without exposing usernames, passwords,
certificates, and SSH keys. For more information, see Manage credentials.
1. Go to Settings & Info → Settings → Integrations → Instances and search for the integration.
5. (Optional) To check that the integration instance is working correctly, click Test.
Expand the integration to see more details such as the number of pulled incidents/indicators or error messages.
You can also enable/disable the integration instance, copy the instance, and view the integration fetch history.
8. (Optional) If you want to set up notifications on an incident fetch error, see Receive notifications on an incident fetch error.
After initially ingesting incidents/indicators, you may need to customize incident/indicator types, fields, and layouts. If relevant to your integration, review and customize classifiers and mappers. Classification determines the type of incident/indicator ingested into Cortex XSOAR from a specific integration; you create a classifier and define that classifier in an integration. If applicable, mapping enables you to map the fields from your third-party integration to the fields in your layouts. For more information, see Classification and mapping.
Example 2.
How to configure the Cortex XDR - Investigation and Response instance integration
In this example, set up the Palo Alto Networks Cortex XDR - Investigation and Response integration. If you have not done so, download the
Cortex XDR content pack from Marketplace. Most integrations follow a similar configuration.
You can see the mandatory fields (with an asterisk) and on the right side, the documentation that contains a link to the full documentation
including available commands. See Palo Alto Networks Cortex XDR - Investigation and Response.
Incoming: Changes made to an incident in Cortex XDR are reflected in the fetched event in Cortex XSOAR.
Outgoing: Changes made in Cortex XSOAR for XDR incidents are reflected in the Cortex XDR tenant.
Both: Changes made in either platform are reflected in the other.
4. Add the Server URL, API Key ID, and the API key that you obtained from Cortex XDR.
5. Add the maximum number of incidents to fetch. By default, there is a maximum number of 10 incidents per minute.
6. Select whether you want to fetch only starred incidents from Cortex XDR and the number of days to fetch. By default, fetching starts 3 days ago.
7. In the First fetch timestamp field, specify when the first fetch occurs. By default, fetching starts 3 days ago.
Whether to trust certificates not signed by a trusted security authority, such as self-signed certificates.
Whether to run on Prevent Only Mode to match the Cortex XDR tenant.
Incidents fetch interval. By default, incidents are fetched every minute.
When troubleshooting the instance, adjust the default debugging setting from off to a higher debugging level.
9. Specify how Cortex XSOAR collects, classifies, and maps data fetched by this instance. In the Collect Settings you can define the following:
Field Description
Fetches incidents Fetches incidents from Cortex XDR. We recommend fetching incidents only when everything is set up. When enabled, Cortex XSOAR searches for events that occurred within the time frame set for the integration, which is based on the specific integration. The default is 10 incidents per minute.
Classifier Determines which incident type is created. For more information about classifiers, see Classification and mapping.
Incident type If a classifier does not exist, specify an incident type. If a classifier is specified, it takes precedence when assigning an incident type to the fetched incident. Incident types determine which playbooks run on the fetched incident.
Mapper (Incoming) Determines how incoming data is mapped to the Cortex XSOAR incident fields. In this integration, we are given a default incoming and outgoing mapper. For more information about mappers, see Classification and mapping.
Mapper (Outgoing) Specifies how Cortex XSOAR incident data should be mapped to external integrations (Cortex XDR). This is important when using incident mirroring.
Abstract
Configure a third-party integration instance to fetch incidents into Cortex XSOAR incidents for investigation.
You can poll third-party integration instances for events and turn them into Cortex XSOAR incidents (fetching). Many integrations support fetching,
but not all support this feature. You can view each integration in the Developer Hub.
When setting up an instance, you can configure the integration instance to fetch events. You can also set the interval for which to fetch new
incidents, by configuring the Incidents Fetch Interval field. The fetch interval default is 1 minute. This enables you to control the interval in which an
integration instance reaches out to third-party platforms to fetch incidents into Cortex XSOAR.
NOTE:
In some integrations, the Incidents Fetch interval is called Feed Fetch Interval.
If the integration instance does not have the Incidents Fetch Interval field, you need to add this field by editing the integration settings. If the integration is from a content pack, you need to create a copy of the integration. Any future updates to this integration will not be applied to the copied integration.
If you turn off fetching for a while and then turn it on or disable the instance and enable it, the instance remembers the last run and pulls
all events that occurred while it was off. If you don't want this to happen, verify that the instance is enabled and click Reset the “last run”
timestamp when editing the instance. Also, note that "last run" is retained when an instance is renamed.
1. Select the integration instance from which you want to fetch incidents by going to Settings & Info → Settings → Integrations → Instances, finding the integration, and clicking + Add instance.
When enabled, Cortex XSOAR searches for events that occurred within the time frame set for the integration, which is based on the specific
integration. The default is 10 minutes prior but can be changed in the integration script.
3. (Optional) In the Incidents Fetch Interval field, set the interval of hours and minutes to fetch incidents (default 1 minute).
4. (Optional) If the Incidents Fetch Interval field does not appear, add it to the integration.
a. For integrations installed from a content pack, select the duplicate integration button.
If you already duplicated the integration, click the Edit integration’s source button.
In the Parameters section, you can see that the IncidentFetchInterval parameter is added. Change the default value if necessary.
Abstract
Add a server configuration to receive notifications if an integration experiences an incident fetch error.
The administrator and Cortex XSOAR users on the recipient’s list receive a notification when an integration experiences an incident fetch error.
Administrators with multiple instances of mail senders can choose to receive one email notification instead of multiple email notifications. Cortex
XSOAR users can select their notification method, such as email, from their user preferences.
NOTE:
Connectivity issues between Cortex XSOAR and third-party applications may trigger a fetch failure, which sends a notification to the administrator and users. The notification may no longer be relevant because the fetch might operate correctly just after the notification was sent.
2. Select Settings & Info → Settings → System → Server Settings → Add Server Configuration.
Key Value
message.ignore.failedFetchIncidents false
4. (Optional) Administrators who have multiple instances of a mail sender configured and want to receive only one email notification should select the Do not use by default option in the integration instances that should not be used to send emails.
Abstract
Integration permissions enable you to restrict running commands to specific roles in integrations.
You can use role-based access control (RBAC) to restrict running commands to specific roles at the integration instance level. If you have multiple
instances of the same integration, you can assign different roles (permission levels) for the same command in each instance.
For example, you may want to limit the roles that can run potentially harmful commands. In Cortex XDR, for instance, you may want to allow only certain roles to isolate endpoints.
Users who do not have permission to run a command cannot do the following:
Complete pending tasks in a Work Plan that uses the restricted command.
Edit arguments for playbook tasks that use the restricted command.
Leverage the restricted command when executing a reputation command, such as IP, Domain, and File.
NOTE:
PERMITTED ROLES: Lists the roles that have permission to run the command. Default is No Restrictions.
You may want to limit potentially harmful commands. For example, in Cortex XDR you may want to limit the ability to isolate endpoints.
2. Click Edit.
3. In the PERMITTED ROLES column, select the roles that you want to allow to run the command.
Abstract
Verify the integration settings. Check settings such as usernames, URLs, and passwords.
In the following example, you receive a 401 unauthorized error code after testing the integration.
Click Run Test & Download Debug log to download the debug file locally. You can verify which server the URL request is being forwarded to and any other reasons why you received this error code. The 401 unauthorized error code usually relates to invalid credentials, expired tokens, or incorrect API settings.
Review the integration logs (Settings & Info → Settings → Integrations → Integration Logs).
You can sort the logs by attributes such as source instance, command, and log level. You can also export the file.
If you are unable to fix the integration, contact Customer Support for further assistance.
The command line interface (CLI) enables you to run system commands, integration commands, scripts, and more. The CLI auto-complete feature helps you find relevant commands, scripts, and arguments.
External commands: These commands are specific to an integration and perform actions relating to that integration, prefixed with "!". For example, !xdr-get-alerts.
Go to Settings & Info → Settings → Integrations → Instances. Under each integration, you can view a list of commands.
NOTE:
Integration commands are only available when the integration instance is enabled. Some commands depend on a successful connection
between Cortex XSOAR and third-party integrations.
You can run the CLI commands on any page where the CLI appears or in an incident. If run on a page that is not part of an incident, the results are returned to the Playground. The Playground is a non-production environment where you can safely develop and test automation scripts, APIs, commands, and more. It is an investigation area that is not connected to a live (active) investigation.
In the following example, you set up the Palo Alto Networks Cortex XDR - Investigation and Response integration instance. To retrieve Cortex XDR incidents for the last year, sorted by creation time in ascending order and limited to 5 incidents, run a command in the CLI along the lines of the sketch below (the argument names come from the Cortex XDR integration's command reference and may vary by content pack version):
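!xdr-get-incidents since_creation_time="1 year" sort_by_creation_time=asc limit=5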
In the Playground, you can see the list of incidents in a markdown table.
To see the incidents in JSON format, select Side Panels → Context Data. Each incident contains information obtained from the Cortex XDR endpoint that can be used in subsequent commands. You can search for a field such as incident_id. To get more information about incident_id 1, copy the data by clicking the incident_id in the context data.
TIP:
If you want to delete context in the Playground, type !DeleteContext all=yes. To clear the playground, at the top of the page, click Clear
playground.
To erase a playground and create a new one, run the /playground_create command.
Configure and manage long-running integrations to export internal data from Cortex XSOAR.
Some long-running integrations provide internal data via API calls to your third-party software, such as a firewall. You can set up Cortex XSOAR to allow third-party software to access long-running integrations installed either on the Cortex XSOAR tenant or on an engine.
IMPORTANT:
To ensure reliable and secure communication with Cortex XSOAR, you need to add the following DNS records:
ext-FQDN - The Cortex XSOAR DNS name mapped to the external IP address. For example, ext-xsoar.mycompany.com.
API-FQDN - The Cortex XSOAR DNS name mapped to the API IP address. For example, api-xsoar.mycompany.com.
Rather than adding credentials separately for long-running integration instances, you can set up universal credentials for all long-running
integrations.
Long-running integrations provide internal data via API calls such as:
Integration Description
O365 Teams (Using Graph API) Get authorized access to a user's Teams app in a personal or organization account.
Generic Webhook Creates incidents on event triggers. The trigger can be any query posted to the integration.
Generic Export Indicators Service Provides an endpoint with a list of indicators as a service for the system indicators. You can set up the tenant to export internal data to an endpoint.
NOTE:
This integration replaces the External Dynamic List integration, which is deprecated.
TAXII Server Provides TAXII services for system indicators (outbound feed).
TAXII2 Server Provides TAXII2 services for system indicators (outbound feed). You can choose to use TAXII v2.0 or TAXII v2.1.
XSOAR-Web-Server Supports handling configurable user responses (like Yes/No/Maybe) and data collection tasks that can be used to fetch key-value pairs.
Publish List Publishes Cortex XSOAR lists for external consumption.
Simple API Proxy Provides a simple API proxy to restrict privileges or minimize the amount of credentials issued at the API.
Web File Repository Makes your environment ready for testing purposes, enabling your playbooks or automations to download files from a web server.
NOTE:
When running on the tenant, you can only use long-running integrations provided by Cortex XSOAR, you cannot create custom ones.
Custom long-running integrations are supported only on engines at this time.
Configuring custom certificates or private API Keys in the long-running integration instance is supported only on engines, not on the
Cortex XSOAR tenant.
When defining credentials for long-running integrations, you can do one of the following:
You need Account Admin or Instance Administrator permissions to define credentials.
TIP:
For long-running integrations running on an engine, we strongly recommend defining a username and password, but it is not required.
Users with sufficient permissions can set the username and password for specific integration instances, on the Integrations → Instances
page.
IMPORTANT:
If you define credentials in long-running integrations, but there is a different username and password in an individual integration instance, the
credentials for the integration instance override the long-running integration credentials.
2. In the Configure Universal Credentials for Long Running Integrations (Optional) section, add a username and password.
When configuring a long-running integration, you don't need to add a username and password.
You can use CURL commands from any terminal to access and test the long-running integration at the URL:
https://ext-<cortex-xsoar-address>/xsoar/instance/execute/<instance-name>
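For example, a sketch assuming the universal credentials configured above:
curl -u '<username>:<password>' 'https://ext-<cortex-xsoar-address>/xsoar/instance/execute/<instance-name>'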
NOTE:
You can use CURL commands from any terminal to access and test the long-running integration at the engine URL:
When sending a curl request to the URL, you can use the following parameters.
t - Only with mwg format. The type indicated at the top of the exported list. Supports: string, applcontrol, dimension, category, ip, mediatype, number, and regex. Example: https://ext-<cortex-xsoar_instance>/instance/execute/<ExportIndicators_instance_name>?v=mwg&t=ip
sp - If set, strips ports off URLs; otherwise, ignores URLs with ports. Example: https://ext-<cortex-xsoar_instance>/instance/execute/<ExportIndicators_instance_name>?v=text&sp
cd - Only with proxysg format. The default category for the exported indicators. Example: https://ext-<cortex-xsoar_instance>/instance/execute/<ExportIndicators_instance_name>?v=proxysg&cd=default_category
1 - Collapse to ranges.
2 - Collapse to CIDRs
When configuring a long-running integration instance you may need to define a listening port.
If the long-running integration runs on the Cortex XSOAR tenant, you do not need to enter a Listen Port in the instance settings. The system
auto-selects an unused port for the long-running integration when the instance is saved.
You must set the Listen Port for access when configuring a long-running integration instance on an engine. Use a unique port for each long-
running integration instance. Do not use the same port for multiple instances.
10. Incident configuration
Abstract
Customize how the incident appears, add deduplication rules, and add any other customizations you require for your workflow.
An incident goes through various processes in Cortex XSOAR including defining an incident, classification and mapping, pre and post-processing,
and running a playbook.
Incidents are potential security threats that SOC analysts identify and remediate. There are several incident triggers, including:
SIEM alerts
Mail alerts
Security alerts
These alerts are generated from third-party services, such as SIEMs, mailboxes, and data feeds.
Cortex XSOAR includes several out-of-the-box incident types, fields, and layouts, which can be customized to suit your use case. You can also
create incident types, custom fields, and layouts as necessary. Incidents can be created manually, from a JSON file, the Cortex XSOAR RESTful API, or an integration feed.
You can define integrations with your third-party security and incident management vendors. You can trigger events from these integrations that
become incidents in Cortex XSOAR. You can run playbooks on these incidents to enrich them with information from other products in your system,
which helps you complete the picture.
In most cases, use rules and scripts to determine if an incident requires further investigation or can be closed based on the findings. You can filter
the incidents that are ingested into Cortex XSOAR by manually de-duplicating incidents, setting up pre-process rules to perform certain actions, or automatically de-duplicating incidents. This enables your analysts to focus on the minority of incidents that require further investigation. After you
close an incident you may want to automate an additional action such as closing a remedy ticket. For more information, see Use post-processing
scripts in an incident.
Planning
Before you begin configuring integrations and ingesting information from third parties, consider the following:
Phase Description
Incident types Incident types classify the events that are ingested into Cortex XSOAR. Use out-of-the-box types, or create incident types to classify the different types of attacks with which your organization deals. For more information, see Create an incident type.
Incident fields Display information from third-party integrations and playbook tasks when an incident is created or processed. Use out-of-the-box fields, or create fields for your use case. For more information, see Create an incident field.
Incident layouts Customize your layouts by adding custom or system fields for each incident type, so that the most relevant information is shown for each type. For more information, see Incident layout customization.
This is an iterative process. After you've configured incident types, fields, and layouts, and you've classified and mapped your incident type and
fields, start ingesting information, which enables you to assess how you've mapped out your information. As you see the data coming in, you can
make adjustments to improve your mapping and gain a deeper understanding of the information you're collecting. Although unmapped information is available in labels, it's significantly easier to work with when assigned to a specific field and displayed in the appropriate layouts.
Configure integrations
Configure integrations with third-party products to start fetching events, such as potential phishing emails, authentication attempts, and SIEM
events. For more information, see Configure integrations.
Once you configure integrations, you should determine how the events ingested from those integrations will be classified as incidents. For
example, you can classify items based on the subject field for email integrations, but for SIEM events, you should classify them by event type.
During the planning stage, it's important to define how the information ingested from your integrations will be mapped to the fields you're creating.
For more information, see Classification and mapping.
Pre-Processing
Pre-processing rules enable you to perform certain actions on incidents as they are ingested into Cortex XSOAR. Using rules, you can select
incoming events on which to perform actions, for example, link the incoming event to an existing incident, or based on configured conditions, drop
the incoming incident altogether. For more information, see Pre-process rules.
Create an incident
Based on the definitions provided in the Classification and Mapping stage, and the rules you created for pre-processing events, incidents of
various types are created. The incidents all appear on the Incidents page, where you can start investigating incidents.
Run a playbook
Playbooks are triggered when an incident is created, or you can run them manually as part of an investigation. When triggered by a created incident, the playbook defined for the classified incident type runs on the incident. Alternatively, if you are manually running a playbook, select whichever playbook is relevant for the investigation. For example, playbooks can take IP address information from one integration and enrich that IP address with information from additional integrations or sources.
Post Processing
Once the incident is complete and you are ready to close it, you can run various actions on the incident using a post-processing script. For example, send an email to the person who opened the incident informing them that their incident has been resolved, or close the incident in a ticketing system. For more information, see Use post-processing scripts in an incident.
Create and edit incident types, fields, and layouts in Cortex XSOAR.
Several content packs, such as Cortex XDR by Palo Alto Networks, include out-of-the-box integrations, incident types, fields, and layouts. You may
need to customize incident types, fields, and layouts to suit your needs or create new ones to investigate and respond to potential security threats
specific to your organization.
Option Description
Incident types You can create a new incident type or customize an existing incident type, such as setting the default playbook, adding the layout, and any post-process and indicator extraction rules. You can create, duplicate, import, export, and customize incident types. For more information about creating an incident type, see Create an incident type.
Incident fields Custom incident fields add specific details or attributes to incidents, helping analysts to investigate and understand potential security threats. You can edit or create an incident field. For more information, see Create an incident field. After creating an incident field, map the field to the relevant context data. You can add the field to an incident type and view it in an incident layout.
Incident layouts Custom incident layouts enable you to organize and display specific details about potential threats in a way that makes sense for your organization, making it easier to quickly understand and respond to security issues. You can view, customize, import, and export incident layouts and add a custom layout to an incident type. For more information, see Incident layout customization.
This is an iterative process. After you initially create your types, fields, and layouts, you can start the process of ingesting information by installing
and configuring an integration to fetch incidents.
When you configure an integration instance, you can define a classifier and a mapper for the integration. When an incident is ingested into Cortex XSOAR, the integration assigns the incident type when classified and maps the event data into incident fields. For example, when defining the EWS O365 integration instance, setting the classifier to EWS - Classifier classifies all incoming incidents from the O365 integration as Phishing.
When an incident is ingested, one of the first entries in the War Room is the fields and values returned. You may want some of this
information to appear on the Incident Info/Summary page when an analyst starts investigating.
Review the context data (from Side panels). Context data is a map (dictionary) that stores structured results from data, such as commands,
playbooks, and scripts. If there is information in the context data you don't see in the incident, map it into incident fields and display it in the
layout. For more information, see Use incident context data.
Abstract
Use context data to customize your incident layout and to populate your incidents in Cortex XSOAR.
Context data is a map (dictionary) that stores results from data, such as commands, playbooks, and scripts in a structured format. Context data
includes keys (strings) and values (strings, numbers, maps, and arrays). Context data at its core is a large JSON structure, which represents all
the data that is part of an incident. All incidents have context data.
You can use context data to pass data between playbook tasks, capture important structured data, and display it in the incident layout. Context
data acts as an incident data dump from which you can map data into incident fields. When an incident is generated in Cortex XSOAR and a
playbook or analyst begins investigating it, context data will be written to the incident to assist with the investigation and remediation process.
When an incident is created, the incident data is stored in the context data under the incident key. When an investigation is opened and integration commands are run, data returned from those commands is also stored outside of the main incident key, as in the simplified sketch below.
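The following is an illustrative, hypothetical context structure (the keys and values are examples only, using the HelloWorld integration referenced later in this section):
{
    "incident": {
        "id": "1234",
        "name": "Suspicious login",
        "severity": 2
    },
    "HelloWorld": {
        "Domain": {
            "domain": "example.com",
            "registrar": "Example Registrar"
        }
    }
}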
Add keys and values to the context data, such as the incident status, actions, and ID. This is useful when developing playbooks and other scripts.
Add context data to incident fields in a layout to capture important and relevant information to assist with investigation and remediation.
To view context data from within an incident, click on the Side panels menu and select Context Data from the dropdown. In the Context Data pane,
you can use Query to search within the JSON for specific items and expand nested keys.
Example 3.
${HelloWorld.Domain(val.domain == 'example.com')} shows the full object for the example.com domain, as stored in the context
data by the domain command that is part of the HelloWorld integration.
${HelloWorld.Domain(val.domain == 'example.com').registrar} shows the registrar for the example.com domain, as stored in the
context data by the domain command that is part of the HelloWorld integration.
You can also write scripts using complex logic to access, aggregate, and change context data. For more information, see Cortex XSOAR Transform Language (commonly referred to as DT).
When fetching incidents from an integration, some important data may not have been picked up in the incident layout. For example, the context
data may return the source user, event type, URL category, and suspicious URL but these fields may not appear as fields or in the layout. For
more information about customization, see Incident Customization.
The main use of context data is to pass data between playbook tasks: one task stores its output in the context, and another task reads that output from the context and uses it. For more information about how to use context data, including examples and use cases, see Context and Outputs.
You can use the information stored in the incident context and apply filters and transformers to context data before using the data in
playbook tasks.
While running a playbook using the playbook debugger: because context data may be updated during a playbook run, set a breakpoint to view the context data after a specific task, which can be useful for designing and troubleshooting playbooks.
By default, context data for sub-playbooks is stored in a separate context key. When a task in a sub-playbook accesses context data, it does not
have direct access to the main playbook data. If, however, the sub-playbook has been configured to share globally, the sub-playbook context data
is available to the main playbook and vice versa.
NOTE:
Generic polling does not work if a playbook’s context data is shared globally. For more information, see Playbook polling.
In any script that runs in an incident, the data is written to the context. For example, demisto.executeCommand("set", {"key": "<key>", "value": "<value>"}). For more information, see Set Command.
To add context data to an incident, run the Set command in the CLI. The Set command enables you to set a value under a specific key. For more
information about the Set command, see Set Command.
In the incident that you are investigating run the !Set command. For example, to add the key and value hello:world to the context data, run the
following command:
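!Set key=hello value=world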
NOTE:
All incident data stored in incident fields is also stored in the context data. In most cases, however, not all context data is stored in incident fields. Incident fields represent a subset of the total incident data.
In the incident whose context data you want to delete, run the DeleteContext command in the CLI. For example, to delete the key and value hello:world from the context data, run the following command:
!DeleteContext key=hello
Abstract
You can create an incident type if the incident type does not exist and then classify the incident according to this incident type. Each incident type
has a unique set of data relevant to that specific incident type. When you duplicate an incident type, the duplicate is associated with the same set
of incident fields that belonged to the original incident type.
By default, when installing incident types from a content pack, incident types are attached, which means they are not editable. If you want to edit
the incident type, such as changing the layout or the default playbook, you have the following options:
The duplicate type is editable and the original incident type continues to receive content pack updates, but the duplicate does not.
While an incident type is detached, it does not receive content pack updates. If you detach an incident type and make changes, any changes made while it was detached are overwritten by the content pack's default values when you reattach it. If you want to keep your changes and protect them from content pack upgrades, duplicate the incident type before reattaching the original.
Field Description
Name Enter a descriptive name for the type. Try to make the name informative, so users know what the type does before viewing the type details.
Default playbook Select the default playbook that is associated with the incident type.
Run playbook automatically Determines if the playbook runs automatically when the incident is ingested.
Post Process using After incidents have been investigated, select the post-process script to run on these incident types. For more information, see Use post-processing scripts in an incident.
SLA Determines the SLA for this incident type in any combination of Weeks, Days, and Hours. For more information, see Configure an SLA in an incident type.
Set Reminder at Optionally configure a reminder for the SLA in any combination of Weeks, Days, and Hours.
Indicator extraction rules extract indicators from incident fields and enrich them using commands and scripts. You can view and create
indicator extraction rules according to incident fields. For more information, see Create indicator extraction rules for an incident type.
Abstract
Incident fields are used to accept or populate incident data coming from incidents. These fields are added to incident layouts and are mapped
using classification and mapping.
Creating incident fields is an ongoing process. You can create fields from information ingested from third-party integrations. As you learn more
about your needs and the capabilities of your third-party integrations, you can continually add new fields to capture the most relevant information.
When investigating an incident, an analyst can easily add relevant information to the fields in the layout. Incident fields can be populated by incident team members at the beginning of an investigation, during the investigation, or before closing the investigation.
NOTE:
In the CLI, you can set and update all system incident fields using the setIncident command, where each field is a command argument.
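For example, a sketch using two common field arguments (the arguments correspond to the fields' machine names):
!setIncident severity=3 owner=admin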
Field types
Attachments Enables the user to add an attachment, such as .doc, malicious files, reports, and incident images.
Boolean Checkbox
Grid (table) Include an interactive, editable grid as a field type for selected incident types or all incident types. To see how to create a
grid field and to use a script, see Use scripts with a grid field.
When you select Grid (table) you can format the table and determine if the user can add rows.
HTML Create and view HTML content, which can be used in any incident type.
Long text Long text is analyzed and tokenized, and entries are indexed as individual words, enabling you to perform advanced
searches and use wildcards.
Long text fields can't be sorted or used in graphical dashboard widgets.
While editing a long text field, pressing Enter creates a new line. Long text is case insensitive.
Markdown Add markdown-formatted text as a Template, which is displayed to users in the field after the incident has been created. Markdown lets you add basic formatting to text to provide a better end-user experience.
Multi select An empty array field for the user to add one or more values as a comma-separated list.
Role Role assigned to the incident. Determines which users (by role) can view the incident.
Short Text Short text is treated as a single unit of text and is not indexed by word. Advanced search, including wildcards, is not
supported.
Short text fields are case-sensitive by default but can be changed to case-insensitive when creating the field.
While editing a short text field, pressing enter will save and close.
Single select Select a value from a list of options. Add comma-separated values.
Timer/SLA View how much time is left before an SLA becomes past due, as well as configure actions to take if the SLA does pass.
NOTE:
Incidents sorted using an SLA/Timer field are sorted by the due date of the SLA field.
1. Select Settings & Info → Settings → Object Setup → Incidents → Incident Fields → New Field.
To edit an existing incident field, right-click the field name and select Edit.
Parameter Description
Field Name A meaningful display name for the field. After you type a name, you will see below the field that the Machine name is automatically populated. The field's machine name is applicable for searching and the CLI.
NOTE:
If you try to create a new incident field with a name that already exists in the system such as Account, you may receive
a message like this:
[Could not create incidentfield with ID '' and name 'Account'.Field already exists as a builtin
field (100709)].
If so, select a different name, as the field name is already reserved for system use.
You should not create a custom field named reason, as it is a saved keyword in the tenant.
4. In the Basic Settings tab, define the values according to the selected field type.
Parameter Description
Placeholder Optional text to display in the field when it is empty. This text will appear in the layout, but not in the created incident.
Available for Short text, Long text, Multi select / Array, and Tags.
Parameter Description
Values A comma-separated list of values that are valid values for the field.
Parameter Description
SLA Determine the amount of time in which this item needs to be resolved. If no value is entered, the field serves as a
counter.
Risk Threshold Determine the point in time at which an item is considered at risk of not meeting the SLA. By default, the threshold is
3 days, which is defined in the global system parameter.
Run on SLA Breach In the Run on SLA Breach field, select the script to run when the SLA time has passed. For example, email the supervisor or change the assignee.
NOTE:
Only scripts to which you have added the SLA tag appear in the list of scripts that you can select.
6. If you are creating a Grid (table) field, in the Grid tab, define the following values.
To enable users to add/remove rows in the grid, select the User can add rows field. If selected, the user can add rows but not
columns.
Manage rows and columns. You can move the columns and add/delete rows and columns (using the + and - signs). How you design
the grid determines how it appears to users.
Configure each column by clicking the settings button in each column. Add the column name, select whether the column is mandatory,
and the field type. If you select Lock, the value for that field is static (not editable). If you do not select the Lock checkbox (default),
users can perform inline editing.
Field Description
Script to run when field value changes The script dynamically changes the field value when script conditions are met. For a script to be available, it must have the field-change-triggered tag when defining the script.
Run the field triggered script after the new field value is saved Leave unchecked for the script to execute before the incident is stored in the database, so the script can modify the incident field value. Useful in most cases, including performing validations and starting and stopping Timer/SLA fields. When checked, the script executes after the incident is stored in the database, so the script cannot modify the incident except through CLI or API calls. For example, add the emailFieldTriggered script, which runs after the incident update is stored in the database (unchecked).
Field display script Determines which fields display in forms, as well as the values that are available for single-select and multi-select fields. For more information, see Create Dynamic Fields in Incident Forms.
Add to all incident types Determines for which incident types this field is available. By default, fields are available to all incident types. To change this, clear the Add to all incident types checkbox and select the specific incident types to which the field is available. For example, you may want to limit the field to Access, Malware, and Network incident types.
Default display on Determines at which point the field is available. For more information, see Incident Field Examples.
Edit Permissions Determines whether only the owner of the incident can edit this field.
Indexing (Make data available for search) Determines if the values in these fields are available when searching.
NOTE:
In most cases, Cortex XSOAR recommends that you select this checkbox so values in the field are available for indexing and querying. However, in some cases, to avoid adverse effects on performance, you should clear this checkbox. For example, if you are ingesting an email to an email body field, we recommend that you not index the field.
If you subsequently edit the field, you can select Don't show in the incidents layout. If selected, the incident field does not appear in the layout, but the data is displayed in the context data.
10. (Optional) In the incident type, map the incident field, so the incident field is automatically updated, without the analyst having to change it.
The following section shows several examples of common fields used in real-life incidents.
False positive
Below is an example of a mandatory False Positive field, which is completed when the incident is closed. The field can have a value of Yes or No. The administrator can query or run a report based on this field. After this field is added, all incidents need to complete this field before an incident can be marked closed.
SLA fields
The following SLA field can be used to trigger a notification when the status affecting the SLA of an incident changes. In this example, if the SLA is
breached an email is sent to the owner's supervisor.
Abstract
Associate Cortex XSOAR incident fields with scripts that are triggered when the field changes.
Incident fields can be associated with trigger scripts that check for field change conditions and take actions based on the change. These scripts can perform any action when the conditions are met, such as dynamically changing the field value or notifying the responder when an incident severity has been changed. For example, the ChangeRemediationSLAOnSevChange script changes the Remediation SLA of an incident if the severity of the incident changes for any reason.
Scripts can be created in Python, PowerShell, or JavaScript on the Scripts page. To use a field trigger script, you need to add the field-change-
triggered tag when creating the script. You can then add the script in the Attributes tab, when you edit or create an incident field. If you did not add
the tag when creating the script, it cannot be selected, until you add the tag.
Cortex XSOAR comes out-of-the-box with field change scripts in the Scripts page, such as:
ChangeRemediationSLAOnSevChange: Changes the remediation SLA once a change in incident severity occurs.
emailFieldTriggered: Sends an email to the incident owner when the selected field is triggered.
StopTimeToAssignOnOwnerChange: Stops the Time to Assignment SLA field as soon as an owner is assigned to an incident.
A common use case is to create a script that only allows automated changes by a playbook, not manual changes by a user. The script checks who made the change using the user argument. The cliName argument returns the field name, so the script can be attached to multiple incident fields and block changes to them, without the need for a different script for each field. A minimal sketch follows.
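This sketch assumes the standard trigger arguments (user, old, cliName), where user is empty when the change comes from a playbook or the system, and old is assumed to hold the field's previous value:
args = demisto.args()
if args.get('user'):
    # A human user made the change; revert the field to its previous value.
    execute_command('setIncident', {'customFields': {args['cliName']: args.get('old')}})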
If you want the script to change the incident name field and context data, run the following command:
See the following video about how to create and add scripts to an incident layout:
Incident field trigger scripts have the following triggered field information available as arguments (args):
Argument Description
associatedTypes An array of the incident types, with which the field is associated.
cliName The name of the field when called from the command line.
ownerOnly Specifies that only the creator of the field can edit.
selectValues If this is a multi-select type field, these are the values the field can take.
validationRegex A regex used to validate the values the field can hold.
Script limitations
Post-processing scripts can modify an incident, but if a modified field has a trigger script, it is not called.
Incident modifications executed within a trigger script are only saved to the database after the modifications are completed.
Best practices
Fields that can hold a list (related incidents, multi-select/tag/role type custom fields) will provide an array of the delta. For example, if a multi-
select field value has changed from ["a"] to ["a", "b"], the new argument of the script will get a value of ["b"].
Incident field trigger scripts run as a batch. This means that if multiple incidents are changed in the same way and are set to trigger the
same action, it will happen in one batch.
When writing incident field trigger scripts, avoid scenarios that call the scripts endlessly (for example, a change in field A triggers script X,
which changes field B's value, which in turn calls script Y, which changes field A's value).
After creating an incident field trigger script in the Scripts page in Python, PowerShell, or JavaScript, you can then associate it with an incident
field.
3. In the Attributes tab, under Script to run when field value changes, select the desired incident field trigger script.
NOTE:
Incident field trigger scripts must have the field-change-triggered tag to appear in the list.
2. Click New and create a new Incident field of one of the following types:
Multi-select
3. Click Basic Settings and in the Values section set the values you want to see in the incident layout dropdown list for this field.
4. Click Attributes and in Script to run when field value changes, select the script.
Example 4.
val = demisto.args()['new'] # when the script is triggered, this argument holds the new value chosen by the user
mapped_val = mapping_dict.get(val, val) # get the value from the map, falling back to the original (mapping_dict is defined in Example 5)
execute_command('setIncident', {'customFields': {'Single_select_field_example': mapped_val}}) # set the new mapped incident field
Example 5.
mapping_dict = {
'low' : '1',
'medium' : '2',
'high' : '3',
'critical' : '4',
}
NOTE:
Replace 'Single_select_field_example' or 'multi_select_field_example' in the examples above with the names of your custom fields.
5. Go to Settings & Info → Settings → Object Setup → Incidents → Layouts and add the new incident field to an existing layout or create a
new layout.
6. In the incident layout edit page, click Fields and Buttons and drag the new incident field you created to the layout.
In the layout display, you will see the values you set in step 3.
8. Select one of the values. The layout will update with the mapped value as set on the script related to the incident field.
You can use scripts to manipulate and populate data in the Grid field. In this example, you want analysts to be able to add comments for the incident during their shift, and you use a script to automatically populate the Date Logged column with the current date when a user adds a new row to the grid.
1. Create a script called ShiftSummariesChange. The script operates in the following phases:
For each existing row, if the name matches, and the findings column is not updated, the Date Logged column is also updated.
After creating a grid field, it is saved with the new values using the setIncident command.
Full name
Findings
Status
Date Logged
Select Date picker with the Lock checkbox, so the script can populate the values for that column. If a column is unlocked (default), the
column values can be entered manually (by users), or by a script.
NOTE:
During playbook execution, if a malicious finding is discovered you may want to add that finding to a grid. You can use a script in the playbook to
add a new row to the grid with the malicious finding.
fieldCliName: The machine name for the field for which you want to add a new row.
Row: The new row to add to the grid. This is a JSON object in lowercase characters, with no white space.
fieldCliName = demisto.args().get('field')
currentValue = demisto.incidents()[0].get('CustomFields', {}).get(fieldCliName)
if currentValue is None:
    currentValue = [json.loads(demisto.args().get('row'))]
else:
    currentValue.append(json.loads(demisto.args().get('row')))
# Save the updated grid back to the incident field using the setIncident command
execute_command('setIncident', {'customFields': {fieldCliName: currentValue}})
You can create scripts that perform specific actions when the SLA is breached in an incident field. For example, you can use the
SendEmailOnSLABreach script that sends an email to specific users when the script is triggered. For more information, see Automate changes to
incident fields using SLA scripts.
Abstract
Create dynamic incident fields using an automation script. Create conditional fields.
Dynamic fields can display different data depending on the field value. You can control which fields display in an incident layout, new/edit, and
close forms, and which values display for single-select and multi-select fields. You create a script on the Scripts page and then add the script to a
field. Scripts support JavaScript, Python, and PowerShell.
You want specific values to appear in a field when the value of another field is different. For example, if the value in the Owner field is Admin,
the values in the assignee field should be Jane, Joe, or Bob. If the value in the Owner field is anything else, the values in the assignee field
should be Mark, Jack, or Christine.
You can use display scripts to change the value displayed in single-select or multi-select fields in the layout. The field displays a list of
options, but when selected, the field may show a different value in the layout than the one selected. For example, in a single-select field,
select an incident from a list of incident names, but the field is populated with the incident ID (not the name) of the related incident.
When assigning an incident to a user, you want to see only relevant data according to the user’s role.
1. Create a script.
Add the field-display tag to the script; this tag must be applied for the script to be available in the field to which you want to add the script.
Cortex XSOAR comes out-of-the-box with the hideFieldsOnNewIncident field-display script, which hides the incident field for new
incidents, but appears when editing an incident.
Name Description
field The field attributes. Add metadata to the field, such as cliName, type, select values, etc. For example, ['field']['cliName'] is the machine name of the field.
formType Enables Cortex XSOAR to process the script in the new, edit, and close incident forms. For example, you may want the field to appear in the close form and not in the edit form.
incident.get('field') The field within the incident. For example, incident.get('owner') retrieves the owner field. If you create a custom field, you need to change this to CustomFields. For example, for the incidentclassification custom field, type: incident.get('CustomFields').get('incidentclassification').
currentUser Specifies the current user. For example, if you want the script to check the role assigned to the user and display the appropriate output, type the following:
Add the information that you want to display according to the user roles.
a. Select Settings & Info → Settings → Object Setup → Incidents → Incident Fields → New.
If you want to add the script to an existing field, select the field and click Edit.
b. Under Field Type, select the field type. For example, Single select.
d. Under the Attributes tab, in the Field display script field, select the script you created in step 1.
The following example shows how to create a script for the Assignee field, which shows different values depending on the value in the Owner field. If the Owner is defined as admin, the list of available assignees includes one group. If the Owner is defined as anything else, the list of available assignees includes a different group.
Changes values available in the Assignees field based on the person defined as the owner.
incident = demisto.incidents()[0]
field = demisto.args()['field']['cliName']
if incident.get('owner') == 'admin':
    demisto.results({'hidden': False, 'options': ['jane', 'joe', 'bob']})
else:
    demisto.results({'hidden': False, 'options': ['mark', 'jack', 'christine']})
where
demisto.results tells us whether to hide the field and which values should appear in the field. When the owner field is Admin, the values are Jane, Joe, and Bob. When the owner is anyone else, the values are Mark, Jack, and Christine.
5. Select Settings & Info → Settings → Object Setup → Incidents → Incident Fields → New.
The Values field in the Basic Settings tab has been left blank because we hard-coded the values in our script.
Under the Attributes tab, in the Field display script field, select the changeAsigneesPerOwner script we created above.
Fill in the rest of the field definitions as desired and click Save.
7. Create an incident to see what happens when the Owner is set to Admin and when the Owner is set to anything else.
In this example, you need to hide a field in the new incident form but display the field when editing the form. You also set field values for a multi-
select field in the case of an existing incident.
incident = demisto.incidents()[0]
field = demisto.args()['field']
formType = demisto.args()['formType']
if incident["id"] == "":
    # This is a new incident, hide the field
    demisto.results({"hidden": True, "options": []})
else:
    # This is an existing incident; show the field and determine which values to display
    options = []
    # The field type includes the word select, such as Single select or Multi select
    if "Select" in demisto.get(field, "type"):
        # Take the options from the field definition
        options = demisto.get(field, "selectValues")
    demisto.results({"hidden": False, "options": options})
2. Select the Malicious Cause (if the cause is a malicious attack) field and click Edit.
3. Under the Field display script field, select the hideFieldsOnNewIncident script and click Save.
Scroll down and note that under Mandatory Information, there is no Malicious Cause field.
Scroll down to the Mandatory Information section and note that the Malicious Cause field appears and the options for the field are
retrieved from the initial field definition.
Abstract
When trying to download a content update, you may receive the following message:
This occurs when a content update has an incident field with the same name as a custom incident field that already exists in Cortex XSOAR.
Click Install Content to force the update and retain your custom incident field. The content update will install without the system version of the
incident field.
After deleting a field of type Grid (table) and creating a new field of another type (string, long text, etc.), you may receive the following error when
trying to close or update an incident:
Cannot convert type []interface {} of '[map[] map[]]' to type string, field: sourceip (8902)
This error occurs with field type changes, if the fields are not compatible types, such as changing the type from long text to boolean or URL to
short text. If you create an incident with that field, delete the field, create a new field with the same name but a different type, and then try to close
the incident with that field, the error occurs.
8. Click Close if you want to close the incident or Edit if you want to edit the incident.
9. In the Custom Fields area, reset (delete) the value for the field you changed.
Each incident type has a unique set of data relevant to that specific incident type, and a layout to display it. It is important to display the most
appropriate data for users. Each out-of-the-box incident type comes with a layout. You can customize almost every aspect of the layout, including
which tabs appear, in which order they appear, who has permission to view the tabs, what information appears, and how it is displayed.
It's important to build or customize the layout so that you see the information that is relevant to the incident type. For example, in a phishing
incident, you may want to see email headers, but not in an access incident. While some information might be appropriate for multiple incident
types, its location in one incident may require more prominence than in another incident.
You can see which incident type uses the incident layout in the Types tab under Settings & Info → Settings → Object Setup → Incidents. The
incident layout name appears in the Layout column. You can edit the layouts in the Layouts tab.
You can customize the display information, including fields, for existing incidents by modifying the sections and fields for the following views:
Incident Summary: The Incident Summary tab displays the information necessary to investigate an incident. You can customize almost every aspect of the layout, including which tabs appear, the order they appear in, and who has permission to view them. In each field or tab, you can add filters by clicking the eye icon, which enables you to add conditions that show specific fields or tabs. For example, if an analyst decides that a Cortex XDR Malware incident is a Ransomware subtype, they may want to show only fields with data about the encryption method, and hide information that is relevant to an Adware subtype. You may also want to limit specific tabs to certain scenarios. For example, if a user clicks a phishing link, the new tab can contain relevant fields and action buttons for this scenario. You can also add dynamic fields, such as a graph of several bad indicators, their source, and severity. For more information, see Create dynamic fields. Also, you can use queries to filter the information in the dynamic section to suit your exact needs.
New/Edit form: Add, edit, and delete fields and buttons to be displayed when creating or editing an incident.
Close form: Add, edit, or delete sections, fields, and filters when closing an incident.
Incident Quick View: Add, edit, and delete sections, fields, and filters in the Incident Quick View section in the incident.
NOTE:
There are several out-of-the-box layout sections and fields that you cannot remove, but you can rearrange them in the layout and modify their
queries and filters. To make changes, these layouts need to be duplicated or detached.
We recommend copying an existing out-of-the-box incident layout so you don't miss any important information.
Incident Summary
"New/"Edit" Form
"Close" Form
3. Customize the tabs by clicking the settings wheel icon and then doing the following:
Rename: You can also edit a tab's name by clicking the tab.
Show empty fields: The setting that you configure in the layout becomes the default value seen in the report for the specific tab, which can then be overridden. You can also set a global default value using the UI.summary.page.hide.empty.fields server configuration, which can also be overridden for a specific tab.
Hide tab: Hides the tab. Rather than deleting the tab, hide it so you can use it again in the future.
Format for exporting: Build your layout based on A4 proportions to match the format used for exporting. Selecting this option hides the tab by default, but the tab remains available for export.
Display Filter: Add or view a filter applied to the tab. If the filter conditions are met, the specific fields or tabs are shown in the layout. If a mandatory field is not shown in the layout, the user is not obliged to complete it.
4. Do the following:
Drag and drop the required sections, fields, buttons, and tabs.
5. In the "New"/"Edit" Form and "Close" Form tabs, drag and drop the required fields and buttons.
You can also edit the Basic Information and the Custom Field sections.
NOTE:
If the incident type is attached, you need to detach or duplicate it before you can add the layout.
8. Create or ingest an incident to test the new layout and verify fields are populated.
9. (Optional) For a customized layout (duplicate or new layout), you can contribute it to the Marketplace.
a. In the Layouts page, right-click the new layout and select Contribute.
b. In the dialog box, select either Save and submit your contribution or Save and download your contribution for later use, which you can
view in the Contributions tab in the Marketplace.
You can preview the current layout, which is populated with demo data so you can see how the fields fit.
3. If editing a layout that has been installed from a content pack (the layout shows a locked icon), do one of the following:
To add the layout to the incident type, you need to detach the incident type and then add the layout. To duplicate an incident layout,
right-click the layout name in the layouts table, and select Duplicate.
When detached, the layout does not receive content pack updates until you reattach it. You do not need to edit the incident type, as
the layout name remains the same.
TIP:
While a layout is detached, it does not receive content pack updates. If you detach a layout and make changes, those changes are
overwritten by the default values from the content pack when you reattach it. If you want to protect your changes from content pack
upgrades, duplicate the layout before reattaching the original.
To detach or reattach an incident layout, right-click the layout name in the layouts table, and select Detach or Attach.
Customize sections
2. From the Sections tab in the Library, drag and drop the following sections:
New Section: After creating a new section, click the Fields and Buttons tab and drag and drop the fields as needed.
Cortex XSOAR out-of-the-box sections: Out-of-the-box sections such as attachments and evidence.
General Purpose Dynamic Section: You can add a script to the incident layout. For example, assign a script that calculates the total number of entries that exist for an incident, which dynamically updates when new entries are added to the incident.
NOTE:
To remove or duplicate a section, select the section, click the section's menu icon, and then select Duplicate or Remove.
TIP:
Limit the number of incident fields to 50 in each section. You can create additional sections as needed.
You can determine how a section appears in the layout. For example, you may want a section header, or you can configure the fields to
appear in rows or as cards. If some of the field values will be very long, use rows instead of cards. If the field values are short, you might
want to use cards so you can fit more fields into a section.
If the Description field contains a long description, click Scrollable description to add a scrollbar, which enables the displayed information to
grow to fit the content.
For example, to see all indicators of type IP with a reputation of Bad that were found by a specific source since January 2, 2021, enter:
Type:IP and reputation:Bad and firstseenbysource:>="2021-01-02T00:00:00 +0200"
You can add script-based content to the incident layout by adding the General Purpose Dynamic Section in the incident layout builder. The
General Purpose Dynamic Section enables you to configure a section in the incident layout from a script. The script can return simple text,
markdown, or HTML, and the results appear in the General Purpose Dynamic Section.
You can add any required information from a script. For example, you can assign a script that calculates the total number of entries for an incident,
which dynamically updates when new entries are added to an incident. You can add a custom widget to the incident page and add note
information using a script.
The following are examples of values that can be returned from the General Purpose Dynamic Section script:
Text example
return_results(<TEXT>)
Markdown example
return_results({
'ContentsFormat': EntryFormat.MARKDOWN,
'Type': EntryType.NOTE,
'Contents': <MARKDOWN_DATA>
})
HTML example
return_results({
'ContentsFormat': EntryFormat.HTML,
'Type': EntryType.NOTE,
'Contents': <HTML_DATA>
})
For the EntryFormat values see EntryFormat in Common Server Python functions.
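For instance, a minimal dynamic-section sketch in Python (assuming the standard Cortex XSOAR script environment, where demisto, EntryType, and EntryFormat are available) that returns incident details as markdown:
# A General Purpose Dynamic Section script: returns a markdown entry
# summarizing the current incident.
incident = demisto.incident()
md = '### {}\n'.format(incident.get('name'))
md += 'Severity: {}\n'.format(incident.get('severity'))
return_results({
    'ContentsFormat': EntryFormat.MARKDOWN,
    'Type': EntryType.NOTE,
    'Contents': md
})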
1. Create a script.
For examples of script-based widgets for layouts, see Examples of using scripts in incident layouts
3. Drag and drop the General Purpose Dynamic Section into the area of the layout where you want it to appear.
4. Select the General Purpose Dynamic Section, click the section's menu icon, and then click Edit section settings.
5. In the Name and Description fields, add a meaningful name and a description for the dynamic section that explains what the script displays.
6. In the Automation script field, from the dropdown list, select the script that returns data for the dynamic section.
NOTE:
Only scripts to which you have added the dynamic-section tag appear in the dropdown list.
You can add existing buttons or create buttons and then drag and drop them in the layout.
For fields (script arguments) that are optional, you can define whether to show them to analysts when they click on buttons. To expose an optional
field, select the Ask User checkbox next to the script arguments in the button settings page.
In the following example, add a button to the layout, which self-assigns an incident for an analyst. The Common Scripts content pack includes an
AssignToMeButton Script.
NOTE:
When creating a script for use in an incident layout, the incident-action-button tag must be assigned for the script to be available for
custom buttons.
The script that runs when an action button is clicked accepts only mandatory arguments through the pop-up window and does not prompt
for non-mandatory arguments. Where a button takes a combination of mandatory and non-mandatory arguments, it is recommended to use
a wrapper script to collect and validate the arguments, as in the sketch below.
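For example, a minimal wrapper sketch (the assignee argument name is hypothetical; getUsers and setOwner are built-in commands):
# Wrapper for an action button: apply a default for an optional argument
# before calling the underlying command.
args = demisto.args()
target_user = args.get('assignee')  # optional button argument (hypothetical)
if not target_user:
    # default to the analyst who clicked the button
    target_user = demisto.executeCommand('getUsers', {'current': 'true'})[0]['Contents'][0]['id']
demisto.results(demisto.executeCommand('setOwner', {'owner': target_user}))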
In the following example, create a button to self-assign an incident for an analyst, and add it to a layout.
3. In the Fields and Buttons tab, drag and drop the New Button into the relevant section.
4. Click to configure.
NOTE:
The Case Management - Generic content pack includes several buttons to use in a layout, such as Assign to Me, Close as Duplicate, and
Link incidents. You can also see useful case management incident layouts. For more information, see Case Management - Generic
content pack.
3. In the Layout field, from the dropdown list, add the customized layout.
a. In the Layouts page, select the new layout and then click Contribute to Marketplace.
b. In the dialog box select either Save and submit your contribution or Save and download your contribution for later use, which you can
view in the Contributions tab in Marketplace.
If you select Save and submit your contribution, your layout is validated and you are prompted to submit to review. You can also view
your contribution in Marketplace.
In any SOC team, there are various roles and responsibilities. For example, you may have specific teams to deal with threats, such as threat
intelligence researchers, security analysts (Tier 1), senior analysts (Tier 2), SOC leads, SOC managers, and SIEM engineers. You have various
options to limit access to incidents and investigations.
Restrict an investigation according to team members. In an investigation, the owner of the incident can restrict the incident to team members
only.
NOTE:
If using a script, use the restrictInvestigation command. Specify the ID of the incident, and set the Restrict argument to True
to restrict the incident or to False to remove the restriction.
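For example, a minimal sketch (the incidentId argument name is an assumption; verify the command's arguments in your environment):
# Restrict the current incident to team members only.
incident_id = demisto.incidents()[0]['id']
demisto.results(demisto.executeCommand('restrictInvestigation',
                                       {'incidentId': incident_id, 'Restrict': 'True'}))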
You can add the Roles field to the layout, which enables you to restrict access to all roles other than those you have specifically
added. For example, after an investigation is closed, add administrators or those with specialty roles, so only they can reopen or link
incidents. The added roles have read and write permission, but all other roles do not have access (unless you have added them in the
read-only field).
NOTE:
You can also run /incident_set roles=<name of role> or !setIncident roles=<name of role> in the CLI, a
playbook, or a script to set the role.
If you add a role, but the incident has been restricted to team members, and the user is not a team member, the user cannot
access the incident regardless of the role. For example, if you restrict the incident to User A and User B team members who
are Tier 1 analysts but then try to add Tier 2 analysts (none of whom are team members) to the list of roles, a Tier 2 analyst
cannot access the incident.
You can add the XSOAR Read Only Roles field to the layout, which restricts access to the incident. When granting read-only access, the
user can view the incident but not edit it. For example, when an incident is in triage (phase 1), you may want all Tier 2 analysts to have
read-only access, so that Tier 1 can edit the incident. When the phase changes to phase 2, Tier 1 has read-only access.
Adding a team member overrides the XSOAR Read Only Roles field, so if you add User A (Tier 1) as a team member, even if you
assign Tier 1 as a read-only role, the user still has read/write access. You need to remove the user as a team member.
Although an analyst can change the XSOAR Read Only Roles field manually, you can automate the process by creating a custom incident
field using incident field trigger scripts, or by creating a script and adding a new field button.
NOTE:
You can also run !setIncident xsoarReadOnlyRoles=<name of role> in the CLI, a playbook, or a script to set the user
role.
If you assign a role (read and write permission) and assign the same role as read only, the user still has read/write permission. You
need to remove the assigned role. If you restrict the incident, the read-only role does not override the restriction. In other words,
team members permission takes precedence.
The following are examples of scripts that are supported in incident layouts:
Charts
A valid result for a chart widget is a list of groups. Each group points to a single entity. For example, in bar charts, each group is a bar. A group
consists of the following:
Name: A string.
Color: A string representing a color that will be used as a default color for that group. It can be the name of the color, a hexadecimal
representation of the color, or an RGB color value (optional).
In this example, create a script in Python that displays a horizontal bar of the indicators by severity.
Python script
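# Type 17 is the widget entry type (EntryType.WIDGET); ContentsFormat selects the chart type.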
data = {
"Type": 17,
"ContentsFormat": "bar",
"Contents": {
"stats": [
{
"data": [
1
],
"groups": None,
"name": "high",
"label": "incident.severity.high",
"color": "rgb(255, 23, 68)"
},
{
"data": [
1
],
"groups": None,
"name": "medium",
"label": "incident.severity.medium",
"color": "rgb(255, 144, 0)"
},
{
"data": [
2
],
"groups": None,
"name": "low",
"label": "incident.severity.low",
"color": "rgb(0, 205, 51)"
},
{
"data": [
8
],
"groups": None,
"name": "unknown",
"label": "incident.severity.unknown",
"color": "rgb(197, 197, 197)"
}
],
"params": {
"layout": "horizontal"
}
}
}
demisto.results(data)
After you have uploaded the script and created the widget, you can add the widget to an incident layout. The following widget displays:
Vertical bar
In this example, create a script in Python that displays a vertical bar of the indicators by severity.
Python script
data = {
    "Type": 17,
    "ContentsFormat": "bar",
    "Contents": {
        "stats": [
            {"data": [1], "groups": None, "name": "high", "label": "incident.severity.high", "color": "rgb(255, 23, 68)"},
            {"data": [1], "groups": None, "name": "medium", "label": "incident.severity.medium", "color": "rgb(255, 144, 0)"},
            {"data": [2], "groups": None, "name": "low", "label": "incident.severity.low", "color": "rgb(0, 205, 51)"}
        ],
        "params": {
            "layout": "vertical"
        }
    }
}
demisto.results(data)
After you have uploaded the script and created the widget, you can add the widget to an incident layout. The following widget displays:
Stacked bar
In this example, create a script in Python that displays a stacked bar showing the successes and failures on specific dates.
Python script
data = {
    "Type": 17,
    "ContentsFormat": "bar",
    "Contents": {
        "stats": [
            {
                "name": "time1",
                "groups": [
                    {
                        "name": "Successes",
                        "data": [7],
                        "color": "rgb(0, 205, 51)"
                    },
                    {
                        "name": "Failures",
                        "data": [3],
                        "color": "rgb(255, 144, 0)"
                    }
                ]
            }
        ],
        "params": {
            "layout": "horizontal"
        }
    }
}
demisto.results(data)
After you have uploaded the script and created the widget, you can add the widget to an incident layout. The following widget displays:
Line chart
In this example, we create a JavaScript that displays how many GitHub issues were created each week for Content, Documentation, and Platform
in a line chart.
JavaScript script
var content = 'red';
var platform = 'yellow';
var documentation = 'blue';

var data = {
    "Type": 17,
    "ContentsFormat": "line",
    "Contents": {
        "stats": [
            {
                "count": 3,
                "data": [3],
                "floatData": [3],
                "groups": [
                    {
                        "count": 3,
                        "data": [3],
                        "floatData": [3],
                        "groups": null,
                        "name": "Content",
                        "color": content
                    }
                ],
                "name": "2020-35"
            },
            {
                "count": 22,
                "data": [22],
                "floatData": [22],
                "groups": [
                    {
                        "count": 12,
                        "data": [12],
                        "floatData": [12],
                        "groups": null,
                        "name": "Platform",
                        "color": platform
                    },
                    {
                        "count": 10,
                        "data": [10],
                        "floatData": [10],
                        "groups": null,
                        "name": "Documentation",
                        "color": documentation
                    }
                ],
                "name": "2020-46"
            }
        ]
    }
};

return data;
After you have uploaded the script and created the widget, you can add the widget to an incident layout. The following widget displays:
Pie
In this example, create a script in Python that queries and returns a pie chart.
data = {
    "Type": 17,
    "ContentsFormat": "pie",
    "Contents": {
        "stats": [
            {"data": [1], "groups": None, "name": "high", "label": "incident.severity.high", "color": "rgb(255, 23, 68)"},
            {"data": [1], "groups": None, "name": "medium", "label": "incident.severity.medium", "color": "rgb(255, 144, 0)"},
            {"data": [2], "groups": None, "name": "low", "label": "incident.severity.low", "color": "rgb(0, 205, 51)"}
        ]
    }
}
demisto.results(data)
After you have uploaded the script and created the widget, you can add the widget to an incident layout. The following widget displays indicator
severity as a pie chart:
Duration
In this example, create a script in Python that queries and returns a time duration (specified in seconds), and displays the data as a countdown
clock.
data = {
"Type": 17,
"ContentsFormat": "duration",
"Contents": {
"stats": 60 * (30 + 10 * 60 + 3 * 60 * 24),
"params": {
"layout": "horizontal",
"name": "Lala",
"sign": "@",
"colors": {
"items": {
"#00CD33": {
"value": 10
},
"#FAC100": {
"value": 20
},
"green": {
"value": 40
}
}
},
"type": "above"
}
}
}
demisto.results(data)
After you have uploaded the script and created the widget, you can add the widget to an incident layout. The following widget displays the time
duration:
Number
This example shows how to create a single item widget that displays a number.
data = {
"Type": 17,
"ContentsFormat": "number",
"Contents": {
"stats": 53,
"params": {
"layout": "horizontal",
"name": "Lala",
"sign": "@",
"colors": {
"items": {
"#00CD33": {
"value": 10
},
"#FAC100": {
"value": 20
},
"green": {
"value": 40
}
}
},
"type": "above"
}
}
}
demisto.results(data)
After you have uploaded the script and created the widget, you can add the widget to an incident layout. The following widget displays:
Number Trend
This example shows how to create a single-item widget that displays a number trend.
data = {
    "Type": 17,
    "ContentsFormat": "number",
    "Contents": {
        "stats": {"prevSum": 53, "currSum": 60},
        "params": {
            "layout": "horizontal",
            "name": "Lala",
            "sign": "@",
            "colors": {
                "items": {
                    "#00CD33": {"value": 10},
                    "#FAC100": {"value": 20},
                    "green": {"value": 40}
                }
            },
            "type": "above"
        }
    }
}
demisto.results(data)
After you have uploaded the script and created the widget, you can add the widget to an incident layout. The following widget displays:
This example shows how to add note information to an incident layout using a script through the API.
1. Install the Cortex REST API content pack and add a Core REST API instance.
commonfields:
id: ShowLastNoteUserAndDate
version: -1
name: ShowLastNoteUserAndDate
script: |2
function getLastNote(incidentID) {
var body = {pageSize:1,categories:['notes']};
var res = executeCommand('demisto-api-post', {uri:'/investigation/' + incidentID, body: body});
if (isError(res[0])) {
throw 'demisto-api-post failed for incident #'+incidentID+'\nbody is ' + JSON.stringify(body) + '\n' + JSON.stringify(res);
}
if (!res[0].Contents.response.entries) {
return null;
}
var notes = res[0].Contents.response.entries;
var lastNote = notes[notes.length-1];
return lastNote;
}
lastNote = getLastNote(incidents[0].id);
md = 'No notes found.';
if (lastNote) {
md = `#### Update by ${lastNote.user} on ${lastNote.modified.split('T')[0]}\n`;
md += `\n---\n`;
md += lastNote.contents + '\n';
}
return md;
3. Add the script to the layout and then add the layout to the incident type.
You can see note information, containing the last user and date.
The classification and mapping feature enables you to take events and event information ingested from integrations, classify the event as an
incident type, and map event information to incident fields in Cortex XSOAR.
NOTE:
Classifiers and mappers can be created with or without association to a specific integration instance and can be assigned to multiple
instances. An integration instance can have only one classifier and one mapper.
When creating a classifier and mapper, you can contribute them to the Marketplace.
For more information about classification and mapping, see the following video:
Classification
Classification determines the type of incident that is created for events ingested from a specific integration. For example, Cortex XSOAR might
ingest alerts from Traps, which you would classify as a dedicated Traps, Authentication, or Malware incident type.
By classifying the events as different incident types, you can process them with different playbooks in the incident type, which is suited to their
respective requirements.
You can hard code every alert fetched from the integration to a specific incident type by selecting the incident type in the integration
instance settings. This is useful when you want all alerts classified to a single type, such as phishing, and have the same playbook execute
on each incident.
Most integrations produce a variety of alerts that you may want to send to separate incident types, which may use different
playbook/response processes. Create an incident classifier to route alerts from the integration to incident types in Cortex XSOAR. After you
create a classifier, add the classifier to an integration. For more information, see Create an incident classifier.
NOTE:
To hard code an incident type or select a classifier in an integration instance, you may need to select Fetches incidents in the integration
instance settings.
Some content packs include classifiers, which have incident types already classified. For example, the Cortex XDR Incident Handler - Classifier
classifies events such as FirstSSOAccess and RDPBruteForce as Cortex XDR Incident types in Cortex XSOAR.
Mapping enables you to map important information from incoming alerts into incident fields for use in playbooks and in layouts, so analysts can
view the information when investigating an incident.
Some content packs include mappers, which have fields already mapped. For example, in the XDR - Incoming Mapper, fields such as
Hostnames, LastMirroredInTime, and Occurred are already mapped. To create a mapper, see Create an incident mapper.
Mapped fields make incidents much easier to use when building playbooks and allow you to take different actions based on those fields within a playbook.
Most field types become searchable. For example, if you map the source username to a field, you can query that field to find other incidents
with the same source username. It is easier to correlate, deduplicate, query, and report.
Easily add fields to layouts for display and review by the analyst.
Perform indicator extraction based on the incident type and its fields. Extract specific indicators from specific fields.
Mirror content in Cortex XSOAR with third-party integrations. This enables you to make changes to an incident in Cortex XSOAR and have
that change be reflected in the case managed by the integration. For example, if you are using a case management system such as JIRA or
Salesforce, you can close an incident in Cortex XSOAR and have that reflected automatically.
NOTE:
The integration must support pulling the integration schema for mirroring to work.
Select a schema: When supported by the integration, this pulls all of the integration fields from the database.
Upload a JSON file: If you can't pull samples or the samples do not retrieve sufficient data, upload a formatted JSON file.
The JSON file needs to be in an array of dictionaries, with each alert in its own dictionary. For example:
[
{
"type": "url allowed",
"EventID": "5106",
"urlCategory": "PHISH",
"sourceIP": "10.8.8.181",
"occurred": "2024-05-22T08:16:26Z",
"sourceUser": "[email protected]",
"url": "https://notthedomainyouarelookingfor.com/login.php",
"userAgent": "Mozilla/5.0(WindowsNT6.1;WOW64;rv:27.0)Gecko/20100101Firefox/27.0"
}
]
Classify events using a classification key when ingesting events from an integration, and create an incident classifier in Cortex XSOAR.
When an integration fetches incidents, it populates the raw JSON object in the incident object. The raw JSON object contains all of the attributes
for the event, such as the source of the event and when the event was created. When classifying the event, select an attribute that can determine
the event type.
When creating a classifier, you can pull data from the following:
NOTE:
Ensure that your instance is configured and enabled. You don't need to fetch incidents.
Schema
When supported by the integration, this pulls all of the integration fields from the database. You select from these fields to classify the
events.
Upload JSON
Upload a formatted JSON file which includes the field you want to classify. If the instance has nothing to fetch or has insufficient data, you
can upload a JSON file containing raw data.
1. Go to Settings & Info → Settings → Object Setup → Incidents → Classification & Mapping.
If the classifier is installed from a content pack, you need to duplicate and then open it.
4. Under Get data, select from where you want to pull the event data. You will classify the incident types based on this information.
Select schema
Upload JSON
5. Under Select Instance, select the integration instance from where you want to pull data.
In the Data fetched from [name of integration instance] section, you will see the raw alert data pulled in from the integration instance. In this
example, after configuring the Sample Incident Generator instance, we have pulled in the following data:
6. To route the alert information to an incident type, select the classification key.
a. In the Data fetched section, click the key you want to map. For example, type.
TIP:
Select a key that is common across all the samples. If you select a key whose value changes across alerts, such as SourceIP,
there could be many unmapped values.
In the Unmapped Values section, the selected key returns any unmapped classifier values. For example, type returns Malware,
Unclassified, and Phishing.
b. Drag and drop the unmapped classifier values onto the Incident Types section.
For example, you can see the Malware and Phishing values have been mapped to the relevant incident type.
c. In the Direct Unclassified events to field, select the incident type for unclassified events.
d. (Optional) If there are events that haven't been pulled from the samples you can manually add them to the Incident Types section, by
clicking the edit button in the Incident Type field. For example, if you know that the source has a file blocked incident type, click the
edit button on the relevant field and type file blocked.
a. Select the integration from which you want to apply the classifier.
b. In the integration settings, under Classifier, select the classifier you created and click Done.
For some integration instances, you need to select Fetches incidents to add a classifier and mapper.
Incoming mapper: Maps all fields you pull from the integration to the incident fields.
Outgoing mapper: Maps incident fields with fields in the integration to which you are pushing the data. This is useful for mirroring.
You can map your fields to incident types irrespective of the integration or classifier, which means that you can create mapping before defining an
instance or ingesting incidents. By doing so, when you do define an instance and apply a mapper, the incidents that come in are already mapped.
When you create a mapper you can select the following incident types, which show the incident fields to map.
Common Mapping: Defines how fields associated to all incident types are mapped.
Specific mapping: Defines how fields associated with the specific incident type are mapped.
Specific mapping overrides any mapping done in Common Mapping. When an incident is ingested, common mapping and then specific
mapping are applied.
TIP:
It is recommended to map all of the fields that are common to all incident types by selecting Common Mapping and then map additional fields
that are specific to each incident type.
When mapping a list, we recommend you map to a multi-select field. Short text fields do not support lists. If you do need to map a list to a short
text field, add a transformer in the relevant playbook task, to split the data back into a list.
You can also use Auto Map to automatically map fields, based on the same or similar names from the integration instance. For example, Severity
can be mapped to Importance.
Some out-of-the-box fields are entirely controlled by Cortex XSOAR, and cannot be mapped, such as:
Dbot Status
Dbot Closed
Close Notes
Feed Based
NOTE:
Anything that you do not map will be discarded. If you want the data, or you are not sure at this stage whether you want to map the data in the
future, unmapped data can be placed into labels. Labels are unmapped and unsearchable data associated with the incident. Although
you can use labels in playbooks, it is recommended that you map the required data; otherwise, all of the raw data goes into labels, including
both mapped and unmapped data. To turn labels on, go to Settings → Advanced and deselect Do not map JSON fields into labels for the
selected incident type. This means that all raw JSON fields are then stored as labels on the incident.
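If labels are turned on, a script can read them from the incident object. For example, a minimal sketch (assuming the incident carries a standard Email/from label; adjust the label type to your data):
# Build a map of label type to value from the current incident.
incident = demisto.incidents()[0]
labels = {label['type']: label['value'] for label in incident.get('labels', [])}
demisto.results(labels.get('Email/from', 'label not found'))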
How to create a mapper
1. Go to Settings & Info → Settings → Objects Setup → Incidents → Classification & Mapping.
2. Click New and select the mapper that you want to create.
4. Under Get data, select from where you want to pull the information.
When classifying or mapping data and using the integration instance to retrieve data, the instance must be configured and enabled.
You don't need to fetch incidents.
Select the schema: When supported by the integration, this will pull all of the fields for the integration from the database. This enables
you to see all of the fields for each given event type that the integration supports. For example, the Palo Alto Networks Cortex XDR -
Investigation and Response integration supports a schema.
Upload JSON: Upload a formatted JSON file that includes the field you want to map.
If the instance has nothing to fetch or the integration instance has insufficient data, upload a JSON file containing raw data.
NOTE:
If creating an outgoing mapper you can only select a schema or upload a JSON file.
5. Under Select Instance, select the integration instance from where you want to pull data.
On the right-hand side, in the Data fetched from [name of integration instance] section, you will see the raw alert data pulled in from the
integration instance. In this example, after configuring the XSOAR Engineering Training Instance (from the XSOAR Engineering Training
content pack), we have pulled in the following data:
NOTE:
If creating an outgoing mapper, data from the integration schema appears on the left-hand side of the page.
By default, the Incident type is set to Common mapping, which includes fields that are common to all of the incident types. This saves you
from having to define these fields individually in each incident type.
NOTE:
Common mapping shows only fields that are relevant for all incident types. If you created incident fields that are specific to
certain incident types, those fields do not appear, and you need to select the relevant type.
7. (Outgoing mapper only) In the Incident samples section select the incident you want to map.
(Optional) Automatically map fields. Click Auto Map for Cortex XSOAR to map fields with common or similar names. For example,
Cortex XSOAR can map Importance to Severity or sourceIP to Source IP.
You can Auto Map at any time. These settings do not override any manual mapping.
1. Select the field you want to map and click Choose data path.
In this example, you can see that we have mapped Event ID and Event Type.
If creating an outgoing mapper, on the right-hand side, click the relevant incident field to map from.
NOTE:
Some fields are mapped automatically when you start defining the mapper, if Cortex XSOAR recognizes a similar field. Even though
such a field appears under Show: Unmapped, map the item to make sure.
a. Click the mapped field and then click the curly brackets.
b. Add the filters and transformers, as required. For more information, see Transformer considerations, categories, and built-in
transformers.
10. Repeat this process for the other incident types for which this mapping is relevant.
When selecting an incident type, you can copy the mapper that you created previously. This is useful if you are mapping to multiple incident
types through your classifier, as you will need to perform mapping on each incident type or through common mapping.
b. In the integration settings, under Mapper, select the mapper you created and click Done.
NOTE:
It is recommended that you turn off Fetches incidents as soon as the integration starts fetching incidents, until you have configured a
playbook to run on the incident type.
After the integration instance starts fetching incidents go to the Incidents page to see how the classifier and mapper performed. The
incident type should be populated with the correct type. You can also add relevant fields in the incident table to see if they are mapped
correctly. You can also view the information, including incident fields and labels, in Context Data in the side panels when investigating an
incident.
When mirroring incidents, you can make changes in a third-party application, such as ServiceNow or Jira, that will be reflected in Cortex XSOAR,
or vice versa. You can also attach files from either of the systems, which will then be available in the other system.
Configure mirroring for triggering incidents originating from your third-party application or from another fetching integration. For example, see
the ServiceNow v2 integration.
Configure incoming and outgoing mappers. For more information, see Classification and mapping.
Configure account roles for API calls. For example, see the ServiceNow v2 integration.
Run mirroring commands. For more information about mirroring commands, see the Mirroring Integration or your third-party integration, for
example ServiceNow v2.
Deduplicate incidents either manually or automatically in Cortex XSOAR. Mark as duplicate using pre-process rules or playbooks.
When ingesting incidents, you may ingest several incidents that are duplicated. Cortex XSOAR provides the following deduplication capabilities:
Manual deduplication
During an investigation, on the Incidents page, an analyst can manually deduplicate incidents. For more information, see Incident
management.
Automatic deduplication
Pre-process rules: Set up pre-process rules to deduplicate incidents as soon as they are ingested into Cortex XSOAR.
Playbooks: There are several out-of-the-box playbooks you can run to identify and close duplicate incidents. Alternatively, you can use these playbooks as the basis for customized deduplication playbooks. For example, instead of automatically closing the duplicate incidents, an analyst can review the duplicated incidents. The Dedup - Generic v4 playbook identifies duplicate incidents using the machine learning model (used mainly for phishing). For more information, see Dedup - Generic v4.
Scripts: Automate deduplication by creating a script or using one of the out-of-the-box scripts, such as:
FindDuplicateEmailIncidents: Used to find duplicate emails for phishing incidents, including malicious, spam, and legitimate emails, and whether to close them as duplicates. For more information, see FindDuplicateEmailIncidents.
DBotFindSimilarIncidents: Finds past similar incidents based on incident field similarity. Includes an option to display indicator similarity. For more information, see DBotFindSimilarIncidents.
Pre-process rules enable you to perform certain actions on incidents as soon as they are ingested (after classification and mapping) but before the
incident is created in Cortex XSOAR. These rules enable you to drop, deduplicate, link, or close incoming incidents based on specific criteria. For
example, link the incoming incident to an existing incident or, under preconfigured conditions, drop the incoming incident altogether.
When creating pre-process rules you can test them on existing incidents to see how they perform.
Creating a pre-process rule consists of a three-part process using the preprocess wizard.
1. Select the incident field and value you want the rule to apply to.
2. Select the action to perform on the incident, such as link and drop.
3. Add the criteria to compare existing incidents with the new incident, including the time range and oldest and newest incidents.
After you create a rule in the Pre-Process Rules tab, you can do the following:
NOTE:
Rules are executed in the order they appear (from top to bottom). You can drag and drop rules as required. Only one rule is applied per incident.
The following table describes the rule actions for pre-process rules.
Link and close: Creates an entry in the Linked Incidents table of the existing incident to which you link, and closes the incoming incident. If an existing incident matching the defined criteria is not found, an incident is created for the incoming event.
Close: Closes the incoming incident. The incident will be created, but the associated playbook doesn't run.
Drop: Drops the incoming incident, and no incident is created. Used for incidents that have low severity, no severity, or no value and don't need to be investigated.
Drop and update: Drops the incoming event, and updates the Dropped Duplicate Incidents table of the existing incident that you define. In addition, a War Room entry is created. If an existing incident matching the defined criteria is not found, an incident is created for the incoming event.
Link: Creates an entry in the Linked Incidents table of the existing incident to which you link.
Pre-process rules that use system-based scripts, such as GetIncidentsByQuery, run by default according to the script's defined
role (Limited User). For example, if the GetIncidentsByQuery script runs with the Limited User role, it also runs with the
Limited User role in the pre-process rule. You can change the default by either detaching the script and updating the Run as
field, such as to DBotRole, or creating a wrapper script with the required role set in the Run as field. The wrapper script calls the
system-based script, which then runs with the role assigned to the wrapper script.
Pre-processing scripts can access sensitive incident data. As a best practice, we recommend assigning a role for the pre-
processing script to allow only trusted users to edit it.
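For example, a minimal wrapper sketch (the query string is illustrative); set the wrapper's Run as to the required role in the script settings:
# Calls the system-based script; it runs with this wrapper's configured role.
res = demisto.executeCommand('GetIncidentsByQuery', {'query': 'type:Phishing -status:closed'})
if isError(res[0]):
    return_error('GetIncidentsByQuery failed: {}'.format(res[0]['Contents']))
demisto.results(res)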
Pre-processing rules enable you to perform certain actions on incidents as they are ingested into Cortex XSOAR. You can, for example, link an
incoming incident to an existing incident, or under certain conditions, drop the incoming incident altogether.
Before you begin, search for incidents that you want the pre-process rule to apply and click Investigate, so that those incidents are available for
testing.
1. Select Settings & Info → Settings → Object Setup → Incidents → Pre-Process Rules → New Rule.
Give the rule a meaningful name that helps you identify what it does. This will be useful when viewing the list of rules.
3. In step 1, Conditions for Incoming incident, define the conditions for incidents to which the rule applies:
For example, if you want to apply the rule to a phishing incident type:
NOTE:
For more information about filters, see Filter considerations, categories, and built-in filters.
For example, if you are running a phishing awareness campaign, add Email Subject and in the value field, type the relevant text.
For example, you may want the rule to apply to blocked or spam alerts.
4. In step 2 Action, select the action to take if the incoming incident matches the rule.
This section enables you to link or update an incoming event and drop or update the incident depending on the selected criteria.
Link: Select the incident field and value you want to link. For example, to link the Email Subject field of the existing incident to the new incident, set: Email Subject - Is identical (Incoming incident) - to incoming incident (this field is prepopulated).
Drop and update: Select the incident field and value you want to drop and update.
From the dropdown list, select the script to run on the incoming incident. Only scripts that were tagged preProcessing appear
in the drop-down list.
6. (Optional) In a remote repository environment, you can view the relevant dependencies to ensure that all necessary dependencies are
propagated or pushed to the remote repository.
Testing is useful to check that you are receiving the desired results before putting a rule into production. We recommend you fetch data from
an existing incident as a sample incident against which the rule can run. You can also manually enter JSON to use as a test sample or edit
the JSON from an existing incident using the Edit button.
Drop incidents
When you run a phishing awareness campaign and send training emails to your employees, you want your employees to report the emails, but you
don't want to investigate them. In this example, we create a condition for incoming incidents with the email subject You've Won the Best Employee
Award, and drop those incidents without linking them to another incident.
Apply to incidents that are ingested from the Sample Incident Generator.
Drop incoming events and update the incident, if the name of the existing incident is identical to the incoming incident.
You can add multiple conditionals to check for duplicates (not just the incident name) such as a Threat ID, incident ID, email, and host.
Watch the following video to see how to drop blocked or spam incidents and drop and update existing incidents.
You can set up a post-processing script to run after an incident has been remediated, but before the incident is closed in Cortex XSOAR.
Post-processing scripts perform actions on an incident after it is remediated but before it is closed by an analyst or automatically in a script or
playbook. For example, after remediating an incident, an analyst may want to perform additional actions on the incident, such as closing a ticket in
a ticketing system, sending an email, or preventing an incident from being closed without an assigned owner. You can create a post-processing
script to cover these scenarios.
Common Scripts: Includes the GenerateInvestigationSummaryReport script, which generates a report when an investigation is closed.
Case Management - Generic: Includes the CloseLinkedIncidentsPostProcessing script, which closes any linked incidents when the incident is
closed.
You need to create a post-process script and then add the script to the incident type.
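For example, a minimal post-processing sketch (assuming the incident owner field is empty when unassigned) that prevents an incident from being closed without an owner:
# Post-processing: block closure until an owner is assigned.
incident = demisto.incidents()[0]
if not incident.get('owner'):
    return_error('Assign an owner before closing the incident.')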
Example 6.
For an example of creating post-processing scripts that prevent an incident from being closed without an assigned user or when the close
notes are not filled out correctly, together with a ServiceNow example, see the following video:
openDuration: The open incident duration between the created and closed dates.
closingUserId: The username of the user who closed the incident, or DBot if the incident was closed by DBot (for example, through a playbook).
N/A: Any other field values passed in at closure, whether through the incident close form, the CLI, or a playbook task.
Example 7.
The following script example requires the user to verify all To Do tasks before closing an incident. Before you start, you need to configure
and enable a Cortex XSOAR REST API instance. For more information, see Core REST API.
inc_id = demisto.incidents()[0].get('id')
tasks = list(demisto.executeCommand("core-api-get", {"uri": "/todo/{}".format(inc_id)})[0]['Contents']['response'])
for task in tasks:
    if not task.get("completedBy"):
        return_error("Please complete all ToDo tasks before closing the incident")
        break
Example 8.
In this example, create a post-processing script for ServiceNow incidents using a SNOW instance, where there are required fields to
resolve and close a ticket (such as Resolution Code and Resolution Notes).
This script works with the ServiceNow defaults and resolves and closes the mirrored ticket in ServiceNow.
commonfields:
id: c8eeeb6c-3622-4bcb-897a-d183625609fd
version: 20
vcShouldKeepItemLegacyProdMachine: false
name: ServiceNowCloseIncidentTicket
script: |-
# return the args and incident details to the war room, useful for seeing what you have available to you
# args can be called with demisto.args().get('argname')
# debugging
# demisto.results(demisto.args())
# demisto.results(demisto.incident())
# get the close code and notes from the XSOAR Incident
close_code = demisto.args().get('closeReason')
close_notes = demisto.args().get('closeNotes','No close notes provided')
servicenow_sysid = demisto.incident().get("dbotMirrorId", False)
# if a ServiceNow sys_id exists, resolve and close the SNOW ticket; otherwise do nothing
if servicenow_sysid:
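# ServiceNow incident state values: 6 = Resolved, 7 = Closed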
demisto.results(demisto.executeCommand("servicenow-update-ticket",
{"id":servicenow_sysid,"close_code":close_code,"state":6,"close_notes":close_notes}))
demisto.results(demisto.executeCommand("servicenow-update-ticket", {"id":servicenow_sysid,"state":7}))
else:
demisto.results("No ServiceNow sys_id found, doing nothing...")
type: python
tags:
- post-processing
- training
comment: Post processing script to resolve and close Service Now tickets if the XSOAR
Incident is closed.
enabled: true
scripttarget: 0
subtype: python3
timeout: 80ns
pswd: ""
runonce: false
dockerimage: demisto/python:1.3-alpine
runas: Administrator
NOTE:
If an additional custom argument is defined for a post-processing script, default arguments such as closeNotes, closeReason, closed,
and openDuration are not available in the demisto.args() dictionary. In this case, there are two options:
1. Remove the additional custom argument from Script settings and instead add it as a field on the Close Form for the incident type.
This results in the additional argument being passed to the post-processing script.
2. Manually add the default system arguments, such as closeNotes, closeReason, closed, and openDuration, to the Script settings,
in addition to the custom argument. If they are not added, the code example above (close_notes =
demisto.args().get('closeNotes','No close notes provided')) always returns "No close notes provided".
b. Click the incident type to which you want to add the post-processing script.
After you add a post-processing script to the incident type, the incident type will use the post-processing script.
NOTE:
Customize close reasons for incidents by adding a server configuration in Cortex XSOAR. By default, the following close reasons are available:
False Positive
Resolved
Duplicate
Other
To customize the incident close reason, you need to add a new server configuration.
1. Select Settings & Info → Settings → System → Server Settings → Server Configuration → Add Server Configuration.
Key Value
By default, when editing the following inline values in an incident, the changes are not saved until you confirm your changes (clicking the check
mark icon in the value field):
Text values, such as Asset ID (you can only edit after you click the pencil icon in the value field).
These icons provide an additional level of security before you make changes to the fields in incidents, indicators, and threat intel reports. If you
want to allow users to change inline fields without clicking the check mark, you need to add a server configuration.
1. Select Settings & Info → Settings → System → Server Settings → Server Configuration → Add Server Configuration.
Key Value
inline.edit.on.blur Set the server configuration to true, which enables you to make changes to
inline fields without clicking the check mark. The changes are automatically
saved when clicking anywhere on the page or when navigating to another
page. For text values you can also click anywhere in the value field to edit.
By default, when exporting an incident to a CSV format, Cortex XSOAR generates the report in UTF8 format. If you want to export an incident that
contains Cyrillic characters, such as Russian, Greek, etc., you need to change the format to UTF8-BOM. This also changes exporting an indicator
to the UTF8-BOM format.
NOTE:
When changing the format to UTF8-BOM, you also change the format for indicators.
Key Value
Export.utf8bom true
11 | Playbooks
Playbooks are a series of tasks, conditions, automation, commands, and loops that run in a predefined flow, which are at the heart of the Cortex
XSOAR system.
Automate complex workflows by using playbooks to streamline repetitive tasks. Create or customize playbooks, define inputs and outputs,
integrate custom scripts, and rigorously test with the built-in debugger for flawless execution.
Cortex XSOAR playbooks enable you to structure and automate many of your security processes. Parse incident information, interact with users,
and remediate.
Playbooks are a series of tasks, automations, conditions, commands, and loops that run in a predefined flow to save time and improve the
efficiency and results of the investigation and response process. They are at the heart of the Cortex XSOAR system, because they enable you to
automate many security processes, including handling investigations and managing tickets. For example, a playbook task can parse the
information in an incident, whether it is an email or a PDF attachment.
Playbooks have different task types for each action you want to take. For example:
Use conditional tasks to validate conditions based on values or parameters and take appropriate direction in the playbook workflow.
Use automation tasks to automatically remediate an alert by interacting with a third-party integration, open tickets in a ticketing system such
as Jira, or detonate a file using a sandbox.
You can also structure and automate security responses that were previously handled manually.
You define the logical flow of your playbook when you design your use case. After developing and testing the playbook, it then runs during
investigation and response.
NOTE:
Cortex XSOAR currently does not support the IoT Security Third-party Integrations Add-on . For more information, see the IoT Security
documentation.
Follow the playbook development flow to create playbooks that structure and automate many of your security processes.
The playbook development checklist follows the logical flow for developing a playbook.
We recommend that you review the following steps to successfully implement your playbook.
Step 1. Plan your playbook: During the initial planning stage when designing your use case, start defining the playbook flow. Consider the process you want to automate and the steps and decisions during the process. These steps and decisions become the playbook tasks.
Step 2. Develop your playbook: Consider whether to customize an existing playbook or create a new playbook from scratch. Create playbook tasks, inputs, and outputs. Maintain playbook versioning to keep track of playbook development history.
Step 3. Customize your playbook: Fine-tune your playbook for your needs, including extracting indicators, extending context, and adding incident fields to the system.
Step 4. Debug your playbook: Debug errors in your playbook. Use playbook metadata to troubleshoot playbook performance.
When defining the workflow of your playbook, consider the following:
What conditions do you need along the way? Are these conditions manual or automatic?
Review the following workflow for a phishing use case. Also, review the playbooks in the Phishing content pack to see how they work.
Detection
Identification
Analysis
Remediation
Each of these high-level processes can contain a number of sub-processes that require step-by-step actions, all of which can be automated with
either customized or new playbooks.
The Default Playbook provides generic capabilities for automated incident enrichment and severity calculations that you can adjust for your needs.
Watch this video for more details.
Create a new playbook or customize an existing one based on your organization's needs.
When developing your playbook, you can either customize an existing out-of-the-box playbook from a content pack or create a new playbook from
scratch.
Developing a new playbook from scratch enables a tailored solution for your use case, whereas customizing an out-of-the-box playbook can save
time, reduce complexity, and be a more efficient way to meet your organization's specific security and incident response needs.
Task 1. Choose from out-of-the-box playbooks or customize your own: Search for an out-of-the-box playbook to use, customize it, or create one based on your needs.
Task 2. Configure playbook settings: Define playbook metadata, such as the name of the playbook, who can edit and run the playbook, and whether Quiet Mode is turned on.
Task 3. Add tasks: Build your playbook by adding tasks that enable you to run scripts and sub-playbooks, communicate with end users, set conditions, and store relevant data. Define inputs and outputs for your tasks.
Task 4. Add custom playbook features: Customize your playbook, including adding scripts, sub-playbooks, filtering and transforming data, extracting indicators, extending context, setting and updating incident fields, and polling.
Task 5. Test and debug the playbook: Set breakpoints, conditional breakpoints, skip tasks, and input and output overrides in the playbook debugger.
Task 6. Manage playbook content: Save versions of your playbook in Cortex XSOAR, or manage your playbook content development and testing using a remote repository.
Use an out-of-the-box playbook, create a new playbook, or customize an existing one based on your organization's needs.
Search for a playbook that is included out-of-the-box with Cortex XSOAR or after downloading from Marketplace.
In the Cortex XSOAR Playbooks page, use free text in the search box to search for playbooks. You can search using part or all of the playbooks'
names or description. You can also search for an exact match of the playbook name by putting quotation marks around the search text. For
example, searching for "Block Account - Generic" returns the playbook with that name.
Search for more than one exact match by including the logical operator "or" in-between your search texts in quotation marks. For example,
searching for "Block Account - Generic" or "NGFW Scan" returns the two playbooks with those names. Wildcards are not supported in free
text search.
TIP:
You can also browse Marketplace to check for out-of-the-box playbooks that you can customize for your process. For an extensive list of
available out-of-the-box playbooks, see Generic Playbooks.
When installing a playbook from a content pack, by default, the playbook is attached, which means that it is not editable (apart from some input
values).
To edit the playbook, you need to detach it or make a duplicate. While a playbook is detached, it is not updated by the content pack. Duplicating
may be useful when you want to update the playbook without breaking your customization. If you want the playbook to receive content pack
updates, you need to reattach it, but any changes you made are then overridden by the content pack on upgrade. If you open an attached playbook in
a tab, it can be detached from within the editor page.
A blank playbook opens with the Playbook Triggered task that holds the playbook inputs and outputs.
NOTE:
To open multiple playbooks at the same time, edit the first playbook and then click New next to the playbook name to create a new tab.
You can either create a new playbook, or add an existing one.
You can view recently modified or deleted playbooks by clicking the version history for all playbooks icon.
Abstract
Use an out-of-the-box playbook, create a new playbook, or customize an existing one based on your organization's needs.
Tagging
Access
Whether to associate the playbook with an incident type. This needs to be set under the Settings & Info → Settings → Object Setup →
Incidents → Types tab.
2. If it is a content pack playbook, detach or duplicate the playbook by clicking the ellipsis icon.
If you detach the playbook and want to keep any changes, ensure that you duplicate the playbook before reattaching.
NOTE:
b. Add any tags as required by either typing a new tag or selecting from the list.
e. In the ADVANCED section, determine whether the playbook runs in quiet mode.
When Quiet Mode is selected, playbook tasks do not display inputs and outputs and do not extract indicators.
Playbook tasks are not indexed so you cannot search on the results of specific tasks. All of the information is still available in the
context data, and errors and warnings are written to the War Room.
TIP:
In the War Room, you can run the !getInvPlaybookMetadata command to analyze the size of playbook tasks in a specific incident
Work Plan to determine whether to implement quiet mode for playbooks or tasks.
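For example, you can run the command with no arguments directly in the War Room CLI (a minimal usage sketch; check the script's help in the Task Library for optional arguments):

    !getInvPlaybookMetadata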
Abstract
Use an out-of-the-box playbook, create a new playbook, or customize an existing one based on your organization's needs.
Playbook tasks are the building blocks of playbooks. Tasks enable you to run scripts and sub-playbooks, communicate with end users, set
conditions, and store relevant data.
Cortex XSOAR supports different task types for different actions to be taken in a playbook, and each task can receive and generate data in the
form of inputs and outputs. For example, for enrichment, you might want to run an enrichment sub-playbook or a command that returns additional
information for an indicator.
Tasks can be reused across playbooks and you can copy, cut, paste, and delete tasks within or between playbooks using keyboard shortcuts. To
see a list of keyboard shortcuts, see Keyboard shortcuts.
The Task Library contains scripts, tasks, and playbooks. You can create new tasks from scripts, repurpose existing tasks, and use existing
playbooks as sub-playbooks.
You can add a brief description for each task, explaining what the task does. Descriptions are added in the Task Description task field.
NOTE:
To open multiple playbooks at the same time, edit the first playbook and then click the New icon next to the playbook name to create a new tab.
You can either create a new playbook, or add an existing one.
Once you add tasks to your playbook, connect the tasks in their logical order by dragging and dropping a wire from one task to another.
Section Use a section header task to group related tasks to organize and manage the flow of your playbook.
Section headers can also be used for time tracking between phases in a playbook. This data can be used to display in
dashboards and report time trends.
For example, in a phishing playbook you would have a section for the investigative phase of the playbook such as indicator
enrichment, and a section for communication tasks with the user who reported the phishing.
Standard Standard tasks can be manual tasks such as manual verification to prompt an analyst to verify the severity or classification of
an incident before proceeding with automated actions. They can also be automated tasks such as parsing a file or enriching
indicators.
Automated tasks are based on scripts that exist in the system. These scripts can be created by you or come out-of-the-box as
part of a content pack. For example, the !ad-get-user command retrieves detailed information about a user account using
the Active Directory Query V2 integration.
You can also automatically remediate an incident by interacting with a third-party integration, open tickets in a ticketing system
such as Jira, or detonate a file using a sandbox.
Conditional Use conditional tasks to validate conditions based on values or parameters and take appropriate direction in the playbook
workflow, like a decision tree in a flow chart.
For example, a conditional task may ask whether indicators are found. If yes, you can have a task to enrich them, and if not
you can proceed to determine that the incident is not malicious. Alternatively, you can use conditional tasks to check if a
certain integration is available and enabled in your system. If yes, you can use that integration to perform an action, and if not,
you can continue on a different branch in the decision tree.
Conditional tasks can also be used to communicate with users through a single question survey, the answer to which
determines how a playbook will proceed.
Data Use a data collection task to interact with users through a survey, for example to collect responses or escalate an incident.
Collection
All responses are collected and recorded in the incident context data, from a single user or multiple users. You can use the
survey questions and answers as input for subsequent playbook tasks.
You can collect responses in custom fields, for example, a grid field.
Abstract
Cortex XSOAR playbooks and tasks have inputs (data from incident or integration) and outputs that can then be used as input in other tasks.
Depending on the task type that you select and the script that you are running, your playbook task may have inputs and outputs.
Inputs are data pieces used in a task. The inputs are often manipulated or enriched and they produce outputs. Outputs generated from the result
of a task or command can be used as inputs to subsequent tasks or as information generated to help resolve or escalate an investigation.
An input may come from an incident, such as the role to assign an incident to, or an input can be provided by an integration, for example the
Active Directory integration can be used in a task to extract a user's credentials.
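To make the flow of inputs and outputs concrete, here is a minimal sketch of an automation script in Cortex XSOAR's Python scripting environment. The script logic, argument name, and output fields are hypothetical; demisto.args() and CommandResults/return_results are the standard CommonServerPython mechanisms for reading task inputs and writing outputs to context:

    # Hypothetical enrichment script. Inside XSOAR, demisto and
    # CommonServerPython are available automatically; the imports below
    # match the convention used for local development.
    import demistomock as demisto
    from CommonServerPython import *

    def main():
        # Read a task input argument defined in the task's Inputs tab.
        username = demisto.args().get('username', '')
        # ... real enrichment logic would run here ...
        account = {'Username': username, 'DisplayName': username.title()}
        # Publish outputs to context (Account.Username, Account.DisplayName)
        # so subsequent tasks can use them as inputs.
        return_results(CommandResults(
            outputs_prefix='Account',
            outputs_key_field='Username',
            outputs=account,
        ))

    if __name__ in ('__main__', '__builtin__', 'builtins'):
        main()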
At the beginning of any playbook, click the Playbook Triggered task and enter the playbook inputs and outputs, grouping them as relevant.
Use the task cheat sheet to use context keys in playbook inputs and outputs
When you create your playbook task inputs, the task cheat sheet enables quick access to system and custom fields to populate playbook task
inputs and outputs.
1. Click the task cheat sheet icon.
2. Select an incident field to populate the task input with the corresponding context key.
Example 11.
The following example uses incident context data as the playbook input from the Access Investigation - Generic playbook.
Click the top task Playbook Triggered. The playbook is triggered based on incident context data.
Inputs
The first two inputs are SrcIP, retrieved from the incident.src key, and DstIP, retrieved from the incident.dest key.
Outputs
The Access Investigation - Generic playbook creates an output object that can be used in subsequent playbook tasks.
For example, the Access Investigation - Generic playbook Endpoint.IP output creates a list of endpoint IP addresses which can later be enriched
by an IP enrichment task, and the Endpoint.MAC output creates a list of endpoint MAC addresses which can be used to get information about the
hosts that were affected by the incidents.
Outputs can also be data that was extracted or derived from the inputs. For example, the Access Investigation - Generic playbook contains the
Account Enrichment - Generic v2.1 sub-task, which uses the account username (and optionally domain) as input to Active Directory to retrieve
user information as output, such as the user's email address, manager, and any groups to which they belong.
An output can then serve as input for a subsequent task. For example, in the Account Enrichment - Generic v2.1 sub-task, the Get account info
from Active Directory task output Account.Username is used as an input for the Active Directory - Get User Manager Details task to retrieve
manager details for that user.
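In the task editor, this kind of chaining is written as a context expression. For instance, the manager-details task's username argument could reference the earlier output like this (illustrative):

    username = ${Account.Username}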
Playbook input and output fields are collected into groups. This organizes the inputs and outputs, providing clarity and context to understand which
inputs are relevant to which playbook flow.
For example, the following playbook inputs are grouped under Mailbox selection.
Users with permission to edit playbooks can add, edit, and delete groups and input and output fields. Users without this permission can only view
groups, inputs, and outputs.
Add or delete a group. Deleting a group deletes all the fields defined in the group.
2. Enter a group name and description and click the check mark.
NOTE:
If you do not add any fields, the group will be deleted when you click Save.
You can do the following with input or output fields within a group:
Inputs
1. Within a group, click + Add Input at the bottom of the list of input fields. You may need to scroll down to see it.
Outputs
1. Within a group, click + Add Output or + Add Manually at the bottom of the list of output fields. You may need to scroll down to see these
options.
If you click + Add Output, select from the outputs from previous tasks.
If you click + Add Manually, enter the context path and description for the output.
Abstract
Section header tasks are used to manage the flow of your playbook and help you organize your tasks efficiently.
Section headers are used to manage the flow of your playbook and help you organize your tasks efficiently. You create a section header to group a
number of related tasks.
Section headers can also be used for time tracking between phases in a playbook. When you start time tracking, apply the Start action for the
section header. Because you are using this to time track a particular phase of an investigation, add a stop timer section header when the phase
completes. The time tracking data can be used to display in dashboards and report time trends.
3. Enter a meaningful name in the Task Name field for the section header.
Details tab: Tag the result with: Add a tag to the task result. You can use the tag to filter entries in the War Room.
Timers tab: For a time tracking header, select the action to take when the timer is triggered (start, stop, or pause).
Timer.start: The trigger for starting to send a message or survey to recipients. You can change this trigger or add a
trigger for Timer.stop or Timer.pause. Select the trigger timer field from the drop down.
Add Trigger: You can add other trigger timer fields from the drop down.
5. Click Save.
Abstract
Standard tasks can be manual tasks such as manual verification to prompt an analyst to verify the severity or classification of an incident before
proceeding with automated actions. They can also be automated tasks such as parsing a file or enriching indicators.
3. Enter a meaningful name in the Task Name field for the task that corresponds to the data you are collecting.
4. Select the options you want to configure for the Standard task.
Choose script field: From a drop-down list, select a script for the playbook to run. In the following tabs you can set:
Inputs: Each script has its own set of input arguments (or none). You can set each argument to a specific value (by typing directly on the line under the argument name) or you can click the curly brackets to define a source field to populate the argument.
Outputs: Each script has its own set of output arguments (or none).
Mapping: The value for an output key populates the specified field per incident. This is a good alternative to using a task with the setIncident command.
NOTE:
The output value is dynamic and is derived from the context at the time that the task is processed. As a result,
parallel tasks that are based on the same output may return inconsistent results.
2. Under Outputs, select the output parameter whose output you want to map. Click the curly brackets to see
a list of the output parameters available from the script.
3. Under Field to fill, select the field that you want to populate with the output.
4. Click Save.
Using: Choose which integration instance will execute the command, or leave empty to use all integration
instances.
Extend context: Append the extracted results of the action to the context. For example, newContextKey1=path1::newContextKey2=path2 returns [path1:'aaa', path2:'bbb', newContextKey1:'aaa', newContextKey2:'bbb']. (A worked example appears after this section.)
Ignore outputs: If set to true, will not store outputs into the context (besides the extended outputs).
Quiet Mode: When in quiet mode, tasks do not display inputs and outputs or extract indicators. Errors and
warnings are still documented. You can turn quiet mode on or off at the task or playbook level.
Tag the result with: Add a tag to the task result. You can use the tag to filter entries in the War Room.
Task description (Markdown supported): Provide a description of what this task does. You can enter
objects from the context data in the description. For example, in a communication task, you can use the
recipient’s email address. The value for the object is based on what appears in the context every time the
task runs.
Timer.start: The trigger for starting to send a message or survey to recipients. You can change this trigger
or add a trigger for Timer.stop or Timer.pause. Select the trigger timer field from the drop down.
Add Trigger: You can add other trigger timer fields from the drop down.
Number of retries: How many times the task should retry running if there is an error. Default is 0.
Retry interval (seconds): How long to wait between retries. Default is 30 seconds.
Error handling: How the task should behave if there is an error when executing the current task's script. Options are:
Stop
Continue
Task SLA: Set the SLA in granularity of weeks, days, hours, and minutes.
Set task Reminder at: Set a reminder for the task in granularity of weeks, days, hours, and minutes.
Advanced tab: Quiet Mode: Determines whether this task uses the playbook default setting for quiet mode. When in quiet mode, tasks do not display inputs and outputs or extract indicators. Errors and warnings are still documented. You can turn quiet mode on or off at the task or playbook level.
Details tab: Tag the result with: Add a tag to the task result. You can use the tag to filter entries in the War Room.
Task description (Markdown supported): Provide a description of what this task does. You can enter objects from the context data in the description. For example, in a communication task, you can use the recipient’s email address. The value for the object is based on what appears in the context every time the task runs.
Timers tab: Timer.start: The trigger for starting to send a message or survey to recipients. You can change this trigger or add a trigger for Timer.stop or Timer.pause. Select the trigger timer field from the drop-down.
Add Trigger: You can add other trigger timer fields from the drop-down.
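As a worked example of the Extend context option described above, suppose a command's raw response contains fields that its standard outputs drop. The key=path pairs below are hypothetical; the key=path::key=path form matches the syntax shown above, and the extend-context argument is the equivalent when running a command ad hoc in the War Room:

    # In the task's Extend context field (pairs separated by ::):
    UserMail=attributes.mail::UserTitle=attributes.title

    # Equivalent ad-hoc form in the War Room:
    !ad-get-user username=jdoe extend-context=UserMail=attributes.mail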
Abstract
Conditional tasks are used for determining different paths for your playbook. For example, in a playbook for handling phishing emails, a conditional
task can be used to check if an email contains suspicious attachments. If the attachment is identified as malicious, the playbook can automatically
quarantine the email; otherwise, it can proceed to manual review by a security analyst.
Built-in: Creates a logical statement using an entity from within the playbook. For example, in an access investigation playbook, you can
determine that if the Asset ID of the person whose account was being accessed exists in a VIP list, set the incident severity to High.
Otherwise, proceed as normal.
Manual: Creates a conditional task that must be manually resolved. For example, a security analyst is prompted to review and validate a
suspicious file. The playbook task might involve instructions for the analyst to analyze the file, determine if it is malicious, and provide
feedback or take specific actions based on their assessment.
Ask: Creates a single-question survey communication task, the answer to which determines how a playbook proceeds. For more details
about ask tasks, see Create a communication task.
Choose script: Creates a conditional task based on the result of a script. For example, check if an IP address is internal or external using the
IsIPInRanges script. When using a script, the inputs and outputs are generated by the automation script.
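For instance, a Choose script condition built on IsIPInRanges could be exercised from the War Room as follows (the IP and ranges are illustrative; check the script's arguments in the Task Library):

    !IsIPInRanges ip=10.1.1.7 ipRanges=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16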
3. In the Task Name field, type a meaningful name for the task that corresponds to the data you are collecting.
4. Select the relevant conditional task option. Some field configurations are required, and some are optional.
Built-in
Tag the result with: Add a tag to the task result. You can use the tag to filter entries in the War Room.
Task description (Markdown supported): Provide a description of what this task does. You can enter objects from the context
data in the description. For example, in a communication task, you can use the recipient’s email address. The value for the
object is based on what appears in the context every time the task runs.
Timer.start: The trigger for starting to send a message or survey to recipients. You can change this trigger or add a trigger for
Timer.stop or Timer.pause. Select the trigger timer field from the drop down.
Add Trigger: You can add other trigger timer fields from the drop down.
Advanced: Determines whether this task uses the playbook default setting for Quiet Mode. When in Quiet Mode, tasks do not display
inputs and outputs or extract indicators. Errors and warnings are still documented. You can turn Quiet Mode on or off at the task or
playbook level.
Number of retries: How many times the task should retry running if there is an error. Default is 0.
Retry interval (seconds): How long to wait between retries. Default is 30 seconds.
Manual
Only the assignee can complete the task: Stop the playbook from proceeding until the task assignee completes the task. By
default, in addition to the task assignee, the default administrator can also complete the blocked task. You can also block tasks
until a user with an external email address completes the task.
Task SLA: Set the SLA in granularity of weeks, days, hours, and minutes.
Set task Reminder at: Set a reminder for the task in granularity of weeks, days, hours, and minutes.
Advanced: Determines whether this task uses the playbook default setting for Quiet Mode. When in Quiet Mode, tasks do not display
inputs and outputs or extract indicators. Errors and warnings are still documented. You can turn Quiet Mode on or off at the task or
playbook level.
Tag the result with: Add a tag to the task result. You can use the tag to filter entries in the War Room.
Task description (Markdown supported): Provide a description of what this task does. You can enter objects from the context
data in the description. For example, in a communication task, you can use the recipient’s email address. The value for the
object is based on what appears in the context every time the task runs.
Timer.start: The trigger for starting to send a message or survey to recipients. You can change this trigger or add a trigger for
Timer.stop or Timer.pause. Select the trigger timer field from the drop down.
Add Trigger: You can add other trigger timer fields from the drop down.
Ask
Ask by: The method for sending the message and survey. Options are:
To: The message and survey recipients. You can define by:
Clicking the context icon to define recipients from a context data source.
Subject of the email: The message subject that displays to message recipients. You can write the survey question in the subject
field or in the message body field.
Message body: The text that displays in the body of the message. This field is optional, but if you don't write the survey question
in the subject field, include it in the message body. This is a long-text field.
Reply options: Reply options are sent via the selected channels as options for an answer.
Require users to authenticate: Enable this option to have your SAML or AD authenticate the recipient before allowing them to
answer. You must first set up an authentication integration instance and check Use this instance for external users
authentication only in the integration instance settings.
Retry interval (minutes): Determines the wait time between each execution of a command. For example, the frequency (in
minutes) that a message and survey are resent to recipients before the response is received.
Number of retries: Determines how many times a command attempts to run before generating an error. For example, the
maximum number of times a message is sent. If a reply is received, no additional retry messages will be sent.
Task SLA: Set the SLA in granularity of weeks, days, and hours.
Set task Reminder at: Set a task reminder in granularity of weeks, days, and hours.
Complete automatically if SLA passed without a reply: Complete the task automatically if the SLA is breached before a reply is received. Select the reply option (Yes or No) with which the playbook proceeds.
Extend context: Append the extracted results of the action to the context. For example, newContextKey1=path1::newContextKey2=path2 returns [path1:'aaa', path2:'bbb', newContextKey1:'aaa', newContextKey2:'bbb'].
Ignore outputs: If set to true, will not store outputs into the context (besides the extended outputs).
Quiet Mode: When in quiet mode, tasks do not display inputs and outputs or extract indicators. Errors and warnings are still
documented. You can turn quiet mode on or off at the task or playbook level.
Tag the result with: Add a tag to the task result. You can use the tag to filter entries in the War Room.
Task description (Markdown supported): Provide a description of what this task does. You can enter objects from the context
data in the description. For example, in a communication task, you can use the recipient’s email address. The value for the
object is based on what appears in the context every time the task runs.
Choose script
From a drop down list, select a script for the playbook to run. In the following tabs you can set:
Outputs: Each script has its own set of output arguments (or none).
Mapping:
The value for an output key populates the specified field per incident. This is a good alternative to using a task with the setIncident command.
NOTE:
The output value is dynamic and is derived from the context at the time that the task is processed. As a result, parallel tasks that
are based on the same output may return inconsistent results.
2. Under Outputs, select the output parameter whose output you want to map. Click the curly brackets to see a list of the output
parameters available from the automation.
3. Under Field to fill, select the field that you want to populate with the output.
4. Click Save.
Using: Choose which integration instance will execute the command, or leave empty to use all integration instances.
Extend context: Append the extracted results of the action to the context. For example, newContextKey1=path1::newContextKey2=path2 returns [path1:'aaa', path2:'bbb', newContextKey1:'aaa', newContextKey2:'bbb'].
Ignore outputs: If set to true, will not store outputs into the context (besides the extended outputs).
Quiet Mode: When in quiet mode, tasks do not display inputs and outputs or extract indicators. Errors and warnings are still
documented. You can turn quiet mode on or off at the task or playbook level.
Tag the result with: Add a tag to the task result. You can use the tag to filter entries in the War Room.
Task description (Markdown supported): Provide a description of what this task does. You can enter objects from the context
data in the description. For example, in a communication task, you can use the recipient’s email address. The value for the
object is based on what appears in the context every time the task runs.
Timer.start: The trigger for starting to send a message or survey to recipients. You can change this trigger or add a trigger for
Timer.stop or Timer.pause. Select the trigger timer field from the drop down.
Add Trigger: You can add other trigger timer fields from the drop down.
Retry interval (seconds): How long to wait between retries. Default is 30 seconds.
Error handling: How the task should behave if there is an error when executing the current task's script. Options are:
Stop
Continue
5. Click Save.
Abstract
Communication tasks in playbooks enable you to send surveys and collect data using ask tasks and data collection tasks.
Communication tasks enable you to send surveys to users, both internal and external, to collect data for an incident. The collected data can be
used for incident analysis, and also as input for subsequent playbook tasks. For example, you can send a scheduled survey requesting analysts to
send specific incident updates or send a single (stand-alone) question survey to determine how an issue was handled.
Ask tasks: A conditional task that sends a single question survey. The answer is used to determine how the playbook proceeds.
Data collection tasks: A data collection task sends a survey of one or more questions. The answers are recorded in context data and can
be used as input for subsequent tasks.
An ask task is a type of conditional task that sends a single question survey, the answer to which determines how a playbook proceeds. If you
send the survey to multiple users, the first answer received is used, and subsequent responses are disregarded. For more information about ask
task settings, see Create a conditional task.
Because this is a conditional task, you need to create a condition for each of the answers. For example, if the survey answers include, Yes, No,
and Maybe, there should be a corresponding condition (path) in the playbook for each of these answers.
Users interact with the survey directly from the message, meaning the question appears in the message and they click an answer from the
message.
The survey question and the first response is recorded in the incident context data. This enables you to use this response as the input for
subsequent playbook tasks.
For all ask conditional tasks, a link is generated for each possible answer the recipient can select. If the survey is sent to more than one user, a
unique link is created for each possible answer for each individual recipient. These links are visible in the context data of the incident's Work Plan.
The links appear under Ask.Links in the context data.
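The context entry has roughly the following shape (an illustrative sketch only; the exact URL format depends on your deployment):

    "Ask": {
        "Links": [
            "https://<server>/...<unique-link-for-Yes>",
            "https://<server>/...<unique-link-for-No>"
        ]
    }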
In this example, the message and survey will be sent to recipients every hour for six hours, until a reply is received (it is repeated every 60
minutes, 6 times). The SLA is six hours. If the SLA is breached, the playbook will proceed according to the Yes condition.
In this example, a message and survey are sent by email to all users with the Analyst role. We are not including a message body because the
message subject is the survey question we want recipients to answer. There are three reply options, Yes, No, and Not sure. In the playbook, we
will only add conditions for the Yes and No replies. We require recipient authentication, which first involves setting up authentication.
The data collection task is a multi-question survey (form) that survey recipients access from a link in the message. Users do not need to log in to
access the survey, which is located on a separate site.
All responses are collected and recorded in the incident context data, whether you receive responses from a single user or multiple users. This
enables you to use the survey questions and answers as input for subsequent playbook tasks. If responses are received from multiple users, data
for multi-select fields and grid fields are aggregated. For all other field types, the response received most recently will override previous responses
as it displays in the field. All responses are always available in the context data.
For all data collection tasks, a single link is generated for each recipient of the survey. These links are visible in the context data of the incident's
Work Plan. The links appear in the context data under the Links section of that survey.
Stand-alone questions. These are presented to users directly in the message, and users answer directly from the message (not an external survey).
Field-based questions. These are based on a specific incident field (either system or custom), for example, an Asset ID field. The response
(data) received for these fields automatically populates the field for this Incident. For single-select field based questions, the default option is
taken from the field’s defined default.
3. Enter a meaningful name in the Task Name field for the task that corresponds to the data you are collecting.
4. Select the communication options you want to use to collect the data.
Message
Ask by: The method for sending the message and survey. Options are:
Generated link (appears in the context data): A link to the data collection survey is available in the context data of the task.
Email: If you select this option, enter below the subject and message of the email and the email addresses of the users who should receive this message or survey.
To: The message and survey recipients. You can define by:
Clicking the context icon to define recipients from a context data source.
Subject of the email: The message subject that displays to message recipients. You can write the survey question in
the subject field or in the message body field.
Message body: The message question body to be used in the notification sent to the given users along with the
reply options.
Require users to authenticate: Enable this option to have your SAML or AD authenticate the recipient before
allowing them to answer. You must first set up an authentication integration instance and check Use this instance for
external users authentication only in the integration instance settings.
Questions
Web Survey Title: The title displayed for the web survey.
Short Description: A description displayed above the questions on the web survey. Click Preview to see how it displays.
Answer Type: The field type for the answer field. Options are:
Short text
Long text
Number
Date picker
Attachments
Mandatory: If this checkbox is selected for a question, survey recipients will not be able to submit the survey until
they answer this question.
Help Message: The message that displays when users hover over the question mark help button for the survey
question.
NOTE:
You can drag questions to rearrange the order in which they display in the survey.
Timing
Retry interval (minutes): Determines the wait time between each execution of a command. For example, the frequency (in minutes) that a message and survey are resent to recipients before the response is received.
Number of retries: Determines how many times a command attempts to run before generating an error. For
example, the maximum number of times a message is sent. If a reply is received, no additional retry messages will
be sent.
Task SLA: Set the SLA in granularity of weeks, days, and hours.
Set task Reminder at: Set a task reminder in granularity of weeks, days, and hours.
Reached task SLA (with or without a reply): This option is grayed out.
Details
Tag the result with: Add a tag to the task result. You can use the tag to filter entries in the War Room.
Task description (Markdown supported): Provide a description of what this task does. You can enter objects from the
context data in the description. For example, in a communication task, you can use the recipient’s email address.
The value for the object is based on what appears in the context every time the task runs.
Advanced
Using: Choose which integration instance will execute the command, or leave empty to use all integration instances.
Extend context: Append the extracted results of the action to the context. For example, newContextKey1=path1::newContextKey2=path2 returns [path1:'aaa', path2:'bbb', newContextKey1:'aaa', newContextKey2:'bbb'].
Ignore outputs: If set to true, will not store outputs into the context (besides the extended outputs).
Quiet Mode: When in quiet mode, tasks do not display inputs and outputs or extract indicators. Errors and warnings
are still documented. You can turn quiet mode on or off at the task or playbook level.
5. (Optional) To customize the look and feel of your email message, click Preview.
You can determine the color scheme and how the text in the message header and body appear, as well as the appearance and text of the
button the user clicks to submit the survey.
Data collection task examples
Stand-alone question with a single-select answer
In this example, we create a stand-alone question, with a single-select answer. This question is not mandatory. If we selected the First option is
default checkbox, the reply option "0" is the default value in the answer field.
In this example, we create a question based on a custom grid field that we mark as mandatory. For the question field, we included a descriptive
sentence explaining how to fill in the grid.
When sending a form in a communication task, you can configure user authentication to ensure only authorized users gain access to the form.
The authorized users are usually external users not in Cortex XSOAR, and they will not be able to access anything else in Cortex XSOAR.
1. Set up your SSO if it is not already configured. See Authenticate users using SSO for more details.
2. In the Task details of your playbook communication task, check Require users to authenticate to have your SAML or AD authenticate the
recipient before allowing them access to the form.
Abstract
When defining a task, you can decide if the playbook continues, stops, or continues on an error path.
You can determine how the playbook behaves if there are script errors during execution.
When defining a standard task that uses a script or a conditional task that uses a script, you can define how a playbook task continues by selecting one of the following options:
Stop: The playbook stops, if the task errors during execution. For example, if the task requires a manual review, you may want the playbook
to stop until completion.
Continue: The playbook continues to execute if the task errors. For example, the playbook task requires EWS, but EWS is not required for
the playbook to proceed.
Continue on error path: If a task errors, the playbook continues on an error path.
The error path may be useful if you want to take action on an error, like clean-up, retry, etc. You may also want to handle errors in different
ways. For example, in case of a quota expired error you may want to retry in 1 minute, but if you receive an internal error 500, you may want
to stop the playbook.
You may want to create a separate path when an analyst manually reviews the incident and research is needed outside Cortex XSOAR.
Once an analysis is complete, you can add a task to consider escalating to a customer and, if so, generate a report which can be attached
to a ticket system such as Jira or ServiceNow.
Instead of the playbook waiting on manual input when a task displays an error state, such as a missing argument in a script, you can add a separate path for these kinds of issues.
Use the GetErrorsFromEntry script (part of the Common Scripts Pack) to check whether a given entry contains an error and to return the error message. For example, when using the script in a playbook, you can fetch the error message from a given task, such as a runtime error. You can then add a step in the playbook flow to send those error messages to the relevant stakeholder through Slack or email, or by opening a Jira ticket.
When errors are created, they are added to context under task.id.error.
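For example, on an error path you might fetch the failing task's error text before notifying a stakeholder. A common pattern (a sketch; verify the argument name in the Common Scripts Pack) is to pass the last completed task's entries to the script:

    !GetErrorsFromEntry entry_id=${lastCompletedTaskEntries}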
1. When adding the connector from this task to the following task, a dialog box appears which enables you to select one of the following paths:
NOTE:
You can set up script error handling when running a script in a Standard task or a Conditional task. For more information about error
handling settings for these tasks, see Playbook tasks.
Built-in Conditional tasks have On Error settings for number of retries and retry interval, but not Error Handling.
3. For new tasks, in the Task Name field, type a meaningful name for the task that corresponds to the data you are collecting.
5. In the Number of retries field, type the number of times the tasks attempts to run before generating an error.
6. In the Retry Interval (seconds) field, type the wait time between retrying the task.
Stop
Continue
8. Click Save.
9. When adding the connector from this task to the following task, a dialog box appears which enables you to select one of the following paths:
Standard Path: Tasks added to this path execute when the source task completes without errors.
Error Path: Tasks added to this path execute when the source task errors during execution.
Abstract
Use an out-of-the-box playbook, create a new playbook, or customize an existing one based on your organization's needs.
Customizing a playbook helps you automate tasks to match your needs, making workflows more efficient, accurate, and easier to integrate with
your existing processes.
Customize the SOC name: Customize the name of the SOC that appears in the survey header.
Add a sub-playbook: Playbooks can be divided into two categories, depending on their use.
Parent playbooks are playbooks that run as the main playbook of an incident. For example, Phishing - Generic v3.
Sub-playbooks are playbooks that are nested under other playbooks. They appear as tasks in the parent playbook flow and are indicated by the sub-playbook icon. A sub-playbook can also be a parent playbook in a different use case. For example, IP Enrichment - Generic v2.
Field mapping: You can map output from a playbook task directly to an incident field. This means that the value for an output key populates the specified field per incident. This is a good alternative to using a task with the setIncident command. You can map when you select a script in a Standard or Conditional task. For more information, see Create a standard task.
Filter and transform data: Filters extract relevant data to help focus on relevant information and discard irrelevant or unnecessary data. Transformers take one value and transform or render it to another value or format.
Use scripts: Perform specific automated actions using commands that are also used in playbook tasks and in the War Room.
Extract indicators: Extract indicators from incident fields and enrich them using commands and scripts defined for the indicator type.
Extend context: Save additional data from the raw response of commands that return data.
Set and update incident fields: Use the setIncident script in a playbook task to set and update incident fields (see the example after this list).
Use playbook polling: Configure a playbook to stop and wait for a process to complete on a third-party product, and continue when it is done.
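For example, to raise an incident's severity and record details from a playbook task or the War Room CLI (the field values are illustrative):

    !setIncident severity=3 details="Confirmed phishing; credentials reset for the affected user."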
Abstract
Use an out-of-the-box playbook, create a new playbook, or customize an existing one based on your organization's needs.
The debugger provides a test environment where you can make changes to data and playbook logic and view the results in real-time to test and
troubleshoot playbooks. You can see exactly what is written to the context at each step and which indicators are extracted.
Abstract
Use an out-of-the-box playbook, create a new playbook, or customize an existing one based on your organization's needs.
Abstract
If your use case involves investigating a type of phishing event, you can customize a playbook from the Phishing content pack, for example the
Phishing - Generic V3 playbook. For an overview of the Phishing - Generic V3 playbook, go to minute 4:06 in this video.
2. To edit the playbook (add or delete tasks, sub-playbooks, change the flow, etc.), choose Detach Playbook from the ellipsis menu.
NOTE:
If you want to receive content updates for the playbook, duplicate rather than detach the playbook. If you duplicate the playbook, change
the default playbook for the Phishing incident type from Phishing - Generic v3 to your new playbook name.
1. Role: By default, the least busy user is assigned to the incident. If you set a role, the incident will only be assigned to users with that role.
2. SearchAndDelete: Turn on or off the Search and Delete Process in EWS O365. This is part of the remediation process. If set to true, in case
of a malicious email, the Search and Delete sub-playbook looks for other instances of the email and deletes them pending analyst approval.
We will leave the default option, False.
3. BlockIndicators: This is part of the remediation process. It automatically blocks malicious indicators in relevant integrations. For our
example, we keep this as False.
4. AuthenticateEmail: Whether to authenticate the email. Leave as the default, true. See Authenticate the email under the investigation stage.
5. OnCall: Whether to assign a user that is currently on shift. Change this to True.
6. SearchAndDeleteIntegration: This setting is only relevant if SearchAndDelete is set to true. If you later decide to use the SearchAndDelete
option, change the integration here from EWS to O365.
If you enable SearchAndDelete and set SearchAndDeleteIntegration to O365, continue with the O365 inputs such as O365DeleteType.
7. CheckMicrosoftHeaders: If using EWS O365, the Bulk Confidence Level (BCL), Spam Confidence Level (SCL) and the Phishing Confidence
Level (PCL) values on the Microsoft headers are considered as part of severity and email classification (whether the email is spam). These
values help security teams determine whether the email is coming from a spam, phishing or bulk sender. Leave as the default, True.
8. InternalDomains: When the Email Address Enrichment Generic v2.1 sub-playbook runs, it uses the internal domain entered here to
determine if the email was reported from an internal or external email address.
9. GetOriginalEmail: Used to retrieve the original email, when the phishing email is forwarded and not attached. Change this to True only if you
have permissions in EWS O365 to execute global search (eDiscovery). This input is used to determine if the Get Original Email - Generic v2
sub-playbook should run.
The Engage with User stage stores the name of the user. The Email Address Enrichment - Generic v2.1 sub-playbook here receives the email
address of the user reporting the phishing email. If there is an email address from an internal domain, the sub-playbook uses Active Directory to
find the name of the user reporting the phishing email. When we engage with the user, we can then address them by name.
Triage
An email is sent to the reporting user acknowledging that the incident was received. You can update the message (body) by clicking the
Acknowledge incident was received task.
The Triage section extracts relevant information such as indicators from a file, detonates the file, and uses machine learning to predict the phishing
type and update the incident with predictions. Triage includes the following tasks:
Triage starts with the Process Email - Generic v2 playbook, which processes the email and extracts relevant information from files
including attachments. This playbook branches according to whether the email is attached. To view the playbook, hover over the task and
then click the eye icon below the task.
Email artifacts and attachments are extracted by this task, which uses the ParseEmailFilesV2 script. This task takes the email file
and extracts all email addresses, subject, file attachments, etc. The task does not finish until everything has been extracted (inline).
The results are then entered into the incident layout.
While best practice is to attach a file containing the email when reporting spam or phishing, in some cases, users will forward the
email instead.
If the original email is necessary for the investigation and not attached, and the GetOriginalEmail input in the main playbook is set to True, the Get Original Email - Generic v2 sub-playbook obtains the original email and then adds details to the incident, such as sender, text body, and size.
Headers section
If email headers were extracted, the headers are displayed (you can later use them for authentication). At the same time attachment
information is added, such as size, MD5 hashes, number of attachments, etc.
If the email is HTML-formatted (not in all cases), the Rasterize integration creates an image from the HTML data to show how the email appeared to the user.
After extracting the relevant information, we return to the Phishing - Generic v3 main playbook. The following tasks are undertaken at the
same time:
Files are executed in an isolated sandbox environment and their behavior is analyzed. Any sandbox integrations that you have enabled run and provide details about the file reputation, whether the email is forwarded or attached as a file. For example, if a forwarded email (not attached as a file) contains a PDF file, that PDF file goes through detonation and indicator extraction. In this example we use Palo Alto Networks WildFire to detonate files. The payload is triggered so files can be isolated and analyzed.
NOTE:
Detonation is different from file enrichment. For example, VirusTotal file enrichment provides reputation information about the file,
based on the file hash. If VirusTotal does not recognize the file hash, no enrichment information is returned, unless the actual file is
submitted for analysis. A sandbox integration, by contrast, runs the file in an isolated environment and provides exact information
about the file execution.
This sub-playbook uses integrations to safely detonate URLs in a sandbox environment and analyze the website behavior.
If a file is attached, you can extract more indicators from the file, because indicators may exist in the file and not the email. You can extract text-based files, Word, PDF, or other supported files, like PPT files (which sometimes contain executable macros). For PDFs, image OCR is used to extract text from images inside the PDF files.
Predict Phishing Type - The playbook uses your custom trained phishing model to predict the phishing type. If you do not have a
custom trained phishing model, it uses a pre-trained out-of-the-box phishing model instead. If the model is able to predict the
phishing type, the incident is updated with the prediction.
Predict Phishing URLs - If the Rasterize integration is enabled, the DBotPredictURLPhishing script predicts phishing URLs.
After extracting indicators from an email and a detonated file (if there is a file attachment) we need to enrich the indicators. This gives us additional
information about the indicators that have been extracted. The Entity Enrichment - Phishing v2 playbook contains the following sub-playbooks:
File Enrichment - Generic v2: Enriches the file using the File Enrichment - VirusTotal (API v3) playbook. If there is a SHA256 hash and
Cylance Protect v2 is enabled, it enriches the file using Cylance Protect v2.
IP Enrichment - External - Generic v2: Checks whether there are internal and external IP addresses and enriches external IP addresses
using the !IP command, VirusTotal automation (if enabled) and Threat Crowd (if enabled).
Email Address Enrichment - Generic v2.1: Checks whether email addresses are internal or external. For internal email addresses,
additional information is retrieved using Active Directory. For external email addresses, this sub-playbook checks for a domain list input and
for domain squatting (such as using a similar domain).
URL Enrichment - Generic v2: Checks for URLs, verifies SSL & captures screenshots using the Rasterize integration.
Domain Enrichment - Generic v2: Enriches domain using Cisco Umbrella (if enabled), and VirusTotal (if enabled).
As we are using VirusTotal, there is no need to customize these sub-playbooks. If you use different enrichment integrations, add them to the relevant sub-playbooks. These playbooks add to the DBot score (the reputation of the indicator).
At the beginning of the playbook (Playbook Triggered), in the Inputs section, we left CheckMicrosoftHeaders as the default, True.
The Process Microsoft’s Anti-Spam Headers playbook finds the SCL, BCL and PCL values (if they exist) in the Microsoft headers, calculates
the severity based on those scores and classifies whether the email is spam or phishing. You can change the minimum severity of each
score.
The Detect & Manage Phishing Campaigns sub-playbook uses the FindEmailCampaign automation, which uses machine learning to identify existing incidents in Cortex XSOAR that are part of the same campaign as the currently investigated incident. You can customize the inputs as required.
If the sub-playbook finds that the incident is part of a campaign, it generates campaign-related data which you can observe in the linked
Phishing Campaign incident, and take actions related to the campaign.
At the beginning of the playbook (Playbook Triggered), in the Inputs section, we left AuthenticateEmail as the default, True.
Using DKIM, DMARC and SPF we check to see if the email is coming from its alleged source, or whether the email has been tampered with.
The result of the authenticity check is added to the incident field using the setIncident script.
Domain-squatting
At the beginning of the playbook (Playbook Triggered), in the Inputs section, we left HuntEmailIndicators as the default, True.
The Phishing - Indicators Hunting sub-playbook runs to hunt malicious indicators found in other emails and optionally, automatically create
new incidents for each found email if EmailHuntingCreateNewIncidents is set to True.
The results of the previous tasks are used by the Calculate Severity - Generic v2 sub-playbook, which calculates and assigns the incident
severity based on the highest returned severity level from the following:
Severity of the critical assets according to the Calculate Severity - Critical Assets v2 sub-playbook. This playbook checks critical users,
critical user groups, critical groups, critical endpoint groups or critical endpoints. You can define the critical users in your organization by
editing the inputs. If one critical entity is involved in the incident, it will raise the severity to critical.
The DBot Score from tasks that run in the parent playbook or sub-playbooks, (such as process email, extract indicators from file, detonation
playbooks, machine learning, etc.)
The incident is now assigned to an analyst. Incidents can be assigned according to a role such as Analyst, by the least busy user (less-busy-user),
randomly, by user online, etc. For this task, by default, the incident is assigned to the least busy user.
The final task in the investigation section determines Is the email malicious?
The incident severity determines if the email is malicious. If the severity is equal to or greater than 2 (medium is 2, high is 3, critical is 4), the email is considered malicious. We can change this criterion if necessary.
This stage of the process depends on whether the email is undetermined or malicious.
Undetermined
This is a manual task. If the severity is low or unknown it is regarded as undetermined. The analyst manually reviews the incident and
decides whether it is malicious. If not, the analyst updates the user (who sent the email) that the email is safe and then closes the
investigation.
Malicious email
If malicious, we update the user that the email is malicious and then start the remediation process. If the incident was part of a phishing
campaign we also update the user that the email is part of a malicious campaign.
The last part of the process is remediation. A timer starts at this point to track remediation time.
The Search and Delete Emails Generic v2 sub-playbook searches and deletes the email from all users across the organization. This sub-
playbook runs if the original email was retrieved and SearchAndDelete is set to True in the playbook inputs. During setup, we kept the
default setting, False. If you decide to use this playbook with O365, change the SearchAndDelete setting to True, change
SearchAndDeleteIntegration from EWS to O365, and configure the O365 - Security And Compliance - Content Search v2 integration.
The Block Indicators - Generic v2 sub-playbook contains sub-playbooks to block IPs, files, emails, and domains. For example, the Block IP - Generic v2 sub-playbook blocks IP addresses using one or more of the following integrations (depending on which integrations you have configured): PAN-OS, MineMeld, Zscaler, CheckPoint FW, and Fortinet. You can also customize the playbook to add additional integrations.
To choose the remediation method(s), go to the Playbook Triggered task (at the beginning of this playbook), and set the BlockIndicators and the
SearchAndDelete fields. If one or both are set to true, the playbook follows those branches. If the email is found to be malicious, the analyst
assigned to the incident is prompted to manually remediate the incident, regardless of whether the search and delete emails and/or block
indicators branches are executed.
After finishing the remediation section, the timer stops and the investigation is closed.
Customize your playbook to extract indicators, extend context, add incident fields, filter and transform data, run scripts, and perform triggered
actions, sub-playbook loops, and polling.
Abstract
Add a server configuration to customize the name of the security operations center (SOC) that appears in communication tasks.
The default name that appears in the survey header is Your SOC team. Follow these steps to customize the name of the SOC.
1. Go to Settings & Info → Settings → System → Server Settings → Server Configuration → Add Server Configuration.
2. Add the soc.name server configuration, and the display name of your SOC as the value.
This name is used in the default message and email of the communication tasks, and the web survey for all communication tasks.
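For example (the display name is illustrative):

    Key:   soc.name
    Value: Acme SOC Team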
Abstract
Parent playbooks are playbooks that run as the main playbook of an incident. For example, Phishing - Generic v3 and Malware Investigation
& Response Incident Handler.
Sub-playbooks are playbooks that are nested under other playbooks. They appear as tasks in the parent playbook flow and are indicated by
the sub-playbook icon . A sub-playbook can also be a parent playbook in a different use case. For example, IP Enrichment - Generic v2
and Retrieve File From Endpoint - Generic v3. These playbooks are usually used as part of a bigger investigation.
Since sub-playbooks are building blocks that can be used in other playbooks and use cases, you should define generic inputs for them.
Inputs can be passed to sub-playbooks from the parent playbook, used and processed in the sub-playbook, and sent as output to the parent
playbook.
NOTE:
Any change made to a sub-playbook impacts the parent playbook in the next run of the parent playbook.
Sub-playbook loops
Looping uses sub-playbooks to create loops within a parent playbook. When running the loop, the values are calculated based on the context data
for the sub-playbook and not the parent playbook.
NOTE:
The maximum number of loops is limited (default is 100). A high number of loops or a high wait time combined with a large number of incidents may affect performance.
Periodically check looping conditions to ensure they are still valid for the data set.
When the task input is an array, it is iterated automatically (no need to define a loop).
1. In the Playbooks page, select the parent playbook that contains the sub-playbook task you want to run in a loop.
If the playbook is installed from a content pack, you need to either detach or duplicate the playbook before editing.
3. Select the task that contains the sub-playbook for which you want to create the loop.
Exit when: Enables you to define when to exit the loop. Click {} and expand the source category. Hover over the required source and click Filter & Transform to the left of the source to manipulate the data.
Equals (String): Select the operator to define how the values should be evaluated. It is recommended that you balance the number of iterations and the number of seconds to wait between iterations so you don't overload the server.
For each input: Runs the sub-playbook based on defined inputs. Enter the number of seconds to wait between iterations.
Choose Loop automation: Select the automation from the drop-down list to define when to exit the loop. The parameters that appear are applicable to the selected automation.
In the parent playbook (in the task that contains the sub-playbook), you can configure the loop to exit automatically when the last item in the sub-playbook input has been processed.
If there are multiple input lists with the same number of items, the sub-playbook runs once for each set of inputs.
If there are multiple input lists with different numbers of items, the sub-playbook runs the first set of inputs, followed by the second, third, and
so on, until the end.
For example:
Input x: 1,2,3,4
Input y: a,b,c,d
Input z: 9
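To picture how the iteration pairs these inputs, here is a rough Python sketch (illustrative only; the behavior for exhausted lists is an assumption based on the description above, with missing values shown as None):

from itertools import zip_longest

x = [1, 2, 3, 4]
y = ['a', 'b', 'c', 'd']
z = [9]

# One loop iteration per position; shorter lists stop contributing values.
for inputs in zip_longest(x, y, z):
    print(inputs)
# (1, 'a', 9)
# (2, 'b', None)
# (3, 'c', None)
# (4, 'd', None)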
The following example shows how a sub-playbook loop works using the Palo Alto Networks Cortex XDR - Investigation and Response integration.
After you install the Palo Alto Networks Cortex XDR - Investigation and Response content pack, configure the Palo Alto Networks Cortex XDR -
Investigation and Response integration to fetch incidents. By default, the integration uses the Cortex XDR classifier, which automatically classifies
Cortex XDR incident types. In this example, we are using the Cortex XDR incident type which runs the Cortex XDR incident handling v3 playbook.
1. Go to Incidents, open a Cortex XDR incident, and go to the Work Plan tab.
You can see the incident uses the Cortex XDR incident handling v3 playbook.
2. The playbook starts retrieving incident data from Cortex XDR and finds similar incidents by fields. If similar incidents are found, the analyst
can close them as duplicates.
3. If the alert is not a duplicate, the playbook continues to Loop on alert id - Alert enrichment.
4. The playbook runs the Cortex XDR Alerts Handling sub-playbook in a loop, by categorizing and enriching alerts until completion.
To view the looping settings, go to Playbooks and open the Cortex XDR Alerts Handling playbook. In the Inputs tab, you can see that the
playbook takes incident and alert IDs as inputs. In the Loop tab, the For Each Input option is selected. This means the playbook iterates over
all defined playbook inputs until complete.
The playbook determines if the alert is malware, a port scan, or anything else and enriches according to the category.
If the alert is not malware or a port scan, the playbook completes the processing.
The applicable sub-playbook processes the enriched information and outputs the problematic endpoints.
After completing the processing of an alert ID, the playbook iterates through the remaining inputs until all alert IDs have been
processed (looping).
Go to the Cortex XDR Alerts Handling playbook task and click the Results tab. You can see information returned and the number of
times the playbook looped.
Abstract
Use filters and transformers to manipulate data. Use filters and transformers in playbook tasks or when mapping an instance.
In Cortex XSOAR, data is extracted and collected from various sources, such as playbook tasks, command results, and fetched incidents, and
presented in JSON format. The data can be manipulated by using filters and transformers.
Filters enable you to extract relevant data which you can use elsewhere in Cortex XSOAR. For example, if an incident has several files with
varying file types and extensions, you can filter the files by file extension or file type, and use the filtered files in a detonation playbook. You can
filter as many objects as required. Cortex XSOAR automatically calculates the context root to which to filter. You can change the context root as
necessary.
CAUTION:
Although you can change the context data root for the filter, selecting a different root is not recommended, as it affects the filter results. The
drop-down list displays the filter root for backward compatibility.
Transformers
Transformers modify or format data to make it suitable for further processing or presentation. For example, you can convert a date in non-Unix
format to Unix format, or apply the count transformer, which returns the number of elements. When you have more than one transformer, they
are applied in the order in which they appear. You can reorder them by dragging and dropping.
2. In the field you want to add a filter or transformer (for example, inputs or outputs), click the curly brackets and then select Filters and
Transformers.
3. In the Get field, type or select data you want to filter or transform. For example, EWS.Items.Name.
By default, the transformer is set to To upper case (String). Click it to pick a different transformer, for example, to change the date
format for when incidents occurred.
6. (Optional) To test the filter or transformation, click Test and select the investigation or add the data manually.
In this example, we want to filter all EWS Item names that have the extension exe.
1. From the Filters & transformers window, in the Get field, type EWS.Items.Name to extract all Item names in EWS.
7. Click Test.
You should see that the Item names are filtered to those with the exe extension.
Example (advanced): Filter hostname for the last resolved time
In this example, we want to see the LastResolved time only from the demisto.com host name.
{
    "IP": [
        {
            "Address": "192.168.10.96",
            "AutoFocus": {
                "Resolutions": [
                    {
                        "Hostname": "79463wwfqq.dattolocal.net",
                        "LastResolved": "2022-08-02 04:01:02"
                    },
                    {
                        "Hostname": "demisto.com",
                        "LastResolved": "2022-09-10 09:47:17"
                    },
                    {
                        "Hostname": "securesense.call4pchelp.com",
                        "LastResolved": "2022-04-22 11:49:06"
                    }
                ]
            }
        },
        {
            "Address": "192.168.10.96",
            "AutoFocus": {
                "Resolutions": [
                    ...
                ]
            }
        }
    ]
}
1. From the Filters & transformers window, in the Get field, type IP.AutoFocus.Resolutions.LastResolved.
Cortex XSOAR automatically calculates that the context root to filter is IP.AutoFocus.Resolutions.
7. Click Test.
If you require a filter or transformer that is not provided out-of-the-box, you can create your own by writing a script, which is then added to the
operators window (minimal sketches follow the argument descriptions below).
If you want a custom transformer that operates on an entire array rather than on each individual item, you need to add the
entirelist tag.
left: Mark as mandatory. This argument defines the left-side value of the filter operation. In this example, this is the value that is checked to see whether it falls within the range specified in the right-side value.
right: Mark as mandatory. This argument defines the right-side value of the filter operation. In this example, this is the range in which to check for the left-side value.
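For instance, a custom range filter backing the arguments above could look like the following minimal Python sketch (the argument parsing is simplified and illustrative; tag the script with filter so it appears in the operators window):

import demistomock as demisto  # stand-in for the server-injected demisto object
from CommonServerPython import *  # provides return_results


def main():
    args = demisto.args()
    left = float(args['left'])            # the value being checked
    low, high = args['right'].split(',')  # the range, for example "1,8"
    # The filter passes when the left-side value falls inside the range.
    return_results(float(low) <= left <= float(high))


if __name__ in ('__main__', '__builtin__', 'builtins'):
    main()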
value: Mark as mandatory. The value to transform. In this example, this is the UNIX epoch timestamp to convert to ISO format.
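Similarly, the transformer side could be a short Python script along these lines (a simplified sketch; tag it transformer, and use the entirelist tag instead if it should receive the whole list rather than each item):

from datetime import datetime, timezone

import demistomock as demisto  # stand-in for the server-injected demisto object
from CommonServerPython import *  # provides return_results


def main():
    # "value" carries the data being transformed: a UNIX epoch timestamp here.
    epoch = int(demisto.args()['value'])
    return_results(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())


if __name__ in ('__main__', '__builtin__', 'builtins'):
    main()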
Abstract
You can use built-in filters to define your filter; they are grouped by category. Before defining a filter, consider the following.
Filter considerations
Filters try to cast the transformed value and arguments to the appropriate type. The task fails if casting fails. For example, “a” Equals
{“some”: “object”} => Error
If the filter's left-side value expects a single item but receives a list, the filter passes if at least one item meets the requirements. For
example, [“a”, “b”, “c”] Equals “b” => true.
If the filter's left-side value expects a list but receives a single item, it converts it to a list with a single item. For example, “a” Contains “a” =>
True.
Some custom filters are implemented as scripts with the filter tag. You can find examples in the playbook automation task description.
Filters in conditional tasks do not iterate the items of the root. Instead, they fetch the left-side value and the right-side value and compare
them.
When adding a filter, clicking the default Equals (String) field opens a search window showing the available built-in filters. They are defined by
category as follows:
General
Contains: Tests whether the value on the left is contained in the value on the right. Can be used for any kind of object (not limited to a string).
Doesn't Contain: Tests whether the value on the left is NOT contained in the value on the right. Can be used for any kind of object (not limited to a string).
Has length of: Tests whether a list specified on the left has the number of items specified on the right.
In: Tests whether the value on the left is contained in the object on the right.
Is defined: Tests whether a key on the left exists in context.
NOTE:
Is defined considers false and empty strings and lists to be defined values. If you don't want those to be included as defined, use Is not empty.
Not defined: Tests whether a key on the left does NOT exist in context.
NOTE:
Not defined considers false and empty strings and lists to be defined values. If you don't want those to be included as defined, use Is empty.
Not in: Tests whether the value on the left is NOT contained in the object on the right.
String
Determines the relationship between the left-side string value and the right-side string value, such as starts with, includes, and in the list. The
string filter returns partial matches as True.
Doesn't end with: Tests whether the string on the right is NOT the end of the string on the left.
Doesn't equal: Tests whether the strings are NOT the same.
Doesn't include: Tests whether the string on the right is NOT a substring of the string on the left.
Doesn't start with: Tests whether the string on the right is NOT the beginning of the string on the left.
Ends with: Tests whether the string on the right is the end of the string on the left.
Has length: Tests whether the two strings have the same length.
In list: Tests whether the string on the left is in the list on the right.
Includes: Tests whether the string on the right is a substring of the string on the left.
Matches - regex: Tests whether the string on the left matches the regex on the right. Uses Go-style regex.
Not in list: Tests whether the string on the left is NOT in the list on the right.
Starts with: Tests whether the string on the right is the beginning of the string on the left.
StringContainsArray: Tests whether a substring or an array of substrings on the left is within a string array on the right. Supports single strings as well. For example, for substrings ['a', 'b', 'c'] in string 'a' the script returns true.
Number
Determines the relationship between the left-side number value and the right-side number value, such as Equals, Greater than, and Less than.
Doesn't equal: Tests whether the number on the left does NOT equal the number on the right.
Equals: Tests whether the number on the left equals the number on the right.
Greater or equal: Tests whether the number on the left is greater than or equal to the number on the right.
Greater than: Tests whether the number on the left is greater than the number on the right.
InRange: Tests whether the number on the left is within the range specified on the right. For example, if the left value is 4 and the range on the right is 1,8, the condition is true.
Less or equal: Tests whether the number on the left is less than or equal to the number on the right.
Less than: Tests whether the number on the left is less than the number on the right.
Date
Determines whether the left-side time value is earlier than, later than, or the same time as the right-side time value.
After: Tests whether the date on the left is after the date on the right.
AfterRelativeDate: Tests whether the date on the left occurred after the provided relative time (such as '6 months ago') on the right. Returns True or False.
Before: Tests whether the date on the left is before the date on the right.
Format: Example
RFC1123Z: Tue, 02 Jan 2019 15:04:05 -0700 (RFC1123 with numeric zone)
RFC3339: 2019-01-02T15:04:05Z07:00
RFC3339Nano: 2019-01-02T15:04:05.999999999Z07:00
Kitchen: 3:04PM
Boolean
Determines whether a field is true or false, or the string representation is true or false.
Other
CheckIfSubdomain: Tests whether the value on the left is a subdomain of the value on the right.
CIDRBiggerThanPrefix: Tests whether the CIDR prefix on the left is bigger than the defined maximum prefix on the right.
GreaterCidrNumAddresses: Tests whether the number of available addresses in the IPv4 or IPv6 CIDR on the right is greater than the input given on the left.
IsInCidrRanges: Tests whether the IPv4 address on the left is contained in at least one of the comma-delimited CIDR ranges on the right. Multiple IPv4 addresses can be passed in a comma-delimited list, and each address is tested.
IsNotInCidrRanges: Tests whether the IPv4 address on the left is NOT contained in any of the comma-delimited CIDR ranges on the right. Multiple IPv4 addresses can be passed in a comma-delimited list, and each address is tested.
IsRFC1918Address: Tests whether the IPv4 address on the left is in the private RFC 1918 address space (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).
LowerCidrNumAddresses: Tests whether the number of available addresses in the IPv4 or IPv6 CIDR on the right is less than the input given on the left.
Abstract
You can use built-in transformers to define your transformer; they are grouped by category. Before defining a transformer, consider the following.
Transformer considerations
Transformers try to cast the transformed value (and arguments) to the necessary type. If casting fails, the task fails. For example,
{"some": "object"} To upper case => Error.
Some transformers are applied on each item of the result. For example, a, b, c To upper case => A, B, C.
Some transformers operate on the entire list. For example, a, b, c count => 3.
Some custom transformers are implemented as scripts with the transformer tag. You can find examples in the playbook automation task
description.
When adding a transformer, clicking the default To upper case (String) field opens a search window showing the available built-in transformers.
They are defined by category as follows.
deleteCount: Number of elements to remove from 'index'. Default is 0.
Get field: Extracts a given field from the given object. Example: {"name": "john", "color": "white"} field: "color" => "white"
NOTE:
To make a regex case-insensitive, use the (?i) prefix, for example, (?i)yourRegexText.
Replace match: Returns a string in which some or all matches of a regex pattern are replaced with a specified string. regex: A regex pattern whose matches are replaced by the replaceWith argument. replaceWith: The string that replaces the matched text; the default is an empty string. Detailed regex syntax can be found at https://github.com/google/re2/wiki/Syntax. Examples: pluto,is,not,a,planet regex: "," replaceWith: ";" => "pluto;is;not;a;planet"; "pluto is not a planet" regex: .*to replaceWith: vega => "vega is not a planet"
Split: Splits a string into an array of strings, using a specified delimiter string to determine where to make each split. Example: hello world,bye bye world => hello world, bye bye world
Split & trim: Splits a string into an array of strings and removes whitespace from both ends of each string, using a specified delimiter string to determine where to make each split. delimiter: Specifies the string which denotes the points at which each split should occur (the default delimiter is ","). Example: hello & world delimiter: & => hello, world
From string: Returns the subset of a string that follows the first occurrence of the given string. from (required): String to substring from. Example: pluto is not a planet from: pluto is => not a planet
To string: Returns the subset of a string up to the first occurrence of the given string. Example: pluto is not a planet to: a planet => pluto is not
concat: Returns a string concatenated with a given prefix and suffix. prefix: A prefix to concat to the start of the argument. suffix: A suffix to concat to the end of the argument. Examples: night prefix: good => good night; night suffix: shift => night shift
Round: Rounds the number to the nearest integer. Example: 2.5 => 3
Decimal precision: Truncates the number of digits after the decimal point, according to the by argument. Example: 8.6666 by: 2 => 8.66
Date to string: Converts a date to a specified string format. The date input must be in ISO format, for example, 2021-10-06T13:44:07. The default output format is RFC822. format: The desired string output format; for example, to convert to RFC822 format, enter 02 Jan 06 15:04 MST (RFC3339Nano = 2006-01-02T15:04:05.999999999Z07:00). Example: 2021-10-06T13:44:07 => 06 Oct 21 13:44 EDT
Format: Example
RFC1123Z: Tue, 02 Jan 2019 15:04:05 -0700 (RFC1123 with numeric zone)
RFC3339: 2019-01-02T15:04:05Z07:00
RFC3339Nano: 2019-01-02T15:04:05.999999999Z07:00
Kitchen: 3:04PM
Abstract
Extract indicators from Cortex XSOAR incident fields and enrich them with commands and scripts defined for the indicator type.
In Cortex XSOAR, the indicator extraction feature extracts indicators from incident fields and enriches them using commands and scripts defined
for the indicator type. If indicator extraction is enabled, indicators are extracted according to the incident type. For more information about indicator
extraction, see Indicator extraction.
1. Select the playbook where you want to add indicator extraction, and click Edit.
4. For Indicator Extraction mode, select the mode you want to use (default is inline).
5. Click OK.
Example 14.
The following scenario shows how indicator extraction is used in the Process Email - Generic v2 playbook to extract and enrich a very specific
group of indicators.
1. Navigate to the Playbooks page and search for the Process Email - Generic v2 playbook.
3. Open the Add original email details to context task, and from the Script drop-down, change the script from Set to ParseEmailFilesV2.
Under the Outputs tab, you can see all of the different data that the task extracts.
4. Click the Advanced tab and set Indicator Extraction mode to Inline. This ensures all the outputs are processed before the playbook moves
ahead to the next task.
5. Open the Display email information in layout - Email.Headers task. This task receives the data from the saved attachment tasks and sets the
various data points to context.
6. Click the Advanced tab and set Indicator Extraction mode to None, because the indicators were already extracted earlier in the Extract email
artifacts and attachments task and there is no need to extract them again.
Inline: Indicators are extracted within the context that indicator extraction runs (synchronously). The findings are added to the context data.
For example, if indicator extraction for the phishing alert type is inline:
For incident creation, the playbook you define to run by default does not run until the indicators have been extracted.
For an on-field change, extraction occurs before the next playbook tasks run. This option provides the most robust information
available per indicator.
NOTE:
While indicator creation is asynchronous, indicator extraction and enrichment are run synchronously. Data is placed into the incident
context and is available via the context for subsequent tasks.
Out of band: Indicators are extracted in parallel (asynchronously) to other actions. The extracted data will be available within the incident,
however, it is not available for immediate use in task inputs or outputs because the information is not available in real-time.
For incident creation, out of band is used in rare cases where you do not need the indicators extracted for the subsequent flow of the
playbook, but you still want to extract them and save them in the system as indicators so that they can be reviewed manually at a later stage.
System performance may be better because the playbook flow does not stop for extraction. However, if the alert contains indicators that are
needed or expected in the subsequent playbook execution flow, use inline, which does not execute the playbook before all indicators are
extracted from the alert.
NOTE:
When using Out of band, the extracted indicators do not appear in the context. If you want the extracted indicators to appear, select Inline.
If indicators are not extracted, check whether the indicator extraction mode is set to None. Even if you select the relevant incident fields and the
indicators to extract, if the mode is set to None, indicators are not extracted.
Abstract
Extend context to retrieve specific information from integrations or commands and map to fields.
By design, integrations do not write all of the data returned from a command to the context. This prevents large context size and enables you to
store only the most relevant information.
The Extend Context feature enables you to save additional data from the raw response of the command. For example, when a command runs to
retrieve events from a SIEM, only some of the event fields are written to context, according to the integration design. With Extend Context, you can
save additional fields specific to your use case.
Extend Context can also be used when the same command runs multiple times in the same playbook, but the outputs need to be saved to
different context keys. For example, you can execute the !ad-get-user command twice, once to retrieve the user's information and again to
retrieve the user's manager’s information. By default, an integration command writes the data from the same command to the same context key.
By using Extend Context, you can write the command’s response to a custom context key of your choice.
You can extend context either in a playbook task or directly from the command line. Whichever method you use, first run your command with the
raw-response=true flag. This helps you identify the information that you want to add to your extended data.
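For example (the argument value is illustrative):

!ad-get-user name="john" raw-response=true

The full raw response is printed to the War Room, where you can identify the fields worth saving to context.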
You can use DT to select keys of interest from a command that returns a list of dictionaries containing many keys. For example, the
findIndicators automation returns a long list of indicator properties, but you may only be interested in saving the value and the indicator_type to context.
Example 15.
2. Use the following value for extend-context to save only value and indicator_type into a context key called FoundIndicators:
3. Use the following value for extend-context to save only the incident name, status, and id to a key called FoundIncidents:
1. Go to the Advanced tab of the relevant playbook task, such as a Data Collection task.
2. In the Extend Context field, enter the name of the field in which you want the information to appear and the value you want to return. For
example, using the !ad-get-user command, enter name="john" attributes=displayname to place the user's name in the displayName
key.
The following image shows the result of the !IPReputation ip=20.8.1.5 raw-response=true command.
To include more than one field, separate the fields with a double colon. For example:
attributes=displayName::manager=attributes.manager
3. To output only the values for Extend context and ignore the standard output for the command, select the Ignore Outputs checkbox.
While this will improve performance, only the values that you request in the Extend Context field are returned. You cannot use Field Mapping
as there is no output to which to map the fields.
For example, to add the user and manager fields to context, use the ad-get-user command, as follows:
!ad-get-user=${user.manager.username} extend-context=manager=attributes.manager::attributes=displayName
2. To output only the values that you set as Extend context, run the command with ignore-output=true:
!ad-get-user=${user.manager.username} extend-context=manager=attributes.manager::attributes=displayName ignore-output=true
Example 16. Extend context using the CLI with the IBM Qradar v3 integration instance
By default, after adding an IBM Qradar v3 integration instance, incidents pulled from QRadar to Cortex XSOAR return multiple fields, including
event_count, device_count, offense_type, description. You can use extend context to show which additional information is available. You
can also use that information to map it to a field in Cortex XSOAR.
Run the command !qradar-offenses-list raw-response="true". From the context data, you should see that multiple fields are
returned.
Identify the fields that you want to view and run your command. For example, to retrieve the number of devices affected by a given incident,
as well as the domain in which those devices reside, run the following command:
!qradar-offenses-list extend-context=device-count=device_count::domain_id=domain_id
Abstract
Use the setIncident script to set and update all system incident fields.
Using a playbook to create incident fields offers a structured and automated approach to defining and populating fields with relevant data during
incident handling. This ensures consistency in data collection, enhances the organization of incident information, and facilitates streamlined
analysis and response processes.
Creating incident fields is essential for structuring and storing specific information related to security incidents. These fields enable efficient
organization and retrieval of incident data, enhancing analysis, decision-making, and automated response actions. It is an iterative process in
which you create fields as you better understand your needs and the information available in the third-party integrations you use. You initially
define incident fields after the planning stage, with mapping and classification for how the incidents will be ingested from third-party integrations
into Cortex XSOAR.
During the investigation, you can then use the setIncident script in a playbook task to set and update incident fields.
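For example, a playbook task or CLI command along the following lines updates fields on the current incident (the values are illustrative; severity, owner, and details are standard setIncident arguments):

!setIncident severity=3 owner=admin details="Escalated after sandbox verdict"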
NOTE:
The name field has a limit of 600 characters. If there are more than 600 characters, you can shorten the name field to under 600 characters
and then include the full information in a long text field such as the description field.
There are many fields already available as part of the Common Type content pack. Before creating a new incident field, check if there is
an existing field that matches your needs.
For more information on creating custom incident types and fields, see this video.
Abstract
The Generic Polling playbook enables you to periodically poll the status of a process on a remote host.
When working with third-party products (for detonation, scanning, searching, and so on), you may need to wait for a process to finish on the
remote host before continuing. In these cases, the playbook should stop, wait for the process to complete on the third-party product, and
continue when it is done. Integrations or automations may not be able to do this on their own due to execution limitations.
To use polling, Cortex XSOAR comes out-of-the-box with the GenericPolling playbook, which periodically polls the status of a process being
executed on a remote host, and when the host returns that the process execution is done, the playbook finishes execution. For more information
about using this playbook, see Generic Polling.
The GenericPolling playbook is used as a sub-playbook to block the execution of the main playbook until the remote action is complete. There are
a number of playbooks that use the GenericPolling playbook that come out-of-the-box or installed from a content pack, such as:
Context Polling - Generic: Polls a context key to check if a specific value exists.
Scan Site - Nexpose: Scans according to asset IP addresses or host names from Rapid7 Nexpose, and waits for the scan to finish by polling
the scan status in pre-defined intervals.
See this video for more information on how to use generic polling in Cortex XSOAR.
PREREQUISITE:
You need to use the GenericPolling playbook as a sub-playbook in a main playbook, such as Detonate File - JoeSecurity.
1. Start Command: The task contains a command that fetches the initial state of the process and saves it to context. This command starts
the process that should be polled. For example:
Detonation: Submits a sample for analysis (detonated as part of the analysis), using the joe-analysis-submit-sample command.
Scan: Starts a scan for specified asset IP addresses and host names using the nexpose-start-assets-scan command.
2. Polling Command: The task contains the GenericPolling sub-playbook that polls for an answer. For example:
Scan: After the scan runs in Nexpose, the playbook polls for scan information such as the scan type, the number of assets
found, the scan ID, and other information.
Search: The playbook runs the qradar-get-search command to poll for the search ID and status.
3. Results Task: Returns the results of the operation. The task contains the results that were polled, which are added to context. For
example, after polling JoeSecurity, the results are added to context.
For information about the GenericPolling playbook inputs such as Ids, Interval, and dt, see Playbook inputs.
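For orientation, a hypothetical GenericPolling input configuration for a detonation flow might look like the following (the input names are the playbook's documented inputs; the values are illustrative rather than the content pack's exact configuration):

Ids: ${Joe.Analysis.ID}
PollingCommandName: joe-analysis-info
Interval: 1 (minutes to wait between polls)
Timeout: 10 (minutes before polling stops)
dt: Joe.Analysis(val.Status !== 'finished').ID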
Example 17.
This generic polling example uses the Detonate File - JoeSecurity playbook from the Joe Security content pack.
The Detonate File - JoeSecurity playbook detonates one or more files using the Joe Security integration and returns relevant reports to the War
Room and file reputations to the context data.
1. If you have not done so, go to Marketplace and download the Joe Security content pack.
3. Open the JoeSecurity Upload File task. This task uses the joe-analysis-submit-sample command, which starts a new analysis of a file
in Joe Security. This is the Start command.
PollingCommandName: The joe-analysis-info command returns information for a specified analysis, such as status, MD5,
SHA256, vendor.
dt: (val.Status !== 'finished').ID gets the objects that have a status other than 'finished', and then gets their ID fields. The polling is
done only when the result is finished. When finished, the dt filter returns an empty result, which triggers the playbook to stop
running.
5. Open the JoeSecurity Get Info task. The joe-analysis-info command returns details of the IDs that have finished polling. This is the
Results task.
6. Open the Set Context task. The context path to store the poll results is Joe.Analysis.
Global context outputs enable receiving information from multiple integrated products when executing playbooks and commands.
The following are common generic polling issues and the recommended ways to deal with them.
Because generic polling schedules tasks outside the context of the playbook (they are not visible in the playbook run), errors may appear only
in the War Room. Go to the War Room for the incident and check for errors or warnings related to GenericPolling tasks.
The GenericPolling task completes, but the status is still not "finished".
If the timeout is reached, the playbook successfully finishes even if there are items that did not complete. Try increasing the timeout value
for the GenericPolling task.
The integration returns an ID not found error when running from the GenericPolling sub-playbook, but when running manually, it finishes
successfully.
Some products cannot handle consecutive requests to query an action status right after the request to perform the action. After you initiate
the action, try adding a Sleep task before calling the GenericPolling sub-playbook.
11.6 | Scripts
Abstract
Create and edit a script including detaching and attaching and automation settings.
Scripts perform specific automated actions using commands that are used in playbook tasks and in the War Room.
On the Scripts page, you can view, edit, and create scripts in JavaScript, Python, or PowerShell. When creating a script, you can access all Cortex
XSOAR APIs, including access to alerts and investigations, and you can share data to the War Room. Scripts can receive and access arguments
and can be password protected.
When developing a script, consider editing an out-of-the-box script to leverage existing functionality and save time and effort. On the Scripts
page, use free text in the search box to find an existing script. You can search using part or all of the scripts' names or tags. You can also search
for an exact match of the script name by putting quotation marks around the search text. For example, searching for "AddEvidence" returns the
script with that name. You can search for more than one exact match by including the logical operator "or" in-between your search texts in
quotation marks. For example, searching for "AddEvidence" or "AddKeyToList" returns the two scripts with those names. Wildcards are not
supported in free text search.
The Script Helper provides a list of available alphabetically ordered commands and scripts.
Cortex XSOAR comes out-of-the-box with several common scripts that can be used in playbooks and commands (from the War Room), the
majority of which are contained in the Base and Common Scripts content packs.
The Base content pack is a core pack that helps you get started and includes scripts that can be used in other JavaScript, Python, and PowerShell
scripts. The Common Scripts content pack includes scripts that are commonly used, such as EmailReputation, RunDockerCommand, and
ConvertXMLToJson.
Common Scripts contain code (such as functions and variables) that can be used across scripts and can be embedded when writing your scripts
and integrations. Common Scripts are reusable modules or functions that provide additional functionality and capabilities to interact with APIs.
Instead of duplicating code across multiple scripts or integrations, developers can create common scripts containing commonly used API
interactions, such as authentication, data retrieval, or data manipulation. For example, in the CommonServer script, the tableToMarkdown function
takes a JSON and transforms it into markdown. You can call this function from integrations and scripts that you author.
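For example, a custom Python script could call it roughly as follows (the data is illustrative; tableToMarkdown and CommandResults come from CommonServerPython):

import demistomock as demisto  # noqa: F401
from CommonServerPython import *  # provides tableToMarkdown, CommandResults


def main():
    rows = [
        {'Name': 'demisto.com', 'Score': 1},
        {'Name': 'example.com', 'Score': 3},
    ]
    # Render the list of dicts as a markdown table for the War Room.
    md = tableToMarkdown('Reputation results', rows, headers=['Name', 'Score'])
    return_results(CommandResults(readable_output=md))


if __name__ in ('__main__', '__builtin__', 'builtins'):
    main()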
On the Scripts page, you can view/edit common scripts such as:
CommonServer
The CommonServer script contains JavaScript functions and variables that can be used when writing your scripts and integrations.
The script contains nearly 200 functions/variables, such as tableToMarkdown, closeInvestigation, and SetSeverity.
You can copy the script and add new functions/variables, or add your functions to the CommonServerUser script. You can also use your
scripts to override the existing scripts in the CommonServer script.
CommonServerPython
The CommonServerPython script contains Python functions that can be used when writing your scripts and integrations.
The script contains over 400 functions, such as appendContext, vtCountPositives (which counts the number of detected URLs in the
War Room entry), and datetime_to_string (which converts a DateTime object into a string).
You can copy the script and add new functions/variables or add your functions to the CommonServerUserPython script. You can also use
your scripts to override the existing scripts in the CommonServerPython script.
CommonServerPowerShell
The CommonServerPowerShell script contains PowerShell arguments/functions that can be used when writing your scripts and integrations.
The script contains many arguments/functions, such as SetIntegrationContext, Write-HostToLog (which writes to the demisto.log), and
ReturnOutputs (which returns results to the user more intuitively).
You can copy the script and add new arguments/functions or add your own to the CommonServerUserPowerShell script. You can also use
your scripts to override the existing scripts in the CommonServerPowerShell script.
Abstract
Create or edit an out-of-the-box script, including detach and attach and automation settings.
Developing scripts in Cortex XSOAR helps to automate repetitive tasks, streamline security operations, and make incident response more
efficient. Customizing scripts can improve threat detection, mitigation, and remediation processes specific to your organization's needs.
Rather than creating a script from scratch, you can edit existing scripts. If the script was installed from a content pack, by default the script is
attached, which means that it is not editable. To edit the script, you need to either make a copy or detach it. While the script is detached, it is not
updated by the content pack, which is useful when you want to customize the script without your changes being overwritten by content pack
updates. If you want to receive updates through the content pack again, reattach the script, but note that any changes are then overridden by the
content pack on upgrade. If you want to keep your changes, make a copy before reattaching.
NOTE:
You can enable/disable a script in the Settings, without having to detach or duplicate the script.
You can view recently modified or deleted scripts by clicking version history for all scripts.
Tags: For example, if a script is intended for phishing, tagging it with the phishing tag helps organize, classify, and manage the script among other scripts. Organizations can also implement policies or restrictions based on tags associated with scripts. For example, they may restrict certain users from accessing or executing a script tagged for phishing.
Enabled: Whether the script is available for playbook tasks and indicator types, or to run in the CLI.
Tags
Script tags enable you to use the script in a specific area in Cortex XSOAR. For example, a script can be tagged to use in post-processing,
indicator formatting, field display, and indicator enhancement. The following table includes the commonly used tags:
NOTE:
All custom scripts are available for conditional tasks, including scripts without the condition tag.
System scripts are only available for use in conditional tasks if they have the condition tag.
general-dynamic-section: General purpose dynamic section script for object layouts (excluding incident and indicator layouts)
Arguments
You can create, edit, or delete outputs as required. Define the outputs according to types such as string, number, date, and boolean. For more
information, see Context and Outputs.
Context Path: A dot-notation representation of the path to access the context. For example, ThreatStream.Analysis.ReportID.
Description: A short description of what the context path represents. For example, the ID of the report submitted to the sandbox.
Type: The value type of the context path, such as string, number, or date. This enables Cortex XSOAR to format the data correctly.
Script permissions
Define the script permissions to set who can view and execute the script.
Password Protect: Enables you to add a password for the script, which is required when running the script from the CLI.
Run as: Script permissions are determined by the Run as and Role fields.
Run as defines the permissions with which the script runs.
By default, most scripts run as a Limited User, with restricted access, and can only perform the specific operations allowed by that role.
If you select DBotRole, the script runs with full permissions, and users with lower permissions can also view the results. To address this, assign Run as according to the user roles you want to give access to the information the script can extract.
NOTE:
Content packs typically use scripts, and the scripts can have dependencies on each other, so it is important to assign the Run as and Role parameters consistently to ensure scripts run properly.
If you change permissions for a script, the new permissions do not affect playbooks that are already using the script. The playbooks continue using the previous permissions until the next run, even when the playbook is triggered manually.
In this example, you created a script that searches for all incidents in the system. The following scenarios show how the system behaves using the
Run as and Role configurations.
Incident 3: Analysts
NOTE:
To limit incident access to specific roles, see Limit access to investigations using access control.
Run as: Analyst, Role: Analyst
Only users with at least the analyst role can execute the script. The script is executed with the permissions and access levels associated with the analyst role; it has the same permission level as an analyst user, allowing it to perform actions based on the assigned role.
Result: The script returns incidents 2 and 3.
Run as: Analyst, Role: Not set
All users can execute the script, but only incidents that are viewable by the analyst role are returned.
Result: The script returns incidents 2 and 3.
Run as: Limited User, Role: Analyst
The analyst role can execute the script. The script is executed with restricted permissions and access levels. The limited user role defines the specific operations the script can perform, restricting it from executing unauthorized actions or accessing sensitive information within Cortex XSOAR. This provides a more restricted execution environment with a narrower scope of capabilities.
Result: The script returns incidents 2 and 3.
Run as: DBotRole, Role: Not set
All users can execute the script, whether they are Analysts or Instance Administrators, and they have access to all incidents that are returned.
Result: All incidents.
Run as: DBotRole, Role: Analyst
Any user with at least analyst permissions can execute the script; all other users can't run the script manually from the command line or in a playbook task. All users can see the results of the execution in the War Room.
Result: All incidents.
Advanced
Timeout (seconds): Time (in seconds) before the script times out. Default is 180.
Docker image name: For Python scripts, this is the name of the Docker image used to run the script. The default Docker image that Cortex XSOAR uses is demisto/python3, but you can use other Docker images from a private image registry. See Change the Docker image in an integration or script for more information.
Depends on commands
You can set the commands that the script depends on directly from these settings. You still have the option to set the dependencies in the
script YAML file.
Modify parameters, logic, or integrations within a script to adapt it to specific use cases, optimize performance, and address evolving security
needs without starting from scratch.
The Script Helper provides a list of available alphabetically ordered commands and scripts.
Set breakpoints, conditional breakpoints, skip tasks, and input and output overrides in the playbook debugger.
The debugger provides a test environment where you can make changes to data and playbook logic and view the results in real-time to test and
troubleshoot playbooks. You can see exactly what is written to the context at each step and which indicators are extracted.
To open a detached system playbook, a copy of a system playbook, or a custom playbook in the debugger, select the playbook and click Edit.
To open an attached playbook in the debugger, select the playbook and click View to access the debugger. While editing a playbook, sub-
playbooks can be opened directly in the debugger by choosing Open sub-playbook in the task pane.
In some cases, you may have a playbook that includes two or more copies of the same sub-playbook. When you set breakpoints, override inputs
or outputs, or skip tasks in sub-playbook A, the same changes apply to the identical sub-playbook B. In addition, if you set a breakpoint, override
inputs or outputs, or skip tasks within a loop in a playbook, that setting will be applied every time the loop executes.
NOTE:
The debugger runs with the permissions of the logged in user. The user must have permissions for both playbooks and investigations
(View/Edit) to run the debugger.
The debugger uses test data to execute the playbook, so you can see what your expected results would be. The following are options for test
data.
1. New Mock Incident: By default, the debugger runs using an empty mock incident. An empty mock incident is useful to test simple
functionality, such as a playbook that does simple tasks such as parsing inputs.
2. Playground: You can load the contents of the Playground as test data, enabling you to use uploaded files and custom context data for
testing purposes.
3. Existing Incident: You can select an existing incident. For example, when debugging a phishing playbook, you might want to use an
existing phishing incident that came from the mail listener integration. Using an existing incident in the debugger does not change the
original incident.
If you need to use event data from third-party software that is not yet set up as an integration, you can import a JSON file into Cortex
XSOAR through the mapping feature and create an incident that can then be used as test data.
You can use a file attachment for your test data by adding the file to an incident and selecting the incident or by uploading the file to the
playground and using the playground as test data.
Set a breakpoint
At the breakpoint, you can override inputs and outputs to see how changes affect playbook execution. In addition, conditional breakpoints set
conditions for the playbook to proceed. The playbook only pauses if your condition is met, letting you manipulate data to see how different
scenarios impact how the playbook runs. For example, you can set a conditional breakpoint to pause the playbook when a phishing incident
targets a member of a VIP asset list. If there are no VIPs in this incident, the execution does not pause. If there is a VIP in the incident, you can
check that the member was properly identified by the playbook task.
Breakpoints do not apply to manual tasks, as a manual task always pauses the playbook run unless you skip it. When the playbook reaches a
breakpoint, no new tasks begin, but parallel tasks that have already begun continue. Breakpoints can be set in both the parent playbook and its
sub-playbooks.
1. To set a breakpoint, go to a task and click on the breakpoint button. When a breakpoint is set, the breakpoint button changes to orange.
2. After a breakpoint is reached, click the task to override inputs and outputs if needed.
3. When you are finished with the task, run the debugger, and in the task, select an option for the playbook to continue.
For an automated task, you have the options Run automation now or Complete Manually. If you choose Complete Manually, click on Mark
Completed for the playbook to continue.
For a task that is a sub-playbook, click Run playbook now for the playbook to continue.
For a conditional task, choose which branch the playbook should follow and click Mark Completed for the playbook to continue. The default
branch is else.
When the playbook reaches a breakpoint, the task has an orange line at the top to indicate the breakpoint.
Breakpoint alerts are also displayed at the top of the playbook, enabling you to navigate between multiple breakpoints that have been
reached in the playbook or sub-playbooks.
Conditional breakpoints enable you to debug loops and tasks with multiple values. The playbook only pauses if your condition is met, letting you
manipulate data to see how different scenarios impact the playbook run.
After a breakpoint is set and the breakpoint icon is orange, a tooltip appears enabling you to add a condition to the breakpoint.
2. On both sides of the condition statement, you can choose available playbook data From previous tasks or use As value to set any other
value.
Clicking on the curly brackets enables you to use data from the current playbook and from sub-playbooks.
3. Click Equals (String) to select from a set of conditions (such as contains, ends with, or greater than).
NOTE:
If the breakpoint condition as defined does not exist when the debugger runs, the condition will default to false. For example, if you
choose IP address and there is no IP address available, the playbook will not pause.
The debugger runs the playbook with the permissions of the logged in user. If a user runs potentially harmful commands, they are logged to the
audit trail with the user's username. When the user sets breakpoints, skips tasks, or overrides inputs or outputs, those changes apply only to that
user's debugger session.
Breakpoints pause playbook execution before a specific task. When the playbook is paused, the Debugger Panel displays the current state of
context data, indicators, and task information.
To start the debugger, click Run. When you click Stop, the debugger stops, and the context data is reset to the original incident data. In the case of
a new mock incident, the context data is cleared and the context is empty. Any breakpoints, skips, or overrides you applied are still available.
The debugger enables you to temporarily override inputs and outputs for a playbook run and to view the results in real time. When you override an
input or output in the debugger, the change is saved only in the debugger view and only for the user who made the change. If after testing you
decide to keep the temporary changes you made and apply them permanently to the playbook for all users, you need to cancel the override and
edit the task. Tasks can be edited directly in the debugger or outside of the debugger using the standard playbook editing options.
You can override task inputs or outputs before or during a playbook run to troubleshoot tasks that fail or to try different input and outputs as part of
playbook development. If you override an input or output during a playbook run, the override is applied to the run if the playbook has not yet
reached that task. If you edit (permanently change) inputs during a playbook run, the changes only take effect the next time you run the playbook.
You cannot use filters or transformers for overrides.
1. To override an input or output, open the task and hover over any existing input or output. Click Override Input.
2. Enter a new input or output that will be used only in the debugger. For output overrides, you can enter a value, an array of values, or JSON.
For input overrides, you can only enter plain text.
The playbook task card displays a label indicating that the task input or output has been overridden.
Skip tasks
For testing purposes, you may want to skip a task that for example closes a port in a firewall, deletes an email, or sends a notification to a
manager. Or you might skip a task where the integration has not yet been configured. By skipping a task and overriding the output, you can
provide the data necessary to complete the playbook run. When you skip a conditional task, you can choose which branch runs after the skipped
task, enabling you to test different outcomes for multiple branches.
1. To skip a task with potentially harmful results, such as blocking a user or opening a port in a firewall, click the skip button on the task.
2. If the output is required for the playbook to proceed, click the task and override inputs and outputs.
When you skip a conditional task, you can set which branch runs after the skipped task, enabling you to test different outcomes for multiple
branches.
1. Choose skip for a conditional task. The skip button will turn orange.
2. Click on the task. Select which branch runs after the skipped task. If you do not choose, the else branch runs by default.
Within the debugger panel, you can view the context data during the playbook run as well as the indicators as they are extracted by clicking any
completed task in the playbook while the debugger is running.
You can see the results of that task in the debugger panel.
Abstract
You can analyze playbook metadata such as task inputs and outputs, the amount of storage each task input/output uses, and the type of task. This
is useful when troubleshooting your custom playbook if your system has slowed down and is showing high CPU, memory, or storage (disk
space) usage.
After an incident has been assigned to a playbook, you can analyze the playbook to see the storage used by its task inputs and outputs. You can
filter the data according to the KB used in each task input/output.
From the Incidents page, in the Incident War Room, run the following command in the CLI.
!getInvPlaybookMetaData incidentId=<incident ID> minSize=<size of the data you want to return in KB; default is 10>
Example 20.
To view the playbook metadata used in incident number 964, in the CLI type !getInvPlaybookMetaData incidentId="964" minSize="0".
In Cortex XSOAR, you can develop and test your playbook content on development machines before using it in a production environment using
the remote repository feature.
For more information about content management in Cortex XSOAR, see Content management in Cortex XSOAR.
You can save versions of a playbook as you are developing it. When you save a version of a playbook, add a meaningful comment so that you will
be able to recognize the changes you made in that version at a later time. The version is saved with the name of the playbook, your commit
message, an indication of what the change was (modify, insert), the date the playbook was saved, and the name of the author who last saved it. If
necessary, you can access the playbook’s version history and revert your playbook to a previous version.
1. In a playbook, after making changes, click the list next to Save Playbook and then click Save version for current Playbook.
2. Enter a description of the change that was made to the current version.
a. Click the icon next to New Playbook. The tooltip displays Version history for all Playbooks.
b. Search for the required playbook. The description that was entered when the version was saved should help you locate the version
you now require.
The following guidelines are best practices for building playbooks as well as optimizing playbook design and performance. Whether you are just
starting or are creating advanced workflows, we recommend reviewing these recommendations carefully so your playbooks have a clear logical
flow and run correctly and efficiently.
The Use Case Builder content pack helps you streamline the use case design process, including building your playbook. It contains tools to help
you measure and track use cases through your automation journey and quickly autogenerate OOTB playbooks and custom workflows.
Describe tasks clearly. Tasks should be clear to someone not familiar with the playbook workflow. This applies to task names, task descriptions,
and the playbook description. When naming tasks, the guideline should be that users can understand what the playbook does by reading the task
names, without having to open individual tasks to view the details.
Grouping inputs organizes the input fields and provides clarity and context to understand which inputs are relevant to which playbook flow.
Use the PascalCase convention for inputs, keeping in mind that inherently capitalized terms should be kept in upper case. For example, the
Entity ID input should be named EntityID and MITRE Technique should be MITRETechnique.
When configuring playbook outputs, configure sub-keys as much as possible, do not limit configuration to only the root keys. For example,
instead of outputting File, output File.Name, File.Size, etc. This helps when viewing the outputs of the playbook within another
playbook.
Avoid using Cortex XSOAR Transform Language (DT) in the Get input field definition.
If you need to use DT for complex processing and you think a new filter or transformer would provide a better alternative to your DT solution,
you can request the feature or contribute it. Consider using DT only if it can drastically simplify the playbook or improve performance.
In each task, make sure appropriate logical operations are performed on input data. For example:
Be aware of potential race conditions. When you want to add multiple values to the same key, do not use multiple tasks that run Set,
SetAndHandleEmpty, or any other script that sets data in context at the same time, because a race condition can cause your data to be
overwritten by the same tasks. This is especially problematic when trying to append data. Instead, run the tasks one after the other or use
scripts to append the data instead of setting a new value to the key.
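For example, rather than two parallel Set tasks writing to the same key, run sequential appends (the key and values here are illustrative; append is a standard argument of the Set script):

!Set key=AffectedHosts value="host-1" append=true
!Set key=AffectedHosts value="host-2" append=true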
Verify whether the data you're getting is As value (simple value) or From Previous Tasks (from context).
Tasks take their inputs from the context, not directly from the previous tasks (even if it says From previous tasks). For an example of a task
not receiving the right context, see this bug (since fixed) in a playbook:
The playbook begins by classifying the emails as internal or external. It then checks the reputation of external email addresses if any were
found. We expect that branch to run only if external addresses are found.
However, no filter was applied to the last task that gets the reputation. This means that if both internal and external email addresses are
found, we proceed with both branches (internal and external) of the playbook, and the task that gets the reputation runs without an applied
filter, effectively taking all the emails we have in the inputs. The correct task input should have filtered for external addresses only.
Use the ignore-case option where possible, especially when checking Boolean playbook inputs such as True, which users may end up
configuring as true with a lowercase t.
When working with two lists, if you need multiple items from list A which are also in list B, use the in filter instead of the equals or
contains filters.
Correct: Get the IP addresses that are in the list of inputs.
Incorrect: Get the IP addresses where the addresses contain the list. This is incorrect because they don't contain the list; they contain individual items from it.
Differentiate between checking if a specific element exists versus checking if an element equals something. This is a common
mistake that can lead to tests working in some situations, but not all.
Correct: Check if any object where the NetworkType is External exists.
Incorrect: Check if the NetworkType of the IP object is External. This is incorrect because the IP object may contain multiple IPs, some internal and some external.
Run one or more tasks based on the object types versus running either one task or the other based on the type of one
object.
Correct: Check the existence of both object types and run tasks for the types found.
Incorrect: Check if there is either an internal or an external IP, and take only one path even if both types exist.
Use playbook loops only where needed. Loops are needed when certain actions have to be performed on specific pairs of data.
Correct: Either use filters and transformers or loop through each separate indicator to verify they're creating the correct relationships.
Incorrect: A user has a playbook that creates relationships for multiple indicator types. All indicator types and malware families are in their ${inputs.Domain} and ${inputs.MFam} playbook inputs. The user wrongly assumes that when creating the relationships, the correct malware families in ${inputs.MFam} correspond to the correct domains in ${inputs.Domain}.
Use the IsIntegrationEnabled script in your playbook to make sure any integrations you need to run are enabled.
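The following is a minimal sketch of the append-in-one-script approach from the race condition item above. The MaliciousIPs context key and the ips argument are hypothetical names used only for illustration:

import demistomock as demisto
from CommonServerPython import *


def main():
    # New values to add; in a real script these come from the task arguments
    new_ips = argToList(demisto.args().get("ips"))
    # Read what is already in context and merge in a single task, instead of
    # letting two parallel Set tasks overwrite each other's writes
    existing = demisto.get(demisto.context(), "MaliciousIPs") or []
    if not isinstance(existing, list):
        existing = [existing]
    demisto.setContext("MaliciousIPs", existing + new_ips)


if __name__ in ("__main__", "__builtin__", "builtins"):
    main()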
In order to minimize your incident response time and make sure the system runs optimally, it's important to follow design and performance
guidelines.
Playbooks
When returning to work on a playbook after a break, verify you’re working on the latest version. Reattach the playbook if it’s detached, and update
it to ensure you’re not editing an older version and introducing regressions. If you don’t want to reattach your playbook, or you’re still working on
your custom version, we recommend reviewing the release notes to see what changes were made to the out-of-the-box playbook and copying
those changes to your version.
Scripts
Update scripts and integration commands in playbook tasks to their most current version. Scripts that have updates or are deprecated are
designated by a yellow triangle.
If a playbook has more than thirty tasks, consider breaking the tasks into multiple sub-playbooks. Sub-playbooks can be reused, managed easily
when upgrading, and they make it easier to follow the main playbook.
Playbooks that are triggered by an incident or job are considered parent playbooks. Sub-playbooks are playbooks used from within a parent playbook as building blocks. The parent playbook is the main playbook that runs on the investigation, and each sub-playbook has a specific goal/responsibility.
Parent playbooks usually have a closeInvestigation task at the end because they are the main playbook for that incident.
Parent playbooks usually contain inputs that are passed down to sub-playbooks. Certain True/False flags may come from the parent
playbook inputs.
Run playbooks in quiet mode to reduce the incident size and execute playbooks faster. For playbooks running in jobs, indicator enrichment should
be done in quiet mode.
When indicator extraction is enabled for a playbook task, the task by default tries to extract all indicator types from the task Results. (The Results
entry is the information printed to the War Room, not the outputs of the task). Extracting all indicator types can slow down the playbook, so it is
important to only extract indicators as needed. For example, for the ParseEmailFilesV2 script which prints email information to the War Room,
extraction should be enabled in order to extract email addresses, URLs, and other indicators. However, if your task runs the Sleep script, there is
no point in extracting indicators.
Set the Indicator Extraction mode to None in the playbook task Advanced tab.
Can I consolidate the API calls into one call? If not, can an integration enhancement solve this by accepting arrays as input instead of
running multiple times for each input?
Am I unnecessarily storing the same data twice? Do I have the data I need already stored?
If your task requires extracted indicators, change the indicator extraction mode to inline. Use this mode carefully because it can affect
performance. In addition, it is important to customize and limit the indicators extracted from incident fields of the incident type you are
ingesting in the incident type settings Indicator Extraction Rules.
When creating new incident fields, consider whether they need to be searched, and clear the searchable checkbox for fields that do not. Examples of fields that should be searchable: Endpoint ID, Is Admin. Examples of fields that should not be searchable: Additional Notes, Alert Summary.
12 | Lists
Abstract
Create and manage lists and add them to your playbook or script.
Create reusable data lists in Markdown, HTML, CSS, or JSON, and add them to your playbooks and scripts. Add data to your lists and leverage
them across various automations for maximum efficiency.
A list is a data container for storing data and is mainly used in playbooks and scripts, but can be accessed anywhere the context button appears
(double-curly brackets). For example, in a playbook task, access the data in a list via the context button under Lists, or by using the path ${lists.
<list_name>}. Different types of data can be stored in a list, for example, text, string, numbers, Markdown, HTML, CSS, and JSON objects.
Use cases
Organize network security: Use lists to keep track of internal networks and IP addresses. Compare them to a set list to ensure only allowed connections get through (see the sketch after this list).
Store data objects: For example, a list of URLs, which you can call as an input for scripts and playbooks.
Prioritize incident response: Create lists to identify critical assets like important users or servers. This helps manage incidents better by focusing on the most important things first.
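As a sketch of the network security use case above, a script could read such a list and check an address against it. The InternalNetworks list name and its comma-separated CIDR format are assumptions for illustration:

import ipaddress

import demistomock as demisto
from CommonServerPython import *


def is_internal(ip: str) -> bool:
    # getList returns the raw list contents as a string
    res = demisto.executeCommand("getList", {"listName": "InternalNetworks"})
    if isError(res):
        return_error(f"Failed to read list: {get_error(res)}")
    # Assume the list stores comma-separated CIDR ranges
    networks = argToList(res[0]["Contents"])
    return any(ipaddress.ip_address(ip) in ipaddress.ip_network(net) for net in networks)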
Create a list that can be accessed later, such as in a playbook or script, or managed in the CLI.
4. Add content as required. For an example of a JSON list and how to use it, see Use cases: JSON lists.
Click Save.
Click Save Version to save your changes in Version history for all Lists. This allows you to revisit and restore previous versions.
NOTE:
If you want to edit a list from a content pack, you need to duplicate or detach the list. Detached lists do not receive updated content in subsequent Cortex XSOAR content releases. To receive updates again, reattach the list.
Use the following list commands in the CLI, scripts, and playbook tasks:
getList: Retrieves the contents of the specified list.
    listName: The name of the list for which to retrieve the contents.
createList: Creates a list with the supplied data.
    listName: The name of the list to which to add items.
addToList: Appends the supplied items to the specified list. If you add multiple items, make sure you use the same list separator that the list currently uses, for example, a comma or a semicolon.
    listName: The name of the list to which to append items.
    listData: The data to add to the specified list. The data will be appended to the existing data in the list.
setList: Adds the supplied data to the specified list and overwrites existing list data.
    listName: The name of the list to which to add items.
removeFromList: Removes a single item from the specified list.
    listName: The name of the list from which to remove an item.
Example
In this example, a manageOOOusers script uses the getList, createList, and setList commands.
import json
from datetime import datetime, timedelta

import demistomock as demisto  # standard Cortex XSOAR script imports
from CommonServerPython import *


def _get_current_user():
    current_username = demisto.executeCommand("getUsers", {"current": True})
    if isError(current_username):
        demisto.debug(f"failed to get current username - {get_error(current_username)}")
        return
    return current_username[0]["Contents"][0]["username"]


def main():
    # Get the current time
    now = datetime.now()
    # Script arguments
    list_name = demisto.getArg("listname")
    username = demisto.getArg("username")
    option = demisto.getArg("option")
    days_off = now + timedelta(days=int(demisto.getArg("daysoff")))
    off_until = days_off.strftime("%Y-%m-%d")
    # Update the list name to start with 'OOO', so the script can't overwrite other lists
    if not list_name.startswith("OOO"):
        list_name = f"OOO {list_name}"
    current_user = _get_current_user()
    if not current_user and not username:
        return_error('Failed to get current user. Please set the username argument in the script.')
    if not username:
        # The current user was found; run the script on it
        username = current_user
    else:
        # Check that the provided username is a valid XSOAR user
        users = demisto.executeCommand("getUsers", {})
        if isError(users):
            return_error(f'Failed to get users: {str(get_error(users))}')
        users = users[0]['Contents']
    # Get the out-of-office list; check whether the list exists, and if not, create it
    ooo_list_res = demisto.executeCommand("getList", {"listName": list_name})
    if isError(ooo_list_res):
        return_error(f'Failed to get users out of office: {str(get_error(ooo_list_res))}')
    ooo_list = ooo_list_res[0]["Contents"]
    # Check the status of the list, and add/remove the user from it
    if not ooo_list:
        list_data = []
    else:
        list_data = json.loads(ooo_list)
Manage JSON lists in Cortex XSOAR that can be accessed by automations, playbooks, and scripts.
List data can be stored in various structures, including JSON format. When accessing a valid JSON file from within a playbook, it is automatically
parsed as a JSON object (list). Depending on how you store the data, you may need to Transform a List into an Array. For example, if you use non-built-in commands in a script or want to loop over list items, you should transform the list into an array. Working with a JSON file list in a playbook typically involves the following actions:
Apply transformers to extracted data. See Filter and transform data for more details.
Create a JSON list and use the Set automation to create a new context key that can extract the data from the list.
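For instance, a minimal way to copy a list into context is to run the Set script from the CLI, using the list path syntax described earlier (Test1 and JSONData match the names used in the steps below):

!Set key=JSONData value=${lists.Test1}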
1. Create a List:
c. In the Content Type field, select JSON and add the following content:
{
    "domain": {
        "name": "mwidomain",
        "prod_mode": "prod",
        "user": "weblogic",
        "admin": {
            "servername": "AdminServer",
            "listenport": "8001"
        },
        "machines": [
            {
                "refname": "Machine1",
                "name": "MWINODE01"
            },
            {
                "refname": "Machine2",
                "name": "MWINODE02"
            }
        ],
        "clusters": [
            {
                "refname": "Cluster1"
            }
        ]
    }
}
The Set script sets a value in context under the key entered.
e. In the key field, define a context key name for the data. For example, JSONData.
f. In the value field, set the list you want to extract by clicking the curly brackets.
h. In the Get field, click the curly brackets, and in the Select source for value section, select the list you created in step 1: Test1.
j. Click Test.
In this example, the test results have found the list data.
3. Check all the data is stored in the context key you defined by testing the playbook using the debugger:
a. Click Run.
The key you defined, JSONData, holds the data in context from the JSON object.
In general, you can extract subsets of context data in a playbook to analyze a specific information set. This also applies to working with lists, for
example extracting a subset of the data from a JSON object. In this example, we want to extract server information from the list created above.
c. In the value field, set the list you want to extract by clicking the curly brackets.
g. Click Test.
2. Check that all the data is stored in the context key you defined by testing the playbook using the debugger.
b. The key you defined (JSONDataSubset) holds the subset of the data in context from the JSON object.
You can filter the data subset you extracted and analyze this information on a more granular level. In this example, you want to filter Box1
information from the list created in Extract the data from a JSON Object above.
In this example, retrieve the list of machines named Box1 from Test1 list by setting the filter lists.Test1.domain.servers.machine
Equals Box1.
5. Click Test.
6. Check whether the data subset was accessed successfully by selecting the data source from an incident. You can see the results
returned machine: Box1.
In general, in a playbook task, you can transform (apply changes) to the data you extracted. This also applies to working with lists, for example, to
transform extracted data from a JSON object. In this example, we extract the first element in the list and transform the data to upper case from the
list created in Extract data from a JSON object above.
1. Re-open the task, click the contents of the value field, and keep the current filters.
1. Add the Get index (General) transformer to extract a specific machine element.
The To upper case (String) transformer does not work on lists, only on individual elements. Therefore, the Get index
(General) transformer should be applied before adding the To upper case (String) transformer.
4. In the Fetch Data field, select an incident to test and click Test.
Create a transformer to split a list into an array when adding or editing a task in a playbook or when mapping an integration instance in Cortex
XSOAR.
To split a list into an array, create a transformer when adding or editing a playbook task or when mapping an instance.
6. Add a transformer.
f. (Optional) In the delimiter field, type the delimiter used to separate the items in the string (default is ",").
For an example of using a transformer in a list, see Apply Transformers to Extracted Data.
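In a script, the equivalent of this split transformer is the argToList helper from CommonServerPython, shown here as a short sketch; the sample values are illustrative only:

from CommonServerPython import *  # provides argToList

# argToList splits a comma-separated string into an array and strips whitespace
ips = argToList("10.0.0.1, 10.0.0.2,10.0.0.3")
# A custom separator mirrors the transformer's delimiter field
items = argToList("a;b;c", separator=";")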
13 | Jobs
Abstract
Schedule playbooks to run automatically by defining a job based on events or specific times. For instance, process indicators automatically upon
ingestion and then add them to your SIEM.
Jobs run playbooks and are either time-triggered (run at specific times) or event-triggered (run when there are changes to a feed).
A job is an automated playbook task or set of playbook tasks that are scheduled to run at predefined intervals or under specific conditions. Jobs
can be used for data enrichment, periodic reporting, threat intelligence gathering, or any repetitive operational tasks that need to be performed
regularly without manual intervention. There are two types of jobs:
Time triggered jobs that run at specific times: For example, you can schedule a time triggered job that runs nightly and removes expired
indicators.
Jobs triggered by a delta or change in a feed: For example, you can define an event triggered job to run a playbook when a specified TIM
feed finishes a fetch operation for new indicators.
Edit an existing job: In the table, select a job and click Edit.
Perform additional job management: In the table, select a job and click one of the following: Run now, Disable, Enable, Pause, Resume, Abort, or Delete.
View job status: The chart panel at the top of the Jobs page shows various status buttons. Click one of the following buttons to filter the list of jobs for that status: Running, Waiting, Error, Disabled, Time Triggered, or Event Triggered.
Search for a specific job: Enter a search query in the filter field. You can also save a filter.
View job details in the table: By default, the displayed table columns are Name, Job Status, Last Run, Next Run, and Details. Click to change the displayed columns. You can also select to show Owner, Playbook, SLA, Labels, Attachments, and Job Schedule (a human readable description of a cron schedule for a job).
Create a time triggered or feed triggered job in Cortex XSOAR to run a playbook.
Time triggered jobs run at predetermined times. You can schedule the job to run at a recurring time or one time at a specific date and time. For an
example, see the Create jobs to process indicators example.
3. If you want the job to repeat at regular intervals, select Recurring and select the desired interval.
You can configure the recurring job using a cron expression. To do so, after selecting the Recurring checkbox, click Switch to Cron view and enter the expression. For help defining the cron expression, click Show cron examples after switching to cron view, or see the sample expressions after this procedure.
NOTE:
To view a human readable description of a cron schedule for an existing job, click and select Job Schedule from the available
columns.
4. If you do not want the job to repeat, select the date and time for the job to run.
5. Add or create any relevant tags to use as a search parameter in the system.
6. In the BASIC INFORMATION section, add relevant time triggered job parameters from the following:
Labels: Select the labels that are available in the incident type.
Phase: Select the phase of the investigation in which this incident is opened.
All fields that have the Add to all incident types checkbox selected appear in incident and indicator fields.
8. In the QUEUE HANDLING section, select one of the following response options to use if the job is triggered while a previous run of the job is
active:
Cancel the previous job run and trigger a new job run
Trigger a new job run and execute concurrently with the previous run
We recommend avoiding triggering a job while a previous run of the job is active by configuring the playbook that the job triggers to close the investigation before a new instance of the job runs.
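As a reference for the cron option in step 3 above, here are a few standard five-field cron expressions (minute, hour, day of month, month, day of week):

0 0 * * *      Run every day at midnight
0 */6 * * *    Run every six hours (00:00, 06:00, 12:00, 18:00)
30 8 * * 1-5   Run at 08:30 on weekdays
0 0 1 * *      Run at midnight on the first day of each month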
Create a job that is triggered when a feed has completed an operation and there is a change in the content.
Jobs triggered by a delta in a feed (event triggered jobs) run when a feed completes an operation and there is a change in the content. For the job
to trigger, there must be a delta between the incoming feed and the previous one. You can define a job to trigger a playbook when the specified
feed or feeds finish a fetch operation that includes a modification to the feed. The modification can be a new indicator, a modified indicator, or a
removed indicator. For example, you may want to update your firewall every time a URL is added, modified, or removed from the Office 365 feed.
You can configure a job that triggers the firewall update playbook to run whenever a modification is made to the feed.
For an example of using a job triggered by a delta in a feed, see the Create jobs to process indicators example.
NOTE:
A job triggered by a delta in a feed runs only if there is a change in the feed, and does not run on a feed’s initial fetch. For the initial fetch, you
can run the playbook manually and then set up an event triggered job for subsequent fetches.
If you want to trigger a job after a feed completes a fetch operation and the feed does not change frequently, you can select the Reset last seen
option in the feed integration instance. The next time the feed fetches indicators, it will process them as new indicators in the system.
3. Add or create any relevant tags to use as a search parameter in the system.
Any feed: The playbook runs when a modification is made to any feed.
Specific feeds: Select the feed instances that will trigger the playbook to run when a modification is made to them.
Select the playbook you want to run when the conditions for the job are met.
Provides an example of a job triggered by a delta in a feed to process incoming indicators and a time triggered job to push indicators to a SIEM.
In this example, when indicators are fetched from a threat intel feed, a job triggers a playbook to enrich the indicators to determine which
indicators should be investigated. A time triggered job then pushes the relevant indicators to your SIEM.
Use the following integration and playbooks to ingest and process the indicators.
Unit 42 Intel Objects Feed integration: This integration fetches a list of threat intel objects, including Campaigns, Threat Actors, Malware, and Attack Patterns, provided by Palo Alto Networks' Unit 42 threat researchers.
TIM - Process Indicators - Manual Review playbook: This playbook tags indicators ingested by feeds that require manual approval. To enable this playbook, the indicator query needs to be configured. The playbook uses the Indicator Auto Processing sub-playbook, which identifies indicators that should not be added to a blocked list, such as IP indicators that belong to business partners or important hashes.
For the TIM - Process Indicators - Manual Review playbook to run, it needs to be triggered by a job. The job concludes by creating a new incident that includes all the indicators that the analyst must review.
TIM - Add All Indicator Types to SIEM playbook: This playbook sends to the SIEM only indicators (IP, bad hash, domains, and URLs) that have been processed and tagged accordingly after an automatic or manual review process. By default, the playbook is configured to work with ArcSight and QRadar, but change this to match the SIEM in your system.
1. Go to Settings & Info → Settings → Integrations → Instance and search for Unit 42 Intel Objects Feed.
Before customizing the playbook, we recommend creating a list of indicators that you want to exclude from the manual review process. In this
example, we will create a list of business partner IP addresses.
4. Select who can view or edit the list in the PERMISSIONS section.
Task 3. Customize the TIM - Process Indicators - Manual Review playbook to process the indicators
1. Go to Playbooks and search for TIM - Process Indicators - Manual Review and either detach or duplicate the playbook.
NOTE:
If you detach a playbook, it does not receive content pack updates until it is reattached, but then your changes are discarded. Duplicate
the playbook if you want to receive content pack updates and keep your changes.
a. Change the value of From Context data → Inputs → General (Inputs group) → OpenIncidentToReviewIndicatorsManually to Yes, so an incident with the indicators for review is created.
c. Under Query, enter a query to process the specific indicators that you want. For
example, sourceBrands:"Unit42IntelObjectsFeed".
a. To exclude business partner IP addresses that you defined in Task 2, locate and edit the TIM - Process Indicators Against Business
Partners IP List task.
b. From the Inputs tab, under BusinessPartnersIPListName, select the source, and under LISTS, add the created list.
Task 4. Define a job to trigger the playbook when indicators are fetched
3. From the TRIGGERS section, select Specific feeds and add the feed configured in Task 1.
Whenever indicators are ingested from Unit 42, the playbook runs and creates an incident if an incident needs to be reviewed. You can track
the status of the job in the table on the Jobs page.
Task 5. Customize the TIM - Add All Indicator Types to SIEM playbook
1. Go to Playbooks and search for TIM - Add All Indicator Types to SIEM and either detach or duplicate the playbook.
NOTE:
If you detach a playbook, it does not receive content pack updates until it is reattached, but then your changes are discarded. Duplicate
the playbook if you want to receive content pack updates and keep your changes.
1. Select From indicators and set the query for the indicators to add. For example, tags:approved_black, approved_white.
The purpose of the playbook is to send to the SIEM only indicators that have been processed and tagged accordingly after an
automatic or manual review process. The playbook comes out-of-the-box with queries that you can update if required.
Ensure the playbook includes a task that closes the investigation once it is completed.
Task 6. Define a time triggered job to push the indicators to the SIEM
3. (Optional) Select Recurring and determine how often you want the job to run. For example, run once a day at midnight.
5. In the Playbook field, select the TIM - Add All Indicator Types To SIEM playbook to run.
Whenever an indicator is ingested that has a relevant tag such as approved_list, the job pushes that indicator to the SIEM.
1. Open the job that you created in Task 4 to process the indicators.
You can tag any indicator with the tags that you want to push. It does not have to be this job.
2. In the Work Plan, open the Create Process Indicators Manually incident task.
5. Review the indicators and update the indicators with tags that you want to push to the SIEM.
6. When finished with the review, in the Work Plan, click the Manually review the incident task, select Yes, and Mark Completed.
7. Select the job you defined in Task 6 and click Run now.
This tag is appended to every indicator that has been processed and pushed to the SIEM.
14 | SLAs
Abstract
SLAs enable you to define specific goals and responsibilities and improve quality and availability in your investigations.
Service Level Agreements (SLAs) empower you to define clear expectations, prioritize incidents effectively, and ensure efficient resolution.
Configure SLAs within incident types and fields, and set automated timers directly in your playbooks or scripts for enforcement. Additionally, you
can manage SLA/timers through the CLI.
SLA fields count down the time remaining. SLA fields can be incorporated in cases, and you can trigger actions in the event the SLA passes.
SLAs are an important aspect of case management in Cortex XSOAR. SLAs enable you to define specific goals and responsibilities and improve
quality and availability. Analysts can prioritize incidents and ensure that those incidents are handled efficiently. Managers can see an overview of
those incidents, improve reaction time, and measure success.
Define SLAs in incident types and fields: Incorporate SLAs into your incidents to set how long an action should take. SLAs are not enforced inherently, but can be configured to be acted upon by the user. You can view how much time is left before the SLA becomes due, as well as configure actions to take if the SLA passes its due date.
You can define an SLA in an incident type, which takes effect when the incident is created. These global settings apply from when the incident opens until it is closed. Some out-of-the-box incident types have the SLA defined by default. For more information, see Configure an SLA in an incident type.
You can also define an SLA in an incident field for more granular control, such as setting the time to assign an incident. For more information, see Configure Timer/SLA fields.
When set up, you can see the SLAs for the incident type and incident fields in the incident table and incident layout.
Set up Timers: Timer incident fields can be started, stopped, or paused in a playbook, script, or manually in the CLI. These fields give you granular control when tracking the response to a given incident. For example, the Time to Assignment incident field tracks the time to assign an incident and can be started, stopped, or paused.
NOTE:
Timers measure how much time has passed since the event. SLAs measure how much time is left until the event.
SLA scripts
You can use SLA scripts to act on breaches, such as sending an email when a breach occurs, or specific changes to an incident field, such as a
change of incident owner. Cortex XSOAR includes out-of-the-box scripts, or you can create your own script. For more information, see Automate changes to incident fields using SLA scripts.
CLI
If you want to set or change the SLA for an incident type or field, you can use the setIncident command in the CLI. For timers, you can use
commands such as startTimer, stopTimer, and pauseTimer. For more information, see Use SLA and Timer field commands manually in the
CLI.
Incident layouts
When you configure the Timer/SLA fields, you can add them to your incident layout to view the status of the SLA, if any of the SLAs are overdue,
and if so, by how much. You can also view the number of cases that are at risk of passing the SLA or are already late. You can set the risk
threshold for each incident field or rely on the default setting, which is 72 hours. You can change the default threshold by adding a server
configuration. See Configure the Global Risk Threshold.
Dashboards
Cortex XSOAR comes out-of-the-box with an SLA dashboard, where you can view SLA information, such as within SLA by type, late SLA by type,
mean time to resolution, etc. You can also generate reports such as late incidents, open incidents, etc.
Further resources
Watch the following video to see how to set up SLA/Timers in your use case.
On the Incidents page, in the incident table, you can view the SLA (due date) by default. You can also search using the dueDate parameter, such
as dueDate:>="now" to search for incidents that are either due now or overdue. If the SLA has not been set, you need to configure the incident type.
Some out-of-the-box incident types have a default SLA date. To update out-of-the-box incident types, you need to either duplicate or detach
them.
3. In the SLA field, add the weeks, days, and hours required.
Estimate how long the incident should take from being ingested into Cortex XSOAR until it is closed. For example, if you expect your
incident type to be closed within 36 hours, select 1 day and 12 hours.
The owner of the incident will receive an email that the SLA expiration date is approaching.
6. (Optional) To test the SLA, go to the integration instance where you ingest incidents.
NOTE:
Any incidents that were previously ingested will not have the SLA set. You need to ingest the incidents again.
a. Open the instance settings and select Fetches incidents (if not already set).
Create a new SLA or timer and add an SLA script to trigger when SLA time has passed.
By default, Cortex XSOAR comes with several out-of-the-box Timer/SLA fields, such as Remediation SLA and Time to Assignment, and you can also create your own Timer/SLA fields. You can use a field as an SLA, as an SLA and timer, or as a timer.
SLAs: Set the date in the incident field, which counts the completion time. Use it to create widgets in a dashboard or report and add it to the incident layout, which is useful to see when an SLA is breached or at risk.
You can also add an SLA script, so when an SLA is breached certain actions can occur, such as sending an email. For more information, see Automate changes to incident fields using SLA scripts.
NOTE:
Incidents sorted using an SLA/Timer field are sorted by the due date of the SLA field.
SLA Timers: Counts the time elapsed since the incident field started. You can add it to a playbook task or script. It does not run automatically. You need to start, stop, or pause it in a playbook, script, or manually in the CLI.
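For example, a script can start a timer with the same commands that are available in the CLI. This is a sketch, assuming timetoassignment is the machine name of the Time to Assignment field:

import demistomock as demisto
from CommonServerPython import *

# Start the Time to Assignment timer; pauseTimer and stopTimer work the same way
res = demisto.executeCommand("startTimer", {"timerField": "timetoassignment"})
if isError(res):
    return_error(get_error(res))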
In the following example, configure the SLA information in the Time to Assignment field.
1. Navigate to Settings & Info → Settings → Object Setup → Incidents → Incident Fields.
NOTE:
If creating a new SLA field, in the field type field, select Timer/SLA.
By default, the SLA field shows hours and minutes. You can change this to days and hours, by clicking Hours.
For example, if you set the SLA for one day and the Time to Assignment has started but not stopped within one day, the analyst will be in
breach of the SLA.
Useful for dashboards and reports. When the timer falls below this threshold, it is considered at risk. By default, the threshold is 3 days. You
can change this by adding a server configuration. See Configure the Global Risk Threshold.
5. Under Run on SLA Breach, select the script to run when the SLA time has passed. For example, the SendEmailOnSLABreach script sends an email when the SLA is breached. For more information, see Automate changes to incident fields using SLA scripts.
NOTE:
Only scripts to which you have added the SLA tag appear in the list of scripts you can select.
When you hover over the machine name (below the Field Name), note the name, which is used in the command line or script.
Ensure that the incident layout is used in the incident type you want to view the SLA information.
In this example, you want to create a new field that notifies a user when it reaches a particular stage in the investigation with an SLA of three days
and the risk set to one day.
To run a timer, it must be run in a playbook task, a script, or manually in the CLI.
You can set a Timer/SLA field to start running by doing the following:
In a Timer/SLA field such as the Time To Assignment field, you can control all incidents that use the field, regardless of the playbook configured for them, by configuring a script to run when the Owner field changes. This method automatically stops the timer when an analyst is assigned. See Automate changes to incident fields using SLA scripts. The advantages of using this option are scalability and consistency.
Start, pause, or stop the field through a playbook. The Timer/SLA field can be triggered to start, pause, or stop when a certain task occurs. For example, a
timer can be triggered to stop for the Time to Assign field when the incident is assigned an owner, and to immediately start the timer for
the Time to Remediation field.
When defining a Timer in a task or section header, in the Timers tab, select the action that you want the timer to perform for the task.
NOTE:
If creating tasks for SLAs, they do not have to execute anything. You can also use section headers.
NOTE:
Timers are automatically stopped when an incident is closed. After a timer is stopped, you can only reset a timer using the
resetTimer command in the CLI.
Some playbooks, such as Phishing - Generic v3, come out-of-the-box with SLA timer tasks included. If you need the same timers across use
cases, create a sub-playbook based on your use case or conditions such as incident severity.
Although you can create your own SLA sub-playbooks, the CaseManagement - Generic content pack includes several SLA playbooks, which you
can configure. For more information, see the CaseManagement - Generic content pack.
The Case Management - Generic - Start SLA Timers playbook starts the Time to Assignment or Remediation SLA timer field based on whether an owner is assigned to the incident. You can add this as a sub-playbook to your use case.
NOTE:
When a task or section has a Timer/SLA action configured, it displays the hourglass icon.
1. The first task is a conditional task which determines whether an incident.owner has been assigned.
2. On the left-hand side task, if no owner is assigned, the Time to Assignment timer starts.
The Print script returns details to the War Room confirming that the script has started to run.
3. On the right-hand side task, if an owner is assigned, the Remediation SLA timer starts.
This playbook sets the SLAs for incidents, the Time to Assignment Timer, and the Remediation SLA Timer based on the incident severity using
playbook inputs. For example, set the number of minutes for incident and remediation SLAs for critical incidents. For more information, see Case
Management - Generic - Set SLAs based on Severity. Add this as a sub-playbook to your use case.
Alternatively, create a playbook or script to modify SLA fields based on certain conditions. For example, in the Set Severity to Medium task, you
can add an SLA such as Time to Assignment 15 minutes where there is high severity (3).
Create scripts to perform specific actions in Cortex XSOAR when the SLA is breached. Properties in the SLA timer field value.
Scripts in Cortex XSOAR enable you to automate processes. In the context of SLA, you can create scripts that will perform specific actions when
the SLA is breached. Each SLA script must include the SLA tag.
Cortex XSOAR comes with an out-of-the-box script, called SendEmailOnSLABreach, that sends an email to specific users when the script is
triggered. You can add this to any incident field as required. For example, add the script to the Remediation SLA incident, so that when an
SLA/Timer is breached, an email is sent automatically. By default, the script sends an email to the incident assignee, but you can manually edit the
script to add additional recipients.
In the following example, you want to stop the Time To Assignment timer when an owner is assigned and start the Remediation SLA timer.
If you have not done so already, download the CaseManagement-Generic content pack. This content pack includes the TimersOnOwnerChange
script.
In the War Room, the field's value changes from idle to running.
c. In the War Room, you should see that the Time to Assignment has ended and the Remediation SLA has started:
Create scripts that perform specific actions in Cortex XSOAR when the SLA is breached. Properties in the SLA timer field value.
When you create your scripts, the following arguments are automatically added, in addition to the basic elements provided with every script (for
example, current investigation and current incident):
fieldValue: The current triggered SLA field's value. For example, the startDate.
The following table lists the different properties in the SLA timer field value:
dueDate (Date): The date by which the SLA for this timer is due.
breachTriggered (Boolean): Whether the timer was already in breach of the SLA.
sla (INT, in minutes): The period defined as the SLA for this timer. This is the value that you defined in the Timer field.
lastPauseDate (Date): The last date at which the SLA timer was paused.
startDate (Date): The date at which the SLA timer was started.
accumulatedPause (INT, in seconds): The total number of seconds that the timer was in a paused state.
totalDuration (INT, in seconds): The total number of seconds that the timer was running. This property is populated after the timer is stopped.
slaStatus (INT): Represents the Cortex XSOAR SLA status. Values are:
runStatus (String): Represents the current status of the timer. Values are: idle, running, paused, ended.
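A minimal sketch of a breach script using these properties follows. It assumes a mail sender integration that provides a send-mail command, and the recipient address is a placeholder; remember that the script needs the SLA tag:

import demistomock as demisto
from CommonServerPython import *


def main():
    # fieldValue is passed automatically to scripts triggered by an SLA field
    field_value = demisto.args().get("fieldValue") or {}
    if field_value.get("breachTriggered"):
        due = field_value.get("dueDate")
        demisto.executeCommand("send-mail", {
            "to": "soc-lead@example.com",  # placeholder recipient
            "subject": "SLA breached",
            "body": f"An SLA due on {due} has been breached.",
        })


if __name__ in ("__main__", "__builtin__", "builtins"):
    main()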
See the following video for a practical example of creating an SLA script and how to use it in a playbook.
14.7 | Use SLA and Timer field commands manually in the CLI
Abstract
Use timers and SLA commands for a specific incident, such as decreasing the required response time for a high-priority incident.
You can manage the timers and SLA for a specific incident manually in the CLI, which enables you to manage SLAs on a global level and a more
granular level within specific incidents when the need arises. For example, if the severity of the incident dictates that you decrease the response
time for the given incident.
Use the setIncident command to set the incident's SLA due date or to set a specific SLA field in an incident. When adding the sla parameter to the command, it sets the time for the incident's due date. If you also add the slaField parameter, you set the SLA for that incident field.
For example, to set the due date of the current incident:
!setIncident sla=2024-02-01T11:12
NOTE:
When defining the values for the slaField use the machine name for the field, which is lowercase and without spaces. You can check the
machine name by editing the incident field. For example, the Remediation SLA field is remediationsla.
startTimer: Starts the timer in a Timer/SLA field. For example, !startTimer timerField=timetoassignment. This command can also be used to restart a paused timer.
NOTE:
Timer/SLA fields are not started automatically when an incident is created unless run in a playbook.
pauseTimer: Pauses the timer in a Timer/SLA field. For example, !pauseTimer timerField=timetoassignment. Use this command when a Timer/SLA field has started.
stopTimer: Stops the timer in a Timer/SLA field. For example, !stopTimer timerField=timetoassignment. After a Timer/SLA field is stopped, you can only reset a timer using the resetTimer command.
resetTimer: Resets a timer in a Timer/SLA field, which resets the elapsed time and the status of the timer for the incident. This command should be used to enable a timer that was stopped. For example, !resetTimer timerField=timetoassignment.
NOTE:
When running the commands, you can specify the incidentID to change the timer for a different incident.
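Putting the commands together, a hedged CLI sequence for a specific incident might look like the following (the incident ID 1234 is hypothetical):

!startTimer timerField=timetoassignment incidentID=1234
!pauseTimer timerField=timetoassignment incidentID=1234
!stopTimer timerField=timetoassignment incidentID=1234
!resetTimer timerField=timetoassignment incidentID=1234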
Add server configuration in Cortex XSOAR to change the SLA Risk threshold from the default 72 hours.
By default, the risk threshold is 72 hours. You can change the threshold by adding a parameter to the system settings.
NOTE:
When changing the server configuration, the new value does not affect existing fields retroactively. It affects new fields that you create.
1. Navigate to Settings & Info → Settings → System → Server Settings → Server Configuration → + Add Server Configuration.
b. In the value field, enter the number, in hours, to which to set the risk threshold.
3. Click Save.
Search incidents based on their SLA status, an SLA field, or a timer field.
You can search for incidents based on their SLA in several ways:
NOTE:
The SLA status is not defined unless the timer is in a stopped mode, meaning either paused or ended.
For example, you can search for all of the timer fields that are currently running, or you can search for all incidents with a specific SLA status.
2. To search for an incident whose Timer/SLA is still active, enter the following:
This parameter is required for queries whose run status is neither ended nor paused, to improve query performance.
3. To search for an incident whose timer is no longer active, enter the SLA Status.
In the following example, search for all incidents using the Remediation SLA field that fulfill the following criteria:
The Remediation SLA run status has not ended or paused AND the due date is later than now OR the SLA status is within time.
The Remediation SLA run status has not ended or paused AND the due date is earlier than now OR the SLA status is late.
The Remediation SLA run status has not ended or paused AND the due date is between now and five hours (the five hours represent our
risk threshold) OR the SLA status is Risk.
Create, edit, and share dashboards and reports in Cortex XSOAR. Add widgets to a dashboard and configure a default dashboard
Create or modify dashboards and reports, schedule automated reports for recurring needs, and design custom widgets to suit your visualization
goals. Leverage fully customizable widgets from different sources and display them in clear formats like graphs, pie charts, and text.
15.1 | Dashboards
Abstract
Create, edit, and share dashboards in Cortex XSOAR. Add widgets to a dashboard and configure a default dashboard.
Dashboards offer graphical overviews of your tenant's activities, enabling you to effectively monitor incidents and overall activity in your
environment. Each dashboard comprises widgets that summarize information about your environment in graphical or tabular format.
Default dashboards
NOTE:
If you install a content pack that contains dashboards, these can be added from the More Dashboards dropdown. To change the order of the
dashboards, hover over the six block icon next to a dashboard name. When the cursor turns into a hand, drag and drop the dashboard into the
required location.
API Execution Metrics: Information about API calls. You can use the API Execution Metrics for Enrichment Command widget for troubleshooting and to make decisions about indicator enrichment.
Cost Optimization Playbooks: Information about playbooks, including task executions, average runtime, etc.
Threat Intelligence Feeds: Information about TIM feeds that are being ingested into Cortex XSOAR.
Cost Optimization Instances: Information about commands that have been executed in Cortex XSOAR.
MITRE ATT&CK: Information about MITRE ATT&CK techniques. Part of the MITRE ATT&CK content pack. You can add this to your displayed dashboards by clicking More dashboards.
Threat Intel Management: Information about active indicators by reputation, type, expired indicators, etc. You can add this to your displayed dashboards by clicking More dashboards.
VirusTotal API Execution Metrics: Information about VirusTotal API commands. Part of the VirusTotal content pack. You can add this to your displayed dashboards by clicking More dashboards.
Abstract
Cortex XSOAR dashboards provide visual data from customizable widgets. Create, edit, import, share and delete Cortex XSOAR dashboards.
In the Dashboards tab, you can set the date range from which to return data and the refresh rate. In each dashboard you can also do the
following.
Filter dashboard data: You can filter dashboard data by either typing the query in the query bar, or in the relevant widget, by clicking Filter In. When clicking Filter In, the query is added to the query bar. To filter out, delete the query. For example, if you only want to see active incidents that are high severity, in the Active Incidents by Severity widget, hover over High and click Filter In.
NOTE:
If you want to see more information about the data, click the data to go to the relevant page. For example, in the Active Incidents by Severity widget, to see only high severity incidents, click High. This takes you to the Incidents page, where you can see all the active high severity incidents.
After creating the filter, you can send the URL of the filtered dashboard to other users.
Change the color of legend items in graphs: You can change the color of items (such as indicator types and incident types) in some widgets, depending on the widget type and the chart/graph type. When editing a widget, click the item within the legend in the preview window on the right. The Edit color option appears and you can select the color for the item.
If you edit the color after a widget has been added to a dashboard or report, the change only applies to the widget within that dashboard or report. If you edit the widget directly in the Widgets Library before adding it to a dashboard or report, the change is applied every time you add the widget to a dashboard or report. Changes to an item within a widget only apply within that widget. For example, changing the color for the Phishing incident type within the Active Incidents widget only applies to Active Incidents, and not other widgets that contain incident types.
Copy values from graphs: In the Quick chart definitions window, click an item in the legend and select Copy value. This enables copying the value from the widget for commands in the War Room.
Create or edit a dashboard: Design a new interface for specific security investigation needs, or edit an existing dashboard. To edit out-of-the-box dashboards, you first need to duplicate them.
Import and export a dashboard: The dashboard is exported as a JSON file. You can make any changes you require and then import the file, for example between test and production environments.
(Admin only) Define dashboard access: In a production environment, an administrator defines the default dashboard for each user and selects the default dashboards that the user sees when logging into the tenant, depending on a user's role. If a user has not modified their dashboard, these dashboards are added automatically; otherwise users can add these dashboards to their existing dashboards. These default dashboards can be removed but not deleted, and can be added again if required.
NOTE:
For more information, see Manage roles in the Cortex XSOAR tenant.
Share a dashboard: Sharing dashboards enables collaboration and alignment among security teams by providing real-time visibility into key metrics and insights, facilitating informed decision-making and coordinated response efforts. Out-of-the-box dashboards and dashboards from content packs cannot be shared unless you duplicate them.
Create a report: You can generate a report from the dashboard as is, or configure report settings, for example add new widgets to the report, change the report format, and schedule running a report. To create a report from a dashboard, click and select Create report. Click Run Now to generate the report.
Abstract
Create and customize a dashboard in Cortex XSOAR, including adding widgets to a dashboard. Share a dashboard.
Creating a new dashboard enables designing a personalized interface for specific security investigation needs. This facilitates quick access to
critical information and enhances operational effectiveness.
Editing an existing dashboard enables security teams to focus on the most relevant information. By presenting only the most relevant data and
metrics, investigation and response is more efficient and streamlined.
Once you create or edit a dashboard, you can share it with relevant roles to facilitate collaboration and enhance visibility into relevant data and
insights across teams.
Create a dashboard
1. To create a new dashboard, select Dashboards & Reports → Dashboards → More Dashboards → New Dashboard.
From the Date Range dropdown list, set the date range for the dashboard.
By default, a widget inherits the date range that you specify when creating the widget. If the date range for the report or dashboard does not
include the widget date range, the data is blank. To change the widget’s date range, click and select Use Widget’s date range or Use
Dashboard’s date range. By default, the dashboard’s date range is used and the option in the dropdown shows as Use Widget’s date range.
If you change this to use the widget’s date range, the dropdown then shows the option to Use Dashboard’s date range.
NOTE:
Each widget can have its own date range, which can be different from the dashboard's date range.
1. Click to create a custom widget or in the Widgets Library, find a relevant existing widget and click Add.
The edits to the widget in the dashboard apply only to that dashboard. If you want to make changes that are available for other users, dashboards, or reports, edit the widget directly in the Widgets Library by clicking the pencil edit icon.
3. To add a new widget from the Widgets Library, follow the procedure in Create a widget using the widget builder.
4. Click Save.
5. Save the dashboard. If you select Save Version, you can view a history of the changes made to your dashboard and you can revert to
previous versions.
Edit a dashboard
From the Date Range dropdown list, set the date range for the dashboard.
By default, a widget inherits the date range that you specify when creating the widget. If the date range for the report or dashboard does not
include the widget date range, the data is blank. To change the widget’s date range, click and select Use widget’s date range or Use
dashboard’s date range. By default, the dashboard’s date range is used and the option in the dropdown shows as Use widget’s date range.
If you change this to use the widget’s date range, the dropdown then shows the option to Use dashboard’s date range.
NOTE:
Each widget can have its own date range, which can be different from the dashboard's date range.
1. Click to create a custom widget or in the Widgets Library, find a relevant existing widget and click Add.
The edits to the widget in the dashboard apply only to that dashboard. If you want to make changes that are available for other users, dashboards, or reports, edit the widget directly in the Widgets Library by clicking the pencil edit icon.
3. To add a new widget from the Widgets Library, follow the procedure in Create a widget using the widget builder .
4. Click Save.
7. Save the dashboard. If you select Save Version, you can view a history of the changes made to your dashboard and you can revert to
previous versions.
Share a dashboard
To share an out-of-the-box dashboard, you need to duplicate it and share the copy.
3. In the Share dashboard dialog box, select the roles with whom you would like to share this dashboard and their permission levels.
Dashboards can be shared for all roles or for specific roles, with the following permission levels.
Once shared, regardless of who creates the dashboard, any user who has read and write permissions can change the sharing options
including to stop sharing. If an analyst who created and shared the dashboard deletes the shared dashboard, it is removed from all
users.
Read & Edit: Edit, copy, share, export, import, and remove the dashboard. For example, analysts may want to enable and encourage team-based dashboards, so that dashboards can be edited and maintained by more than a single user.
4. Click Save.
The dashboard is now shared among other analysts in the specified role or all roles.
5. (Users) To add a shared dashboard, from the home page, select More Dashboards and select the shared dashboard from the drop-down.
Instead of a user adding the dashboard, you can send the URL to the user, after sharing the dashboard.
NOTE:
If you are using a remote repository, all dashboards are automatically shared in the development environment. As a result, the Share option cannot be selected from the Settings menu.
c. Click Save.
15.2 | Reports
Abstract
Create, edit, and customize reports in Cortex XSOAR. Schedule reports with Cron expressions.
Reports contain statistical data in the form of widgets, which enable you to analyze data from inside or outside Cortex XSOAR in different formats
such as graphs, pie charts, or text.
After generating a report, it also appears in the Reports tab for future reference.
Abstract
Create a new report or customize an existing report in Cortex XSOAR, including adding widgets and changing the timezone and time format in a
report. Schedule and generate a report.
You can create and edit reports in the Reports tab, including adding widgets, scheduling times, setting incident time range, adding recipients, and
changing the format and size. Reports support PDF and CSV.
Report actions
Create or edit a report: When creating a report, what you see is what you get. How you configure the report is how it generates. You can add widgets to a report, change the format and paper size, and insert page breaks by adding the Page Break widget. If you have a table widget that contains many rows, you can select the number of rows on each page or print the whole table (in the table widget, right click and select Force Print full Chart).
You can add your own logo by going to Settings & Info → Settings → System → Server Settings → Logo Configuration and uploading your logo in the Full-size logo field. Reports are generated in PDF or CSV formats.
Create a report from a dashboard: You can create a report from the dashboard as is, or add new widgets as required. You have the same functionality as custom reports, such as format, when to run, and orientation. To create a report from the dashboard, click and select Create report.
Schedule a report: You can schedule a report to run at specific times, or run the report immediately. You can also send the report to specific recipients, and restrict the report according to roles.
Generate an out-of-the-box report: Cortex XSOAR comes with out-of-the-box reports, such as critical and high incidents, daily incidents, and last 7 days incidents. You can change the time range for the incidents, the scheduled time, and who can receive the report. If you want to make more comprehensive changes to out-of-the-box reports, copy or download (and then upload) the report.
Schedule a report from an incident: Captures investigation-specific data and shares it with team members. You can customize how the information is displayed for existing incidents.
Create a report
1. In the Dashboards & Reports page Reports tab, select New Report.
1. Click to add a custom widget or select an existing widget from the Widgets Library.
The edits to the widget in the report apply only for the report. If you want to make changes that are available for other users,
dashboards, or reports, edit the widget directly in the Widgets Library by clicking the pencil edit icon.
3. To add a new widget from the Widgets Library, follow the procedure in Create a widget using the widget builder.
4. Click Save.
Date Range The Date Range for the report. Default is Last 7 days.
By default, a widget in a report inherits the date range that you specify when creating the report. If the date range for
the report does not include the widget date range, the data is blank. To change the widget’s date range, click and
select Use widget’s date range or Use dashboard’s date range. By default, the dashboard’s date range is used and
the option in the dropdown shows as Use widget’s date range. If you change this to use the widget’s date range, the
dropdown then shows the option to Use dashboard’s date range.
NOTE:
Each widget can have its own date range, which can be different from the report’s date range.
Schedule You can schedule a report to run at specific times with start and end dates. You can also add restrictions on the
report content and the number of recipients.
If you want to send the report to users by email, you need to add an email integration instance, such as EWS, Gmail,
or Mail Sender. Default is Disabled.
To schedule a report:
1. Under the Schedule field, click Disabled or the date it was last run.
3. If you want to restrict the content of the report according to roles, in the Run as Roles field, from the dropdown
list, select one of the roles.
Human view: Schedules a report according to the set number of hours. You can add days of the week
with start and end times.
When scheduling a report in the Human view, the Next Run date may be incorrect. You may need to change the number of hours field when scheduling the report.
Cron view: Schedules a report according to a Cron time string format, which consists of five fields that
Cron converts into a time interval. Use this view to schedule a report on certain hours, days, months,
years, and so on. For examples of Cron strings, see Report scheduling examples.
NOTE:
When using the Cron view, the Start at and Ends fields may conflict with Cron string expressions. For
example, when using frequencies (the ‘/’ character), if you type the expression 0 */6 * * * (runs every 6
hours) with a start time of 15:00, the next run time is not 21:00. The run time depends on Cron run
times, which are 00:00, 06:00, 12:00, and 18:00 each day. In this example, the report runs at 15:00,
18:00, and then 00:00, and so on. For examples using Cron generally, see Cron examples.
Never (default)
7. Select Run Now to run the report immediately. If you click Save, the report appears in the main Reports tab
with the scheduled run date in the Next Run field.
Format The report file format. Options are PDF (default) or CSV.
NOTE:
Only table and text-based widgets are exported to CSV. Other widgets are ignored.
Orientation Sets the report display orientation. Options are Portrait (default) or Landscape.
TIP:
We recommend using landscape orientation to ensure that all information displays in the report.
Paper size The report paper size. Options are A4 (default), A3, or Letter.
5. Save the report. If you select Save Version, you can view a history of the changes made to your report and you can revert to previous
versions.
Edit a report
1. On the Dashboards & Reports page Reports tab, select the Duplicate Report icon for the report you want to edit.
2. Edit the report as required. The widget, Date Range, Schedule, Format, Orientation, and paper size fields are the same as described in
Create a report.
3. Save the report. If you select Save Version, you can view a history of the changes made to your report and you can revert to previous
versions.
Generate a report
1. On the Dashboards & Reports page Reports tab, locate the report and review its Date Range, Recipients, and Next Run fields.
2. Click Run.
TIP:
Ensure that you enable pop-ups in your browser. If reports do not download after you click Run, add the Cortex XSOAR URL to your browser's
pop-up blocker exceptions. For more information, see Troubleshoot script timeout for reports.
Abstract
Examples of scheduling a Cortex XSOAR report using Cron expressions in the Cron scheduler format.
The following examples describe how to schedule a report using the Cron scheduler format. The Cron time string format consists of five fields that
Cron converts into a time interval. For example, a Cron string of 0 10 15 * * runs a report on the 15th of each month at 10:00 am.
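For reference, the five fields are read in this order, shown here with the sample 0 10 15 * * string above:
Field Value
minute 0
hour 10
day of month 15
month *
day of week *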
In this example, you want to schedule a report on January 1, 2020 at 0800 (8:00 am) and thereafter on the 1st of each month.
Number Description
00 00 minutes
8 8am
1/1 Starting in January, and every month thereafter. If you want the report to start on a different month, change
1/1 to the relevant month, such as 2/1 for February, 3/1 for March and so on.
The reports run at 8am on January 1, 2020, February 1, 2020, March 1, 2020 and so on.
NOTE:
Cron calculates the next relevant date. If you want the report to run next month, provided that date has passed in the current month, you do not
need to specify the month. For example, assume the date is December 12. To run the report on January 11 at 8:00 am, type 00 8 11 * *. The
report starts running on January 11 (and on 11th of each month thereafter). If the current date is December 10, the next run date would be
December 11.
In this example, you want to schedule a report at 0800 (8:00 am) every year on January 1, starting January 1, 2020 (the current date is Thursday 12
December 2019).
Number Description
00 00 minutes
8 8am
The report runs at 0800 on January 1, 2020, January 1, 2021, January 1, 2022, etc.
In this example, you need to schedule a report at midnight every week on a Monday (the current date is Thursday, 12 December 2019). The
resulting Cron string is 00 0 * * 1.
Number Description
00 00 minutes
0 Midnight
* Any day
* Any month
1 Monday
The report runs on the first available Monday, December 16, at midnight, and then on December 23, December 30, January 6, and so on.
In this example, you need to schedule a report at 1730 (5:30 pm) every weekday (Monday - Friday) starting in February for 6 months (assume the
current date is Thursday December 12, 2019).
Number Description
30 30 minutes
17 5pm
* Any day
In this example, you need to schedule a report every day at 0600 (6:00 am) (the current date is Wednesday 12 December).
Number Description
00 00 minutes
6 6am
* Any day
* Any month
* Any day of the week. If you want to run from Monday to Friday, type 1-5. For Sunday to Thursday, type 0-4.
The report runs at 0600 (6:00 am) on December 13, 14, 15, 16 and so on.
Abstract
By default, Cortex XSOAR uses the UTC timezone in all reports. You can set a different timezone in the report.
NOTE:
For most out-of-the-box reports, timezone and time formats in a widget cannot be changed unless you copy the report. Some out-of-the-box
reports, such as Open Incidents, include a title widget which shows the date the report is generated according to the system default timezone.
You may want to configure the timezone for a specific report to align event timestamps with the local time zone of the team reviewing the report.
This provides context for event timelines and facilitates better analysis and decision making.
For example, if your SOC team is based in New York but you need to generate a report for another team in London, configure the time zone to
GMT to match the incident timestamps to the local time in London to make it easier for the London team to understand and respond to the events
accurately.
1. For a new report, select Dashboards & Reports → Reports → New Report.
To edit an existing report, in the Reports page, locate the report you want to edit and click the Edit button.
2. From the Time Zone dropdown list, select the timezone that the report displays.
3. Save the report. Save Version enables you to view a history of the changes made to your report and revert to previous versions.
We recommend keeping the default (UTC) timezone for all reports and configuring the timezone for specific reports only. However, you can
change the timezone displayed in all reports by adding a server configuration.
1. Go to Settings & Info → Settings → System → Server Settings → Server Configuration → Add Server Configuration.
Key Value
Asia/Jerusalem
UTC (Default)
America/New_York
CET
EST
GMT
Abstract
Change the default timeout value for Cortex XSOAR reports, using a server configuration.
If you generate a report that runs a script, and the report or the section of the report containing the script is blank, increase the script timeout
value.
1. Select Settings & Info → Settings → System → Server Settings → Server Configuration → + Add Server Configuration.
Key Value
script.timeout 10
15.3 | Widgets
Abstract
Create and edit widgets in Cortex XSOAR for reports and for dashboards.
Widgets are visual components that enable you to analyze data internally or externally from Cortex XSOAR, in different formats such as graphs,
pie charts, and text from information.
Abstract
Overview of widgets, including methods for creating and adding widgets. Use widgets to analyze and display data in a dashboard or report in
Cortex XSOAR.
Cortex XSOAR provides out-of-the-box system widgets, such as Late Incidents and Saved by Dbot (ROI Widget). You can edit these widgets
when creating or editing a dashboard or report.
NOTE:
You can create custom widgets as follows and then add them to a dashboard or report as required.
Widgets Library Create a widget using the widget builder in the Widgets Library, which is available for all users.
For more information, see Create a widget using the widget builder.
From an incident Create the widget from the Incidents page and then add it to a dashboard or a report.
From an indicator Create the widget from the Threat Intel (Indicators) page and then add it to a dashboard or a report.
In the War Room In the War Room, view an incident in widget format, for example, severity in a bar chart.
Abstract
Create a widget in the Widgets Library and then add the widget to a dashboard or report.
In the Widgets Library, you create a widget using the widget builder, which enables you to define and configure data, and preview how that widget
appears. The widget builder allows you to create complex widgets, eliminating the need to write scripts or upload JSON files (although you have
the option to do this). These complex widgets have the same capabilities as if you were creating a script-based widget.
In the Widgets Library of the report or dashboard you are creating or editing, click and select the widget type as follows.
Incidents Use incident data to create widgets related to incidents, for example timestamps, duration, incident types, and any incident
field.
Indicators Use indicator data to create widgets related to indicators, for example timestamps, indicator types, and any indicator field.
SOAR Metrics Use SOAR metrics data to create widgets related to scripts, playbooks, and integrations, for example executions,
durations, and errors.
Tasks Use tasks data to create widgets related to investigation tasks, for example assignee, playbook name, and duration
(manual or automated).
NOTE:
When creating a widget based on the results of an investigation task, only the following task types are supported for
widget aggregation:
Manual tasks
Oversized tasks
Scripts Use a script to create a widget. Although you can create complex widgets using the widget builder, you can also create
dynamic widgets using scripts, such as calculating the percentage of incidents that DBot closed. The script can also pull
information from the Cortex XSOAR API.
NOTE:
Before creating a script based widget, you need to create a script in the Scripts page and then select the script in the
widget builder. The script must have the widget tag assigned, otherwise it does not appear when selecting the script in
the widget builder.
In the widget builder, you cannot manipulate data (no data appears in the Operations tab). However, you can define script
arguments and change the color, layout, and legends.
Threat Intel Reports Use threat intel data to create widgets related to threat intel reports that have been created, for example reports by type
and status.
Upload Upload a JSON file to create a static widget which displays basic information, such as grouping incidents severity by type
and active incidents by type.
Parameter Description
Widget display format Select one of the widget format icons. You can see a preview of how the
widget appears.
Data source Cortex XSOAR retrieves data relevant for the selected data source. For example,
for Incidents, in the Group by field all data relating to incidents is retrieved,
such as type, owner, and created by.
War Room Entries: Use War Room entry data to create widgets, for example
the number of entries according to owner.
Query Queries data in the Lucene query syntax relating to the data source.
For example, when the data source is incidents and the query is
-status:closed and owner:"", it queries all incidents that are not closed
and do not have an owner.
Or, to see all incidents that are not closed, not archived, and are not jobs,
use the query: -status:closed and -status:archived and -category:job.
This step enables data manipulation, similar to scripting. You can configure the data according to groups and fields (including custom calculations
on fields).
1. (Not relevant for tables or text) Click the Operations step, and in the Values section select one of the following calculations to perform on the
data (not relevant for Script and War Room Entries data sources).
Calculation Description
Count Counts the total value of the field. For example, display the total number of incidents in your system. You can then group
by type and severity.
Average Calculates the average value of the field. For example, display the average number of incidents in your system over the
selected time frame. You can then group by type and severity.
Sum Counts the value of the field according to a specific value. For example, when you define a metrics widget type, select
the execution count, total duration, errors count, or create your own custom calculations.
Min Calculates the minimum numeric value of the data. For example, you may want to see the minimum number of fetched
events.
Max Calculates the maximum numeric value of the data. For example, you may want to see the maximum number of fetched
events.
2. (Not relevant for Count) Select one of the fields from the dropdown or create your own custom calculations by selecting Custom calculations
on fields.
The custom calculation modal suggests incident fields based on the widget data type, which are automatically validated. You can add your
own fields (provided these fields exist), according to the widget data type, by using the CLI name. These fields are not validated.
You can add mathematical operators (such as +, -, /, *) between fields. Variables using {} are also supported. For example:
To see the average time that incidents are late, type {now}-remediationsla.dueDate.
To calculate the average time between detection and remediation for phishing incidents (the phishing generic playbook sets the
detection and remediation SLA timers), type remediationsla.startDate-detectionsla.startDate.
4. In the Axis and grouping section Group by field, from the dropdown, select the group you want to add.
By default, the results are limited to the top 10 most popular values. To show the least popular instead, to change the number of values, or
to include the remaining results that are not covered in one group (the Show ‘Others’ checkbox), click the edit button and update as
required.
If you want to add a custom field, ensure the Make data available for search incident type field is checked when editing or creating a new
field.
You can limit the number of results to return, view the most or least popular, and for some fields select the time format. For example, you
may want to see the top 10 most popular active incidents by month.
5. (Optional) Define custom groups (for example, define specific owners in the owner group).
2. In the Create Custom groups window, click Equals (String) to change the operator.
6. If you want to add a group for all other values that have not been defined, click the Create and display a group for all remaining values
checkbox.
You can manipulate data according to one or two groups (two groups are useful for vertical bars and line charts). Within each group, you can
group by a bucket. For example, for two teams - Team A and Team B, each one is made up with different team members. You only want to
see Team A and Team B and not the individual team members.
7. In the Second group by field, add the group as required. For example, to see data filtered by owner and severity, select Group By Owner
and Second Group by Severity.
1. Click the Visuals step and define how the widget appears.
Parameter Description
Axis name The name of the axis for both horizontal and vertical.
Format Select the format of the table for both horizontal and vertical axis. For example, hours, minutes,
days, weeks, etc.
Reference Line Whether you want a line showing the average, minimum, maximum, or custom line.
Show Legend Whether you want to see the legend in your widget.
Show also percentage Displays the percentage when selecting a pie chart.
Show values on the graph Add the values on the chart widget.
Display trend Compares dates for a particular period in a number widget. For example, this week vs. last
week, this year vs. last year, and so on. To change the comparison period, in the Time frame
field from the dropdown, select the relevant date.
Widget color threshold In a number or duration widget, select the Widget color threshold checkbox to highlight the data
according to defined thresholds. For example, display values under 50 in green, under 100 in
yellow, and under 150 in red. To add more thresholds, click Add new threshold. You can
change the colors as required.
2. To change the color, in the preview section, hover next to the legend, click the ellipsis and then click Edit color.
3. Click Save.
When you add the widget, it automatically uses the date range of the dashboard or report. You can change it by clicking the settings icon
and selecting Use widget’s date range. To revert, click the settings icon again and select Use dashboard’s date range.
Abstract
In this example, we want to create a bar chart widget that shows the average time that incidents are late, grouped by owner and type. (Related
variations in the widget builder: a line chart of the time between detection and remediation, using
remediationsla.startDate-detectionsla.startDate, or a bar chart with the query -category:job to exclude jobs.)
1. Select Incidents.
2. In the custom calculation field, type {now}-remediationsla.dueDate.
We want to see the average time that incidents are late (from today’s date). We add the variable {now}, so that we do not have to
change the date.
3. In the Group by field, select Owner and then click Custom Group by.
4. In the Second group by field, from the dropdown list, select Type.
5. Select the checkbox for Create and display a group for all remaining values and then click Save.
Abstract
You can create a custom widget for your dashboard or report using a JSON file and then add the new widget to a new or edited dashboard or
report. If you want to create more complicated widgets using scripts, see Create a custom widget using a script.
1. Create a JSON file, and add the relevant JSON file widget parameters.
Parameter Description
dataType The data source of the widget. Must be one of the following:
incidents
indicators
messages
entries
scripts
tasks
generics
Relevant when creating Threat Intel reports. When used, the definitionId value must be
ThreatIntelReport.
query Queries data in the Lucene query syntax relating to the dataType. For example, when
dataType is incidents and the query is -status:closed and owner:"", it queries all incidents
that are not closed and do not have an owner.
For script based widgets, the query is the name of the script.
sort Sorts the data when displaying the widgetType (applies to table and list widget types) as a list of
objects, each of which consists of the following:
field: The name of the field to sort by.
asc: Whether to sort the data in ascending order. If true, values are sorted in ascending order.
Parameter Description
widgetType The type of widget you want to create. Must be one of the following:
bar
column
pie
number
line
table
trend
list
duration
image
size The maximum number of returning elements. Use 0 for the widgetType's default.
NOTE:
Table/List: Default is up to 13
category Adds a category name. The widget appears under a category instead of being classified by
dataType.
dateRange The time period for which to return data. The time period is overridden by the dashboard or report
time period. Default is all time.
fromDate: The start date from which to return data, in the format "YYYY-MM-DDTHH:MM:SSZ".
For example, "2019-01-01T16:30:00Z".
toDate: The end date for which to return data, in the format "YYYY-MM-DDTHH:MM:SSZ". For
example, "2019-01-01T16:30:00Z".
byTo: The "to" period unit of measurement. Values are 'minutes', 'hours', 'days',
'weeks', 'months'.
byFrom: The "from" period unit of measurement. Values are 'hours', 'days',
'weeks', 'months'.
fromValue: The duration of the "from" period. Integer. For example, last 7 days: {
byFrom: 'days', fromValue: 7 }.
Parameter Description
params Enriches the widget with specific parameters, mainly based on the widgetType. Includes the
following:
groupBy: An array of field names for which to group the returned values. Used when widget
type is bar, column, line or pie. For example, ["type", "owner"]: Groups results by type
and owner, and returns a nested result for each type with statistics according to the owner.
keys: An array that processes the returned data values and modifies them by the given list of keys.
For example, ["avg|openDuration / (3600*24)"] computes, for each group found in the
result, the average open duration (in days).
text: The markdown text for text widgets or image data for image widgets. For example, if
you want the widgets to appear on separate pages in a report, use ["\\pagebreak"].
timeFrame: Supplies the custom time frame for which the widget scales. Values are
"years", "months", "days", "hours", "minutes". The default is “days”.
tableColumns: Enables you to define the names of the columns in a list or table. For example,
[{ "key": "name" }, { "key": "mycustomfield" }] displays the name and a
custom field.
legend An array of objects that consists of a name and color. The name must match a group name. The
color can be the name of the color, the hexadecimal representation of the color, or the rgb color
value.
4. Select the JSON file you created in step 1 and click Open.
The JSON file to display incident severity by type contains the following:
Bar chart
Grouped by severity and for each severity display the nested group size (count of incidents displayed by the length of the bar) colored
according to type.
{
    "name": "Incident Severity by Type",
    "dataType": "incidents",
    "widgetType": "bar",
    "query": "-category:job and -status:archived and -status:closed",
    "dateRange": {
        "period": {
            "byFrom": "days",
            "fromValue": 30
        }
    },
    "params": {
        "groupBy": [
            "severity",
            "type"
        ]
    }
}
The query specifies that you do not want to return incidents that are categorized as job nor incidents that are archived and closed.
For the date range, the fromValue sets the widget to display the last 30 units of time. The byFrom sets the units of time to days, which
results in the last 30 days.
The params parameter is set with a groupBy value marking the first group by severity name and then by type (making the bar chart
stacked).
After you import the widget into the Widget Library the following widget appears:
You can see the incidents are grouped by severity and the number of incidents are displayed by the length of the bar, which are colored according
to type.
Display incidents by type
{
"dataType": "incidents",
"widgetType": "column",
"params": {
"groupBy": [
"occurred(d)",
"type"
],
"valuesFormat": "abbreviated",
"timeFrame": "days"
},
"dateRange": {
"period": {
"byFrom": "days",
"fromValue": 7
}
},
"propagationLabels": [
"all"
],
"customCalculation": {
"operation": "count",
"fieldName": "",
"expression": ""
},
"name": "Change Sort Order In Column Chart - Sort by Date",
"sort": [{ "field": "occurred", "asc": true }]
}
The widget is named Change Sort Order In Column Chart - Sort by Date.
For the date range, the fromValue sets the widget to display the last 7 units of time. The byFrom sets the units of time to days, which results
in the last 7 days.
The params parameter is set with a groupBy value marking the first group by occurrence date and then by type (making the column chart
stacked).
After you import the widget into the Widget Library the following widget appears:
Abstract
Create a custom script-based widget using a script. Use custom widgets in dashboards and reports.
You can use scripts in custom widgets to create dynamic widgets for more complex calculations. For examples of creating widgets using scripts,
see Script-based widget examples.
NOTE:
Before creating a script-based widget in the Widgets Library, you need to create or upload the script to the Scripts page. In the Widgets Library,
you can define the script arguments and change the visuals.
NOTE:
If you upload a script to the Scripts page, the Arguments field is automatically updated. You can then define the arguments in the widget builder.
If you create a new script (without uploading) in the Scripts page, you need to add the arguments manually for them to appear in the Widgets
Library when creating or editing a widget.
1. In the Scripts page, upload or create a new script for one of the following widget types:
Text
Number
Duration
Trend
Chart
Table or list
3. Select the Scripts data type and then add the script to the widget.
(Upload script only) If you have added arguments, these appear when creating a widget. If you have not uploaded the script, you need to
add the arguments manually in the Scripts page.
4. Add the script-based widget where relevant, for example to a report, a dashboard, or an incident.
Abstract
Create script based widgets based on automation scripts for reports and dashboards in Cortex XSOAR.
The following are sample arguments/scripts to create a widget. After creating the widget from a script, add the widget to a dashboard or report. For
more details, see Create a widget using the widget builder.
To add a time stamp or use a search query, add the following arguments to a script.
Argument Description
demisto.args()['from'] The start date of the time-stamp date range of the widget.
demisto.args()['to'] The end date of the time-stamp date range of the widget.
demisto.args()['searchQuery'] The search query entered into the search bar at the top of the
dashboard.
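For example, a script can combine these arguments into an incident query. The following is a minimal Python sketch; the query built here and the use of getIncidents are illustrative:
from_date = demisto.args()["from"]
to_date = demisto.args()["to"]
search = demisto.args().get("searchQuery") or ""

query = "-status:closed"
if search:
    # Narrow the widget to whatever was typed in the dashboard search bar
    query += " and ({})".format(search)

res = demisto.executeCommand("getIncidents", {
    "query": query,
    "fromdate": from_date,
    "todate": to_date,
    "size": 0
})
demisto.results(res[0]["Contents"]["total"])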
Text
In this example, create a script that queries and returns current on-line users, and displays the data in a markdown table.
JavaScript
Python
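A minimal Python sketch (it assumes the built-in getUsers command with its online argument, and the tableToMarkdown helper from CommonServerPython):
res = demisto.executeCommand("getUsers", {"online": "true"})
users = res[0]["Contents"]

# Render the returned user objects as a markdown table for the text widget
demisto.results({
    "Type": entryTypes["note"],
    "ContentsFormat": formats["markdown"],
    "Contents": tableToMarkdown("Online Users", users)
})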
When creating or editing the widget in Cortex XSOAR, to add a page break, type /pagebreak in the text box. When you generate a report, the
widgets that follow the page break are on a separate page.
NOTE:
(Multi-tenant) Script-based text widgets are not supported in the Main Account.
Number
This example shows how to create a single item widget with the percentage of incidents that DBot closed.
JavaScript
var res = executeCommand("getIncidents", {
    'query': 'status:closed and investigation.users:""',
    'fromdate': args.from, 'todate': args.to, 'size': 0 });
var closedByDbot = res[0].Contents.total;
res = executeCommand("getIncidents", {
    'status': 'closed', 'fromdate': args.from, 'todate': args.to, 'size': 0 });
var overallClosed = res[0].Contents.total;
return overallClosed === 0 ? 0 : Math.round(closedByDbot * 100 / overallClosed);
Python
res = demisto.executeCommand("getIncidents", {
"query": "status:closed and investigation.users:\"\"",
"fromdate": demisto.args()["from"],
"todate": demisto.args()["to"],
"size": 0
})
closedByDbot = res[0]["Contents"]["total"]
res = demisto.executeCommand("getIncidents", {
"status": "closed",
"fromdate": demisto.args()["from"],
"todate": demisto.args()["to"],
"size": 0
})
overallClosed = res[0]["Contents"]["total"]
if overallClosed == 0:
    demisto.results(0)
else:
    result = round(closedByDbot * 100 / overallClosed)
    demisto.results(result)
Duration
In this example, create a script that queries and returns a time duration (specified in seconds), and displays the data as a countdown clock. If
using a JSON file, you must set widgetType to duration.
JavaScript
Python
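A minimal Python sketch (the fixed value is illustrative; a real script would compute the remaining seconds from incident or SLA data):
# Return the duration in seconds; the widget renders it as a countdown clock
remaining_seconds = 3 * 24 * 60 * 60  # 3 days
demisto.results(remaining_seconds)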
The return type should be a string (any name) and an integer. The time is displayed in seconds.
After you have uploaded the script and created the widget, you can add the widget to the dashboard or report. The widget displays the time
duration:
Chart
A valid result for a chart widget is a list of groups. Each group points to a single entity; for example, in bar charts each group is a bar. A group
consists of the following:
Name - A string.
Data - An array of numbers (the group's values), as shown in the examples below.
Color - A string representing a color that will be used as the default color for that group. It can be the name of the color, a hexadecimal
representation of the color, or an rgb color value (optional).
In this example, we show how to create a script that queries and returns data to display in a chart. The following chart types are supported:
Pie
Line
Bar
Column
Simple pie/chart
JavaScript
var data = [
{name: "2018-04-12", data: [10], color: "blue"},
{name: "2018-04-10", data: [3], color: "#029be5"},
{name: "2018-04-17", data: [1], color: "rgb(174, 20, 87)"},
{name: "2018-04-16", data: [34], color: "grey"},
{name: "2018-04-15", data: [17], color: "purple"}
];
return JSON.stringify(data);
Python
data = [
{"name": "2018-04-12", "data": [10], "color": "blue"},
{"name": "2018-04-10", "data": [3], "color": "#029be5"},
{"name": "2018-04-17", "data": [1], "color": "rgb(174, 20, 87)"},
{"name": "2018-04-16", "data": [34], "color": "grey"},
{"name": "2018-04-15", "data": [17], "color": "purple"}
]
demisto.results(json.dumps(data))
After you have uploaded the script and created the widget you can add the widget to a dashboard or report.
JavaScript
var data = [
{name: "2018-04-12", data: [10], groups: [{name: "Unclassified", data: [10] }]},
{name: "2018-04-10", data: [3], groups: [{name: "Unclassified", data: [2] }, {name: "Access", data: [1] }]},
{name: "2018-04-17", data: [1], groups: [{name: "Unclassified", data: [1] }]},
{name: "2018-04-16", data: [34], groups: [{name: "Unclassified", data: [18] }, {name: "Phishing", data: [14] }]},
{name: "2018-04-15", data: [17], groups: [{name: "Access", data: [17] }]}
];
return JSON.stringify(data);
Python
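A direct Python equivalent of the JavaScript example above:
data = [
    {"name": "2018-04-12", "data": [10], "groups": [{"name": "Unclassified", "data": [10]}]},
    {"name": "2018-04-10", "data": [3], "groups": [{"name": "Unclassified", "data": [2]}, {"name": "Access", "data": [1]}]},
    {"name": "2018-04-17", "data": [1], "groups": [{"name": "Unclassified", "data": [1]}]},
    {"name": "2018-04-16", "data": [34], "groups": [{"name": "Unclassified", "data": [18]}, {"name": "Phishing", "data": [14]}]},
    {"name": "2018-04-15", "data": [17], "groups": [{"name": "Access", "data": [17]}]}
]
demisto.results(json.dumps(data))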
Trend
In this example, create a script that queries and returns the trend between two sums.
JavaScript
Python
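A minimal Python sketch (assuming the trend widget consumes an object with currSum and prevSum keys; the hardcoded sums stand in for real query results):
data = {"currSum": 75, "prevSum": 50}  # current period vs. previous period
demisto.results(json.dumps(data))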
The return displays an object which compares the current sum with the previous sum.
Table or list
In this example, you need to create a script that queries and returns employee information in a table. For a table or list, if creating a JSON file, set
the widgetType to table or list. When using lists, a maximum of two columns is displayed; the rest are ignored (not displayed).
JavaScript
Python
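A minimal Python sketch (the total/data structure and the employee fields shown are illustrative assumptions):
data = {
    "total": 2,
    "data": [
        {"name": "Jane Doe", "role": "SOC Analyst", "location": "London"},
        {"name": "John Smith", "role": "Incident Responder", "location": "New York"}
    ]
}
demisto.results(json.dumps(data))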
After you have uploaded the script and created a widget you can add the widget to a dashboard or report. The following widget displays the
employee information:
Example 23. Display filtered incident and indicator data in a widget with a bar graph
In this example, you create a filter according to type (Phishing, Access, and IP) and then pivot to the relevant incidents/indicators page. You need to
add the following to the JSON or Python script.
query: Filters according to the value in the relevant page. For example, for phishing, if you define query: "type:Phishing" and
dataType: incidents, you are taken to the Incidents page with the type:Phishing filter.
pivot: Filters the dashboard according to the data set. For example, pivot: "type:Phishing" enables you to filter data that relates to
phishing in the dashboard.
Python
data = [
    {"name": "Phishing", "data": [50], "dataType": "incidents", "query": "type:Phishing", "pivot": "type:Phishing"},
    {"name": "Access", "data": [50], "dataType": "incidents", "query": "type:Access", "pivot": "type:Access"},
    {"name": "IP", "data": [50], "dataType": "indicators", "query": "type:IP", "pivot": "type:IP"}
]
demisto.results(json.dumps(data))
After you upload the script and create a widget, add the widget to a dashboard or report page.
Example 24. Display filtered incident and indicator data in a widget with a line graph
In this example, you create a filter according to type (Phishing, Access, and IP) and then pivot to the relevant incidents/indicators page. You need to
add the following to the JSON or Python automation script.
JavaScript
return JSON.stringify([
{
"name": "Jan 1, 2024",
"data": [6],
"groups": [
{ "name": "Phishing", "data": [1], "pivot": "type:Phishing", "query": "type:Phishing" },
{ "name": "Access", "data": [2], "pivot": "type:Acce", "query": "type:Access" },
{ "name": "IP", "data": [3], "pivot": "type:IP", "query": "type:IP" }
]
},
{
"name": "Jan 2, 2024",
"data": [7],
"groups": [
{ "name": "Phishing", "data": [2], "pivot": "type:Phishing", "query": "type:Phishing" },
{ "name": "Access", "data": [1], "pivot": "type:Access", "query": "type:Access" },
{ "name": "IP", "data": [4], "pivot": "type:IP", "query": "type:IP" }
]
},
{
"name": "Jan 3, 2024",
"data": [8],
"groups": [
{ "name": "Phishing", "data": [3], "pivot": "type:Phishing", "query": "type:Phishing" },
{ "name": "Access", "data": [4], "pivot": "type:Access", "query": "type:Access" },
{ "name": "IP", "data": [1], "pivot": "type:IP", "query": "type:IP" }
]
}
]);
Python
data = [
{
"name": "Jan 1, 2024",
"data": [6],
"groups": [
{ "name": "Phishing", "data": [1], "pivot": "type:Phishing", "query": "type:Phishing" },
{ "name": "Access", "data": [2], "pivot": "type:Access", "query": "type:Access" },
{ "name": "IP", "data": [3], "pivot": "type:IP", "query": "type:IP" }
]
},
{
"name": "Jan 2, 2024",
"data": [7],
"groups": [
{ "name": "Phishing", "data": [2], "pivot": "type:Phishing", "query": "type:Phishing" },
{ "name": "Access", "data": [1], "pivot": "type:Access", "query": "type:Access" },
{ "name": "IP", "data": [4], "pivot": "type:IP", "query": "type:IP" }
]
},
{
"name": "Jan 3, 2024",
"data": [8],
"groups": [
{ "name": "Phishing", "data": [3], "pivot": "type:Phishing", "query": "type:Phishing" },
{ "name": "Access", "data": [4], "pivot": "type:Access", "query": "type:Access" },
{ "name": "IP", "data": [1], "pivot": "type:IP", "query": "type:IP" }
]
}
]
demisto.results(json.dumps(data))
After you upload the script and create a widget, add the widget to a dashboard or report page.
Abstract
Although there are various out-of-the-box system widgets available, you can create custom widgets from incidents and then add them to a
dashboard or report.
To create a widget from an incident, you need to run a query from the Incidents page and then save the visual results as a widget.
1. In the Incidents page, from the dropdown list select the date range.
2. In the Query field, type the query criteria as required and run the query.
3. Click .
4. Follow the procedure from Task 2. Define the widget data in Create a widget using the widget builder.
5. Click Save.
NOTE:
By default, the widget inherits the date range that you specify when creating the widget, but you can modify the date range when you
create the dashboard or report. If the date range for the report or dashboard does not include the widget date range, the data is blank. To
override the dashboard or report’s date range, click Use Widget’s date range.
Example 25. Create a widget from an incident example
2. Click .
3. Type the name (Closed Job Incidents (past 6 months)) and save the query results as a widget:
5. Add the widget to the dashboard. If no data is returned, click Use widget’s date range.
Abstract
Create a custom widget from an indicator and add it a dashboard or report in Cortex XSOAR.
To create a widget from an indicator, you need to run a query from the Threat Intel page, and then save the visual results as a widget. If you do not
have a TIM license, the page is called Indicators.
1. In the Threat Intel (Indicators) page, select the date range from the dropdown list.
2. In the query field, type the query criteria as required and run the query.
3. Click .
5. Click Save.
NOTE:
By default, the widget inherits the date range that you specify when creating the widget, but you can modify the date range when you
create the dashboard or report. If the date range for the report or dashboard does not include the widget date range, the data is blank. To
override the dashboard or report’s date range, click Use Widget’s date range.
Abstract
You can edit an existing widget in the dashboard or report, or in the Widgets Library. If editing a widget in the Widgets Library, it is available to all
users. If editing a widget in a dashboard or a report directly, the original widget in the Widgets Library is unaffected.
NOTE:
The edits to the widget in the dashboard or report, appear only for the dashboard or report. If you want to make changes that are available
for other users, dashboards, or reports, edit the widget directly in the Widgets Library by clicking the pencil edit icon. You can then adjust the
size and move the widget as required.
If the widget is not in a dashboard or report, you need to add the widget.
3. Edit the widget in the Widgets Library by following the procedure in Create a widget using the widget builder.
4. Click Save.
Abstract
You can add a script-based widget in the War Room by running a command. After creating a script in the Scripts page, to add the widget you need
to run a command in the War Room.
Example 26. Add a custom widget that returns indicator severity in an incident as a bar chart
commonfields:
id: ee3b9604-324b-4ab5-8164-15ddf6e428ab
version: 49
name: IndicatorWidgetBar
script: |-
  # Constants
  HIGH = 3
  SUSPICIOUS = 2
  LOW = 1
  NONE = 0

  indicators = []
  scores = {HIGH: 0, SUSPICIOUS: 0, LOW: 0, NONE: 0}
  incident_id = demisto.incidents()[0].get('id')

  # Count the scores of this incident's indicators. The population logic is
  # omitted in the source; this findIndicators query is an illustrative assumption.
  res = demisto.executeCommand('findIndicators', {'query': 'investigationIDs:{}'.format(incident_id)})
  indicators = res[0]['Contents']
  for indicator in indicators:
      scores[indicator.get('score', NONE)] += 1

  # Type 17 with ContentsFormat "bar" renders the entry as a bar chart widget
  data = {
      "Type": 17,
      "ContentsFormat": "bar",
      "Contents": {
          "stats": [
              {"data": [scores[HIGH]], "name": "High", "color": "red"},
              {"data": [scores[SUSPICIOUS]], "name": "Suspicious", "color": "orange"},
              {"data": [scores[LOW]], "name": "Low", "color": "green"},
              {"data": [scores[NONE]], "name": "None", "color": "grey"}
          ]
      }
  }
  demisto.results(data)
type: python
tags:
- dynamic-section
enabled: true
scripttarget: 0
subtype: python3
runonce: false
dockerimage: demisto/python3:3.7.3.286
runas: DBotWeakRole
2. Add the script in the War Room by running the !IndicatorWidgetBar command.
Abstract
Customize the Saved by Dbot widget, also known as the Return on Investment (ROI) widget, which calculates the amount saved by Cortex XSOAR.
In the Dashboard, Incidents tab, Cortex XSOAR comes with a number of pre-installed widgets, such as Saved by DBot (ROI) Widget.
The Saved by Dbot widget calculates the amount saved in US dollars according to actions carried out by all users in Cortex XSOAR across all
incidents.
NOTE:
The widget is disabled by default, as enabling it may affect performance when running large amounts of automations. To enable the widget,
contact Customer Support.
The following parameters are used to calculate the amount saved by Dbot (ROI):
Parameter Description
Roundtrip The time it takes in minutes to run an integration task with any of the integrated products. This can
be a command within a script or inside the War Room.
Script The time it takes to undertake an action that a script would do.
[(# of times roundtrip completed * time taken to do roundtrip) + (# of times report generated * time taken to generate
report) + (# of times script runs * time taken to run script)] * cost of one man-hour in dollars.
The time to run an integration task with any integrated product, the time to generate a report, and the time to run an automated action are set to 5
minutes.
'Number of times' is how many times an automated procedure has run since Cortex XSOAR was first used. The Saved by Dbot widget is based
on absolute time and does not support date ranges.
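For illustration (using the 5-minute defaults and a hypothetical cost of $50 per man-hour): if 100 roundtrips, 20 reports, and 200 script runs have
completed, the saving is [(100 * 5) + (20 * 5) + (200 * 5)] = 1,600 minutes, or roughly 26.7 hours, so the widget reports approximately
26.7 * $50 = $1,333.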
You can change the way ROI is calculated based on your own statistics of time taken to perform the tasks for the actions when done manually. To
change the statistics, select Settings & Info → Settings → System → Server Settings → Server Configuration → + Add Server Configuration and
add the following :
Keys Values
You can also change the currency symbol from US dollars to a currency of your choice.
The default currency symbol in the Saved by Dbot widget is the Dollar sign ($). To change the currency symbol, you need to create a widget using
a JSON file. For more details, see Create a widget using a JSON file.
In this example, which you can use as a template, we changed the value for the currencySign argument to Euro (€).
{
"size":5,
"dataType":"roi",
"params":{
"currencySign":"€"
},
"query":"",
"modified":"2019-01-12T15:13:09.872797+02:00",
"shouldCommit":false,
"name":"Return On Investment (ROI)",
"shouldPush":false,
"dateRange":{
Investigate incidents and indicators that have been ingested into Cortex XSOAR.
Cortex XSOAR enables you to centralize and manage every aspect of your investigations. Consolidate evidence, assign and review tasks, and
leverage the Workplan to orchestrate your response. Deduplicate incidents and create and close them efficiently. For indicators, create, extract
and enrich them, and explore their relationships to gain deeper insights. If you have a TIM license, see the Indicator investigation section for more
features, such as Unit 42 Intel data and creating a Threat Intel Report.
16.1 | Incidents
Abstract
Incidents are potential security threats that are ingested or created in Cortex XSOAR for investigation and remediation.
Incidents are potential security threats that SOC analysts identify and remediate. There are several incident triggers, including:
SIEM alerts
Mail alerts
Security alerts
These alerts are generated from third-party services, such as SIEMs, mailboxes, and other data sources.
Cortex XSOAR includes several out-of-the-box incident types, fields, and layouts, which can be customized to suit your use case. Incidents can
also be created manually, from a JSON file, the Cortex XSOAR RESTful API, or an integration feed.
When incidents have been created, you can start managing and investigating incidents in Cortex XSOAR.
On the Incidents page, you can view all of the incidents in Cortex XSOAR and do the following:
NOTE:
If you are unable to perform a specific action or view data, you may not have sufficient user role permissions. Contact your Cortex XSOAR
administrator for more details.
Action Description
Search for incidents You can search for incidents by doing the following:
Search query: The Incidents page displays all open incidents from the last 7 days by default. For more
information about search queries and to create a query and save it for future use, see Search for incidents.
Search incidents globally using the search box. For more information, see Use the search box.
Filter incidents using the Bar Charts Bar charts display important incident information, such as the incident type, severity, and owner. You can change the
criteria in each bar chart.
NOTE:
Incidents sorted using an SLA/Timer field are sorted by the due date of the SLA field.
Create a new incident Create an incident manually. For more information, see Create an incident.
Create a widget Create a widget based on the search criteria and add it to a dashboard or report. For more information, see Create a
widget from an incident.
NOTE:
You can change how the top half of the incident page appears, by hiding the chart panel, and query panel, and switching to a detailed view.
In the incidents table, view general information about each incident, such as the type, the severity, and when it occurred. The status of the incident
is classified as follows:
Status Description
Active The investigation has started. The War Room is activated and the playbook starts, if assigned. Users can be assigned to this
incident.
Pending The investigation has not started and no War Room has been activated. As soon as you open the incident, it becomes active.
Incidents can be assigned a severity at incident creation when running a playbook, or after creation through the CLI or in the incident layout.
Incident severity levels are:
Critical (4)
High (3)
Medium (2)
Low (1)
Informational (0.5)
Unknown (0)
Action Description
Investigate an incident View, investigate, and take remedial action on the incident by clicking the incident ID hyperlink. For more information, see
Investigate an incident.
Assign Assign incidents to any user who has been added to Cortex XSOAR, including users who are marked as away. You can
assign users to many incidents at one time.
Edit Edit the incident parameters and then rerun a playbook on the incident, which is useful while developing playbooks. You can
process an incident multiple times during playbook development, without creating new incidents every time.
NOTE:
When batch editing multiple incidents, uploading files is currently not supported.
Mark as Duplicate Deduplicate an incident. Closing an incident as a duplicate enables you to investigate one incident rather than several.
When selected, you need to add the ID of the incident you want to retain. After validation, the other incident is closed as a
duplicate and removed from the table.
If you want to link an incident, with or without closing it, use the !linkIncidents command. For more information, see
Link incidents.
Run Command You can select multiple incidents and run a command across all of them.
Export Export incidents to an Excel or a CSV file. For more information, see Export incidents.
Close You can select multiple incidents and close all of them. If required, add the close reason and details. The investigation will be
closed.
When you close an incident, the close reason is set to whatever value you last entered. For example, when closing an
incident, if you initially selected False Positive as the Close Reason, reopened, and closed it again, leaving the Close Reason
empty, the empty Close Reason will overwrite the previous Close Reason. To keep the close reason that was entered
previously on the incident, add the previous value in the Close Reason argument.
NOTE:
The close reasons are customizable by server configurations. Provided you have administrator permission, you can
change the reasons. For more information, see Customize incident close reasons.
You can also close the incident when investigating the incident.
Delete You can select multiple incidents and delete all of them.
You can also delete the incident when investigating the incident.
Star an incident To help you focus on the most important incidents, you can mark an incident as a favorite. Starring incidents enables you to
narrow down the scope of incidents on the Incidents page.
TIP:
Any incidents assigned to yourself, starred incidents, and incidents you are participating in, can easily be accessed in the My Incidents section.
Further information
To see how to manage incidents, watch the following video in Live Community:
Abstract
Cortex XSOAR comes with powerful search capabilities. You can search for data by:
Using the Search Query: Cortex XSOAR searches for information using the Bleve query syntax. The search query appears on several
pages such as Incidents, Indicators, and Playbooks. To search for all incidents that have a pending status and are critical, type
status:Pending and severity:Critical. You can save and share queries, as required.
Using the search box: Cortex XSOAR searches for incidents, entries, evidence, investigations, and indicators. The search box appears in
the top right-hand corner of every page.
By default, the Incidents page displays all open incidents from the last seven days. You can customize which incidents are displayed by creating
and saving queries.
When you start typing your search, Cortex XSOAR lists all the indexed fields, such as type and severity, including custom and out-of-the-box
fields. The search follows the Bleve query syntax, which is similar to the Lucene query syntax but with some differences, such as query syntax for
numeric ranges and date ranges. For more information, see Bleve Query String Query.
The search is performed on certain pages such as incidents, indicators, or the entire data (such as titles, entries, chats).
You can add inputs when searching for data, such as:
Input Description
Add text Type any text. The results show all data where one of the words appears. For example, the
search low virus returns all data where either the low or the virus string appears.
and Searches for data where all conditions are met. For example, status:Active and
severity:High finds all incidents with an active status and high severity.
or Searches for data where either condition is met. For example, status:Pending and
severity:High or severity:Critical finds all incidents with a pending status and high or
critical severity.
* ? Wildcard search: * and ? should be used when searching for partial strings. For example, when
searching for all scripts that start with AD, use AD*. If you need to search for a script that
contains "get", search for *get*.
“” An empty value.
- Excludes from any search. For example, on the Incidents page, -status:closed
-category:job searches for all incidents that are not closed and for categories other than jobs.
{me} Filters incidents by a user’s account. For example, owner:{me} displays all incidents where you
are the owner. It can also be used for other fields, such as createdBy:{me}, which displays all
incidents you created.
Relative time. For example, “today”, “half an hour ago”, “1 hour ago”, “5 minutes ago”, “10 days ago”, “5 seconds ago”, “five days ago”, “a
month ago”, "in 1 year". Relative time in natural language can be used in search queries. The time filters < and > can be
used when referring to a specified time, such as dueDate:>="2024-03-05T00:00:00 +0200",
or when searching for high severity incidents: severity:High and created:>="1 hour ago".
NOTE:
The timezone for searches is UTC. The system timezone is not used.
When adding some fields, such as Occurred, you can enter the date from the calendar. You
can also filter the date when the results are displayed.
Search using Regex Enclose Regex expressions in forward slashes (/ /). For example, to search for
indicator values that contain www and end with .com, type value:"/w{3}\..*\.com/". This
returns values such as www.namecheap.com and www.kloshpro.com.
Search for indicator values To search for indicator values that contain lower-upper a-z letters and 0-9 numbers with a length
of 32, type: value:"/[a-zA-Z0-9]{32}/". This returns values such as
775A0631FB8229B2AA3D7621427085AD, 87798e30ca72f77abe624073b7038b4e.
Timer/SLA fields To search for Timer/SLA fields in incidents, see Search incidents for Timer/SLAs.
Special characters To explicitly use the following characters in a search query, place them within double quotes. An
escape character \ is not required.
&& || ! {} [] () ~ * ?
To explicitly use the following characters in a search query, place them within double quotes and
use an escape character \.
For information about using special characters, see Run commands in the CLI.
NOTE:
When searching for incidents, the following fields match any incident containing the searched value:
phase
name
details
type
For example, you have several incidents with the name idle accounts. Searching for name:"idle" returns any name that
contains the word idle (including idle accounts). Other fields return only exact matches for the word idle.
For exact matches on the name, type, and phase fields, add the raw prefix to the field name. For example, enter rawName:"idle".
After defining the search query, you can save it for future use. The search query and the bar charts are saved.
TIP:
To edit an existing saved query, create a new query and save it with the exact name of the query you want to replace.
By default, the query is -status:closed -category:job, which searches for incidents that are not closed and whose category is not
job. You can add fields like severity or type to narrow your search to critical issues or issues of a certain type.
NOTE:
If you drill down in the bar chart fields, the query also changes. For example, in the Severity bar chart, if you click High,
severity:High is added to the query.
a. Click .
To view all saved queries, click . The list of saved queries appears. You can mark a saved query as a default, or delete a query.
Shared queries enable you to share your customized configurations with all users. For example, you can define queries for security analysts to
help focus them on incidents relevant for them to analyze.
Once you create and save a query, to share it with all users click and then click for that query.
The icon next to the name of the query changes to . Hovering over this icon in the list of saved queries shows that the query is shared. To
remove sharing, click and remove the users.
The shared query appears in the users’ Saved queries list. Users see the query with a icon and the name of the shared query owner.
NOTE:
Edits made to shared queries are not saved. To save an edited version of the shared query, make a copy and then edit and save it.
Copying the shared query or clicking Mark Default (to make the query the page default) keeps the shared query in the user’s Saved
queries list even if the shared query owner removes the share. Otherwise, the query will disappear from the users’ Saved queries list if the
query owner removes the share.
The search box searches for incidents, investigations, and indicators. The search box appears in the top right-hand corner on most pages. You
can either type free text or search using the search query format (use the arrow keys to assist you in the search). For example,
incident.severity:Low searches for all incidents that have low in the severity category.
If using the search box during an investigation, you can select whether to search across all incidents or limit the search to the current incident.
NOTE:
When searching in the current incident, Cortex XSOAR searches only the War Room entries. If a value exists in the incident but is not a War
Room entry, no results are returned.
Further information
For more information about how to search for incidents and indicators, see the following video in Live Community:
Searching in XSOAR
Abstract
An indicator
The API
To create a single incident using the API, use /incident. If you create an incident via the API and do not set createInvestigation:
true, the incident is created but an investigation will not be opened and a playbook will not automatically run. For more information, see
Create or update an incident.
To view the full API documentation, go to the Cortex XSOAR 8 API Reference guide. A request sketch follows this list.
Integration feeds
Incidents can be created from an integration instance. For more information about how to fetch incidents, see Fetch incidents from an
integration instance.
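A minimal Python sketch of the /incident API call (the server URL, authentication headers, and incident field values are illustrative assumptions; check the API Reference guide for the exact request format):
import requests

resp = requests.post(
    "https://api-yourtenant.example.com/incident",  # hypothetical server URL
    headers={
        "Authorization": "<API_KEY>",               # hypothetical credentials
        "x-xdr-auth-id": "<API_KEY_ID>",
        "Content-Type": "application/json",
    },
    json={
        "name": "Suspicious login",                 # illustrative incident fields
        "type": "Access",
        "severity": 2,
        "createInvestigation": True,                # open an investigation and run the playbook
    },
)
resp.raise_for_status()
print(resp.json().get("id"))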
NOTE:
If you can't create an incident from any of these options, you may not have sufficient user role permissions. Contact your Cortex XSOAR
administrator for more details.
NOTE:
If any fields are missing, these fields can be added when configuring a layout.
You need administrator permission to configure a layout. For more information, see Incident layout customization.
The import JSON feature enables you to import event data from third-party software and use it to create new incidents in Cortex XSOAR. These
incidents can be used to build and troubleshoot playbooks for integrations that have not yet been installed or configured.
1. Go to Settings & Info → Settings → Object Setup → Incidents → Classification & Mapping and click the mapper you want to use.
2. From the Get Data drop-down, choose Upload JSON and then select the JSON file you want to upload.
3. Map the fields as required. For more information, see Classification and mapping.
Abstract
If you want to export an incident as a JSON file, run the !js script="return ${.}" command in the War Room.
NOTE:
When exporting an incident to CSV format, Cortex XSOAR generates the report in UTF8 format. If you want to export an incident that contains non-Latin characters, such as Cyrillic or Greek, you need to change the format to UTF8-BOM.
Administrator permission is required to change server configurations, including the format. For more information, see Export an incident to CSV
using the UTF8-BOM format.
The data does not include files, attachments, and artifacts. All text is plain text (there is no formatting).
Select which data appears in your exported file by adding columns to the incidents table. If a column is hidden, the data is not exported. You can
hide, show, or reorder the columns in the table by using the settings icon on the Incidents page.
1. On the Incidents page, at the top of the incidents table, click the settings wheel to configure the columns to include for export.
You can export up to 1,000 incidents at a time. If the incidents you select contain more than 10,000 combined entries, an error appears and the file
is not generated. The maximum file size for download is 100 MB.
NOTE:
The date format displayed in the Excel/CSV file matches the timestamp format set by the user on the Server Settings page. If you haven't set a
timestamp format, the default timestamp format is used.
Automatically: If associated with a playbook, incidents open automatically for investigation and run the associated playbook.
Manually: Open an incident manually by selecting the incident in the Incidents table.
NOTE:
If the incident ID hyperlink is unavailable, the incident was closed before the investigation started (usually through a preprocess rule), or it was already closed when fetched. If you want to see the incident details, click the Switch to detailed view icon at the top of the Incidents page.
After an incident is created, it is assigned a Pending status. When you start to investigate an incident the status changes automatically to
Active, which starts the remediation process.
In the CLI: If you want to open an incident in the CLI, type /investigate id=<incidentID#>.
You can limit access to investigations and restrict investigations according to your requirements, as described in Limit access to investigations
using access control.
When you open an incident, you can see various tabs that assist you in the investigation. The following tabs are common to most incident types:
NOTE:
Tabs, tab names, sections, and fields vary according to the incident layout.
In an investigation, images from external links don't appear, as they are restricted due to security issues. To use an image, either upload the
image using base64 or upload it using markdown in the War Room.
Case Info: A summary of the incident, such as case details, outstanding tasks, linked incidents, and evidence. Some fields are informational and some are editable. Includes the following sections (depending on the layout):
CASE DETAILS: A summary of the incident, such as type, severity, and when the incident occurred. Update these fields as required.
WORK PLAN: When you click the section, you can view or take action on the following:
Playbook tasks: When a playbook runs, any outstanding tasks appear. You can take various actions here or in the Work Plan tab. You can also create To-Do Tasks from the Actions tab. See Incident Tasks.
NOTES: If added to the layout, notes help you understand specific actions taken, and allow you to view conversations between analysts to see how they arrived at a certain decision. You can see the thought process behind identifying key evidence and identifying similar incidents. You can add notes in this section or in the War Room. Notes are searchable when using the incidents search bar.
EVIDENCE: A summary of data marked as evidence. You can add evidence in this tab, the NOTES field, or the Evidence Board tab.
LINKED INCIDENTS: Add or remove linked incidents. For more information, see Link incidents.
Investigation: Provides an overview of the information collected about the investigation, such as indicators, email information, and URL screenshots.
War Room: A comprehensive collection of all investigation actions, artifacts, and collaboration. It is a chronological journal of the incident investigation. Each incident has a unique War Room. For more information, see Use the War Room in an investigation.
Work Plan: A visual representation of the running playbook that is assigned to the incident. For more information, see Use the Work Plan in an investigation.
Evidence Board: View any entity that has been designated as evidence. The Evidence Board stores key artifacts for current and future analysis. You can reconstruct attack chains and piece together key pieces of verification for root cause discovery. For more information, see Evidence Handling.
Incident actions
You can do several actions when investigating an incident, such as adding members, creating a report, and restricting incidents.
When viewing an incident, from the Side panels dropdown, you can do the following:
Incident tasks: Add tasks for users to complete as part of an investigation. For more information, see Incident Tasks.
NOTE:
When you mention team members in the CLI, they are automatically added as team members.
Context data: View context data to see what information was returned. The context is a map (dictionary) created for each incident and is used to store structured results from the integration commands and scripts. Context keys are strings, and the values can be strings, numbers, objects, and arrays.
Context data acts as an incident data dump from which data is mapped into incident fields. When an incident is generated in Cortex XSOAR and a playbook or analyst begins investigating it, context data is written to the incident to assist with the investigation and remediation process.
NOTE:
All incident data stored in incident fields is also stored in the context data. In most cases, not all context data is stored in incident fields. Incident fields represent a subset of the total incident data.
When an incident is created, the incident data is stored in the context data under the incident key. When an investigation is opened and integration commands are run, data returned from those commands is stored outside of the main incident key.
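For example, the context of a simple incident might look like the following sketch (the exact keys depend on your integrations and playbooks; the IP enrichment output shown here is illustrative):
{
  "incident": {
    "id": "1234",
    "name": "Suspicious login",
    "severity": 2
  },
  "IP": [
    {
      "Address": "198.51.100.7",
      "Malicious": {
        "Vendor": "ExampleVendor"
      }
    }
  ]
}
The incident key holds the incident data, while the IP key was written outside the incident key by an enrichment command.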
When viewing an incident, from the Actions dropdown, you can do the following:
Report: Create a report to capture investigation-specific data and share it with team members. For more information, see Create an incident summary report.
Add a child incident: Child investigations are used to compartmentalize sensitive War Room activity. You can create child investigations to collaborate discreetly with a select group of people on a specific topic of investigation. Child investigations are also used where a secondary investigation is needed and its content may add too much "noise" to the original investigation. Select the Restricted checkbox to turn the child investigation into a discrete investigation.
Restrict/Permit an incident: Restrict an investigation to the incident owner and team. If restricted, select Permit to open the incident to all users. For more information, see Limit access to investigations using access control.
Close/Reopen an incident: Mark the incident as closed. If closed, you can select Reopen the incident. When you close an incident, the close reason is set to whatever value you last entered. For example, if you initially selected False Positive as the Close Reason, then reopened and closed the incident again leaving the Close Reason empty, the empty Close Reason overwrites the previous one. To keep the close reason that was entered previously on the incident, add the previous value in the Close Reason argument.
Retain/Undo Retain an incident: Mark the incident for retention or disable retention for the incident. For more information, see Retain incidents.
Incident navigation
You can navigate directly to a specific incident via the incident ID or incident name, using Ctrl+K for Windows or Command+K for macOS.
When investigating an incident opened from My Incidents or the main Incidents page, you can navigate to the next/previous incident from within the incident, without returning to the original list. The navigation buttons appear next to the Actions button. The total number of incidents in the list (depending on your search criteria) and your position in the list are shown. For example, if there were 7,000 incidents in the last 30 days, you can investigate all 7,000 incidents using the navigation buttons without returning to the Incidents page.
Only users with permission to edit incidents can view the navigation buttons.
The navigation buttons are only available if the incident is opened from My Incidents or the Incidents page. If you navigate directly to an incident,
without going through the Incidents page or My Incidents list, no navigation buttons appear.
Abstract
You can mark up to 1,000 incidents for permanent retention so that important incidents can't be inadvertently deleted, whether manually or by an API call.
NOTE:
Up to 1,000 incidents per tenant can be retained. Retained incidents are not deleted. If you reach 1,000 retained incidents, you can't add more unless you disable incident retention for some or all of your existing retained incidents.
Only user roles that have the Retain incident permission can retain or undo incident retention. For more information, see Role-based permissions.
The lock icon appears when the incident has been marked for retention.
To disable retention for an incident, select Undo Retain Incident from the Actions menu.
To search for retained incidents in the Incidents search bar, use the retained field with T (True) or F (False). You can also add the Retain Incident field to the Incidents table to easily view which incidents are retained.
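For example, to return all retained incidents:
retained:T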
Abstract
Restrict an investigation
You can restrict an investigation to the incident owner and the team associated with the investigation.
Restrict an incident to only team members. For example, if an incident contains sensitive data, and you only want specific users to investigate the
incident, you can mark the incident as restricted. Other users cannot view or access the incident. Team members are added automatically when
you send them a notification in the CLI. You can remove the restricted investigation at any time.
NOTE:
All team members have read and write permissions. If you add a team member whose role has read-only permission, the user still has read and write permission and can access the investigation.
1. Go to the Incidents page and select the incident you want to restrict.
NOTE:
If using the CLI, run the /investigation_restrict id=<id number> or the /investigation_permit id=<id number> command.
When you add a role to the incident, you restrict access to all roles other than those you have specifically added. For example, after an
investigation is closed, add administrators or those with specialty roles, so only they can reopen or link incidents. The added roles have read and
write permission, but all other roles do not have access (unless you have added them in the XSOAR Read Only Roles field).
NOTE:
If you add a role, but the incident has been restricted to team members, and the user is not a team member, the user cannot access the
incident regardless of the role. For example, if you restrict the incident to User A and User B team members who are Tier 1 analysts but
then try to add Tier 2 analysts (none of whom are team members) to the list of roles, a Tier 2 analyst cannot access the incident.
To access an incident, you must be assigned the same role that is assigned to the incident, even if you are the creator of the incident.
If the Roles field is added to the incident layout, select the relevant role.
In the CLI, run !setIncident roles=<name of role> to set the role.
You can also run the /incident_set command with roles <name of role>, which has the same effect.
The War Room entry confirms that the role has been updated.
NOTE:
When you create or edit an incident, you can select the required Role.
You can add this field to the incidents table on the Incidents page (you can't add roles in the table).
You can add a Read-only role to the incident, which restricts access to the incident. When granting read-only access, the user can view the incident but not edit it. For example, when an incident is in triage (phase 1), you may want all Tier-2 analysts to have read-only access, so that they can follow the incident without changing it.
Adding a team member overrides this restriction: if you add User A (Tier 1) as a team member, even if you assign Tier-1 as a Read-only role, the user still has read/write access. To enforce read-only access, you need to remove the user as a team member.
NOTE:
If you assign a role (read and write permission) and assign the same role as read-only, the user still has read/write permission. You need to
remove the assigned role. If you restrict the incident, the read-only role does not override the restriction. In other words, team members'
permission takes precedence.
If the XSOAR Read Only Roles field is added to the incident layout, select the relevant role.
In the CLI, you run !setIncident xsoarReadOnlyRoles=<name of role> to set the read-only role.
The War Room entry confirms that the role has been updated.
NOTE:
If added to the opening incident form, when editing or creating an incident, you can select the required XSOAR Read Only Roles.
You can add this field to the incidents table on the Incidents page (you can't add roles in the table).
Abstract
Playbook tasks and to-do tasks are tasks users complete as part of an investigation. Add incident tasks as part of your investigation process.
Incident tasks are tasks for users to complete as part of an investigation, and are split as follows:
Playbook task: A task that is part of the Work Plan (playbook) for an incident. When a playbook runs, you can take action on any tasks that require attention in the Work Plan, such as assigning an owner, setting a due date, and completing the task. These tasks include the following subtypes:
Automated tasks
Manual tasks
To-Do tasks: An ad-hoc item that is not attached to the incident Work Plan. Create tasks for users to complete as part of an investigation. These are like a To-Do list that you keep in an investigation on an ad-hoc basis, rather than the Work Plan, which follows a pre-defined process.
NOTE:
You can close an incident even if there are open playbook tasks or open To-Do tasks.
You can view outstanding tasks in the INCIDENT TASKS pane, by clicking Side panels → Incident Tasks.
NOTE:
You can also access the INCIDENT TASKS pane from the Case Info tab, in the WORK PLAN section, or the TO-DO TASKS section if it has
been added to the layout.
1. In the incident, click Side panels and then select Incident Tasks.
NOTE:
If your Case Info tab in the incident layout includes a TO-DO TASKS section or has a WORK PLAN section you can access the
INCIDENT TASKS section directly.
Task Description: A meaningful description of the task that provides sufficient information for the assignee to complete the task.
Assignee: The user to assign to the task. You can only assign a single user per task.
Set due date: The due date for the task. If the task is not completed by this date, it is marked as overdue but is not a roadblock for the investigation.
Tag the result with: Tags to apply to the to-do task, so you can easily find it in the War Room.
Use the !MyToDoTasksWidget command in the CLI to see all your assigned tasks in the War Room. You can also use the !Todo command to
manage the task, such as add, assign, and complete.
When you are added to a task, you receive a notification by email. To turn this on or off, go to <your name> → User Preferences → Notifications and select the relevant section.
Abstract
Use the War Room for real-time investigation into an incident, to filter war room entries, and to disable indicator notifications.
The War Room contains an audit trail of all automatic or manual actions that take place in an incident. A War Room is where you can review and
interact with your incidents. Cortex XSOAR provides machine learning insights to suggest the most effective analysts and command-sets. Each
incident has a unique War Room.
Within Cortex XSOAR, real-time investigation is facilitated through the War Room, which is powered by ChatOps and helps you to do the
following:
Run real-time security actions through the CLI, without switching consoles
Every incident has a War Room, and every user has access, subject to permissions, to a private War Room called the Playground.
The Playground
The Playground is a non-production environment where you can safely develop and test data, such as scripts, APIs, and commands. It is an
investigation area that is not connected to a live (active) investigation.
If you type a command in the incident, the results are returned to the incident War Room, not the Playground.
NOTE:
If you want to erase an existing playground and create a new one, run the /playground_create command.
When you open the War Room, you can see all the actions taken on an incident, such as commands, notes, and evidence, in several formats such as Markdown and HTML. When Markdown, HTML, or geographical information is received, the content is displayed in the relevant format. You can schedule a command in the War Room to run at a specific time. For more information, see Schedule a command in the War Room.
Files: Anything uploaded to the War Room in a playbook, script, or by the analyst
IMPORTANT:
Make sure files uploaded as attachments to the War Room are smaller than 250 MB. Uploading larger files can affect performance.
Incident History: Any incident field or SLA Timer field that was modified
Commands and playbook tasks: Any actions taken by playbook tasks or run manually by the analyst
You can also highlight any command thread for tracking commands.
NOTE:
Cortex XSOAR does not index notes, chats, and entries pinned as evidence.
In each War Room entry, you can take the following actions:
Edit: You can edit, format, or delete your entries. If an entry has been changed, a History link appears where you can view all changes to the entry.
Mark as Evidence: Opens the Mark as evidence window, where you specify the evidence details to be saved in the Evidence Board. The Evidence Board stores key artifacts for current and future analysis. You can also add evidence in the Case Info tab or the Evidence Board tab. For more information, see Evidence Handling.
Mark as note: Marks the entry as a note, which can help you understand why a certain action was taken and assist future decisions. If the Case Info tab includes a NOTES section, the note is added to that section. In the CLI, run the !markAsNote entryIDs=<ID of the War Room entry> command. In the relevant War Room entry, click Copy to CLI to retrieve the ID of the War Room entry. When an entry is marked as a note, it is highlighted, so you can easily find it in the War Room or the Case Info tab.
Download artifact: Downloads an artifact according to the entry type, such as a .txt file for text, .json for a JSON entry, and so on.
Add tags: Add any relevant tags that help you find relevant information.
Copy to CLI:
ID: Entry IDs uniquely identify War Room entries and take the format <ENTRY_IDENTIFIER>@<INCIDENT_ID>, for example, 54925dc3-a972-4489-8bef-793331fa6c77@1. Many out-of-the-box commands and scripts use entry ID arguments to pass in files as inputs.
URL: Copy the URL, which is a direct link to the War Room entry.
To find the entry ID or URL of an entry in the War Room, click the vertical ellipsis icon at the upper right of the entry, then copy the value.
You can also upload files to the War Room by selecting the paperclip icon next to the CLI. Any files that have been uploaded can also be
downloaded from the War Room entry.
CAUTION:
You are not protected from malicious content when downloading files from the War Room.
You can run a scheduled command once or on a recurring schedule, setting the start time, end time, and frequency. Some common use cases for
scheduling a command:
Sending an email to a user, waiting a determined amount of time, and then sending the email again if a response has not been received.
1. Open an incident, locate the command entry in the War Room, and click the clock icon.
By default, scheduling options are displayed in a human-readable view. For recurring commands, you can use either a human-readable view
or switch to the Cron view.
For a non-recurring command, set the date and time for the command to run.
For a recurring command, select Recurring and then set the frequency.
To remove a scheduled command, click the clock icon and then click Remove schedule.
Abstract
Cortex XSOAR enables you to run system commands, integration commands, scripts, and more, from an integrated CLI.
Cortex XSOAR enables you to run system commands, integration commands, and scripts from an integrated command line interface (CLI), where you can add comments to your incident (in plain text or Markdown) and execute automation scripts, system commands, and integration commands. This gives SOC teams the power to execute automations ad hoc to support their investigations or to make notes as they investigate incidents.
NOTE:
If you are unable to run commands in the CLI, you may not have sufficient user role permissions. Contact your Cortex XSOAR administrator for
more details.
In the CLI, you can run various commands, by typing the following:
Action Description
! Runs integration commands, scripts, and built-in commands, such as adding evidence and assigning an analyst.
/ Runs system commands and operations, such as adding notes and closing an investigation.
You can find relevant commands, scripts, and arguments with the CLI’s auto-complete feature. This also includes fuzzy searching to help you find
relevant commands based on keywords. If you type the exclamation mark (!) and start typing, autocomplete populates with options that might suit
your needs. For example, if you want to work with tasks, type !task, and all commands and scripts that include task in their name are displayed.
The CLI is available throughout Cortex XSOAR, except Marketplace and while editing Playbooks.
NOTE:
You can use the up/down arrow buttons in the CLI to do a reverse history search for previous commands with the same prefix.
You can hide the CLI when it is not needed by clicking the down arrow to the right of the CLI; click the same button to restore the CLI. If you can't see the ^ button, remove the ? Help Center button. To restore the Help Center, click Help (left menu) and click In-App Help Center.
&&, ||, !, {, }, [, ], (, ), ~, *, ?: To use these characters, place them within single or double quotes. An escape character \ is not required.
\, \n, \t, \r, ", ^, :, comma, and space: To use these characters, place them within single or double quotes and use an escape character \.
TIP:
When writing a query or complex text in the CLI, we strongly recommend enclosing your text with the backtick (`) character. Text within the backticks does not require you to escape single quotation marks ('), double quotation marks ("), or backslashes (\).
Common Arguments
The following common arguments are available for every script run from the CLI.
inline: Extracts indicators within the indicator extraction run context (synchronously).
none: Does not extract indicators (recommended for scripts with large outputs when indicator extraction is not required).
execution-timeout: Defines how long a command waits in seconds before it times out.
extend-context: Select which information from the raw JSON you want to add to the context data.
ignore-outputs: Possible values: true or false. If set to true, outputs are not stored in the context (besides extend context).
raw-response: Possible values: true or false. If set to true, returns the raw JSON result from the script.
retry-count: Determines how many times the script attempts to run before generating an error.
retry-interval: Determines the wait time (in seconds) between each script execution.
using-brand: Selects which integration runs the command. If the selected integration has multiple instances, the script may run multiple times. Use the using argument to select a single integration instance.
using-category: Selects which category of integrations runs the command. If the selected category includes multiple integration instances, the script may run multiple times. Use the using argument to select a single integration instance.
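For example, to run a reputation command through one specific instance (illustrative; assumes an IP reputation integration with an instance named my_instance_1):
!ip ip=8.8.8.8 using=my_instance_1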
You can view and run commands and scripts (not system commands, operations, and notifications) in the Automations Browser, by clicking the icon next to the CLI.
The Automations Browser enables you to run commands and all associated arguments. The scripts and commands are separated into sections
such as scripts and built-in commands. In each argument, you can do the following:
You can dynamically pass information into the argument, by clicking the curly bracket. For example, the EmailAskUser command asks a
user a question via email. In the email argument, rather than typing the user's email address, you can send it to whoever created the
incident.
You can use transformers and filters to filter and transform data from the command. For more information, see Filter and transform data.
Common arguments
Extend context: Select which information from the raw JSON you want to add to the context data.
Ignore outputs: Does not store outputs in the context (besides extend context).
Execution timeout (seconds): Defines how long a command waits in seconds before it times out.
Number of retries: Determines how many times the script attempts to run before generating an error.
Retry interval (seconds): Determines the wait time (in seconds) between each script execution.
To run the print script with a value of "hello" and the key a from the context:
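One plausible form, assuming the out-of-the-box Print script (the exact script name may differ in your content):
!Print value="hello ${a}"
The ${a} expression is replaced with the value of the context key a when the command runs.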
To run the searchIncidentv2 script with the query of myfield equals "this is a test" using escape characters:
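One plausible form, assuming the SearchIncidentsV2 script is installed:
!SearchIncidentsV2 query="myfield:\"this is a test\""
The inner quotation marks are escaped with backslashes so the whole phrase is passed as a single query value.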
To run the Python command returning Hello World using escape characters:
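One plausible form, assuming the py script (which executes Python code) is available in your content:
!py script="print(\"Hello World\")"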
Abstract
Add evidence to the evidence board to assist with your investigation. Mark any entity as evidence in the War Room by adding tags.
While you're investigating an incident, you can add notes and evidence to assist you with your investigation.
Notes can help you understand why certain actions were taken and assist future decisions. Notes are highlighted, so you can easily find them,
especially in the War Room.
When you mark an artifact as evidence, it is added to the Evidence Board tab, which enables you to see all artifacts for current and future analysis in a single location.
NOTE:
You can change a note to evidence or vice versa, and the same entry can be both a note and evidence.
You can mark an entry as evidence in the following ways:
Upload a file: Upload a file to the War Room and select Mark as Evidence.
IMPORTANT:
Make sure files uploaded as attachments to the War Room are smaller than 250 MB. Uploading larger files can affect performance.
Using the CLI: Run the !AddEvidence entryIDs=<ID of the War Room entry> command. In the relevant War Room entry, click Copy to CLI to retrieve the ID of the War Room entry.
Playbook task: In a playbook task (Advanced tab), tasks can be automatically added as evidence from script outputs.
Case Info tab: If the Case Info tab includes an EVIDENCE section, you can add evidence to that section.
When adding a time/date, you need to save it before updating the evidence.
Whenever you add evidence, it appears in both the Evidence Board tab and the EVIDENCE section in your layout.
Evidence Board
The Evidence Board tab shows all the entries marked as evidence for current and future analysis. Typically you can use the Evidence Board to do
the following:
Construct a timeline of events that can further clarify your incident response
Use it for audit reports and compliance requirements to show how you reached a decision.
You can search for evidence and select the date range when the evidence occurred.
When viewing an Evidence artifact you can see the following fields:
occurred: The time/date you added for when the artifact occurred, for example, when the file was created. If no time/date is specified, it is marked as Unknown.
fetched: The time/date when the entry was created in Cortex XSOAR.
You can also edit or remove evidence from the Evidence Board.
NOTE:
Adding tags to evidence from the Evidence Board does not create the same tags in the War Room.
Use the toggle button to switch between Table View or Summary View. In the Table View, you can remove, export, or show evidence in the
War Room. In the Summary View you can remove or edit the evidence.
Abstract
A Work Plan is a visual representation of the running playbook that is assigned to an incident. Use it to monitor and manage a Playbook workflow.
The Work Plan is a visual representation of the running playbook assigned to the incident. Playbooks enable you to automate many security
processes, such as managing your investigations and handling tickets. Work Plans enable you to monitor and manage a playbook workflow, and
add new tasks to tailor the playbook to a specific investigation.
In an investigation, when you open the Work Plan tab you can see the playbook, the playbook name, and navigation tools.
By default, the Follow checkbox is checked, which allows you to see the playbook executing in real-time. The playbook moves when a task is
completed.
Change the default playbook: On the left-hand side of the window, select the playbook you want to run. When changing the playbook, all completed tasks are removed and the new playbook runs. If you select playbooks several times, you can view the history of which playbooks ran.
Rerun the playbook: When changing the playbook, select the current playbook to run it again.
View inputs and outputs: View the inputs and outputs of each task that has run. You can't view inputs and outputs of any task that hasn't run.
Manage tasks: View, create, and edit a playbook task. For each task, you can do the following:
Assign an owner
You can manage these tasks in the CLI by using the /task command. For more information about tasks, see Incident Tasks.
Export to a PNG: Export the Work Plan to a PNG format for easy analysis.
The color coding and symbols in the Work Plan help you to easily troubleshoot errors or respond to manual steps. The following table displays the
playbook tasks and icons in the Work Plan.
Automated task: The arrow and lightning bolt indicate a standard automated task. This task does not require any analyst intervention. Tasks turn green automatically if they are successful.
Manual task: The arrow indicates a standard manual task. These tasks are used where it's usually not possible to automate them. You can add comments, assign them to an owner, and set a due date.
Conditional task: The diamond indicates a conditional task, which is either an automated conditional task (with the lightning bolt) or a manual conditional task. These tasks are used as decision trees in your Work Plan.
Data collection task: The speech bubble indicates a data-collection task. This task prompts you to respond to multiple questions.
Active task: A task that is currently in progress.
Completed task: A task that has finished running.
Overdue task: A task that was not completed by its due date.
Pending action: The orange user icon indicates that the playbook is pending action. The task requires you to open it and manually mark it as complete.
Failed task: The red warning icon indicates that the automation failed to complete as expected and requires manual inspection and troubleshooting. Contact your Cortex XSOAR administrator.
Sub-playbook task: The workflow icon indicates that the task is a playbook nested within the parent playbook. You can view that playbook by opening the task and selecting Open sub-playbook.
Abstract
Add ad-hoc tasks to a Work Plan in Cortex XSOAR for a specific iteration of a playbook.
As part of your incident investigation, within the Work Plan you can create tasks for a specific iteration of a playbook. The task type can be an
automation or another playbook. For example, within a manual task, you might need to enrich some data and run an investigation playbook.
When you create a task, add a name, automation, and description. The name and description should be meaningful so that the task corresponds
to the data that you are collecting.
1. In the Work Plan, go to the task after which you want to add a new task and click the + sign at the bottom right-hand corner of the task.
The ad-hoc task is added after the task you clicked.
The playbook functions as any playbook would and requires you to define the inputs and outputs, as well as any other details.
3. Click Save.
4. To run the Work Plan again, click the Run again icon.
Example 27.
For a phishing investigation, after the initial playbook run parses the email and extracts email addresses, as part of the manual investigation, you
could use the Email Address Enrichment - Generic v2.1 playbook as an ad-hoc playbook task to get more information about these email
addresses.
Abstract
While you're investigating an incident, you can use the canvas to create a visual map of the incident and its associated incidents and indicators.
This enables you to analyze the threat landscape of the investigation. Using the canvas, you and other team members can perform threat hunting
activities to enhance the organization's security defenses.
To access the investigation canvas, click Canvas from the incident you want to investigate. The incident or indicator appears on the canvas
display. In the Add entity to canvas section, Cortex XSOAR provides suggested indicators and incidents that might be related or relevant to the
current incident for you to add to the canvas.
Incident Suggestions
The suggested incidents are calculated according to the related incidents algorithm, which is based on several factors:
Common labels
Common indicators
You can add the incidents by dragging and dropping the incident onto the canvas.
Indicator Suggestions
The indicators are determined according to the following factors (in this order):
1. Indicators with a malicious verdict from the original incident (the incident that initiated the investigation).
2. Indicators that are shared between incidents that you added to the canvas.
3. The malicious ratio, which is the ratio between the indicators that appear in incidents with a malicious verdict, compared to the total number
of incidents in Cortex XSOAR.
You can add the indicators by dragging and dropping the indicators onto the canvas.
Key Features
Quick view of the incident and indicator: Click the incident or indicator to view details.
Connect incidents: Link incidents to each other, and use comments on entity connections to communicate important information with team members by adding notes to connectors between entities.
Adding notes: You can add notes on the connection. Using notes enables you and other team members to collaborate on important issues.
The note also shows the last user to edit the note and the time it was edited.
Dynamic Connections: When you rearrange entities on the canvas, the connections dynamically move with the entities. Connections that
are dotted lines indicate that the indicator is part of the investigation, or two incidents are defined as related incidents. These connections
are dynamic, which means if one entity is an IP address and you add that IP address to the allow list after it was added to the canvas, the
dotted-lined connection is automatically removed.
Capture the Canvas as an image: Capture and study the incident by clicking Export to PNG or Export snapshot to War Room.
Relationships: You can expand or add relationships. From the entity, right-click and select Expand Relationships.
Abstract
When ingesting incidents, you may find that several incidents have similar or identical information. You have the following options:
From the incidents table, mark the incident as duplicate. You select which incident to keep and which to close.
From the Incident, in the LINKED INCIDENTS section, add linked incidents. These incidents are linked but not closed.
In the CLI, you can use the !linkIncidents command to deduplicate and to link/unlink incidents.
When you link an incident without closing, you can view all similar incidents together without closing them as duplicates. When you link an incident
you can see them all in one table and take action altogether, such as running commands or closing the incidents.
If during your investigation you want to unlink incidents, run the !linkIncidents command in the CLI.
4. Click Submit.
The linked incident appears in the War Room and the LINKED INCIDENTS section.
5. (Optional) To take action on or view all linked incidents, go to the Linked Incidents table by clicking the icon in the LINKED INCIDENTS section.
Abstract
Create and generate a custom Incident Summary report in Cortex XSOAR, from the incident page. Save reports as templates.
In an incident investigation, you can generate an incident summary report in PDF format, which enables you to capture investigation-specific data
and share it with team members.
Select a tab to generate a report from: Apart from the War Room, Work Plan, and Evidence Board tabs, you can select which tab to generate a report from, including any custom tabs or tabs from a layout installed from a content pack. For example, the Phishing Campaign layout includes the Campaign Overview and Campaign Management tabs. You can select any of those tabs to generate a report. When generating a report, you can decide which sections to include from the Case Info tab, by selecting Legacy Summary. You can save the reports as templates. Templates cannot be edited after they are created.
Create a report from a template: The Investigation Summary report is included out-of-the-box. This report includes the following sections:
General information
Close notes
Custom data
Investigation Timeline
Indicators
Skipped tasks
Team members
Linked incidents
TIP:
If you want a less detailed report, we recommend downloading the CaseManagement-Generic content pack, which includes a Case Report. This report includes case details, investigation details, labels, closing information, indicators, team members, notes, and any War Room chat.
The administrator can create a tab in your layout to include any information for reports. For more information about customizing layouts, see
Incident layout customization.
After you create a template, it appears on the Reports page under Incident Reports.
2. Select the tab that has the information you want to appear, and click Actions → Report.
Add the required properties. We recommend the landscape orientation, so that all information is displayed in the report.
To use an existing template, choose From Template tab and select the template.
4. If you want to use the report settings as a template, click the Save report as template checkbox.
NOTE:
You can also use the !GenerateSummaryReports command in the CLI to generate a report. If you want to automate the process, the
administrator can use the Send Investigation Summary Reports Job playbook.
Perform actions (create, edit, export, delete) and search for indicators on the Cortex XSOAR Indicators page (without a TIM license).
After you start ingesting indicators into Cortex XSOAR, you can start your investigation, including extracting indicators, creating indicators, adding
indicators to an incident, and exporting indicators.
NOTE:
You need a TIM license to investigate the indicator on the Indicator page and to use the Unit 42 features, such as Sample Analysis and Sessions and Submissions. For more information, see Indicator investigation with a TIM license.
The Indicators page displays a list of indicators added to Cortex XSOAR, where you can perform the following indicator actions:
Create an indicator: Indicators are added to the Indicators table from incoming incidents, feed integrations, or by manually creating a new indicator. When creating an indicator, in the Verdict field, you can either select a verdict or leave it blank and calculate it by clicking Save & Enrich, which updates the indicator from enrichment sources. After you select an indicator type, you can add any custom field data.
Create an incident: Create an incident from the selected indicator and populate relevant incident fields with indicator data.
Edit: Edit a single indicator or select multiple indicators to perform a bulk edit.
Delete and Exclude: Delete and exclude one or more indicators from all indicator types or a subset of indicator types. For more information, see Delete and exclude indicators. If you select the Do not add to exclusion list checkbox, the selected indicators are only deleted.
Export CSV: Export the selected indicators to a CSV file. By default, the CSV file is generated in UTF8 format. You need administrator permission to change server configurations, including the format. To change the format, see Export incidents and indicators to CSV using the UTF8-BOM format.
Upload a STIX file: To upload a STIX file, click the upload button (top right of the page) and add the indicators from the file.
NOTE:
By default, when editing a list or text values in an incident/indicator, the changes are not saved until you confirm your changes (clicking the
checkmark icon in the value field). These icons are designed to give you additional security when updating fields in incidents and indicators.
You can change this default behavior by updating the server configuration. You need administrator permission to update server configurations.
For more information, see Configure inline value fields.
You can also take various actions on the indicator, such as:
Enrich an indicator: You can view detailed information about the indicator (WHOIS information, for example), using third-party integrations such as VirusTotal and IPinfo. For more information, see Extract and enrich an indicator.
Expire an indicator: You may want to expire an indicator to filter out less relevant alerts, allowing analysts to focus on active threats. For more information, see Expire an indicator.
View indicator relationships: Relationships enable you to enhance investigations with information about indicators and how they might be connected to other incidents or indicators. You can't create, edit, or delete relationships unless you have a TIM license. For more information, see View indicator relationships in an investigation.
Abstract
How to query indicators in the threat intel library (without a TIM license).
You can search for indicators using any of the available search fields. This is a partial list of the available search fields:
verdict: Searches for indicators by verdict: Malicious, Suspicious, Benign, or Unknown.
aggregatedReliability: Searches for indicators based on a reliability score, such as A - Completely reliable.
expirationSource: The source (such as script or manual) that last set the indicator's expiration status.
You can use a wildcard query, which finds indicators containing terms that match the specified wildcard. For example, the * pattern matches any
sequence of 0 or more characters, and ? matches any single character. For a regex query, use the following value:
"/.*\\?.*/"
Abstract
How to use and create indicator relationships in Cortex XSOAR and how it benefits an investigation.
Indicator relationships are connections between different indicators. These relationships can be IP addresses related to one another, domains
impersonating legitimate domains, etc. These relationships enable you to enhance investigations with information about indicators and how they
might be connected to other incidents or indicators. For example, if you have a phishing incident with several indicators, one of those indicators
might lead to another indicator, which is a malicious threat actor. Once you know the threat actor, you can investigate to see the incidents it was
involved in, its known TTPs (tactics, techniques, and procedures), and other indicators that might be related to the threat actor. The initial incident
which started as a phishing investigation immediately becomes a true positive and relates to a specific malicious entity.
Relationships are created from threat intel feeds and enrichment integrations that support the automatic creation of relationships, such as
AlienVault OTX v2 and URLhaus, by selecting Create relationships in the integration settings. Based on the information that exists in the
integrations, the relationships are formed.
You can view indicator relationships by clicking on the indicator from an incident, and then from the Quick View window click the Relationships tab.
NOTE:
To manage indicator relationships including how to create them, you need a TIM license. For more information, see Manage indicator
relationships.
Cortex XSOAR Threat Intel Management includes features such as access to Unit 42 intel data, investigating files using Sample Analysis, submitting Sessions and Submissions, and deep diving into indicators.
Threat Intel Management enables you to unify the core components of threat intel, including threat intel aggregation, scoring, and sharing. Cortex
XSOAR automates threat intel management by ingesting and processing indicator sources, such as feeds and lists, and exporting the enriched
intelligence data to SIEMs, firewalls, and any other system that can benefit from the data.
Learn how to use TIM in your investigation, utilizing Unit 42 Intel.
Before diving in, you should understand Cortex XSOAR Threat Intel Management functionality and how it integrates with your needs. Review the use cases and key details to optimize your Cortex XSOAR experience from the start. Threat Intel Management includes the following features:
NOTE:
Although some features are available without a TIM license such as indicator customization, you must have the Cortex XSOAR Threat Intel
Management (TIM) license to use the TIM features.
Licenses
Cortex XSOAR requires a yearly license per user. Multi-year licenses are available.
This table describes the types of Cortex XSOAR licenses and the circumstances in which each is used:
Cortex XSOAR (Enterprise): Built for customers who need a complete security automation solution. Includes the SOAR Enterprise and TIM Enterprise licenses.
Cortex XSOAR Threat Intel Management Edition: Built for Threat Intelligence and Security Operations teams who need threat intelligence-based automation. Includes the TIM Enterprise license only.
Cortex XSOAR Starter Edition: Built for Security Operations and Incident Response customers who need case management with collaboration and playbook-driven automation. Includes the SOAR Enterprise license only.
License quota
The following table describes the license quotas of each version of Cortex XSOAR.
Threat Intel Library:
XSOAR TIM (TIM only): Unlimited
XSOAR Starter Edition (SOAR only): Intelligence detail view and relationship data are not included
XSOAR (SOAR + TIM): Unlimited
Unit 42 Intelligence:
XSOAR TIM (TIM only): Unlimited UI access, 5k/day API points
XSOAR Starter Edition (SOAR only): Not included
XSOAR (SOAR + TIM): Unlimited UI access, 5k/day API points
NOTE:
Intel feed quotas are based on the selected Fetches Indicators field in the integration instance settings, not the enabled status. Disabling an
integration instance does not affect the Intel feed quota. For example, if the AWS Feed is enabled and is fetching indicators and you don't want
to include this in your quota, open the integration settings and clear the Fetches Indicators checkbox.
Abstract
The Cortex XSOAR native threat intel management capabilities allow you to unify the core components of threat intel, including threat intel
aggregation, scoring, and sharing. Cortex XSOAR automates threat intel management by ingesting and processing indicator sources, such as
feeds and lists, and exporting the enriched intelligence data to SIEMs, firewalls, and any other system that can benefit from the data. These
capabilities enable you to sort through millions of indicators daily and take automated steps to make those indicators actionable.
NOTE:
Although some features are available without a TIM license such as indicator customization, you must have the Cortex XSOAR Threat Intel
Management (TIM) license to use the TIM features.
Supercharge investigations with instant access to a large repository of built-in, high-fidelity Palo Alto Networks threat intelligence
crowdsourced from the largest footprint of network, endpoint, and cloud intel sources.
Indicator relationships
Indicator connections enable structured relationships between threat intelligence sources and incidents. These relationships surface
important context for security analysts on new threat actors and attack techniques.
Take automated action to shut down threats across over 600 third-party products with purpose-built playbooks based on proven SOAR
capabilities.
Take charge of your threat intel with playbook-based indicator lifecycle management and transparent scoring that can be easily extended
and customized.
Eliminate manual tasks with automated playbooks to aggregate, parse, prioritize, and distribute relevant indicators in real time to security controls for continuous protection.
The largest community of integrations, with content packs that are prebuilt bundles of integrations, playbooks, dashboards, fields, subscription services, and all the dependencies needed to support specific security orchestration use cases.
Security orchestration, automation, and response (SOAR) solutions have been developed to weave threat intelligence management into workflows by combining TIM capabilities with incident management, orchestration, and automation capabilities. A SOAR solution matches alerts both to their sources and to compiled threat intelligence data, and can automatically execute an appropriate response.
As part of the extensible Cortex XSOAR platform, TIM unifies threat intelligence aggregation, scoring, and sharing with playbook-driven
automation. It empowers security leaders with instant clarity into high-priority threats to drive the right response across the entire enterprise.
Cortex XSOAR provides a common platform for incidents and threat information, where there is no disconnect between external threat data and
your environment. Automated data enrichment of indicators provides analysts with relevant threat data to make smarter decisions.
Integrated case management allows for real-time collaboration, boosts operational efficiencies across teams, and automates playbooks to speed
response across security use cases.
Cortex XSOAR collects data from sources such as incidents, Unit 42, and external threat intel feeds. After the data is ingested, Threat Intel
playbooks examine the data proactively. The data gets deduped, normalized, and stored in the Threat Intel database so that a Threat Intel analyst
can start a threat analysis. The analyst can then send that information to firewalls, share it with other stakeholders, and take remedial action as
necessary.
Abstract
Typical use cases for analysts and how to set up the use cases by administrators.
The following examples illustrate typical use cases for Threat Intel Management analysts, including how to configure playbooks and jobs for
administrators.
In this example, Firewall Admins are responsible for ensuring employees can always access SaaS applications such as Zoom and Office 365.
They need to manage a stream of inbound change requests from the security team and other business units. Regardless of these daily changes,
critical apps must always be allowed. The network infrastructure of SaaS applications is constantly changing and rotating IP addresses and domains.
1. Configure a feed integration such as Office 365, Amazon AWS, Unit 42, etc.
For example, the TIM - Indicator Auto Processing playbook identifies indicators that shouldn’t be added to a block list, such as IP indicators
that belong to business partners or important hashes you do not wish to process.
3. Go to the Threat Intel page and run the following search to return IP, IPv6, or IPv6CIDR results:
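A search along the following lines returns those indicator types (illustrative; adjust to your environment):
type:IP or type:IPv6 or type:IPv6CIDR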
1. In the Instances page, search for Generic Export Indicators Service and Add instance.
5. Test the EDL by running the curl command: curl -v -u user:pass https://ext-<tenant>crtx<region>.paloaltonetworks.com/xsoar/instance/execute/<instance-name>
The security team needs to leverage threat intelligence to block known bad domains, IPs, hashes, etc. (indicators). The indicators are collected from many different sources and need to be normalized, scored, and vetted (to ensure business partners are not blocked) before being pushed to security devices such as firewalls for blocking.
1. Configure feed integrations such as Unit 42 ATOMs feed, TAXII feed, etc.
1. Go to Settings & Info → Settings → Integrations → Instances and in the Category field, select Threat Intel Feeds.
3. Go to the Threat Intel page and run the following search to return IP addresses with the verdict malicious with high reliability:
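A search along the following lines returns such indicators, using the aggregatedReliability field described earlier (illustrative):
type:IP and verdict:Malicious and aggregatedReliability:"A - Completely reliable"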
1. In the Instances page, search for Generic Export Indicators Service and Add instance.
5. Test the EDL by running the curl command: curl -v -u user:pass https://ext-<tenant>crtx<region>.paloaltonetworks.com/xsoar/instance/execute/<instance-name>
Incident enrichment
Incident Responders are receiving an endless stream of alerts, usually with little to no context of the external threat. Enriching alerts with curated
threat intelligence from Unit 42 enables analysts to see the bigger picture and make more informed decisions when responding to alerts, ensuring
comprehensive containment of the threat.
3. Configure threat feeds and enrichment sources, relevant to your use case. For example, Unit 42 ATOMs Feed, Feodo Tracker IP Blocklist
Feed, TAXII Feed (to ingest ISAC data).
For example, configure the Palo Alto Networks Cortex XDR Investigation and Response integration to ingest alerts from Cortex XDR. On the Incidents page, open an incident from Cortex XDR. In the Case Info tab, you can see brief information, such as affected hosts and affected users. In the Investigation tab, you can view the alert file artifacts or network artifacts. You can deep dive into an indicator by viewing the summary (verdict, sources, related incidents, timeline, relationships, etc.). In the Unit 42 Intel tab, you can get additional details from Unit 42. For a file, you can see static and dynamic analysis. In the Work Plan, a playbook runs to determine whether an investigation is needed.
A new critical vulnerability is disclosed to the public which impacts the world's most popular applications (e.g. Log4J). The security team has
already begun the search for the vulnerable software, however, the threat intel team needs to inform all technology employees of this critical
threat. The intel team crafts a brief report summarizing the threat and adds analysis describing why this threat is relevant to the organization. This
is also a great way to “advertise” the availability of threat intelligence services across the organization.
1. Ingest industry news events and security research blogs using the RSS Feed integration, for example, Threatpost, Dark Reading, ZDNet Security, and Krebs on Security.
3. Create a report.
In this example, you want to create a flash intel report about the Log4j security vulnerability, which will be sent to all internal stakeholders. You
want to include the impact on the business with a brief analysis.
1. Get the relevant RSS feeds by going to Threat Intel → Indicators and searching for sourceBrands:"RSS Feed" log4j.
3. Create a report by selecting Threat Intel Reports → New Threat Intel Report.
Each report type has different fields. After you create the report you can update all fields.
For example, you may want to add the RSS feeds to the relationship fields as well as the CVE file that it relates to.
8. (Optional) Mark the report for review and send it to one of your colleagues for review.
You have the option to share it via PDF for a wider reach.
The security team needs to perform due diligence, ensuring the organization has not been impacted by newly collected intelligence. Querying
historical log data is a slow and tedious process for analysts (after acquisitions, organizations have multiple log stores). Additionally, running taxing
historical queries is not possible during working hours, as compute resources are prioritized for SOC operations. The security team needs to
automate this task during non-peak hours.
1. Configure feed integrations such as Unit 42 ATOMs feed, TAXII feed, etc.
For example, the TIM - Indicator Auto Processing playbook identifies indicators that shouldn’t be added to a block list, such as IP indicators
that belong to business partners or important hashes you do not wish to process.
3. Define a job with the Triggered by delta in feed option, which triggers the playbook when the indicators are fetched.
4. To push the processed indicators to a SIEM, use the TIM - Add All Indicators Types to SIEM playbook.
Before you start customizing and investigating, you should be familiar with the following terms:
Indicators
Indicators are artifacts associated with security incidents and are an essential part of the incident management and remediation process. They
help correlate incidents, create hunting operations, and enable you to easily analyze incidents and reduce Mean Time to Response (MTTR).
Fetch indicators
Cortex XSOAR includes integrations that fetch indicators from either a vendor-specific source, such as TAXII, or from a generic source, such as a
CSV or JSON file.
Indicator ingestion
Cortex XSOAR automates threat intel management by ingesting and processing indicator sources, such as feeds and lists, and exporting the
enriched intelligence data to SIEMs, firewalls, and any other system that can benefit from the data. These capabilities enable you to sort through
millions of indicators daily and take automated steps to make those indicators actionable in your security posture.
Integration: Feed integrations fetch indicators from a feed, for example, TAXII, Office 365, and Unit 42 ATOMs Feed. Indicator classification and mapping is done in the feed integration and not in the Cortex XSOAR Settings & Info → Settings → Object Setup → Indicators → Classification & Mapping tab.
Indicator extraction: Indicators are extracted from selected incidents that flow into Cortex XSOAR from an integration. Only the value of an indicator is extracted, so no classification or mapping is needed.
When indicators are ingested, regardless of their source, they have a unified, common set of indicator fields, including traffic light protocol (TLP),
expiration, verdict, and tags.
The same indicator can originate from multiple sources and be enriched with multiple methods (such as integrations, scripts, and
playbooks). Cortex XSOAR implements a smart merge logic to make sure indicators are accurately scored (verdict) and aggregated.
Indicator fields are merged according to the source reliability hierarchy. When there are two different values for a single indicator field, the field is
populated with the value provided by the source with the highest reliability score. For multi-select and tag fields, new values are appended, rather
than replacing the original values.
To avoid exceeding API quotas for third-party services, indicators are only updated after the cache expiration period. By default, the cache expires
4,320 minutes (3 days) after an indicator is updated, and cannot be cleared manually. The cache expiration can be set in the indicator type
parameters. Indicator enrichment cache expiration only applies to automatic enrichment, triggered by the enrichIndicators command, and
does not apply when you run reputation commands such as !ip.
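For example, !enrichIndicators indicatorsValues=8.8.8.8 (the argument name here is an assumption) goes through the enrichment flow and respects the cache, while !ip ip=8.8.8.8 executes against the source even if the cache has not expired.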
Indicator timeline
The indicator timeline displays an indicator’s complete history, such as the first-seen and last-seen timestamp and changes made
to indicator fields.
Indicator expiration
When ingesting and processing many indicators daily, it's important to control whether they are active or expired and to define how and when indicators expire. Cortex XSOAR offers multiple options to set indicator expiration.
Exclusion list
Indicators added to the exclusion list are disregarded by the system and are not created or involved in automated flows such
as indicator extraction.
Jobs
Administrators can define a job to trigger a playbook when the specified feed or feeds finish a fetch operation that includes a modification to the
list. The modification can be a new indicator, a modified indicator, or a removed indicator.
Abstract
Indicators are artifacts associated with incidents and are an essential part of the incident management and remediation process.
Indicators are text-based artifacts associated with incidents, such as IP addresses, URLs, and email addresses, and are an essential part of the incident management and remediation process. They help correlate incidents, create hunting operations, and enable you to easily analyze incidents and reduce Mean Time to Response (MTTR).
Step 1. Identify the indicator type and value: Cortex XSOAR analyzes the text-based artifact and checks whether it matches an indicator type profile. The indicator value is extracted based on the indicator profile definition. You can set up indicator extraction to run automatically in the incident type or playbook. Indicator extraction identifies indicators from various sources within Cortex XSOAR, such as email headers, IP addresses, email addresses, and file hashes in file attachments. For more information about indicator extraction, see Indicator extraction. You can create or customize existing indicator types, fields, and layouts for your use case. For more information, see Customize indicator types, fields, and layouts.
Step 2. Formatting and validation: Formatting and validation of the indicator are done using a formatting script that validates the data representing the indicator's value and determines how the data appears in Cortex XSOAR. For example, the URL indicator type uses the FormatURL script, which defangs URLs. For more information, see Formatting scripts.
Step 3. Create or update an indicator: If the indicator is not known to Cortex XSOAR, an indicator is created (you can also create indicators manually). If the indicator is already known, it is updated with any new data, including last-seen dates. If the indicator is in an expired state but new data is received, it changes to active status. If you have a TIM license, you can add Unit 42 data by adding an indicator to Cortex XSOAR. For more information, see Query indicators with Unit 42 Intel data.
Step 4. Gather reputation and enrichment information: You can run reputation commands and enhancement script commands on indicator values. You need to set them to run in the indicator type. The enhancement script also runs on the indicator type. Both determine the indicator's verdict. For more information, see Enhancement scripts. When a reputation command or enhancement script is run and attached to an incident, the verdict is added to the incident context. Generally, the information is found under the DBotScore key, the specific indicator type, and specific vendor information.
NOTE:
To run enhancement scripts and reputation commands, you must configure a relevant enrichment integration, such as VirusTotal or IPinfo v2.
You can exclude reputation commands from specific integrations in the indicator type settings if, for example, you have limited API credits or the integration is unreliable.
Step 5. Reputation scripts: Reputation scripts can be used if you want to override existing reputation commands with custom logic. For indicator types without reputation commands, a custom reputation script can be applied. Use it to customize verdicts and the DBotScore context entry. For more information, see Reputation scripts.
Step 6. Map indicator fields: After your indicator is enriched, you can map fields. Some indicator fields are automatically mapped by Cortex XSOAR to contain the relevant values. The default settings can be changed for each indicator type. You can create and associate any custom fields with indicators. For more information, see Indicator classification and mapping.
Step 7. Expiration: Many indicators have expiration dates because threats are dynamic; IP addresses may change, systems may be fixed, and so on. When configuring an indicator type, you can set it to never expire or to expire after a time interval. For more information, see Configure indicator expiration.
Abstract
A Threat Intel Management (TIM) analyst may have a different persona in the SOC. In some organizations, the TIM analyst's role is part of the SOC analyst's scope of work, but the two have different workflows and use cases. The daily work of SOC analysts and TIM analysts differs.
Role: Responsibility
Threat Intel Analyst (SOC Tier 2-3): Incident responders and threat hunters
Create indicator types, fields, and layouts, customize the exclusion list, indicator reputation, and indicator extraction.
Customize your indicators to your specific needs. Edit existing indicator types, fields, and layouts, add scripts, and configure tailored extraction
and expiration settings for optimal insights.
Abstract
Cortex XSOAR provides out-of-the-box indicator types, fields, and layouts. However, you may need to customize indicators to suit your use case,
either by editing existing indicator types, fields, or layouts or by creating new ones to help investigate and respond to potential security threats
specific to your organization.
Custom indicators can provide more accurate and efficient identification of potential cyber security threats. For example, you can customize
indicators to monitor and detect unusual activity within your organization's internal network. This can include creating indicators to flag
unauthorized access attempts or unusual data transfers, or identifying insider threats or compromised accounts.
Before customizing an indicator, review the ingested indicator and then customize it as needed. After ingesting incidents and indicators, check the
indicator information associated with your incident. From an incident, review the context data (from Side panels). If there is information in the
context data that you don't see in the indicator, map it into indicator fields and display it in the layout.
Option: Description
Indicator type: Customize an indicator type by setting the relevant fields, display layout, scripts to run, and reputation command for the indicator type. You can create a new indicator type or edit an out-of-the-box indicator type. For more information, see Create an indicator type.
Indicator fields: Custom indicator fields add specific details or attributes to indicators, helping to better classify and understand the nature of potential security threats. You can edit an existing indicator field or create a new one. After creating a new indicator field, map the field to the relevant context data. You can add the field to an indicator type and view it in an indicator layout.
Indicator layout: Custom indicator layouts enable you to organize and display specific details about potential threats in a way that makes sense for your organization, making it easier to quickly understand and respond to security issues. You can view, customize, import, and export indicator layouts, as well as add a custom layout to an indicator type.
NOTE:
If you do not have a TIM license, you cannot edit the Indicator Summary layout. You can only edit the Quick view and the
New/Edit form tabs.
Abstract
In addition to the system-level indicator types, you can create custom indicator types in Cortex XSOAR.
Indicators are categorized by indicator type, which determines the indicator layout and fields that are displayed and which scripts are run on
indicators of that type. Cortex XSOAR includes several out-of-the-box indicator types, such as:
IP Address
Domain
URL
File
For more information about file indicators and how to configure the file hash, see File indicators.
When you create a new indicator type, you define its properties, including whether and how to format the indicator data and how the verdict is
calculated.
2. Click New.
3. In the Settings tab, add the required indicator profile, such as name and Regex.
The following example describes how to create a new indicator type to manage employee emails, for example, for resource management or insider threat investigation.
Create a new indicator type for employee email addresses that contain the “our_company.com” company domain.
1. Under Settings & Info → Settings → Object Setup → Indicators → Types → New, in the Settings tab, define the following.
Regex: .*?@our_company.com (simplified to capture all the email addresses using the our_company.com domain).
Reputation command: Not relevant for this example, since we don't want any external enrichment.
Formatting script: If more formatting is needed, you can use a formatting script to edit the saved value.
Reputation script: If needed, you can create a reputation script to affect the DBot score given to the new custom indicator.
2. In the Custom Fields tab, map custom fields for the new indicator type.
You can map fields returned using an integration such as Active Directory to obtain more data about the actual user to whom the email
belongs. You can also collect data using integrations such as Okta (MFA, SSO), SIEM, and email security. Fields such as Username, Full
name, and various groups the user is part of as well as other identifiers are returned to context and mapped into the indicator using the
custom fields.
NOTE:
If you miss mapping any field, you can create additional new indicator fields and either relate them to all indicator types, or relate them
only to the new indicator type (recommended).
You can use the Dynamic section in the indicator layout to run Python scripts and return results from within the layout itself.
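For example, a minimal dynamic section script might look like the following. This is a sketch; the section content and script logic are illustrative, and it assumes the standard CommandResults and return_results helpers available to Cortex XSOAR automations.
def main():
    # A dynamic section script returns an entry that the layout renders.
    # Returning markdown via readable_output is one common pattern.
    return_results(CommandResults(readable_output='### Employee details\nReturned from your directory integration'))

if __name__ in ('__main__', '__builtin__', 'builtins'):
    main()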
Abstract
Create or edit an indicator type and configure fields that determine how the system interacts with indicators of that type.
Each indicator type has its own profile that enables Cortex XSOAR to recognize it across the platform. During the indicator extraction flow, the
order of execution is regex, formatting script, reputation command, and reputation script. You can update the following fields when updating an
indicator type.
Field: Description
Reputation script: The output of the reputation script is a verdict score, which is used as the basis for the indicator verdict. Reputation scripts must be tagged reputation to appear in the list for the indicator type. For more information, see Reputation scripts. The results of reputation scripts do not print to the War Room in the extraction flow.
Formatting script: Formatting scripts must be tagged indicator-format to appear in the list for the indicator type. For more information, see Formatting scripts.
Enhancement script: The enhancement script is not part of the indicator extraction flow and is run manually on the indicator type. Examples of enhancement scripts include an enrichment script and a script that runs a search in a SIEM for the indicator. After indicators are identified, you can go to the Indicator Quick View page, click the Actions button, and run an enhancement script directly on an indicator. For these scripts to be available in the menu, they need the enhancement tag. For more information, see Enhancement scripts. When you run an enhancement script, it is the equivalent of running the script in the CLI. The script can write to context, return an entry, and so on.
Reputation command: Calculates the reputation of indicators of this type. The verdict (reputation) is only associated with the specific indicator value on which it's run (not the indicator type). The command returns the reputation of the indicator value as an entry with entry context and, in some cases, also returns context values that can be mapped to the indicator type custom fields. The results of the reputation command do not print to the War Room in the indicator extraction flow. For more information, see Reputation commands.
Regex: The regular expression (regex) to identify indicators for this indicator type.
Exclude these integrations for the reputation command: Integrations to exclude when calculating the verdict, evaluating, and enriching indicators of this indicator type. This only applies to the indicator extraction and enrichment mechanism and does not apply when directly running reputation commands such as !ip, !url, and !domain.
Indicator Expiration Method: The method by which to expire indicators of this type. The expiration method that you select is the default expiration method for indicators of this indicator type. The expiration can also be assigned when configuring a feed integration instance, which overrides the default method. Time Interval: indicators of this type expire after the specified number of days or hours. For more information, see Configure indicator expiration.
Context path for verdict value (Advanced): When an indicator is extracted, the entry data from the command is mapped to the incident context. This path defines where in context the data is mapped.
Context value of verdict (Advanced): The value of this field defines the actual data that is mapped to the context path.
Cache expiration in minutes (Advanced): The amount of time (in minutes) after which the cache for indicators of this type expires. The default is 4,320 minutes (three days). The cache enables you to limit API requests by only updating indicators after a specific time period has passed. The cache cannot be cleared manually.
NOTE:
Indicator cache expiration rules only apply to standard enrichment (for example, running the enrichIndicators command). If you run a reputation command, such as !ip, the command executes even if the cache has not expired.
Abstract
You can have a single file indicator for file objects in Cortex XSOAR or each file can have a hash as its own indicator.
Cortex XSOAR uses a single File indicator for file objects. As a result, files that appear with their SHA256 hash, and all other hashes associated with the file (MD5, SHA1, and SSDeep), are listed as properties of the same indicator. In addition, when ingesting an incident through an integration, all file information is presented as one object.
When investigating an incident, in the Indicators field (Investigation or Case info tabs), click a File indicator. You can see additional information for
that indicator, including:
SHA256
MD5
SHA1
SSDeep
The File.Name values associated with the indicator hash, based on File context objects created in Cortex XSOAR (automatically
populated).
Modified
The date and time the File indicator was last modified.
First Seen
The date and time the file was first seen in Cortex XSOAR.
If the file appears in a different incident with a different name and has any of the same hash values, it automatically associates with the original
indicator.
NOTE:
A new File indicator only affects new indicators ingested to the Cortex XSOAR platform. Indicators that were already in Cortex XSOAR continue
to appear as their respective hash-related indicators.
If you want to have each file hash appear as its own indicator, do the following:
File SHA-256
File SHA-1
File MD5
SSDeep
4. Click Enable.
When a file is created in the system, whether from a feed, indicator extraction or manually added, its original value is created as the indicator’s
value, while its complementing hashes are saved as fields.
ID: 1
Type: File
Value: FF79D3C4A0B7EB191783C323AB8363EBD1FD10BE58D8BCC96B07067743CA81D5
SHA256: FF79D3C4A0B7EB191783C323AB8363EBD1FD10BE58D8BCC96B07067743CA81D5
MD5: D7AB69FAD18D4A643D84A271DFC0DBDF
Afterwards, through a custom feed, cmd.exe's MD5 hash D7AB69FAD18D4A643D84A271DFC0DBDF is brought in, and Cortex XSOAR creates an indicator of type File with the MD5 hash as its value.
ID: 2
Type: File
Value: D7AB69FAD18D4A643D84A271DFC0DBDF
MD5: D7AB69FAD18D4A643D84A271DFC0DBDF
The automatic merging flow for the File indicator type identifies that the two indicators are the same file and merges them together.
The final file indicator is a consolidation of the two, and is the same as the first example above:
ID: 1
Type: File
Value: FF79D3C4A0B7EB191783C323AB8363EBD1FD10BE58D8BCC96B07067743CA81D5
SHA256: FF79D3C4A0B7EB191783C323AB8363EBD1FD10BE58D8BCC96B07067743CA81D5
MD5: D7AB69FAD18D4A643D84A271DFC0DBDF
Abstract
Formatting scripts validate input and modify how indicators are displayed.
Validate inputs, for example, to check that the top-level domain (TLD) is valid.
Modify how the indicator appears in Cortex XSOAR such as the War Room.
After indicator values are extracted according to the defined regex, the formatting script can be used to modify how the indicator value appears in
the War Room and reports. For example, the IP indicator type uses the UnEscapeIPs formatting script, which removes any defanged characters
from an IP address, so 127[.]0[.]0[.]1 is formatted to 127.0.0.1. When you click the IP address in the War Room, you see the formatted IP
address. This extracted indicator value is then added to the Threat Intel database.
You can create a new script, or you can use an out-of-the-box formatting script on the Scripts page, for example:
UnEscapeIPs: Removes escaping characters from IP addresses. For example, 127[.]0[.]0[.]1 transforms to 127.0.0.1.
ExtractDomainAndFQDNFromUrlAndEmail: Extracts domains and FQDNs from URLs and emails, used by the Domain indicator. It
removes prefixes such as proofpoint or safelinks, removes escaped URLs, and extracts the FQDN.
ExtractEmailV2: Verifies that an email address is valid and only returns the address if it is valid.
In the following example, a simplified version of the RemoveEmpty script removes empty items, entries, or nodes from an array. The EMPTY_TOKENS list and the input handling at the top are reconstructed for completeness and are illustrative.
// Tokens treated as empty in addition to '', null, and undefined (illustrative list).
var EMPTY_TOKENS = ['none', 'None', 'null', 'Null', '-'];

// The input argument can be a single value, a CSV string, or an array.
var vals = Array.isArray(args.input) ? args.input : String(args.input).split(',');

function toBoolean(value) {
    if (typeof(value) === 'string') {
        return value.toLowerCase() === 'true';
    }
    return Boolean(value);
}

function isObject(o) {
    return o instanceof Object && !(o instanceof Array);
}

// A value is empty if it is undefined, null, an empty or token-listed string,
// an array containing only empty items, or an object with no keys.
function isEmpty(v) {
    return (v === undefined) ||
        (v === null) ||
        (typeof(v) == 'string' && (!v || EMPTY_TOKENS.indexOf(v) !== -1)) ||
        (Array.isArray(v) && v.filter(x => !isEmpty(x)).length === 0) ||
        (isObject(v) && Object.keys(v).length === 0);
}

// Recursively delete empty properties from an object, including objects nested in arrays.
function removeEmptyProperties(obj) {
    Object.keys(obj).forEach(k => {
        var ov = obj[k];
        if (isObject(ov)) {
            removeEmptyProperties(ov);
        } else if (Array.isArray(ov)) {
            ov.forEach(av => isObject(av) && removeEmptyProperties(av));
            obj[k] = ov.filter(x => !isEmpty(x));
        }
        if (isEmpty(ov)) {
            delete obj[k];
        }
    });
}

// When remove_keys is true, also prune empty properties inside objects.
if (toBoolean(args.remove_keys)) {
    vals.forEach(v => isObject(v) && removeEmptyProperties(v));
}

return vals.filter(x => !isEmpty(x));
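For example, you could run the script from the CLI as !RemoveEmpty input=${SomeContextKey} remove_keys=true (the context key here is illustrative; the argument names are those used in the script above).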
The formatting script requires a single input argument named input that accepts a single indicator value or an array of indicator values. The input
argument should be an array to accept multiple inputs and return an entry-result per input.
Argument: Description
input: Accepts a string or an array of strings representing the indicator value(s) to be formatted. It is accessed within the script using demisto.args().get('input', []). In the script settings, the Is Array checkbox must be selected. The script code must be able to handle a single indicator value (as a string), multiple indicator values in CSV format (as a string), and an array of single indicator values (an array).
The indicators appear in a human-readable format in Cortex XSOAR. The output should be an array of formatted indicators or an array of entry
results (an entry result per indicator to be created). The entry result per input can be a JSON array to create multiple indicators. If the entry result
is an empty string, it is ignored and no indicator is created.
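For example, a minimal custom formatting script might look like the following. This is a sketch that refangs IP-style values, assuming the standard argToList and return_results helpers available to Cortex XSOAR Python automations.
def main():
    # 'input' can be a single value, a CSV string, or an array of values.
    values = argToList(demisto.args().get('input', []))
    # Return one formatted result per input; empty strings are ignored and create no indicator.
    return_results([v.replace('[.]', '.') for v in values])

if __name__ in ('__main__', '__builtin__', 'builtins'):
    main()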
Single-value result:
results = CommandResults(
outputs_prefix='VirusTotal.IP',
outputs_key_field='Address',
outputs={
'Address': '8.8.8.8',
'ASN': 12345
}
)
return_results(results)
Multiple-value results:
results = [
CommandResults(
outputs_prefix='VirusTotal.IP',
outputs_key_field='Address',
outputs={
'Address': '8.8.8.8',
'ASN': 12345
}
),
CommandResults(
outputs_prefix='VirusTotal.IP',
outputs_key_field='Address',
outputs={
'Address': '1.1.1.1',
'ASN': 67890
}
)]
return_results(results)
NOTE:
Formatting scripts must have the indicator-format tag to appear in the list.
NOTE:
Formatting scripts for out-of-the-box indicator types are system-level, which means that the formatting scripts for these indicator types are not
configurable. To create a formatting script for an out-of-the-box indicator type, you need to disable the existing indicator type and create a new
(custom) indicator type. If you configured a formatting script before this change and updated your content, this configuration reverts to content
settings (empty).
You can run out-of-the-box or custom formatting scripts in the CLI to check that the extracted indicator data is properly formatted.
The following are examples of the syntax for running the out-of-the-box UnEscapeIPs formatting script in the CLI.
!UnEscapeIPs input=127.0.0[.]1,8.8.8[.]8
!UnEscapeIPs input=${contextdata.indicators} (where the key contextdata.indicators in the context object is an array)
Abstract
Enhancement scripts are run manually and can enrich indicators, write to context, and return entries to the War Room.
Enhancement scripts enable you to gather additional data about the highlighted entry in the War Room. They can enrich indicators, search a SIEM
for a specific indicator, write indicator details to context, and return entries to the War Room.
NOTE:
Enhancement scripts are different from reputation commands. A reputation command runs in every enabled integration that includes that command, to enrich the indicator. The ip reputation command, for example, runs the IP command in every enabled integration that supports it, to collect data from multiple sources. An enhancement script is manually run after the initial extraction and enrichment for the indicator type is complete.
The enhancement script requires the indicator value as the input argument.
Argument: Description
The value of the indicator: For example, ip, email, or url. The argument name should match the indicator type in lower case. For example, the IPReputation script requires the ip input, an EmailReputation script requires email, and a DomainReputation script requires domain.
The enhancement script output depends on its input because the script is run manually. If you want the output to be added to indicator enrichment
or the Threat Intelligence screen, it should follow the DBotScore convention in the content output as described in
https://xsoar.pan.dev/docs/integrations/dbot.
output = {
    'Type': entryTypes['note'],
    'ContentsFormat': formats['json'],
    'Contents': 'this is the enrichment data',
    'EntryContext': {
        'Email': '[email protected]',
        'DBotScore': {},
    },
}
return_results(output)
NOTE:
Enhancement scripts must have the enhancement tag applied to appear in the list.
You can run out-of-the-box or custom enhancement scripts in the CLI to enrich specific indicator values.
The following are examples of the syntax for running the out-of-the-box IPReputation and URLReputation enhancement scripts in the CLI.
!URLReputation url=cardcom.com
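Following the same argument convention described above, the IPReputation script can be run as:
!IPReputation ip=8.8.8.8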
Abstract
Reputation scripts are used to assess and assign reputation scores to indicators. These scripts integrate external threat intelligence or internal
data sources to evaluate the reputation of indicators (such as IP addresses, URLs, or file hashes). Reputation scripts enable you to implement
custom logic and algorithms for determining the reputation of indicators.
Reputation scripts return the verdict of an indicator as a number. The number overrides the verdict returned from the reputation command but does not override a manually set verdict. By default, the reliability of the score from a reputation script is A++ - Reputation script.
You can modify the reliability by navigating to Settings & Info → Settings → System → Server Settings → + Add Server Configuration and adding
the server configuration enrichment.reputationScript.reliability with the desired reliability score.
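For example, to lower the reliability, add the key enrichment.reputationScript.reliability with a value such as B - Usually reliable (the value format shown here is an assumption).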
NOTE:
The Reputation script overrides any default settings for the indicator that relates to the verdict.
You can create a new reputation script, or you can use an out-of-the-box reputation script in the Scripts page, for example:
CertificateReputation
cveReputation
MaliciousRatioReputation
SSDeepReputation
The reputation script requires a single input argument named input that accepts an indicator value.
Argument: Description
input: Either a number or a DBotScore entry. It can be a raw number, which is the score, or a full entry with a DBotScore.
def main():
    # The 'input' argument can contain a single value or a CSV list of values.
    url_list = argToList(demisto.args().get('input'))
    entry_list = []
    # ... evaluate each value and append a score entry to entry_list ...
    demisto.results(entry_list)
Constant: Value
Common.DbotScore.NONE: 0
Common.DbotScore.GOOD: 1
Common.DbotScore.SUSPICIOUS: 2
Common.DbotScore.BAD: 3
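For example, a minimal custom reputation script might look like the following. This is a sketch: the WATCHLIST list is hypothetical and stands in for your own custom logic, and the constants are those listed above.
WATCHLIST = ['1.2.3.4', 'bad.example.com']  # hypothetical internal watchlist

def main():
    values = argToList(demisto.args().get('input'))
    scores = []
    for value in values:
        # Return a raw verdict number per input value.
        scores.append(Common.DbotScore.BAD if value in WATCHLIST else Common.DbotScore.GOOD)
    demisto.results(scores)

if __name__ in ('__main__', '__builtin__', 'builtins'):
    main()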
NOTE:
Reputation scripts must have the reputation tag applied to appear in the list.
You can run out-of-the-box or custom reputation scripts in the CLI to set the verdict for a specific indicator.
The following are examples of the syntax for running the out-of-the-box CertificateReputation and MaliciousRatioReputation reputation scripts in the CLI.
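For example (both scripts take the single input argument described above; the placeholder values are illustrative):
!CertificateReputation input=<certificate indicator value>
!MaliciousRatioReputation input=<indicator value>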
Abstract
Reputation commands run based on the indicator type and return a verdict for the indicator.
Reputation commands are built-in or custom commands that use integrations such as Unit 42 to provide predefined functionalities for obtaining an
indicator verdict for specific indicator types. These commands simplify the process of fetching reputation data from external services or threat
intelligence feeds without requiring extensive scripting. Reputation commands come with preconfigured parameters and settings for commonly
used threat intelligence sources.
NOTE:
Running a reputation command directly (such as !ip) might not apply the result to an indicator, nor does it use the enrichment cache. To ensure
an indicator is enriched, and to take advantage of caching, use the enrichIndicators command or the Enrich button in the UI. This runs the
appropriate reputation command/script based on the indicator type settings. Note that extracted indicators are enriched in the same way.
You can create a new reputation command, or you can use an out-of-the-box reputation command, for example:
ip
file
url
domain
For more details on using out-of-the-box reputation commands or developing new reputation commands, see Generic Commands Reputation.
The reputation command uses the indicator value as the input argument.
Argument: Description
The value of the indicator: For example, ip, email, or url. Inputs are based on different integrations. Basic inputs are common to all reputation commands. For example, the !ip command has the following basic inputs:
- name: ip
  arguments:
  - name: ip
    default: true
    description: List of IPs.
    isArray: true
In this example, the ip command uses ip as the input, with the isArray field set.
The following are examples of the syntax for running the ip, domain, and file reputation commands in the CLI.
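For example (argument names follow the same pattern as the ip command above; the values are illustrative):
!ip ip=8.8.8.8
!domain domain=example.com
!file file=<file hash>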
Abstract
Indicator mapping enables you to automatically update the value of an indicator field without having to change it manually. For example, the IP indicator type automatically maps the Country field. If it were not mapped, the analyst would have to update the country manually each time the IP address changes country.
The value of an indicator field is determined by the value of the key in context data the field is mapped to in Cortex XSOAR.
When you start ingesting indicators, the incoming fields are automatically mapped to the relevant indicator fields. Sometimes you may want to
change the default settings or map custom indicator fields to specific context data. Before you map custom indicator fields, you need to create the
indicator field and add it to the relevant indicator type layout.
NOTE:
Some integrations have indicator mappers and classifiers, such as AWS. If you want to use an integration mapper or classifier, see Indicator
classification and mapping.
To map custom fields to the indicator type, you need to enrich the indicator either by using the !enrichIndicators command in the CLI or in a playbook, or by opening an indicator and clicking Enrich indicator. Enrichment returns an entry, with the EntryContext property serving as the source of the mapping process. When editing an indicator type, in the Custom Fields tab, type the name of the indicator exactly as it appears (in the Threat Intel page) and click Load.
For the enrichment data to be considered valid, EntryContext must include a DBotScore with the fields Indicator, Score, Vendor, and Type. If DBotScore has those fields, all the data in EntryContext is used as the source for the mapping, not only the data under EntryContext.DBotScore.
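For example, an entry whose EntryContext qualifies as a mapping source might look like the following. This is a sketch; the vendor name and IP data are illustrative.
entry = {
    'Type': entryTypes['note'],
    'ContentsFormat': formats['json'],
    'Contents': 'raw API response from the enrichment source',
    'EntryContext': {
        # A DBotScore with all four required fields makes the enrichment valid.
        'DBotScore': {'Indicator': '8.8.8.8', 'Score': 1, 'Vendor': 'ExampleVendor', 'Type': 'ip'},
        # Everything in EntryContext, not only DBotScore, is then available for mapping.
        'IP': {'Address': '8.8.8.8', 'ASN': 15169},
    },
}
return_results(entry)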
The custom fields associated with this indicator type are listed in the table. If you do not see a custom field in the list, verify that you
associated the custom field with this indicator type.
4. (Optional) In the Indicator Sample panel, enter an indicator relevant to the indicator type to load sample data.
5. Click Choose data path to map the custom field to a data path.
a. (Optional) Click the curly brackets to map the field to a context path.
b. (Optional) From the Indicator Sample panel, select a context key to map to the field.
Abstract
Create a new indicator field in the Fields tab in Cortex XSOAR. Add specific indicator information to incidents.
Indicator fields are used to add specific indicator information to incidents. When you create an indicator field, you can associate the field to a
specific indicator type or all indicator types. You can then map the custom field to the relevant indicator type. You can also add an indicator field
trigger script.
NOTE:
Cortex XSOAR IOC fields are based on the STIX 2.1 specifications. For more information, see Indicator field structure.
Field types
Boolean: Checkbox.
Grid (table): Include an interactive, editable grid as a field type for selected indicator types or all indicator types. To see how to create a grid field and use a script, see Add an indicator field trigger script to an indicator field. When you select Grid (table), you can format the table and determine whether the user can add rows.
HTML: Create and view HTML content, which can be used in any type of indicator.
Long text: Long text is analyzed and tokenized, and entries are indexed as individual words, enabling you to perform advanced searches and use wildcards. Long text fields can't be sorted or used in graphical dashboard widgets. While editing a long text field, pressing Enter creates a new line. Long text is case-insensitive.
Markdown: Add markdown-formatted text as a template, which is displayed to users in the field after the indicator is created. Markdown lets you add basic formatting to text to provide a better end-user experience.
An empty array field for the user to add one or more values as a comma-separated list.
Role: The role assigned to the indicator. Determines which users (by role) can view the indicator.
Short text: Short text is treated as a single unit of text and is not indexed by word. Advanced search, including wildcards, is not supported. Short text fields are case-sensitive by default but can be changed to case-insensitive when creating the field. While editing a short text field, pressing Enter saves the change.
Single select: Select a value from a list of options. Add comma-separated values.
1. Select Settings & Info → Settings → Object Setup → Indicators → Fields → New Field.
Parameter: Description
Field Name: A meaningful display name for the field. After you type a name, the Machine name shown below the field is automatically populated. The field's machine name is used for searching and in the CLI.
4. In the Basic Settings tab, define the values (according to the selected field type).
Field: Description
Script to run when field value changes: The script dynamically changes the field value when script conditions are met. For a script to be available, it must have the field-change-triggered-indicator tag when defining the script. For more information, see Indicator field trigger scripts.
Add to all indicator types: This option is selected by default, which means this field is available to use in all indicator types. Clear the checkbox to associate this field with a subset of indicator types.
Make data available for search: The values for this field can be returned in searches.
If you subsequently edit the field, you can optionally select Don't show in the indicators layout. If you select this, the indicator field does not
appear in the layout but the data is displayed in the context data.
8. (Optional) In the indicator type, map custom indicator fields, so an indicator field is automatically updated, without the analyst having to
manually change it.
Abstract
Indicator fields structure aligned with STIX standards to more easily share and work with IOCs.
Cortex XSOAR IOC fields are based on the STIX 2.1 specifications. These fields provide a guideline for the fields we recommend you maintain
within an IOC. None of the fields are mandatory, except the value field. Maintaining this field structure enables you to share and export IOCs to
additional threat intel based systems as well as to other cybersecurity devices.
Like STIX, Cortex XSOAR indicators are divided into two categories, STIX Domain Objects (SDOs) and STIX Cyber-observable Objects (SCOs).
The category determines which fields are presented in the layout of that specific IOC. In Cortex XSOAR, all SCOs can be used in a relationship
with either SDOs or SCOs.
Custom core fields - Custom fields shared by all IOCs of the same type (SDO or SCO). Fields may be empty.
Custom unique fields - Fields unique to a specific type of IOC. If a user associates more fields with the IOC, the additional fields are also
treated as unique.
Account
Similar to STIX User Account Object, this indicator type represents a user account in various platforms such as operating system, social media
accounts, and Active Directory. The value for the object is usually the username for logging in.
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Blocked A Boolean switch to mark the object as blocked in the user environment.
Account Type Specifies the type of the account, comes from account-type-ov by STIX.
Creation Date The date the account was created (not the date the indicator was created).
Display Name The display name of the account as it is shown in the UI.
User ID The account's unique ID according to the system it was taken from.
Domain / DomainGlob
Network domain name, similar to the STIX Domain Name object. The value is the domain address.
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Blocked A Boolean switch to mark the object as blocked in the user environment.
Community Notes Comments and free form notes regarding the indicator.
DNS Records All types of DNS records with a timestamp and their values (GRID).
WHOIS Records Any records from WHOIS about the domain (GRID).
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Blocked A Boolean switch to mark the object as blocked in the user environment.
Community Notes Comments and free form notes regarding the indicator.
None
File
Represents a single file. For backward compatibility, the indicator has multiple fields for different types of hashes. New hashes, however, should be
stored under the Hashes grid field. The file value should be its hash (either MD5, SHA-1, SHA-256, or SHA-512, in that order).
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Blocked A Boolean switch to mark the object as blocked in the user environment.
Community Notes Comments and free form notes regarding the indicator.
IP
Represents an IP address and its subnet (CIDR). If no subnet is provided, the address is treated as a single IP address (same as a /32 subnet).
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Blocked A Boolean switch to mark the object as blocked in the user environment.
Community Notes Comments and free form notes regarding the indicator.
WHOIS Records Any records from WHOIS about the IP address (GRID).
URL
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Blocked A Boolean switch to mark the object as blocked in the user environment.
Community Notes Comments and free form notes regarding the indicator.
Attack Pattern
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Community Notes Comments and free form notes regarding the indicator.
Kill Chain Phases The list of kill chain phases this Attack Pattern is used for.
External References List of external references consisting of a source and ID. For example, {source: mitre, id: T1189}
Campaign
A campaign is a grouping of adversarial behaviors that describes a set of malicious activities or attacks (sometimes called waves) that occur over
a period of time against a specific set of targets. Campaigns usually have well defined objectives and may be part of an intrusion set.
Campaigns are often attributed to an intrusion set and threat actors. The threat actors may reuse known infrastructure from the intrusion set or
may set up new infrastructure specifically for conducting that campaign.
Campaigns can be characterized by their objectives and the incidents they cause, the people or resources they target, and the resources they use (such as infrastructure, intelligence, malware, and tools).
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Community Notes Comments and free form notes regarding the indicator.
Objective The campaign’s primary goal, objective, desired outcome, or intended effect.
Course of action
A course of action is an action taken either to prevent an attack or to respond to an attack that is in progress. It may describe technical,
automatable responses (applying patches, reconfiguring firewalls), but can also describe higher level actions such as employee training or policy
changes. For example, a course of action to mitigate a vulnerability could describe applying the patch that fixes it.
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Community Notes Comments and free form notes regarding the indicator.
CVE
To preserve backward compatibility, our vulnerability indicator is referred to as CVE, but it is equivalent to the Vulnerability object defined by STIX.
Unlike STIX, in TIM the object is identified by its CVE number. A vulnerability is a weakness or defect in the requirements, designs, or
implementations of the computational logic (code) found in software and some hardware components (firmware) that can be directly exploited to
negatively impact the confidentiality, integrity, or availability of that system.
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Community Notes Comments and free form notes regarding the indicator.
Infrastructure
The Infrastructure SDO represents a type of TTP and describes any systems, software services and any associated physical or virtual resources
that support some purpose (for example, C2 servers used as part of an attack, a device or server that is part of a defense, and database servers
targeted by an attack). While elements of an attack can be represented by other SDOs or SCOs, the Infrastructure SDO represents a named
group of related data that constitutes the infrastructure.
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Community Notes Comments and free form notes regarding the indicator.
Infrastructure types The type of infrastructure being described. Values should come from STIX infrastructure-type-ov open
vocabulary.
Intrusion set
An intrusion set is a grouped set of adversarial behaviors and resources with common properties that is believed to be orchestrated by a single
organization. An intrusion set may capture multiple campaigns or other activities that are all tied together by shared attributes indicating a
commonly known or unknown threat actor. New activity can be attributed to an intrusion set even if the threat actors behind the attack are not
known. Threat actors can move from supporting one intrusion set to supporting another, or they may support multiple intrusion sets.
Whereas a campaign is a set of attacks over a period of time against a specific set of targets to achieve an objective, an intrusion set is the entire
attack package and may be used over a very long period of time in multiple campaigns to achieve potentially multiple purposes.
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Community Notes Comments and free form notes regarding the indicator.
Goals The high-level goals of this intrusion set, what it is trying to do.
Primary Motivation The primary reason, motivation, or purpose behind this intrusion set. Values should come from STIX attack-
motivation-ov open vocabulary.
Secondary Motivation The secondary reason, motivation, or purpose behind this intrusion set. Values should come from STIX attack-
motivation-ov open vocabulary.
Resource level Specifies the organizational level at which this intrusion set typically works. Values should come from
STIX attack-resource-level-ov open vocabulary.
Malware
Malware is a type of TTP that represents malicious code. It generally refers to a program that is inserted into a system, usually covertly. The intent
is to compromise the confidentiality, integrity, or availability of the victim's data, applications, or operating system (OS) or otherwise annoy or
disrupt the victim.
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Community Notes Comments and free form notes regarding the indicator.
Architecture The processor architectures (for example, x86 and ARM) that the malware instance or family is executable on. The values should come from the STIX processor-architecture-ov open vocabulary.
Capabilities Any of the capabilities identified for the malware instance or family. The values should come from
STIX malware-capabilities-ov open vocabulary.
Implementation The programming language(s) used to implement the malware instance or family. The values should come from
Languages the STIX implementation-language-ov open vocabulary.
Is Malware Family Whether the object represents a malware family (if true) or a malware instance (if false).
Malware Types Which type of malware. Values should come from STIX malware-type-ov open vocabulary.
Report
Reports are collections of threat intelligence focused on one or more topics, such as a description of a threat actor, malware, or attack technique,
including context and related details. They are used to group related threat intelligence together so that it can be published as a comprehensive
cyber threat story.
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Community Notes Comments and free form notes regarding the indicator.
Threat actor
Threat actors are individuals, groups, or organizations believed to be operating with malicious intent. A threat actor is not an intrusion set but may
support or be affiliated with various intrusion sets, groups, or organizations over time.
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Community Notes Comments and free form notes regarding the indicator.
Goals The high-level goals of this threat actor, what it is trying to do.
Resource Level The organizational level at which this threat actor typically works. Values for this property should come from
STIX attack-resource-level-ov open vocabulary.
Primary Motivation The primary reason, motivation, or purpose behind this threat actor. Values for this property should come from
STIX attack-motivation-ov open vocabulary.
Secondary Motivation The secondary reasons, motivations, or purposes behind this threat actor. Values for this property should come
from STIX attack-motivation-ov open vocabulary.
Sophistication The skill, specific knowledge, special training, or expertise a threat actor must have to perform the attack. Values
for this property should come from STIX threat-actor-sophistication-ov open vocabulary.
Threat actor type The type(s) of this threat actor. Values should come from STIX threat-actor-type-ov open vocabulary.
Tool
Tools are legitimate software used by threat actors to perform attacks. Knowing how and when threat actors use such tools can help understand
how campaigns are executed. Unlike malware, these tools or software packages are often found on a system and have legitimate purposes for
power users, system administrators, network administrators, or even regular users. Remote access tools such as RDP and network scanning tools
such as Nmap are examples of tools that may be used by a threat actor during an attack.
Value Defines the indicator on Cortex XSOAR. The value is the main key for the object in the system.
Source Time Stamp When the object was created in the system.
Community Notes Comments and free form notes regarding the indicator.
Tool Types The kind(s) of tool(s) being described. Values for this property should come from STIX tool-type-ov open
vocabulary.
Kill Chain Phases The list of kill chain phases this attack pattern is used for.
Abstract
Associate Cortex XSOAR indicator fields with scripts that are triggered when the field changes.
Indicator field trigger scripts are automated responses that are triggered by a change in an indicator field value. In the script, you define the
change in the indicator field value to check for and the actions to take when the change occurs. For example, you can:
Create a script that runs when the Verdict field of an indicator changes. For example, the script will fetch all incidents related to the indicator
and take any action that is configured, such as reopening or changing severity.
Create a script that runs when the Expiration Status field changes. For example, you can define a script that will immediately update the
relevant allow/block list and not wait for the next iteration, as seen in the following sample script:
indicators = demisto.args().get('indicators')
new_value = demisto.args().get('new')

indicator_values = []
for indicator in indicators:
    current_value = indicator.get('value')
    indicator_values.append(current_value)

if new_value == "Expired":
    # update the allow/block list with the expired indicator values
    pass
else:
    # update the allow/block list with the active indicator values
    pass
NOTE:
You must have a TIM license to run field change-triggered scripts on indicator fields.
Scripts can be created in Python, PowerShell, or JavaScript on the Scripts page. To use a field trigger script, you need to add the field-change-triggered-indicator tag when creating the script. You can then add the script in the Attributes tab when you edit or create a custom indicator field. If you did not add the tag when creating the script, the script is not available for use.
Indicator field trigger scripts have the following triggered field information available as arguments (args):
Argument: Description
associatedToAll: Whether the field is associated with all or some indicators. Value: true or false.
cliName: The name of the field when called from the CLI.
ownerOnly: Specifies that only the creator of the field can edit it. Value: true or false.
selectValues: If this is a multi-select type field, these are the values the field can take.
Indicator field trigger scripts can be configured on the Verdict, Related Incidents, Expiration Status, and Indicator Type fields, as well as any
custom indicator fields.
Indicator field trigger scripts work in all TIM (Threat Intelligence Management) scenarios and workflows, except for feed ingestion.
Fields that can hold a list (related incidents, multi-select/tag/role type custom fields) will provide an array of the delta. For example, if a multi-
select field value has changed from ["a"] to ["a", "b"], the new argument of the script will get a value of ["b"].
Indicator field trigger scripts run as a batch. This means that if multiple indicators are changed in the same way and are set to trigger the
same action, it will happen in one batch.
For example, in the following scenario for a configured indicator field trigger script named myTriggerScript on the Verdict indicator field:
The Threat Intel Library has two existing Malicious indicators: 1.1.1.1 and 2.2.2.2.
An analyst changes the verdict of both indicators to Benign in a single action.
The myTriggerScript script will run just once, with the following parameters:
new - "Benign"
old - "Malicious"
indicators - "[{<indicator_1.1.1.1>},{<indicator_2.2.2.2>}]"
When writing indicator field trigger scripts, avoid scenarios that call the scripts endlessly (for example, a change in field A triggers script X,
which changes field B's value, which in turn calls script Y, which changes field A's value).
After creating an indicator field trigger script in the Scripts page in Python, PowerShell, or JavaScript, you can then associate it with an indicator
field.
3. In the Attributes tab, under Script to run when field value changes, select the desired indicator field trigger script.
NOTE:
Indicator field trigger scripts must have the field-change-triggered-indicator tag to appear in the list.
Abstract
Customize an indicator layout for an indicator type in Cortex XSOAR. View the layout in the indicator Summary and Quick View.
Each indicator type has a unique set of data relevant to that specific indicator type, including layouts. It is important to display the most relevant
data for users. Each out-of-the-box indicator comes with a layout. You can customize almost every aspect of the layout, including which tabs
appear, in which order they appear, who has permission to view the tabs, what information appears, and how it is displayed.
You can see which indicator type uses the indicator layout in the Types tab under Settings & Info → Settings → Object Setup → Indicators. The
indicator layout name appears in the Layout column.
You can customize the display information including fields for existing indicators, by modifying the sections and fields for the following views:
Section: Description
Indicator Summary: You can customize almost every aspect of the layout, including which tabs appear, the order they appear in, and who has permission to view them. In each field or tab, you can add filters by clicking the eye icon, which enables you to add conditions that show specific fields or tabs relevant to the indicator. You can add a script in the indicator layout, such as a mapping script, which determines where an IP address originates and displays it on a map.
Quick View: Add, edit, and delete sections, fields, and filters in the Quick View section from an incident.
New/Edit form: Add, edit, and delete fields and buttons to be displayed when creating or editing an indicator.
NOTE:
By default, when editing a list or text values in an incident/indicator layout, the changes are not saved until you confirm your changes (clicking
the checkmark icon in the value field). These icons are designed to give you additional security when updating fields in incidents and indicators.
You can change this default behavior by adding a server configuration. For more information, see Configure inline value fields.
1. Select Settings & Info → Settings → Object Setup → Indicators → Layouts → New Layout.
You can customize the Indicator Summary section, the Quick View, and the New/Edit form.
3. Customize the tabs by clicking the settings wheel icon and then doing the following:
Action Description
Rename You can also edit a tab's name by clicking the tab.
Show empty fields The setting that you configure in the layout becomes the default value seen in the report for the specific tab, which can then be overridden. You can also set a global default value using the UI.summary.page.hide.empty.fields server configuration, which can also be overridden for a specific tab.
Hide tab Hides the tab. Rather than deleting the tab, you can keep it for future use.
Format for exporting Build your layout based on A4 proportions to match the format used for exporting. Selecting this option hides the tab by default, but the tab remains available for export.
Display Filter Add or view a filter applied to the tab. If the filters apply, the specific fields or tabs are shown in the layout. If a mandatory field is not shown in the layout, the user is not obliged to complete it.
4. Do the following:
Drag and drop the required sections, fields, buttons, and tabs.
6. In the New/Edit Form, drag and drop the required fields and buttons.
You can also edit the Basic Information and the Custom Field sections.
2. Click the name of the indicator type layout you want to edit.
You are presented with the current layout, which is populated with demo data so you can see how the fields fit.
3. If using a Content Pack Indicator Type Layout, detach or duplicate the layout.
NOTE:
If you duplicate the layout, you need to update the indicator type to add the new layout.
While an indicator layout is detached, it does not receive content pack updates. If you detach an indicator type layout, edit, and later want
to receive content pack updates for that layout, we recommend you duplicate the indicator layout before reattaching the original to protect
your changes from content pack updates. When detached, you can also edit the layout from the Indicator Type tab.
a. Select the checkbox for the indicator layout you want to detach.
Customize sections
2. From the Sections tab in the Library, drag and drop the following sections:
Section Description
New Section After creating a new section, click the Fields and Buttons tab and drag and drop the fields as needed.
Cortex XSOAR out-of-the-box sections Out-of-the-box sections such as Expiration Status and Verdict.
General Purpose Dynamic Section You can add a script in the indicator layout. For example, to assign a script that determines and displays the Geolocation of an IP address on a map. For more information, see Set up Google Maps.
NOTE:
To remove or duplicate a section, select the section, click , and then select Duplicate or Remove.
3. Define the section properties by clicking and then selecting Edit section settings.
TIP:
Limit the number of incident fields to 50 in each section. You can create additional sections as needed.
You can determine how a section appears in the layout. For example, you may want a section header, or you can configure the fields to appear in rows or as cards. If some of the field values will be very long, use rows instead of cards. If the field values are short, you might want to use cards so you can fit more fields into a section.
To add a long description in the Description field, click Scrollable description to add a scrollbar, which enables the displayed information to grow to fit the content.
You can add content to the Indicator Summary tab, based on a script, by adding the script in the General Purpose Dynamic Section. The script can return simple text, markdown, or HTML, the results of which appear in the General Purpose Dynamic Section.
You can add any required information from a script. For example:
Add a mapping script that determines where an IP address originates and displays it on a map.
Add a custom widget to the indicator page. The procedure is similar for indicators and incidents.
Add the FeedRelatedIndicator script from the Scripts page, which contains information about the relationship between an indicator, an entity (such as malware), and other indicators (such as a MITRE ATT&CK indicator), and connects externally to those indicators, if relevant.
NOTE:
Ensure that you have added the dynamic-indicator-section tag to the script; otherwise, you can't select it when adding a script to the layout.
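For example, the following is a minimal sketch (hypothetical, not an out-of-the-box script) of a dynamic section script that returns markdown. It assumes the indicator object is passed to the script in the indicator argument; the script must be tagged dynamic-indicator-section:

# Minimal sketch of a dynamic section script for an indicator layout (hypothetical).
indicator = demisto.args().get('indicator', {})
value = indicator.get('value', 'unknown')

# Return a markdown entry; it is rendered inside the General Purpose Dynamic Section.
demisto.results({
    'Type': entryTypes['note'],
    'ContentsFormat': formats['markdown'],
    'Contents': '### Indicator details\n**Value:** ' + str(value),
})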
The layout must either be custom content (a layout you created), a layout duplicated from a content pack layout, or a detached layout from a content pack. You cannot edit a layout that is attached. To detach an attached layout, select the indicator layout and click Detach.
3. Drag and drop the General Purpose Dynamic Section onto the page.
4. Select the General Purpose Dynamic Section, click , and then click Edit section settings.
5. In the Name and Description fields, add a meaningful name and a description for the dynamic section that explains what the script displays.
6. In the Automation script field, select the script that returns data for the dynamic section.
7. Click OK.
You can add existing buttons or create buttons and then drag and drop them in the layout.
To add a custom button, create a script and then add the new button to the indicator layout and choose the script, as described in the example
below. These buttons can simplify and assist an analyst in carrying out various tasks. For example, you can create a button to run an enrichment
script on an identified indicator.
For fields (script arguments) that are optional, you can define whether to show them to analysts when they click on buttons. To expose an optional
field, select the Ask User checkbox next to the script arguments in the button settings page.
NOTE:
When creating a script for use in an indicator layout, the indicator-action-button tag must be assigned for the script to be available for
custom buttons.
In the following example, create a button that adds the indicator to a Hunt incident type so the Threat Intel team can review it.
1. Save the following script on your computer. On the Scripts page, click the upload script icon and upload the file.
commonfields:
  id: d3716514-4c2b-453c-8072-4fd4807bca0a
  version: 30
vcShouldKeepItemLegacyProdMachine: false
name: newIncidentFromIndicator
script: |+
  from pprint import pformat

  args = demisto.args()
  fields = {}
  fields['type'] = args['type']
  fields['details'] = args['indicator']['value']
  fields['name'] = args['type'] + " for " + args['indicator']['value']
  # Create the incident and read the new incident ID from the returned entry context
  res = demisto.executeCommand('createNewIncident', fields)
  newID = res[0]['EntryContext']['CreatedIncidentID']
type: python
tags:
- indicator-action-button
enabled: true
args:
- name: type
  required: true
  description: Incident Type
scripttarget: 0
subtype: python3
pswd: ""
runonce: false
dockerimage: demisto/python3:3.8.5.11789
runas: DBotWeakRole
When you upload the script to Cortex XSOAR, the newIncidentFromIndicator name and the indicator-action-button tag are already populated.
2. Go to Settings & Info → Settings → Object Setup → Indicators → Layouts and click the relevant indicator type layout.
3. From the Fields and Buttons tab, drag the +New Button and drop into the relevant section.
4. Click to configure.
5. Enter a descriptive name for the button. For this example, we call it Send to the Threat Hunt Team.
6. Select a color.
8. In the Script Arguments field, under the type field, add Hunt.
When you view an indicator and click this button, an incident is created with the Hunt incident type.
To test the button, add the layout to the indicator type, go to the Threat Intel (Indicators) page, create a new indicator, and assign it to the relevant indicator type. View the indicator, click the Send to the Threat Hunt Team button, and verify that a new incident has been created.
3. In the Layout field, from the dropdown list, add the customized layout.
a. In the Layouts page, select the new layout and then click Contribute to Marketplace.
b. In the dialog box select either Save and submit your contribution or Save and download your contribution for later use, which you can
view in the Contributions tab in Marketplace.
If you select Save and submit your contribution, your layout is validated and you are prompted to submit it for review. You can also view your contribution in Marketplace.
Abstract
The following table shows methods by which indicators are detected and ingested in Cortex XSOAR and how they are classified and mapped.
Integration: Feed integrations fetch indicators from a feed, for example, TAXII, Office 365, and Unit 42 ATOMs Feed. Indicator classification and mapping is done in the integration code (by duplicating the integration in Integrations → Instances) and not in the Indicators → Classification & Mapping tab. For more information, see Feed Integrations.
Indicator extraction: Indicators are extracted from selected incidents that flow into Cortex XSOAR, for example from a SIEM integration. Only the value of an indicator is extracted, so no classification or mapping is needed. For more information, see Indicator extraction.
The indicator classification and mapping feature enables you to take the data that Cortex XSOAR ingests from integrations, and classify and map
the data to indicator types and indicator fields. By classifying the data as different indicator types, you can process them with different playbooks
suited to their respective requirements.
NOTE:
When creating a new indicator type, you classify and map the indicator fields in the indicator type settings. For more details, see Map custom
indicator fields.
Classification determines the type of indicator that is created for data ingested from a specific integration. You create a classifier and define that
classifier in an integration.
You can map the fields from your third-party integration to the fields in your indicator layouts as follows:
Map your fields to indicator types irrespective of the integration or classifier. This means that you can create a mapping before defining an
instance and ingesting indicators. By doing so, when you do define an instance and apply a mapper, the data that comes in is already
mapped.
Create default mapping for all of the fields that are common to all indicator types, and then map only those fields that are specific to each indicator type individually. You can still overwrite the contents of a field in the specific indicator type.
When an integration fetches indicators, it populates the raw JSON object for the indicator. The raw JSON object contains all of the attributes
(fields) for an indicator. For example, source, when the event was created, the priority that was designated by the integration, and more. When
classifying ingested indicator data, you want to select an attribute (field) that can determine the indicator type.
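For example, a hypothetical raw JSON object for a fetched indicator might look like the following; here, the type attribute is a good candidate for classification because it distinguishes indicator types:

{
  "value": "198.51.100.7",
  "type": "ip-watchlist",
  "source": "ExampleFeed",
  "created": "2024-05-01T12:00:00Z",
  "priority": "high"
}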
Use this procedure to create a classifier or duplicate an existing classifier for ingested indicator data.
1. Select Settings & Info → Settings → Object Setup → Indicators → Classification & Mapping.
If the classifier is installed from a content pack, you need to duplicate and then edit.
3. Under Get data, select from where you want to import the indicator data. You will classify the indicator type based on this information.
NOTE:
You can optionally skip importing data. Click the pencil on the right of each indicator type on the right pane to enter the value manually.
Pull from instance: Select an existing integration instance to import indicator data from.
Upload JSON: Upload a formatted JSON file that includes the fields you want to classify by.
Cortex XSOAR searches through the imported indicator objects for the values for the field you select.
5. Drag the found values from the Unmapped Values column to the relevant indicator type on the right pane.
b. Select an existing integration instance you want to apply the indicator classifier to or create a new integration instance.
c. In the integration instance settings under Classifier, select the classifier you created and click Save.
Mappers enable you to map the information from ingested indicator data to the indicator fields that you have in your system.
1. Map all of the fields that are common to all indicators in the default mapping.
2. Map the additional fields that are specific for each indicator type, or overwrite the mapping that you used in the default mapping.
NOTE:
In the Classification & Mapping page, mappers do not indicate which indicator types they are configured for. When creating a mapper, it is best practice to include the indicator type in the mapper name. For example, Mail Listener - Phishing.
When mapping a list, we recommend you map to a multi-select field. Short text fields do not support lists. If you need to map a list to a short text
field, add a transformer in the relevant playbook task to split the data back into a list.
Use this procedure to create a mapper or duplicate an existing mapper to map all of the ingested indicator fields to an indicator layout.
1. Select Settings & Info → Settings → Object Setup → Indicators → Classification & Mapping.
If the mapper is installed from a content pack, you need to duplicate and then edit.
3. Under Get data, select from where you want to import the indicator data. You will map the indicator data based on this information.
Pull from instance: Select an existing integration instance to import indicator data from.
Upload JSON: Upload a formatted JSON file that includes the fields you want to map.
4. Under Indicator Type, start by mapping out the Common Mapping. This mapping includes the fields that are common to all of the indicator
types and saves you time having to define these fields individually in each indicator type.
5. Click the attribute (field) to which you want to map. You can further manipulate the field using filters and transformers.
6. Repeat this process for the other indicator types for which this mapping is relevant.
b. Select an existing integration instance you want to apply the classifier to or create a new integration instance.
c. In the integration instance settings under Mapper, select the mapper you created and click Save.
Abstract
Indicator extraction identifies indicators from different text sources in the system (such as War Room entries, email content, etc.), extracts them
(usually based on regex) and creates indicators in Cortex XSOAR. After extraction, the indicator can be enriched.
After indicators are extracted, they are enriched using commands and scripts defined for the indicator type. Indicator enrichment provides detailed
information about the indicator, based on enrichment feeds such as VirusTotal and IPinfo.
To extract indicators from incoming feeds without enrichment or to prevent enrichment for existing indicators, see Exclude indicators from
enrichment.
NOTE:
Reputation commands, such as !ip and !domain, can only be used after you configure and enable a reputation integration instance, such as VirusTotal or Whois.
Some content packs include a dashboard and widget that track API rate limit errors. You can use this information for troubleshooting and to make
decisions about indicator enrichment.
Incident types
You can extract indicators from incident fields when an incident is created and when an incident field changes. Indicator extraction rules for content pack incident types are determined by the content pack. For example, in a Phishing incident type, by default, IPv6 and IP indicators are extracted from the Destination IP field, and URL indicators are extracted from the Detection URL field.
If enabled, indicator extraction is automatic. For example, in a Phishing incident, indicator extraction is set to extract the IP indicator (in the incident type). When the incident field updates, the IP indicator is extracted automatically. In the War Room, you can check that the IP indicator has been extracted by typing 1.1.1.1. Cortex XSOAR recognizes the indicator as an IP indicator by matching it to the IP indicator's regex. It then extracts and enriches the indicator using an integration that implements the ip command (such as IPinfo).
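For example, you can also run the reputation command manually from the CLI (a hypothetical invocation, assuming an enabled integration implements the ip command):

!ip ip=1.1.1.1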
NOTE:
To change the indicator extraction rules for an incident type installed with a content pack, including an incident type propagated to a tenant in a multi-tenant environment, you need to detach the incident type. Once detached, the incident type does not receive new content from Cortex XSOAR. If you want to receive content updates, reattach the incident type. If you instead want to receive content updates and keep your changes, duplicate the incident type and edit the duplicate type. For more information, see Incident layout customization.
CAUTION:
Extracting indicators can adversely affect system performance. We recommend that you define extraction settings for each incident type,
as needed.
For example, for Malware you may want to extract all IP addresses, for Phishing you may only want to extract IP addresses from specific
email headers. For attachments, you may want to disable indicator extraction to reduce external API usage and protect restricted data
(the hash) from being sent.
Playbook tasks: For more information, see Set the indicator extraction mode for a playbook task.
Commands: Run a command using the command line in Cortex XSOAR during an investigation. For more information, see Extract and
enrich an indicator.
None
Inline
Out of band
For detailed information about the modes and how to set them up, see Indicator extraction modes.
Indicator Scripts
When creating or editing an indicator type, you can add the following scripts:
Enhancement scripts
Reputation scripts
During the indicator extraction and enrichment flow, the order of execution is: regex, formatting script, reputation command, and reputation script. Enhancement scripts are not part of this flow.
Indicators are identified using regex, and then the formatting script transforms the extracted value into a usable indicator for use in Cortex XSOAR in the War Room, reports, dashboards, etc. Reputation commands and scripts enable you to change the reputation of the indicator.
Enhancement scripts enable you to gather additional data about the highlighted entry in the War Room.
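For illustration, the following is a minimal sketch of a formatting script from the flow above (hypothetical, not an out-of-the-box script). It assumes the extracted values are passed to the script in the input argument, as is common for format scripts:

# Hypothetical formatting script: normalizes extracted domain values.
# Assumes the extracted values arrive in the "input" argument.
domains = argToList(demisto.args().get('input'))

# Lowercase each value and strip a trailing dot so indicators are stored consistently.
demisto.results([d.lower().rstrip('.') for d in domains])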
You can run commands in the CLI, such as !extractIndicators, !enrichIndicators, !ip, !domain, and reputation script commands such as !URLReputation and !IPReputation. For more information, see Extract and enrich an indicator.
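For example, a hypothetical CLI invocation that extracts indicators from free text:

!extractIndicators text="Suspicious traffic from 1.1.1.1 to malware.example.com"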
Abstract
Configure the indicator extraction mode. Options are none (no extraction), inline, out-of-band, or use system default.
Inline: Indicators are extracted within the context that the indicator extraction runs (synchronously). The findings are added to the context
data. For example, if you define indicator extraction for the phishing incident type as inline:
For incident creation, by default, the playbook you defined to run does not run until the indicators have been extracted.
For an on field change, extraction occurs before the next playbook tasks run. Use this option when you need to have the most robust
information available per indicator.
NOTE:
This configuration may delay playbook execution. While indicator creation using the command createIndicator is asynchronous,
automatic indicator extraction and enrichment is run synchronously. Data is placed into the incident context and is available via the
context for subsequent tasks.
Out of band: Indicators are extracted in parallel (asynchronously) to other actions. The extracted data is available within the incident, but it
is not available for immediate use in task inputs, or outputs, since the information is not available in real time.
For incident creation, out of band is used in rare cases where you do not need the indicators extracted for the playbook flow. You still want to
extract them and save them in the system as indicators, so that they can be reviewed at a later stage for manual review. System
performance may be better as the playbook flow does not stop to extract, but if the incident contains indicators that are needed or expected
in the playbook execution flow, inline should be used, as it will not execute the playbook before all indicators are extracted from the incident.
NOTE:
When using Out of band, the extracted indicators do not appear in the context. If you want the extracted indicators to appear, select Inline.
Use system default: Indicators are extracted according to the following defaults:
Incident creation: Sets the indicator extraction mode for incident creation. It extracts from all associated fields at the point of incident creation. You can change the value when editing an incident type. Default: Inline.
Incident field change: Sets the indicator extraction mode for incident field change. You can change the value when editing an incident type. Default: Out of band.
Tasks: Applies to the result of the task. You can change the value when editing a task. Default: None.
Manual extraction: Applies to commands triggered from the CLI. You can change the value when using the indicator extraction parameter. Default: Out of band.
Abstract
You can disable enrichment for individual indicators or disable enrichment for all indicators fetched by any of the following feeds:
Azure Feed
Cloudflare Feed
Fastly Feed
AWS Feed
Zoom Feed
If you disable enrichment for an incoming feed, the indicators are extracted and saved but not enriched by Cortex XSOAR, enabling you to
conserve system resources when dealing with known indicators.
When an indicator has enrichment excluded, the Enrich Indicator button is disabled. If you try to enrich an indicator that is enrichment excluded, an
error will occur.
Enrichment exclusion is relevant for the following indicator types:
IP
Domain
URL
File
To exclude enrichment for indicators fetched from a feed integration, when configuring an instance of the feed integration, select the Enrichment
Excluded checkbox.
When creating or editing an indicator of one of the following types: IP, Domain, Email, URL, or File, you have the option to set Enrichment
Excluded to Yes or No. The default is No.
In the indicators table, you can filter the list of indicators by the enrichment excluded option.
Abstract
Create indicator extraction rules for an incident type. Customize indicator extraction in Cortex XSOAR.
You can extract indicators from incident fields on creation of an incident and when a field changes. For example, you might want to extract the IP
address upon incident creation and again when the field changes.
The indicator extraction feature extracts indicators from incident fields and enriches them using commands and scripts defined for the indicator
type.
2. For a content pack installed incident type, detach or duplicate the incident type, and then click the detached or duplicated incident type. For
custom incident types, click the incident type.
3. From the Indicators Extraction Rules tab, in the On incident creation and the On field change fields, select the required indicator extraction
mode.
If you select Out of band, the extracted indicators do not appear in the context. If you want the extracted indicators to appear, select Inline.
For more information, see Indicator extraction modes.
4. In the What to Extract section, if you want to extract all incident fields, select Extract all indicators from all fields.
5. If you want to choose which indicators are extracted according to each field, select Extract specific indicators.
You can search and filter the incident fields. For each field, use the dropdown menu to control the indicator types to extract:
(Optional) You can select all indicators, set all indicators to none, or copy settings from an incident type by clicking (to the right of the
table’s column headers).
All indicator types with regex: Some indicator types are associated with a regex (such as IP), and some are not (such as Registry Key). Only the types with a regex are extracted.
Specific indicator types: You can choose one or more indicator types based on regex. The system extracts values that match the regex from this incident field.
Select the Use field value checkbox to use any indicator based on the field value (not regex based). This creates an indicator out of the entire value of the field, regardless of whether the indicator type has a configured regex. This can be used in cases such as extracting hostnames.
NOTE:
If you want to extract attachments, select the attachment field and then select File as the indicator type to extract. The File type extracts a hash (usually SHA-256), which can be viewed in the War Room. You may want to disable indicator extraction for attachments to reduce external API usage and protect restricted data (the hash) from being sent.
6. Click Save.
7. (Optional) If you want to configure which scripts and commands the indicator type executes, go to Settings & Info → Settings → Object
Setup → Indicators → Types and edit or Create an indicator type.
Add scripts and reputation commands for the indicator type. When indicator extraction occurs, indicators are extracted as defined in an
indicator type, and enriched using the commands and scripts associated with the indicator type. For example, the URL indicator is enriched
using the !url command.
In this example, if an email is forwarded that potentially includes phishing, we want to extract at incident creation (inline) and upon a field change (out of band).
Abstract
Create indicator extraction rules for a playbook task in Cortex XSOAR. Auto extract for a playbook task. Edit task. Use case indicator extraction.
You can set the indicator extraction mode for specific playbook tasks.
1. Select the playbook where you want to add indicator extraction to a task, and click Edit.
4. In the indicator extraction drop-down menu, select the mode you want to use.
5. Click OK.
Abstract
This procedure describes how to disable indicator extraction for a specific script or an integration.
To disable indicator extraction for a script, add the 'IgnoreAutoExtract' entry with the value of True when returning an entry. For example:
# Example entry returned from a script, with IgnoreAutoExtract set to True.
# hr holds the human-readable (markdown) output; defined here for completeness.
hr = 'Echoed the input'
entry = {
    'Type': entryTypes['note'],
    'Contents': {
        'Echo': demisto.args()['echo']
    },
    'ContentsFormat': formats['json'],
    'ReadableContentsFormat': formats['markdown'],
    'HumanReadable': hr,
    'IgnoreAutoExtract': True
}
demisto.results(entry)
To disable indicator extraction for an integration, add the 'IgnoreAutoExtract' entry with the value of true, when returning an entry.
# Excerpt from an integration command; result, hr, headers, context, and entries
# are defined earlier in the integration code.
entry = {
    'Type': entryTypes['note'],
    'Contents': result,
    'ContentsFormat': formats['json'],
    'ReadableContentsFormat': formats['markdown'],
    'HumanReadable': tableToMarkdown('ServiceNow ticket', hr, headers=headers, removeNull=True),
    'EntryContext': {
        'Ticket(val.ID===obj.ID)': context,
        'ServiceNow.Ticket(val.ID===obj.ID)': context
    },
    # Prevents automatic indicator extraction on this entry
    'IgnoreAutoExtract': True
}
entries.append(entry)
return entries
For more information about command results in Python, see Python code conventions for CommandResults.
If indicators are not extracting, check whether the indicator extraction mode is set to none. Even if you select the relevant incident fields and the indicators to extract, if the mode is set to none, indicators are not extracted.
When creating new incident types, if you select Extract all indicators from all fields, all fields are extracted, including custom fields. If you select Extract specific indicators, indicator extraction for new custom fields is set to none by default.
Abstract
Add a server configuration to manage the indicator timeline in Cortex XSOAR and improve indicator timeline performance.
To effectively investigate an incident and analyze associated indicators, the SOC analyst must have access to up-to-date data and a clear view of
the most recent changes made to the relevant indicators, as well as earlier entries of indicator changes. The indicator timeline provides access to
recent and earlier indicator activity data, facilitating quicker threat detection and response actions.
NOTE:
You must have the Cortex XSOAR Threat Intel Management (TIM) license to access the indicator timeline.
You can configure server configurations to disable the indicator timeline display or disable indicator extraction to the indicator timeline.
1. Select Settings & Info → Settings → System → Server Settings → Server Configuration → Add Server Configuration.
To see the indicator timeline entries, from the Threat Intel page select an indicator to go to the Indicator Summary page. If it does not contain the
indicator timeline, you can edit the indicator layout and add the Timeline section.
By default, the indicator timeline table displays dates, events, and sources that affect indicators, such as change of verdict and traffic light protocol.
Click to edit the table settings to also display category and indicator ID, or to search the table columns.
Latest events: Shows a table listing the most recent indicator timeline entries. This ensures continuous monitoring of security threats and
provides access to the latest activity data.
Initial events: Shows a table listing the first indicator timeline entries.
The maximum number of entries the tabs display is by default 100. The first 100 entries are displayed in both tabs. If there are more than 100
entries, the Initial events table displays the first 100 entries, and the Latest events table displays the 100 latest entries. For example, if there are
105 entries, the Latest table displays the five latest entries plus the 95 entries that occurred chronologically before them.
Abstract
Cortex XSOAR indicators have an active or expired status which can be set to expire after a specific period or never to expire. Set default
expiration method.
Indicators can have the Expiration Status field set to Active or Expired, which is determined by the Expiration field. When indicators expire, they
still exist in Cortex XSOAR, meaning they are still displayed and you can still search for them. A job that runs every week checks for newly expired
indicators and updates the Expiration Status field.
When indicators expire, the expiration status and expiration fields are updated. You can use these fields to take actions based on indicator expiration. For more information, see Indicator field trigger scripts.
You can set the default expiration method for indicators either to never expire or to expire after a specific period. The default expiration method is
set by the indicator type. For more information see Indicator type profile.
The following table shows the hierarchy by which indicators are expired.
Method Description
Manual Manually expire the indicator either in the indicator layout or CLI. This method overrides all other methods.
NOTE:
Use the expireIndicators command to change the expiration status to Expired for one or more indicators. This command accepts a comma-separated list of indicator values and supports multiple indicator types. For example, you can set the expiration status for an IP address, domain, and file hash:
!expireIndicators value=1.1.1.1,safeurl.com,45356A9DB614ED7161A3B9192E2F318D0AB5AD10
Use the !setIndicator command or, for multiple indicators, the !setIndicators command to reset the indicators' expiration value. The value can also be set to Never, so that the indicators never expire. For example, !setIndicators indicatorsValues=watson.com expiration=Never.
You can also use these commands in a script, but a user can override it by running a command in the CLI or from the indicator layout.
Feed integration Some integrations support setting the expiration method on an integration instance level, which overrides the method defined for the indicator type.
Indicator type The expiration method (interval or never) is defined according to indicator type, which applies to all indicators of this type. This is the default expiration method for an indicator.
You can download and install Threat Intel content packs, which include Threat Intel integrations such as:
MITRE ATT&CK
Unit 42 ATOMs
AlienVault
AWS
NOTE:
If you have a TIM license, you can set up unlimited feeds. If not, you are limited to 5 active feeds and 100 indicators. For more information, see Understand Cortex XSOAR licenses.
2. Configure the Threat Intel integration by going to Settings & Info → Settings → Integrations → Instances, searching for your integration, and clicking Add Instance.
The following table is a non-exhaustive list of the most common feed integration parameters. Each feed integration may have parameters
unique to that integration. Read the documentation for specific feed integrations for more details.
Parameter Description
Fetches indicators Select this option for the integration instance to fetch indicators. Some integrations can fetch indicators or incidents. Select the relevant option for what you need to fetch in the instance.
Feed Fetch Interval How often the integration instance should fetch indicators from the feed.
Indicator verdict The indicator verdict that will apply to all indicators fetched from this integration instance. See Indicator verdict.
Source reliability The reliability of the source that provides the threat intelligence data.
Indicator Expiration Method The method by which to expire indicators from this integration instance. The default expiration method is the interval configured for the indicator type to which this indicator belongs.
Indicator Type: The expiration method defined for the indicator type to which this indicator belongs (interval or never).
Time Interval: Expires indicators from this instance after the specified time interval, in days or hours.
When removed from the feed: When the indicators are removed from the feed, they are expired in the system.
NOTE:
Some feeds only provide information about new indicators and do not specify when indicators are removed. Indicators from these feeds cannot be automatically expired on removal.
Bypass exclusion list When selected, the exclusion list is ignored for indicators from this feed. This means that if an indicator from this feed is on the exclusion list, the indicator might still be added to the system.
Trust any certificate (not secure) When selected, certificates are not checked.
Use system proxy settings Runs the integration instance using the proxy server (HTTP or HTTPS) when an engine is selected.
Do not use in CLI by default Excludes this integration instance when running a generic command that uses all available integrations.
Abstract
Jobs trigger TIM playbooks and process large numbers of indicators. TIM playbook configuration and settings.
TIM (Threat Intelligence Management) playbooks run on an indicator search query and are used for processing large numbers of incoming
indicators from feeds. Feed integrations enable you to ingest indicators from external sources into Cortex XSOAR. Once indicators are in Cortex
XSOAR, they can be enriched and assigned a verdict. Enriched indicators can be used for incident investigations in Cortex XSOAR and can be
pushed to a SIEM or other external system.
The TIM playbook performs an indicator query. For example, the query might return indicators using the from-feed tag. The TIM playbook runs using the indicators matching the query as an input. When configuring your TIM playbook to use an indicator query, we recommend you first run your query on the main Threat Intel page, which enables you to view the indicators returned and verify you have the results you need for your playbook.
NOTE:
By default, a query run on the Threat Intel page is limited to the last 7 days, unless otherwise specified. This same limit does not apply when
you enter the query in Playbook Inputs and Outputs, but you can add your required time filter to the query.
If you do not have a TIM license, there are several limitations, such as the number of active feeds and indicators. For more information, see
Understand Cortex XSOAR licenses.
If more than 1000 indicators are returned, the indicators are processed in batches of 1000. For example, if there are 4000 indicators returned, the
playbook runs the first time on the first 1000. Each task receives 1000 indicators as a list, or if the task does not support lists, loops over the 1000
indicators. When the playbook reaches the end, it runs again with the next batch of 1000 indicators and repeats until all indicators have been
processed. The playbook loops automatically through batches of indicators; you do not need to configure the playbook to loop. After all indicators have been processed, the playbook automatically closes the incident. You do not need to include a close incident task.
Quiet mode
TIM playbooks often process thousands of indicators. By default, quiet mode is enabled for TIM Playbooks. In quiet mode, entries are not written
to the War Room and inputs and outputs are not presented for Work Plan tasks. For troubleshooting purposes, you can temporarily disable quiet
mode during playbook development. Quiet mode can be disabled in the playbook settings or on a per-task basis.
We strongly recommend that you have quiet mode enabled for any playbook that is in production, to prevent possible performance issues.
NOTE:
While quiet mode is disabled, any changes you make to the playbook indicators query will turn quiet mode back on.
The Playbook search query returns all of the indicators that match a particular search, including all fields for each indicator. Individual tasks may
only require a subset of that data. If you need to run different tasks for different types of indicators, use a conditional task and set the input to
check for the indicator type. For example, in the TIM - Indicator Auto Processing playbook, the Are there IP results? conditional task searches for
any IP indicators. If it finds any IP indicators, the condition is met.
If no IP indicator types are found, the condition is not met and the playbook proceeds to the else branch.
You can also use filters based on indicator attributes. For example, you can limit a task to only run on indicators where the type is IP.
NOTE:
In the Get field, if you change playbookQuery.indicator_type to playbookQuery.value it returns the indicator values, such as the IP
addresses. Using playbookQuery returns all of the indicator attributes, not only the indicator value.
1. Indicators are added to Cortex XSOAR through feed ingestion. You can configure your integration to automatically tag all new/updated
indicators from a particular instance. For example, you can tag them using the from-feed tag.
3. Define a job to run that triggers the playbook when the indicators are fetched.
When a feed ingestion has completed and there is a change of content, you can add a TIM playbook that processes indicators to a job. Create a Job Triggered by Delta in a Feed that runs when the ingestion is completed. The job runs a TIM playbook, which performs an indicator query. For example, the query might return indicators that use the from-feed tag and that were added or modified since the last time the job that triggered the playbook was run.
4. If you want to push the enriched indicators to a SIEM, you can set up a time-triggered job to run a playbook.
To see how you can use a job to process indicators, see Create jobs to process indicators example.
Export indicators from the Indicators table, using an integration or a playbook, or set up an External Dynamic List (EDL) by using the Generic Export Indicators integration.
In the Indicators table, you can export indicators in a CSV or STIX file. You can also export indicators using an integration or a playbook.
You can export indicators in a hosted text file (External Dynamic list) from Cortex XSOAR or an engine using the Generic Export Indicators
Service integration. Exported indicators can be used for example in firewall block lists, allow lists, and monitoring and analysis in Splunk. See
Generic Export Indicators Service.
The Generic Export Indicators Service integration can be configured to export specific fields in different output formats. Multiple instances of the
integration can be configured for different indicator queries, and the output can be customized to work with a variety of third-party services.
You can set up the Generic Export Indicators Service integration by setting up a long-running integration. See Forward requests to long-running
integrations.
If you configure the Generic Export Indicators Service to run on demand, run the !export-indicators-list-update command the first time to initialize the export process.
By default, when exporting an incident or an indicator to CSV format, Cortex XSOAR generates the report in UTF8 format. If you need to export an incident or an indicator that contains non-Latin characters, such as Cyrillic or Greek, you need to change the format to UTF8-BOM.
NOTE:
When changing the format to UTF8-BOM, you also change the format for incidents.
1. Select Settings & Info → Settings → System → Server Settings → Add Server Configuration.
Key: export.utf8bom
Value: true
Cortex XSOAR provides numerous out-of-the-box playbooks for TIM, including playbooks that enable you to export indicators. All TIM-related
playbooks have the 'TIM' prefix. Some are generic (for example, TIM - Process Indicators - Fully Automated), and some are dedicated to a specific
vendor, like QRadar (for example, TIM - QRadar Add Domain Indicators) and ArcSight (for example, TIM - ArcSight Add IP Indicators).
NOTE:
If you define a playbook task input that pulls from indicators, the entire playbook runs in Quiet Mode. This means the task or playbook information
is not written to the War Room, and inputs and outputs are not displayed in the playbook. However, errors and warnings are still written to the War
Room.
You should not run a query on a field that you might change in the playbook flow. For example, you shouldn’t have a playbook with query
Verdict:Malicious and then change the indicator verdict as a part of the playbook.
Threat intel reports summarize and share threat intelligence research conducted within your organization by threat analysts and threat hunters.
Threat intelligence reports help you communicate the current threat landscape to internal and external stakeholders, whether in the form of high-
level summary reports for C-level executives, or detailed, tactical reports for the SOC and other security stakeholders.
NOTE:
To customize and manage Threat Intel Reports, you must have a TIM license.
Threat intel reports help address multiple relevant reporting use cases:
Report to colleagues and executives if, and how, such threats affected your organization, and what was done to remediate and prevent
future attacks.
Periodic monitoring
Keep track of infiltration attempts by adversaries within your industry vertical, and publish periodic status updates on any new behaviors.
Aggregate highlights of external publications that should be actively brought to the attention of your SOC. This is usually done to ensure that
relevant employees are up-to-date with the latest security trends so they can make more informed decisions. For a practical example, see
Threat Intel Management use cases.
Threat hunting
Report to colleagues, and the larger threat intelligence community about proactive searches and detection of advanced threats not found by
traditional prevention and detection tools.
Report type: Determines which report types your organization needs. Each type has an associated layout. You can create report types and
report layouts, or customize existing ones. When analysts create a report, they select the report type.
Report layout: Ensures the most relevant information is shown for each report type. The layout includes customizable fields for your use
case.
Report fields: Create fields or add existing fields to report layouts. After a report is created, the analyst can populate the report with relevant
data.
Cortex XSOAR Threat Intel Management comes out-of-the-box with the following report types and layouts:
Type Layout Description
Campaign Campaign Report Describes a campaign run by a threat actor. Includes fields such as Campaign Details and a free text field to add the threat type, origin, etc.
Executive Brief Executive Brief Report Used for an executive summary or any kind of generic report.
Malware Malware Report A report tailored for malware, with fields such as Operating System, Aliases, and Malware type.
Threat Actor Threat Actor Report A report tailored for threat actors, with a special section for threat actor metadata, such as the threat actor's name, goals, and motivation.
Vulnerability Vulnerability Report A report tailored for vulnerabilities, with a special section for vulnerability details such as CVE and CVSS.
These report types, layouts, and fields are part of the Threat Intel Reports (Beta). For more details including screenshots, see Threat Intel Reports
(BETA).
NOTE:
By default, when editing the dropdown or text values in a threat intel report, the changes are not saved until you confirm your changes (clicking
the checkmark icon in the value field).
These icons are designed to let you have an additional level of security before you make changes to the fields in threat intel reports, incidents,
and indicators.
To change the default behavior, set the inline.edit.on.blur server configuration to true, which enables you to make changes to inline fields without clicking the checkmark. The changes are automatically saved when you click anywhere on the page or navigate to another page. For text values, you can also click anywhere in the value field to edit.
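For example, add the following server configuration:
Key: inline.edit.on.blur
Value: true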
Abstract
Create or detach a Threat Intel Report type to suit your use case.
Threat intel reports are categorized by type, which determines the layout that is displayed for the report.
You can create new threat intel report types to support use cases not covered by the out-of-the-box types, which may require different report
layouts. For example, you may want to add a new type for a specific threat-hunting report that your organization needs, which is not covered by
one of the out-of-the-box template types.
If you want to customize a report type by attaching a new layout, you need to detach the existing report type. If you detach it, it does not receive
content pack updates. If you reattach it, any content pack updates override any changes made.
NOTE:
If you disable the report type, it is not available for selection when you create a report.
Before creating a Threat Intel Report type, review, customize, or create a new layout, which is then added to the report type.
1. Select Settings & Info → Settings → Object Setup → Threat Intel Reports → Types → New Type.
You can leave this blank and add the layout later.
3. (Multi-tenant only) Add or select Propagation labels. You can also view any dependencies.
Abstract
Add/create Threat Intel Report fields to populate a report layout with relevant data.
Field types
Field type Description
Boolean Checkbox.
Grid (table) Include an interactive, editable grid as a field type for selected report types or all report types. When you select Grid (table), you can format the table and determine whether the user can add rows. To see how to create a grid field and use a script, see Create a grid field for an incident type.
HTML Create and view HTML content, which can be used in any type of report.
Long text Long text is analyzed and tokenized, and entries are indexed as individual words, enabling you to perform advanced searches and use wildcards. Long text fields cannot be sorted and cannot be used in graphical dashboard widgets. While editing a long text field, pressing enter creates a new line. Case insensitive.
Markdown Add markdown-formatted text as a Template, which is displayed to users in the field after the threat intel report is created. Markdown lets you add basic formatting to text to provide a better end-user experience.
Multi select An empty array field for the user to add one or more values as a comma-separated list.
Role The role assigned to the Threat Intel Report determines which users (by role) can view the report.
Short Text Short text is treated as a single unit of text and is not indexed by word. Advanced search, including wildcards, is not supported. Short text fields are case-sensitive by default but can be changed to case-insensitive when creating the field. While editing a short text field, pressing enter saves and closes.
Single select Select a value from a list of options. Add comma-separated values.
Timer/SLA Set up when an SLA is due, the risk threshold, and actions to take if the SLA time passes.
1. Select Settings & Info → Settings → Object Setup → Threat Intel Reports → Fields → New Field.
Parameter Description
Field Name A meaningful display name for the field. After you type a name, the Machine name below the field is automatically populated. The field's machine name is applicable for searching and the CLI.
Script to run when field value changes The script dynamically changes the field value when script conditions are met. For a script to be available, it must have the field-change-triggered-ThreatIntelReport tag, which is added when defining a script.
Run the field triggered script after the new field value is saved By default, the script executes before the threat intel report is stored in the database. If you select this option, the script instead executes after the threat intel report is modified, so the script cannot make changes to the threat intel report.
Add to all Threat Intel Report types Determines which threat intel report types have this field available. By default, fields are available to all types. To change this, clear the checkbox and select the specific threat intel report types.
Make data available for search Determines if the values in these fields are available when searching. Enabled by default.
Abstract
Configure threat intel report layouts. Add script-based content in the layout.
You can customize almost every aspect of the layout, including which tabs appear, in which order they appear, who has permission to view the
tabs, which information appears, and how it is displayed.
In the Object Setup → Threat Intel Reports → Layouts tab, you can view out-of-the-box layouts and any custom layouts. Each out-of-the-box
layout is attached to the out-of-the-box Threat Intel Report types.
If you want to customize an existing layout, you can detach it without creating or duplicating another one. When a layout is detached, it does not
receive content pack updates.
TIP:
If you detach a layout, make edits, and later want to receive content pack updates for that layout, we recommend you duplicate the report layout
before reattaching the original, to protect your changes from content pack updates.
The following procedure describes how to create a new layout, but you can follow similar steps to customize an existing layout.
1. Select Settings & Info → Settings → Object Setup → Threat Intel Reports → Layouts → New Layout.
(Multi-tenant only) Add or select Propagation labels. You can also view any dependencies.
3. Customize the tabs by clicking the settings wheel icon and then doing the following:
Action Description
Rename You can also edit a tab's name by clicking the tab.
Show empty fields The setting that you configure in the layout becomes the default value seen in the report for the specific tab, which can then be overridden. You can also set a global default value using the UI.summary.page.hide.empty.fields server configuration, which can also be overridden for a specific tab.
Hide tab Hides the tab. Rather than deleting the tab, you can keep it for future use.
Format for exporting Build your layout based on A4 proportions to match the format used for exporting. Selecting this option hides the tab by default, but the tab remains available for export.
Display Filter Add or view a filter applied to the tab. If the filters apply, the specific fields or tabs are shown in the layout. If a mandatory field is not shown in the layout, the user is not obliged to complete it.
4. From the LIBRARY section, drag and drop the following sections:
Section Description
New Section After creating a new section, click the Fields and Buttons tab and drag and drop the fields as required. When hovering over a field, click the eye icon to add a filter to the field.
General Purpose Dynamic Section Add a script to the layout, such as a script that creates a button on the layout that sets a threat intel report as published. For more information, see Step 2. (Optional) Add a script to the Threat Intel Report layout.
Relationships The user can manually create indicator relationships between the report and an indicator. For more information about indicator relationships, see Manage indicator relationships.
Determine how a section appears in the layout, such as name and showing the section header. In most sections, you can also configure the
fields to appear in rows, or as cards, and wrap the text labels. For example, if you know that some of the field values are very long, use
rows. If the field values are short, use cards so you can fit more fields in a section.
a. Click the section, click the pencil icon, and then select Edit section settings.
NOTE:
To remove or duplicate a section, click the pencil icon in the section and select the relevant option.
You can add content to threat intel report layouts, based on a script. You need to add the General Purpose Dynamic Section when editing layouts.
The General Purpose Dynamic Section allows you to configure a section in a layout tab from a script. The script can return text, markdown, or
HTML, the results of which appear in the General Purpose Dynamic Section. You can add any required information from a script. Before you
begin, you need to create a script.
The following is an example of a script that can be added. This script can be used to add a button to the layout that sets a threat intel report as
published.
from datetime import datetime, timezone

def publish():
    now_utc = datetime.now(timezone.utc)  # publication timestamp (UTC)
    # The threat intel report object is passed to the script as an argument
    report = demisto.args().get('object', {})
    report_id = report.get('id')
    roles = execute_command('getRoles', {})
    # ... use report_id, now_utc, and roles to set the report as published ...

publish()
demisto.results('ok')
2. Drag and drop the General Purpose Dynamic Section onto the layout.
3. Select the General Purpose Dynamic Section, click , and then Edit section settings.
4. In the Name and Description fields, add a meaningful name and a description for the dynamic section that explains what the script displays.
5. In the Automation script field, from the dropdown list, select the script that returns data for the dynamic section.
NOTE:
Only scripts to which you have added the general-dynamic-section tag appear in the dropdown list.
6. Click OK.
1. Go to Settings & Info → Settings → Object Setup → Threat Intel Reports → Types.
If the report type is an out-of-the-box type from a content pack, you need to detach the report type. Otherwise, you need to create a new report type.
3. In the Layout field, from the dropdown list, add the customized layout.
5. (Optional) If you have created a new layout (not detached), you can do the following:
Contribute it to Marketplace.
1. From Marketplace, in the Contributions tab, click Contribute Content. From the dropdown menu, select Layouts, add the new layout you want to contribute to Marketplace, and click Save and Contribute.
If using a dev/prod environment, push the layout from the development machine to the production machine.
Perform actions (create, edit, export, delete) and search for indicators on the Cortex XSOAR Threat Intel page.
Indicators are artifacts associated with security incidents and are an essential part of the incident management and remediation process. They
help correlate incidents, create hunting operations, and enable you to easily analyze incidents and reduce Mean Time to Response (MTTR).
If you have a TIM license, Cortex XSOAR Threat Intel includes access to the Unit 42 Intel service, enabling you to identify threats in your network
and discover and contextualize trends. Unit 42 Intel provides data from WildFire (Palo Alto Networks’ cloud-based malware sandbox), the PAN-DB
URL Filtering database, Palo Alto Networks’ Unit 42 threat intelligence team, and third-party feeds (including both closed and open-source intelligence).
Indicators
Sample Analysis
NOTE:
If you don't have a TIM license, you can only view the Indicators tab. For more information, see Manage indicators.
Indicators
Displays a list of indicators added to Cortex XSOAR, where you can perform several indicator actions, including adding Unit 42 data.
NOTE:
If you are unable to perform a specific action or view data, you may not have sufficient user role permissions. Contact your Cortex XSOAR
administrator for more details.
You can perform the following actions on the Threat Intel page.
Action Description
Investigate an indicator Click on an indicator to view and take action on the indicator.
Create an indicator Indicators are added to the Indicators table from incoming incidents, feed integrations, adding Unit 42 data,
or manually creating a new indicator.
When creating an indicator, in the Verdict field, you can either select a verdict or leave it blank to calculate it
by clicking Save & Enrich, which updates the indicator from enrichment sources. After you select an indicator
type, you can add any custom field data.
Create an incident Create an incident from the selected indicator and populate relevant incident fields with indicator data.
Edit Edit a single indicator or select multiple indicators to perform a bulk edit.
Delete and Exclude Delete and exclude one or more indicators from all indicator types or a subset of indicator types.
If you select the Do not add to exclusion list checkbox, the selected indicators are only deleted.
Export CSV Export the selected indicators to a CSV file. By default, the CSV file is generated in UTF8 format.
Administrator permission is required to update server configurations, including changing the format. See
Export incidents and indicators to CSV using the UTF8-BOM format.
Upload a STIX file To upload a STIX file, click the upload button (top right of the page) and add the indicators from the file.
By default, when editing a list or text values in an incident/indicator, the changes are not saved until you confirm your changes (clicking the
checkmark icon in the value field). These icons are designed to give you additional security when updating fields in incidents, indicators, and
Threat Intel Reports.
You can change this default behavior by updating the server configuration. You need administrator permission to update server configurations.
For more information, see Configure inline value fields.
Sample analysis
Unit 42 Intel provides sample analysis for files. This helps you conduct in-depth investigations, find links between attacks, and analyze threat
patterns. If the file indicator is in the Unit 42 Intel service, you have access to a full report on activities, properties, and behaviors associated with
the file. In addition, you can see how many other malicious, suspicious, or unknown file samples included the same activities, properties, and
behaviors, and also build queries to find related samples. For more information, see Investigate files using sample analysis.
Cortex XSOAR users can use their sessions and submissions data for investigation and analysis. Sessions and Submissions data are available
for users with the following products:
WildFire Appliance - Samples that a WildFire appliance submitted to the WildFire public cloud.
For example, if you have a file indicator that has been determined as malicious, and you have a Cortex XDR integration configured, in the
Sessions & Submissions tab, you can see where this file came from and where it is in your network by viewing the firewall sessions this file
passed through. You can see which XDR agents in your system reported the file, which tells you which machines might be infected. You can block
the external IP address with your firewall, and, if needed, isolate the affected machines to contain the attack. If the source is internal, you can
investigate that endpoint. For more information, see Use sessions and submissions in your investigation.
Threat Intel Reports summarize and share threat intelligence research conducted within your organization by threat analysts and threat hunters.
Threat Intel Reports help you communicate the current threat landscape to internal and external stakeholders, whether in the form of high-level
summary reports for C-level executives, or detailed, tactical reports for the SOC and other security stakeholders. For more information, see
Manage Threat Intel Reports.
Abstract
How to query indicators in the threat intel library and in Unit 42 Intel.
You can access Threat Intel data through the following methods:
When investigating an incident, select an extracted indicator. The Quick View shows basic information about the indicator in Cortex XSOAR
and Unit 42 (if available). The full view shows the full Cortex XSOAR indicator summary.
On the Threat Intel page, query an indicator, which may or may not be in the Cortex XSOAR intel library.
Unit 42 Intel data is cloud-based and remotely maintained so that you can view data from Unit 42 Intel and add only the information you
need to your Cortex XSOAR threat intel library. When you search for an IP address, domain, URL, or file, you can view the indicator in
Cortex XSOAR and the additional information provided by Unit 42 Intel. When an indicator does not yet exist in Cortex XSOAR, but does
exist in Unit 42 Intel, you can add the indicator to the Cortex XSOAR threat intel library. You can add the indicator and enrich it with your
existing integrations, or add the indicator without enrichment. When the indicator already exists in Cortex XSOAR, but additional information
is available from Unit 42 Intel, you can update your indicator with the most recent data from Unit 42 Intel.
The Threat Intel library is a centralized space for all indicators, whether they are found in an incident, brought in as a feed, or added
manually. You can view in-depth information on collected indicators and filter the library based on common attributes.
NOTE:
You can search or look up indicators. A search, which can include wildcards and complex queries, can return multiple results. Searches
are only performed in Cortex XSOAR. Lookups are exact values, are performed in both Cortex XSOAR and Unit 42 Intel data, and can
only return one result.
When querying directly on the Threat Intel page, the following considerations apply:
Querying an IP address, domain, URL, or SHA256 file hash, without a wildcard or complex search (Boolean search, type:file, etc.),
queries both the Cortex XSOAR threat intel library and Unit 42 Intel, with no date range limit.
If you enter an indicator type that is not an IP address, domain, URL, or SHA256 file hash, or you enter a wildcard or complex option
(Boolean search, type:file, etc.), no lookup is performed in Unit 42. In Cortex XSOAR, a search is performed. By default, the search is for the
last 7 days, but you can adjust the date range.
Wildcard searches can only be performed in the local Cortex XSOAR threat intel library, and not in Unit 42 Intel data. Example: *xample.com
Complex searches are only conducted in the local Cortex XSOAR threat intel library, and not in Unit 42 Intel data. Example: type:URL and
verdict:Malicious.
For files, only the SHA256 hash returns Unit 42 Intel data.
For a query to include Unit 42 Intel results, it must be a lookup for an exact match.
You can search for indicators using any of the available search fields. This is a partial list of the available search fields.
Field Description
verdict Searches for indicators based on verdict: Malicious, Suspicious, Benign, or Unknown.
aggregatedReliability Searches for indicators based on a reliability score such as A - Completely reliable.
expirationSource The source (such as script or manual) that last set the indicator's expiration status.
You can use a wildcard query, which finds indicators containing terms that match the specified wildcard. For example, the * pattern matches any
sequence of 0 or more characters, and ? matches any single character. For a regex query, use the following value:
"/.*\\?.*/"
Unit 42 Intel data is not automatically added to the Cortex XSOAR Threat Intel library. When you query for an indicator on the Threat Intel page, in
some cases the indicator is not in the Threat Intel library, but exists in Unit 42 Intel. In other cases, the indicator may already be in the Cortex
XSOAR Threat Intel library, but more in-depth information is available from Unit 42 Intel.
When a query is performed in both Cortex XSOAR and Unit 42 Intel, there are four possible results:
The indicator exists in Cortex XSOAR but does not exist in Unit 42 Intel
The Cortex XSOAR search result is displayed in a table. Click on the value to reach the Summary tab. The Summary tab presents information
about the indicator stored in Cortex XSOAR. The Unit 42 Intel tab is disabled.
The indicator exists in Unit 42 Intel, but does not exist in the Cortex XSOAR threat intel library
To view the Unit 42 Intel data for this indicator, click on the indicator search term in blue.
From the Unit 42 Intel tab, you have the option to add the indicator to Cortex XSOAR or to add and enrich the indicator to Cortex XSOAR.
Add to XSOAR
The indicator is added to Cortex XSOAR. If the indicator is related to one or more Unit 42 threat intel objects already in Cortex
XSOAR (ingested through the Unit 42 Feed integration), relationships are created in the database between the Unit 42 threat intel objects
and the file indicator. No third-party enrichments are run on the indicator. We recommend using this option if, for security reasons, you do
not want to expose the indicator to any third-party services.
Add & enrich
The indicator is added to Cortex XSOAR. If the indicator is related to one or more Unit 42 threat intel objects already in Cortex
XSOAR (ingested through the Unit 42 Feed integration), relationships are created in the database between the Unit 42 threat intel objects
and the file indicator. Your configured third-party enrichments are run on the indicator.
When you add indicators to the Cortex XSOAR threat intel library from Unit 42 Intel, the indicators are available for use in scripts and playbooks.
The indicator exists in both Cortex XSOAR and Unit 42 Intel
The Cortex XSOAR result is displayed in a table. Click on the value to reach the Summary tab. The Summary tab presents information about the
indicator stored in Cortex XSOAR. Click on the Unit 42 Intel tab to view Unit 42 data. From the Unit 42 Intel tab, you have the option to do the following:
Update
Updated Unit 42 Intel for the indicator is added to Cortex XSOAR. If the indicator is related to one or more Unit 42 threat intel objects
already in Cortex XSOAR (brought in through the Unit 42 Feed integration), relationships are created in the database between the Unit 42
threat intel objects and the file indicator. No third-party enrichments are run on the indicator. We recommend using this option if, for security
reasons, you do not want to expose the indicator to any third-party services.
Update & enrich
Updated Unit 42 Intel for the indicator is added to Cortex XSOAR. If the indicator is related to one or more Unit 42 threat intel objects
already in Cortex XSOAR (brought in through the Unit 42 Feed integration), relationships are created in the database between the Unit 42
threat intel objects and the file indicator. Your configured third-party enrichments are run on the indicator.
The indicator does not exist in Cortex XSOAR or in Unit 42 Intel
If the query was for an indicator type that is not an IP address, domain, URL, or SHA256 file hash OR if the query included a wildcard or a
complex search, the search was performed on Cortex XSOAR data from the last 7 days. You can extend the date range to see if the indicator is in
Cortex XSOAR but is older than 7 days.
Learn how to use TIM in your use case, such as creating a TIM report, accessing and using Unit 42 Intel data, investigating an indicator and
creating indicator relationships.
Cortex XSOAR enables you to centralize and manage every aspect of your TIM investigation. Create, extract, and enrich indicators using Unit 42
Intel data and explore their relationships to gain deeper insights.
After you start ingesting indicators into Cortex XSOAR, you can start your investigation, including creating indicators, adding indicators to an
incident, extracting indicators, exporting indicators, etc.
Cortex XSOAR Threat Intel includes access to the Unit 42 Intel service, enabling you to identify threats in your network and discover and
contextualize trends. Unit 42 Intel provides data from WildFire (Palo Alto Networks’ cloud-based malware sandbox), the PAN-DB URL Filtering
database, Palo Alto Networks’ Unit 42 threat intelligence team, and third-party feeds (including both closed and open-source intelligence). Unit 42
Intel data is continually updated to include the most recent threat samples analyzed by Palo Alto Networks, enabling you to keep up with threat
trends and take a proactive approach to securing your network.
View verdict, enrich, expire, delete and exclude the indicator, add relationships, view related incidents, and add comments. Add or remove
tags, which can help classify known threats. For example, you may want to group specific malware indicators that are part of ransomware,
such as trojan or loader. Unit 42 Intel data also publishes tags to assist your classification.
Additional Details
Add or view any community notes for sharing and any custom details.
Unit 42 Intel
If the indicator is available in Unit 42, you can view related Unit 42 Intel data.
If the indicator has been found in the Unit 42 database, you can view the following information (and download the WildFire report, if available), according to indicator type:
IP address: Verdict, Source, Relationships, PAN-DB Categorization, Passive DNS
URL: Verdict, Source, Relationships, PAN-DB Categorization, WHOIS
Domain: Verdict, Source, Relationships, PAN-DB Categorization, Passive DNS, WHOIS
File: Verdict, Source, Relationships, Summary, WildFire Analysis
Action Description
Enrich an indicator You can view detailed information about the indicator (WHOIS information for example), using third-party
integrations such as VirusTotal and IPinfo. For more information, see Extract and enrich an indicator.
Expire an indicator You may want to expire an indicator to filter out less relevant alerts, allowing analysts to focus on active
threats. For more information, see Expire an indicator.
Manage indicator relationships Threat Intel Management in Cortex XSOAR includes a feed that brings in a collection of threat intel objects as indicators. These indicators are stored in the Cortex XSOAR threat intel library and include Malware, Attack Patterns, Campaigns, and Threat Actors. When you add or update an indicator from Unit 42 Intel, a relationship is formed in the database between the relevant threat intel object and the new, or updated, indicator. For more information, see Manage indicator relationships.
Delete and exclude indicators Indicators added to an exclusion list are disregarded by the system and are not created or involved in
automated flows. For more information, see Delete and exclude indicators.
Abstract
Cortex XSOAR analyzes indicators to determine whether they are malicious. Create indicator types and custom layouts, exclusion lists, and
indicator verdicts.
An indicator’s verdict is assigned according to the verdict returned by the source with the highest reliability, where reliability is scaled based on the
Admiralty Source and Information Reliability Matrix. In cases where multiple sources with the same reliability score return a different verdict for the
indicator, the worst verdict is taken. Indicators are assigned the following verdicts:
0: Unknown
1: Benign
2: Suspicious
3: Malicious
You can set the verdict manually by editing the indicator. If you manually changed the indicator’s verdict and want to recalculate it according to
enrichment integrations, set the verdict to Unknown and then enrich the indicator. If after manually setting the indicator's verdict you run indicator
enrichment without setting the verdict to Unknown, the indicator is enriched but the manually set verdict is not changed.
Source reliability
The reliability of an intelligence data source influences the verdict of an indicator and the values for indicator fields when merging indicators.
Indicator fields are merged according to the source reliability hierarchy, which means that when there are two different values for a single indicator
field, the field will be populated with the value provided by the source with the highest reliability score.
In rare cases, two sources with the same reliability score might return different values for the same indicator field. In these cases, the field is
populated with the most recently provided source, unless the field is verdict. If two sources have the same reliability score and return different
values for the verdict field, the worse verdict is used.
For the field types Tags and Multi-select, all values are appended, and nothing is overridden.
A: Completely reliable
B: Usually reliable
C: Fairly reliable
D: Not usually reliable
E: Unreliable
F: Reliability cannot be judged
In this example, two third-party integrations, VirusTotal and AlienVault, return a different verdict for the same indicator. The indicator’s verdict will be Malicious because VirusTotal’s reliability score is higher than AlienVault’s.
In this example, two sources with the same reliability score return a different verdict for the same indicator. The indicator’s verdict will be Malicious because when two sources have the same reliability, the worse verdict applies.
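The resolution rule described above can be summarized in a few lines of Python. This is an illustrative sketch of the logic, not product code; the grade and verdict labels follow the scale shown above.

RELIABILITY_ORDER = ['A', 'B', 'C', 'D', 'E', 'F']            # A = most reliable
VERDICT_SEVERITY = {'Unknown': 0, 'Benign': 1, 'Suspicious': 2, 'Malicious': 3}

def resolve_verdict(source_results):
    """source_results: list of (reliability_grade, verdict) tuples."""
    # Keep only the verdicts returned by the most reliable source(s)
    best_grade = min(source_results, key=lambda r: RELIABILITY_ORDER.index(r[0]))[0]
    candidates = [v for grade, v in source_results if grade == best_grade]
    # On a reliability tie, the worst (most severe) verdict wins
    return max(candidates, key=lambda v: VERDICT_SEVERITY[v])

# VirusTotal (A) says Malicious, AlienVault (C) says Benign -> Malicious
print(resolve_verdict([('A', 'Malicious'), ('C', 'Benign')]))
# Two grade-B sources disagree -> the worse verdict, Malicious, applies
print(resolve_verdict([('B', 'Suspicious'), ('B', 'Malicious')]))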
Abstract
Indicator extraction identifies indicators from different text sources in the system (such as War Room entries), extracts them, and creates
indicators in Cortex XSOAR. After extraction, the indicators are enriched. An administrator can set up indicator extraction automatically in an
incident type or a playbook. For more information, see Indicator Extraction.
Indicator enrichment takes the extracted indicator and provides detailed information about the indicator (WHOIS information for example), using
third-party integrations such as VirusTotal and IPinfo.
Command Description
extractIndicators If you want to extract indicators from non-War-Room-entry sources (such as extracting from files), use
the !extractIndicators command from the CLI. Use the command to do the following:
Validate regex: Test a specific string to see if the relevant indicators are extracted correctly, such as a URL.
In a playbook or script. The command extracts indicators in a playbook or a script (non War Room source),
and also creates and enriches them.
Extract indicators from the following input sources: Text, File path.
For example, type !extractIndicators text="some text 1.1.1.1 something" auto-extract=inline. The
entry text contains the text of the indicators, which is extracted and enriched.
You can also extract indicators by adding the auto-extract parameter with the script and the mode for which you
are setting it up. For example: !ReadFile entryId=826@101 auto-extract=inline.
Usually, when using the CLI, you want to disable indicator extraction. For example, if you return internal/private
data to the War Room, and you do not want it to be extracted and enriched in third-party services, add auto-
extract=none to your CLI command.
enrichIndicators The enrichIndicators command is usually used when you want to batch enrich indicators. This command works
on existing indicators only (it does not create them on its own). When running the command, the relevant
enrichment command is triggered (such as !ip), which is based on the indicator type that is found. The data is
saved to context and the indicator.
NOTE:
Triggering enrichment on a substantial number of indicators can take time (because it's activating all enrichment
integrations per indicator) and can result in performance degradation.
Reputation commands Reputation commands, such as !ip, can be run for new indicators and indicators already in the system. If extraction is on, the data is saved both to the indicator and the incident's context. If not, the data is saved only to the context, because the mapping flow is always triggered in enrichment commands. The default configuration for extraction in playbook tasks is none.
NOTE:
Reputation commands, such as !ip, !domain can only be used when you configure and enable a reputation
integration instance, such as VirusTotal and WHOIS.
Use the Enrich indicator button in the indicator layout. This has the same effect as running a reputation command.
If there is an enhancement script attached to the indicator type, in the indicator Quick View window, you can run a script to enrich an
indicator. For example, the Domain indicator type uses the DomainReputation enhancement script. In an incident that contains a domain
indicator type, click Quick View. In the Indicators tab, click Domain → Actions → DomainReputation.
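For example, the following sketch triggers batch enrichment from a script. It assumes the standard Cortex XSOAR script environment, where execute_command is provided by the platform, and an enabled reputation integration; the indicator values are illustrative.

def main():
    # Batch-enrich existing indicators; the relevant reputation command
    # (such as !ip or !domain) is triggered per indicator type
    execute_command('enrichIndicators', {'indicatorsValues': '1.1.1.1,example.com'})
    # A reputation command can also be run directly for a single value
    res = execute_command('ip', {'ip': '1.1.1.1'})
    demisto.results(res)

Keep in mind, as noted above, that enriching a substantial number of indicators at once can degrade performance.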
Abstract
Indicators can have the Expiration Status field set to Active or Expired. When indicators expire, they still exist in Cortex XSOAR, meaning they are
still displayed and you can still search for them. You may want to expire an indicator to filter out less relevant alerts, allowing analysts to focus on
active threats. Expiring IoCs that are no longer relevant helps ensure that security systems remain focused on current threats.
You can set up expiration in the indicator type, integration feed, or in a script. For more information, see Configure indicator expiration. When you
manually expire an indicator, this overrides indicator extraction rules set in scripts, indicator types, and feeds.
Use the expireIndicators command to change the expiration status to Expired for one or more indicators. This command accepts a
comma-separated list of indicator values and supports multiple indicator types. For example, you can set the expiration status for an IP
address, domain, and file hash: !expireIndicators value=1.1.1.1,safeurl.com,45356A9DB614ED7161A3B9192E2F318D0AB5AD10.
Use the !setIndicator or for multiple indicators use the !setIndicators command to reset the indicators' expiration value. The value
can also be set to Never, so that the indicators never expire. For example, !setIndicators indicatorsValues=watson.com
expiration=Never.
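The same commands can be called from a script, for example in a scheduled cleanup job. A minimal sketch, reusing the argument names from the CLI examples above (the indicator values are illustrative):

def main():
    # Expire a batch of indicators of mixed types
    execute_command('expireIndicators', {'value': '1.1.1.1,safeurl.com'})
    # Mark an indicator so it never expires
    execute_command('setIndicators', {'indicatorsValues': 'watson.com', 'expiration': 'Never'})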
Abstract
How to use and create indicator relationships in Cortex XSOAR and how it benefits an investigation.
Indicator relationships are connections between different indicators. These relationships can be IP addresses related to one another, domains
impersonating legitimate domains, etc. These relationships enable you to enhance investigations with information about indicators and how they
might be connected to other incidents or indicators. For example, if you have a phishing incident with several indicators, one of those indicators
might lead to another indicator, which is a malicious threat actor. Once you know the threat actor, you can investigate to see the incidents it was
involved in, its known TTPs (tactics, techniques, and procedures), and other indicators that might be related to the threat actor. The initial incident
which started as a phishing investigation immediately becomes a true positive and relates to a specific malicious entity.
Relationships are created from threat intel feeds and enrichment integrations that support the automatic creation of relationships, such as
AlienVault OTX v2 and URLhaus, by selecting Create relationships in the integration settings. Based on the information that exists in the
integrations, the relationships are formed.
You can view indicator relationships by clicking on the indicator from an incident, and then from the Quick View window click the Relationships tab.
The Threat Intel Management system in Cortex XSOAR includes a feed that brings in a collection of threat intel objects as indicators. These
indicators are stored in the Cortex XSOAR threat intel library and include Malware, Attack Patterns, Campaigns, and Threat Actors. When you add
or update an indicator from Unit 42 Intel, a relationship is formed in the database between the relevant threat intel object and the new, or updated,
indicator.
You can also manually create and modify relationships, which is useful when a specific threat report comes out. For example, Unit 42’s SolarStorm
report contains indicators and relationships that might not exist in your system, or you might not be aware of their connection.
If a relationship is no longer relevant, you can revoke it. This might be relevant, for example, if a known malicious domain is no longer associated
with a specific IP address.
NOTE:
To create and modify indicator relationships, you must have the TIM license.
When you create a relationship, you can set the relationship type such as whether the indicator is related, attached, applied, etc. For example, a
file is attached-to an email. The email communicated-with the file.
You can create relationships by adding them in a playbook, in the CLI using the CreateIndicatorRelationship command, or when investigating
an indicator in the Threat Intel tab.
2. In the New Relationships window, in Step 1, add a query by which to search for the relevant indicators.
You can optionally limit the time range for the search.
By default, the relationship is related-to. For example, IP address x.x.x.x is related-to IP address y.y.y.y.
NOTE:
You can also add an indicator relationship from the Quick View when selecting an indicator from an incident.
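As an example of the CLI/script method mentioned above, the following sketch calls CreateIndicatorRelationship from a script. The argument names and values shown here are assumptions for illustration; check the script's arguments in your installed content before relying on them.

def main():
    # Create an attached-to relationship between a file and an email
    execute_command('CreateIndicatorRelationship', {
        'entity_a': 'attachment.exe',          # hypothetical file indicator
        'entity_a_type': 'File',
        'relationship_type': 'attached-to',
        'entity_b': 'invoice@example.com',     # hypothetical email indicator
        'entity_b_type': 'Email',
    })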
In this example, you can see how to use the relationships feature to further your investigation.
1. When opening the incident, although you can see that the severity is low, the incident has two indicators.
2. When you click the file hash indicator, neither the Info nor Relationships tabs have any additional details. This seems to indicate that the file
is harmless.
Under the Info tab, you can see that the indicator was ingested from a threat intel feed. This already bears further investigation.
What started as a low-severity incident has become a lot more threatening.
Abstract
Indicators added to an exclusion list are disregarded by the system. Add indicators to an exclusion list in Cortex XSOAR.
Indicators added to an exclusion list are disregarded by the system and are not created or involved in automated flows such as indicator
extraction. You can still manually enrich IP addresses and URLs that are on the exclusion list, but the results are not posted to the War Room.
Add indicators to the exclusion list either in the Indicators table or in the Exclusion List page.
Select one or more indicators from the Indicators table and click the Delete and Exclude button. The indicators are deleted from the Indicators
table and added to the exclusion list. You can associate these indicators with one or more indicator types.
If you delete the indicator it is removed from Cortex XSOAR. This option should be used mainly for correcting errors in ingestion, and not as part of
your regular workflow.
From the Exclusion List page, you can view the list of excluded indicators, add an indicator to the exclusion list, or define indicator values to be
excluded using a regular expression (regex) or CIDR.
1. Select Settings & Info → Settings → Object Setup → Indicators → Exclusion List → New excluded indicator.
CAUTION:
Ensure you are using the correct syntax when defining the values for your exclusion lists.
A regular expression enables you to identify a sequence of characters in an unknown string. The following example would identify
www.demisto.com: [A-Za-z0-9!@#$%\.&]*demisto[A-Za-z0-9!@#$%\.&]*.
Classless inter-domain routing (CIDR) enables you to define a range of IP addresses. For example, the IPv4 block 192.168.100.0/22
represents the 1024 IPv4 addresses from 192.168.100.0 to 192.168.103.255.
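You can verify a CIDR block's range before adding it to the exclusion list, for example with Python's standard ipaddress module:

import ipaddress

# A /22 block spans 2**(32-22) = 1024 addresses
net = ipaddress.ip_network('192.168.100.0/22')
print(net.num_addresses)   # 1024
print(net[0], net[-1])     # 192.168.100.0 192.168.103.255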
Domain, URLs, and subdomains: Excludes a specific domain, and all subdomains and URLs associated with the domain. Define two entries to cover all URLs and subdomains associated with a specific domain.
Entry one:
Entry two:
Subdomain (and URLs): Excludes any subdomains and URLs specifically of a domain, but the domain is still extracted. Value: subdomains and URLs. Example: \.example\.com. Select Use Regex.
Specific domain only: Excludes a specific domain. Subdomains and URLs are still extracted. Value: the specific domain. Example: example.com. Do NOT select Use Regex.
URL with wildcards: Excludes any indicators of type URL matching the regex. Indicators example.com and examplesub.example.com of type Domain would still be extracted. Start the regex with https?:// to exclude both HTTP and HTTPS URLs. Value: the URL with a wildcard added at the end. Example: http://examplesub.example.com. Select Use Regex. Select indicator type: URL.
Specific URL: Excludes a specific URL, but the domain and subdomains are still extracted. Value: the specific URL. Example: http://examplesub.example.com/myexample. Do NOT select Use Regex.
URLs, domain, and subdomains, case-insensitive, anchored to start: Excludes domain example.com, its subdomains, and its URLs, case-insensitively. Anchors the regex match to the start of the indicator value, so indicators that contain but do not start with a match (e.g., example.net?param=example.com) are not excluded. Value: domain, subdomains, and URLs, case insensitive and anchored to the start of the indicator. Example: (?i)^(https?://)?(([a-zA-Z0-9\-]+\.)+)?example\.com. Select Use Regex. Select indicator types: URL, Domain.
All URLs: Excludes all URLs for a specific domain that have a path (even an empty path), but the domain and subdomains are still extracted. Value: URLs with or without a path. Example: example\.com/. Select Use Regex.
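Before saving a regex exclusion, it can help to test it against sample indicator values. For example, the anchored, case-insensitive pattern from the table behaves as follows. This is a quick check with Python's re module; exclusion matching in the product may differ in details.

import re

pattern = re.compile(r'(?i)^(https?://)?(([a-zA-Z0-9\-]+\.)+)?example\.com')

# Excluded: the indicator starts with the domain, any case, any subdomain
print(bool(pattern.match('HTTPS://sub.Example.com/path')))    # True
# Not excluded: the match is not at the start of the indicator value
print(bool(pattern.match('example.net?param=example.com')))   # False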
Abstract
View static and dynamic analysis of file samples to identify malware, investigate trends, and create reports.
Unit 42 Intel's Sample Analysis tools enable you to conduct in-depth investigations and analyses of file samples. If the file indicator is found in the
Unit 42 Intel service, you have access to a full report on activities, properties, and behaviors associated with the file. File samples are run and
analyzed using Palo Alto Networks’ WildFire cloud-based threat analysis service, so you can view dynamic analysis of observed behavior, static
analysis of the file contents, and related sessions and submissions. For example, when investigating a malicious file found in your network, you
want to understand what the file did locally and in the network.
You can search for file samples, either in the Indicators tab or the Sample Analysis tab. If using the Sample Analysis tab, you can search for the
following samples:
Public Samples
Searches for samples that have been submitted by firewalls or sample sources other than those associated with your CSP account.
My Samples
The My Samples option is only available for users with a Palo Alto Networks Firewall, WildFire, Cortex XDR, Prisma SaaS, or Prisma
Access license. It takes data from devices in the same CSP account where your tenant is registered. My Samples data is not available for
multi-tenant deployments.
All Samples
NOTE:
When searching on the Sample Analysis page for relationships, some results may appear without their specific relationships listed, due to internal relationship permissions.
In the Sample Analysis tab, you can search for samples based on the sample hash; the search compares all historical and new samples to the search conditions and filters the results accordingly.
In the Sample Analysis tab, locate a file you want to investigate and click the SHA256 section to start the investigation.
In the Unit 42 Intel tab, you can see the following sections:
Section Description
General Section In the top half of the page, you can see the Verdict, a summary of the file, when it was first and last seen by WildFire, and any relationships.
You can download a WildFire report in PDF format, which includes information such as File Information, Static Analysis, and Dynamic Analysis.
WildFire Dynamic Analysis - Observed Behavior A high-level overview of the behavior observed when the file was run in the WildFire sandbox. Examples might include potentially malicious behaviors such as connecting to a potentially vulnerable port or creating an executable file in the Windows folder, as well as behaviors frequently performed by legitimate software, such as scheduling a task in Windows Task Scheduler.
WildFire Dynamic Analysis - Sections Dynamic analysis provides a granular view of file activity, process activity, registry activity, connection activity, etc. Files run in a custom-built, evasion-resistant virtual environment in which previously unknown submissions are detonated to determine real-world effects and behavior. Behavior can be observed in one or more operating system environments. It is broken down by the machines on which it was simulated and the activity itself. For example, Process Activity lists files that started a parent process, the process name, the action the process performed, and whether they are malicious, suspicious, etc. It shows not only the observed behavior of the file sample, but also how many times the behavior was observed in other Unit 42 samples (malicious samples, suspicious samples, and unknown samples).
In the following example, you can see that the parent process sample.exe wrote to file kernel32=E02A3B57EA8B393408FF782866A1D342DD8C6B5F5925BA527981DBB21B6A4080. The same behavior occurred in 3.57m samples that had a verdict of malicious.
WildFire Static Analysis The WildFire static analysis detects known threats by analyzing the characteristics of a sample before execution in the WildFire sandbox. Static analysis can provide instant identification of malware variants and includes dynamic unpacking to analyze threats attempting to evade detection using packer tools. You can analyze files such as Portable Executable (PE) files and any suspicious files.
Related Sessions & Submissions Shows any related sessions and submissions where the file was seen in your firewall. Related sessions and submissions data are available if you have one of the following products: Palo Alto Networks Firewall, WildFire, Cortex XDR, Prisma SaaS, or Prisma Access.
You have the option to add the file sample (without enriching) to Cortex XSOAR or to add and enrich the indicator to Cortex XSOAR.
Add to XSOAR
The indicator is added to Cortex XSOAR. If the indicator is related to one or more Unit 42 threat intel objects already in Cortex
XSOAR (ingested through the Unit 42 Feed integration), relationships are created in the database between the Unit 42 threat intel objects
and the file indicator. No third-party enrichments are run on the indicator. We recommend using this option if, for security reasons, you do
not want to expose the indicator to any third-party services.
Add & enrich
The indicator is added to Cortex XSOAR. If the indicator is related to one or more Unit 42 threat intel objects already in Cortex
XSOAR (ingested through the Unit 42 Feed integration), relationships are created in the database between the Unit 42 threat intel objects
and the file indicator. Your configured third-party enrichments are run on the indicator.
When you add indicators to the Cortex XSOAR threat intel library from Unit 42 Intel, the indicators are available for use in scripts and playbooks.
You can use Unit 42 Intel data to build complex searches for file samples with similar characteristics. In some sections, you can search for specific characteristics. For example, in the WILDFIRE DYNAMIC ANALYSIS - OBSERVED BEHAVIOR section, you can add Behavior to a search. In the WILDFIRE DYNAMIC ANALYSIS - SECTIONS, you can add PARENT PROCESS, ACTION, PARAMETERS, or all characteristics of the file activity to a search.
Adds selected information from a column to a Sample Analysis search (in the WILDFIRE DYNAMIC ANALYSIS - SECTIONS, you can add a
whole row to the search).
Clears any search characteristics you have already added and starts a new Sample Analysis search with the selected characteristics.
After selecting the relevant option, a message appears. You can do the following:
You pivot to the Sample Analysis tab where you can edit or run your search for samples that exhibited the same behavior.
If you want to add additional items to the search, ignore the message.
To run the search, go to the Threat Intel page and click the Sample Analysis tab.
For example, you might have an incident with an extracted file indicator. The Unit 42 Intel tab shows the file’s behavior. You scroll through the
sample's behavior and see a suspicious behavior: Powershell.exe written to a file in the Administrator's User folder, named 443.exe. You want
to find other samples with the same behavior and determine if they are related to a known adversary or malware, so you add that specific behavior
to your search.
Abstract
Use firewall sessions and submissions from products such as Prisma Cloud and Prisma Access with Cortex XSOAR to find threats and protect your network.
The Sessions & Submissions tab enables you to use your firewall sessions and submissions data for investigation and analysis.
Sessions refer to firewall sessions that show connections from one endpoint to another. A firewall can forward information about network sessions
for an investigation. Cortex XSOAR TIM uses session information to learn more about the context of the suspicious network event, indicators of
compromise related to the malware, affected hosts and clients, and applications used to deliver the malware.
Submissions refer to sample logs reported to WildFire from Palo Alto Networks products, such as Cortex XDR. While Sessions data shows
connections from one endpoint to another, submissions data shows if a file was found on a specific endpoint.
Sessions & Submissions data is available for users with at least one of the following products:
WildFire
Cortex XDR
Prisma Cloud
Prisma Access
You can take steps to block external IP addresses that are the sources of malicious files and threat campaigns. You can find compromised
machines within your network, isolate them as needed, and take remediation steps. For example, search for a file hash in Sessions &
Submissions. If the file appeared in one or more sessions or submissions, you can see when and where that occurred. Firewall session data enables you to view the source IP and the destination IP for each session that includes the file.
If you are using Cortex XDR, you can see which XDR agent reported the file and which computers are affected.
NOTE:
When searching on the Sessions & Submissions page for relationships, some results may appear without their specific relationships listed, due to internal relationship permissions.
(Multi-tenant) Sessions & Submissions data is not available for Multi-tenant deployments.
In the Session Summary tab you can see the following information:
Section Description
Basic Information Includes general information such as the session Timestamp, destination IP, and source country.
Sample Information Includes file information, such as the file name, SHA, File URL, and Status. The Status for blocked samples
is Blocked, while the status for allowed samples is blank.
NOTE:
The Application is matched to the type of application traffic detected in a session. For example, a search for the
Application web-browsing returns sessions during which web browsing over HTTP occurred. See Applipedia for an
updated list of applications that Palo Alto Networks identifies.
Metadata Includes metadata, such as the source, region, and Device Hostname.
Related Sessions and Submissions Lists any related sessions and submissions for further investigation.
You can use Unit 42 Intel data to build complex searches for sessions and submissions with similar characteristics. From within the Session
Summary tab, any of the items listed in the Basic Information, Sample Information, or Metadata sections can be used to create a new search for
similar sessions and submissions. For example, you can create a new search that includes a specific destination IP and a specific file name that
you found together in a session.
To build a new search, hover your cursor over the end of the desired row. You can submit the following search:
Clears any search characteristics you have already added and starts a new Sessions & Submissions search.
After selecting the relevant option, a message appears. You can do the following:
You pivot to the Sessions & Submissions tab where you can edit or run your search for sessions and submissions that exhibited the same
behavior.
If you want to add additional items to the search, ignore the message.
To run the search without clicking on the popup link, go to the Threat Intel page and click on the Sessions & Submissions tab.
Threat Intel Reports gives you the ability to create, review, publish, and generate threat intelligence reports.
Threat intel reports summarize and share threat intelligence research conducted within your organization by threat analysts and threat hunters.
Threat intelligence reports help you communicate the current threat landscape to internal and external stakeholders, whether in the form of high-
level summary reports for C-level executives, or detailed, tactical reports for the SOC and other security stakeholders.
NOTE:
If users are unable to see the Threat Intel page, ensure that users have access, by verifying that their user role is assigned the Threat Intel
permission (Page Access).
The Threat Intel Reports page shows all the types of reports created. You can do the following:
Create a report
After you create a report, edit the report as required. The core of the report is the Overview/Summary section, which is used to enter
freeform text. By default, users with Administrator or Analyst roles have read/write access to the reports. When creating a report, you can
restrict the report to specific user roles. When you finish a section, select the checkmark to save. If you navigate away and return to the
Threat Intel Reports page, the report appears in the Threat Intel Reports table. Select the report to continue working on it. When finished,
you can send it for review, publish it, and generate a PDF version. When published, it creates a read-only version of the report for you to
share.
Edit a report
You can edit the report when you create the report or from the Threat Intel Reports table (if you navigate away and return to the Threat Intel
Reports page).
Delete a report
By default, all roles have read/write access to the reports. To grant read and read/write access only to specific roles, you can define access to
reports by doing one of the following:
When you create a report, choose one or more roles in the Permissions section of the new report dialog.
After you create a report, choose one or more roles in the Access section of the report layout.
If a role has not been added to either the Access or Permissions section, the role does not have read and read/write access to the Threat Intel
report.
You can create a threat intel report by choosing a type and defining other basic report information. To customize the Threat Intel report, such as creating new types and layouts, see Customize Threat Intel Reports.
When you create a report, Cortex XSOAR creates a blank report based on the type you choose. Once created, edit the report to populate it with
relevant content before generating or sharing a report.
In the Overview/Summary section, enter freeform text using the Markdown editor, which enables you to apply formatting options to the body text,
including text sizing, coloring, formatting, pictures/icons, logo, and section headers.
1. Select Threat Intel → Threat Intel Reports → New Threat Intel Report
You can edit any fields after you create the report.
4. Edit report fields as needed and add any information about the specific report.
For example, if you want to send to another user to check before you publish, select Review.
When you have finished drafting a report, you can publish the report, which means all user roles have read-only access to the report to prevent
other users from making changes. If you unpublish a report, that access is reverted. Publish/unpublish does not revert any read/write access that
you granted to a specific role.
1. Navigate to the Threat Intel Reports table and click the Name of the report you want to share.
Once published, anyone you give the report link to can see the report (provided they have access to your Cortex XSOAR tenant). To remove read-only access, unpublish the report.
If you want to send the report to a larger audience beyond Cortex XSOAR users, you can generate the report as a PDF. Before generating the report, you can save the report as a template, so you don't need to define the settings again. To see a TIM report use case, go to Threat Intel Management use cases.
1. Navigate to the Threat Intel Reports table and click the Name of the report you want to generate.
2. Click the vertical ellipsis icon at the top right of the report and click Report.
3. (Optional) If you want to generate a report from a specific tab, select Select a tab to generate report from and then select the relevant tab.
4. From the Properties section, choose a Format, Orientation, and Paper Size for the report.
18 | Troubleshoot
Abstract
Troubleshoot errors, and view and take action on the System Diagnostics page. View integration and management audit logs, set up a Syslog server, and configure management audit notification forwarding.
View errors and take action on the System Diagnostics page for Cortex XSOAR On-prem.
The Cortex XSOAR System Diagnostics page enables you to identify and fix potential issues before they become system-critical. By default, the
System Diagnostics page shows trends from the last 24 hours, but you can also select the last hour, 6 hours, 12 hours, 3 days, or 7 days.
NOTE:
To help with debugging issues, you can download the log bundle by clicking the download icon in the upper right-hand corner. The log bundle contains information about the system from the current state up to the past ten days, and it should be included when opening a support ticket.
IMPORTANT:
When you click to download the log bundle, an indicator shows that the download is in progress (without any other messages). The log bundle download may take a few minutes. Do not refresh or leave the page until the log bundle file download completes.
The System Diagnostics page provides system status data over time in the form of graph and table widgets.
TIP:
If there is a node failure, manage the nodes from the textual UI. For example, if a node fails, remove it and then add a new node to replace it.
For more information, see Manage nodes in a cluster.
Nodes widgets
Nodes - CPU Trend graph showing CPU consumption. The trend graph shows an increase as system usage increases. Temporary
peaks might correlate with system delays or slowness.
Nodes - Memory Trend graph showing memory consumption. The trend graph shows an increase as memory usage increases.
Temporary peaks might correlate with system delays or slowness.
Nodes - Storage Trend graph showing storage usage. The trend graph shows an increase as storage usage increases. Temporary
peaks might correlate with system delays or slowness.
Active Nodes Snapshot Shows a list of all active nodes and their status: Connected or Disconnected.
The Storage Groups widget displays a graph illustrating storage group utilization. The trend graph shows an increase as storage usage grows. A
rapid surge in storage utilization might indicate a change in system usage.
We recommend increasing storage capacity or performing a data cleanup when utilization reaches 80%.
The Playbooks in Queue widget shows a graph that includes manually and automatically triggered playbooks and displays how many playbooks were waiting in the queue over the displayed period. The playbook queues are designed to manage playbook executions efficiently and prevent system overload. A rapid surge in the graph values might indicate a temporary peak of triggered playbooks, which can cause playbooks to take longer to execute and may slow UI performance.
If the queue count is constantly higher than 0, contact Customer Support to discuss scaling options.
The Connectivity Snapshot widget shows the connection status between your Cortex XSOAR tenant and the external gateway. If the status is
Disconnected you cannot upgrade Cortex XSOAR, access the Marketplace, or update Docker images.
The Components Snapshot widget shows the status of a Cortex XSOAR component.
Status Action
Healthy None
Warning Check cluster health graphs for temporary peaks or high resources utilization.
If you have recently made changes to your system, verify if these changes might have impacted system components.
Error Open a support case if you cannot find the source of the issue.
NOTE:
For some components, such as storage, if the system reaches a critical level, Cortex XSOAR will no longer function and you will not be able to
access the System Diagnostics page.
18.2 | View service limit errors and warnings in the Guard Rails page
Abstract
Use the Cortex XSOAR Guard Rails page to see details about service limit errors or warnings.
The Cortex XSOAR Guard Rails page provides a list of usage limitation errors and warnings that occur during incident ingestion, investigation, and
response. It helps to keep your environment stable and prevent actions that can cause major performance degradation or instability.
Cortex XSOAR has service rate limits for the number of incidents and indicators that can be ingested and stored. The Guard Rails page indicates
when incident or indicator size exceeds predefined service limits and may affect performance.
Cortex XSOAR supports one or more tenants per customer: One for production, and one or more for development. The development tenant allows
you to develop and test components (such as playbooks, automation scripts, and screen layouts) before they are deployed to production.
Indicator volume support differs between customers who own a TIM license and those who do not own a TIM license.
Rate limit of 100 incidents ingested per minute, for both production and development tenants, with or without a TIM license.
The development tenant has different technical specifications and should not be used for a production environment or stress testing.
NOTE:
For multi-tenant deployments, the same service limits apply to each child tenant.
The Cortex XSOAR Guard Rails page displays a table with a list of service limit errors and warnings and their details.
An error occurs when a service limit is exceeded. For example, an error can be generated for exceeding the size limit of an attachment or for
exceeding the number of entries per incident.
A warning occurs when approaching the service limit. For example, a warning can be generated when the number of entries per incident is
approaching the service limit or the number of linked incidents is approaching the service limit.
The service limits are defined out-of-the-box. Contact Cortex XSOAR support if you need to change the values for your service limits.
Access the Guard Rails page from Cortex XSOAR Settings & Info → Settings → System.
Type: The object type the error or warning occurred on, for example, incident or indicator.
Subtype: The object subtype (N/A if it doesn't exist), for example, entries or attachments.
Count: The number of times a specific item occurred in the last calendar day.
NOTE:
Identical messages generated within the same day are not duplicated in the table; only the Count is updated, and the Timestamp shows the date and time the error or warning first occurred. A count greater than one indicates an identical error or warning occurred more than once within the same day.
View logs for monitoring system health and download log bundles for troubleshooting from the Cortex XSOAR System Diagnostics page or from
your VM textual UI menu.
Logs provide information about events that occur in the system. They are a valuable tool in troubleshooting issues that might arise in your Cortex
XSOAR environment. If you need additional help to find the source of an issue, you can download the log bundle to send to support or engineering
or to attach to a support ticket to facilitate the troubleshooting process.
NOTE:
You need viewer SSH user permissions to view and download logs.
Once Cortex XSOAR is installed and running, you can view system status and download log bundles from the Cortex XSOAR UI. If you encounter
issues during installation or if Cortex XSOAR is not running, you can access logs and log bundles from the textual UI menu.
1. If the textual UI is not already open, either launch the web console from your VM or log in via SSH from an external terminal. For more information, see Troubleshoot your installation.
These logs are not related to any user session in Cortex XSOAR.
The viewer user can use scp/sftp to download the log bundle to their home directory.
View system status and download log bundles from Cortex XSOAR
The System Diagnostics page provides system status data over time in the form of graph and table widgets.
The graphs and tables in the System Diagnostics page show the following data. This information can help troubleshoot system performance
issues. If you need additional help to find the source of the issue, you can download the log bundle to send to support or engineering or to
attach to a support ticket.
Widget Description
Nodes - CPU A trend graph showing CPU consumption. The graph shows an increase as system usage increases. Temporary
peaks may indicate system delays or slowness. We recommend increasing CPU resources when you reach
system limits.
Widget Description
Active Nodes Snapshot A list of all active nodes and their statuses. Possible values are Connected or Disconnected.
Storage Groups A trend graph showing storage group utilization. The graph shows an increase as storage usage increases. A
rapid surge in storage utilization may indicate a change in system usage. We recommend either increasing
storage capacity or performing a data cleanup when utilization reaches 80%.
Nodes - Memory A trend graph showing memory consumption. The graph shows an increase as memory usage increases.
Temporary peaks may indicate system delays or slowness. We recommend increasing memory resources when
you reach system limits.
XSOAR Components Snapshot A table showing the status of Cortex XSOAR components. Possible values are Healthy, Warning, or Error.
If you see a warning or error for a Cortex XSOAR component, we recommend you:
Check cluster health graphs for temporary peaks or high resource utilization.
If you have recently made changes to your system, verify if these changes have impacted system components.
Open a support case if you cannot find the source of the issue.
Playbooks in Queue A graph that includes both manually triggered and automatically triggered playbooks and displays how many playbooks were waiting in the queue over the displayed time period.
Playbook queues manage playbook executions efficiently and prevent system overload. A rapid surge in the graph values may indicate a temporary peak of triggered playbooks that can cause playbooks to take longer to execute and/or slow UI performance.
If the queue count is consistently higher than 0, we recommend contacting customer support to discuss scaling options.
Nodes - Storage A trend graph showing storage usage. The graph shows an increase as storage usage increases. Temporary
peaks may indicate system delays or slowness. We recommend increasing storage resources when you reach
system limits.
Cortex Connectivity Snapshot A table showing the status of the connection between your Cortex XSOAR local tenant and the external gateway.
If the status is Disconnected, you cannot upgrade Cortex XSOAR, access Marketplace, or update Docker images.
You can choose the following time frames to display the system status data:
Last hour
Last 6 hours
Last 12 hours
Last 3 days
Last 7 days
View, export, extract, and purge the audit trail in Cortex XSOAR. The audit trail logs all administrative user actions in Cortex XSOAR.
The management audit logs display a log of all administrative user interactions within Cortex XSOAR. By default, the logs are sorted by
Timestamp and cover which users interacted in what way with system objects, and associated data.
NOTE:
The audit logs do not include actions performed in the War Room. These actions are documented in the War Room itself.
You can filter by field, such as email, ID, user name, and type and you can save filters for later use. In addition, you can adjust the appearance of
the columns and add or remove columns.
To view the audit logs, go to Settings & Info → Management Audit Logs.
To export the management audit logs as a tab-separated (.tsv) file, click the Export to file button. You can also forward management audit notifications to an email distribution list.
API Keys
Edit Key
Delete Key
Authentication
Login
Logout
Licensing
Includes details about the license such as expiration and ingestion violation.
Permissions
Role created
Role deleted
Cortex Automation
Classifier: Includes incident and indicator classifier subtypes, such as add, copy, and edit.
ContributionPack: Includes contribution content pack subtypes, such as add, edit, and delete.
Credentials: Includes integration credential subtypes, such as add, edit, and delete.
Delete
RemoveEntryPermanently
Edit
HyperProcess: Includes the add and delete subtypes for the Hyper Process.
Incident: Includes incident subtypes, such as add, edit, close, execute, and duplicate.
IncidentField: Includes incident field subtypes, such as add, edit, delete, and export.
Incident Layout: Includes incident layout subtypes, such as add, duplicate, edit, attach, and detach.
IncidentType: Includes incident type subtypes, such as attach, detach, enabled, disabled, and delete.
IntegrationsConfig: Includes integration configuration subtypes, such as add, edit, and upload.
Jobs: Includes job subtypes, such as add, edit, delete, pause, and abort.
Layout: Includes indicator layout subtypes, such as copy, edit, attach, and detach.
Playbook: Includes playbook subtypes, such as add, edit, upload, and delete.
Script: Includes script subtypes, such as copy, upload, edit, and delete.
ThreatIntelReport: Includes the Threat Intel Report subtypes, such as create, edit, and delete.
Whitelist: Includes the whitelist subtypes, such as delete, batchcreate, and add.
Widget: Includes the widget subtypes, such as edit, add, and reset.
XSOAR Migration
Includes audit information about the migration from Cortex XSOAR 6 to 8, such as whether users were migrated, the cutoff date, and whether
content and integrations were resynced.
1. Navigate to Settings & Info → Settings → System → Audit Notifications → Add Forwarding Configuration.
2. Enter a name and a description for the configuration and click Next.
3. To select a subset of the management audit notifications, click the filter button, select the relevant filters, and perform a search. For example, if you want to forward only notifications related to API keys, click the filter button, select Type, and then select the Api Key value.
4. Click Next.
Field: Distribution List (Mandatory: Yes)
Add at least one email address to receive management audit notifications.

Field: Notification Timezone (Mandatory: No)
Change the notification timezone. The notification timezone only affects the time listed in email notifications. You can use the timezone configured in Cortex XSOAR or select Coordinated Universal Time (UTC).

Field: Grouping Timeframe (Mandatory: No)
Change the grouping time frame. The grouping time frame specifies how often Cortex XSOAR sends notifications. Every 30 notifications aggregated within this time frame are sent together. To send every notification as soon as it is generated, set the time frame to 0.

Field: Subject (Mandatory: Optional)
Select to generate the subject automatically, or deselect and enter the email subject. By default, this field is selected.
View and export integration logs in Cortex XSOAR. Integration logs record integration details in Cortex XSOAR for troubleshooting.
The Integration Logs table helps monitor, troubleshoot, and analyze performance. It provides visibility into interactions between Cortex XSOAR
and external systems, facilitating effective integration management and ensuring the operational integrity of security operations.
Logs are generated for integrations developed in Python or in non-Python code, for example, JavaScript or PowerShell.
To view integration logs, navigate to Settings & Info → Settings → Integrations → Integration Logs.
NOTE:
All Integration logs are located in the integration logs table in your Cortex XSOAR tenant. If you have an engine, verbose logs (including
integration logs) are also stored in the log file on your engine machine.
Field: Timestamp
The date and time that the integration created the log.

Field: Engine name
The name of the engine server that the integration was running on.

Field: Source instance
The integration instance name that was set in the integration instance settings.

Field: Bundle ID
A unique identifier for logs generated from a specific integration execution. For example, a bundle ID is assigned to all logs generated from a specific integration command or fetch-incidents run.
Filter by field
You can filter by log table column, such as log level, brand, or instance name, and you can save filters for later use. You can also adjust the
width of the columns and add or remove columns.
Export logs
To export the integration log as a .tsv file, click the Export to file button.
19 | Reference
Abstract
Includes reference topics, such as a list of server configurations, and user details and preferences for Cortex XSOAR Cloud
This section provides comprehensive reference information, empowering you to manage Cortex XSOAR effectively. Access essential details like
server configurations and explore powerful in-app search functionalities to quickly find the information you need.
Commands
System commands: Commands that enable you to perform Cortex XSOAR operations, such as clearing the playground or closing an
incident. These commands are not specific to an integration. System commands are entered in the command line using a /.
External commands: Integration-specific commands that enable you to perform actions specific to an integration. For example, you can
quickly check the reputation of an IP address. External commands are entered in the command line using a !. For example, !ip.
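For example, the following could be run from the CLI (assuming an integration that implements the ip command is installed and configured; the argument name follows the convention used by reputation commands):

    /playground_create
    !ip ip="8.8.8.8"

The first is a system command that erases the current playground and creates a new one; the second is an external command that returns the reputation of the specified IP address.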
Content packs
All Cortex XSOAR content is organized in packs. Packs are groups of artifacts that implement use cases in the product. Content packs are created
by Palo Alto Networks, technology partners, consulting companies, MSSPs, customers, and individual contributors. Content packs may include a
variety of different components, such as integrations, scripts, playbooks, and widgets.
The content repository functionality built into Cortex XSOAR allows you to sync content between development and production machines using a
private repository.
Context data
Different commands and playbook tasks are tied together by the Cortex XSOAR context. Every incident and playbook has a place to store data
called the context. The context stores the results from every integration command and every automation script that is run. It is a JSON storage for
each incident. Whether you run an integration command from the CLI or from a playbook task, the output result is stored in the JSON context in
the incident or the playground. For example, the command !whois query="paloaltonetworks.com" returns the data and stores the results in
the context.
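As an illustrative sketch only (the actual context paths and keys vary by integration), the !whois example above might leave data in the context shaped roughly like this:

    {
        "Domain": {
            "Name": "paloaltonetworks.com",
            "WHOIS": {
                "Registrar": "<registrar name>",
                "CreationDate": "<creation date>",
                "NameServers": ["<ns1>", "<ns2>"]
            }
        }
    }

Subsequent playbook tasks and commands can read these values from the context instead of querying the integration again.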
Dashboards
Dashboards include visualized data, including Cortex XSOAR incident, indicator, and system data, displayed for a rolling, relative time frame.
Dashboards enable you to track metrics, analyze trends that appear in your Cortex XSOAR data, and identify areas of concern. Dashboards can
be customized with widgets that focus on the data points most relevant to your organization.
Engines
An engine is a proxy server application that is installed on a remote machine and enables communication between the remote machine and the
Cortex XSOAR tenant. You can run playbooks, scripts, commands, and integrations on the remote machine and the results are returned to the
tenant.
Engines are used for the following purposes:

Integration instances for On-prem applications. For example, the GitLab v2 integration enables you to run commands on GitLab instances.
Execute scripts and commands that require access to On-prem resources. For example, the Active Directory v2 integration enables you to
run commands in Active Directory.
Generic Indicator export service. In Cortex XSOAR, you can configure an EDL to share a list of Cortex XSOAR indicators with other
products in your network, such as a firewall or SIEM. For example, your Palo Alto Networks firewall can add IP address and domain data
from the EDL to block or allow lists.
Load balancing which enables the distribution of the command execution load.
Incidents
Incidents are potential security threats that SOC administrators identify and remediate. There are several incident triggers, including:
SIEM alerts
Mail alerts
Security alerts from third-party services, such as SIEM, mailboxes, data in CSV format, or from the API
Cortex XSOAR includes several out-of-the-box incident types, and users can add custom incident types with custom fields, as necessary.
Incident fields
Incident fields are used for accepting or populating incident data. You create incident fields to hold information received from third-party
integrations, manual input, or via the API.
Incident lifecycle
You can define integrations with your third-party security and incident management vendors. You can then trigger events from these integrations
that become incidents. After the incidents are created, you can run playbooks on these incidents to enrich them with information from other
products in your system, which helps you complete the picture. In most cases, you can use rules and scripts to determine if an incident requires
further investigation or can be closed based on the findings. This enables your analysts to focus on the minority of incidents that require further
investigation.
Indicators
DBot can simplify your incident investigation process by collecting and analyzing information and artifacts found in War Room entries. Cortex
XSOAR analyzes indicators to determine whether they are malicious. Using indicator types reveals predefined, regular expressions in the War
Room.
There are many out-of-the-box indicator types, and you can add custom indicator types as necessary. The following are some of the indicator types (the list is not exhaustive):
Registry key
URL
Domains
CIDR
When you add an indicator type, you can add formatting, enhancement, and reputation scripts, as well as reputation commands. Formatting
scripts modify how the indicator is displayed in the War Room and reports. Enhancement scripts enable you to gather additional data about the
highlighted entry in the War Room. Reputation scripts calculate the reputation score for an entry that DBot analyzed, for example,
DataIPReputation, which calculates the reputation of an IP address. Reputation commands (such as !ip for IP addresses) are an alternate way
to calculate an indicator’s reputation score (verdict) and gather additional data about the indicator. Reputation commands and reputation scripts
are executed when enriching a specific indicator type (for example, when the indicator is extracted from an incident).
Integrations
Integrations are third-party tools and services that the Cortex XSOAR platform works with to orchestrate and automate SOC operations.
NOTE:
Cortex XSOAR 8 currently does not support the IoT Security Third-party Integrations Add-on. For more information, see the IoT Security
documentation.
In addition to third-party tools, you can create your own integration using the Bring Your Own Integration (BYOI) feature.
The following lists some of the integration categories available in Cortex XSOAR. The list is not exhaustive, and highlights the main categories:
Authentication
Case Management
Database
Endpoint
IT Services
Messaging
Network Security
Vulnerability Management
Integration instance
A configuration of an integration. You can have multiple instances of an integration, for example, to connect to different environments. If you are an
MSSP and have multiple tenants, you can configure a separate instance for each tenant.
Jobs
You can create scheduled events using jobs. Jobs are either time-triggered or feed-triggered. For example, you can define a job to trigger a playbook when a specified TIM feed finishes a fetch operation that included a modification to the list.
Marketplace
Marketplace is the central location for installing, exchanging, contributing, and managing all of your content, including playbooks, integrations,
scripts, fields, layouts, and more.
When a content pack is available for update, you see an Updates waiting in Marketplace notification in the side menu. You can update to the latest
content version or to a specific version. All dependent content packs update automatically with the content pack. We recommend periodically
reviewing your installed Marketplace packs for any available updates, and updating, as required.
Playbooks
Playbooks are self-contained, fully documented prescriptive procedures that query, analyze, and take action based on the gathered results.
Playbooks enable you to organize and document security monitoring, orchestration, and response activities. There are several out-of-the-box
playbooks that cover common investigation scenarios. You can use these playbooks as-is, or customize them according to your requirements.
Playbooks are written in YAML file format using the COPS standard.
A key feature of playbooks is the ability to structure and automate security responses, which were previously handled manually. You can reuse
playbook tasks as building blocks for new playbooks, saving you time and streamlining knowledge retention.
Playground
The Playground is a non-production environment where you can safely develop and test scripts, APIs, commands, and more. It is an investigation
area that is not connected to a live (active) investigation.
To erase a playground and create a new one, in the Cortex XSOAR CLI run the /playground_create command.
Reports
Reports include visualized data, including Cortex XSOAR incident, indicator, and system data, which can be run for a specific time frame and
automatically sent via email to internal or external stakeholders.
Scripts
The Scripts page is where you manage, create, and modify scripts. These scripts perform a specific action and consist of commands associated with an integration. You write scripts in either Python or JavaScript. Scripts are used as part of tasks, which are used in playbooks and
commands in the War Room.
The Scripts section includes a Script Helper, which provides a list of available commands and scripts, ordered alphabetically.
War Room
The War Room is a collection of all investigation actions, artifacts, and collaboration pieces for an incident. It is a chronological journal of the
incident investigation. You can run commands and playbooks from the War Room and filter the entries for easier viewing.
Search Cortex XSOAR using Lucene query syntax, the search box, or general search.
Cortex XSOAR comes with a very powerful search capability. You can search for data using the following:
General search
The search follows the Bleve query syntax. Bleve query syntax is similar to Lucene query syntax, but with some differences, such as the query syntax for numeric ranges and date ranges. The search is performed on certain pages, such as Incidents and Indicators, or across all data (such as titles, entries, and chats).
You can add some of the following inputs when searching for data:

Input: Add text
Type any text. The results show all data where one of the words appears. For example, the search low virus returns all data where either the low or the virus string appears.

Input: and
Searches for data where all conditions are met. For example, status:Active and severity:High finds all incidents with an active status and high severity.

Input: or
Searches for data where either condition is met. For example, status:Pending and severity:High or severity:Critical finds all incidents with a pending status and high or critical severity.

Input: * and ?
Wildcard search. Use * and ? when searching for partial strings. For example, when searching for all scripts that start with AD, use AD*. If you need to search for a script that contains "get", search for *get*.

Input: ""
An empty value.

Input: -
Excludes from any search. For example, on the Incidents page, -status:closed -category:job searches for all incidents that are not closed and for categories other than jobs.

Input: "me"
Filters incidents by a user's account. For example, owner:{me} displays all incidents where you are the owner. It can also be used for other fields, such as createdBy:{me}, which displays all incidents you created.

Input: Relative time. For example, "today", "half an hour ago", "1 hour ago", "5 minutes ago", "10 days ago", "5 seconds ago", "five days ago", "a month ago", "in 1 year".
Relative time in natural language can be used in search queries. The time filters < and > can be used when referring to a specified time, such as dueDate:>="2024-03-05T00:00:00 +0200", or when searching for high severity incidents: Severity:High and created:>= "1 hour ago".
NOTE:
The timezone for searches is UTC. The system timezone is not used.
When adding some fields, such as Occurred, you can enter the date from the calendar. You can also filter the date when the results are displayed.

Input: Search using Regex
Use the value "//" when searching for Regex values. For example, to search for indicator values that contain www and end with .com, type value:"/w{3}..*.com/". This returns values such as www.namecheap.com and www.kloshpro.com.

Input: Search for indicator values
To search for indicator values that contain lowercase and uppercase a-z letters and 0-9 numbers with a length of 32, type value:"/[a-zA-Z0-9]{32}/". This returns values such as 775A0631FB8229B2AA3D7621427085AD and 87798e30ca72f77abe624073b7038b4e.

Input: Timer/SLA fields
To search for Timer/SLA fields in incidents, see Search incidents for Timer/SLAs.

Input: Special characters
To explicitly use the following characters in a search query, place them within double quotes. An escape character \ is not required:
&& || ! {} [] () ~ * ?
To explicitly use the following characters in a search query, place them within double quotes and use an escape character \.
For information about using special characters, see Run commands in the CLI.
NOTE:
When searching for incidents, the following fields match any incident containing the searched value:
phase
name
details
type
For example, you have several incidents with the idle accounts name. When searching for name:"idle", the search returns any name that contains the word idle (including idle accounts). Other fields return only exact matches for the word idle. For exact matches on the name, type, and phase fields, add raw to the field name. For example, enter rawName:"idle".
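As an illustrative combination of the inputs described above (the field values are hypothetical), a single query might read:

    -status:closed and severity:High and created:>="1 hour ago" and owner:{me}

This returns open, high-severity incidents created within the last hour that you own.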
The search box searches for incidents, investigations, and indicators. The search box appears in the top right-hand corner on most pages. You
can either type free text or search using the search query format (use the arrow keys to assist you in the search). For example,
incident.severity:Low searches for all incidents that have low in the severity category.
NOTE:
For precise results when searching for long text, phase, name, reason, details, or type, set the server configuration incident.search.exact.match.only to true. For example, when searching for type:Phish Mail, if the server configuration is set to true, the results returned include the exact text Phish Mail and not each word separately. Another option to return exact text, just for name, type, and phase, is to add the term "raw" preceding the query in your search. For example, rather than just entering type:Phish Mail, type rawType:"Phish Mail".
Free Text
A free text search is used in the Playbooks and Scripts pages. You can search using part or all of the component's name. The component tag or
description is included in the search. You can also search for an exact match of the component name by putting quotation marks around the
search text. For example, searching for "AddEvidence" returns the script with that name. You can search for more than one exact match by
including the logical operator "or" in-between your search texts in quotation marks. For example, searching for "AddEvidence" or
"AddKeyToList" returns the two scripts with those names. Wildcards are not supported in free text search.
General Search
A general search is used, for example, when searching for a table in the Users tab, for a widget, or for a task in a playbook.
Use markdown to add basic formatting to text in multiple contexts within Cortex XSOAR.
You can use Markdown in many places within Cortex XSOAR. Some of the more common places are:
Scripts
Playbook tasks
Widgets
Incident fields
Lists
In most contexts where Markdown is supported, a Markdown editor is available to help you apply styles and view a preview of how those styles
will look.
Markdown Syntax
Most Markdown syntax elements within Cortex XSOAR are identical to those used in basic and extended Markdown syntax. For more information
about markdown syntax, see https://www.markdownguide.org/.
The following Markdown elements used in Cortex XSOAR and exposed in the Markdown editor follow the same syntax as basic/extended
Markdown:
Bold
Italics
Strike-through
Headings
Lists (unordered/ordered)
Links
Code
NOTE:
Using the Insert code button in the Markdown editor adds three backtick quotes, which allows inclusion of a literal backtick character
within the code snippet.
Tables
NOTE:
You can use the Insert table button in the Markdown editor to easily create a table with up to five rows/columns.
Images
Blockquote
Cortex XSOAR supports additional elements not found in basic/extended Markdown that provide useful functionality when working with Cortex XSOAR. For example:

Text color: {{color:blue}}(This text will be blue)

NOTE:
You can use the name of the color, or the color code (hex triplet format).

Text background color: {{background:#fd0800}}(This text will have red background) OR {{background:red}}(This text will have red background)

NOTE:
You can use either the name of the color, or the color code (hex triplet format). You can use text color and text background color in parallel. For example (using the editor buttons): {{background:red}}({{color:blue}}(This text will be in blue with red background)). Or, alternatively, if you are manually applying attributes, you can include both types in a single bracket: {{background:red;color:blue}}(This text will be in blue with red background). For both text color and text background color, you can select Show custom options in the Markdown editor to select or enter a specific color code (hex triplet format).

Upload a local image: You can upload a local image that is not available on the internet to the Markdown editor. Copy/paste or drag a local image into the Markdown editor, which automatically applies the standard image syntax and adds a relative path to the image.

NOTE:
Within the War Room, when the Markdown editor is open, you can only drag images into the Markdown editor. To drag images into the War Room, first close the Markdown editor.

Buttons: If data between the two sets of %%% is not parsed as JSON, all of the data is taken as a command to render. For example, %%%!Print value='test'%%% causes the button to run !Print.

Additional elements not exposed in the Markdown editor can also be applied, such as letter-spacing, text-shadow, font-weight, and font-size.
Some extended Markdown syntax may not be supported. For example, checkboxes and footnotes.
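Putting several supported elements together, a Markdown field might contain something like the following sketch (the heading and table use standard Markdown syntax; the color syntax is the Cortex XSOAR extension described above, and the values are illustrative only):

    ## Investigation summary

    **Verdict:** {{color:red}}(Malicious)

    | Indicator        | Verdict    |
    | ---------------- | ---------- |
    | www.kloshpro.com | Suspicious |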
Each user can define their own details and preferences. To configure, click your username, and select User Preferences. These preferences do
not affect other users.
Details
Edit your first and last name and reset your password. Your password must meet the requirements of your organization's password policy.
User preferences
Choose a default landing page that appears when you log in.

Section: Keyboard Shortcuts
Change the shortcut letter used to open the GoTo bar to search, investigate, and initiate actions. To change the shortcut, click the letter in the box, type a letter, and then save. The shortcut value must be a keyboard letter (A to Z).

Section: Timezone
Select the timezone to display your Cortex XSOAR data, which affects the timestamps displayed in Cortex XSOAR, such as auditing logs, and exported files.

Section: Timestamp Format
The timestamp format is displayed in data tables, auditing logs, and exported files.
NOTE:
If Keyboard Shortcuts, Timezone, and Timestamp Format do not appear in the Preferences tab, these settings can only be set on the Server Settings page.
Notifications
Each user can define their notifications. These settings do not affect other users.
You can configure which notifications to receive and via what channels. Notifications are presented by categories:
My Incidents
My Playbook Tasks
My To-Do Tasks
Other Notifications
Email notifications are enabled by default. Every email notification from which you can unsubscribe includes a link at the bottom of the email to
bring you directly to the User Preferences page where you can edit your notification settings.
If your organization has integrations with Slack or other messaging applications, you can choose to enable or disable notifications through those
applications.
NOTE:
To set your active/away status, click your username and go to Set Yourself as Away. Once you are set as away, a Zz icon appears next to your
name. To change the status, select Set Yourself as Active. Other users see you as active or away in dropdown lists, such as when assigning an
owner to an incident. Your status can also be set by entering !setYourselfAs in the command line.
Cortex XSOAR provides custom server configuration settings that enable you to customize your Cortex XSOAR on the tenant level. You can also
use custom server configuration settings in situations where you experience issues or need to troubleshoot situations in your environment.
1. Navigate to Settings & Info → Settings → System → Server Settings → Server Configuration.
2. Add a new configuration or edit an existing one.
3. Enter the relevant key and value, as described in the tables below.
4. Click Save.
Engines
engine.test.command.timeout<brand-name> (default: 60)
Increases the timeout, in seconds, for a specific integration when using an engine. For example, change it to 300 seconds. Type the key in this format, appending the brand name: engine.test.command.timeoutTanium

engines.notification.users (default: N/A)
Specifies which users receive an email notification when an engine disconnects. A comma-separated list of Cortex XSOAR users. For example: user1,user2,user3
Google API
UI.google.api.key (default: N/A)
Entities that have geolocation information (latitude and longitude) can be displayed on a Google map by utilizing the Google Maps API (which is required). For example, you can see the physical location of a computer that was attacked by malware. To display the physical location of an entity on a map, set this key's value to your Google Maps API key. For more information, see Set up Google Maps in Cortex XSOAR to use map automations.
Incidents
Text values, such as Asset ID. (You can only edit when you click the pencil in the value field.)

investigation.prevent.modify.closed (default: true)
Whether to prevent adding chats and notes to a closed investigation (set to false to allow).

Export.utf8bom (default: False)
Whether to export incidents and indicators to CSV using the UTF8-BOM format. For more information, see Export an incident to CSV using the UTF8-BOM format.
Indicators
indicator.timeline.auto.extract.enabled (default: true)
Enables the indicator timeline in the indicator extraction flow. For more information, see Configure the indicator timeline.

indicator.timeline.enabled (default: true)
Enables the indicator timeline in all flows. For more information, see Configure the indicator timeline.

sync.mirror.job.delay (default: 1)
The interval for the job, in minutes. For more information, see Special Server Configurations.

sync.mirror.job.enable (default: enable)
Enable or disable the mirroring job. For more information, see Special Server Configurations.
Notifications
soc.name (default: N/A)
Customizes the SOC name in the survey header for an Ask task. For more information, see Customize the SOC name.

comm.ask.linktocontext.enabled (default: true)
Whether to display the links generated for an Ask task in the Context Data of the Work Plan.

comm.datacollection.linktocontext.disabled (default: true)
Whether to display the links generated for a Data Collection task in the Context Data of the Work Plan.
Proxy
condition.ask.external.link (default: N/A)
The address (including the HTTPS prefix) of the proxy used for external user communication in a conditional task.
Remote Repository
UI.version.control.admin.only (default: false)
Set to true to restrict pushing content to a remote repository to administrators only.
Reports

reports.time.zone (default: Local time/Location)
Configure the timezone for widgets in a report. For more information, see Configure the timezone in a report.
Scripts
script.timeout (default: 3)
The timeout, in minutes, to prevent blank pages when running a script. If you generate a report that runs a script and has blank pages, you can troubleshoot the script timeout. For more information, see Troubleshoot script timeout for reports.
System Settings
UI.show.timezone.in.server.settings (default: false)
If set to true, settings for Keyboard Shortcuts, Timezone, and Timestamp Format appear on the Server Settings page. By default, these settings instead appear on the Preferences tab of the User Details page.
SLA
Widgets
ROI.Cost.Monitor (default: 60)
Amount in dollars. Relevant for the ROI widget. For more information, see Saved By Dbot (ROI) Widget.
The following are frequently asked questions for new Cortex XSOAR users.
If you have a full content bundle (.tar.gz file), navigate to Settings & Info → Settings → System → Server Settings and scroll to Custom content. Browse for the file or drag it into the Upload custom content box.
Navigate to Settings & Info → Settings → System → Server Settings and scroll to Custom content. Click Export all custom content to download a
compressed file containing all of the custom content from your instance.
You can also export individual content items, such as playbooks, by selecting the content item, clicking the triple dot menu in the upper right corner
of the page, and clicking the Download button.
Navigate to your username and select Username → User Preferences → Notifications. By default, all notifications are enabled. De-select the
checkboxes for notifications you don’t want to receive.
Configure an integration instance for that chat application. As long as the integration instance implements the send-notification command, it
appears on the Notifications tab.
Dashboards show data from a rolling, relative time frame from a certain time in the past (for example, 7 days ago) through the present and are
shown when you log into Cortex XSOAR. Reports allow you to share similar data outside of Cortex XSOAR via email. Reports can be scheduled
to run at a specific time to capture data where the start/end time is important. For example, if management requests a report on the incidents that
occurred between 08:00 yesterday and 08:00 today.
The link to the playground appears at the bottom of the My Incidents menu item in the left sidebar. You can also open the GoTo bar using Ctrl-Alt-K and type playground, or go directly to https://<tenant>/WarRoom/playground/.
Navigate to Marketplace → Installed Content Packs. From the Show list, select Update available. Click the checkbox to select all, then click the
Update button.
Cortex XSOAR comes with a powerful search capability that uses the Lucene query syntax. For example, to search playbooks:
Search for the playbook with the exact name “Phishing - Generic v3”: name:"Phishing - Generic v3"
Search for playbooks where the word “Phishing” appears anywhere in supported system objects: Phishing
Search for playbooks where the playbook name contains “Phishing”: name:"Phishing"
Before you can begin using Cortex XSOAR APIs, you must generate the following items from Cortex XSOAR:
Value: API Key
The API Key is your unique identifier used as the Authorization:{key} header required for authenticating API calls. Depending on your desired security level, you can generate two types of API keys: Advanced or Standard. The Advanced key hashes the key using a nonce (a random string) and a timestamp to prevent replay attacks. cURL does not support this, but it can be used with scripts.

Value: API Key ID
The API Key ID is your unique token used to authenticate the API Key. The header used when running an API call is x-xdr-auth-id:{key_id}.

Value: FQDN
The FQDN is a unique host and domain name associated with each tenant. When you generate the API Key and Key ID, you are assigned an individual FQDN.
The API is documented in detail in the Cortex XSOAR API Reference Guide.
Cortex XSOAR API URIs are composed of your unique FQDN, the API name, and the name of the call. For example: https://api-{fqdn}/xsoar/{name of api}/{name of call}/.
The following steps describe how to generate the necessary key values.
1. Select Settings & Info → Settings → Integrations → API Keys → New Key.
2. Select the type of API Key you want to generate based on your desired security level: Advanced or Standard.
3. To define a time limit on the API key authentication, mark Enable Expiration Date and select the expiration date and time.
You can view the Expiration Time field for each API key at Settings & Info → Settings → Integrations → API Keys. In addition, Cortex XSOAR displays an API Key Expiration notification in the Notification Center one week and one day before the defined expiration date.
4. (Optional) Provide a comment that describes the purpose of the API key.
5. Expand each area to select the desired level of access for this key.
You can select from the list of existing Roles, or you can select Custom to set the permissions on a more granular level. You can select
multiple roles.
7. Copy the API key, and then click Close. This value represents your unique Authorization:{key}.
CAUTION:
You can't view the API Key again after you complete this step, so ensure that you copy it before closing the notification.
2. Note your corresponding ID number. This value represents the x-xdr-auth-id:{key_id} token.
The following examples vary depending on the type of key you select.
You can test authentication with Advanced API keys using the provided Python 3 example. With Standard API keys, use either the cURL example or the Python 3 example. Don't forget to replace the example variables with your unique API key, API key ID, and FQDN.
After you verify authentication, you can begin making API calls.
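As a minimal sketch of an authenticated request with a Standard API key in Python 3 (the endpoint path and body below are placeholders, not actual API calls; see the Cortex XSOAR API Reference Guide for real ones):

    import requests

    api_key = "YOUR_API_KEY"       # the Authorization:{key} value you copied
    api_key_id = "YOUR_KEY_ID"     # the x-xdr-auth-id:{key_id} value
    fqdn = "mytenant.example.com"  # hypothetical tenant FQDN

    # URI format per this guide: https://api-{fqdn}/xsoar/{name of api}/{name of call}/
    url = f"https://api-{fqdn}/xsoar/some-api/some-call/"  # placeholder endpoint

    response = requests.post(
        url,
        headers={
            "Authorization": api_key,
            "x-xdr-auth-id": api_key_id,
            "Content-Type": "application/json",
        },
        json={},  # the request body depends on the specific call
    )
    print(response.status_code)

A 2xx status code indicates the headers authenticated successfully; a 401 typically indicates an invalid key or key ID.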
Cortex XSOAR uses telemetry to collect specific usage data. The data is analyzed and used to improve Cortex XSOAR.
Cortex XSOAR uses telemetry to collect specific usage data. This data is analyzed and used to improve Cortex XSOAR and to identify common
usage to help drive the product roadmap.
You can limit or turn off telemetry, except for essential information according to your license type. For more information, see Configure server
settings.
Keyboard shortcuts to navigate and manage playbooks, scripts, CLI, and incident pages.
The following keyboard shortcuts enable you to quickly navigate and manage Cortex XSOAR.
Action: Select task(s)
Mac: Shift + drag with the mouse | Windows: Shift + drag with the mouse | Where: Playbooks (edit mode)

Action: Open the Markdown Box and the Toolbar in CLI
Mac: Command-M | Windows: Ctrl + M | Where: CLI area in all pages

Action: Show or hide the CLI
Mac: Option-F | Windows: Alt + F | Where: CLI area in all pages
Cortex XSOAR's support and End-of-Life (EoL) policies depend on your version.
Cortex XSOAR On-prem versions are generally available for upgrade every 3 months.
Engines
Cortex XSOAR maintains backward compatibility with N-2 engine versions. For example, when Cortex XSOAR 8.6 is GA and deployed, we have
backward compatibility for Cortex XSOAR engine versions 8.4, 8.5, and 8.6. For more information about upgrading your engine, see Upgrade an
engine.
Also, make sure that the engine version is not EoL. For example, if you are running Cortex XSOAR 8.6, but 8.4 is EoL, the 8.4 engine is not supported.
NOTE:
It is highly recommended to run engines on the latest version. New features, performance improvements, and bug fixes are only provided on the
tenant's version. For example, if the tenant is running on 8.6 but the engine is running on 8.4 and a bug is found, the fix for the bug will require
you to upgrade the engine to 8.6.
Feature: My Incidents
Includes your favorites, incidents you own, and incidents you have participated in.

Feature: Dashboards & Reports
Dashboards include visualized data, including Cortex XSOAR incident, indicator, and system data, displayed for a rolling, relative time frame. Dashboards enable you to track metrics, analyze trends that appear in your Cortex XSOAR data, and identify areas of concern. Dashboards can be customized with widgets that focus on the data points most relevant to your organization.
Reports also contain visualized data, but can be run for a specific time frame and automatically sent via email to internal or external stakeholders.
Feature: Incidents
On the Incidents page, you can search for and interact with incidents that have been ingested from third-party integrations or manually created in Cortex XSOAR.
Incidents enable you to organize your investigation and response work. Each incident is a self-documenting IR workbench
where you can view incident details in a custom layout, run scripts and playbooks on the incident, create notes, tag
evidence items, and more.
Feature: Threat Intel (Indicators)
The Threat Intel page displays a table or summary view of all indicators.

NOTE:
If you do not have a TIM license, the page is titled Indicators. Most Threat Intel features are available only with a Cortex XSOAR Threat Intelligence license.
Indicators: Indicators database. Search, review, and interact with indicators, including IPs, domains, URLs, and hashes. Research threats and correlate indicators of compromise across multiple incidents. Track indicator properties, such as their verdict, and add tags to apply your own indicator classification and grouping logic.
Sample Analysis (TIM license only): View detailed file sample analysis results from PANW WildFire. Conduct in-depth
research and analysis of file sample behaviors and characteristics based on WildFire’s sandboxed detonation of the
file.
Sessions & Submissions (TIM license only): For users of PANW firewalls, WildFire, Cortex XDR, Prisma SaaS,
and/or Prisma Access, search and view firewall sessions and file sample submission data from these products.
Correlate file hashes observed in firewall sessions or submitted through other PANW products with hashes in Cortex
XSOAR.
Threat Intel Reports (TIM license only): Build and share rich threat intelligence reports. Share threat intelligence
reports with stakeholders either within or outside of Cortex XSOAR.
Feature: Playbooks
On the Playbooks page, you can browse, create, and customize Cortex XSOAR playbooks, which are workflows that link together ordered response steps including scripts, manual tasks, and communication tasks.
Playbooks enable you to standardize and orchestrate your IR processes. A playbook helps ensure users follow a consistent
response process, automates mundane response tasks, ties together your different IR tools, and gathers all relevant
incident context and enrichment data in one centralized place.
NOTE:
You can copy/paste tasks from one playbook to another by using keyboard shortcuts.
Feature: Scripts
On the Scripts page, you can browse, create, and customize Python, PowerShell, and JavaScript scripts for use in Cortex XSOAR. View the code for out-of-the-box scripts in order to troubleshoot, better understand, or build upon them. You can create custom scripts to extend Cortex XSOAR’s functionality to achieve your automation goals.
Feature: Jobs
Jobs allow you to schedule playbooks to run on a recurring basis, either at a specific time or triggered by new indicators ingested from a feed integration. With jobs, you can automate actions you would normally take on a recurring basis, such as compiling malicious indicators and sending them to the SOC for verification before they are blocked.
Feature: Marketplace
The Cortex Marketplace provides access to hundreds of integrations that extend the functionality of Cortex XSOAR and allow communication with third-party services. Includes the following:
Browse: The central location for searching and installing Cortex XSOAR content, including playbooks, integrations,
and scripts.
Installed content packs: View and manage your installed Cortex XSOAR content packs.
Contributions: Contribute content that you have created, including playbooks, integrations, and scripts.
Deployment Wizard: The Deployment Wizard significantly reduces the time required to set up your use case. It guides you through the process of setting up your content pack for your specific use case. Relevant for phishing and malware content packs.
Cortex Gateway: Cortex Gateway allows you to activate new tenants and view and manage existing tenants and
tenants available for activation that are allocated to your Customer Support Portal account.
Cortex XSOAR License: View information about the licenses, expiry dates, and the number of licensed and active
users.
Management Audit Logs: View and export a historical audit trail of user actions taken in Cortex XSOAR.
Feature: Tenant Navigator
If you have more than one Customer Support Portal account, you can view and pivot to all the tenants that you have access to by clicking Tenant Navigator. In the Tenant Navigator, you can do the following:
The currently chosen tenant is marked by a green Active Session label. The tenants are grouped according to
Customer Support Portal accounts.
If there are more than 5 tenants, a search option is available. If there are more than 5 tenants within a specific
account, a list of tenants is available for that Customer Support Portal account.
NOTE:
If you do not have more than one account, the Tenant Navigator is unavailable.
Learn about Cortex XSOAR multi-tenant deployments that provide data segregation while enabling you to manage multiple tenants from a main
tenant.
Cortex XSOAR multi-tenant is designed for managed security service providers (MSSPs) that require strict data segregation, but also need the
flexibility to share and manage critical security practices across tenants.
NOTE:
This multi-tenant module should be read in conjunction with the Cortex XSOAR on-prem documentation, as most of the features apply to multi-
tenant.
Option: Multi-tenant for MSSPs
A multi-tenant deployment enables MSSPs to manage multiple tenants and maintain complete data separation if needed. You can centrally manage resources and reporting from the main tenant, push custom content to one or more tenants, search across incidents from multiple child tenants, and run commands across multiple child tenants, without exposing any data across tenants.
An MSSP has a pool of analysts in a central SOC. The analysts operate at the main and child tenants. Each customer is a tenant, and the data for each child tenant is stored separately. Specific content for individual tenants is created on the tenant level, and content common to multiple tenants is pushed from the main tenant. The MSSP can provide customers with direct access to their child tenant, on a read-only or read/write basis.

Option: Multi-tenant for Enterprise
For most large enterprises, we recommend Cortex XSOAR Enterprise with RBAC implementation since this deployment can accomplish the majority of data segregation requirements.
However, in some cases, you may want to use a multi-tenant deployment where you have a large enterprise with multiple divisions but with one centralized SOC managing those divisions.
We encourage you to consult with the Cortex XSOAR Customer Success team to discuss your business use case.
A large enterprise has acquired multiple business divisions in different geographic regions. Each division has a separate database, but the enterprise maintains one centralized SOC to manage all incidents.
NOTE:
Data is not easily shared between tenants. For example, collaborating on an incident requires extra steps, such as
setting up mirroring between tenants using the XSOAR Mirroring integration from the XSOAR Mirroring content
pack. For more information, see XSOAR Mirroring integration.
Tenants can't change definitions that were set on the main tenant, such as playbooks.
Multi-tenant architecture
Multi-tenancy architecture is based on the platform’s ability to run separate instances (processes and data) of Cortex XSOAR, linking each child
tenant to a main tenant. Each deployment consists of a main tenant and child tenants.
Component: Main Tenant
The main tenant, also referred to as the parent tenant, is used to access and administer your environment. Analysts with permissions can view and edit child tenant data directly from the main tenant or can easily switch to work directly on the child tenant. Content configured on the main tenant can be shared with some or all tenants, using propagation labels.
The main tenant communicates over a secure SSL channel. If the child tenant has a self-signed certificate, you can use it for secure communication instead of SSL. For more information, see Use a signed certificate instead of SSL.

Component: Child tenant
A child tenant is an instance of Cortex XSOAR that serves an end customer, such as the customer of an MSSP, and is associated with the main tenant. Each tenant has customer-specific data, such as indicators, incidents, and layouts, which are stored separately.
NOTE:
The multi-tenant license includes a main tenant. You can install as many child tenants as you need using the installer.
Engines
Engines are installed on a remote network and act as proxies, which enable you to access remote networks. They enable communication between
the Main Tenant and third-party integrations, which might be located in a different part of the network or be blocked through a firewall. Engines can
be installed on their own or as part of an engine group, which distributes the load from an integration, or several integrations, between multiple
engines.
In multi-tenant deployments, engines are often used to enable the network connectivity between an MSSP’s network and the end customer’s local
network. The engines are installed within the customer’s networks (normally on a local virtual machine situated either in the user's DMZ or the
security management network) and are programmed to communicate directly with the main tenant. Once the communication starts, a bi-directional
tunnel is created between the MSSP and the customer’s network, allowing the MSSP to connect to the customer’s relevant resources (for
example, AD, mail server, and firewall management server).
Multi-tenant deployment
Multi-tenancy enables you to install the main tenant and child tenants from the Cortex Gateway and then manage the child tenants from the main
tenant.
In the main tenant (Main Account), you can see all incidents and indicators across all child tenants. You can create integrations and scripts for use
across multiple child tenants, run commands across multiple child tenants, and switch easily between tenant environments.
Cortex XSOAR provides complete data segregation between customers in a multi-tenant deployment, and no incident or indicator data is stored
on the main tenant. Each child tenant runs separately, and the data separation meets data privacy standards and compliance requirements.
In a multi-tenant deployment, content is either created or modified at the main tenant level and pushed to tenants or is created within individual
child tenants. Content packs are always installed on the main tenant and pushed to child tenants. You can define propagation labels per child
tenant, which allows you to selectively push content to the required child tenants.
Comprehensive RBAC for analysts and customer accounts enables different levels of access for MSSP and customer admins. RBAC is used at
the main tenant level and the child tenant level to ensure the correct access.
In Cortex XSOAR on-prem MSSP, the users and roles of the child tenant are inherited from the user group set up in the Main Tenant. In the user
group from the main tenant, the Available Tenants include the list of child tenants that are paired with the main tenant.
These are the options for managing roles and user groups:
Roles, users, and user groups inherited from the main tenant cannot be edited in the child tenant.
When logging into a child tenant with a user and password configured in the main tenant, you cannot update the password in the User Details tab of the User Preferences settings. All fields are disabled.
Users created in child tenants, regardless of the linked user group, can only access the child tenant they were created in (they can assume a user group or a role propagated from the main tenant).
If you are using single sign-on, the mapping between your SSO provider and Cortex XSOAR is done through user groups.
SSO authentication configuration is not distributed to child tenants from the main tenant. This should be configured per child tenant.
Learn how to install, pair, and manage parent and child tenants in a multi-tenant deployment.
This section describes how to get up and running with Cortex XSOAR multi-tenant, including how to install, pair main and child tenants, and
manage content for Cortex XSOAR multi-tenant.
Abstract
We recommend that you review the following steps to successfully deploy and onboard Cortex XSOAR.
This checklist enables you to get up and running with a multi-tenant for MSSP deployment. After onboarding, you should configure Cortex XSOAR to suit your needs. For more information, see Configure Cortex XSOAR.
Step 1. Install Cortex XSOAR for multi-tenant: Download the image from the Cortex Gateway and install Cortex XSOAR.
Step 2. Pair child tenant to main tenant: You must pair the parent and child tenant in order to log in to the child tenant.
Step 3. Set up an engine (optional): Use an engine for load balancing and proxies.
Step 4. Set up users and roles: Set up users, roles, user groups, and user authentication.
Step 5. Install and configure content: Install content packs and configure integrations for your use cases. Relevant for all environments.
Abstract
Learn how to install Cortex XSOAR On-prem, including system requirements, and adding a license.
To install Cortex XSOAR multi-tenant, you need to log in to Cortex Gateway, which is a portal for downloading the relevant image file and license. The same image is applicable for both the main tenant and child tenants. When installing the tenants, set up the main tenant before the child tenants.
IMPORTANT:
We recommend configuring SSO for the main tenant and the child tenant. If the main tenant is configured with SSO and the child tenant is not,
you will not be able to log into the child tenant. Accessing the child tenant will only be possible from the main tenant.
NOTE:
If you have a child tenant in your deployment, which does not require large volumes of data ingestion, you can use the extra-small sizing
specification for them to optimize your hardware resource allocation.
Add DNS records that point the following host names to the cluster IP address.
Cluster FQDN: The Cortex XSOAR DNS name for accessing the UI. For example, xsoar.mycompany.com.
API-FQDN: The Cortex XSOAR DNS name that is mapped to the API IP address. For example, api-xsoar.mycompany.com.
ext-FQDN: The Cortex XSOAR DNS name that is mapped to the external IP address. For example, ext-xsoar.mycompany.com.
1. From Cortex Gateway, in the Available for Activation section, use the serial number to locate the tenant to download.
b. If you want to use a production and development tenant with a private remote repository, select Dev.
If you don't select it now, you can install a development tenant later.
d. Depending on the image file and the platform you want to deploy on, do one of the following:
IMPORTANT:
When installing Cortex XSOAR on your virtual machine, from the textual UI, in the Installation Mode field, you must select Parent
Tenant. For more information, see Task 6. Install Cortex XSOAR on your VM for your installation platform.
NOTE:
The License page is only available on the main tenant, not the child tenant.
NOTE:
You are not restricted to using the platform installed on the production tenant. For example, if you have downloaded an OVA file and
installed the VM on AWS in the production tenant, you can install the VM on OCI in the development tenant.
For more information about setting up a remote repository, see Set up a private remote repository.
a. On the machine where you want to install the child tenant, install the image file you downloaded in step 2c.
You are not restricted to using the platform installed on the Main Tenant. For example, if you have downloaded an OVA file and
installed the VM on AWS in the Main Tenant, you can install the VM on OCI in the Child Tenant.
b. When installing Cortex XSOAR on your virtual machine, from the textual UI, in the Installation Mode field, select Child Tenant. For
more information, see Task 6. Install Cortex XSOAR on your VM.
5. After completing the installation of both the main tenant and child tenants, you can pair the child tenant to the main tenant.
Abstract
Learn how to pair the child tenant from the main tenant.
After installation of the main tenant and child tenants, pairing is required to facilitate a multi-tenant system. Tenant pairing enables resource
optimization between the tenants, enhanced security by providing isolation and granular access control to the child tenants, and flexible
distribution of content.
To pair the tenants, you must have a network connection, the child tenant URL and the child pairing token. In the main tenant, go to the Tenant
Management page to pair the child tenant. When pairing is established, you can begin to add users, roles, and distribute content. If the tenants are
not paired, the tenants cannot communicate, and content is not updated or distributed to the child tenants.
1. Log in to the Cortex XSOAR child tenant. When logging into the child tenant for the first time, a pairing token is generated for pairing the
child tenant to the main tenant:
Click Copy Pairing Details to copy the child URL and pairing token to paste into the main tenant.
2. Log in to the Cortex XSOAR main tenant and select Settings → Configurations → Tenant Management.
Last Sync: The timestamp of when the parent tenant last made contact with the child tenant.
Propagation Labels: Either All or custom propagation labels are listed for syncing content to the child tenant.
3. Click Pair new tenant. Paste the details from the child tenant:
Cortex XSOAR sends a Request for Pairing to the specified child tenant.
In the Tenant Management page, the status of the pairing request is shown.
If a child tenant disconnects from the main tenant, a banner at the top of the page shows that a child tenant is disconnected. If this happens,
re-pair and sync the content of the child tenant from the Tenant Management page.
Abstract
Install engines on tenants in a Cortex XSOAR multi-tenant deployment. Configure firewall to allow communication between engine and tenant.
Engines created on child tenants use a different encryption handshake for each child tenant and connect back to the child tenant through the main
tenant.
a. On the main tenant, go to Settings & Info → Settings → Integrations → Engines, and select the engine.
d. If you want to allow the use of the engine for tenant-specific integration instances, select Allow tenants to use this engine for custom
integration instances.
If you do not select this option, the engine can only be used with integration instances that were assigned to the engine on the main
tenant level and were propagated to tenants.
e. Go to Settings & Info → Settings → Tenant Management, and Sync your selected tenant(s).
3. Verify that the engine is connected, by going to Settings & Info → Settings → Integrations → Engines.
Ensure that the engine machine can communicate with the main tenant. You can use Telnet or any similar tool to check that the engine has access to the main tenant before you install it. If there is a firewall, you may need to allow access from the machine that hosts the engine so that it can communicate back on port 443 (or any other port the main host may use), or set an ANY ANY rule.
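For example, from the engine machine you might run a quick connection test (the hostname is hypothetical):

    telnet main-tenant.example.com 443

A successful connection indicates that the required port is open between the engine machine and the main tenant.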
Abstract
Create user groups and roles, manage users in the main tenant, and authenticate users using SAML 2.0 in a multi-tenant deployment.
Before setting up users and roles in Cortex XSOAR multi-tenant, the child tenants should be paired with the main tenant. If the child tenant is not
paired with the main tenant, the users and roles are not added to the child tenant.
The users and roles of the child tenant are inherited from the user group set up in the main tenant. In the user group from the main tenant, the
Available Tenants include the list of child tenants that are paired with the main tenant.
NOTE:
Users, roles, and user groups are synced from the main tenant to the child tenants every 3 minutes.
When you create users in the main tenant, a user gains access to a child tenant only after that child tenant is selected in a user group where the user is defined.
NOTE:
The child tenant cannot update or delete the user that was inherited from the main tenant.
When logging into the child tenant with user credentials defined in the main tenant, you cannot update the password in User → User Preferences from the User Details page.
IMPORTANT:
The users created in the child tenant can only access the child tenant they were created in.
IMPORTANT:
Users can only access the child tenants after being added to a user group that includes the child tenants.
3. Repeat the above steps for any other users you want to add, if they have the same role, user group, or no role.
You cannot select different roles and user groups for multiple users.
NOTE:
Users created on a child tenant can’t be assigned to a user group or role that was set up in the main tenant.
Upload a file
NOTE:
The file must include at least one row containing an email address, first name, and last name.
You cannot select different roles and user groups for each user. If you want different roles and user groups for each set of users, upload separate files.
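For example, a minimal upload file could contain rows such as the following (illustrative values only; the exact column names and order depend on your Cortex XSOAR version):

jdoe@example.com,Jane,Doe
asmith@example.com,Alex,Smith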
If you have set up a mail integration, users will receive a link to access Cortex XSOAR. When accessing the link, users need to set a password, after which they can log in.
c. If you have not already done so, add roles and user groups to users.
When you create roles in the main tenant, a role is activated in a child tenant only after that child tenant is selected in a user group where the role is defined.
NOTE:
The child tenant cannot update or delete the role that was inherited from the main tenant.
The main tenant and the child tenant cannot define the same roles. Each role must be unique.
IMPORTANT:
The roles created in the child tenant are only accessible from the child tenant they were created in.
TIP:
We recommend making a copy of out-of-the-box roles and editing the copies, rather than creating new roles, to avoid missing any
important permissions.
b. In the Components tab, add the permissions as required. For more information, see Role-based permissions.
Define dashboards
e. You can create user groups and add roles to them (recommended), assign roles directly to users after they have been added, or both.
Users are assigned roles and permissions either by being assigned a role directly or by being assigned membership in one or more user
groups. A user group can only be assigned to a single role, but users can be added to multiple groups if they require multiple roles. You can also
nest groups to achieve the same effect. Users who have multiple roles through either method will receive the highest level of access based on the
combination of their roles.
On the User Groups page, you can create a new user group for several different system users or groups. You can see information including the
details of all user groups, the roles, nested groups, IdP groups (SAML), and when the group was created/updated.
You can also right-click in the table to edit, save as a new group, remove (delete) a group, and copy text to the clipboard.
IMPORTANT:
In order for users in the Main Tenant to access the child tenants, they need to be assigned to a user group that has access to the child tenant.
User groups created on the Main Tenant cannot be edited or deleted from the child tenants.
a. To create a new user group for several different system users or groups, click New Group, and add the following parameters:
Parameter Description
Role Select the group role associated with this user group. You can only have a single role designated per group.
Users Select the users you want to belong to this user group.
NOTE:
If users have been created locally, but you want them to access the tenant through SSO only, skip this field and add only SAML group mapping after SSO is set up; otherwise, users can access the tenant both through their username and password and through SSO.
If you have not yet created any users, skip this field and add them later. See Set up authentication.
Nested Groups Lists any nested groups associated with this user group. If you have an existing group, you can add a nested group.
User groups can include multiple users and nested groups; nested groups inherit the permissions of their parent user groups. The user group receives the highest level of permission from the combination.
For example:
If you add Group A as a nested group in Group B, Group A inherits Group B's permissions (Tier-1 and Tier-2
permissions).
SAML Group Mapping Maps the SAML group membership to this user group. For example, you have defined a Cortex XSOAR Admins group. You need to name this group exactly as it appears in Okta.
NOTE:
When using Azure AD for SSO, the SAML group mapping needs to be provided using the group object ID
(GUID) and not the group name.
If you have not set up SSO in your tenant, skip this field and add it later. After you have added it, follow the
procedure relevant to your IdP. For example, see Task 6. Map SAML Group Memberships to Cortex XSOAR
User Groups.
Available Tenants (Only available in Main Tenant) Displays the list of child tenants that are paired with the main tenant. Users and roles in the child tenant are updated from the main tenant only when the user group includes the child tenant and the role and user defined in the main tenant.
NOTE:
User groups created on the Main Tenant, cannot be edited or deleted from the child tenants.
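For the SAML Group Mapping parameter above, an illustrative example (values are placeholders, not taken from this guide): an Okta-backed mapping uses the exact group name, such as Cortex XSOAR Admins, whereas an Azure AD mapping uses the group object ID, such as 9f1c2e4a-7b3d-4f6e-8a2b-1c5d7e9f0a3b, rather than the group name.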
Abstract
Install and configure content when onboarding Cortex XSOAR. This step applies to Multi-tenant and MSSP environments.
Content Description
Integrations Third-party tools and services that the Cortex XSOAR platform works with to orchestrate and automate SOC operations.
You can trigger events from these integrations that become incidents in Cortex XSOAR. After the incidents are created,
you can run playbooks on these incidents to enrich them with information from other products in your system.
Playbooks You can automate many security processes, including handling investigations and managing tickets and security responses that were previously handled manually. Playbooks enable you to organize and document security monitoring, orchestration, and response activities. When an incident is ingested, a playbook can run automatically to process it.
Dashboards, reports, and widgets Dashboards and reports consist of visualized data powered by fully customizable widgets, which enable you to analyze data from inside or outside Cortex XSOAR in different formats such as graphs, pie charts, or text. Reports allow you to share similar data outside of Cortex XSOAR via email. Reports can be scheduled to run at a specific time to capture data where the start/end time is important.
Classifiers and mappers Classification determines the type of incident/indicator that is created for events ingested from a specific integration. You create a classifier and define that classifier in an integration. Mappers map the fields from your third-party integration to the fields that you defined in your incident/indicator layouts.
Incident types, fields, and layouts All incidents that are ingested into Cortex XSOAR are assigned an incident type when they are classified. Each incident type has a unique set of data that is relevant to that specific incident type. Fields and layouts ensure that you see information that is relevant to the incident type.
Indicator types, fields, and layouts Indicators are categorized by indicator type, which determines the indicator layout and fields that are displayed and which scripts are run on indicators of that type.
Scripts Perform a specific action, and consist of commands associated with an integration. Write scripts in either Python or JavaScript. Scripts are used as part of tasks, which are used in playbooks and commands in the War Room.
Content is organized into content packs to support specific security orchestration use cases, which are either preinstalled or downloaded from
Marketplace. Content packs are created by Palo Alto Networks, technology partners, contributors, and customers.
After downloading and installing content packs, you can start customizing the content to suit your use case. For example, although Cortex XSOAR comes with a Mail Sender integration already configured, you may want to set up your own Mail Sender integration, such as EWS.
Further information
To set up your use case using the deployment wizard, see Set up your use case with the Deployment Wizard.
Manage the child tenants and their content from the main tenant.
After you have set up and configured Cortex XSOAR multi-tenant you can manage the following:
Manage investigations: From the main tenant, you can see all tenants' incidents and indicators. You do not need to switch between tabs or
screens to view data for child tenants. You can also pivot to any child tenant, subject to permission for those tenants.
Run a command on multiple tenants: Run a batch command on the main tenant for all child tenants.
In the main tenant, you can pivot to a child tenant from specific pages, such as Incidents, and Dashboards. For example, on the Dashboards
page, click Main Tenant and then click the child tenant you want to navigate to.
In Cortex XSOAR, you can use the Tenant Management page for the following options:
Pair the main to a child tenant. In Pair new tenant, add the child tenant URL, the pairing token copied from the child tenant, and the tenant
name, which must be unique.
Unpair a connected child tenant from the main tenant. The status of the child tenant changes to Unpaired. If you click the child tenant link from the main tenant, you are taken to the Pair with main screen of the child tenant. You might need to log in to the child tenant again to access the Pair with main screen.
NOTE:
You cannot use a child tenant unless it is paired to the main tenant.
You can click Remove to remove the tenant from the list of tenants in Tenant Management. If you want to pair the child tenant back to the
main tenant after it's been removed, you must Pair new tenant. The tenant name and tenant URL remain the same.
Select Sync to sync content from the main tenant to the selected child tenant. You can also sync content to all tenants by selecting Sync all
tenants.
In case of a potential security issue, you can disconnect a child tenant from the main tenant. This option revokes the valid pairing token and generates a new token for the child tenant. From the child tenant, go to Settings → System → Security Settings and, under Revoke access to tenant, click Revoke. The pairing token remains active for 24 hours before a new token is generated. The status of the child tenant in the main tenant updates to Unauthorized. To re-pair the child tenant, from the child tenant go to Settings → System → Security Settings and, under Pair with main tenant, click Copy Pairing Token. In the main tenant, select the child tenant, click Re-Pair Token, and paste the pairing token. The status is updated to Paired.
If a child tenant is not available, the connection between the main and child tenant becomes Disconnected. Content is propagated to the
child tenant once the child tenant is reconnected to the main tenant and Sync is selected.
Abstract
Content is pushed from the main tenant to child tenants by applying corresponding propagation labels to content and child tenants.
Content (including integrations) can be configured on the main tenant or child tenants.
If creating content on the Main Tenant, you can push that content to child tenants. Usually, if the content applies to all child tenants, it should be
configured on the main tenant and pushed to the child tenants. In some cases, you may need to configure an integration on the child tenant only.
For example, the end-user has the information needed to configure a specific integration but does not want that information stored on the main
tenant. Also, any integration that fetches incidents or indicators (feeds) must be configured on the child tenant, since incidents are not stored on
the main tenant.
Content dependencies
Content that is synced from the main tenant includes not just the content item but also its dependencies. In Cortex XSOAR, there are multiple layers of dependency relationships. For example, a classifier depends on an incident type, an incident type depends on a layout, layouts depend on fields, and fields depend on scripts.
For a basic example of content dependencies, see the Phishing - Generic v3 playbook, which contains 43 scripts. The scripts are dependencies of
the playbook, which needs them to execute properly. You can view playbook dependencies under the Propagation Labels field in the playbook
Settings.
When syncing content from the main tenant to child tenants, content includes these dependencies.
NOTE:
Content dependencies are calculated recursively, so that if, for example, Playbook A uses Playbook B (dependency), which in turn uses scripts
C and D (dependencies), all of the dependencies (Playbook B and scripts C and D) will be included along with Playbook A.
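The recursive calculation can be pictured with a short Python sketch (illustrative only; the dependency map and function are hypothetical, not product code):

def dependency_closure(item, deps, seen=None):
    # Recursively collect an item and everything it depends on
    if seen is None:
        seen = set()
    if item in seen:
        return seen
    seen.add(item)
    for dep in deps.get(item, []):
        dependency_closure(dep, deps, seen)
    return seen

# Hypothetical dependency map: Playbook A uses Playbook B,
# which in turn uses Script C and Script D
deps = {"Playbook A": ["Playbook B"], "Playbook B": ["Script C", "Script D"]}
print(dependency_closure("Playbook A", deps))
# {'Playbook A', 'Playbook B', 'Script C', 'Script D'} (set order may vary)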
Propagation labels
When syncing content from the main tenant, you can use propagation labels to decide what content to push and which child tenant you want to
push content to. You can add propagation labels to the following:
Content items
Child tenants
TIP:
We recommend that you first apply propagation labels to your child tenants and then add the corresponding labels to the content items that you
want to sync to the child tenants.
For a content item to be synced to a child tenant, both the content and child tenant must have the same propagation label. For example, if you
want Playbook ABC to sync to Tenant 123, they both need to have the same propagation label, such as Premium. Content is pushed to tenants
by matching propagation labels.
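The matching rule can be summarized in a short Python sketch (a simplified illustration, not the product's actual implementation; the function and variable names are hypothetical):

def should_sync(content_labels, tenant_labels):
    # Content with the 'all' label syncs to every tenant;
    # otherwise the content and tenant must share at least one label.
    # Note: dependencies of matched content sync regardless of their
    # own labels (see the dependency discussion above).
    if "all" in content_labels:
        return True
    return bool(set(content_labels) & set(tenant_labels))

print(should_sync({"Premium"}, {"Premium", "EMEA"}))  # True: shared label
print(should_sync({"test1"}, {"test"}))               # False: no shared label
print(should_sync(set(), {"test"}))                   # False: unlabeled content never syncs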
When creating or editing content, you can add the following propagation labels for syncing content to a child tenant:
All Content items with the all label are synced to all child tenants, regardless of whether the child tenants have labels. This
is the default label for content items.
Custom Add custom labels by typing a label name in the Propagation Labels field when adding or editing a content item or
when selecting the child tenant and clicking Propagation Labels on the Tenant Management page.
For more information about adding propagation labels to content, see Add propagation labels to content.
If an integration has the same settings for multiple child tenants, you can configure the integration on the main tenant and propagate it to multiple
child tenants. For more information, see Add propagation labels to a child tenant.
NOTE:
If a content item does not have any labels, it will not be synced to any child tenants. If a child tenant does not have any labels, only content
items with the all propagation label will sync to it.
If there is no propagation label on your content, for example, a script or playbook, but it is a dependency of a package that you propagate to a
tenant, the unlabeled content is still synced to the tenant.
If the content includes dependencies, these dependencies appear during the sync process, even if their propagation labels don’t match that of
the tenant, as long as the labels of the parent content match the child tenant labels.
When using a remote repository with a multi-tenant deployment, the remote repository must be configured and a machine must be set as the
development environment, before you can view propagation labels. For more information, see Manage content using a remote repository.
Example 30.
The following example demonstrates how propagation labels work with content dependencies.
The playbook has a test propagation label, which matches the child tenant's label, but the scripts contained within the playbook have a
propagation label of test1, which differs from that of the playbook.
Even though the script propagation label does not match that of the tenant, the content is still propagated to tenants during the sync process.
Abstract
Apply propagation labels to control which content is synced from the main tenant to child tenants in a Cortex XSOAR multi-tenant deployment.
You can add propagation labels when creating a new content item or when editing an existing item.
TIP:
We recommend that you first apply propagation labels to your child tenants and then add the corresponding labels to the content items that you
want to sync to the tenants.
Scripts
Evidence fields
Pre-process rules
Lists
Widgets
Dashboards
NOTE:
When installing a content pack from the Marketplace, the default propagation label is set to all. If you want to change the propagation label,
after installation, go to the INSTALLED CONTENT PACKS tab on the Marketplace page and click the propagation button for the content pack. If
a content item is part of a content pack and is not specifically labeled, it inherits the content pack’s propagation labels. If labels are specified, it
propagates according to those labels.
For a non-content item such as an integration instance, if you want to propagate the instance, you need to apply propagation labels both to the
integration and to the integration instance. If a tenant does not have the integration installed, the instance will not be propagated even if the
propagation label exists both on the main tenant and child tenant.
If you want to create new propagation labels or add existing ones, ensure that you have the required permissions. For more information, see Role-based permissions.
1. Go to the content item that you want to add a propagation label to.
2. In the Propagation Labels field, add the relevant labels by either selecting an existing label or typing a new label. After typing a new label
and pressing Enter, the label is available immediately for use. If you instead keep the default all label, the content syncs to all child
tenants. For example, when editing or creating a playbook, in the PLAYBOOK SETTINGS section, in the Propagation Labels field, add the
label as required.
Abstract
Add propagation labels to an existing child tenant in a Cortex XSOAR multi-tenant deployment to control which content items sync.
The propagation labels that you add to the child tenant determine which content items sync to the tenant. This information is intended for existing
child tenants.
If you want to create new propagation labels or add existing ones, ensure that you have the necessary permissions. For more information, see
Role-based permissions.
TIP:
We recommend that you first apply propagation labels to your child tenants and then add the corresponding labels to the content items that you
want to sync to the child tenants.
2. Select the tenant that you want to add propagation labels to and click Propagation Labels.
Abstract
The content that you sync from the main tenant to the child tenants might add, override, or remove content on the child tenants, as described in the following table.
NOTE:
A child tenant might be disconnected when content is synced. The Cortex XSOAR On-prem main tenant displays the exact reason why the child tenant is disconnected and unable to receive updates. We recommend reviewing the Tenant Management page on the main tenant to resolve the issue.
Option Description
Add New content items that do not currently exist on the child tenants will be added.
Override For content items that are being pushed in the sync operation and that already exist on the child tenants, the sync operation overrides the existing content on the child tenants.
Remove For content items that were removed from the main tenant and which already exist on the child tenants, the sync operation
removes the existing content from the child tenants.
You should review each content item and its dependencies before syncing the content. You have the option to remove items before executing the
sync operation for a single tenant.
Ensure that your user roles have view or view/edit permission to sync to child tenants. For more information, see Role-based permissions.
From the Tenant Management page on the Main Tenant, you can click the link of a child tenant to access that child tenant.
If you select one tenant, you can review which content will sync. If you select two or more tenants, you can't review the content before syncing.
To review the content before syncing to a child tenant, select a child tenant, and click Sync.
1. Review all content affected by the sync operation in the ADD, OVERRIDE, and REMOVE tabs.
2. If there are playbooks listed in the OVERRIDE tab, select or clear the checkbox to Override playbook inputs in the child tenant.
3. If the Run on field has changed in the script, select or clear the Overwrite script run-on checkbox to override this field in the child tenant.
To sync content to child tenants without manual review, select two or more child tenants and then click Sync.
This option automatically adds new content, updates existing content, and removes outdated content on the child tenants. If there are playbooks and scripts, select or clear the checkbox to Override playbook inputs and script run-on settings in the child tenant.
4. Click Sync.
NOTE:
If you sync a content item from the main tenant to a child tenant, and a content item with that same name already exists on the child tenant, the
content on the child tenant is overwritten. This applies to integrations, fields, incident types, and Threat Intel report types.
Abstract
Add propagation labels to a child tenant in a Cortex XSOAR multi-tenant deployment using remote repositories, to control which content items
sync.
If you are working with remote repositories, and want to use selective propagation to add propagation labels to content, you need to follow the
steps described in this task.
Before you begin, if you haven't done so already, set up a remote repository, as described in Content management in Cortex XSOAR.
NOTE:
Propagation labels added to content on your development tenant appear when pushing content to production.
a. Install the content that is pushed from the development tenant. For more information, see Install content on a production tenant.
b. Confirm that the propagation labels were added to the content and that the labels are available for use, by going to Settings & Info →
Settings → Tenant Management.
c. Add propagation labels to child tenants. For more information, see Add propagation labels to a child tenant.
d. Sync content to child tenants. For more information, see Sync content to child tenants.
On the main tenant, you can create and make changes to content such as dashboards, incidents, and indicators, and propagate content to child
tenants. You can view data from all your child tenants or pivot to each tenant to take certain actions.
On the Incidents page, you can view and take action on incidents across all tenants. You can do the following:
Action Description
Investigate an incident When clicking an incident, you pivot to the child tenant where you take action on the incident. You can view a detailed summary, take action on the incident, add evidence, view related incidents, etc.
Edit an incident Edit system fields such as name, owner, severity, and custom fields. When you save the changes, they are propagated to the child tenant.
Run a command Sometimes you may need to run a command across all tenants.
Export an incident You can export to a CSV file. By default, the CSV file is generated in UTF8 format.
NOTE:
Although you can't investigate incidents directly, you can pivot to the incident on the child tenant by clicking the incident. You can also go to the
child tenant's incident page by clicking main tenant (top left of the window) and selecting the relevant child tenant.
By default, the Incidents page displays open incidents (from all child tenants) in the last seven days. You can filter this by changing the date and
selecting the relevant tenant.
Users can be added to the incident investigation in the child tenant from the main tenant or from the child tenant directly. When viewing a list of users, they are separated into child tenant users and main tenant users.
NOTE:
You will see a list of users, separated according to USERS and MAIN TENANT USERS. If you access the child tenant directly and not via the
main tenant, you are considered a child tenant user (under USERS).
You can add main and child tenant users to the investigation and in other places, which enables bilateral communication between the main and child tenants. You can do the following:
Update tasks
You can change the To-Do task assignee or change the owner when completing a task.
When you type the user's name, you can see whether they are from the main or child tenant. The user receives a system email to investigate.
When mentioning a user in the War Room, the user receives a system email regardless of whether they are a child or main tenant user.
In the Actions tab, you can copy the incident URL in the main/child tenant, so users can directly link to the main/child tenant. For example, when
accessing the incident from the main tenant, you may want an end-user's input into the incident you are investigating. Copy the URL and send it to
the user via email or Slack. The user opens the link and can start investigating.
NOTE:
Depending on where the link is copied from, users access the link either in the child tenant directly or from the child tenant via the main tenant.
Abstract
Run a command on incidents residing on multiple tenants in a Cortex XSOAR multi-tenant deployment.
In some cases, you might need to run a command across multiple tenants. For example, you might want to enrich certain IOCs across all child
tenants.
From the main tenant, you can batch-run a command on incidents from different child tenants. Running a command at the main tenant runs it
locally on each child tenant.
If the command doesn't exist on a particular tenant, or if the user running the command from the Main Tenant doesn't have the correct permissions, the command execution fails and the output is written to the incident's War Room. You will not see the error in the main tenant.
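For example (assuming an integration that implements the standard ip reputation command is configured on each tenant), selecting incidents from several child tenants and running a command such as !ip ip=8.8.8.8 executes it separately in each selected incident's War Room on its child tenant.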
1. On the Incidents page in the Main Tenant, select one or more incidents.
From the main tenant, on the Threat Intel page, you can see the following tabs:
Indicators
The Threat Intel page shows all indicators and TIM reports across all child tenants.
NOTE:
If you don't have a TIM license, you can only view the Indicators tab.
Although you can't investigate indicators directly, you can pivot to the indicator on the child tenant by clicking the indicator. You can also go to the
child tenant's indicator page by clicking main tenant (top left of the window) and selecting the relevant child tenant.
By default, the Indicators page displays indicators (from all child tenants) from the last seven days. You can filter this by changing the date and selecting the relevant tenant.
Action Description
Export CSV Export the selected indicators to a CSV file. By default, the CSV file is generated in UTF8 format. Administrator permission is required to update server configurations, including changing the format. See Export incidents and indicators to CSV using the UTF8-BOM format.
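As general background on the two formats (standard encoding behavior, not Cortex XSOAR-specific code), UTF8-BOM differs from plain UTF8 only by a byte-order mark at the start of the file, which some spreadsheet applications use to detect the encoding:

# Plain UTF-8 vs. UTF-8 with BOM (general Python behavior)
text = "name,value\nexample,1\n"
with open("plain.csv", "w", encoding="utf-8") as f:
    f.write(text)
with open("bom.csv", "w", encoding="utf-8-sig") as f:  # prepends the \ufeff BOM
    f.write(text)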