
Top 8 Security Incident Response Tools

  • Carbon Black CB Defense
  • IBM Resilient
  • Carbon Black CB Response
  • FireEye Helix
  • Secureworks Red Cloak Threat Detection and Response
  • SECDO Platform
  • Proofpoint Threat Response
  • D3 Security

  1. The initial setup is very easy. The visibility provided has been great.
  2. The UBA (User Behavior Analytics) is very good. The solution is very easy to use.
  3. Find out what your peers are saying about Carbon Black, IBM, FireEye and others in Security Incident Response. Updated: October 2021.
    542,721 professionals have used our research since 2012.
  4. Probably the most valuable feature of CB Response is its ability to isolate a host and take it off the network, so it's not spreading anything. We have two security operations centers around the globe. When an SOC analyst sees something on an endpoint, they can use Carbon Black Response to isolate that host from the customer's environment and prevent any kind of lateral spread.
  5. It is kind of simple and very easily deployable. You can start working with it very fast. I like that it's easy. It's got the protection set up, and we can see whatever is required. We can write and input our own rules. I think it is good.
  6. The features I have found most valuable are the easy-to-use search capabilities. The dashboards are good. The reports are good. It is also simple from a deployment standpoint - that was easy.
  7. Technical support is great. Palo Alto is extremely helpful and responsive. The ease of deployment is a valuable feature.
  8. It has reduced our manual efforts to remove emails from each user's inbox, and in this case we do not have to ask our IT department or users to do so.

Advice From The Community

Read answers to top Security Incident Response questions. 542,721 professionals have gotten help from our community of experts.
Hi community, I'm working on a document about the Security Operation Center best practices, and I would like to get your inputs about it. Thanks
author avatarShibu Babuchandran
Real User

Hi @Giusel ​,

Some of the best practices that I would suggest are below.

1. The SOC must enable end-to-end network control

Your security operations center protects the enterprise from network threats, but you need to precisely define your network boundaries to achieve this. It is a common misconception that the external network is identical to the public internet, and anything that’s not part of the public internet is safe. CISOs must keep in mind that any third-party network (including and beyond the internet) can be a threat vector.

For modern organizations, API-based app integrations, external device connections via Wi-Fi or Bluetooth, and cloud-shared resources must also come under the definition of external networks.

In the case of internal networks, least privilege access should be your rule of thumb, and no single user should have complete access to sensitive/valuable information. Segregate your internal network into several tiers of access (based on its asset contents), aided by a powerful firewall solution.

2. Pay attention to shadow app discovery

Shadow applications (part of shadow IT) are a growing threat for enterprises. Traditionally, SOCs have restricted software installation on enterprise systems, even if the app came from a trusted source. However, in a remote working world, this becomes a major problem. Remote users could intentionally or unwittingly download malicious applications from the internet, eventually spreading across the entire internal network.

In addition to the firewall, regularly conduct an app discovery exercise to create a full software inventory across the hundreds and thousands of computers on your network. Classify these apps as per their security risks and take action. Also, gain from built-in restrictions that prevent unauthorized users from downloading and installing software on enterprise systems (including servers).
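An app-discovery exercise like the one described boils down to diffing what is actually installed against an approved baseline. A minimal sketch, assuming the inventory lists are illustrative stand-ins for data a real discovery tool would collect:

```python
# Sketch: flag shadow applications by diffing a host's discovered
# software inventory against an approved baseline. App names and the
# baseline are hypothetical examples, not real policy.

APPROVED_BASELINE = {"chrome", "office", "slack", "edr-agent"}

def find_shadow_apps(discovered_inventory):
    """Return apps present on a host but absent from the approved list."""
    return sorted(set(discovered_inventory) - APPROVED_BASELINE)

host_inventory = ["chrome", "office", "torrent-client", "edr-agent"]
print(find_shadow_apps(host_inventory))  # unapproved apps to classify and triage
```

At scale, the same set difference would run per host against a centrally managed baseline, with the results feeding the risk classification step.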

3. Keep a watch on hardware sprawl, even in cloud-first environments

Another myth around the SOC maintenance is that hardware doesn’t fall under its ambit. As most security vectors tend to be software-related (spreading through the cloud or public/private networks), SOCs frequently take a short-sighted view and focus only on software. In reality, hardware sprawl is a risk for every enterprise, adding peripherals like printers, routers, Wi-Fi repeaters, storage endpoints, and other unauthorized components as business needs grow. With each addition comes new security risks.

Make unauthorized hardware connectivity prevention a priority for your SOC. Also, implement processes that restrict employees from copying data for home use or offsite use. If some degree of BYOD is inevitable (as in a WFH scenario), make sure to verify identity through multi-factor authentication. Finally, scan enterprise perimeters for rogue hardware, just like shadow applications, to discover risks on time.

4. Protect SOC logs to aid investigation

Access logs are among your handiest tools when conducting a post-attack forensic analysis. They also help to separate false positives from genuinely suspicious access behavior. SOC managers typically use logging records to assess the four Ws and one H of a security breach: who, what, why, when, and how.

However, the logs themselves can be vulnerable, and their compromise will cripple your ability to assess and respond to any security threat. One of the first things a malicious app will do once it enters your systems is remove any evidence of the attack by rewriting device logs. That's why it is advisable to store access logs in a separate, high-security zone that is not connected to the device itself.

Further, make sure to regularly synchronize the timestamps across all enterprise devices generating logs. A single, synchronized clock will ensure that all devices follow a central time source, allowing access events to be plotted more easily. In case of a breach, you can reconstruct the incident by piecing together logs across various devices.
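With synchronized clocks, reconstructing an incident is essentially a merge-sort of per-device logs on their timestamps. A toy sketch, with hypothetical device names and events:

```python
# Sketch: reconstruct an incident timeline by merging per-device logs
# on their (NTP-synchronized) timestamps. The events are illustrative.
from datetime import datetime

firewall_log = [("2021-10-01T10:02:00", "fw", "blocked outbound to 10.0.0.5")]
endpoint_log = [("2021-10-01T10:01:30", "host-7", "suspicious process start"),
                ("2021-10-01T10:03:10", "host-7", "log tampering attempt")]

def merge_timeline(*logs):
    """Flatten logs from many devices into one time-ordered event stream."""
    events = [e for log in logs for e in log]
    return sorted(events, key=lambda e: datetime.fromisoformat(e[0]))

for ts, device, msg in merge_timeline(firewall_log, endpoint_log):
    print(ts, device, msg)
```

If the clocks drift, the sort order lies, which is exactly why the central time source matters.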

5. Have a contingency plan in place via a robust backup

Assuming the worst-case scenario can be extremely helpful when building an SOC, given the unpredictable and fast-evolving nature of security threats. A big part of this is investing in a backup system that can help restore your digital assets after an attack, even if it can't prevent malicious parties from getting hold of them.

A cloud-based backup system can accelerate data recovery, particularly if a malicious party goes after your in-house backup service.

While no backup strategy is 100% hackerproof, remember the 3-2-1 rule: 3 copies of information, including primary/dynamic/production data and two backups, where one should be stored off-site – e.g., on the cloud. Ensure your production data is protected by strong authentication measures, and your cloud backup is accessible only to a select group of users during worst-case scenarios, like a ransomware attack.
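The 3-2-1 rule is mechanical enough to check automatically. A minimal sketch, where the copy descriptors are illustrative:

```python
# Sketch: validate a backup plan against the 3-2-1 rule described above:
# at least 3 copies, on at least 2 media types, with at least 1 off-site.

def satisfies_3_2_1(copies):
    """copies: list of dicts with 'media' and 'offsite' keys."""
    enough_copies = len(copies) >= 3
    enough_media = len({c["media"] for c in copies}) >= 2
    one_offsite = any(c["offsite"] for c in copies)
    return enough_copies and enough_media and one_offsite

plan = [
    {"media": "disk",  "offsite": False},  # primary/production data
    {"media": "tape",  "offsite": False},  # on-site backup
    {"media": "cloud", "offsite": True},   # off-site backup
]
print(satisfies_3_2_1(plan))  # True
```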

author avatarRobert Cheruiyot

Hi Giusel,

From my little experience, it's always good to have a working plan for how you are going to start setting up a SOC and how you are going to gradually mature it. The primary consideration is the availability of three components: people, technology, and process.

It's much easier to manage the development of a SOC when you do it in bits. Take technical aspects like the SIEM: a SIEM might have components like logs, network, endpoint, and SOAR. In my view, it's not easy to plug in all of these components at once. You could start with a primary component like logs and gradually build from there. It's also good to have a technology and deployment option that works for your business needs.

On people, it's good to have skilled analysts, or else you may not get value for your investment in technology and time. People take different approaches to address the shortage of skilled analysts. Some opt to work with an MSSP jointly with the team they are developing in-house for a set period of time, for the purpose of knowledge transfer.

There should be a clear workflow of activities in case of an incident. What should T1 do before passing the alerts to T2 .. or closing false positive alerts? What are your sources of threat intelligence?

author avatarSteffen Hornung
Real User

Sadly, I can't contribute due to a lack of experience in that field, but I would love to read about your findings.

Hi dear community, Can you explain what an incident response playbook is and the role it plays in SOAR? How do you build an incident response playbook?  Do SOAR solutions come with a pre-defined playbook as a starting point?
author avatarMaged Magdy
Real User


What is an incident response playbook?

An incident response playbook is the set of guidelines, processes, policies, plans, and procedures, along with appropriate oversight of response activities, that an organization should follow to mount a proactive response: quick containment, effective remediation, and an action plan with "what if" scenarios in case a certain cyber incident takes place.

How do you build an incident response playbook?

According to NIST, to build an incident response playbook you need to design a process that contains four main phases:

1- Prepare.

2- Detect and Analyze.

3- Contain, Eradicate and Recover.

4- Post-Incident Activity.

*Reference: NIST Computer Security Incident Handling Guide (SP 800-61)

*Reference: SANS Incident Handler's Handbook
Do SOAR solutions come with a pre-defined playbook as a starting point?

- Sure, most SOAR solutions today come with predefined templates. However, they are a double-edged sword, depending on the organization's cybersecurity awareness and maturity level. Implemented at a low maturity level, they may harm the organization's production and use resources improperly.


author avatarDavid Swift
Real User

Incident response playbooks detail how to act when a threat or incident occurs. PICERL - Preparation, Identification, Containment, Eradication, Recovery, Lessons Learned (from SANS). The playbook outlines what to do at each stage.

Typical SOAR playbooks automate the response to detected threats.

- Create a Ticket to Track the Incident

- Identify the source and target

- Confirm the attack is suspicious (SOC Analyst Lookup, On known blacklist? other events?)

- Contain or Clean the Host (EDR, Patch, Update AV...)

- Block the Known Attacker (on a Firewall, IDS, etc...)

- Disable a Compromised Account

- Notify anyone necessary 

SOAR actions include scripts to set or fire off actions on devices.

A playbook usually has a series of actions when a threat/incident is detected.

Most SOARs include playbooks, but they have to be tailored and customized to the specific devices you have in your environment (Palo Alto Firewall vs. Checkpoint, Cylance vs. McAfee EPO...), Ticketing System integration, SIEM/UEBA threat detection integration...
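The generic response steps listed above can be wired together as an ordered playbook. A minimal sketch; every function here is a hypothetical stub standing in for a real SOAR integration (ticketing, blacklist lookup, EDR, notification), and the names are illustrative:

```python
# Sketch: a SOAR-style playbook as an ordered list of actions over a
# shared incident record. Each step is a stub for a real integration.

def create_ticket(incident):
    incident["ticket"] = "INC-1"      # track the incident in ticketing
    return incident

def confirm_suspicious(incident):
    incident["confirmed"] = True      # stub: blacklist/analyst lookup
    return incident

def contain_host(incident):
    if incident.get("confirmed"):
        incident["contained"] = True  # stub: EDR isolate / clean / patch
    return incident

def notify(incident):
    incident["notified"] = True       # stub: alert the responsible team
    return incident

PLAYBOOK = [create_ticket, confirm_suspicious, contain_host, notify]

def run_playbook(incident):
    for step in PLAYBOOK:
        incident = step(incident)
    return incident

result = run_playbook({"source": "10.0.0.5", "target": "host-7"})
```

The tailoring the comment above mentions (Palo Alto vs. Checkpoint, and so on) would live inside the individual step functions, leaving the playbook order unchanged.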

author avatarRobert Cheruiyot

Hi Rony, 

A playbook automates the gathering of threat intelligence from a myriad of sources. Playbooks ingest alerts from tools like a SIEM and check them against threat intelligence sources like VirusTotal and others in order to get information related to the alert. For example, a playbook can check suspicious domains/IPs against VirusTotal and provide a reputation score for the domain/IP.

Depending on the workflow, the playbook may be configured to close a case if it's a false positive, or to pass the case, together with the threat intelligence gathered, to a SOC analyst for further investigation. This way the playbook reduces the time spent on false-positive alerts. It also saves analysts time by gathering threat intelligence automatically instead of manually.

Be careful of cases where you set alerts to be closed automatically, though. You can try this on the community editions of some SOAR platforms: Splunk Phantom, Siemplify, and others.
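The enrich-then-decide flow described above can be sketched in a few lines. The reputation table and threshold here are hypothetical stand-ins for a real VirusTotal query, not an actual API call:

```python
# Sketch: enrich an alert with a (stubbed) reputation lookup, then
# auto-close it as a false positive or escalate it to an analyst.

REPUTATION = {"evil.example": 92, "benign.example": 3}  # stub intel feed
ESCALATE_THRESHOLD = 50  # pick a threshold that fits your workflow

def triage(alert):
    score = REPUTATION.get(alert["domain"], 0)
    alert["score"] = score
    alert["action"] = "escalate" if score >= ESCALATE_THRESHOLD else "close"
    return alert

print(triage({"domain": "evil.example"})["action"])    # escalate
print(triage({"domain": "benign.example"})["action"])  # close
```

As Robert warns, the auto-close branch deserves the most scrutiny: a bad threshold silently discards real incidents.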

Building a playbook

Magdy has provided solid industry standards for building playbooks. Just to add a little: a playbook mainly has actions and decisions. Actions do something with an alert (like scanning), and based on the results the playbook decides what to do next: close the case, do further scanning with other tools, or pass it to a SOC analyst. This really depends on your workflow.

I am a junior but I love this SOAR thing.

author avatarSimon Thornton
Real User

For a given incident type, it describes a series of actions that can be a mixture of automated and manual steps. When you start, the steps are often manual. As the playbook and confidence in the steps improve, you can start automating.

For example a playbook for a “suspicious email” might read as:

1) check if a case is already open for this user and/or asset; if yes, go to step 3

2) open case and record details

3) extract suspicious attachment

4) generate MD5 and SHA256 hashes

5) submit hashes to Virustotal and record results

6) if 50% (pick your threshold) of AV engines detect the sample skip to step 10

7) forward email attachment to sandbox

8) does the sandbox report indicate suspicious behavior? If yes, escalate to T3

9) inform the user

10) open a ticket to IT to re-template PC or fix

11) when you receive a response from IT about the ticket, then close a SOC ticket with relevant closure details

This is a quick illustration of the steps that could be included, depending on your environment and how far you go.

Each step could be related to different teams.
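Steps 4 to 6 of the playbook above (hashing the attachment and applying a detection-ratio threshold) might look like this. The hashing uses the standard `hashlib` module; `submit_hashes` is a hypothetical stub, not a real VirusTotal call, and its return value is invented for illustration:

```python
# Sketch: hash a suspicious attachment (playbook step 4) and apply the
# 50% AV-detection threshold (step 6). submit_hashes is a stub.
import hashlib

def hash_attachment(data: bytes):
    """Return the MD5 and SHA256 hex digests of the attachment bytes."""
    return hashlib.md5(data).hexdigest(), hashlib.sha256(data).hexdigest()

def submit_hashes(md5, sha256):
    # Hypothetical stand-in for an AV-engine lookup:
    # returns (detections, total engines queried).
    return 36, 60

md5, sha256 = hash_attachment(b"attachment bytes")
hits, engines = submit_hashes(md5, sha256)
needs_reimage = hits / engines >= 0.5  # step 6: skip to the IT re-template step
print(needs_reimage)
```

Automating just this fragment first, and leaving escalation manual, matches Simon's advice to automate steps only as confidence in them grows.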

Security Incident Response Articles

Netanya Carmi
Content Manager
IT Central Station
Oct 14 2021

We receive alerts all day long - alerts about emails, incoming WhatsApp messages and SMSes, posts on social media, etc. At some point we become desensitized to these alerts and stop noticing them anymore - a phenomenon known as "alert fatigue." Seventy percent of a SOC analyst's workday is spent dealing with alerts, so SOC analysts are more at risk of alert fatigue than pretty much anyone else.

SOC analyst and IT Central Station user Geofrey M. says that he receives more than 20,000 alerts a week - and 60% of these are deemed critical. With numbers like these, alerts can easily start piling up and don’t always get dealt with in a timely manner - or sometimes at all - leaving what may be important issues to fall through the cracks.

Alert fatigue can be harmful to your business for a number of reasons. These include:

  1. Ignored alerts - Obviously, when alerts get missed due to alert fatigue, this can lead to damaged customer relationships and overall devastation to your business.
  2. Wasted time - The more time your team spends responding to alerts that are not necessarily critical, the less time they spend doing the other critical tasks they are being paid to do.
  3. Employee burnout - Your staff may, in fact, manage to resolve most of the significant alerts and therefore your customers may not be directly impacted. But the fact remains that the more alerts your employees receive and have to deal with, the less productive they will be.
  4. Psychological effects - The more alerts SOC analysts receive, the more reason they have for concern. Fear that they may have missed something can slow down releases, ultimately impacting customers.

Some of these factors, like the number of alerts that get missed or the amount of wasted time, can be measured. In his article, “Alert Fatigue – A Practical Guide to Managing Alerts,” Itiel Schwartz writes that psychological ramifications, such as how burned out your staff gets, cannot. But in response to his article, Reddit user SU1PHR disagrees, stating that just because there is no quantitative way to measure employee burnout does not mean it can’t be measured. He suggests monitoring employee retention rates and one-to-one meetings at which managers can routinely receive feedback on how their employees are managing. He warns that not doing so can cause “a deadly spiral that will lead to more fatigue, more errors and more missed alerts.”

Several other Reddit users on the same thread mentioned that at their places of employment, alerts get saved and fed into business analytics tools but that, other than being able to say “we are collecting the data,” nothing else ever gets done with them.

So what can be done to minimize alert fatigue and help SOC analysts stay on top of everything that needs to get done?

First of all, it’s important to minimize human error whenever possible. Sometimes engineers inadvertently create a code malfunction or an alert isn’t calibrated properly. Putting better organizational processes in place can help ensure that the people involved in setting the alerts do so appropriately.

But it’s best, when feasible, to remove the human element altogether. This is why IT Central Station user Tshepiso M. points out that it is important to automate wherever possible. Using technology to sort alerts by importance can help ensure peace of mind in your staff by taking some of the burden off of them. The less your employees feel responsible for keeping track of and dealing with alerts, the less you’ll have to worry about psychological effects such as burnout or fear of failure.

One way to increase efficiency, as Geofrey M. points out, is by implementing a SIEM solution. Security information and event management (SIEM) solutions can help prevent alert fatigue by streamlining security. Part of the problem with alerts is the amount of sources from which they originate. Organizations are constantly adding more tools, which makes IT environments increasingly complex. A SIEM solution can become your primary security monitoring tool by consolidating the data streams and integrating unique data sources. It can also take security data from a variety of systems and analyze them, putting them all into context and gleaning new insights from one centralized location.
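The consolidation a SIEM performs can be reduced to a toy normalizer: sources emit events in different shapes, and each is mapped into one common schema for centralized, time-ordered analysis. All field names here are illustrative, not any particular SIEM's schema:

```python
# Sketch: normalize events from two differently-shaped sources into a
# single schema, then merge them into one time-ordered stream.

def normalize_firewall(event):
    return {"source": "firewall", "host": event["src_ip"],
            "action": event["verdict"], "time": event["ts"]}

def normalize_edr(event):
    return {"source": "edr", "host": event["hostname"],
            "action": event["event_type"], "time": event["timestamp"]}

fw_events = [{"src_ip": "10.0.0.5", "verdict": "deny", "ts": 1001}]
edr_events = [{"hostname": "host-7", "event_type": "proc_start",
               "timestamp": 1000}]

# One centralized stream instead of per-tool silos.
stream = sorted([normalize_firewall(e) for e in fw_events] +
                [normalize_edr(e) for e in edr_events],
                key=lambda e: e["time"])
```

Each new tool added to the environment then only needs one more normalizer, rather than a new monitoring workflow.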

Every organization is different, and a SIEM allows you to adapt and build your own nuances into the security alert process for your business. When you are considering deploying a SIEM solution, keep the following things in mind in order to ensure that you are only receiving the notifications you actually need.

  • Consider context - Rather than setting the same alerts for each new asset, take the time to think through each asset’s function and role within the wider context of the environment and adjust the defaults and settings accordingly. This allows for proper prioritization and allows the number of notifications to be reduced significantly.
  • Limit who will receive what alerts - Without a SIEM, every single alert may be sent out to every single admin. But this is rarely necessary. A SIEM will allow you to have different staff members alerted depending on the event or the operating system affected. This reduces redundancy and prevents a buildup of excess alerts over time.
  • Revisit and readjust - Make changes as you go. If your initial configuration leaves you getting alerts you don’t need, you can always lower the priority or filter it out altogether. SIEMs allow you the flexibility to change your settings as needed, maximizing the capabilities of your security tools and freeing up your security team to be available when they are really needed.
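The "limit who receives what" advice above amounts to a routing table keyed on asset context. A minimal sketch, with invented routing rules:

```python
# Sketch: route alerts to the right staff based on the affected asset's
# OS and the alert severity; low-priority alerts are filtered out
# rather than broadcast to every admin. The routes are illustrative.

ROUTES = {("linux", "high"): "linux-oncall",
          ("windows", "high"): "windows-oncall"}

def route_alert(alert):
    if alert["severity"] != "high":
        return None  # filtered: nobody is paged for low-severity noise
    return ROUTES.get((alert["os"], "high"), "soc-queue")

print(route_alert({"os": "linux", "severity": "high"}))  # linux-oncall
print(route_alert({"os": "linux", "severity": "low"}))   # None
```

Revisiting the table as false alarms accumulate is exactly the "revisit and readjust" step: demote a rule or drop it entirely when it keeps producing noise.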


Putting better organizational processes in place in order to minimize human error can help reduce alert fatigue for SOC analysts. But an even better strategy is to try to automate and remove the human element altogether. One great way to do this is to implement a SIEM solution.

