Avoiding a false sense of security
Cyber threat detection and response is a well-established area of cyber security, with a multitude of product and service types and definitions. Yet rather than make it easier for organisations to identify what they need, this often contributes to industry noise and hype, creating a marketplace that can be challenging to navigate for buyers who are uncertain of what they need, or why they need it.
This is most acutely felt by organisations that have not historically been top of the cyber criminal’s hitlist. Organisations operating in sectors without a tradition of high-profile cyber compromise typically possess a less robust cyber security posture.
The absence of prolonged and targeted threats has conditioned many organisations to see cyber security through a vulnerability-centric lens. Where they do have a monitoring solution in place, it is often a case of ‘set and forget’, using an out-of-the-box product or service.
Why do detection and response services fall short?
With many variants and methods of threat detection, the precise strengths and weaknesses of security monitoring services vary. Some offer high visibility and rich telemetry but do not identify malicious actions in real time – delaying detection, but generally increasing fidelity – while others generate automated detections for every action, which typically increases noise and false positives. Services also prioritise detections earlier or later in the Kill Chain, with attendant trade-offs: earlier means more time to react; later means greater certainty and fidelity. Finally, they can place greater emphasis on detection than prevention – an action is identified and alerted on, but nothing is done to stop it – or vice versa.
It is important to have the capability to both detect and prevent. One without the other either means detection is inconsequential and the attack continues, or the attacker is free to try again until they inevitably succeed.
Organisations tend to rely on frameworks like MITRE ATT&CK to benchmark the overall detection and prevention capability of a vendor’s security monitoring service – often specifically EDR/MDR. MITRE’s catalogue of Tactics, Techniques and Procedures (TTPs) is effectively a taxonomy of the actions an attacker can perform at the different stages of an attack’s lifecycle. Many services are therefore evaluated against this framework – either using a subset of TTPs attributed to a specific threat actor, as in the recent MITRE Engenuity assessment, which mimicked techniques associated with Wizard Spider and Sandworm, or against the Framework as a whole.
The primary limitation of typical evaluations is that they fail to represent the real-world environment that the service is likely to be deployed to, being conducted on an unrepresentative sample rig, or even just a handful of endpoints. This means that:
The majority of the TTPs will not be relevant to the specific environment being assessed, making a vast proportion of the detection logic redundant. Reporting ‘gaps’ in this sense can lead organisations to invest in areas that add no value. For example, identifying your detection capability for Mac as a weakness is not relevant if you don’t have any Macs. This seems obvious, but for some, the allure of MITRE Bingo is too strong to resist.
The sample environment fails to accurately represent the complexity and context of a network, or account for the quirks that can exist for one organisation, which could be unthinkable for another. For example, having an unusual subset of users all with local admin privileges, which is more common than we’d like to believe.
A number of characteristics of the service cannot be evaluated in sufficient detail. For example, the quality of managed service elements, such as how quickly the vendor reacted to and investigated an alert, or how effectively actions could be linked together as part of an attack chain within the detection logic.
Ultimately, detection and prevention of discrete actions is only the beginning of responding to and containing a security incident, which cannot be gleaned from this type of evaluation.
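The scoping point above can be made concrete. The sketch below is illustrative only: it filters a tiny, hand-picked subset of real ATT&CK technique IDs by the platforms actually present in an example estate, so that “coverage gaps” on absent platforms (the Mac example) are excluded from the evaluation. In practice you would pull the full technique dataset from MITRE’s published ATT&CK data rather than hard-code it.

```python
# Illustrative sketch: scope an ATT&CK-based evaluation to the platforms
# you actually run, so "gaps" on absent platforms don't skew the results.
# This tiny technique list is hand-picked for demonstration; the real
# dataset is published by MITRE (STIX bundles / TAXII server).

techniques = [
    {"id": "T1059.001", "name": "PowerShell", "platforms": {"Windows"}},
    {"id": "T1059.004", "name": "Unix Shell", "platforms": {"Linux", "macOS"}},
    {"id": "T1547.001", "name": "Registry Run Keys / Startup Folder", "platforms": {"Windows"}},
    {"id": "T1553.001", "name": "Gatekeeper Bypass", "platforms": {"macOS"}},
]

# Example estate with no Macs, per the scenario in the text.
estate_platforms = {"Windows", "Linux"}

relevant = [t for t in techniques if t["platforms"] & estate_platforms]
irrelevant = [t for t in techniques if not (t["platforms"] & estate_platforms)]

print("Worth testing:", [t["id"] for t in relevant])
print("Adds no value:", [t["id"] for t in irrelevant])
```

With no Macs in the estate, the Gatekeeper Bypass technique drops out of scope entirely – a reported “gap” there would be exactly the kind of redundant finding described above.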
The result of these uniform evaluations is that most providers perform quite well. A quick Google search surfaces plenty of organisations claiming a 100% score in the technical evaluation.
But in reality, we know that these vendors do not all perform exactly the same.
Finding the solution that is right for your organisation
Investing in a security monitoring service is an important and often sizeable purchase, and one that is essential to get right. But many organisations get it wrong, and find themselves locked in to a service provider that fails to deliver the silver bullet the marketing materials and sales rep promised.
Given the magnitude of the purchase, it’s worth taking the time to properly evaluate potential suppliers without relying on a third-party assessment, such as MITRE Engenuity alone. If you’re planning such an evaluation, we recommend you consider the following guidelines:
Ensure the test environment closely replicates real life, warts and all. This includes your outdated servers and user groups with excessive privileges. If these issues are known and risk-accepted, for whatever reason, then the vendor will have to work within that reality and ensure that detection logic or manual investigations are applied to close that gap.
This is likely to be a non-standard undertaking for the vendor, and it will put many outside their comfort zone. It’s important to be fair and not expect such a POC for free. That said, vendors who are reluctant to comply even on fair terms should be removed from consideration – it’s probably an indicator that they know they will perform worse in a setting outside their control.
Use representative attack paths and ensure that attacks are followed to completion. It is important to evaluate detection chronologically, allowing for flexibility in terms of the TTPs leveraged, to ensure the attack is representative of an adaptive human attacker. This will highlight errors in detection logic and attack chaining where the vendors are relying on linking generic detections as opposed to actually understanding the environment they are defending.
Assess wider metrics outside of detection and prevention. While these capability areas are sometimes less measurable, even anecdotal insight into factors like how accessible and communicative the vendor was, the quality and accuracy of the alerts provided, and how quickly an alert was reacted to and responded to, can prove invaluable in selecting the right vendor for you.
Organisations without internal security personnel dedicated to monitoring should avoid product-centric offerings that lack a proper managed component. When you spot things going wrong, you don’t want your security partner to be absent, or insisting that they can’t see evidence of a compromise.
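Those “wider metrics” can be captured with very little tooling. The sketch below uses entirely hypothetical timestamps from a POC attack run to derive two comparable figures per attack step: how long the vendor took to alert, and how long until a human analyst made contact. The field names and data are assumptions for illustration, not any vendor’s schema.

```python
# Illustrative sketch (hypothetical data): turn timestamps captured during
# a POC attack run into comparable response metrics per vendor.
from datetime import datetime

def minutes_between(start: str, end: str) -> float:
    """Elapsed minutes between two ISO-8601 timestamps."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 60

# One record per simulated attack step: when the action was executed,
# when the service alerted, and when a human analyst first made contact.
poc_events = [
    {"step": "initial access", "executed": "2024-05-01T09:00:00",
     "alerted": "2024-05-01T09:04:00", "analyst_contact": "2024-05-01T09:30:00"},
    {"step": "lateral movement", "executed": "2024-05-01T10:15:00",
     "alerted": "2024-05-01T10:45:00", "analyst_contact": "2024-05-01T11:40:00"},
]

for e in poc_events:
    ttd = minutes_between(e["executed"], e["alerted"])         # time to detect
    ttr = minutes_between(e["alerted"], e["analyst_contact"])  # time to human response
    print(f"{e['step']}: detected in {ttd:.0f} min, analyst contact after {ttr:.0f} min")
```

Recording these figures consistently across vendors during a POC turns anecdotal impressions (“they seemed slow to call us back”) into numbers you can actually compare.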
The providers that rank highest are not those with the shiniest products, or necessarily those with ‘100% MITRE ATT&CK Framework coverage’. We regularly find significant control gaps that result in the generic MITRE-aligned controls failing when applied to a realistic testing environment – so don’t let a 100% score give you a false sense of security.
The best providers are those who will listen and work with you to ensure the defences they provide are tailored to, and appropriate for, your organisation and network – and who will do something about it when an alert is raised or an issue is identified. Make sure yours won’t let you down when it matters most.