Securing Fast and Slow: From Reactive Incident Response to Proactive Attack Surface Reduction
Over the past year, more enterprise customers have been adopting Cyberpion as their External Attack Surface Management solution. Since then, a distinct, bimodal pattern has emerged in the workflows and processes that triage and resolve the findings our platform generates. The pattern is a “fast” workflow for addressing critical issues that need to be fixed immediately and a second, “slow” workflow for less critical issues. The slow workflow is where more planning and grooming takes place and where whole classes of findings, rather than individual ones, are addressed at a time. We now believe this pattern is quite efficient and effective in reducing our customers’ attack surface, and through this post we want to raise awareness and spark discussion about it.
Fast Workflow for Critical Vulnerabilities
The need for a fast path, or workflow, for handling critical alerts is obvious and well understood. It’s there so that security operations and incident response teams will handle individual critical findings – misconfigurations, vulnerabilities or breaches – as soon as they’re discovered. In many organizations this workflow is the sole mode of operation.
A finding (or security event) is attended to and remediated only when it reaches the top of the pile in terms of priority. Sometimes there are escalation rules based on how long a ticket has been open. In theory, this means any finding will eventually reach the top. In practice, however, in many if not most organizations, low-severity, low-urgency alerts rarely, if ever, get triaged or resolved. And, as in human cognition and reasoning, sticking to just this one funnel model brings a loss of efficiency and, ultimately, increased risk.
Slow Workflow for Systemic Issues
In the aggregate, similar low-severity findings spread across many assets can point to a risk equal to or greater than any single critical finding. They may also point to a hidden common cause – perhaps a missing process – that, if attended to and corrected, could prevent future occurrences of the issue.
Specifically for our customers, the Cyberpion platform now automatically analyzes low-level security findings and other metadata about customer assets in order to synthesize and maintain a list of assets it recommends removing outright, rather than fixing. The platform evaluates various indicators to assess both the use of an asset and its level of disrepair. Typically, it finds assets with various issues that have accrued over time – old or obsolete components, expired certificates, weak SSL configurations – which, considered in isolation, would never be prioritized high enough to be deemed critical. This automated evaluation also takes into account how connected these assets are to other organizational assets – the less connected they are, the more likely they will be recommended for removal. Thus, the assets it recommends for removal tend to be somewhat peripheral and isolated from the organization’s main online sites.
More often than not, the assets on the removal list turn out to be ones that, for one reason or another, have been orphaned or otherwise neglected. They are not known to or monitored by the organization’s IT security teams. What the customer gets in the end is a machine-curated list of assets (FQDNs) with the action “remove”, i.e., do not make these assets accessible (or even resolvable) from the internet.
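To make the idea concrete, here is a minimal sketch of this kind of evaluation. It is not Cyberpion’s actual scoring model (which is not public); the indicator names, weights, and thresholds are all hypothetical, chosen only to illustrate how accrued low-severity issues plus low connectedness can flag an asset for removal.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    fqdn: str
    issues: list        # low-severity findings accrued over time
    linked_assets: int  # how many other organizational assets reference this one

# Hypothetical weights per indicator -- illustrative only.
ISSUE_WEIGHTS = {
    "expired_certificate": 3,
    "obsolete_component": 2,
    "weak_ssl_config": 2,
}

def removal_candidates(assets, disrepair_threshold=4, max_links=1):
    """Flag peripheral assets whose accumulated low-severity issues
    suggest neglect rather than active use."""
    candidates = []
    for asset in assets:
        # Sum issue weights to estimate the asset's level of disrepair.
        disrepair = sum(ISSUE_WEIGHTS.get(i, 1) for i in asset.issues)
        # Only poorly connected, high-disrepair assets are recommended for removal.
        if disrepair >= disrepair_threshold and asset.linked_assets <= max_links:
            candidates.append((asset.fqdn, "remove"))
    return candidates

assets = [
    Asset("old-promo.example.com", ["expired_certificate", "weak_ssl_config"], 0),
    Asset("www.example.com", ["weak_ssl_config"], 25),
]
print(removal_candidates(assets))  # → [('old-promo.example.com', 'remove')]
```

The key design point is that neither signal alone is enough: a single expired certificate on a well-connected main site is a fix, while the same issues piling up on an isolated, unreferenced host are evidence of an orphaned asset.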
The “slow path” is the workflow in which these removal recommendations are attended to. For agile teams, this means 2-4 week sprints, with a chunk of the list processed in each.
The Impact of Consistent Attack Surface Management
We have customers that have used this process to remove as much as 12% of the FQDNs that were resolvable from the internet. Not only did removing these assets significantly decrease their attack surface, it also reduced their total number of findings, especially the long tail of low-severity issues. Because some of the removal indicators our system uses are also evaluated by various security rating companies, a byproduct of this process has been a marked increase in these customers’ security ratings.
Beyond the removal of orphaned assets, the slow-path workflow has been used successfully to analyze the whole set of security alerts. The analysis finds patterns that point to a common cause, which can then be addressed and rectified, proactively preventing future breaches.
How does your organization attend to low-severity findings? Does it attend to them at all? Does it analyze the data to find patterns? If so, which ones?