Last week, as Patch Tuesday (which was today) approached, I found myself wondering about the efforts of admins everywhere to understand, test and then apply the patches applicable to their environments.

I wonder if it would be possible to measure the combined effort of the researchers who find and responsibly disclose the vulnerabilities to the vendors, the exploit writers who create commercial-grade exploits for those vulnerabilities, and the admins who secure their machines against them. If we could recycle that energy, how much would we save each month? I am sure it would power a fairly decent data center… and NIST puts the cost of leveraging an automated patching solution to manage 1,000 computers at $165,500 per year (see pages 1-3), which would also buy some fun toys.

That $165,500 doesn’t cover the support costs incurred when a patch breaks an existing application or some functionality of the patched machine (or when an end user assumes that a recent patch must be the reason their favorite site or application no longer works properly).

Let’s examine how a typical scan-and-patch process takes shape. It begins with a network scan: as more and more vulnerabilities are discovered and categorized, they are incorporated into vulnerability scanners, which now carry tens of thousands of signatures and other checks used to identify vulnerabilities. As a result, even a scan of a small network can flag so many possible issues that the results become large and unwieldy. The most common complaint I hear from security professionals who rely on scanners alone to determine the risk on their network is that they cannot tell which results are critical and must be dealt with now, which can wait, and which require no action at all and can be discarded.

When I ask about the chain of events that led them to this place, security pros always share some variation of the following:

  1. Performed a vulnerability scan that yielded a long report spanning many pages
  2. Handed the results to the network team and asked them to fix/patch all the issues
  3. Watched the network team’s patching attempts quickly turn into an exercise in determining whether or not the discovered vulnerabilities were real
  4. Got the results pushed back by the network team, who felt the work was unnecessary and asked for false-positive-free information before they would patch

From here, the security pro will typically negotiate with the network team and end up with an agreement that looks like this (sketched in code just after the list):

  • All critical-level vulnerabilities reported by the scanner are fixed within one month
  • All high-level vulnerabilities reported by the scanner are fixed within three months
  • All medium-level vulnerabilities reported by the scanner are fixed within six months
  • All vulnerabilities below medium level are ignored
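
To make the gap concrete, here is a minimal sketch of what that agreement boils down to, written as a simple Python lookup (the severity labels and deadlines come from the list above; the names are mine, not any scanner’s): the deadline depends on nothing but the scanner’s severity label.

    from datetime import timedelta

    # Severity-only patch agreement: the deadline depends solely on the
    # scanner's severity label; anything below "medium" is simply dropped.
    SEVERITY_SLA = {
        "critical": timedelta(days=30),
        "high": timedelta(days=90),
        "medium": timedelta(days=180),
    }

    def patch_deadline(severity):
        """Return how long the team has to fix a finding, or None if ignored."""
        return SEVERITY_SLA.get(severity.lower())

    print(patch_deadline("critical"))  # 30 days, 0:00:00
    print(patch_deadline("low"))       # None -> never patched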

What that agreement lacks is any type of security intelligence about the vulnerabilities and the risks they bring to the organization. Without that intelligence you end up, as you would expect, making less-than-intelligent decisions about how to reduce the risk those vulnerabilities pose.

Therefore, I have three chief concerns when a security pro tells me they have solved their vulnerability management problem by implementing a priority patching agreement.

1. Criminals will not wait a month to attack

When did “Nothing could happen in a month, right?” become acceptable? We have seen how quickly exploits can be released for known vulnerabilities, and a criminal will move fast to take advantage of a known window of opportunity.

2. Vulnerabilities are not created equal

There’s a mistaken belief that all vulnerabilities of the same level of importance are equal to one another and therefore should be treated as such. Most vulnerability scanners seem to do this with their results (with the exception of the folks at eEye, who have baked security context into their scanners). Each vulnerability should be weighed by the potential impact it has on the risk to the network, and a decision made about the urgency appropriate to that risk. To save time, this should be done in as automated a way as possible.
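
As a rough illustration of that kind of automated weighing (a toy sketch only, with made-up weights and field names rather than any particular product’s scoring model), two findings with identical scanner severity can end up with very different urgency once network context is factored in:

    from dataclasses import dataclass

    @dataclass
    class Finding:
        severity: float           # e.g. CVSS base score, 0-10
        asset_criticality: float  # 0-1: how much the business relies on the host
        service_enabled: bool     # is the vulnerable component actually running?
        internet_exposed: bool    # is the host reachable from the outside?

    def risk_score(finding):
        """Toy weighting: context turns one severity into many urgencies."""
        if not finding.service_enabled:
            return 0.0  # a vulnerability that cannot be reached poses little risk
        score = finding.severity * finding.asset_criticality
        if finding.internet_exposed:
            score *= 1.5
        return score

    # Same scanner severity, very different risk:
    internal_host = Finding(7.5, 0.2, True, False)
    dmz_web_server = Finding(7.5, 1.0, True, True)
    print(risk_score(internal_host))   # 1.5
    print(risk_score(dmz_web_server))  # 11.25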

3. Patching vulnerabilities that pose very little risk

My final gripe is wasting time and resources to patch things that pose little-to-no risk to the environment. I, like some others in IT, am kind of lazy (or do we prefer to call ourselves efficient?) and really dislike performing pointless tasks for the sake of it. Most organizations don’t have enough people spending time implementing security-based initiatives or controls, and wasting time patching something that has already been disabled on the machine just hurts.

 

So what do I propose instead? Add an exploitability value to the decision-making matrix used to calculate the time to patch. There is a reason Microsoft includes that type of rating in its security bulletins. If an IT organization asks itself “Is this vulnerability able to be exploited in my environment right now?”, a “yes” should kick off a documented, fast-tracked patching/remediation process. And when that process ends, exploitability should be tested again to confirm the fix worked.

Working with CISOs and security professionals alike, we have successfully added the following to the top of the patch agreement: “Any proven exploitable vulnerability must be fixed within 24 hours.”
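
A hedged sketch of how that one-line addition changes the earlier lookup (again illustrative Python, not any product’s actual logic): exploitability is checked first, and a proven-exploitable finding gets the 24-hour fast track regardless of its scanner severity.

    from datetime import timedelta

    SEVERITY_SLA = {
        "critical": timedelta(days=30),
        "high": timedelta(days=90),
        "medium": timedelta(days=180),
    }

    def patch_deadline(severity, proven_exploitable):
        """Exploitability trumps severity: proven-exploitable findings get 24 hours."""
        if proven_exploitable:
            return timedelta(hours=24)
        return SEVERITY_SLA.get(severity.lower())

    # A "medium" finding proven exploitable against a replica now outranks
    # an unexploitable "critical" one.
    print(patch_deadline("medium", proven_exploitable=True))     # 1 day, 0:00:00
    print(patch_deadline("critical", proven_exploitable=False))  # 30 days, 0:00:00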

This change is almost always agreed to by the network team because it is based on real information with security intelligence behind it, not just rumor or supposition on the part of the security expert. We typically give the network people a choice between showing the vulnerability being exploited against a replica of a live machine (in either a testing or a disaster recovery environment) or targeting the live machines themselves.

That simple change allows both groups to feel better.

  • The security pro is satisfied after successfully prioritizing vulnerabilities based on risk and security intelligence, and can live with longer lead times to fix less pressing threats.
  • The network folks have been shown that something is exploitable and can justify the time spent on fixing it, be it a network configuration change or a patch.

As organizations continue to evolve their approaches to security, it is exciting to see them spend more time thinking ahead, and less time patching old news. The benefits of security intelligence can quickly add up – maybe to $165,500 in savings per year.

 

- Alex Horan, Senior Product Manager