Testing IT security mechanisms to measure and validate their efficacy is still a nascent practice, but we're already getting there.

I joined Core Security Technologies just under two months ago to head up Core’s Engineering team.

While I already had a feel for just how popular CORE IMPACT is, I was completely blown away by what I saw at Black Hat USA 2009 – the reception at Core’s booth, the depth and effectiveness with which everyone is using IMPACT Pro, the level of engagement at the customer panels, and of course, the Party. Net of it all, it’s a great product that everyone loves.

Which makes one wonder: so what’s up with a new head of engineering?

As it turns out, several folks at Core have been working on and thinking about an interesting phenomenon – namely, that most technologies, IT and otherwise, usually have a robust testing science behind their development. Airplane wing designs are tested in wind tunnels; new car designs are put through a plethora of reliability, durability, efficiency, and aerodynamic tests; software applications are run through a battery of tests before they go live in production. The testing of security technologies, on the other hand, remains fairly embryonic and very specialized – we have some silos of testing, and then, of course, we have penetration testing.

The question of how we can formalize IT security testing such that it can be applied uniformly across security technologies has been a subject of ongoing research at Core. The progress made by our product and research teams has brought us to a point where we’re now ready to work on a new class of products, which I will loosely call “Security Test Controls” – evidently, and thankfully, more work has been done on defining the underlying technology than on identifying a catchy name for it.

IT security officers typically divide their world into two halves – the set of defenses they need to implement to protect their IT networks, and a set of controls that serve to tell them how well those defenses are working. From this, they make decisions on which areas need more defense, and in which areas they might already be overinvested. In the end, the quality of their organizations’ overall IT security depends directly on the quality of the data produced by these controls.

The recent indictment of Albert Gonzalez is a good example of where this traditional model has fallen short. The root issue can be traced to inadequate controls to measure the security of Web applications – in this case, against SQL injection attacks.

The basic control we all have today tells us only that every Web app is potentially vulnerable, and a typical enterprise has 5,000 of these apps. Now what? This is just one example of how an inadequate control set fails to provide risk information that is adequate and accurate enough to be actionable, which ultimately leads to failure – sometimes the kind that grabs Wall Street Journal headlines.
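To make that distinction concrete, here is a minimal, hypothetical sketch in Python – not a description of Core’s actual technology – of what “actionable” can mean: rather than merely flagging an application as potentially vulnerable, a test control can send a benign boolean-based SQL injection probe and report whether the application actually behaves differently. The URL, parameter name, and payloads below are illustrative assumptions only.

```python
# Hypothetical illustration only: a benign boolean-based SQL injection probe.
# It compares the application's responses to a TRUE predicate and a FALSE
# predicate; a difference suggests (but does not prove) the parameter is
# injectable, which is a far more actionable finding than "potentially vulnerable".

import urllib.parse
import urllib.request


def probe_boolean_sqli(base_url: str, param: str, value: str) -> bool:
    """Return True if responses to injected TRUE vs. FALSE predicates differ."""
    true_payload = f"{value}' AND '1'='1"
    false_payload = f"{value}' AND '1'='2"

    def fetch(payload: str) -> bytes:
        query = urllib.parse.urlencode({param: payload})
        with urllib.request.urlopen(f"{base_url}?{query}", timeout=10) as resp:
            return resp.read()

    return fetch(true_payload) != fetch(false_payload)


if __name__ == "__main__":
    # Hypothetical target used only for illustration.
    if probe_boolean_sqli("http://test.example.com/item", "id", "42"):
        print("Parameter 'id' reacts to injected predicates: actionable finding")
    else:
        print("No behavioral difference observed: lower priority")
```

A validated finding of this kind can be prioritized and remediated; a list of 5,000 applications that are all “potentially vulnerable” cannot.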

In an upcoming whitepaper, I’ll share more details of the new products and the underlying technologies that will address this area of need. As you will see, the technology is based on much of the engine that is already in IMPACT Pro itself.

And that’s why I am here – to build out a whole new set of products based on the foundation of the technology in IMPACT Pro.

-Milan Shah, Senior Vice President of Engineering

 
