Core CTO Ivan Arce offers his reaction to some of the newest ideas around secure development proposed by longtime industry colleague Michal Zalewski, taking a deeper look at how and why we view IT risk the way we do, and tracing those views back to some time-honored concepts.

Last week I found out that Michal Zalewski is working on his second book.  That’s great news!  “Silence on the Wire” is one of my favorite infosec books – I highly recommend reading it – and I have a lot of respect for Michal’s work and innovative ideas so I’m eagerly waiting for the new book to come out.

I learned about this when I read an excerpt published on his blog that was later also published by ZDNet in the Zero Day blog (here).  I was doubly happy when I read his critique of something I wrote almost a decade ago in a not-too-serious attempt to produce a one-line definition of “Secure Code” as opposed to “Reliable Code”:

Reliable: something that does everything it is specified to do.

Secure: something that does everything it is specified to do and nothing else.
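To make the distinction concrete, here is a minimal sketch of my own (it was not part of the original mailing list post): two versions of a hypothetical routine that parses a port number from a configuration string. Both do everything the one-line spec asks for; only one of them does nothing else.

```python
# Spec: return the integer value of the configuration string, or raise
# ValueError. (Hypothetical example; names and spec are invented.)

def parse_port_reliable_and_secure(value: str) -> int:
    # Does everything the spec calls for, and nothing else.
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

def parse_port_reliable_but_insecure(value: str) -> int:
    # Also returns the right answer for every valid input, so it passes
    # spec-based tests... but eval() will happily do a great deal *besides*
    # what was specified: a "config" string such as
    #   "__import__('os').system('touch /tmp/pwned') or 8080"
    # is accepted too.
    port = eval(value)  # the "something else" the definition warns about
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port
```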

In 2000, I posted these straw-man definitions to the Secure Programming mailing list within the context of a long, winding thread that included comments from people with a more formal computer science academia viewpoint, as well as from practitioners like Michal and me. Unsurprisingly, my post did not generate a lot of discussion then, but the entire thread grew quite large and, incidentally, elicited a few posts from Michal too.

A decade or two ago, open and often un-moderated mailing lists such as Bugtraq, Secure Programming or Firewall-wizards were fertile ground for debating and sharing information security ideas and experiences, with much of the form and the manners inherited from the USENET newsgroups that thrived in the 80s. The 21st century shifted the medium for sharing and discussing thoughts first to Web forums and then rapidly to blogs – which I consider an insular and asymmetric medium, far from optimal for sharing and discussing ideas.

(Interestingly, according to analyst Rich Mogull, blogs are losing ground to Twitter and similar coprolalia-friendly mechanisms that favor the immediate public passing of anyone and everyone’s brain gas in non-lethal doses of 140 bytes or less… but that’s neither here nor there, so let’s get back on track.)

By the way, it is odd to find Michal and myself, two reluctant bloggers, exchanging one-sided thoughts about a decade-old comment in our respective blog posts rather than doing it old-school on a mailing list, quoting and commenting on each other’s paragraphs one by one. But this is 2010…

Anyway, I didn’t expect anybody to take my one-line definitions of Secure (and Reliable) seriously, but Crispin Cowan (formerly of Immunix, today of Microsoft) liked them so much that he started using them – and giving me credit – whenever he had the chance. Eventually they reached (Fortify Chief Scientist) Brian Chess – one of the mere handful of computer scientists with a streak of pragmatism – who also included them in his book “Secure Programming with Static Analysis”.

However, somewhere along the process “specified to do” transmuted into “intended to do.”

The difference is subtle but relevant. The latter evokes an implicit and vaguely defined purpose while the former demands an explicit specification.

In my original post, I tried to allude to the idea of building and testing software to spec, to bring the notion of security closer to the practical world and further away from the abstractions of formal solutions – knowing full well that it was probably a futile attempt. After all, in the realm of information security, like Michal, I’ve always been a pragmatist rather than an academic.

In his latest work, Michal points out several of the multiple shortcomings of trying to use formal models to define and implement secure systems:

1.     “There is no way to define desirable behavior of a sufficiently complex computer system”

This is indeed the case for any modern-day computer system or application where a multitude of stakeholders seek to define and use the system to serve their own interests. There are simply so many interests, and so many associated traits and behaviors required from the system, that it may not even be possible to enumerate them all. Michal also points out that a number of competing interests may end up settling on a system that is suboptimal for every stakeholder and inherently weaker than it could be against attacks from any non-stakeholder. [My side note here is that would-be attackers should be, but often are not, considered stakeholders.]

An interesting twist in this line of reasoning is the citation of Hardin’s 1968 paper “The Tragedy of the Commons”, which originally discussed the problem of an exponentially increasing population of consumers and a finite pool of consumable resources. In the process, Hardin invoked the ideas of John von Neumann – a pioneer of game theory and one of the founding fathers of the computer – and shook the axiomatic framework of Adam Smith, one of the founding fathers of modern economics.

The reference to Hardin’s paper, applied here, leads to the thought that in the context of information security “the invisible hand of the market” will not necessarily lead to secure systems, an idea that may be controversial in vendor-space.

On the other hand, unbeknownst to Hardin, his paper also provided inspiration to Lawrence Lasker and Walter Parkes, the screenwriters of “WarGames”, my favorite hacker movie. The pair later wrote the script for and produced my second favorite hacker movie, “Sneakers.”

Ultimately, Michal’s first argument simply points out that devising mathematical-logical formal models to define and implement security usually goes awry in the presence of real world economic actors, and that the information security discipline would benefit more from adopting knowledge, practices and experience from other fields such as sociology and economics, rather than seeking purely technical solutions. I agree.

Perhaps it is not a coincidence that John von Neumann laid the foundations of what may help us bridge the gap in the form of game theory. But guess what? Game theory does use and benefit from very formal and precise definitions of its constituents.

For a comprehensive introduction to game theory I highly recommend Roger Myerson’s book “Game Theory: Analysis of Conflict”.

I am a big fan of using game theory in information security for several reasons:

- It implies the definition of the problem as a game where multiple players with different roles and strategies participate.

- It introduces an economic viewpoint and the notion of a utility function, which fits well with current information security trends.

- It provides mathematical and stochastic tools to work with.

- It fits well with the mental framework developed from how I first got involved with computers. That is, by playing games and programming my home computer(s). (I suspect many from my generation share the same origins.)

- It fosters the use of “gaming rhetoric” rather than “warfare rhetoric” in dealing with information security problems. The actors are “players”, adversaries rather than enemies; the rules of the game need not match or resemble those of physical conflict; and the use and abuse of analogies does not necessarily convey fear, uncertainty and doubt.

Unfortunately, game theory does not give us a recipe for building secure systems, and I doubt it ever will, but at least it provides some new toys and tools to think about and to play with.
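To illustrate the vocabulary – players, strategies, utility functions, equilibria – here is a toy attacker/defender game I made up for this post. The strategies and payoff numbers are invented, not drawn from any real model; the point is only how the problem gets framed.

```python
# A toy two-player game: a defender decides whether to patch, an attacker
# decides whether to attempt an exploit. Payoffs are invented for illustration.
from itertools import product

# utilities[(defender_move, attacker_move)] = (defender_payoff, attacker_payoff)
utilities = {
    ("patch",  "exploit"): (-1,  1),   # patching costs a little; the attack fizzles
    ("patch",  "refrain"): (-1,  0),   # defender pays the patching cost for nothing
    ("ignore", "exploit"): (-10, 5),   # unpatched system successfully exploited
    ("ignore", "refrain"): (0,   0),   # nothing happens
}

defender_moves = ["patch", "ignore"]
attacker_moves = ["exploit", "refrain"]

def is_pure_nash_equilibrium(d, a):
    """A profile is a pure Nash equilibrium if neither player can improve
    their own payoff by unilaterally switching strategies."""
    d_payoff, a_payoff = utilities[(d, a)]
    defender_happy = all(utilities[(alt, a)][0] <= d_payoff for alt in defender_moves)
    attacker_happy = all(utilities[(d, alt)][1] <= a_payoff for alt in attacker_moves)
    return defender_happy and attacker_happy

for d, a in product(defender_moves, attacker_moves):
    if is_pure_nash_equilibrium(d, a):
        print(f"equilibrium: defender={d}, attacker={a}, payoffs={utilities[(d, a)]}")
```

With these particular numbers, attempting an exploit dominates for the attacker and patching is the defender’s best response, so the game settles on (patch, exploit) – a crude but familiar story, and one you can reason about with utility functions rather than war metaphors.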

2.     Wishful thinking does not automatically map to formal constraints

Attempting to subject a modern system to the constraints of a formal model will yield a specification so abstract and generic, leaving so much leeway for implementers to screw up security (a.k.a. “The Curse of the RFC”), that it ultimately defeats the purpose of using formal constraints. On the other hand, a very precise and fine-grained formal description of a system in each and every one of its possible states isn’t feasible, and mathematical models capable of dealing with loosely defined or fuzzy constraints are themselves complex enough to have their validity questioned.
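As a hypothetical illustration of that leeway (my own, not Michal’s): take the informal requirement “read(user, doc_id) returns a document’s contents only if the user owns it.” Both implementations below satisfy that sentence; the coarse wording simply never mentions the extra information one of them gives away.

```python
# A deliberately coarse "spec" and two implementations that both meet it.
# (All names and data are invented for the example.)

DOCS = {"42": {"owner": "alice", "body": "quarterly numbers"}}

class NotAuthorized(Exception):
    pass

def read_strict(user: str, doc_id: str) -> str:
    # Refuses identically whether the document is missing or merely not ours,
    # so a non-owner learns nothing beyond "no".
    doc = DOCS.get(doc_id)
    if doc is None or doc["owner"] != user:
        raise NotAuthorized()
    return doc["body"]

def read_leaky(user: str, doc_id: str) -> str:
    # Also returns contents only to the owner -- the spec is technically met --
    # but the distinct error messages let anyone probe which doc_ids exist.
    if doc_id not in DOCS:
        raise NotAuthorized("no such document")
    if DOCS[doc_id]["owner"] != user:
        raise NotAuthorized("document exists but you do not own it")
    return DOCS[doc_id]["body"]
```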

This is indeed the conundrum of seeking problems defined in a purely formal way so that purely formal solutions can be devised and subsequently applied. It is hard to contest Michal’s argument and I do not intend to do so, but in favor of the computer science security theorists I’d argue that these days even the most ivory-tower-ridden security expert among them realizes that peppering their models with heuristics and shortcuts, while leaving several loose ends, is acceptable – and in several cases even practical.

Formal modeling does not need to be uniformly applied to an entire system to yield acceptable security, and partial, localized solutions, although imperfect, are not devoid of value. However, the use of the term “acceptable” does introduce a relativistic and subjective dimension to information security, muddying formal purism with the appearance of possibly irrational and non-intelligent (in the game-theoretic sense) subjects – also known as humans.

Ultimately, information security is about humans and human relations through the use of technology. If you’re not yet sold on the idea of borrowing from the social sciences to study information security problems, you should check out Miller & Page’s “Complex Adaptive Systems: An Introduction to Computational Models of Social Life”.

3.     Software behavior is very hard to conclusively analyze

Ok, so even assuming that a very precise fine-grained specification is created and a system is built to match it, how are we supposed to verify that the system does actually match the spec?

Here Michal points out that theory has already demonstrated that, for all but the simplest cases, it is impossible or infeasible to prove that a given program will do exactly what the specification calls for, and absolutely nothing else, under each and every possible runtime condition.

The halting problem is often cited in this argument and I suspect that’s not just because it de-references to Alan Turing – another of the founding fathers of computing – but also because it leads to the amusing paradox of showing how a formal model can be used to invalidate the application of formal models to real-world information security.

In the right circles, and with a bit of luck, this argument may lead to a “War of the Formal Modelists” spectacle, whose most recent incarnation ends up rehearsing the “Battle of Dynamic vs. Static Analysts.” It is almost as relevant as the WWF but slightly less savage than Vale Tudo.

Rice’s theorem is a closely related concept and, in my role as a member of Core’s Security Advisories team, it is one of my favorite counter-arguments for vendors that systematically state – without providing any supporting evidence or technical analysis – that a given bug reported to them is not exploitable.
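For the curious, here is a sketch of the standard reduction lurking behind both the halting-problem and Rice’s-theorem arguments. The oracle and helper functions below are placeholders I made up purely for illustration – the whole point is that no such general-purpose decider can exist.

```python
# Assume, for the sake of contradiction, a perfect oracle that decides whether
# a program can ever reach its "vulnerable" state. (Both stubs are hypothetical.)

def trigger_vulnerable_state():
    # Stands in for "the bug fires": memory corruption, auth bypass, etc.
    raise RuntimeError("vulnerable state reached")

def is_exploitable(program) -> bool:
    # The hypothetical, always-correct analyzer the vendor implicitly claims to be.
    raise NotImplementedError("Rice's theorem: no such general decider exists")

def build_probe(mystery_program, mystery_input):
    """Build a program that reaches the vulnerable state if and only if
    mystery_program halts on mystery_input."""
    def probe():
        mystery_program(mystery_input)   # loops forever if it never halts...
        trigger_vulnerable_state()       # ...so this line runs exactly when it halts
    return probe

def halts(mystery_program, mystery_input) -> bool:
    # If is_exploitable() really worked on every program, this would decide
    # the halting problem -- which Turing showed is impossible.
    return is_exploitable(build_probe(mystery_program, mystery_input))
```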

In sum, it is hard to argue against what Michal shows to be the shortcomings of attempts to apply formal models and methods in the construction of secure computing systems that need to operate in the real world.

I’m a pragmatist and I’m in total agreement with him on that.

However, one of the key assumptions in Michal’s reasoning is that formal models – clear and unambiguous definitions and precise, fine-grained specifications – are sought for the purpose of stipulating how to build secure systems, or of formally proving that a given system is indeed secure.  The obvious counterpoint is that we don’t need perfect security or perfect solutions and that reasonably imperfect ones will do.  Michal already knows this, and in fact the rest of his blog post deals with the blind alleys we walk into in our search for imperfect yet acceptable solutions.

An alternative tack would be to look at how formal models and highly descriptive specifications can aid in proving that a system is not secure, or in guiding security probing (empirical testing).

This, of course, just gives me an excuse to talk about the scientific – or rather epistemological – aspects of the information security discipline, but before I delve into that, and for the sake of maintaining consistency with Michal’s section ordering in his blog post, let me comment on risk management next.

(Check back soon for my follow-up post on how these same concepts apply to Risk Management, Science and Art…)

– Ivan Arce, Chief Technology Officer
