Generally speaking, there are two steps to using any model: the first involves knowledge of the particular situation; the second, knowledge of the model itself. A useful analogy can be made with raw materials and the tools used to craft them into something: the situation is the raw material, the model or theory is the tool, and the end product is some piece of information. As is to be expected, the quality of the product will depend significantly on the quality of the material, the power of the tools, and the skill of the craftsman who wields them.
The important difference between physical and intellectual work, however, is that intellectual work may produce “tools” that are not useful at all. It might be argued that this difference is not an essential one; there is, after all, no barrier to the manufacture of physical implements that function only in imaginary modes. (One real example of this phenomenon is the production of “fantasy weapons”, e.g. extremely ornate blades that look very impressive but have very little utility as real aids to attack or defense, and were probably never intended to provide such.) That said, intellectual work can produce its tools quite quickly, and some of these tools defy any practical attempt to prove their usefulness definitively. Rather than bring down the ire of any one discipline by making the usual accusations that its basic theories are airy-yet-crude blunders, I’d like to constructively examine the question of what makes for a good, useful theory. Here are four simple criteria to consider:
A good theory makes its inapplicability promptly and unambiguously known. This might be the most important feature a theory can have. There is always a strong temptation to become so enamored of a theory that it becomes difficult to distinguish an elegant demonstration from a completely insubstantial fantasy. Physical chemistry, with its need for a plethora of quick-and-dirty theories, seems quite adept at producing models that state their applicability up front and neatly hand off control to their alternatives when their presuppositions fail. Different models predict significantly, sometimes radically, different behaviors at different spatial scales and under different temperature and pressure conditions. While such a diversity of views might seem cluttered and confusing to someone assimilating the knowledge of the discipline, it is nonetheless quite easy to determine which model applies to a given situation. Assumptions (e.g. “this system behaves as an Ideal Gas”) are clear from the start, and even though they incorporate known and deliberate approximations, these are accepted with an understanding of the imprecisions and their consequences.
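To make that hand-off concrete, here is a minimal Python sketch of an ideal-gas model that announces its own applicability. The function names are mine, and the thresholds in `ideal_gas_applies` are illustrative assumptions, not canonical values; real applicability criteria depend on the gas and the precision required.

```python
R = 8.314  # molar gas constant, J/(mol·K)

def ideal_gas_pressure(n_mol, volume_m3, temp_k):
    """Pressure predicted by the ideal gas law, PV = nRT."""
    return n_mol * R * temp_k / volume_m3

def ideal_gas_applies(temp_k, pressure_pa, critical_temp_k, critical_pressure_pa):
    # A crude applicability gate: trust the ideal-gas model only well
    # above the critical temperature and well below the critical
    # pressure; otherwise, hand off to another model (van der Waals,
    # virial expansion, ...).  The 2x / 0.1x factors are illustrative.
    return temp_k > 2 * critical_temp_k and pressure_pa < 0.1 * critical_pressure_pa
```

For one mole at 273.15 K in 0.0224 m³, `ideal_gas_pressure` comes out near one atmosphere, as expected; the point of the gate is that a caller never has to guess whether that answer deserves trust.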
A good theory approximates objects, not their relationships. An outstanding example of this feature comes from, of all places, political philosophy. John Rawls’s theory of justice (as constructed in his famous book of the same name) proceeds from an extremely idealized view of individual humans and of the origins of social organization, as do virtually all other political-philosophical arguments. “A Theory of Justice”, however, stands out as perhaps the most compelling political argument of the 20th century; Rawls became famous following this work, and virtually every theory that followed was compelled to address his in some form. Nonetheless, other theorists extensively criticized Rawls for some unrealistic features of his model, specifically the extremely strong risk aversion his individual agents are assumed to display. These criticisms, though well founded, made Rawls’s theory no less compelling. The reason is that the theory, though it overstates individual aversion to risk, very precisely captures the way individuals in real societies evaluate their position relative to others. This emphasis on relations between individuals stands in sharp contrast to traditional Utilitarianism, which presumes that individuals will assent to any social contract that maximizes net social welfare, with no consideration of how they will fare personally, a presumption very clearly at odds with reality.
A good theory tells you what it can’t tell you. A theory that incorporates its own limits can rapidly and efficiently prune away lines of inquiry that are essentially fruitless. The example par excellence comes from the classical theory of computation, with its results on formally undecidable propositions. A beautiful instance of this dynamic at work is the use of Turing’s famous result to demonstrate, in just a few lines, the undecidability of static information-flow safety analysis. Practitioners don’t frequently encounter the results of Gödel, Turing, Church, Post, or Skolem, but that is arguably because the theoretical foundation of computation so quickly and firmly establishes the limits of what can and can’t be done that engineers need never be visited by the insidious temptation to construct the unconstructible.
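The reduction really does fit in a few lines. The Python sketch below (with hypothetical names of my own choosing) wraps an arbitrary program in a gadget that leaks a secret exactly when the program halts, so any sound-and-complete static flow analyzer would also decide the halting problem, which Turing showed to be impossible:

```python
SECRET = "classified"

def leaks_secret(program, secret):
    # A sound and complete static analyzer for this property cannot
    # exist -- that is precisely what the reduction below establishes.
    raise NotImplementedError("no sound and complete flow analyzer exists")

def halts(program):
    # Reduction from the halting problem: build a gadget in which the
    # secret flows to public output exactly when `program` halts.
    def gadget():
        program()       # diverges iff `program` diverges
        print(SECRET)   # reached only if `program` halted
    # If leaks_secret were decidable, halting would be too.
    return leaks_secret(gadget, SECRET)
```

The stub raises rather than answering, because the whole point is that no implementation can answer in general; real analyzers settle for soundness without completeness and reject some safe programs.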
A good theory rapidly makes new predictions from old predictions. This criterion concerns how much uncertainty is introduced by applying a theory, or alternatively, the degree to which a theory lends itself to computational procedure. It is precisely this feature that accounts for the unparalleled success of Newton’s mathematization of physics. Translating observed phenomena into readily transformable symbolic representations allows inferences to be easily composed with one another, which means a theory can readily build on its own successes. There is some danger that concrete realities will not fit well with their symbolic outlines, i.e. that failures will also build on failures, but a theory that can rapidly turn its findings into new findings will propagate its errors forward in a way that eventually becomes conspicuous, and hopefully diagnosable. By contrast, a theory that cannot readily incorporate its own predictions as antecedents to new inferences is more likely to function as a kind of myth or parable than as a real producer of knowledge. While it’s essential to have a conceptual foundation for considering any phenomenon, and while such a foundation is a necessary condition for a theory, it’s easy to see that a theory, as considered here, is more than just a framing device.
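As a toy illustration of predictions feeding predictions, a few lines of Python can integrate Newtonian free fall: each step’s output state is the next step’s input, and the step error compounds forward in exactly the conspicuous, diagnosable way the criterion demands. The function and its parameters are my own illustrative choices.

```python
def simulate_fall(steps, dt=0.01, g=9.81):
    # Semi-implicit Euler integration of free fall from rest.  Each
    # predicted state (velocity, position) becomes the premise of the
    # next prediction -- the composability credited to Newton's
    # mathematization of physics.  The per-step error also accumulates
    # forward, which is what eventually makes it conspicuous.
    position, velocity = 0.0, 0.0
    for _ in range(steps):
        velocity += g * dt          # new velocity from old velocity
        position += velocity * dt   # new position from new velocity
    return position
```

After one simulated second (100 steps of 0.01 s) the result lands within about 0.05 m of the closed-form g·t²/2 ≈ 4.905 m, and shrinking the step size shrinks the accumulated error — a misfit between symbols and reality would instead show up as an error that refuses to shrink.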
While a definitive breakdown of what makes for a good theory is certainly an appealing goal, this short exposition is intended more as an exploration of the issues than as any sort of final word; much, no doubt, has already been said on the subject. This may seem too general a topic for a computer science blog, but it’s worth reflecting on for the simple reason that computer science is presently faced with the temptations of a great many new theories. Unfortunately, few of these new theories have gained wide use or acceptance outside academic circles, because they have thus far failed to demonstrate their usefulness in any compelling way. I would emphasize, once again, that this is especially the case for security. A good theory of security, hopefully, can make its applicability clearly known, precisely describe the relationships between its agents, make clear the fundamental limitations of security (i.e. articulate the existence of fundamental insecurity), and draw useful conclusions.