A recent research paper tackles the idea of security by obscurity. The basic idea is that you can improve system security by making it hard to find out how it works.
This is an idea that most programmers recognize only too well. Your code can be disassembled and decompiled, and, ironically, a well-written program is all the easier to reverse engineer.
The solution generally adopted is not to write a bad program, but to use "obfuscation" as a final step - that is, to take a good, clear program and apply a range of syntactic transformations that turn it into a mess that is much more difficult to reverse engineer.
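To make the idea concrete, here is a minimal sketch of one such syntactic transformation - renaming every identifier to a meaningless token. This is a toy illustration, not a real obfuscator; production tools also rewrite control flow, encode strings and much more. The sample function and the `_oN` naming scheme are invented for the example.

```python
# Toy name obfuscation: rewrite a Python source string so that every
# function, parameter, and variable name becomes an opaque label.
import ast

class RenameIdentifiers(ast.NodeTransformer):
    def __init__(self):
        self.mapping = {}

    def _obscure(self, name):
        # Assign each original name a fresh opaque label: _o0, _o1, ...
        if name not in self.mapping:
            self.mapping[name] = f"_o{len(self.mapping)}"
        return self.mapping[name]

    def visit_Name(self, node):
        node.id = self._obscure(node.id)
        return node

    def visit_FunctionDef(self, node):
        node.name = self._obscure(node.name)
        self.generic_visit(node)  # also rewrite arguments and body
        return node

    def visit_arg(self, node):
        node.arg = self._obscure(node.arg)
        return node

clear_source = """
def interest(principal, rate):
    return principal * rate
"""

tree = RenameIdentifiers().visit(ast.parse(clear_source))
print(ast.unparse(tree))  # the same program, with all names replaced
```

The program still computes exactly the same result - only the human-readable clues have been destroyed, which is the whole point of the technique.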
As a code-protection technique, obfuscation has always seemed obvious, but two general principles of security suggest it is probably a waste of time:
- Kerckhoffs' Principle: there is no security by obscurity;
- the Fortification Principle: the defender has to defend all attack vectors, whereas the attacker only needs to attack one.
These two principles are usually applied to systems rather than just software, but they give us cause for concern - after all, systems are mostly accumulations of software. So while obfuscation may still seem worthwhile in some situations, it is something to be embarrassed about. Not only is it something you probably shouldn't rely on to protect code, it is also something you shouldn't build into your architecture to defeat attackers. Don't hide a service on a random port number, because someone will always find it, and by the second principle you have to defend against every possible attack vector anyway. This is the asymmetry of the situation - you have to be perfect in your defense and cover all possibilities, but the attacker only has to be successful once.
The new research suggests that security is a game of incomplete information: you can learn a lot by examining your attacker's behaviors and algorithms - his "type" - and obscuring your own game really does bring an advantage and improve your odds of winning. In short, obfuscation is a good general principle - make it hard for your attacker to find out how best to attack you.
The paper, which is well worth reading for its treatment of the general security problem, presents a "toy" security game of incomplete information in which the best strategy is to try to characterize the attacker's type while giving away as little as possible about the defender's type. The idea of logical complexity is used to characterize the amount and nature of the obscurity involved.
Modern ideas of security rely on the assumption that even if the attacker knows the algorithm, they cannot muster enough computing power to crack it in a reasonable amount of time. That is, you don't need to keep the algorithm secret or obscure because the attacker is computationally limited. However, if you also assume that the attacker is logically limited - not an omnipotent programmer (wouldn't we all like to be one of those) - then obscurity via logical complexity can be just as good.
The final point of the work is that, as long as you take a dynamic, adaptive approach, you can make something secure without having to secure everything. Instead of protecting against every possible attack vector, characterize your attacker and adapt your defenses to counter just the approaches in use. This restores some symmetry between the attacker's and the defender's situations.
A really interesting paper and well worth reading, if only for the comments on the history of security.
Gaming security by obscurity, Dusko Pavlovic