People who like to think of themselves as hard-core security people rarely like to be pinned down on an actual implementation. There are exceptions, the real security folk, but there is a large crowd of people claiming to do security who really do academic hacking: looking for flaws in existing protocols and implementations.

This second flavor of "security" people, the academic hackers, takes protocols implemented by others and critiques those implementations. These critiques can take the form of outright hacks or theoretical design flaws. Academic hackers vastly outnumber real security folk.

The overall problem is that security is easy. I know this is heresy. I can secure a machine from remote attack in a matter of seconds: disconnect it from all external access. Done! If you are worried about local attacks as well, bury the machine in concrete after giving the drives an NSA-level wipe. If you are truly hyper-paranoid, do not forget to take a magnet to your RAM and other chips. Fire may be helpful for any left-over charge in the capacitors.

The slight problem with the previous proposal is that security is generally wanted in tandem with other things, such as functionality and ease-of-use (I'm going to ignore hyper-secure organizations, such as militaries and intelligence agencies, for this discussion). While it is not particularly difficult to design a secure system, it is extremely difficult to design a secure system that is usable and provides useful functionality.

In practice, security is desired, but people are willing to pay only minimally for it. Thus, security design's primary goal is to maximize security under the constraint of minimizing the effect on usability. Academic hackers are not willing to truly accept this. In fact, I have heard academic hackers explain how ease-of-use actually increases once security reaches a certain point. They have it backwards: well-designed security has high ease-of-use so that people use it properly.

Consider PGP. The problem with PGP is that it is hard to use. Read "Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0" by Alma Whitten and Doug Tygar from USENIX Security 1999 for a discussion of PGP's lack of usability (I am told the presentation of the paper led one audience member to respond, "PGP is easy! I use it daily!" Clearly, that audience member missed the point). Because of the difficulty of setting up and using PGP, very few people use it. Thus, almost all e-mail is sent unencrypted, making any security problems with PGP moot.

Let me give another illustrative story. Consider Fred, an academic hacker who worked for a company that was selling a service. The company wanted to sell the software underpinning the service as an appliance. The obvious communication method (at the time) was a web server, so the company thought of Apache. Fred was totally against this.

"Web servers are insecure," Fred says.
"Okay, what is your proposal?", the rest of the company ask.
Fred's response is to use X11 tunneled over SSH.

While not every machine has a web browser on it, the number that do not is a rounding error. On the other hand, SSH and X11 are installed on a much smaller fraction of machines (in particular, relatively few Windows machines). Moreover, setting up an X11 tunnel over SSH can be difficult to debug, and there are many implementations of both the X Window System and SSH, making leading a user through the process painful. This ignores the fact that Fred would probably also want challenge-response authentication or DSA keys for authentication, which opens another can of worms.
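
To make the contrast concrete, here is roughly what Fred's proposal asks of the end user. The host name and program path are made up for illustration, and this sketch assumes an OpenSSH client, a local X server, and X11 forwarding enabled on the appliance:

    # Fred's proposal: run the appliance's GUI over an SSH-forwarded X11 connection
    # (assumes an SSH client, a local X server, and X11 forwarding enabled on the appliance)
    ssh -X admin@appliance.example.com /usr/local/bin/appliance-gui

    # The web-server alternative: point any browser at https://appliance.example.com/

On a Unix workstation the first option is a single command; on a typical Windows desktop it first means installing and configuring an SSH client and an X server, which is exactly the support burden at issue.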

After convincing Fred that perhaps using a web server would be much simpler to implement, use, and support, there is still an additional objection: Apache is "not secure". Apache is a large piece of software and, like any large piece of software, has bugs found with some regularity. Fred would rather use a smaller web server, such as thttpd. The problems with this suggestion are many-fold:

  • Experience with Apache is easy to find in hires. Experience with thttpd is nearly impossible to find.
  • thttpd does not support SSL. There are ways around this (one is sketched just after this list), but they involve additional setup and maintenance.
  • Apache's bug count is driven by buggy add-on modules, by Apache's size, and by its large user base, which means more people are looking for bugs in it. A bug in thttpd would probably be a footnote, while Apache bugs make headlines in security newsletters. It is therefore not clear whether thttpd actually has fewer security flaws, or why.
  • Perception. If thttpd has a security flaw, the company must justify why it used a broken product. If Apache has a security flaw, no one questions why Apache was selected.
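
As for the SSL gap, the usual workaround is to put an SSL wrapper such as stunnel in front of thttpd, terminating HTTPS and passing plain HTTP to the local server. A minimal sketch of such a configuration follows; the certificate path and ports are illustrative, not a tested setup:

    ; stunnel.conf (sketch): terminate HTTPS and hand plaintext HTTP to thttpd on localhost
    cert = /etc/stunnel/appliance.pem

    [https]
    accept  = 443
    connect = 127.0.0.1:80

It works, but it is one more daemon to run, one more certificate to manage, and one more piece of software to keep patched.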

Someone suggested to me that part of the reason academic hackers dislike implementing is that they realize the difficulty of designing such systems and fear they would make an error in design or implementation. Once such an error was discovered, their standing in other academic hackers' eyes would suffer. This may or may not be true.

Criticizing others is easier than creating something better yourself. This is true in politics, engineering, art, and, of course, security. It is important to know the flaws of existing protocols and programs, just as it is important to be able to appreciate and evaluate fine art. Far more important, however, is fixing and avoiding those flaws, either by repairing the existing entity or by creating a new one. The latter is far harder. Remember that the next time you read about a security flaw in security software.