With the raging holy wars between proponents of different programming technologies, there seems to be a shortage of scientific comparisons between languages and objective guidelines concerning their selection. Clearly the holy wars are the result of mediocre programmers with too much time on their hands (much like this node). Swept up in this torrent of anecdotal evidence, beginning programmers quickly choose sides based more on a flamer's eloquence than on any hard facts. Mmmm, facts are tasty; here are four:

  1. All programming languages are Turing complete, so they can all model and solve the same set of real-world problems.*
  2. How fast you can solve a particular problem in any language depends largely on your experience in that language and the code base you have available.
  3. As computer performance increases, the importance of writing efficient code decreases.
  4. Restrictive languages reduce potential bugs and flexibility. This is a delicate balancing act.

Arguing over syntax is so passé. The reality is that new languages are created with new syntaxes to address the perceived needs of the time. As computer applications change, so does the 'optimal' syntax, if there even is such a thing. If one language has some fantastic feature that puts it head and shoulders above another for some set of common development needs, we're just a library away (or at most a compiler away) from having it implemented in every other language. Essentially, programming languages evolve over time to solve increasingly complex problems by adding layers that abstract away the complexity. Increased coding efficiency is achieved by moving farther and farther away from the actual hardware and operating system.

Which brings me to my point...

Understanding C means understanding what's really going on inside the computer. When one understands C, assembly language is at most tedious, but not conceptually difficult. Aside from the pure mathematical thrill of understanding these things, there are practical applications of this knowledge:

  • Bugs: They'll always exist in software at all levels, so you need to be able to recognize them at all levels.
  • Security: "A chain is only as strong as its weakest link." Okay, I've resorted to vague clichés; nevertheless, the most common elements in any software project are the low-level ones. Crackers can break into more places by finding low-level exploits, so that's where you should expect most attacks (see the sketch just after this list for a classic example).
  • Optimization: Porting your performance-critical sections to C lets you optimize them as aggressively as the problem demands. The conveniences of languages like Java and Perl are costly in both memory and performance. C gives you readable code without a lot of secret functionality that you may or may not want.
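
To make the security point less abstract, here is a minimal sketch of the kind of low-level exploit crackers hunt for. The function is hypothetical, but the unchecked strcpy into a fixed-size stack buffer is the textbook pattern:

    #include <string.h>

    /* Hypothetical example: a caller-supplied string is copied into a
       fixed-size buffer with no length check. A long enough input runs
       past the end of buf, trampling the rest of the stack frame and,
       potentially, the return address. */
    void greet(const char *name)
    {
        char buf[16];
        strcpy(buf, name);    /* no bounds check: this is the hole */
        /* ... use buf ... */
    }

Higher-level languages rule this particular mistake out by construction; in C it is one missing length check away.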

This is not to say that C is better than other languages, only that it will teach you more. In practice, C code is more difficult to secure, and more likely to have obscure bugs. Using C in place of a higher-level language is always a tradeoff, but one that should be based on an informed decision.

What really sets C apart?

So C is great, but how is it really different from all these other languages out there? After all, many seem to borrow an awful lot of C syntax and even emulate many of the standard library functions verbatim. The answer is quite simply that C forces an acute awareness of memory. This is both C's blessing and its curse.

The C programmer must be intimately aware of all the memory in use by his program. Though at times memory is automatically allocated (variable declarations and some function calls), in general the programmer must allocate memory for himself and manage it through the use of pointers. Understanding pointers can prove horrifying for the novice programmer, but they represent perhaps the most critical idea in practical computer science. Pointers are nothing more than memory addresses. Without a method of addressing memory locations, the computer would be utterly useless.
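
To make that concrete, here is a minimal sketch (my own, not from any particular source) showing both automatic and manual allocation; the pointer p is literally nothing more than the address where x happens to live:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int x = 42;                      /* automatically allocated */
        int *p = &x;                     /* p holds the address of x */

        printf("x lives at %p and holds %d\n", (void *)p, *p);

        /* manually allocated memory: the programmer owns it... */
        int *arr = malloc(10 * sizeof *arr);
        if (arr == NULL)
            return 1;

        for (int i = 0; i < 10; i++)
            arr[i] = i * i;              /* arr[i] is really *(arr + i) */

        printf("arr[3] = %d\n", arr[3]);

        free(arr);                       /* ...and must give it back */
        return 0;
    }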

Understanding how the program stack and heap work gives immeasurable insight into what higher-level languages are doing behind the scenes. Even though programming in C can be slow and frustrating for the beginner, it will improve your code in all languages. The memory hierarchy is where all data resides inside a computer, so understanding how it functions plays a pivotal role in any application. A theoretical understanding from a book is a start, but nothing will teach you the subtleties like actual C programming experience.
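
As a rough illustration of the stack/heap distinction (the function names are my own), compare an automatic buffer that dies with its stack frame to a heap buffer that the caller must explicitly release:

    #include <stdlib.h>
    #include <string.h>

    /* Stack: buf lives in this function's stack frame and vanishes on
       return, so handing a pointer to it back to the caller would be a
       classic bug. */
    void stack_example(void)
    {
        char buf[64];
        strcpy(buf, "gone after return");
    }

    /* Heap: the buffer outlives the call, but now the caller is
       responsible for freeing it. */
    char *heap_example(void)
    {
        char *buf = malloc(64);
        if (buf != NULL)
            strcpy(buf, "caller must free me");
        return buf;
    }

    int main(void)
    {
        stack_example();
        char *s = heap_example();
        free(s);                         /* free(NULL) is harmless */
        return 0;
    }

Garbage-collected languages do the heap_example bookkeeping for you; the cost is exactly the hidden machinery this write-up keeps pointing at.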

As new languages sprout up to solve common business problems more quickly and easily, C will likely remain the standard for the (admittedly shrinking) field of highly optimized and system-level hacking. C's shortcomings are merely the result of the need for flexibility. These days we have the luxury of abundant memory and processing power, so the need for this flexibility has diminished, but if you are genuinely interested in computers, you must learn C!


Addendum
*Lurking Owl says: Turing completeness does not mean that all languages can accomplish the same real-world tasks. There are a lot of I/O and OS activities that fall outside Turing completeness. C, through assembly, guarantees access to memory-mapped I/O and assembler libraries for OS traps and such. Most languages rely on C/C++ interface libraries to guarantee complete HW and I/O access. More generally, some tasks (like OS programming) cannot be done in interpreted languages even though they are Turing complete. I still like the WU, but languages do have different capabilities.

My phrase 'model and solve the same set of real-world problems' was meant to imply computations based on data, not arbitrary manipulation of and access to physical hardware. Perhaps this could have been clearer, since one of my main points is that C offers more power and flexibility. Hardware interaction is constrained first by hardware design, second by the operating system, and only third by the language itself. Even then, it's not really a limitation inherent in the language, only in that particular implementation of the language. Ultimately all languages are translated to machine code anyway, so I view C's superiority in this regard as a historical issue more than a language-design issue.