A compiler takes a program written in a high-level programming language and translates it into instructions the computer can execute. Simple compilers do this by replacing each statement with its machine language equivalent, without regard for the program as a whole. Straight replacement is quite fast to perform, and a compiler that works this way isn't very hard to write. Unfortunately, it also means the compiled program will be full of redundant code, and will probably have long stretches that could run much faster had they been written by a human who was paying attention.
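
To make that concrete, here is a rough sketch (my own illustration, not the output of any particular compiler) of what statement-at-a-time translation does to a trivial C function. Because each statement is translated in isolation, every variable lives in memory and every use reloads it:

    int f(int a, int b) {
        int x = a + b;
        int y = x * 2;
        return y;
    }

    /* A statement-at-a-time compiler emits roughly this x86-64 for f(),
     * spilling and reloading every value because it never looks past
     * the current statement:
     *
     *   f:  mov  [rsp-4], edi     ; store a
     *       mov  [rsp-8], esi     ; store b
     *       mov  eax, [rsp-4]     ; reload a
     *       add  eax, [rsp-8]     ; a + b
     *       mov  [rsp-12], eax    ; store x
     *       mov  eax, [rsp-12]    ; reload x
     *       add  eax, eax         ; x * 2
     *       mov  [rsp-16], eax    ; store y
     *       mov  eax, [rsp-16]    ; reload y
     *       ret
     *
     * A compiler that looks at the whole function instead emits:
     *
     *   f:  lea  eax, [rdi+rsi]   ; a + b
     *       add  eax, eax         ; times 2
     *       ret
     */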

Optimizing compilers are the solution to this problem. They look at the source code as a whole and do whatever they can to make the compiled program run as fast as possible on the target hardware. On a well designed compiler the degree of optimization can be specified, from easy optimizations that would be understandable to any programmer to deep magic optimizations that make the program lightning fast but nearly impossible to read a core dump from or debug. Optimizing compilers produce the same kind of binary a simple compiler would; they just take much longer to do it, and the code they produce is intricately intertwined and complex. These days programmers are generally taught (although some disregard it) not to hand-optimize their code at all, but to concentrate on readability and let an optimizing compiler worry about speed.
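
With gcc or clang, for example, that dial is the -O0 through -O3 flags. And here is a small sketch (my example, assuming a reasonably modern compiler) of why hand-optimizing is usually wasted effort:

    /* Readable version: says what it means. */
    unsigned eighth(unsigned x) {
        return x / 8;
    }

    /* Hand-"optimized" version: harder to read and typically no faster,
     * because a modern compiler already turns unsigned division by a
     * power of two into this exact shift on its own. */
    unsigned eighth_shifty(unsigned x) {
        return x >> 3;
    }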

Many optimizing compilers work by converting the program to be optimized from its original high-level language into a lower-level intermediate representation, which only the compiler itself understands. This is a semantic conversion: it preserves what the program does while discarding most of the way the source code expressed it. The compiler then regroups the operations, localizing data that's shared between them and breaking the pretty data hiding paradigm popular with today's coders. Next it restructures the routines, replacing slow-running loops with sets of nested smaller ones that no human would want to maintain. After that it may build a few lookup tables, which cost memory but return values much more quickly than computation would. Finally it turns the intermediate code into assembly language, which is assembled into a binary.
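
The loop rewriting described above is usually called loop tiling or blocking. Here is a hand-written sketch of the transformation (my own illustration; a real compiler does this on its intermediate form and picks the tile size from the target's cache geometry, rather than using the constant I assume here):

    #define N    512
    #define TILE 64   /* assumed tile size; chosen so it divides N evenly */

    /* Straightforward matrix multiply: the kind of loop an optimizer
     * rewrites. Assumes c[][] starts out zeroed. */
    void matmul_naive(double a[N][N], double b[N][N], double c[N][N]) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++)
                    c[i][j] += a[i][k] * b[k][j];
    }

    /* Tiled version: the single large loop nest becomes nested smaller
     * loops that each work on a cache-sized block. The arithmetic is
     * identical; only the traversal order changes. */
    void matmul_tiled(double a[N][N], double b[N][N], double c[N][N]) {
        for (int ii = 0; ii < N; ii += TILE)
            for (int jj = 0; jj < N; jj += TILE)
                for (int kk = 0; kk < N; kk += TILE)
                    for (int i = ii; i < ii + TILE; i++)
                        for (int j = jj; j < jj + TILE; j++)
                            for (int k = kk; k < kk + TILE; k++)
                                c[i][j] += a[i][k] * b[k][j];
    }

Both functions compute the same result; the tiled one just visits the data in cache-friendly chunks, and it is exactly the sort of mechanically correct but unreadable code described above.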
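
The lookup table trick looks like this in hand-written form (again my own sketch, here counting the set bits in a word, a computation that is cheap to tabulate per byte):

    #include <stdint.h>

    static uint8_t popcount_table[256];

    /* Fill the table once at startup: 256 bytes of memory spent so the
     * bit-counting loop never has to run again. */
    static void init_popcount_table(void) {
        for (int i = 0; i < 256; i++) {
            int bits = 0;
            for (int v = i; v != 0; v >>= 1)
                bits += v & 1;
            popcount_table[i] = (uint8_t)bits;
        }
    }

    /* Table version: four loads and three adds instead of a loop. */
    static int popcount32(uint32_t x) {
        return popcount_table[x & 0xFF]
             + popcount_table[(x >> 8) & 0xFF]
             + popcount_table[(x >> 16) & 0xFF]
             + popcount_table[(x >> 24) & 0xFF];
    }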

It should be noted that since an optimizing compiler's output is so tightly self-integrated, optimizing code that is already tightly self-integrated (like a kernel, or even the optimizing compiler itself) can produce binaries that simply don't work. Hundreds of publications and PhD dissertations have been written on why optimizing ultra-complex code breaks it, and I don't know enough to go into detail here. Since only a tiny fraction of programmers ever write code that can break under optimization, it doesn't really matter anyhow.
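
One concrete and well-documented way this happens (my example, not a summary of those dissertations): low-level code often leans on behavior the language standard doesn't actually guarantee, and an optimizer is entitled to assume such behavior never occurs.

    /* Signed overflow is undefined behavior in C, so an optimizer may
     * assume x + 1 > x always holds for signed x and fold this whole
     * test to a constant 0; gcc and clang commonly do so at -O2. */
    int will_overflow(int x) {
        return x + 1 < x;   /* meant as an overflow check; UB when x == INT_MAX */
    }

This is part of why kernels build with flags like gcc's -fno-strict-aliasing: such flags switch off particular assumptions the optimizer would otherwise be allowed to make.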
