There are lots of principles, techniques, and tricks you can use to optimise your computer program's performance (mostly time, but sometimes memory space). So many, in fact, that it's easy to forget the two cardinal rules of program optimisation:
  1. Not now.

    Don't optimise until you have to. Optimised code takes a lot longer to write, has more bugs, and is harder to maintain. Maybe your program will run fast enough, and you won't have to optimise?

    Until you have a working program that runs too slowly, don't optimise it!

  2. Not here.

    Almost all of a program's runtime is taken up by very few lines of code (the generally quoted rule is that 90% of the runtime is taken up by 10% of the code). Optimising anything else is most likely a waste of time.

    So it's almost invariably true that you don't want to optimise any particular part of the program. Instead, profile your program, find the hotspots that make up the vast majority of the runtime, and optimise those.
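
By way of a crude illustration, here is a sketch (not a substitute for a real profiler, and with invented phase names and workloads): wrap the candidate parts of your program in timers and see which one actually dominates.

    // Time each suspected phase and see where the runtime really goes.
    public class WhereDoesTheTimeGo {
        static long parse() {             // stand-in for a cheap phase
            long x = 0;
            for (int i = 0; i < 1_000; i++) x += i;
            return x;
        }

        static long compute() {           // stand-in for the hotspot
            long x = 0;
            for (int i = 0; i < 100_000_000; i++) x += i % 7;
            return x;
        }

        static long timed(String name, java.util.function.LongSupplier phase) {
            long t0 = System.nanoTime();
            long result = phase.getAsLong();
            System.out.printf("%-10s %8.1f ms%n", name, (System.nanoTime() - t0) / 1e6);
            return result;
        }

        public static void main(String[] args) {
            long sink = timed("parse", WhereDoesTheTimeGo::parse)
                      + timed("compute", WhereDoesTheTimeGo::compute);
            System.out.println("(checksum: " + sink + ")");
        }
    }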

Rules of Optimization:
Rule 1: Don't do it.
Rule 2 (for experts only): Don't do it yet.

-- M.A. Jackson

More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason - including blind stupidity.

-- W.A. Wulf

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

-- Donald Knuth

The best is the enemy of the good.

-- Voltaire

Get someone else to do it

The thing about program optimisation is that, most of the time, you don't have to do it yourself: a good compiler is likely to optimise far better than you can without the most extreme effort. Try to keep the code clean, and avoid doing things that frustrate compiler optimisation, such as manually forcing memory allocation strategies, writing non-tail-recursive code when tail recursion would be just as good, and so on. Of course, if you use a declarative (or decent imperative) language, you're likely to get a much smarter compiler than if you're using C++.

Even if you're using a fairly crappy language like Java, with a "clever" VM, the virtual machine will still execute well-compiled bytecode faster than badly-compiled bytecode. Try different compilers and see what they give you.

What we need more of is science

If your program isn't running fast enough, use a profiler to discover where most of your execution time is spent BECAUSE YOUR INTUITION IS WRONG OR YOUR MONEY BACK. Also, discover WHY. Will more RAM, faster processors, or faster IO help? More RAM is often key; IO can rarely be directly improved, but simple strategies may help; and processor-bound computations often indicate that a more efficient algorithm exists.
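
One cheap way to get at the WHY, for a single-threaded workload: compare wall-clock time against CPU time over the same stretch of work. If the two are close, you are processor-bound; if CPU time is a small fraction of wall time, you are mostly waiting, on IO or something else. The sketch below uses the standard ThreadMXBean; doTheWork() is a made-up stand-in for your program, and a real profiler will break the same question down per method.

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class BoundCheck {
        public static void main(String[] args) throws Exception {
            ThreadMXBean bean = ManagementFactory.getThreadMXBean();

            long wall0 = System.nanoTime();
            long cpu0 = bean.getCurrentThreadCpuTime();   // CPU nanoseconds for this thread (-1 if unsupported)

            doTheWork();                                  // stand-in for the real workload

            long cpuMs = (bean.getCurrentThreadCpuTime() - cpu0) / 1_000_000;
            long wallMs = (System.nanoTime() - wall0) / 1_000_000;
            System.out.printf("wall: %d ms, cpu: %d ms (%.0f%% busy)%n",
                    wallMs, cpuMs, 100.0 * cpuMs / Math.max(wallMs, 1));
        }

        // Hypothetical workload: half computation, half sleeping to mimic slow IO.
        static void doTheWork() throws InterruptedException {
            long x = 0;
            for (int i = 0; i < 50_000_000; i++) x += i % 13;
            Thread.sleep(500);
            if (x == 42) System.out.println();            // keep the loop from being optimised away
        }
    }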

Know your enemy

Your enemy falls into two groups: stylistic failures and strategic failures.

Stylistic failures

These are things like excessive heap allocation, or deep recursion with small stack frames, when you are using a compiler that cannot optimise them away for you.

In general, you can get a lot of mileage out of aggregating many small, costly operations into one larger operation. This is exemplified by buffered IO, by eagerly allocating enough memory for the whole computation at once, and so on.
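
A minimal Java sketch of both ideas (the file path comes from the command line; nothing here is a benchmark): reading a file one unbuffered byte at a time issues a tiny request for every byte, a BufferedInputStream aggregates them into block reads, and giving an ArrayList its final size up front replaces repeated incremental growth with one allocation.

    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.ArrayList;
    import java.util.List;

    public class Aggregation {
        // Many small operations: one read call per byte.
        static long countBytesSlowly(String path) throws IOException {
            long n = 0;
            try (InputStream in = new FileInputStream(path)) {
                while (in.read() != -1) n++;
            }
            return n;
        }

        // One larger operation at a time: the buffer turns those calls into block reads.
        static long countBytesBuffered(String path) throws IOException {
            long n = 0;
            try (InputStream in = new BufferedInputStream(new FileInputStream(path))) {
                while (in.read() != -1) n++;
            }
            return n;
        }

        // Eagerly allocate enough memory for the whole computation at once.
        static List<Integer> squares(int count) {
            List<Integer> out = new ArrayList<>(count);   // one allocation instead of repeated growth
            for (int i = 0; i < count; i++) out.add(i * i);
            return out;
        }

        public static void main(String[] args) throws IOException {
            // Usage: java Aggregation somefile
            System.out.println("unbuffered: " + countBytesSlowly(args[0]) + " bytes");
            System.out.println("buffered:   " + countBytesBuffered(args[0]) + " bytes");
            System.out.println("squares:    " + squares(1_000_000).size());
        }
    }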

Your second weapon is replacing costly operations (e.g. multiplications or sorts) with cheaper alternatives, such as additive algorithms or linear scans. In the same vein, you may wish to use non-blocking (threaded) IO, so that you are not wasting time waiting on slow devices.
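
Two throwaway sketches of that kind of substitution: a running index instead of a row-times-width multiplication in the inner loop, and a linear scan for the maximum instead of sorting first. (Non-blocking IO is a larger topic; in Java it means java.nio, or simply handing the IO to another thread.)

    import java.util.Arrays;

    public class CheaperOperations {
        // Multiplication in the inner loop: index = row * width + col on every step.
        static long sumWithMultiply(int[] grid, int width, int height) {
            long sum = 0;
            for (int row = 0; row < height; row++)
                for (int col = 0; col < width; col++)
                    sum += grid[row * width + col];
            return sum;
        }

        // Additive version: keep a running index and just add one each step.
        static long sumAdditively(int[] grid, int width, int height) {
            long sum = 0;
            int index = 0;
            for (int row = 0; row < height; row++)
                for (int col = 0; col < width; col++)
                    sum += grid[index++];
            return sum;
        }

        // O(n log n) way to find the largest element...
        static int maxBySorting(int[] values) {
            int[] copy = Arrays.copyOf(values, values.length);
            Arrays.sort(copy);
            return copy[copy.length - 1];
        }

        // ...versus a single O(n) linear scan.
        static int maxByScanning(int[] values) {
            int max = values[0];
            for (int v : values) if (v > max) max = v;
            return max;
        }

        public static void main(String[] args) {
            int width = 1000, height = 1000;
            int[] grid = new int[width * height];
            for (int i = 0; i < grid.length; i++) grid[i] = i % 100;
            System.out.println(sumWithMultiply(grid, width, height) == sumAdditively(grid, width, height));
            System.out.println(maxBySorting(grid) == maxByScanning(grid));
        }
    }

(A good optimising compiler will often perform the multiplication-to-addition rewrite, known as strength reduction, by itself, which is exactly the "get someone else to do it" point above; the sort-versus-scan decision, though, is yours.)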

Both of these will be aided by using appropriate data structures. Do not use arrays for anything unless you really know what you are doing.
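
For instance (a sketch, not a benchmark): membership tests against an array cost a scan of the whole thing per query, while a HashSet makes each query roughly constant-time, which matters enormously once the test sits inside a hot loop.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class Membership {
        // O(n) per query: fine for a handful of items, painful inside a hot loop.
        static boolean containsByScan(String[] names, String wanted) {
            for (String name : names)
                if (name.equals(wanted)) return true;
            return false;
        }

        // Roughly O(1) per query once the set is built.
        static Set<String> asSet(String[] names) {
            return new HashSet<>(Arrays.asList(names));
        }

        public static void main(String[] args) {
            String[] names = {"ada", "grace", "edsger", "donald"};
            Set<String> set = asSet(names);
            System.out.println(containsByScan(names, "grace"));  // true, after scanning
            System.out.println(set.contains("grace"));           // true, via one hash lookup
        }
    }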

Strategic failures

These tend to require external help. Try to use a garbage collector if you don't have one; if you do have one, provide what hints you can. If you are writing a resource-bound program, you may benefit from disabling garbage collection and doing it yourself, or from using a garbage collector with a fixed strategy that works very well in your program but would suck for larger ones. READ THE MANUAL.

Alternatively, you may be eagerly evaluating things that are often discarded. You may wish to use some kind of lazy or symbolic computation engine, or, if your computation is probabilistic, STOP USING A HEURISTIC YOU MADE UP IN ONE MOMENT BEFORE RUSHING TO THE TOILET. Use a standard heuristic, or a principled methodology, possibly involving a function generator, such as a perfect hash-function generator.
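
On the eager-evaluation point, here is a minimal Java sketch of deferring and caching a computation behind a Supplier; expensiveReport() is a made-up stand-in. The work is done only if someone actually asks for the value, and at most once.

    import java.util.function.Supplier;

    public class Lazy<T> {
        private Supplier<T> supplier;   // dropped once the value has been computed
        private T value;

        private Lazy(Supplier<T> supplier) { this.supplier = supplier; }

        public static <T> Lazy<T> of(Supplier<T> supplier) { return new Lazy<>(supplier); }

        // Compute on first use, remember the result, never compute again.
        public synchronized T get() {
            if (supplier != null) {
                value = supplier.get();
                supplier = null;
            }
            return value;
        }

        // Hypothetical expensive computation that many runs never need at all.
        static String expensiveReport() {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 1_000_000; i++) sb.append(i % 10);
            return "report of length " + sb.length();
        }

        public static void main(String[] args) {
            Lazy<String> report = Lazy.of(Lazy::expensiveReport);   // nothing computed yet
            boolean userAskedForIt = args.length > 0;
            if (userAskedForIt) System.out.println(report.get());   // computed here, once
            else System.out.println("never computed the report");
        }
    }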

It is important to know the costs involved in invoking third-party libraries.

In conclusion

Use the learning and effort of clever people to hide the fact that you are lazy and stupid (or at least stupider than they are, and without years to perfect this damn program). If that doesn't work, try calculating the cost of buying more RAM (or a faster CPU, if it really is a processor-bound computation) versus the increased cost of your time. If you are industrious and fantastically clever, you don't need my advice.
