A software design methodology that takes its name from the
clichéd response to amateurs at a Hollywood audition,
"Don't call us, we'll call you". It is a useful paradigm
that assists in the development of code with
high cohesion and low coupling that is easier to
debug, maintain and test.
Most beginners are first introduced to programming from a
diametrically opposed viewpoint. Programs such as Hello World
take control of the running environment and make calls on the
underlying system to do their work. A considerable amount of successful
software has been developed using the same principles, and indeed many
developers need never think there is any other approach. After all,
programs with linear flow are generally easy to understand.
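In Java, for example, the canonical first program looks like this; the code holds control from start to finish, calling down into the system to do its work:

    // Linear control flow: the program drives everything itself.
    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello, World!");
        }
    }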
As systems increase in complexity, the linear model becomes
less maintainable. Consider for example a simple program to bounce a
square around a window in your favorite operating system or window manager.
The linear approach may work, up to a point. You can keep the moving
and drawing code in separate procedures, but soon the logic begins to
branch. What happens if the user resizes the window? Or if the square is
partially off-screen? Are all those system calls to obtain such joys as device contexts, and all that interaction with the graphical user interface, really what the programmer should be spending their time on? It
would be much more elegant if the programmer could concentrate on the
application (in this case, updating the coordinates of the box) and
leave the parts common to every application to "something else".
The key to making this possible is to sacrifice the element of
control. Instead of your program running the system, the system runs
your program. In our example, the program would register for timer events, and the programmer would write a corresponding event handler that updates the coordinates. The program would include other callbacks to respond to
other events, such as when the system requires part of a window to be
redrawn. The system should provide suitable context information so the
handler can perform the task and return. The user's program no longer includes an explicit control path, aside from initialization and shutdown.
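A minimal sketch of this inversion, using Java's Swing toolkit as one possible event-driven environment (the sizes and timings are arbitrary): the program registers a timer callback and overrides a paint callback, and the toolkit's event loop invokes both.

    import javax.swing.*;
    import java.awt.*;

    public class BouncingSquare extends JPanel {
        private int x = 0, dx = 4;

        BouncingSquare() {
            // The system calls us roughly every 16 ms; we never loop ourselves.
            new Timer(16, e -> { move(); repaint(); }).start();
        }

        private void move() {
            x += dx;
            if (getWidth() > 0 && (x < 0 || x + 40 > getWidth()))
                dx = -dx;  // bounce off the window edges
        }

        @Override
        protected void paintComponent(Graphics g) {
            // The system calls us whenever redrawing is needed,
            // handing over a ready-made graphics context.
            super.paintComponent(g);
            g.fillRect(x, getHeight() / 2 - 20, 40, 40);
        }

        public static void main(String[] args) {
            JFrame frame = new JFrame("Bouncing square");
            frame.add(new BouncingSquare());
            frame.setSize(400, 300);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        }
    }

The application code is reduced to its essence - updating coordinates and drawing a rectangle - while window management, resizing and scheduling stay with the toolkit.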
Event loop programming, however, is merely the beginning of
software development following the Hollywood principle. More
advanced schemes such as event-driven object-orientation go further along the path: software components send messages to each other and react to the messages they receive. Each message handler merely
has to perform its own local processing. It becomes very easy to
unit test individual components of the system in isolation, while
integration of all the components typically does not have to concern
itself excessively with the dependencies between them.
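As an illustration (a toy message bus invented for this example, not any particular framework's API), components subscribe to topics and react; none of them calls another directly:

    import java.util.*;
    import java.util.function.Consumer;

    class MessageBus {
        private final Map<String, List<Consumer<Object>>> handlers = new HashMap<>();

        void subscribe(String topic, Consumer<Object> handler) {
            handlers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
        }

        void publish(String topic, Object payload) {
            handlers.getOrDefault(topic, Collections.emptyList())
                    .forEach(h -> h.accept(payload));
        }
    }

    class BalanceDisplay {
        BalanceDisplay(MessageBus bus) {
            // Purely reactive: perform local processing, then return.
            bus.subscribe("balance-changed",
                    amount -> System.out.println("New balance: " + amount));
        }
    }

A unit test can construct BalanceDisplay with a bus of its own, publish a message, and check the outcome, with no other component of the system present.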
Software architecture that encourages the Hollywood principle
typically becomes more than "just" an API - instead, it may take on
more dominant roles such as a framework or container. In the
Windows world, MFC is an example of a framework for C++
developers to interact with the Windows environment, while .NET is
touted as a framework for scalable enterprise applications. On the
Java side, the Enterprise JavaBeans specification describes the
responsibilities of an EJB container, which must support such
enterprise features as remote procedure calls and transaction management.
All of these mechanisms require some cooperation from the developer.
To integrate seamlessly with the framework, the developer must produce
code that follows some conventions and requirements of the
framework. This may be something as simple as implementing a specific
interface, or, as in the case of EJB, a significant amount of
wrapper code, often produced by code generation tools. Throughout, the developer should weigh whether the framework delivers value that exceeds the additional effort it demands.
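Implementing such an interface might look like the following sketch (the interface is invented here for illustration; a real framework defines its own contract):

    // Hypothetical framework contract: the container, not the
    // application, decides when these methods run.
    interface LifecycleAware {
        void onStart();
        void onStop();
    }

    class ReportService implements LifecycleAware {
        @Override public void onStart() { /* acquire resources */ }
        @Override public void onStop()  { /* release them */ }
    }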
However, not all code (particularly legacy libraries)
readily succumbs to the Hollywood principle. Without due care, it is
still possible to write spaghetti code. Even the event-driven example above falls short of the Hollywood principle, because the program is still likely to make explicit API calls to perform the actual
drawing on the screen. Is it possible to follow the "don't call us"
part of the paradigm to its logical conclusion?
More recent paradigms and design patterns go even further in
pursuit of the Hollywood principle. Inversion of control, for instance,
takes even the integration and configuration of the system out of the
application, and instead performs dependency injection.
Again, this is most easily illustrated by an example. A more
complex program such as a financial application is likely to depend on
several external resources, such as database connections.
Traditionally, the code to connect to the database ends up as a
procedure somewhere in the program. It becomes difficult to change the
database or test the code without one. The same is true for every other
external resource that the application uses.
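In Java, the traditional version often looks like the following (the JDBC URL and credentials are placeholders): the class decides for itself where the database lives and how to reach it.

    import java.sql.*;

    class TradeRepository {
        // Hard-wired resource acquisition, buried in the class.
        Connection connect() throws SQLException {
            return DriverManager.getConnection(
                    "jdbc:postgresql://localhost/trades", "app", "secret");
        }
    }

Testing TradeRepository now demands a live database, and pointing it at another one means editing and recompiling the code.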
Various design patterns exist to try to reduce the coupling in
such applications. In the Java world, the Service Locator pattern exists to look up resources in a directory service such as JNDI. This
reduces the dependency - now, instead of every separate resource
having its own initialization code, the program depends only on the
service locator. This is better, but still not ideal.
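A minimal sketch, assuming a container that binds resources into JNDI under the conventional java:comp/env namespace:

    import javax.naming.*;
    import javax.sql.DataSource;

    class ServiceLocator {
        // One place knows how to find resources; everything else
        // asks the locator by name.
        DataSource dataSource(String name) throws NamingException {
            Context ctx = new InitialContext();
            return (DataSource) ctx.lookup("java:comp/env/" + name);
        }
    }

Every class still calls the locator, though, so a dependency on the lookup machinery itself remains.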
Inversion of control containers take the next logical step. In this
example, the configuration and location of the database (and all the
other resources) is kept in a configuration file external to the code.
The container is responsible for resolution of these dependencies,
and delivers them to the other software components - for example by calling a setter method, as sketched below. The code itself does not contain any
configuration. Changing the database, or replacing it with a suitable
mock object for unit testing, becomes a relatively simple matter of
changing the external configuration. Integration of software
components is facilitated, and the individual components get ever closer
to the Hollywood principle. However, there is still a long way to go.
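To make that last step concrete, here is a minimal sketch of setter injection; the class reprises the earlier TradeRepository, now freed of its hard-wired connection code:

    import javax.sql.DataSource;

    class TradeRepository {
        private DataSource dataSource;

        // The container - or a unit test - calls this setter, guided by
        // configuration the class itself never sees.
        public void setDataSource(DataSource dataSource) {
            this.dataSource = dataSource;
        }
    }

A test supplies a mock DataSource directly; in production, a container such as Spring reads the wiring from its external configuration and makes the same call.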