A C++ idiom (Gorgonzola gently chides me for not mentioning its most famous propounder: Bjarne Stroustrup himself!) to ensure various resource leaks cannot occur in your code. RAII demands that you acquire resources in the constructor of some object, and release them in that object's destructor. By defining the object in an appropriate scope, you ensure that the resource cannot be leaked.

Typical resources to be managed in this way are open files (the basic example, both for its usefulness and for its presence in the C++ standard library), locks (thanks WRW!), (some) worker processes or threads, and anything else available only in limited supply. Even the memory pointed at by some smart pointers can be considered an instance of RAII.
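A lock is perhaps the clearest illustration of the shape of the idiom. Here is a minimal sketch (the class name ScopedLock is my own invention for this writeup, and I'm using a POSIX mutex; C++11 and later ship std::lock_guard, which does exactly this job):

#include <pthread.h>

// Constructor acquires the mutex, destructor releases it -- no matter
// whether we leave the scope by return, break or a thrown exception.
class ScopedLock {
public:
  explicit ScopedLock(pthread_mutex_t &m) : mutex_(m) {
    pthread_mutex_lock(&mutex_);
  }
  ~ScopedLock() {
    pthread_mutex_unlock(&mutex_);
  }
private:
  pthread_mutex_t &mutex_;

  // Copying a ScopedLock would unlock the mutex twice; forbid it.
  ScopedLock(const ScopedLock &);
  ScopedLock &operator=(const ScopedLock &);
};

void update_counter(pthread_mutex_t &m, int &counter)
{
  ScopedLock guard(m);   // locked here
  ++counter;             // arbitrary code, which may even throw
}                        // unlocked here, return or no return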

There is a fundamental problem with this C code:

#include <stdio.h>

/* handle_error() is assumed to be declared elsewhere */
int f(char *fname)
{
  FILE *fp = fopen(fname, "r");

  if (! fp) {
    handle_error(fname);
    return -1;
  }

  /* Do stuff with fp, call functions, etc. */

  fclose(fp);
  return 0;
}
If f() has any other exits, the file fp must be closed before each one. It is very easy to neglect to do this. If f() allocates multiple resources during execution, each exit must take care to free only those resources which were indeed allocated. f() might be able to proceed despite failing to allocate some of the resources, adding further to the problem. The code rapidly becomes a mess. And since these extra exits are typically taken only under obscure conditions, it may be hard or impossible to test them all...
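To see how quickly it gets out of hand, here is a sketch in the same style (load() and read_header() are made-up names) of a function juggling just two resources; even the disciplined goto-cleanup idiom needs one label per resource, and every early exit has to jump to exactly the right one:

#include <stdio.h>
#include <stdlib.h>

int read_header(FILE *fp, char *buf);   /* hypothetical helper */

int load(char *fname)
{
  FILE *fp = NULL;
  char *buf = NULL;
  int ret = -1;

  fp = fopen(fname, "r");
  if (! fp)
    goto out;                /* nothing acquired yet */

  buf = (char *) malloc(4096);
  if (! buf)
    goto out_close;          /* must close fp, must NOT free buf */

  if (read_header(fp, buf) < 0)
    goto out_free;           /* now both must be released */

  /* ... more work, more exits, more labels ... */
  ret = 0;

out_free:
  free(buf);
out_close:
  fclose(fp);
out:
  return ret;
}

Every new resource adds a label, and every new exit point has to know how far down the ladder it is.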

In C++, things get even worse. You can create a new object anywhere in your code, not just at the start of a block. This is "just" a maintenance nightmare, but still (theoretically) solvable. But f() might also call some function g(), which throws an exception! If f() doesn't catch the exception, it will leak resources, as its cleanup code is never executed. Lisp's unwind-protect offers a solution to this -- the cleanup code is specifically guaranteed to be executed. But this isn't a Lisp node, and besides -- it doesn't force you to invent 17 new classes, hence is unfitting for an object oriented language.
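The leak itself takes very little code to demonstrate -- a sketch, with g() standing in for any callee that might throw:

#include <cstdio>

void g(FILE *fp);    // defined elsewhere; might throw (hypothetical)

int f(char *fname)
{
  FILE *fp = std::fopen(fname, "r");
  if (! fp)
    return -1;

  g(fp);             // if g() throws, the stack unwinds through f()...

  std::fclose(fp);   // ...and this line is never reached: fp leaks
  return 0;
}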

So it might seem the only solution is to have every function catch all possible exceptions, free its resources, and re-throw the exception. If you use exceptions, your entire program will quickly deteriorate into a tangled mess of try { ... } catch (...) { ...; throw; } blocks, as you try to free every allocated resource.
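Here is a sketch of what that looks like for just one resource (again, g() is hypothetical); note the duplicated cleanup, once in the catch-all and once on the normal path:

#include <cstdio>

void g(FILE *fp);    // defined elsewhere; might throw (hypothetical)

int f(char *fname)
{
  FILE *fp = std::fopen(fname, "r");
  if (! fp)
    return -1;

  try {
    g(fp);
  }
  catch (...) {      // catch absolutely anything...
    std::fclose(fp); // ...only to run the cleanup...
    throw;           // ...and re-throw for whoever actually cares
  }

  std::fclose(fp);
  return 0;
}

Now multiply this by every resource and every function on the call path.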

RAII is a better idea. The destructor of a local object is guaranteed to be called whenever the program leaves its runtime scope. In particular, the destructor gets called if the function terminates normally, or if an exception unwinds the stack through it. The fundamental idea of RAII is to acquire all resources in the constructor of some object; as the object's destructor will be called at the appropriate time, the resource can be freed there:

#include <fstream>

int f(char *fname)
{
  std::ifstream file(fname);            // constructor called

  if (! file) {
    handle_error(fname);
    return -1;
  }

  // Do stuff with file, call functions which throw uncaught exceptions, etc.

  // file is automatically destroyed past this point, return
  // or no return!
  return 0;
}
The destructor of an ifstream will be called when file goes out of scope. That destructor closes the file if it's open -- so we cannot leak any resources associated with the file.

Many objects (fstreams and other iostreams are among them) even have "re-initialisation" semantics, which let you release any previous resource and allocate a new one. The destructor will be called eventually -- so a better name might have been "releasing resources is destruction".
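For example (the file names are made up), one ifstream can be pointed at a succession of files; whichever file it holds last is the one the destructor closes:

#include <fstream>

void read_both()
{
  std::ifstream in("first.txt");   // constructor acquires the first file
  // ... read from in ...

  in.close();                      // release it early, explicitly
  in.open("second.txt");           // re-initialise with a new resource
  // ... read from in ...
}                                  // destructor closes second.txt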

If you prefer to hold a raw pointer to your resource, you're out of luck: resources are only released automatically when they are held by objects with destructors. But a smart pointer can usually save the day: by holding the pointer in an object with the appropriate "eventual" destruction semantics, you ensure destruction of the underlying object. It is not clear what you stand to gain from insisting on the pointer in the first place (except conformance with broken coding standards).
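A sketch of the smart-pointer route, assuming C++11's std::unique_ptr (older code would roll its own smart pointer class); the custom deleter turns a plain FILE * into an RAII object:

#include <cstdio>
#include <memory>

int f(const char *fname)
{
  // The deleter is fclose, so the FILE is closed when the pointer
  // goes out of scope -- exception or no exception. The deleter is
  // not called if fopen() returned a null pointer.
  std::unique_ptr<FILE, int (*)(FILE *)> fp(std::fopen(fname, "r"),
                                            std::fclose);
  if (! fp)
    return -1;

  std::fgetc(fp.get());   // pass fp.get() wherever a FILE * is expected

  return 0;
}                         // fclose runs here automatically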

RAII is an elegant (compared to the rest of the C++ programming language freakshow) use of the language to solve a real problem. Due to the multitude of destructors called, it is slightly less efficient than "the" correct hand-written low-level cleanup (especially if you're calling numerous virtual destructors). But that hand-written version would hardly perform better in practice: it's hard to make a wise, portable decision between jumping to more general resource-releasing code and using specific knowledge of the program's precise state to know what must be released and what must not.

A more serious drawback is that every object used for RAII must have some class. For "low-level" objects such as open files, locks, memory, and the like, the maintenance cost of these extra classes is low: you'd have to have them anyway, so all you have to add is some effort in writing them correctly. Low-level objects need only be written once, and can be used very many times. Indeed, often the library will already have all you need (iostreams is the premier example here), so you don't need to add any extra code.

But for resources higher up the application chain, you aren't so lucky. For instance, if you need to manage a block of shared memory and a socket, which can only be allocated or released together under the protection of the same lock, you'll need to define your own class. Such a specific collection of resources will typically be used very few times, maybe just once. But the code for the collection object's class will have to be written. And it will have to live some distance away from the function which actually uses it (C++'s function-local classes are too restricted to help much, and there is nothing like an anonymous class, which could make the situation a bit better). Using RAII increases the fragmentation of code which already occurs with OOP.
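A sketch of the kind of one-off class that example forces on you (all the names are hypothetical, error checking is elided, and the POSIX calls are only there to suggest the shape):

#include <pthread.h>
#include <sys/shm.h>
#include <sys/socket.h>
#include <unistd.h>

// One-off RAII bundle: both resources are acquired under the lock in
// the constructor and released under the same lock in the destructor.
// (Error checking elided for brevity.)
class ShmAndSocket {
public:
  ShmAndSocket(pthread_mutex_t &m, int shmid) : mutex_(m) {
    pthread_mutex_lock(&mutex_);
    mem_ = shmat(shmid, 0, 0);                 // attach the shared memory
    sock_ = socket(AF_INET, SOCK_STREAM, 0);   // and open the socket
    pthread_mutex_unlock(&mutex_);
  }
  ~ShmAndSocket() {
    pthread_mutex_lock(&mutex_);
    close(sock_);                              // release in reverse order
    shmdt(mem_);
    pthread_mutex_unlock(&mutex_);
  }

  void *memory() const { return mem_; }
  int socket_fd() const { return sock_; }

private:
  pthread_mutex_t &mutex_;
  void *mem_;
  int sock_;

  ShmAndSocket(const ShmAndSocket &);          // not copyable
  ShmAndSocket &operator=(const ShmAndSocket &);
};

The class is trivial, but it's still a class: a name, a definition, a place in the source tree -- all for something one function needed once.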

At least (unlike many other "design patterns") it's a real solution to a very real problem. I still prefer unwind-protect. Too bad They don't pay me to program in Lisp.

Note that Java Junkies don't get to use RAII. Their language's much-vaunted "garbage collection" automatically prevents memory leaks, but does away with destructors! On an imaginary machine with infinite memory (a JVVM?), no "finalising" methods will ever be called. So the hypothetical JFStream class would be unable to ensure that files are ever closed! Garbage collection is handy, but it only collects predefined types of garbage. Real programmers make unlimited types of new garbage.
