In object-oriented programming languages such as C++, a destructor is a special member function of a class, called automatically when an object is destroyed. In many cases this works out fine: the method simply needs to make sure that all resources allocated by an object of this type are released. However, for anything but simple non-polymorphic classes, there is an extra complication. To quote from the ten commandments for C++ programmers: “Thou shalt declare and define the destructor as virtual such that others may become heir to the fruits of your labors.”

But why do this? How does it help?

Virtual

The answer can be found by thinking about why we use the keyword virtual at all. To recap, a virtual method is used when we want to derive from a class and access derived objects through the interface provided by the base class. For example, take the class foo and its derived class bar. foo contains a method DoSomething, which bar overrides. Suppose we create an instance of bar through a pointer of type foo*:

foo *obj = new bar();

and then call the DoSomething method of obj:

obj->DoSomething();

what happens? Something undesirable: instead of the code defined in bar being executed, we find that the foo version of DoSomething is called. The fix is simple: declare DoSomething as a virtual function in the base class foo, and the call will be dispatched at run time to the version matching the object's actual type.
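
A minimal sketch of the whole arrangement (the class and method names come from the text; the printed messages are placeholders of my own):

#include <iostream>

class foo {
 public:
  virtual void DoSomething() { std::cout << "foo::DoSomething\n"; }
  virtual ~foo() {} // see the next section for why this, too, is virtual
};

class bar : public foo {
 public:
  void DoSomething() override { std::cout << "bar::DoSomething\n"; }
};

int main() {
  foo *obj = new bar();
  obj->DoSomething(); // now prints "bar::DoSomething"
  delete obj;
  return 0;
}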

Destructors

So, imagine the above example, but with destructors. What if bar allocates a massive dynamic array which is cleaned up in its own destructor, and we then destroy the object through the base-class pointer?

delete obj;

With a non-virtual destructor in the base class, bar's destructor isn't called at all, and we suffer a horrid memory leak where a large clump of memory isn't freed when it should be. The magic keyword “virtual” fixes this, and allows us to do the things that object oriented programming was designed to do.
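
A hedged sketch of the scenario (the member name and array size are invented for illustration):

class foo {
 public:
  virtual ~foo() {} // virtual: delete through a foo* runs ~bar first
};

class bar : public foo {
 public:
  bar() : data(new char[1048576]) {} // the "massive dynamic array"
  ~bar() override { delete[] data; } // reached via foo* only because ~foo is virtual
 private:
  char *data;
};

// foo *obj = new bar();
// delete obj; // with virtual ~foo, ~bar frees the array: no leak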

Can you think of any advantages of a NON-virtual destructor?

The reason why your class's destructor should usually be virtual:

class base {
 public: 
  base (){ /* constructor */ }
  ~base () { /* Non-virtual destructor */ }
};
class derived : public base {
 public:
  derived () { /* constructor */ }
  ~derived () { /* destructor */ }
 private:
  char aLongCstring[1024];
};
int main () {
  base* aPointer = new derived;
  delete aPointer;
  return 0; // even in example programs!
}

What's so bad? Well, aPointer is assigned a new derived. new allocates space for a whole derived, including the 1024-byte array, and calls derived's constructor. Then delete is called. All delete knows is that it is being passed a base* -- so it calls base's destructor. But base's destructor doesn't know about the 1024 bytes of data, so it doesn't delete them -- and we have a memory leak. That's a kilobyte down the bit bucket until your program ends -- or, if you go into an infinite loop, until you restart the computer. If you allocated an array of 1024 deriveds, it'd be a megabyte lost. Do this a few times and you'll have major problems.

The solution? Make ~base virtual. Then the compiler knows that whenever the destructor is being called on an object of type base, it should check to see if the object is really of another class with base as its base type -- which it is, in this case. So ~derived gets called, which knows about the 1024 bytes, and everything is sunshine and rainbows.
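
The fix, sketched as a one-keyword change to the code above:

class base {
 public:
  base() { /* constructor */ }
  virtual ~base() { /* virtual destructor: delete via base* now calls ~derived first */ }
};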


Which leads to the question, why would you ever NOT make a destructor virtual? Well, you might want a non-virtual destructor if two things are the case:

1. The class is very small.
2. The class is not intended to be used as the base class of another class, and therefore has no other virtual functions.

But why, you ask? Simple. If a class has any virtual member functions, the compiler adds an extra data member -- a pointer -- which keeps track of what the actual class of the instantiated object is, so that the correct version of a function can be called at runtime, even by code that doesn't know about the details of derived classes. This pointer takes up 4 bytes on most 32-bit systems (8 on 64-bit ones). Therefore, if the class is very small (maybe it only stores a one-byte character), you've just increased the size of the class significantly (from one byte to 5, or more once alignment padding is counted). If a class has no other virtual functions, a non-virtual destructor saves these bytes.
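
A small sketch of that size cost; the exact numbers depend on pointer width and alignment padding, so treat them as typical values rather than guarantees:

#include <iostream>

struct tiny { char c; };                                    // no vtable pointer
struct tiny_virtual { char c; virtual ~tiny_virtual() {} }; // vtable pointer added

int main() {
  std::cout << sizeof(tiny) << '\n';         // typically 1
  std::cout << sizeof(tiny_virtual) << '\n'; // typically 8 (32-bit) or 16 (64-bit) after padding
  return 0;
}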

Since when do four bytes matter? Well, suppose someone has an array of your class. A big array. Maybe they have, say, 128 million of them. If your class takes up one byte, that's 128 megabytes of memory -- which most computers can handle. But if you take up 5 bytes, that becomes 640 megabytes -- which some computers cannot. (I know, I know, your computer has several gigabytes of RAM. Scale up the number accordingly.)

Is this an obscure reason? Yes. Is it a valid reason? Also yes. Is it a reason that will ever occur in your career as a programmer? Maybe, maybe not. But you should keep it in mind.

But, you say, you want to keep your class small and use a non-virtual destructor, but you're still worried about the scenario outlined above? ...Well, put a comment in your header file telling people not to derive classes from your class without editing it to have a virtual destructor. There was no language mechanism to stop them when this was written, though C++11 later added the final class specifier for exactly this purpose. Sorry.

(C++, and most likely only C++:)

A destructor that is virtual

In C++, a virtual destructor is a destructor that happens to be a virtual function (which see, if you'd like a recap of why member functions sometimes like to be virtual). So it gets called with run-time polymorphism.

Why bother?

To apply run-time polymorphism in C++, you need to refer to the object itself via a pointer or a reference. If you ever copy an object into a variable of a base class, object slicing takes place and your new copy really is of that base class.
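
A tiny sketch of slicing; the class names A and B are made up here so they don't clash with the X and Y below:

      class A { public: int a = 0; };
      class B : public A { public: int extra = 0; };

      void g(B& b) {
        A copy = b;  // object slicing: only the A sub-object of b is copied
        A& ref = b;  // no slicing: ref still refers to the whole B
      }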

Now suppose you want to write code like this:

      class X { /* ... */ };
      class Y : public X { /* ... */ };

      X* make_a_new_X(int);

      void f(int t) {
        X* px = make_a_new_X(t);
        // do stuff with px
        delete px;
      }
  
This code works perfectly -- as long as you know make_a_new_X always returns a pointer to an actual X, never a Y. As soon as it can return a Y* under some circumstances, you may have a problem: delete px calls px->~X(), so ~Y() is never called -- even if you expected it to be.

For automatically-allocated objects ("on the stack"), the case never arises: you never explicitly delete them. So the rule is clear, if not simple:

  • If you plan to manage dynamically allocated objects via polymorphic pointers, and if these require non-trivial destructors, then the destructor of the base class needs to be declared virtual.

As soon as we declare virtual ~X(); in the definition of class X above, the code will work in all circumstances.
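
The change is a single declaration, sketched against the class X above:

      class X {
      public:
        virtual ~X();  // delete px now dispatches to ~Y() when px actually points to a Y
        /* ... */
      };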

Since code correctness is pretty important, the rule above has been simplified into something easier to understand and apply:

  • If you plan to inherit from a class, declare its destructor virtual.

This rule is certainly easy to understand and apply, which accounts for its popularity: "I declare my destructors virtual, so I are an Object Oriented C/C++ Programmer!" (see the ten commandments for C++ programmers for yet another example). Unfortunately it is misguided and dangerous. Blindly declaring methods "virtual" is never a good idea. Blindly declaring your destructors "virtual" may prevent some bugs -- but if it does, you must have some other methods that also need to be declared "virtual" to prevent bugs.

Why not always?

The second version of the rule is curious: if it's so important, why does the ISO C++ standard even give the option not to declare destructors virtual?

A destructor is just another member function (even if one that gets called automatically at various points). The above arguments would apply to any member function of the class:

  • If you intend to call the member function via a polymorphic pointer or a polymorphic reference, you should declare it virtual.
  • If you plan to inherit from a class, declare all its methods virtual.

And C++ does not make methods virtual by default. Doing so would break some promises of C++:

  1. C++ allows you to program in a certain paradigm, but never forces you to do so.
  2. C++ tries to give a consistent programming environment.
  3. You only pay for the C++ features that you actually use.

For (1), the case is clear: making all methods virtual would force you to program in an OO manner. C++ is not an object-oriented language: it just acts like one, and lets you write OO code if you so desire. Obviously, making all methods virtual would hurt (performance-wise, at least), but why not just destructors? (2) takes care of that point: the inconsistency would be jarring. And (3) is the finishing touch: virtual method calls, which in the linkage model used on modern platforms are implemented as indirect calls through a table of function pointers (the vtable), must be less efficient than direct calls.

Making all method calls virtual would force C++ into the mold of a (pure) object-oriented programming language, at some execution cost. Sure, some applications can take the hit, but not all. Think, e.g., of the cost of 10^8 virtual method calls in a loop.

Making just destructors virtual would be a curious mixup. Exactly the same programming errors as occur with destructors would still occur with other methods. The only difference would be that programs might give incorrect results, rather than crash. And the inconsistency would break point (2) above, when there is really nothing special about dispatching destructor calls.

STL

The STL in the standard C++ library gives examples of all these points. It is not object oriented. Accordingly, all STL destructors and almost all standard library methods are non-virtual. This supports (1): you can program in C++ as "a better C", just using e.g. STL containers as highly optimized data structures with precisely guaranteed semantics. It also supports (3): it is perfectly safe to use e.g. std::pair in performance critical code, knowing that no virtual method calls are introduced.

It is not even safe to inherit from an STL container. Containers have a huge interface, and an overriding class would need to reimplement all of it. For instance, consider the methods at() and operator[] of std::vector. While the first might be implemented in terms of the second, it need not be -- so even if both were virtual, you'd need to reimplement each one when inheriting.
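
The destructor half of the danger can be sketched directly (my_vec is a hypothetical class, and the delete is exactly the pattern from the f() example above):

      #include <vector>

      struct my_vec : public std::vector<int> { /* ... */ };

      void h() {
        std::vector<int>* p = new my_vec;
        delete p;  // undefined behavior: std::vector's destructor is not virtual
      }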

Templates

ISO C++ does not allow virtual function templates. It is simply too far outside the current (probably broken) linkage model employed on modern platforms. If we allowed

      class X {
      public:
        // illegal in ISO C++: a member function template cannot be virtual
        template<typename T> virtual void f(T t) { /* ... */ }
      };
      class Y : public X { /* ... */ };
    
then every instantiation of X::f<T> for any T would immediately require an instantiation of Y::f<T>. Since classes in different translation units may inherit from one another, the complexity would be unmanageable. The mind boggles at the thought of dynamically loaded libraries; while not part of any C++ standard, all modern platforms support them -- and allowing virtual function templates would require compilation as part of dynamic loading!

In the current state of affairs, making all methods virtual would make templated methods impossible -- again going against the status of C++ as a multi-paradigm language.

Tradition

Obviously, using a class in a generally object-oriented manner requires making its destructor virtual. Accordingly, the traditional way ("the C++ way") of marking a class "inheritable" is making its destructor virtual.

Some people (myself included) like to mark this explicitly:

      class X /* ... */ {
      public:
        // ...
        virtual ~X();               // Inherit from this class!
      };
    

Another example: As mentioned above, it is not safe to inherit from STL containers, and this is well known. Since STL destructors are not virtual, even this upholds the tradition.

What should I do?

As always: Think.

If you have inheritance, think whether or not you need virtual functions. This certainly includes the destructor -- but if any function needs to be declared virtual then most likely all functions that can be overridden will also need to be declared virtual.

If your class supports inheritance, good for you. If run-time polymorphism is required (i.e. you're using inheritance through base-class pointers or references), you'll probably want some virtual functions in it.

And if the lifetimes of instances of your class can be managed polymorphically -- i.e. objects might be deleted through a pointer to a base class -- you'll probably want a virtual destructor.


A note on PsyMar's writeup above. While it is probably true that base::~base should be virtual, there is no memory leak in the code he gives. This is important enough to repeat:

There is no memory leak in the example code!

True, ~base has no code freeing those extra 1024 bytes of derived::aLongCstring. BUT -- neither does ~derived! How on earth could it? The memory does not come from new[], so it cannot be delete[]d. And if derived is in automatic storage ("on the stack"), the array is reclaimed along with the rest of the object anyway.

The compiler has an excellent idea of how to free a derived struct. As part of its excellent idea, it takes care to know how much memory it takes. derived::~derived needs to do nothing about it, and neither does base::~base. Claiming that this is a memory leak is plain wrong.

Entirely equivalently, a class

      class derived2 : public derived {
        X x;
      };
will correctly destroy x, calling X::~X, as part of its destruction.

No code needs to be written to make this happen. No code CAN be written in C++ that will make this happen. The compiler has to do it. The compiler does it.
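
A short sketch demonstrating this; the names are invented, and the output order is what the language guarantees (members are destroyed before base sub-objects):

      #include <iostream>

      struct member { ~member() { std::cout << "~member\n"; } };
      struct base2  { ~base2()  { std::cout << "~base2\n"; } };

      struct whole : base2 {
        member m;
        // no user-written destructor: the compiler still destroys m,
        // then the base2 sub-object
      };

      int main() {
        whole w;
        return 0;
      }  // prints "~member", then "~base2"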
