Reentrancy is a term in computer programming meaning that a particular piece of code might, at a single point in time, be in the process of executing multiple times (thus, "re-entered"). This most commonly occurs in multithreaded code, though programs that make use of recursive algorithms also often create reentrant conditions.

What we normally think of as a function (from a mathematical standpoint) can be reentrant without issue: one can compute 2*sin(x) for a given input x as many times as one likes, in parallel, without any concerns. In programming, reentrancy can be a problem because of state, which leads to interactions where the behavior of one piece of code depends on what another has done (or hasn't done). Without shared state, two pieces of code cannot observe each other in any way, so it would be impossible for them to depend on one another even if they wanted to - they have no way to communicate, implicitly or otherwise!

For example, consider a hypothetical piece of software handling bank withdrawals:

  1. Read the current balance from a database
  2. Subtract the withdrawal amount from the balance
  3. Write the new balance back to the database

This seems eminently straightforward. However, consider what happens if many bank withdrawals occur at the same time. The first withdrawal might get through the first two steps, but, due to the multitasking nature of the operating system, a second withdrawal process starts running before the new balance is written back. The second process, when it reads the database, will still see the old (pre-withdrawal) balance. Then, no matter which of the two processes ends up writing to the database first, one of the withdrawals will be lost in the shuffle. Good for you, but it's probably not going to make the bank very happy.
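The lost update can be replayed deterministically without real threads, by carrying out the interleaving described above by hand. This is a minimal sketch; the dict and the read_balance/write_balance helpers are hypothetical stand-ins for the bank's database:

```python
# A plain dict stands in for the bank's database.
db = {"balance": 100}

def read_balance():
    return db["balance"]

def write_balance(amount):
    db["balance"] = amount

# Withdrawals A and B both perform step 1 before either performs step 3.
a_seen = read_balance()        # A, step 1: sees 100
b_seen = read_balance()        # B, step 1: also sees 100 - the race begins

write_balance(a_seen - 30)     # A, steps 2-3: balance is now 70
write_balance(b_seen - 20)     # B, steps 2-3: overwrites it with 80

print(db["balance"])           # 80 - withdrawal A has vanished
```

Both withdrawals "succeeded", yet the balance only dropped by 20 instead of 50: B's write, computed from a stale read, simply clobbered A's.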

Nearly any state that can be observed by more than one piece of code has this property. The most general workaround, then, is to not have any state; this technique is known as functional programming, and is embodied by languages such as Lisp, Haskell, and ML. In theory one could hew to purely functional programming in languages such as C or Java, but the lack of support for such techniques in those languages would make the effort rather hard on the programmer. Code that can be safely reentered is called "reentrant-safe" or "thread-safe" (a term that reflects the most common cause of reentrancy in today's programs).
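The contrast can be sketched in a few lines. Both function names here are hypothetical, chosen for illustration; the first echoes the 2*sin(x) example above:

```python
import math

# Reentrant-safe: the result depends only on the argument, and no shared
# state is read or written. Any number of callers can run this at once.
def double_sine(x):
    return 2 * math.sin(x)

# NOT reentrant-safe: a running total lives in a shared module-level
# variable, so concurrent callers can interleave the read-modify-write
# below and lose updates, exactly as in the bank example.
_total = 0

def add_to_total(amount):
    global _total
    _total = _total + amount   # read, modify, write - three separate steps
    return _total
```

Purely functional code consists only of functions like the first one, which is why reentrancy is a non-issue there.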

The general task, when dealing with code that is not reentrant-safe, is either to make it reentrant-safe (by having it not rely on any shared resources), or to prevent the reentrant condition from occurring, which is done using locks such as mutexes. Locking essentially forces just one piece of code to run at any one time, preventing the data corruption issues discussed here; the trade-off is that the code's ability to scale to multiple threads of execution is compromised. Even a small amount of serialization can seriously limit the amount of parallel computation that can be performed; this observation is known as Amdahl's Law.
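Applied to the bank example, a mutex makes the three-step withdrawal safe even under real threads. A minimal sketch, again using a dict as a stand-in database:

```python
import threading

db = {"balance": 100}
db_lock = threading.Lock()   # one mutex guarding the shared balance

def withdraw(amount):
    # The lock serializes the whole read-subtract-write sequence, so no
    # other thread can slip in between the read and the write.
    with db_lock:
        balance = db["balance"]            # step 1: read
        db["balance"] = balance - amount   # steps 2-3: subtract and write

# Ten concurrent withdrawals of 10 each, against a balance of 100.
threads = [threading.Thread(target=withdraw, args=(10,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(db["balance"])   # always 0: no withdrawal is lost
```

The cost is that the withdrawals run one at a time while holding the lock. Amdahl's Law quantifies the damage: if a fraction s of the work must run serially, the speedup on N processors is bounded by 1/(s + (1-s)/N), so even a serial fraction of just 10% caps the speedup at 10x no matter how many processors are added.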