A common disk caching strategy is write-behind: when an application writes to a file, the relevant disk block's in-memory buffer is updated, but the block is not actually written to disk right away; the write is scheduled to happen "a little bit later".
This has two advantages. First, control returns to the application right away, without waiting for physical disk I/O. Second, if the block is modified again in the very near future (not at all unlikely, if the application is in the process of writing or updating the whole file), the eventual physical write of the block will likely reflect the composite of most or all of the updates; there won't be a series of redundant, overlapping writes, where all but the last is essentially wasted.
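Both advantages can be seen in a toy sketch. This is illustrative Python, not kernel code; the names (`WriteBehindCache`, `flush_delay`, `physical_writes`) are invented for the example, and a plain dict stands in for the disk. A write updates an in-memory table and returns immediately; a timer performs the physical write later, so repeated updates to the same block coalesce into one write.

```python
import threading
import time

class WriteBehindCache:
    """Toy write-behind block cache: writes update an in-memory table
    immediately; the physical write happens on a delayed flush, so
    repeated updates to the same block coalesce into a single write."""

    def __init__(self, backing, flush_delay=0.05):
        self.backing = backing        # block_no -> bytes; stands in for the disk
        self.dirty = {}               # block_no -> latest in-memory contents
        self.flush_delay = flush_delay
        self.lock = threading.Lock()
        self.physical_writes = 0      # count of actual "disk" writes

    def write_block(self, block_no, data):
        # Advantage (1): control returns to the caller right away;
        # no physical I/O happens here.
        with self.lock:
            first_dirty = block_no not in self.dirty
            self.dirty[block_no] = data
        if first_dirty:
            # Schedule the physical write "a little bit later".
            threading.Timer(self.flush_delay, self._flush,
                            args=(block_no,)).start()

    def _flush(self, block_no):
        # Advantage (2): whatever is in the buffer *now* is written,
        # so intervening updates were coalesced into this one write.
        with self.lock:
            data = self.dirty.pop(block_no, None)
        if data is not None:
            self.backing[block_no] = data
            self.physical_writes += 1

disk = {}
cache = WriteBehindCache(disk)
for i in range(10):                   # ten overlapping updates to block 7
    cache.write_block(7, b"v%d" % i)
time.sleep(0.2)                       # let the delayed flush run
print(disk[7], cache.physical_writes) # final contents, written exactly once
```

Ten logical writes produce one physical write, carrying only the last version of the block.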
Of course, there's also a correspondingly large disadvantage: if the machine crashes unexpectedly, with unwritten disk blocks still in the cache, data can be lost, and the disk can even be left in an inconsistent state, requiring repair at the next reboot.
In any case, if you have an I/O cache at all, write-behind is easy to implement, and gives you good bang for the buck.