Spinlocks are so-called "busy-wait" locking mechanisms used in software (most notably in operating systems' kernels and other programs that address the CPU and/or other hardware directly) to ensure mutually exclusive access to a resource.

A process that needs exclusive access to, say, a certain hardware register locks the spinlock associated with that particular register, does its work, and unlocks it again afterwards. Tasks waiting for a spinlock are not suspended; they sit in a busy loop, repeatedly testing the lock until it becomes available again.

Spinlocks are only really useful on SMP (multiprocessor) machines; their chief advantage over other locking mechanisms is that the interrupt-disabling variants block IRQs only on the processor the task is running on. Single-processor systems traditionally used a so-called cli/sti pair instead. (cli/sti refers to the x86 assembler instructions CLI, Clear Interrupt Enable Flag, and STI, Set Interrupt Enable Flag.)

Spinlocks come in two flavors: full locks and read-write locks. The latter allow multiple simultaneous readers of a register, while only one process at a time may write to it.

Sources: the Linux kernel source, and http://kernelnewbies.org/

When doing kernel programming, it's often not possible to use semaphores, the "standard" system used for locking almost everywhere else. The problem with semaphores is that they put the process to sleep until the semaphore is available. This isn't always possible - it's impossible to sleep in an interrupt handler, for example.

Spinlocks are a different form of locking mechanism, usually used on multiprocessor systems. Spinlocks just sit in a busy loop until the lock becomes available. The rationale here is that it's quicker to waste a few processor cycles spinning than to incur the overhead of putting the process to sleep and waking it up later when the lock becomes available.

So this doesn't seem like something that would be useful on a single processor system: after all, if there's only one CPU, it would just sit in an infinite loop waiting forever for a nonexistent other processor to release the lock. Except that in the Linux kernel, that isn't all spinlocks do. The first thing to realise is that on single processor machines, spinlocks do not "spin" - ie. they do not wait in a busy loop, so you won't end up in an infinite loop. Also, Linux has a function named "spin_lock_bh" (and a corresponding "spin_unlock_bh"), which temporarily disables softirqs (bottom halves) for the duration of the lock. The result is that it's possible to do locking between a softirq handler and other code, because the softirq cannot execute while the lock is held.

The other thing to note is that Linux 2.6 has kernel preemption, where kernel code can be preempted (previously, a process could only be preempted while executing user space code). On a preemptible kernel, spinlocks temporarily disable preemption for the critical section. Basically, the thing to understand is that on a single processor system, code holding a spinlock is never going to be interrupted*, and on a multiprocessor system, other processors will be unable to acquire the lock until you're finished. So spinlocks are a sort of universal locking mechanism that works for all configurations. The moral of this story - don't dismiss spinlocks if you're writing code for a single processor system, because they do more than it says on the tin.

* - Except by hardware interrupts - you can disable those as well using spin_lock_irq/spin_lock_irqsave.
