Peterson's Algorithm is an algorithm that provides mutual exclusion. This means it guarantees that two or more processes which have a critical section (for example, both want to print something) do not enter that section at the same time. This is surprisingly hard in time-sharing systems, because the following can happen: p1 wants to print something, but another process is using the printer, so p1 simply waits until the printer is free. p2 does the same. Now the other process finishes and p1 gets its turn. It checks whether the printer is free. As it is free, it stops waiting and wants to start its printing job. At exactly that moment p1 is interrupted, because it has used up its processor time. Now p2 is scheduled, and since p1 never got to start printing, the printer still looks available, so p2 starts printing. After some time (certainly while p2's printing job is still running) it is p1's turn again. And since p1 still thinks the printer is free (remember: it was interrupted right after checking the availability of the resource), it starts printing too. BOOM! We have a race condition.
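
To make the problem concrete, here is a minimal sketch of that naive check-then-act approach, with two POSIX threads standing in for the two processes (the printer_busy flag and the thread setup are my own illustration, not part of the original story). A preemption between the check and the set lets both threads "print" at once.

#include <pthread.h>
#include <stdio.h>

/* Naive (broken) mutual exclusion: check a shared flag, then set it.
 * If a thread is preempted between the check and the set, both threads
 * can end up in their critical section at the same time. */
volatile int printer_busy = 0;   /* 1 while someone is "printing" */

void *process(void *arg)
{
    const char *name = arg;

    while (printer_busy)         /* check: wait until the printer looks free */
        ;                        /* busy wait */
    /* a preemption right here is the race: the flag is still 0 */
    printer_busy = 1;            /* act: claim the printer */
    printf("%s is printing\n", name);   /* critical section */
    printer_busy = 0;            /* release the printer */
    return NULL;
}

int main(void)
{
    pthread_t p1, p2;
    pthread_create(&p1, NULL, process, "p1");
    pthread_create(&p2, NULL, process, "p2");
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    return 0;
}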

Nowadays we have more advanced methods to avoid such conditions: monitors, semaphores, and mutex variables. But the first working algorithm was Dekker's Algorithm in 1965. Besides its complexity, its major disadvantage was that it only worked for two processes. The algorithm developed by Gary L. Peterson in 1981 overcame these restrictions and is much simpler.

Though, as already mentioned, the algorithm also works for more than two processes, I will only show the two-process version. The reason is that the n-process version is based on the same principle but is a bit more complicated (for example, deciding which process to give the turn to; you will hopefully understand what I mean once you have finished reading).

Previous non-working algorithms just checked a variable to see whether the resource is free (i.e. no other process is in its critical section) and then started working. The problem was that a process could be interrupted right after checking the variable and before setting it to show the other processes that it had entered its critical section. Peterson's way is simple but ingenious: you tell the other process that you want to enter the critical section, but you also say that if the other process wants to enter it, too, it may go first. This is not only polite, it also doesn't cost you any more waiting than just entering the section. Why? Because the other process does the same!

Think of it as two people arriving at a door at nearly the same time. Both want to go through it, but it is not wide enough for both. By standing in front of the door the first person shows that he wants to enter. But as he sees another person arrive, he makes a gesture to tell the other person that he may enter first. The other person is just as polite and allows the first person to enter. The first person goes through the door. Now that he is through, he no longer wants to go through the door (this may sound silly, but it is important). The other person sees this and goes through the door, too.

Now what does this look like for real processes?
We are in process i (i ∈ {0, 1}); the other process is 1-i.

interested[i] = true;  // we want to enter the critical section
turn = 1-i;            // give the turn away to the other process
while( interested[1-i] && (turn == 1-i) ) {};
// as long as the other process is interested and it is its turn, wait
critical_section();    // do the critical work
interested[i] = false; // we no longer want to enter the critical section
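
Here is how the whole thing could look as a runnable program. This is only a sketch of my own, not the canonical implementation: it uses two POSIX threads in place of processes, and C11 atomics with the default sequentially consistent ordering, because modern compilers and CPUs may otherwise reorder the flag and turn accesses and silently break the algorithm.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Peterson's algorithm for two threads (0 and 1).
 * seq_cst atomics keep the compiler and CPU from reordering
 * the flag/turn accesses, which the algorithm relies on. */
atomic_bool interested[2];
atomic_int  turn;
int counter = 0;            /* shared data protected by the lock */

static void lock(int i)
{
    atomic_store(&interested[i], true);   /* we want to enter */
    atomic_store(&turn, 1 - i);           /* give the turn away */
    while (atomic_load(&interested[1 - i]) && atomic_load(&turn) == 1 - i)
        ;                                 /* busy wait */
}

static void unlock(int i)
{
    atomic_store(&interested[i], false);  /* we no longer want to enter */
}

static void *worker(void *arg)
{
    int i = *(int *)arg;
    for (int n = 0; n < 100000; n++) {
        lock(i);
        counter++;          /* critical section */
        unlock(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int id[2] = {0, 1};
    pthread_create(&t[0], NULL, worker, &id[0]);
    pthread_create(&t[1], NULL, worker, &id[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}

Compiled with something like cc peterson.c -pthread, the counter should come out at exactly 200000; remove the lock()/unlock() calls and it usually won't.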

That was simple. So why isn't the algorithm used anymore? The main reason is the while loop. Even though the waiting process cannot do anything useful, it still gets processor time and wastes it running through an empty loop. The absurdity is that the waiting itself makes the waiting time grow: the spinning eats processor time that the other process may have needed to finish its critical section. This is called busy waiting.
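
If you do have to spin, you can at least hand the wasted time slice back to the scheduler. The following variation of lock() from the sketch above (sched_yield() is POSIX; the tweak is my own, not part of Peterson's algorithm) is still busy waiting, but it no longer starves the other thread of the processor time it needs to leave its critical section:

#include <sched.h>   /* sched_yield() */

/* Same entry protocol as lock() above (reuses the interested[] and
 * turn atomics), but the waiting thread gives up its time slice on
 * every iteration instead of spinning at full speed. */
static void lock_yielding(int i)
{
    atomic_store(&interested[i], true);
    atomic_store(&turn, 1 - i);
    while (atomic_load(&interested[1 - i]) && atomic_load(&turn) == 1 - i)
        sched_yield();
}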
