Quicksort, invented by C. A. R. Hoare, is the most common "industrial strength" sorting algorithm. As the name implies, it's supposed to be quick. It's not particularly easy to program or explain, and it's far less intuitive than selection sort, insertion sort, merge sort or bubble sort, all of which tend to be used by real people in the course of their everyday lives.

Quicksort is often billed as being "efficient"; it is sometimes said to run in time O(n log n), but for most naïve implementations this is false in the worst case.

Of course, you may want to redefine "efficient" so that Quicksort comes out the most efficient. But Quicksort's worst-case behaviour is quadratic (O(n^2)), and for naïvely-coded versions the worst cases are precisely the nearly-ordered inputs: an already-sorted array, an array sorted in reverse order, and arrays close to either. Since in some applications the list tends to be nearly sorted, this can be a very Bad Thing. Smart (or random) choice of the pivot element can make the expected running time O(n log n), but such pivot strategies are not nearly common enough (and can suffer high overheads). Quicksort's "introspective" variants improve on this, typically by detecting that partitioning is going badly and switching to some other sorting algorithm; this is how sorting is typically implemented in C++'s STL.

Quicksort is considered "quick" because its average case behaviour (assuming all n! permutations of the input are equally likely, or some similar assumption) is not only O(n log n), but in practice also roughly twice as fast as heap sort. And unlike merge sort, it's in-place (you don't need a second scratch array). So for some applications, it's really very good; unfortunately, not for all.

In C, Quicksort is accessed using the library routine qsort(). Because of its worst-case behaviour, unmodified Quicksort is not suitable for sorting in C++'s STL (there, O(n log n) worst case is required).
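
For the curious, a call to qsort() looks something like the following minimal sketch. (Note that the C standard doesn't actually require qsort() to be a quicksort under the hood, though it very often is.)

#include <stdio.h>
#include <stdlib.h>

/* Comparison function for qsort(): returns negative, zero, or
   positive, in the manner of strcmp(). */
static int compare_ints(const void *a, const void *b)
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);   /* avoids the overflow that x - y risks */
}

int main(void)
{
    int data[] = { 42, 7, 19, 3, 88 };
    size_t n = sizeof data / sizeof data[0];

    qsort(data, n, sizeof data[0], compare_ints);

    for (size_t i = 0; i < n; i++)
        printf("%d ", data[i]);
    printf("\n");
    return 0;
}

The comparison function receives pointers to the elements being compared, so it has to cast them back to the real element type itself.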

Another limitation is that Quicksort is not a stable sort: there's no (easy) way to make it preserve the ordering of elements which the comparison function deems equal without attaching extra information to the data being sorted (which would, again, make it not be in-place!).

As always, whoever uses an algorithm must first determine its appropriateness for the problem at hand.

Quicksort was invented by C.A.R. Hoare. The important part is the efficient in-place partition algorithm.

In addition to the disadvantages ariels discusses, quicksort also has the problem of not being stable---that is, it does not preserve the relative order of elements with the same sort key. Additionally, unlike merge sort, it requires random destructive access to the sequence being sorted, making it ill-suited to sorting linked lists or to use in pure functional languages.

Quicksort is one of the more interesting sorting algorithms around because it has two peculiar properties.

Firstly, it's one of the faster sorts available (hence the name), with an average running time of O(n log n). This is an interesting property because the worst-case running time of quicksort is O(n^2).

This is related to the other interesting property of quicksort - the input which yields the worst running time is rather embarrassing... it is the ordered list. That is, a naïvely-coded quicksort is guaranteed to run longest when the input is already sorted.

Quicksort works by the good ol' divide and conquer approach: it chooses an element (called the pivot), and divides the remaining elements into two groups - those smaller than the pivot and those greater than it. This is done recursively, and it is notably fast because the inner loops are very tight and every comparison is made against the same element, so the pivot can be kept in a register. The sort is also done in-place.

The ordered list is the worst case because the two resulting partitions of an input of size n are of size 1 and n-1. Therefore, many implementations of quicksort are randomized; that is, the pivot chosen is a random element of the list. This yields an algorithm in which no particular input reliably triggers the worst-case behavior: the quadratic case can still occur, but only with vanishingly small probability, regardless of the input's initial order.
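
To make that concrete, a randomised quicksort in C might look something like this rough sketch (the function names are my own, and rand() merely stands in for a better source of randomness):

#include <stdlib.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Sort A[lo..hi] inclusive, choosing the pivot at random. */
static void quicksort_random(int A[], int lo, int hi)
{
    if (lo >= hi)
        return;

    /* Pick a random pivot and park it at the front. */
    int p = lo + rand() % (hi - lo + 1);
    swap(&A[lo], &A[p]);
    int pivot = A[lo];

    /* Two-pointer partition: values <= pivot end up on the left,
       values >= pivot on the right. */
    int left = lo + 1, right = hi;
    while (left <= right) {
        while (left <= right && A[left] <= pivot) left++;
        while (right >= left && A[right] >= pivot) right--;
        if (left < right) swap(&A[left], &A[right]);
    }
    swap(&A[lo], &A[right]);    /* drop the pivot into its final slot */

    quicksort_random(A, lo, right - 1);
    quicksort_random(A, right + 1, hi);
}

The final swap puts the pivot into its resting place, so each recursive call covers a strictly smaller range and the recursion always bottoms out.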

In this writeup, I assume that the name of the array to be sorted is A, that it is N entries long, that the entries are indexed 0 to (N - 1) inclusive, that element number i of the array is referenced by A{i}, and that the array is to be sorted so that for all integers i and j between 0 and (N - 1) inclusive, (i ≤ j) implies (A{i} ≤ A{j}) (i.e. A{0} contains the lowest element and the values of successive entries in A are nondecreasing).

The basic idea of the quicksort is to divide and conquer: the problem of sorting a large array is reduced to sorting two smaller arrays. This process eventually reaches a base case, because an array of zero or one elements is automatically sorted. To divide the problem, a 'pivot value' is chosen. This pivot value must be between the lowest value in the array and the highest value in the array (inclusive); it is desirable for the pivot value to actually be the value of some element of the array (which automatically satisfies the first criterion), and the best possible pivot value is the median of the array. In general, the effort required to find the median is more than it's worth, so usually the goal is to get an approximation or a good guess as to what it is.

Once a pivot value is chosen, the array is shuffled so no value less than the pivot is at a higher index than any value greater than the pivot, and vice-versa. This is accomplished by initialising a 'left pointer' to 0, initialising a 'right pointer' to (N - 1), then moving the pointers towards each other. If the value at the left pointer is greater than the pivot, or if the left pointer crosses the right pointer, the left pointer stops; if the value at the right pointer is less than the pivot, or the right pointer crosses the left pointer, the right pointer stops. If the pointers stop before they cross each other, then the values at the pointers are swapped and they are moved towards each other again in the same fashion - eventually, they will cross (they start a finite distance apart, and each pointer moves at least one step closer each time).

After you've done all this, you know that the range from index 0 up to [but not including] the left pointer contains only values less than or equal to the pivot (elements which were already less than or equal to the pivot were skipped over by the left pointer; elements which were greater were swapped with an element less than the pivot from the right section), and the range from index (N - 1) down to [but not including] the right pointer contains only values greater than or equal to the pivot (by similar logic). Thus, when the pointers do cross, the array is in two smaller parts such that if each is sorted then the entire array is sorted - this is where the divide & conquer comes into play. One final step is needed to guarantee progress: after the pointers cross, the pivot (which began, and stays, at the leftmost index during the partitioning) is swapped into its final position at the right pointer, so that both recursive calls cover strictly smaller ranges than the original. (Without this step, a pivot which happens to be the largest value in the range would leave one "half" exactly as big as the whole, and the recursion would never terminate.) Code for this algorithm, using the simplest choice for a pivot value (the first element of the array), looks like this:


define quicksort (A, leftmost, rightmost) as:
  if ((rightmost - leftmost) > 0) then
    pivot := A{leftmost}
    left := leftmost
    right := rightmost
    while (left ≤ right)
      while ((left ≤ right) ∧ (A{left} ≤ pivot))
        left := (left + 1)
      end while
      while ((right ≥ left) ∧ (A{right} ≥ pivot))
        right := (right - 1)
      end while
      if (left < right) then
        swap A{left} and A{right}
      end if
    end while
    swap A{leftmost} and A{right}
    quicksort (A, leftmost, (right - 1))
    quicksort (A, left, rightmost)
  end if
end quicksort

A quicksort on the entire array is invoked by quicksort(A, 0, (N - 1)). [Typically, a different sort is chosen when the size of the array is small enough, but this is just example code to show the ideas behind this particular sort, so I haven't included that optimisation.] One problem with this algorithm is that it gives very bad run-times when used with a sorted or nearly-sorted array, since the first element then makes a terrible pivot and the partitions come out as lopsided as possible. A better pivot value, in those cases, would be the middle value in the array. Another approach is to take the first, middle, and last elements in the array, sort them in place (bubble sort is entirely suitable for this part because it's only 3 elements), then take the middle one as the pivot value; this "median of three" also guarantees that each pointer moves at least one step, so both sub-ranges are strictly smaller than the original and the final pivot swap is no longer needed. Another optimisation, for arrays which contain many duplicated values, is to allow the pointers to cross by more than one index; this makes the divide of the divide and conquer even more effective and efficient. A final optimisation is to change the tail-recursion to iteration, and to recurse on the smaller of the two sections and iterate on the bigger; this keeps the recursion depth logarithmic in N.


define quicksort (A, leftmost, rightmost)
  while ((rightmost - leftmost) > 0)
    mid := floor((leftmost + rightmost) / 2)
    if (A{leftmost} > A{mid}) then
      swap A{leftmost} and A{mid}
    end if
    if (A{mid} > A{rightmost}) then
      swap A{mid} and A{rightmost}
    end if
    if (A{leftmost} > A{mid}) then
      swap A{leftmost} and A{mid}
    end if
    pivot := A{mid}
    left := leftmost
    right := rightmost
    while (left ≤ right)
      while ((left ≤ rightmost) ∧ (A{left} ≤ pivot))
        left := (left + 1)
      end while
      while ((right ≥ leftmost) ∧ (A{right} ≥ pivot))
        right := (right - 1)
      end while
      if (left < right) then
        swap A{left} and A{right}
      end if
    end while
    if ((right - leftmost) ≤ (rightmost - left)) then
      quicksort (A, leftmost, right)
      leftmost := left
    else
      quicksort (A, left, rightmost)
      rightmost := right
    end if
  end while
end quicksort

Note that in the above code, the expression ((N ≤ (N - 1)) ∧ (A{N} ≤ pivot)) should evaluate to false, with no error; likewise, ((-1 ≥ 0) ∧ (A{-1} ≥ pivot)) should evaluate to false with no error, even though A{-1} and A{N} are undefined. In other words, the ∧ operator must short-circuit: when its left operand is false, its right operand must never be evaluated.

As for the quicksort being unstable, there is a simple modification which will fix that. Create a second array parallel to the first, initialised with the value of each entry equal to its index. When swapping elements in the sort array, swap the corresponding elements in the index array. If comparing two elements in the sort array gives equality, then compare the corresponding values in the index array instead. Always sort so that when the sort array values are equal, the index array values are in ascending order. This will make the quicksort stable, and the index array can be discarded as soon as the sort is finished. Of course, with so strict a total ordering (no two elements ever compare equal), the left and right pointers will never cross by more than 1 place unless the pivot winds up unmoved, in which case they will cross by exactly 2 entries.


define stablequicksort(A, leftmost, rightmost)
  i := leftmost
  while (i ≤ rightmost)
    B{i} := i
    i := (i + 1)
  end while
  quicksort2(A, leftmost, rightmost, B)
end stablequicksort

define quicksort2 (A, leftmost, rightmost, B)
  while ((rightmost - leftmost) > 0)
    mid := floor((leftmost + rightmost) / 2)
    if ((A{leftmost} > A{mid}) ∨ ((A{leftmost} = A{mid}) ∧ (B{leftmost} > B{mid}))) then
      swap A{leftmost} and A{mid}
      swap B{leftmost} and B{mid}
    end if
    if ((A{mid} > A{rightmost}) ∨ ((A{mid} = A{rightmost}) ∧ (B{mid} > B{rightmost}))) then
      swap A{mid} and A{rightmost}
      swap B{mid} and B{rightmost}
    end if
    if ((A{leftmost} > A{mid}) ∨ ((A{leftmost} = A{mid}) ∧ (B{leftmost} > B{mid}))) then
      swap A{leftmost} and A{mid}
      swap B{leftmost} and B{mid}
    end if
    pivotA := A{mid}
    pivotB := B{mid}
    left := leftmost
    right := rightmost
    while (left ≤ right)
      while ((left ≤ rightmost) ∧ ((A{left} < pivotA) ∨ ((A{left} = pivotA) ∧ (B{left} ≤ pivotB))))
        left := (left + 1)
      end while
      while ((right ≥ leftmost) ∧ ((A{right} > pivotA) ∨ ((A{right} = pivotA) ∧ (B{right} ≥ pivotB))))
        right := (right - 1)
      end while
      if (left < right) then
        swap A{left} and A{right}
        swap B{left} and B{right}
      end if
    end while
    if ((right - leftmost) ≤ (rightmost - left)) then
      quicksort2 (A, leftmost, right, B)
      leftmost := left
    else
      quicksort2 (A, left, rightmost, B)
      rightmost := right
    end if
  end while
end quicksort2
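
In C, the same trick is most easily applied by tagging each value with its original index and sorting the (value, index) pairs; as noted above, this costs O(N) extra memory. A rough sketch, with names of my own choosing:

#include <stdio.h>
#include <stdlib.h>

/* A value tagged with its original position, so that ties can be
   broken deterministically. */
struct tagged {
    int value;
    size_t index;
};

static int compare_tagged(const void *a, const void *b)
{
    const struct tagged *x = a;
    const struct tagged *y = b;
    if (x->value != y->value)
        return (x->value > y->value) - (x->value < y->value);
    /* Equal values: fall back to the original position, which is
       exactly what makes the overall sort stable. */
    return (x->index > y->index) - (x->index < y->index);
}

int main(void)
{
    int A[] = { 3, 1, 3, 2, 1 };
    size_t n = sizeof A / sizeof A[0];
    struct tagged *t = malloc(n * sizeof *t);

    if (t == NULL)
        return 1;
    for (size_t i = 0; i < n; i++) {
        t[i].value = A[i];
        t[i].index = i;
    }
    qsort(t, n, sizeof *t, compare_tagged);
    for (size_t i = 0; i < n; i++)
        A[i] = t[i].value;      /* copy back; the tags are now discarded */

    for (size_t i = 0; i < n; i++)
        printf("%d ", A[i]);
    printf("\n");
    free(t);
    return 0;
}

Because the tiebreaker makes every pair of elements compare unequal, the sorted order is unique: equal values come out in their original relative order, which for records sorted on a single field is exactly what stability means.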

A Visual Guide To Quicksort

The other writeups in this node do a strong job of describing the algorithm, but for many readers, whatever their education in computer science, a worked example may help to illustrate the quicksort algorithm.

Problem: We have a list of sixteen names. We want to put these names in alphabetical order. Here is the list:

Adams
Edgar
Garrett
Knowles
Jones
Lawrence
Maxine
Hancock
Norbert
Fish
Oliver
Irene
Detroit
Cass
Philip
Brown

Solution: We will use the quicksort algorithm to sort this list.

The quicksort algorithm uses a divide and conquer approach to sorting a list. The algorithm separates a given list into two separate lists, then sorts each of these lists. This is done through the concept of a pivot point.

A pivot point is a single element from the list that gets placed, in one pass, into exactly the position it should have in the final, ordered list. It is selected from the list by some method (often, it is chosen randomly).

For the first pass, let's choose Hancock as the pivot point.

Adams
Edgar
Garrett
Knowles
Jones
Lawrence
Maxine
Hancock  <-- pivot
Norbert
Fish
Oliver
Irene
Detroit
Cass
Philip
Brown

The next step is separating the remainder of the list into two distinct piles: items that sort before the chosen item, and items that sort after it. In essence, you're pivoting the list around the pivot point.

Adams
Edgar
Garrett
Fish
Detroit
Cass
Brown
Hancock
Knowles
Irene
Jones
Lawrence
Maxine
Norbert
Oliver
Philip

In the end, you have a pile of n items that should be ranked before the pivot item, followed by the item itself in the n + 1 position, and then a pile of m items that should be ranked after the pivot item. Obviously, the total number of items that you're sorting is n + m + 1. The stacks can be seen above in their rough order.

Recursion comes in at this point. On each unsorted pile of items, you merely call quicksort again. From each of the piles a pivot point is selected (marked below)...

Adams
Edgar
Garrett
Fish
Detroit  <-- pivot
Cass
Brown
Hancock
Knowles
Irene
Jones
Lawrence
Maxine  <-- pivot
Norbert
Oliver
Philip

... and then each of these stacks with a new pivot point is sorted around the pivot point as before, with the lesser ones going before the point and the greater ones going after.

Adams
Cass
Brown
Detroit
Edgar
Fish
Garrett
Hancock
Knowles
Irene
Jones
Lawrence
Maxine
Norbert
Oliver
Philip

This concept is then repeated with each of the new small stacks:

Adams
Brown
Cass

Detroit

Edgar
Fish
Garrett

Hancock

Irene
Jones
Knowles
Lawrence

Maxine

Norbert
Oliver
Philip

When a pile is of size 1, as with several of the piles shown above, the item is already at its correct place in the list, and thus there is no need to call quicksort on it again.

Typically, on a very small array (of size 20 or less), another sort (such as insertion sort) is chosen, because simpler sorts are faster on very small sets; on large sets, though, quicksort usually far surpasses them.
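
In code, the cutoff is nothing more than a size check at the top of the routine. Here is a rough sketch in C, using the threshold of 20 mentioned above (the best value is machine-dependent and usually found by experiment) and the simple first-element pivot for brevity:

#define CUTOFF 20

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Straight insertion sort: hard to beat on tiny ranges. */
static void insertion_sort(int A[], int lo, int hi)
{
    for (int i = lo + 1; i <= hi; i++) {
        int v = A[i];
        int j = i - 1;
        while (j >= lo && A[j] > v) {
            A[j + 1] = A[j];
            j--;
        }
        A[j + 1] = v;
    }
}

static void quicksort_cutoff(int A[], int lo, int hi)
{
    if (hi - lo + 1 <= CUTOFF) {        /* small range: hand it off */
        insertion_sort(A, lo, hi);
        return;
    }

    int pivot = A[lo];
    int left = lo + 1, right = hi;
    while (left <= right) {
        while (left <= right && A[left] <= pivot) left++;
        while (right >= left && A[right] >= pivot) right--;
        if (left < right) swap(&A[left], &A[right]);
    }
    swap(&A[lo], &A[right]);            /* pivot into its final slot */

    quicksort_cutoff(A, lo, right - 1);
    quicksort_cutoff(A, right + 1, hi);
}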

Hopefully, this makes the more computer science-heavy writeups in this node clearer in terms of how the algorithm works.

"... it is sometimes said to run in time O(n log n), but this is false"

Actually, this is true.

As joeytsai points out, quicksort "chooses an element (called the pivot), and divides the remaining elements in two groups - elements smaller and greater than the pivot." Quicksort's optimal running time of O(n log n) doesn't happen unless the choice of pivot roughly cuts the input in half each time. If we cut the input nearly exactly in half at every iteration, the running time of quicksort can be found by solving the recurrence:

T(n) = 2T(n/2) + O(n).

In other words, the cost to solve a problem of size n is equal to the cost to solve 2 subproblems of size n/2, plus some constant factor times n. The easiest way to solve this recurrence is with the Master Theorem: here there are a = 2 subproblems of size n/b = n/2, so n^(log_b a) = n^(log_2 2) = n, which matches the O(n) work term exactly; that is case 2 of the theorem, and the solution is therefore O(n lg n).
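
If you'd rather not invoke the Master Theorem, unrolling the recurrence by hand gives the same answer. Assume n is a power of 2 and write the O(n) term as cn:

T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = 8T(n/8) + 3cn
     ...
     = 2^k T(n/2^k) + kcn     [after k levels]
     = nT(1) + cn log2(n)     [when k = log2(n)]
     = O(n log n)

Each level of the recursion does cn work in total, and there are log2(n) levels before the subproblems shrink to size 1.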

The problem arises in that the "naive" method of splitting the input doesn't result in evenly-sized subproblems every time. However, with a better method of splitting the input, we can be guaranteed to achieve the optimal worst-case runtime. Note that we can split the array exactly in half if we always choose its median as the pivot. The median is also known as the n/2th order statistic, which can be found in O(n) time. Doing this at each step of quicksort does not increase the asymptotic worst-case running time; the recurrence above simply becomes

T(n) = 2T(n/2) + O(n) + O(n).

... but O(n) + O(n) = O(n), since O(n) basically means "n times any positive constant". Adding the median-selection step makes this constant higher, but the asymptotic running time of quicksort is still O(n log n).
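
Here is what the whole scheme might look like in C; this is only a sketch. The helper select_kth is hypothetical: it is assumed to return the k-th smallest element (counting from 0) of A[lo..hi] in O(n) time, for instance via the median-of-medians SELECT algorithm, and its implementation is omitted (it is allowed to reorder the array while it works):

/* Hypothetical helper: k-th smallest element (0-based) of A[lo..hi],
   assumed to run in O(n) time; implementation omitted. */
int select_kth(int A[], int lo, int hi, int k);

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Quicksort with a guaranteed-median pivot: O(n log n) even in the
   worst case, because every split is as even as possible. */
void quicksort_median(int A[], int lo, int hi)
{
    if (lo >= hi)
        return;

    int pivot = select_kth(A, lo, hi, (hi - lo) / 2);

    /* Partition around the median value; both pointers advance after
       each swap, so duplicate keys cause no trouble. */
    int left = lo, right = hi;
    while (left <= right) {
        while (A[left] < pivot)  left++;
        while (A[right] > pivot) right--;
        if (left <= right) {
            swap(&A[left], &A[right]);
            left++;
            right--;
        }
    }
    quicksort_median(A, lo, right);
    quicksort_median(A, left, hi);
}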

On the other hand, the SELECT algorithm for finding the median is relatively expensive for an O(n) algorithm; i.e. the constant is high. For the best wallclock running time, you'd want to empirically determine at what input size the O(n lg n) quicksort given above is actually faster than the naive or randomized-pivot quicksort (each O(n^2) in the worst case), and only call the O(n lg n) quicksort when the input is sufficiently large.


yerricde pointed out that heapsort's O(n lg n) performance may be better in this case. I tend to agree; there is a practical difference between the running times of, say, heapsort and quicksort, even if the "theoretical" running times are the same. In fact, NIST's directory of algorithms and data structures (http://www.nist.gov/dads/HTML/introspectiveSort.html) lists a type of sort called "introspection sort" and defines it as: "A variant of quicksort which switches to heapsort for pathological inputs, that is, when execution time is becoming quadratic."
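
For concreteness, here is a rough sketch of that idea in C. The depth limit of roughly 2*log2(n) and the middle-element pivot are conventional illustrative choices, not the exact ones any particular STL implementation uses, and all the names are my own:

#include <stddef.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Restore the max-heap property below node i of A[0..n-1]. */
static void sift_down(int A[], size_t n, size_t i)
{
    for (;;) {
        size_t largest = i;
        size_t l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && A[l] > A[largest]) largest = l;
        if (r < n && A[r] > A[largest]) largest = r;
        if (largest == i) return;
        swap(&A[i], &A[largest]);
        i = largest;
    }
}

/* Plain O(n log n) heapsort, used as the fallback. */
static void heapsort_ints(int A[], size_t n)
{
    for (size_t i = n / 2; i-- > 0; )   /* build the heap */
        sift_down(A, n, i);
    for (size_t i = n; i-- > 1; ) {     /* repeatedly extract the max */
        swap(&A[0], &A[i]);
        sift_down(A, i, 0);
    }
}

static void introsort_rec(int A[], int lo, int hi, int depth)
{
    if (lo >= hi)
        return;
    if (depth == 0) {                   /* partitioning has gone badly */
        heapsort_ints(A + lo, (size_t)(hi - lo + 1));
        return;
    }

    int pivot = A[lo + (hi - lo) / 2];
    int left = lo, right = hi;
    while (left <= right) {
        while (A[left] < pivot)  left++;
        while (A[right] > pivot) right--;
        if (left <= right) {
            swap(&A[left], &A[right]);
            left++;
            right--;
        }
    }
    introsort_rec(A, lo, right, depth - 1);
    introsort_rec(A, left, hi, depth - 1);
}

void introsort(int A[], int n)
{
    int depth = 2;                      /* roughly 2 * log2(n) */
    for (int m = n; m > 1; m /= 2)
        depth += 2;
    introsort_rec(A, 0, n - 1, depth);
}

On typical inputs the quicksort path does all the work; heapsort only takes over when the recursion has gone about twice as deep as a perfectly even split would require, which is exactly the symptom of a pathological input.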
