Floyd's Algorithm for the all pairs shortest path problem is simple and elegant. In Cormen et al.'s book, this is called the Floyd-Warshall algorithm.

Recall that the all pairs shortest path problem is: given a graph, find the shortest path between every pair of vertices in that graph.

The basic notion is that we update an adjacency matrix N times (where N is the number of vertices in the graph). First, we update the graph to reflect the shortest known paths going through node 1. At step 2, we update the graph to reflect the shortest known path going through node 1 or node 2 (or both). At step 3... and so on.

Here's the code:

// Graph g is an adjacency matrix
floyd(Graph g) {
    for (k = 0; k < g.size; k++)
        for (i = 0; i < g.size; i++)
            for (j = 0; j < g.size; j++)
                if (g[i][j] > g[i][k] + g[k][j])
                    g[i][j] = g[i][k] + g[k][j];
}

After running the algorithm, the graph g is transformed such that g[i][j] contains the length of the shortest path from vertex i to vertex j.

Note that for the original adjacency matrix, if there is no edge between two vertices i and j, g[i][j] will have the value infinity. Also in this algorithm, we only store the length of the shortest path, not the path itself. I hope you can see that the algorithm can be easily extended to store the path as well as the length.
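One way to sketch that extension (the names `next_hop` and `INF` are my own choices, as are the `!= INF` guards, which keep a missing edge from combining with a negative edge into a phantom path): whenever the relaxation improves a pair, record which vertex it detoured through, then recurse on the two halves to recover the full path.

```c
#include <stdio.h>

#define N 4
#define INF 1000000            /* stands in for the "infinity" entries */

int next_hop[N][N];            /* last intermediate vertex that improved i->j,
                                  or -1 if the direct edge is still best */

void floyd_with_paths(int m[N][N])
{
    int i, j, k;
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            next_hop[i][j] = -1;
    for (k = 0; k < N; k++)
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                if (m[i][k] != INF && m[k][j] != INF
                        && m[i][k] + m[k][j] < m[i][j]) {
                    m[i][j] = m[i][k] + m[k][j];
                    next_hop[i][j] = k;   /* i->j now detours through k */
                }
}

/* Print the intermediate vertices of the shortest path from i to j,
 * 1-based to match the vertex numbering used in this writeup. */
void print_intermediates(int i, int j)
{
    int k = next_hop[i][j];
    if (k == -1)
        return;
    print_intermediates(i, k);
    printf("%d ", k + 1);
    print_intermediates(k, j);
}
```

Storing only one intermediate vertex per pair is enough, because each half of the detour can itself be reconstructed the same way.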

Floyd's algorithm runs with a time complexity of O(n³).

A very straightforward algorithm used to solve the all pairs shortest path problem. In other words, this will give you the shortest path between every pair of vertices in a directed graph, provided there are no negative cycles. Robert Floyd first published this algorithm in 1962 after reading Warshall's algorithm. Because this algorithm is just a short conceptual jump from Warshall's, it is often dubbed the Floyd-Warshall algorithm.

This algorithm is of particular interest because it is a conceptually simple algorithm that is a good application of the dynamic programming technique. It also has many related algorithms, some of the more intuitive greedy choice type, and some with different ways of applying dynamic programming to solve the same problem. For a slightly faster solution (for sparse graphs) see Johnson's Algorithm.

Recursive Definition

The hallmark of dynamic programming is re-using the solutions of subproblems to build solutions to progressively larger problems. Floyd's algorithm is based on a simple observation:

The shortest path from vertex i to vertex j that uses only intermediate vertices drawn from 1,2,3,...,n is one of two things:

  1. Either it avoids vertex n entirely, in which case it is the shortest path using only intermediate vertices 1 to n-1.
  2. Or it passes through vertex n, in which case it is the sum of the shortest paths from i to n and from n to j (each using only intermediate vertices 1 to n-1).

Now let us define M(n) as the matrix giving the lengths of all shortest paths whose intermediate vertices are drawn from 1,2,3,...,n; that is, M(n)i,j is the length of the shortest such path from i to j. In case 1, M(n)i,j equals M(n-1)i,j. Furthermore, case 2 applies only when M(n-1)i,n + M(n-1)n,j < M(n-1)i,j. Therefore M(n)i,j is simply the lesser of those two quantities.
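The recurrence translates into code almost line for line. A sketch (0-indexed, with `INF` standing in for a missing edge and a guard so that infinity stays absorbing; both choices are mine, not part of the original):

```c
#define N 4
#define INF 1000000  /* stands in for "no path" */

/* Compute M(n) from M(n-1): for each pair (i,j), either keep the old
 * path (case 1) or route it through the newly allowed vertex n (case 2),
 * whichever is shorter.  n is 0-indexed here. */
void next_matrix(int prev[N][N], int cur[N][N], int n)
{
    int i, j;
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            /* case 2: i -> n -> j, only meaningful if both halves exist */
            int via_n = (prev[i][n] == INF || prev[n][j] == INF)
                            ? INF
                            : prev[i][n] + prev[n][j];
            /* M(n)i,j = min(case 1, case 2) */
            cur[i][j] = via_n < prev[i][j] ? via_n : prev[i][j];
        }
}
```

Iterating this from M(0) with n = 0,1,2,3 reproduces the matrices worked out below.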

Floyd's algorithm starts by defining the shortest paths between all vertices that use no intermediate vertices. This is nothing more than the adjacency matrix for the graph. Take the following example:

       3           4 
(1) --------> (2) -------> (3)-+
 \            /|\          /|\ |
  \            |            |  | 
   \           |         -1 |  | 
   7\          |-4  +-------+  |
     \         |   /           |
      \        |  /            |
       \       | /          6  |
        *---> (4) <------------+

We can represent this graph using the following adjacency matrix:

        +-          -+
        | 0  3  *  7 |
        | *  0  4  * |
        | *  *  0  6 |
        | * -4 -1  0 |
 M(0) = +-          -+

For the purposes of this algorithm, * should be considered to be infinity. Also note the diagonal line of 0s, present even for vertices that cannot technically reach themselves (such as vertex 1). Both of these values are chosen as algebraic tools to simplify the programming of the algorithm.
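One concrete hazard when turning * into an actual number: the obvious choice, INT_MAX, overflows as soon as the relaxation step adds anything to it, and signed overflow in C is undefined behavior that typically wraps to a large negative value, i.e. a wonderfully short phantom path. A small helper (my own addition, not part of the original writeup) keeps infinity absorbing, as the algebra intends:

```c
#include <limits.h>

#define INF INT_MAX    /* our stand-in for * */

/* Add two path lengths, treating INF as the absorbing infinity:
 * INF + anything = INF, so a missing edge can never become a path. */
int safe_add(int a, int b)
{
    if (a == INF || b == INF)
        return INF;
    return a + b;
}
```

Alternatively, some implementations pick a sentinel large enough to dominate every real path yet small enough that two of them still fit in an int; the guard above avoids having to reason about that margin.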

Considering Each Vertex

So, we start out with M(0), the length of the shortest paths from i to j passing through no intermediate vertices. Now we construct M(1) from M(0) using our recursive definition. Here's how we do it.

For each member i,j of the matrix M(1), we write down either M(0)i,1 + M(0)1,j or M(0)i,j, whichever is smaller. In the case of M(1) we end up with a copy of M(0) (remember, * equals infinity). Looking at the graph, it is obvious why: no edges point to vertex 1, so no path can use 1 as its intermediary.

        +-          -+
        | 0  3  *  7 |
        | *  0  4  * |
        | *  *  0  6 |
        | * -4 -1  0 |
 M(1) = +-          -+

M(2), however, is another story. Just by looking at the graph we can see a couple of paths that use vertex 2: namely, (1,3) and (4,3). The first is a new path which replaces a *, but the second ends up being longer than the direct path, so nothing changes. Here are M(2), M(3) and M(4):

        +-          -+
        | 0  3  7  7 |
        | *  0  4  * |
        | *  *  0  6 |
        | * -4 -1  0 |
 M(2) = +-          -+

        +-          -+
        | 0  3  7  7 |
        | *  0  4 10 |
        | *  *  0  6 |
        | * -4 -1  0 |
 M(3) = +-          -+
        +-          -+
        | 0  3  6  7 |
        | *  0  4 10 |
        | *  2  0  6 |
        | * -4 -1  0 |
 M(4) = +-          -+


  • Note that we still have several *s indicating that there are no paths leading to vertex 1 regardless of length.
  • The diagonal line of 0s never changes. Values only ever decrease, and if a diagonal entry were to become negative, that would indicate a negative weight cycle, and hence arbitrarily short paths would exist. Thus we can use this algorithm to test for negative weight cycles.
  • Observe that the row and column x are always identical in M(x) and M(x-1). This is because all calculations for members of that row and column are based on an addition with the identity member M(x-1)x,x.
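The negative-cycle test from the second observation above translates directly into code that runs after Floyd's algorithm finishes (a sketch; the function name is mine, and `m` is the same int ** adjacency matrix used in the C implementation):

```c
/* After Floyd's algorithm has run, a negative entry on the diagonal
 * means some vertex can reach itself with total weight < 0, i.e. the
 * graph contains a negative weight cycle. */
int has_negative_cycle(int **m, int size)
{
    int i;
    for (i = 0; i < size; i++)
        if (m[i][i] < 0)
            return 1;
    return 0;
}
```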

The series of matrices M(1...n) may be useful for certain applications, but if we are only interested in the final result, we can use one such matrix and just keep overwriting it as we go. The reason for this is based on the last observation above. M(x) is calculated using only row and column x of M(x-1). Since that row and column are always identical between M(x) and M(x-1), we need not worry about skewing the results as we go. The following C code uses this fact to perform the calculation inline on a graph represented as an adjacency matrix.

C Implementation

/* m is an adjacency matrix; "no edge" entries should hold a large
 * sentinel well below INT_MAX, so the addition below cannot overflow. */
void floyd(int **m, int size)
{
    int i, j, k;

    for (k = 0; k < size; k++)
        for (i = 0; i < size; i++)
            for (j = 0; j < size; j++)
                if (m[i][k] + m[k][j] < m[i][j])
                    m[i][j] = m[i][k] + m[k][j];
}
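A quick way to sanity-check the implementation is to feed it M(0) from the worked example and compare against M(4). The sketch below is my own test harness, not part of the original; it uses fixed-size arrays for brevity and adds != INF guards so that a missing edge combined with a negative edge can never fabricate a path:

```c
#define V 4
#define INF 1000000  /* stands in for the * entries */

/* Same triple loop as above, with guards keeping INF absorbing. */
void floyd_guarded(int m[V][V])
{
    int i, j, k;
    for (k = 0; k < V; k++)
        for (i = 0; i < V; i++)
            for (j = 0; j < V; j++)
                if (m[i][k] != INF && m[k][j] != INF
                        && m[i][k] + m[k][j] < m[i][j])
                    m[i][j] = m[i][k] + m[k][j];
}
```

Running this on the example's M(0) should reproduce M(4): for instance, entry (1,3) drops from 7 to 6, and (3,2) goes from * to 2, while the entries in column 1 stay * because nothing points to vertex 1.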
