In Computer Science, when judging the performance of an algorithm, there are two hypothetical test cases you want to look at: the average case, which is more or less typical, and the worst case.
The worst case could be a special case that is abysmally bad for the algorithm. For instance, consider looking up a nonexistent item in a hash table that doesn't use linked lists to handle collisions: finding an item that is present is nearly instant, whereas looking for one that isn't there might force you to probe much of the table. Or the worst case could be a cycle in a graph, which would cause your algorithm to never finish if it didn't detect the cycle. Or the worst case could just be the high end of the spectrum, e.g., searching for the last (or second-to-last) item in a linked list.
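
As a rough illustration of the hash-table case above, here is a minimal sketch (in Python, with made-up names and an arbitrary table size; it assumes linear probing, one common way of handling collisions without linked lists). It counts how many slots get inspected for a lookup that succeeds versus one that fails in a nearly full table:

  SIZE = 16  # deliberately small and nearly full to exaggerate the effect

  def probe_count(table, key):
      """How many slots are inspected to find `key` or conclude it is absent."""
      start = hash(key) % SIZE
      for i in range(SIZE):
          slot = (start + i) % SIZE
          if table[slot] is None or table[slot] == key:
              return i + 1      # found it, or hit an empty slot (so the key isn't here)
      return SIZE               # table is full and the key is absent: scanned everything

  def insert(table, key):
      start = hash(key) % SIZE
      for i in range(SIZE):
          slot = (start + i) % SIZE
          if table[slot] is None:
              table[slot] = key
              return
      raise RuntimeError("table full")

  table = [None] * SIZE
  for k in range(SIZE - 1):                # fill all but one slot
      insert(table, "item%d" % k)

  print(probe_count(table, "item0"))       # present: typically only a few probes
  print(probe_count(table, "missing"))     # absent: may probe nearly the whole table

The exact numbers depend on the hash function, but the pattern is the point: a successful lookup usually stops early, while a miss keeps probing until it finds an empty slot, which in a nearly full table can mean scanning most of it.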
Either way, when picking an algorithm, the worst case is important. If there is one abysmally bad worst case, perhaps it should be detected and handed off to a different algorithm. If the worst-case behavior is not good enough, then maybe a better (faster, or less memory-hungry, or whatever) algorithm should be picked.
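
A well-known example of "detect it and hand it off" is introsort: run quicksort, but if the recursion gets suspiciously deep (a sign of pathological input), switch to heapsort, whose worst case is O(n log n). The sketch below shows the idea, not a tuned implementation; the depth limit and the pivot choice are arbitrary assumptions:

  import heapq
  import math

  def hybrid_sort(items):
      """Quicksort that bails out to heapsort when recursion gets too deep."""
      items = list(items)
      depth_limit = 2 * max(1, int(math.log2(len(items) or 1)))

      def heapsort(seq):
          heapq.heapify(seq)
          return [heapq.heappop(seq) for _ in range(len(seq))]

      def sort(seq, depth):
          if len(seq) <= 1:
              return seq
          if depth > depth_limit:
              # Pathological input detected: hand off to an algorithm
              # with a guaranteed O(n log n) worst case.
              return heapsort(seq)
          pivot = seq[len(seq) // 2]
          less    = [x for x in seq if x < pivot]
          equal   = [x for x in seq if x == pivot]
          greater = [x for x in seq if x > pivot]
          return sort(less, depth + 1) + equal + sort(greater, depth + 1)

      return sort(items, 0)

  print(hybrid_sort([5, 3, 8, 1, 2]))   # [1, 2, 3, 5, 8]

The average case stays quicksort-fast; the fallback only exists to cap the damage when the bad case actually shows up.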
Sometimes the worst case is considered rare enough that, as long as the algorithm finishes, it can be ignored or worked around. Then only the average case is considered.