
A combinatorial explosion is when the thing you're doing scales really really really poorly. Consider a game like chess, and imagine that you are a programmer trying to come up with the next big thing in AI research, by creating a perfect chess player. Setting aside for the moment the question of whether this is possible (something programmers are notorious for doing at the start of ambitious projects), you have decided that the best way to do that is to just have the computer look at the current board, think of every single possible board position that could come out of this one, and choose the best move sequence.
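The brute-force idea described above can be sketched concretely. Here is a minimal, illustrative minimax search (not from the writeup itself) applied to tic-tac-toe, where exhaustively scoring every reachable board really is feasible:

```python
# A minimal sketch of "think of every single possible board position":
# exhaustive minimax over tic-tac-toe. The board is a tuple of 9 cells,
# 'X' moves first. Illustrative code; names are my own, not the author's.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position by trying every move sequence to the end:
    +1 if X can force a win, -1 if O can, 0 if best play is a draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full, no winner: draw
    nxt = 'O' if player == 'X' else 'X'
    scores = [minimax(board[:i] + (player,) + board[i + 1:], nxt)
              for i in moves]
    return max(scores) if player == 'X' else min(scores)

# Searching the entire game tree from the empty board (a few hundred
# thousand positions) confirms that perfect play is a draw:
print(minimax((' ',) * 9, 'X'))  # → 0
```

The same code would, in principle, work for chess; the paragraphs below explain why in practice it never finishes.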

Before you scoff derisively, remember that this algorithm actually works perfectly well for some problems, like tic-tac-toe. However, don't put aside all your scoffing, because you're right to say that this is a very bad idea for a chess program. Why? Because, to a first approximation, the average chess game has roughly 40 moves, each of which results in a board with roughly 20 move possibilities. As the writeup above indicates, then, there are 20^40 possibilities, which any reputable calculator will tell you is 1.099511627776e+52 different move sequences. If the computer could consider a billion moves a second, and did so, we'd be waiting 1.099511627776e+43 seconds. Since pi seconds is a nanocentury, that works out to about 3.5e+33 centuries, which is still an unacceptably long time. The problem might fall short of intractable, but it is still not exactly friendly.
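The arithmetic above is easy to check for yourself. A few lines of Python, using the writeup's own assumptions of ~20 legal moves per position over a ~40-move game:

```python
# Checking the writeup's numbers: ~20 moves per position, ~40 moves per game.
sequences = 20 ** 40
print(f"{sequences:.6e} move sequences")   # ≈ 1.099512e+52

seconds = sequences / 1e9                  # at a billion sequences per second
century = 100 * 365.25 * 24 * 3600        # ≈ 3.156e9 s (hence pi s ≈ a nanocentury)
print(f"{seconds / century:.1e} centuries")  # ≈ 3.5e+33
```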

This is why we call it an explosion. Little problems are little and manageable, but as the problems get bigger, the time to solve them gets bigger much, much faster: faster than polynomial growth, faster even than exponential growth. Tic-tac-toe is totally doable, chess is utterly undoable, and Go, the Chinese game of stones, with longer games, more moves and significantly more possibilities per move, is a horror story too scary to tell your children at night.
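To put rough numbers on that progression: with branching factor b and game length d, the search space grows like b^d. The figures below are commonly cited ballpark estimates, not precise game-theoretic counts:

```python
# Rough scale of b^d for the three games above. These branching factors
# and game lengths are common ballpark figures, not exact counts.
import math

games = {
    "tic-tac-toe": (4, 9),     # at most 9 plies, a handful of choices each
    "chess":       (20, 40),   # the writeup's own assumptions
    "go":          (250, 150), # a commonly cited estimate
}

for name, (b, d) in games.items():
    print(f"{name:>12}: ~10^{d * math.log10(b):.0f} sequences")
```

Tic-tac-toe lands around 10^5, chess around 10^52, and Go around 10^360; each step up is not a bigger number so much as a different universe.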

The phrase is generally found in computer science literature, but most especially in Artificial Intelligence research, whose practitioners seem to have a strange fondness for algorithms which are demonstrably subject to such grave, grave theoretical blunders. I swear it is not unheard of for an AI researcher to say "start by examining every piece of information in your universe, and cross-reference them." Really, though, it's not just AI: any system that needs modelling, where multiple variables interact in this mutually reinforcing way, is dangerous, and possibly even explosive.
