Reactive Agent Planning (RAP) is a technique for giving artificial agents, such as robots in the real world or characters in a virtual one, the ability to plan courses of action which are adaptable to circumstance. It allows the agent to decide the best way of going about achieving some goal, and gives it the ability to change those methods if they become unworkable.
The technique is a classic piece of old-style AI, which attempts to build intelligence through a series of modules which do various things - a knowledge representation module, an inference module, a planning module (of which RAP is an example), and so on. What RAP does is to take a "goal", which comes either from RAP itself or from some higher source (the in-built goals of the agent), and decide what would be the best way of achieving that goal. It does this by working backwards. Say the goal of the agent (which I'm going to call "you" to make this simpler) is to give your Aunt, whose birthday it is, a present. Now, you know (from your knowledge representation module - i.e. your memory) that she lives many miles away. So to give the present, you need first to get to her. So let's make that the new goal - to get to your Aunt. So you want to drive there. So you want to get in the car. So you want to open the car. So you want to unlock the car. So you want to have the car keys. So you pick up the car keys.
That kind of backwards thinking is the basis of RAP. Given a goal, you see what "sub-goals" have to be achieved before the main goal can be, so you look at those sub-goals and see what sub-sub-goals need accomplishing, and so on until you get to an actual "action" that is the first step towards the final goal.
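That backwards chase from goal to first action can be sketched in a few lines. This is a minimal illustration, not any particular RAP implementation: the goal names and the prerequisite table are invented for the Aunt example, and a goal with no prerequisites is treated as a directly performable action.

```python
# Each goal maps to the sub-goals that must hold before it can be achieved.
# An empty list means the goal is a primitive action - just do it.
# (Hypothetical table, hand-written for the Aunt example.)
PREREQUISITES = {
    "give aunt present": ["be at aunt's house"],
    "be at aunt's house": ["be in car"],
    "be in car": ["car is open"],
    "car is open": ["car is unlocked"],
    "car is unlocked": ["have keys"],
    "have keys": [],  # primitive: pick them up
}

def first_action(goal):
    """Work backwards from a goal until a directly performable action is found."""
    subgoals = PREREQUISITES[goal]
    if not subgoals:
        return goal  # nothing has to happen first, so this is the action to take
    return first_action(subgoals[0])

print(first_action("give aunt present"))  # -> "have keys"
```

Starting from "give aunt present", the chain of prerequisites bottoms out at "have keys" - exactly the key-grabbing conclusion of the story above.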
However, there's a problem with what we've got so far - if somewhere along the line we find out that we can't actually perform some action, then we have to give up. This is where reactivity comes in. Each "goal" should have a number of "plans" associated with it - ways of achieving that goal. So one plan might be driving to your aunt's, another would be walking there. And if the car was broken, or you couldn't find your keys, you should switch to walking. Each plan should be either a single action (so if the goal is to have the keys, one plan would be to pick them up) or a set of goals which need to be fulfilled before the action can be performed (so if the keys are in another room, the plan would be first to be in the room, and then to pick them up). So we have a tree-like structure of goals sprouting plans sprouting goals sprouting plans... sprouting an action, which is performed. Here's the Aunt example drawn out. Note that only one plan is expanded for each goal - the agent must have some way of choosing the best - and only the first goal of each plan. Goals, plans, and actions are labelled on the right.

    Aunt have present                    (goal)
    |-- Give Aunt present                (plan)
    |   |-- Be at Aunt's house           (goal)
    |   |   |-- Drive to house           (plan)
    |   |   |   |-- Be in car            (goal)
    |   |   |   |-- Have keys            (goal)
    |   |   |   `-- Drive                (action)
    |   |   `-- Walk to house            (plan)
    |   `-- Give present to Aunt         (goal)
    `-- Mail Aunt present                (plan)
Now suppose you can't get in the car. Say you hear your partner drive off with it as you are looking for your keys. Then you should switch plans - in this example to Walk to house. Now suppose you trip and twist your ankle. Then you should give up on that plan too, and so give up on achieving the goal Be at Aunt's house, and hence on the plan Give Aunt present, and instead switch to the plan Mail Aunt present, expanding that one into plans and sub-goals. So you see, by simply checking whether a goal is attainable - both when first considering the plan and continually until it is achieved, in case circumstances change - the agent can be sensible and reactive in its planning.
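The fallback behaviour just described can be sketched as well. Again this is a toy, assuming an invented plan library: each goal maps to a list of alternative plans (each plan being a sequence of sub-goals or primitive actions), a name with no plans is a primitive action, and a simple world dictionary says which actions are currently possible. When a plan fails, the agent falls back to the next plan for the same goal, and failures propagate up the tree just as in the story.

```python
# Hypothetical plan library for the Aunt example. Each goal has a list of
# alternative plans, tried in order; each plan is a sequence of sub-goals.
PLANS = {
    "aunt have present": [
        ["be at aunt's house", "give present to aunt"],
        ["mail aunt present"],
    ],
    "be at aunt's house": [
        ["be in car", "have keys", "drive to house"],
        ["walk to house"],
    ],
}

def achieve(goal, world, trace):
    """Try each plan for a goal in turn; names with no plans are actions."""
    if goal not in PLANS:                 # primitive action
        if world.get(goal, True):         # is it possible right now?
            trace.append(goal)            # perform it
            return True
        return False                      # action impossible; caller falls back
    for plan in PLANS[goal]:
        # all() short-circuits, so later sub-goals aren't attempted once
        # one fails - at that point we fall through to the next plan.
        if all(achieve(sub, world, trace) for sub in plan):
            return True
    return False                          # every plan exhausted: goal fails

# Your partner has driven off: "be in car" is impossible, so the agent
# abandons the driving plan and walks instead.
world = {"be in car": False}
trace = []
achieve("aunt have present", world, trace)
print(trace)  # -> ['walk to house', 'give present to aunt']
```

Adding `"walk to house": False` to the world (the twisted ankle) makes both plans for Be at Aunt's house fail, which in turn fails Give Aunt present, and the agent falls back to mailing the present. A real system would also need to undo or account for actions already performed under a plan that later fails - this sketch ignores that.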
So all you need is to give your agent a load of plans for each goal, give it a top-level goal, and set RAP running. In various forms this has been used in such applications as (I'll stick some examples in here once I find them). One application for which the system is particularly apt is what Carnegie Mellon University's (now defunct) Oz Project called Interactive Drama. See believable agent for more on the problems RAP helps to solve here, but an important part of Oz's architecture was Hap - a RAP-style planning system. A subset of interactive drama is what we now call interactive fiction - text-based, turn-based works which have grown out of text adventure games. Here is the perfect testing ground for believable agents - an easily delimited world without the complications of physics and the difficulties of sensing which have plagued attempts in the real world. Given this, the amount of work which has been done in this area is slightly disappointing, but there have been some good attempts. In particular, Nate Cull has coded implementations of RAP in the two main IF languages, TADS and Inform, and they are freely available on www.ifarchive.org.
Update (28 July 02) - I've recently started work on my own reactive agent planner for Inform, which I'm currently calling Ap. It's gonna be good. Oh yeah.
Update (23 Oct 02) - Uh... still working on it. Quickly, though, some things which complicate the above simplistic model: multiple goals - the actor should take the effects of a proposed action on all its goals into account; fallibility - an action might go wrong, or a plan might, on completion, fail to achieve the goal it was expected to; incomplete knowledge - we might not even know whether a goal is achieved or not. Trust me when I say that sorting all this out is a bit of a bugger...
Oh, and I've found a whole load of references at http://citeseer.nj.nec.com/context/20815/0