During a spate of spring cleaning my box, I discovered my University dissertation, entitled ‘Negotiation in Multi-Agent Systems’. I would node it all (and if you’re unlucky I may well do so in the future), but you’d die of boredom-induced internal haemorrhaging before you even reached the halfway point. Presented below is the background information section, which gives a brief history of agent-based system theory. It’s short and to the point, but should hopefully give you some idea of the area I was working in.
The terms ‘agent’ and ‘agent-based technology’ have been buzzwords in the computing and national press for over five years, but the idea of an intelligent computational agent is not a new one. In his work ‘What’s an Agent Anyway? A Sociological Case Study’, Foner 1 references work on these ideas by Doug Engelbart and Vannevar Bush in the late Fifties and early Sixties, but no single definition has been produced that is completely agreed upon. It has been said that ‘the question what is an agent? is embarrassing for the agent-based community in just the same way that the question what is intelligence? is embarrassing for the mainstream Artificial Intelligence (AI) community’ 2.
The first obstacle that must be overcome is the confusion over the usage of the term ‘agent’. It can be defined as follows: -
Agent: - One who does the actual work of anything, as distinguished from the instigator or employer; hence, one who acts for another. (Oxford English Dictionary, online version.)
This is obviously a general linguistic definition, and needs to be tightened considerably before it could be accepted as a computational definition of an intelligent agent.
Many people have attempted to tie down exactly what qualities a program must possess to qualify as an intelligent agent (a rough sketch of these attributes as a programming interface follows the list). These include: -
- autonomy – the ability to operate without human intervention, with some control over its own internal state
- social ability/co-operation – interacting with other agents and/or humans
- reactivity – perceiving its environment and reacting to changes occurring within it
- benevolence – not adopting conflicting goals, and attempting the goals asked of it
- ability to learn – changing its behaviour based on its previous experience
- adaptive, goal-orientated behaviour – not simply acting in response to the environment, but also acting to achieve a specific predetermined target
- tolerance of wrong/error-ridden/unexpected input – correcting errors in input by extrapolating from past commands and ‘second guessing’ the entity with which it is communicating
- use of natural language – communicating with the user in natural language for ease of use
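Purely as an illustration (this sketch is my own addition, not part of the original dissertation, and every class and method name in it is hypothetical), the attributes above can be read as a rough programming interface that an agent implementation would be expected to satisfy:

```python
from abc import ABC, abstractmethod
from typing import Any


class IntelligentAgent(ABC):
    """Hypothetical sketch: the commonly proposed agent qualities as an interface.

    Each abstract method corresponds to one of the attributes listed above
    (autonomy, social ability, reactivity, learning, goal-orientation).
    """

    @abstractmethod
    def decide(self) -> Any:
        """Autonomy: choose the next action without human intervention."""

    @abstractmethod
    def communicate(self, other: "IntelligentAgent", message: str) -> str:
        """Social ability / co-operation: exchange messages with other agents or humans."""

    @abstractmethod
    def perceive(self, event: Any) -> None:
        """Reactivity: update internal state in response to changes in the environment."""

    @abstractmethod
    def learn(self, outcome: Any) -> None:
        """Ability to learn: adjust future behaviour based on past experience."""

    @abstractmethod
    def pursue(self, goal: str) -> None:
        """Goal-orientated behaviour: work towards a specific predetermined target."""
```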
Many people have argued for and against the inclusion of all of these attributes in the ‘official’ definition of an intelligent agent. Some have apparently even tried to tie in legal or financial constraints 3, and others have gone so far as to insist that a program possess a personality 10 before it qualifies as an intelligent agent. Many researchers have suggested the idea of different levels of agent, categorising them as strong or weak 19, or as an agent, intelligent agent, or truly intelligent agent 22, depending on which of the previously mentioned characteristics it displays.
Nwana has tried to simplify the matter by agreeing a minimal subset of these attributes (autonomy, learning and co-operation) and then categorising agents into one of several groups, dependent upon what behaviour they display and what additional attributes they possess 20. The three main behaviours that an agent may possess are summarised as Learning, Autonomy and Co-operation. Those learning and co-operating are defined as Collaborative Learning Agents, those showing autonomy and learning are defined as Interface Agents, and those showing autonomy and co-operation are known as Collaborative Agents. Agents showing all three characteristics are known as ‘Smart’ Agents.
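As an illustrative sketch of this typology (my own addition, not part of the dissertation; the function name is made up), the categorisation can be expressed as a simple mapping from the behaviours an agent exhibits to its category:

```python
def nwana_category(learning: bool, autonomy: bool, cooperation: bool) -> str:
    """Map Nwana's three primary behaviours onto his agent categories.

    Illustrative only: the category names follow the typology described above.
    """
    if learning and autonomy and cooperation:
        return "Smart Agent"
    if learning and cooperation:
        return "Collaborative Learning Agent"
    if learning and autonomy:
        return "Interface Agent"
    if autonomy and cooperation:
        return "Collaborative Agent"
    return "outside Nwana's typology"


# e.g. an agent that learns and co-operates but is not autonomous:
print(nwana_category(learning=True, autonomy=False, cooperation=True))
# -> Collaborative Learning Agent
```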
Each one of these separate agent types has a separate sub-definition, which distinguishes it from any other type of agent. One problem with this approach is judging the correct amount of granularity to use in these classifications; too much and you end up with each Multi-Agent System (MAS) being in a different category, too little and important architectural and mental differences will be overlooked. An in-depth analysis of agent classification is available in ‘Software Agents: An Overview’ by Hyacinth Nwana 20.
To add an additional layer of complexity to this already muddled view, there is the issue of mobility. Mobile Agents can be an instantiation of any of the aforementioned types of agent, but differ from them in that they transmit themselves across networks and execute on remote computers. The results of that execution determine the agent’s route from then on. A good example of a mobile agent system is Tryllian Software’s Gossip Agent Suite (available at www.tryllian.com). This program consists of a storage area and five individual mobile agents. These agents are loaded with ‘backpacks’ containing information pertaining to your interests, along with a weighting system for the importance of each individual piece of information, and are sent off onto the Internet, where they find their way onto a local Gossip Agent Server. Upon arrival they ‘check in’ with the local Concierge Agent, which examines the contents of their backpacks and points them in the direction of other agents with similar backpack contents. They then exchange their information with that of another Gossip Agent from another machine. The information you supply is swapped according to keywords that you input into the agents before their transmission from their home machine, and each agent then returns to its home computer within a pre-set time limit.
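To give a flavour of how such an exchange might work (this is a loose sketch of my own under assumed behaviour, not Tryllian’s actual implementation; the matching rule and all names are invented), two weighted-keyword ‘backpacks’ could be compared like this:

```python
from typing import Dict

# Hypothetical "backpack": keyword -> importance weighting supplied by the user.
Backpack = Dict[str, float]


def exchange(mine: Backpack, theirs: Backpack, top_n: int = 3) -> Backpack:
    """Return the other agent's most heavily weighted entries for keywords
    that both backpacks share (an invented matching rule, for illustration)."""
    shared = set(mine) & set(theirs)
    ranked = sorted(shared, key=lambda kw: theirs[kw], reverse=True)
    return {kw: theirs[kw] for kw in ranked[:top_n]}


home_agent = {"jazz": 0.9, "agents": 0.7, "cycling": 0.2}
visiting_agent = {"agents": 0.8, "jazz": 0.4, "chess": 0.6}

# Information the home agent would carry back after meeting on a Gossip server:
print(exchange(home_agent, visiting_agent))  # {'agents': 0.8, 'jazz': 0.4}
```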
The fact that the mobile agent executes code on a remote host opens an entirely new area of research into the security and naming issues that this brings up. In many ways a mobile agent can be considered a non-damaging form of virus, as it travels between machines and then executes arbitrary code without the machine owner’s knowledge.
As can be seen, the ideas behind agents are such a different take on software development that the term ‘agent-orientated programming’ (AOP) has taken hold, to distinguish this branch of software engineering from others such as object-orientated programming. Some of its ideas follow on from object-orientated and component-orientated programming, but unlike the products of those techniques, an agent-based system is more than just the sum of its parts.
Bibliography
- Foner L. N. – What’s an Agent Anyway? A Sociological Case Study – Agent Memo 93-01, Agents Group, MIT Media Lab (1993)
- Jennings N. R. & Wooldridge M. – Agent-Oriented Software Engineering – Handbook of Agent Technology (ed. J. Bradshaw), AAAI/MIT Press (to appear) (2000)
- Nwana H. – Software Agents: An Overview – Knowledge Engineering Review, Vol. 11 No. 3, pp. 205-244 (1996)
- Reticular Systems – White Paper ver 1.3 – http://www.agentbuilder.com/Documentation/white_paper_r1.3.pdf (1999a)