Artificial general intelligence (AGI) refers to an artificial intelligence that can function intelligently across many domains. By the most common benchmark, AGI is achieved when an AI can perform any intellectual task that a human can.

AGI is quite important as it is the point where AIs move into a new level of usefulness. We already have a number of computer programs that are really very useful, many of which meet simple definitions of AI. These programs are sometimes called applied AI or narrow AI; they may technically be artificial intelligence, but they are targeted at specific tasks (canonically, playing chess). AGI is the point where the AI can do things at least as well as we can -- and in many cases, that will mean better than we can -- including selecting which problems to work on, identifying good enough solutions, and implementing them.

As an example, we currently have programs that can play chess better than a human player, but which do not know to question what's happening if we set up a game in which they are missing a bishop. An AGI should know to question such oddities, and moreover should be able to learn to play chess without first being programmed specifically for chess, be able to design other games approximately as fun as chess, and should be able to determine when it is appropriate to stop playing games and start working on a cure for cancer.

AGI is not the same as being able to pass the Turing test, although technically any AGI would probably be able to pass the Turing test (because that is something that humans can do). There is some debate as to what abilities are required for an AGI, but they include the ability to use reason to solve problems, make decisions under uncertainty, and represent knowledge. As a matter of practicality, an AGI should also communicate in natural language at human levels and understand human values -- as much as one can. It is also generally presumed that AGI would involve having sensory systems paralleling those of humans, although what that means in terms of qualia is an open question.

Of course, being an intelligent agent with good problem-solving abilities doesn't guarantee that an AI will be anything like a human; even with the additional qualities of natural language, morals, and traditional sensory inputs tacked on, there is a good chance that an AGI will act more like an alien than like a human. This gives rise to the problem of making sure that the AGI is friendly to humans.

Additionally, it is often held that any AI approximating this level of intelligence would be likely to undergo a hard takeoff. In a strong sense, AGI amounts to superintelligence: computers already exceed humans in mathematical calculation, memory capacity, focus, and concentration. Whatever might still be missing from machine intelligence, as soon as it is developed the machine will in fact be able to solve more problems than humans can -- unless humans intentionally hobble it.

On the other hand, AGI is also not a prerequisite for significant AI risk; an advanced AI might be able to program grey goo or code a virus without the ability to communicate fluently in natural language or visually decode captchas... or even understand chess.

Despite these issues, AI research is moving slowly towards AGI, and is likely to reach it eventually. Predictions for the achievement of AGI usually range from about 2045 to 2150. Whatever happens after the advent of AGI, it is likely to be very different from what passed before.

I will explain how to actually build an artificial general intelligence from first principles, starting with ...


Smart people have been thinking about 'intelligence' for a long time now, but we still don't have any clear operational definition that enables us to truly understand intelligence, let alone try to create it. One result is that all the stuff being called artificially intelligent isn't intelligent at all. Let's start clear and simple:

Intelligence is the ability of an agent to learn the structure of its environment and use that knowledge to achieve its goals.

This definition is still packed with concepts that are a bit woolly, but we'll bring them into sharp focus after considering the first principles.

We begin by solving ontology (what can exist) and epistemology (what can be known about what exists). That turns out to be surprisingly easy, given that minds the sizes of large moons and small planets have been trying to do just that for millennia, without much practical progress.


What can exist?

States and events. That's it. Life, the universe, and everything is just a big soup of states and events.

A state is something that remains what it is over some extent of time.

An event is a transition from one state to another within some extent of time.

The final pieces of the ontology puzzle are properties, processes, and objects, all of which are chunks of reality that are composed from states and events. A property is a set of possible states that are associated with a particular set of events. A process is a sequence of events. An object is a structure built of properties that are related by processes, in the same pattern as the world itself. Objects are like microcosms that interact with the object that contains them. This persistent organization lets us treat objects as states. The composability and decomposability of states (properties and objects) and events (processes) is extremely important in our epistemology.

Time is a process, a perpetual sequence of events that is needed to define all other states and their events by providing a 'before' and 'after'. Without time, there is no change, and without change, the universe is done before it begins. 


Consider an object that has one property which has two states. Let's call the object 'switch' and the two states 'on' and 'off'. There are two possible events for the property: 'on' changes to 'off' and 'off' changes to 'on'.

Now consider another object, 'lamp'. The lamp has one property with two states, 'dark' and 'light', and two possible events: 'light' changes to 'dark' and 'dark' changes to 'light'.

An event is composed of an initial state and a different final state. An initial event can be related to a consequent event in the same way. This sequence of events is a process, which can be arbitrarily long.

In this example, the events of the switch relate to the events of the lamp. Specifically, a switch 'off-to-on' event is followed deterministically by a lamp 'dark-to-light' event, and a lamp 'light-to-dark' event is deterministically consequent to a switch 'on-to-off' event. This gives us a representation of causality in our world.
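The switch/lamp causality can be sketched in a few lines of Python (my own illustration, not code from this write-up; the object and state names follow the example above):

```python
# Each object is a dict holding the state of its single property.
switch = {"state": "off"}
lamp = {"state": "dark"}

def flip(sw, lp):
    """A switch event, deterministically followed by the related lamp event."""
    if sw["state"] == "off":
        sw["state"] = "on"      # switch 'off-to-on' event ...
        lp["state"] = "light"   # ... causes a lamp 'dark-to-light' event
    else:
        sw["state"] = "off"     # switch 'on-to-off' event ...
        lp["state"] = "dark"    # ... causes a lamp 'light-to-dark' event

flip(switch, lamp)
print(switch["state"], lamp["state"])  # on light
```

The deterministic pairing of the two events is the whole of the "physics" in this tiny world.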

Roughly, we can think of properties as adjectives, objects as nouns, and events as verbs. Considering composability and decomposability, I will call all states 'properties' and all events 'processes'. Objects have properties that change in processes.

Universal properties

When constructing a world, it can be convenient to have some properties that all objects must have. We have already established that time is a property that is required before we can have states and events at all, so it is a universal property in all possible worlds. In what we know as the natural world, the universal properties are (at least) time and space. To exist in nature means to have a particular place at a particular time, and at least one other property (a measure that can change or stay the same over time).


What can be known?

A completely static universe would be the same as nothing at all; a totally frozen existence might as well not be. A universe without states, on the other hand, would be utter chaos, constantly changing in unpredictable ways at all scales of time. Such a world would be entirely without structure and could never be understood or afford anything capable of understanding.

A knowable world must have states and events, and an intelligence can only understand the world as states and events. Awareness is about detecting states and events. Nothing else can be known directly, and only states and events hold meaning. (The Allegory of the Retina- Plato's cave revisited)

Knowledge and the composability of states and events 

An intelligence (learner) models the structure of its world as it relates to the learner's needs and actions. The world and the learner itself are both structured by states related by events. The unit of learning is an experience, which relates a learner's need, a learner's action, the effect of the action on the world, and the effect of that on the learner. An experience is an event that associates an initial need state to an action that produces a final need state.

initial need state --> action --> final need state

Learning is thus a process of association of events and states, an accumulation of experiences.
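The experience triple above can be represented directly in code. A minimal sketch (my own; the `Experience` name and `record` helper are illustrative assumptions, not part of the text):

```python
from collections import namedtuple

# An experience associates an initial need state, an action,
# and the resulting final need state.
Experience = namedtuple("Experience", ["initial_need", "action", "final_need"])

knowledge = []  # learning is the accumulation of experiences

def record(initial_need, action, final_need):
    knowledge.append(Experience(initial_need, action, final_need))

record("unsated", "move", "sated")    # moving satisfied the need
record("unsated", "wait", "unsated")  # waiting did not
```

Each call to `record` is one unit of learning; the growing `knowledge` list is the learner's model of which actions change which need states.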

Worlds can be interesting and learnable by learners that have finite capacities because simple states and events can compose more complex objects and processes. Those complex states and events can themselves compose even more complex objects and processes in a hierarchy that builds up ultimately into all of life, the universe and everything.

Because objects and processes are composable, they are also decomposable. So given any object or process, we can break it down into finer objects and processes. Decomposition might be bounded by elemental states and events at the bottom (if the states and events have irreducible quanta and it's not 'states and events all the way down').

Learning thus involves synthesis and analysis on top of simple state/event detection.

Consider the switch and lamp example described above. I just labeled the objects as 'switch' and 'lamp' for convenience; I could also have called them 'god' and 'universe', with the events being 'let there be light' and 'let there be darkness', etc. The example already shows how two events can be related to create a more complex structure (a process we could call 'turn on the light'). We could further decompose 'turn on the light' by introducing more steps. Let's add a wire that has one property with two states, 'energized' and 'not energized'. Now relate the switch property to the wire property so that when the switch is 'on' the wire is 'energized' and when the switch is 'off' the wire is 'not energized'. Also relate the wire property to the lamp property so that if the wire is 'energized', the lamp is 'light', and so on. The overall effect ('turn the light on') is the same, but the process has been decomposed into two sequential events that involve three objects.

Because the switch and lamp are related by a process, we can consider them to compose a more complex object that we could call 'lighting system'. That would only be necessary or make sense if 'lighting system' were an object in the composition of a larger context (a 'room', which could be part of a 'house', and so on up to, well, life, the universe, and everything).

We can decompose 'switch' into two contacts and a lever, and decompose 'lamp' into a 'light bulb' and a 'fixture', each of which could be further decomposed, and so on down to whatever quantum limit may exist.

Knowledge can be useful at any level of conceptual abstraction of process and object. If we introduce a light-loving learner to our room, the only things the learner would need to know are that there is an object that produces light and what action the learner must perform to make that happen.  

Knowers (learners)

In addition to something that can be known, learning requires a learner. We must define the faculties a general learner (general intelligence) must have clearly enough that we can engineer one. To do that, let's  review our definition of intelligence and remove as much of the fuzziness from the concepts as we can.

Intelligence is the ability of an agent to learn the structure of its environment and use that knowledge to act so as to achieve its goals.

A lot can be said about agents and agency (and an awful lot has been said), but we'll start with a fairly simple concept: "An agent can be changed by its environment and can change its environment through purposeful action." Our general learner must therefore be an active object in an environment and have purpose.

To learn (create knowledge about) the structure of the environment, the agent must have a faculty for sensing or detecting world states and events.  To be useful, that knowledge must be related to the learner's needs. The learner must also have a faculty for storing what is detected so that it can be used in later selection of actions.

The environment is the set of properties, processes, and objects in the world with which the agent can interact directly at any particular time.

To link its actions to its purposes, the learner must have a faculty for selecting an action according to its purposes and knowledge.

Engineering a general intelligence

Everything we have said up to this point applies to both natural and artificial general intelligences. It is a useful framework for understanding how minds of any kind work, but my focus is on building a general intelligence.

Above, I described the faculties that general learners must have. To engineer an AGI, what we have left to do is design those faculties. Recasting a bit, the faculties are:

  • Being- The agent is embodied as a purposed object in some world.
  • Sensing- The agent detects properties and processes in its environment.
  • Learning- The agent models the structure of the world as properties and processes and their relations to its own actions and purposes.
  • Motivating- The agent chooses actions according to purpose and knowledge.
  • Acting- The agent enacts processes in its environment.
First, we need a world. Our AGI (let's call it 'RT') could exist in the real world (our world) with a robot body of some sort or it could live as a virtual body in a virtual world. For an easy start, I choose to create a minimal artificial world and a minimal learner object in that world. Both are implemented as computer software.

Our definition of object maps directly to the concept of object in object-oriented programming (OOP). An OOP object is made up of constant and variable values, which correspond to properties, and 'methods', which correspond to processes. The object interacts with its environment (peer objects) via world properties and processes. I won't present code  or even true pseudocode here, but I will describe the objects well enough to enable a clever programmer to write their own.


A world is an object that has at least three properties (time, space, and one other) and processes that affect those properties. The world can contain child objects, each of which can have world properties as well as private properties and processes. Objects can contain other objects. Objects interact with each other via the parent object in which they reside. A learner object has a special architecture that can recognize and remember the structure of the world in which it resides. The structure is provided by the other objects in the world and how they are related by world properties and processes.

Time is implemented as a perpetual loop, with each cycle of the loop being one 'tick' of the universal 'clock'. Each tick is an event that causes each process in the world to advance by one elemental event and also causes the time process within each object to cycle once.

Space is implemented as an array of locations in one, two, or three dimensions (with each location defined by a tuple of one, two, or three numbers) and a process for movement.

The world is thus defined by affordances and constraints defined as a set of properties and the set of all possible events (physics) that change those properties. The real world does all this for us just by being, but we have to define all states and all possible events for any world we create ourselves.
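A minimal sketch of such a world in Python (my own construction under the description above; the class and method names are assumptions): time as a tick counter driven by a loop, space as a 1-D array of locations, and movement as a world process.

```python
class World:
    def __init__(self, n_locations):
        self.locations = list(range(n_locations))  # 1-D space
        self.positions = {}   # object name -> location index
        self.clock = 0        # the universal clock

    def move(self, name, new_location):
        """A world process: an event that changes an object's place."""
        if new_location in self.locations:  # a constraint of the physics
            self.positions[name] = new_location

    def tick(self):
        """One cycle of the perpetual time loop.

        In a fuller world, this would advance every world process by one
        elemental event and cycle the time process of each contained object.
        """
        self.clock += 1

world = World(2)
world.positions["learner"] = 0
world.tick()
world.move("learner", 1)
```

Note that the physics is entirely ours to define: `move` refuses locations outside the array, which is a constraint, while the existence of `move` at all is an affordance.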


A learner object has properties for needs, effectors, detectors, and knowledge. 

The need property states are 'sated' and 'unsated'.

The effector property states are 'effective' and 'ineffective'.

The detector property states are 'detected' and 'undetected'.

The knowledge property states are an open set of experiences.

An experience is a state that records one experience cycle of the learner as a tuple composed of an initial learner state, an effector, and a final learner state. Here, the learner state is the set of the states of all learner properties at a particular time.

(initial learner state, effector, final learner state)

Those properties are related by a motivation process, an effector process, a detector process, and a learning process. The effector and detector properties are 'visible' to both the learner object and the world, and together with the effector process and detection process constitute an 'interface' that relates the learner to its environment. 

Motivation process

This process enables the learner to use its knowledge (stored experiences) to select from its repertoire of effectors so as to act to satisfy its needs. If the need state is "unsated", search the knowledge for an experience that produces a 'sated' state from the current property states. If there is one, select the effector from that experience. If there isn't one, select an effector randomly.
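The selection rule just described can be sketched as a small function (an illustration of mine; the knowledge format follows the experience tuples defined above):

```python
import random

def motivate(need_state, knowledge, effectors):
    """Select an effector given the current need state.

    knowledge: list of (initial_need, effector, final_need) tuples.
    """
    if need_state == "sated":
        return None  # nothing to do (behavior when sated is my assumption)
    # search for an experience that produces 'sated' from the current state
    for initial, effector, final in knowledge:
        if initial == need_state and final == "sated":
            return effector  # act on experience
    return random.choice(effectors)  # no relevant experience: act randomly

knowledge = [("unsated", "ineffective", "unsated"),
             ("unsated", "effective", "sated")]
print(motivate("unsated", knowledge, ["effective", "ineffective"]))  # effective
```

Random selection when knowledge is empty is what bootstraps learning: the learner stumbles into its first useful experiences by acting blindly.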

Effector process

This process enables the learner to affect its environment via a world process by setting the state of the effector property selected by the motivation process.

Detection process

This process enables the learner to be affected by its environment by linking the state of a world property or environmental object property to a learner property (the detector) via a world process. This process also propagates the detection event to the need property to produce the final learner state that is used by the learning process.

(environment state) --> (detector state)

(detector state) --> (need state)
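The two-step propagation above can be sketched as follows (my own illustration; the property names match the learner spec below):

```python
def detect(environment_light, learner):
    """Link a world property to the detector, then propagate to the need."""
    # (environment state) --> (detector state)
    learner["detector_light"] = "detected" if environment_light else "undetected"
    # (detector state) --> (need state)
    sensed = learner["detector_light"] == "detected"
    learner["need_light"] = "sated" if sensed else "unsated"

learner = {"detector_light": "undetected", "need_light": "unsated"}
detect(True, learner)
print(learner)  # {'detector_light': 'detected', 'need_light': 'sated'}
```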

Learning process

This process stores experiences in the knowledge property. An experience is represented by associating the initial learner state (at the start of the current experience cycle) with an effector state and a final learner state as a tuple.

(initial learner state, effector, final learner state)

A Minimal World with One Learner

World properties:

    world_time = True

    location {location_1:(learner), location_2:(lamp)}

World processes:

    time process:

        while world_time is True:

            execute world processes

            invoke time processes of world objects

    move process:

        if learner.effector_move = effective, move learner from its current location to the other location, else do nothing

    light process:

        if lamp and learner are in the same location, set learner.detector_light = 'detected', else set learner.detector_light = 'undetected'


World objects:


        learner properties:

            effector_move = effective | ineffective

            detector_light = detected | undetected

            need_light = sated | unsated

            knowledge = (initial need_light, effector_move, final need_light), ...

        learner processes:

            detect_light process:

                if detector_light = detected, set need_light to 'sated', else set need_light to 'unsated'

            learn process:

                add the current experience to the knowledge state

            motivate process:

                set the effector_move state according to need_light and knowledge if relevant knowledge is available; else set effector_move to a random state


        lamp properties:

            effector_light = 'effective'

        lamp processes:


                do nothing (lamp is inanimate)

What it all does

We have two objects (lamp and learner) that exist in a world that has two spatial locations. We start with learner at location_1 and lamp at location_2.

The lamp does not move and is always emitting light.

The learner can move and can detect light when in the same location as the lamp.

When poked by a time tick from the world time process, the learner's time process calls three processes in order (detect_light, learn, and motivate).

If light is detected, the learner's need for light is satisfied and need_light is set to 'sated'; otherwise, it is set to 'unsated'.

The learn process records an experience as a set of three values: the previous learner state, the last effector state, and the new learner state. So if the initial learner state is (need_light = unsated), the last effector state is (effector_move = ineffective), and the final learner state is (need_light = unsated), the experience "(unsated, ineffective, unsated)" would be added to the learner's knowledge. The learner would now know that doing nothing when light is needed will result in light still being needed. If the initial learner state is (need_light = unsated) and the last effector state is (effector_move = effective), then the learner will have moved from the dark location to where the lamp is, and the final learner state will be (need_light = sated). The experience "(unsated, effective, sated)" will then be added to the learner's knowledge, and the learner will know that moving in a situation when light is needed will result in satisfying the need for light.

The motivate process can use the learner's stored experiences to select the best action for any situation (learner state). So, if the initial learner state is (need_light = unsated), the motivate process checks the knowledge for an experience in which the initial learner state is unsated and the final learner state is sated, which would be (unsated, effective, sated). The process extracts the effector_move state from that experience and sets effector_move to that state, thus acting to satisfy a need according to experience.
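Putting the whole minimal world together, here is one complete runnable sketch in Python. This is my own reconstruction under the spec above, not the author's code; in particular, the process ordering inside `tick` and the choice to set the effector to 'ineffective' when the need is sated are interpretive assumptions.

```python
import random

class Learner:
    def __init__(self):
        self.effector_move = "ineffective"
        self.detector_light = "undetected"
        self.need_light = "unsated"
        self.knowledge = set()  # experiences: (initial need, effector, final need)

    def detect(self, light_here):
        # (environment state) --> (detector state) --> (need state)
        self.detector_light = "detected" if light_here else "undetected"
        self.need_light = "sated" if light_here else "unsated"

    def learn(self, initial_need, effector):
        # record the experience cycle that just completed
        self.knowledge.add((initial_need, effector, self.need_light))

    def motivate(self):
        if self.need_light == "unsated":
            for initial, effector, final in self.knowledge:
                if initial == "unsated" and final == "sated":
                    self.effector_move = effector  # act on experience
                    return
            # no relevant experience yet: act randomly
            self.effector_move = random.choice(["effective", "ineffective"])
        else:
            self.effector_move = "ineffective"  # sated: do nothing (assumption)

class World:
    def __init__(self):
        self.learner = Learner()
        self.positions = {"learner": 1, "lamp": 2}  # location_1, location_2

    def tick(self):
        learner = self.learner
        initial_need = learner.need_light
        effector = learner.effector_move
        # world move process: switch the learner to the other location
        if effector == "effective":
            self.positions["learner"] = 1 if self.positions["learner"] == 2 else 2
        # world light process: detect light when co-located with the lamp
        learner.detect(self.positions["learner"] == self.positions["lamp"])
        # learner's time process: detect (done above), learn, motivate
        learner.learn(initial_need, effector)
        learner.motivate()

world = World()
for _ in range(50):
    world.tick()
print(world.learner.need_light)  # almost surely 'sated' by now
```

After a few random moves the learner records "(unsated, effective, sated)", and from then on it acts on experience rather than chance: it moves to the lamp and stays there.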

Growing the AGI

This is an extremely minimal learning agent in an extremely minimal world, but together they are a platform on which arbitrarily large and complex worlds and learners can be developed simply by adding properties, processes, and objects. Development can proceed by design or by evolution.

For example, the world space can be expanded to a two-dimensional l by w area or a three-dimensional l by w by d volume. World properties can be added (color, solidity, gravity, friction, etc). Objects that vary in how they affect world properties or are affected by them according to the object's internal structure can be added. Properties and processes can be at any level of abstraction, and they can be analyzed into component processes and properties or synthesized into composite processes and properties to create or understand worlds of arbitrary complexity.


Questions and discussion most welcome.

If you are a clever programmer who has translated this into code that works, doesn't quite work, or is actually just a hopeless mess, let me know. We can compare our stuff.
