The best thing that ever happened to the development of artificial intelligence (hereafter referred to as AI simply because I don't have the desire to type both words out so often) was a very simple programming convention: IF-THEN. In the late 1960s, Allen Newell and Herbert Simon adopted it as the core of a general-purpose computer model of human cognition: "If this is the case, then do that." Through practical application of this concept, they realized that many IF-THEN statements can build on one another in a long sequential chain.

If A then B. If B then C. If C then D. And so on.
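
That chaining is easy to sketch in code. Here's a minimal Python illustration; the facts A through D and the little rule table are placeholders of my own, not anything from an actual system:

rules = {"A": "B", "B": "C", "C": "D"}   # if key holds, conclude value

fact = "A"
while fact in rules:                     # follow the chain: A -> B -> C -> D
    print("If " + fact + " then " + rules[fact])
    fact = rules[fact]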

The whole problem with AI is that it must learn on its own. The IF-THEN statement is merely a rules-based instruction set, a logical system of checks and balances. Which is great, really. But when the variables within an IF-THEN statement come directly from the programmer, it can justifiably be said that an AI is not truly self-reliant for its education and learning. The variables in an AI's matrix have to be almost genetic in nature, growing with each generation of development. They have to be measured against experience, rather than merely followed in sequence. Start them off small, as short binary scripts, like 011001. But that isn't all there is to it. Oh, no, they have to be mutative, meaning that they can change. What to do?

Take the binary scripts and integrate them with random number generators, where certain digits within the code are deemed "unimportant" or "variable", meaning that they can be changed through subsequent generations by way of trial and error. Like so:

Strict binary script: 01010110011101100011
Mutative binary script: 0##10###0111#1#00###
Where "#" is the mutative agent that can change.

A whole slew of binary strings presents itself to the AI matrix for approval. Each generation of strings can be just as viable as any other in this scenario, which can pose a serious problem: if you have too many differing variables, the system will continually generate binary strings and never decide on a viable agent, otherwise known as a "choice." It will just generate string after string, happy to do so, but not really doing anything, which leads to digital entropy and burnout. How to avoid this?

Assign each binary string a catalog number, as a sort of identifier. When a string is created, the AI matrix will "hold" that string and compare its identifier to others, selecting a number of suitable matches based on similarity, function, category and complexity. Once each string has been "harvested" by the AI matrix, the strings are compared against each other in an even more rigorous round of selection. What's happening here is that the AI is trying to determine which of the submitted strings from the best-suited catalogs are most reliable, plausible and relevant to the stimulus. It will run through the selected binary scripts, compare them and then come up with yet another, smaller selection of scripts. Perhaps as few as three or as many as a hundred. The next step is determining the strongest of the best selections.
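
I haven't pinned down how "similarity" is measured here, so take this Python sketch as one plausible reading, using plain positional agreement between strings (a Hamming-style match) and keeping only the top few candidates:

def similarity(a, b):
    # Count the positions where the two strings agree.
    return sum(x == y for x, y in zip(a, b))

def harvest(catalog, target, keep=3):
    # Rank cataloged strings against the target and keep the closest few.
    ranked = sorted(catalog.items(),
                    key=lambda item: similarity(item[1], target),
                    reverse=True)
    return ranked[:keep]

catalog = {1: "01010110011101100011",    # identifier -> string
           2: "01110110011001100011",
           3: "11010010011101110011",
           4: "00010110111101100010"}
print(harvest(catalog, "01010110011101100011"))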

The AI matrix is still holding these numbers in reserve, still in a state of indecision. It's never been faced with this experience or situation before. It has to make sure that it's at least going to try to make the best decision. To do that, it has to determine an absolute ON-OFF/good-bad/positive-negative "factor" for the stimulus. On a very minor scale, this is what a child does when first exposed to fire. The child reaches its hand into the flame and immediately gets a sensation: burning. In the frame of reference of an AI matrix, burning is a negative because it's destructive, which it is finding out very quickly, thus fire has an "OFF" designation or is assigned the absolute identifier of "0". If any of the binary scripts it's chosen lean towards "1" ("ON" or "positive"), then it discards them entirely and goes for the one that is closest to "0". It wants to find a solution whose value is equal to that of the negative stimulus. If the solution presented says that fire is good, which it obviously is not, then the AI matrix wants nothing to do with that solution. If, however, the solution presented recognizes fire as a negative thing, then the AI matrix will pay attention because its value corresponds to the negative stimulus.
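
As a sketch, one might proxy how strongly a script "leans towards 1" by its fraction of 1-bits; that measure is my own assumption, since nothing here dictates the exact arithmetic:

def valence(script):
    # Fraction of "1" bits: a stand-in for how positive a script leans.
    return script.count("1") / len(script)

def match_stimulus(scripts, stimulus_value):
    # Discard scripts leaning the wrong way, then take the one whose
    # valence sits closest to the stimulus's absolute 0/1 value.
    same_side = [s for s in scripts
                 if (valence(s) < 0.5) == (stimulus_value == 0)]
    pool = same_side or scripts          # fall back if nothing matches
    return min(pool, key=lambda s: abs(valence(s) - stimulus_value))

candidates = ["0001000100", "0111011101", "0000100000"]
print(match_stimulus(candidates, 0))     # fire = negative = "0"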

What happens then is that the AI matrix takes its determined binary instruction and plugs it into its own IF-THEN statement. It already knows part of the statement, due to the situation: Fire is burning the hand. The AI matrix's new IF-THEN statement looks like this: "If fire is burning hand, then 01010110011101100011", where the binary string could stand for "remove hand from fire" or, if it's advanced enough to understand relational concepts, "apply water to fire."
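
A sketch of that binding step, with a hypothetical lookup table giving the binary strings their meanings (the translations are invented for illustration, not taken from any real system):

actions = {"01010110011101100011": "remove hand from fire",
           "01110110011001100011": "apply water to fire"}

def build_rule(condition, action_string):
    # Bind the sensed condition to the chosen binary instruction,
    # yielding a brand-new IF-THEN statement.
    def rule(situation):
        if situation == condition:
            return actions.get(action_string, action_string)
    return rule

fire_rule = build_rule("fire is burning hand", "01010110011101100011")
print(fire_rule("fire is burning hand"))   # -> remove hand from fire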

The AI matrix then executes the IF-THEN statement, keeping the identifier for the chosen solution on record. If the burning sensation goes away and the general sense of well-being is brought back into equilibrium for the AI matrix, then it will store that identifier in its memory as a positive response to a negative situation. The identifier earns "points" for its value. On top of that, any other variable identifiers that contributed to the solution's development earn points, too. This way, if a similar situation arises again, the AI matrix knows which identifiers to call upon for another solution: the ones that have already proven themselves reliable and productive.
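
A minimal sketch of that point-keeping, assuming one point per success (the weighting is my own guess; readers may notice this resembles credit assignment in classifier systems):

scores = {}   # identifier -> accumulated points

def reward(solution_id, contributor_ids, points=1):
    # Credit the winning identifier and every identifier that fed
    # into it, so proven strings get called upon first next time.
    for ident in [solution_id] + list(contributor_ids):
        scores[ident] = scores.get(ident, 0) + points

reward(42, [7, 13])   # hypothetical identifiers
reward(42, [7])
print(scores)         # {42: 2, 7: 2, 13: 1}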

The process of learning is the key for a computer system to attain true intelligence. If the matrix already has a core set of knowledge, meaning that the programmer has already told it that fire burns, then it won't grow and will likely apply that knowledge inappropriately. It must learn from experience for its knowledge to have any worth or value.

Another key to learning for an AI is the economy of knowledge. Just like in a real-world economy, certain products will either prove themselves useful to the greater good or not. If they are deemed useful and productive, then they are assigned a value and kept in the market. If they are deemed useless or ineffective, then they are, through natural selection, discarded from the market and die off. The general populace doesn't use the horse-and-buggy mode of transportation because the car was developed and proved itself more useful and efficient than the horse-drawn carriage. Through the process of learning and causative evolution, an AI matrix does the same thing. One solution that worked early in its development may not apply anymore and eventually gets dropped from its instruction set because it's obsolete in comparison to greater experience and newer knowledge.
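
The market metaphor suggests a pruning pass over the score table from before; a toy sketch, with an assumed threshold of zero since all I've claimed is that the useless entries die off:

def prune(scores, floor=0):
    # Drop identifiers whose accumulated value has fallen to the
    # floor: the horse-and-buggy solutions.
    return {ident: pts for ident, pts in scores.items() if pts > floor}

market = {"remove hand": 9, "apply water": 4, "wave at fire": 0}
print(prune(market))   # the obsolete entry is discarded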
