With the increase in computing power, eventually this may become a possibility. What happens when you have the power to simulate every neuron, every hormone, ever little detail of a human brain, and the ability to scan a living person for the state of their mind, and use it as the starting state for a simulation? (See: brute force uploading)

Does this program have consciousness? Or is there something special about a person that the program can't copy but can pretend to have? If you ask it if it has consciousness, and it replies in the affirmative, is that good enough to say it does? Can you determine consciousness from external viewing? (For those curious, there is no known way to do so)

And can you then pause or stop the program, ethically? After all, if it is a perfect copy, then it thinks it's the person it copied, and aren't you then pausing or stopping a human consciousness? Is it ok to pause it, since you can restart it and it will never know? Or is the temporal shift considered harm? What if you just pause it, but never restart it? Is it dead?

The future may be strange and full of devious questions...


Woundweavr - I saw somewhere that they've pretty much proven that the neurons in the brain do not make use of quantum mechanics. Of course they're not 100% sure, but I saw something about it on sci.nanotech.

Isaac Asimov, the father of robotics, considered this problem and concluded that it will be possible to do this (using a science he called mentalics). He also considered robots to be intelligent beings, and possibly sentient as well, even if they are not patterned on humans.

On the second count I agree with him: it is possible to create intelligent, sentient machines. However (possibly because I am one generation up on him), I also believe that it is possible to create machines with true feelings and emotions. This is an interesting concept to toy with (hypothetically).

However, I disagree on the first count. The human brain is many times more complicated than any computer that will be invented in the near future, so mapping the human brain topologically is more or less impossible, for a few years at least. But don't quote me on that.

I don't understand why people seem to associate artificial intelligence with a sort of duplication of the human brain. It just seems like the wrong way to go about it, to me. I mean, why do we need a computer to duplicate a human brain? We can do that ourselves with some DNA and an egg.

The Turing test shows us that this is not the only way. You would have the intelligent machine and a human being, both hooked up to some sort of I/O mechanism. You would then alternate asking the two subjects questions. If you could not tell which one was the human, and which was the machine, then you had an AI on your hands.

People have been able to get close to that definition of intelligence for a long time with simple programs. Anyone remember Eliza, the computer therapist? You would ask her questions, or say some things about yourself, and she would analyze keywords in your statement, do a little randomization, and throw a response back at you.

Apparently, back at MIT where this program originated, people began to actually talk to her about their problems, as if she were a real therapist. If a program like this were written with proper complexity (maybe add in some heuristics and a little data storage so she'll customize herself to your personality and emotional needs), would it really be any less effective than an actual therapist? Or is it really the empathic effect of sitting next to a caring fellow human that benefits us the most?
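The keyword-and-template trick described above is simple enough to sketch in a few lines. This is not Weizenbaum's actual ELIZA code; the rules and responses here are invented purely for illustration.

```python
import random

# Keyword -> canned-response rules, in the spirit of an Eliza-style
# therapist. These particular rules are made up for this sketch.
RULES = [
    ("mother", ["Tell me more about your family.",
                "How do you feel about your mother?"]),
    ("sad",    ["Why do you think you feel sad?",
                "How long have you felt that way?"]),
    ("always", ["Can you give a specific example?"]),
]
DEFAULT = ["Please go on.", "I see. Tell me more."]

def respond(statement: str) -> str:
    """Scan the statement for a keyword, then pick a response at random."""
    lowered = statement.lower()
    for keyword, responses in RULES:
        if keyword in lowered:
            return random.choice(responses)
    return random.choice(DEFAULT)

reply = respond("I feel sad today")
```

That's the whole illusion: no understanding, just pattern matching and a little randomization, which is exactly why people projecting a caring therapist onto it is so striking.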

My point here, I guess, is that intelligence is in the eye of the beholder. If it walks like a duck, and talks like a duck, does it matter if it's not really a duck? In addition to that, if we're going to be synthesizing a duck, why not take that as an opportunity to improve upon the basic model?

Maybe what the human race needs is a little bit of redesigning. A version 2.0, if you will. Isn't that what evolution's all about? It doesn't look like we're going to change much physically anymore. The true human evolution is in machines. We're creating tools that can do things we could never hope to with our flesh. The meat is obsolete. Bring on the silicon.

Human being emulator + open source development =
We could all be a little more like Saige and Pseudo_Intellectual, instead of just wishing we were.
We could fix what's wrong with annoying noder's name here


Seriously, though, I'd use an argument analogous to the one Eric Drexler uses for the feasibility of nanotech (and heavier than air flight). "It's already been around for years."


In other words, the people who believed that heavier-than-air flying machines could not be built were wrong from the very start, because birds are heavier-than-air flying machines. People who believe that nanomachines cannot be built are wrong from the very start because ribosomes and viruses are nanomachines. Likewise, the existence of natural intelligence presents the strongest argument for the feasibility of AI (and from there, NI->AI uploads).
This should be possible, unless a certain current theory (or almost any religion) is true.

A theory that has been gaining acceptance states that human consciousness is derived from quantum mechanical interactions inside each neuron's microtubules within the brain. I am fond of this theory if only because it denies the logical endpoint of thought as a derivative of purely Newtonian physics.

The brain without quantum mechanics interfering would be a complex machine only able to react to stimuli, each reaction caused by something else, and so on. This would suggest that everything is predetermined by one initial action at the universe's creation, or by Fate or God.

However, if the quantum microtubule theory is true, then there is free will. It would also make simulation of a human brain difficult, if not impossible. A quantum computer would be required, and even then, the seeming randomness of quantum states may preclude any type of artificial "soul".

Happen not gonna.

Well, at least probably not anytime in the near future unless there is a sudden major, and I mean MAJOR, advance in our technology. I'll get back to the problems of creating a human-like intelligence in a minute.

First I want to address the point that part of the problem when this issue comes up is that we don't really have a rigorous definition of intelligence. For the most part, we tend to use the term to refer to human-like thought processes, whatever that may entail. (For a human to be unable to tell that a machine is a machine by talking to it does not make it intelligent. If I create a hologram that looks and moves like a human to the extent that a human can't tell it is a hologram and not a human body, that does not mean it can do the things a human body can do. The mind is fallible. It can be tricked.) That being said, it's a good enough definition for the moment. Artificial intelligence would then be some human-designed intelligence which could reason and think in almost all of the same ways humans can.

Computing speed has nothing to do with our ability to create or model intelligence, at least not on the orders of magnitude that we currently deal with. Trying to compare the capabilities of a computer to a human brain, or any brain for that matter, is like comparing rocks and toilet paper. They are totally different systems, even though they seem to have similarities. Computers and humans can both perform logical operations. Truthfully, that's about where the similarities end. Computers function through a series of basic mathematical operations. Logic only. These operations occur in the hardware, and are utilized by software. I'm not an expert on exactly how silicon chips function in computers, but basically tasks are run more or less one at a time. Only one program gets to use the CPU at any given moment. Also, everything a program does has to be translated into mathematical operations. Okay, so that's enough on computers. I imagine most of the geeks out there have a general idea of what's going on in that box in front of them, at least enough to understand how radically different a system the human brain is. Speaking of which...

First off, in the brain, the ideas of hardware and software make no sense. The hardware is the software, or probably more accurately, there is no software. Basically, this is how the brain works:

Warning! You are about to enter Extremely oversimplified information from a still developing field!

Okay, so, the brain is made of neurons (and other things which are not exactly relevant to this discussion), individual cells which can receive and transmit signals in the form of electrical impulses. A neuron receives input from a given set of other neurons, and then either produces an output to other neurons, or not, depending on the strength of the inputs it receives. Now, these connections with other neurons are the important part. The manner in which neurons are connected allows them to integrate and process information. Also, the individual morphology of each neuron affects the way in which the information it receives is integrated and possibly formed into an output. So, connections between neurons, and individual neuron morphology, are the basic tools for information processing in the brain. Just wanted to be clear on that. Now. There are roughly 100 billion neurons in the human brain, and about 100 trillion interneuronal connections. Now, before you just say, "wow, that's a lot," do some math. 100 trillion divided by 100 billion equals about 1,000 connections per neuron. Okay, that's a pretty impressive machine, right?
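The fire-or-don't-fire behavior described above is the classic threshold-neuron abstraction, and it fits in a few lines. The weights and threshold below are made-up illustration values, not anything measured from a real neuron.

```python
# A toy threshold neuron: sum the weighted inputs and fire (output 1)
# only if the total clears a threshold. Negative weights play the role
# of inhibitory connections.
def neuron_fires(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two excitatory inputs push the neuron over threshold (0.6 + 0.5 = 1.1)...
a = neuron_fires([1, 1, 0], [0.6, 0.5, -0.4], 1.0)  # fires: 1
# ...but adding an inhibitory input keeps it silent (1.1 - 0.4 = 0.7).
b = neuron_fires([1, 1, 1], [0.6, 0.5, -0.4], 1.0)  # silent: 0

# And the back-of-the-envelope fan-out figure from the paragraph above:
fan_out = 100_000_000_000_000 // 100_000_000_000  # 1,000 connections per neuron
```

Real neurons are far messier than this (that's the "extremely oversimplified" warning in action), but the sketch shows why the connections and their strengths, not the cell bodies, carry the computation.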

But wait, there's more!

Information processing circuits in the brain run in parallel. Essentially this means that everything can run at one time. Auditory information is processed at the same time as visual information. Heart rate is regulated while you do your math homework. There is no waiting to use the CPU. Another important point is that there is no fundamental system of neural processing which all brain functions use. When a computer processes two different types of input, in both cases what is going on boils down to one basic set of mathematical operations, performed in a different manner for each type of input. In the brain, different types of input are processed in totally different ways. Hardware architecture does the processing, and the hardware architecture for each unique brain function is essentially unique. And the hardware can restructure itself (within limits) to accommodate new situations. It's called learning. Oh yeah, and this same hardware that does all this processing stores information too. (How exactly that works is still pretty fuzzy. Well, actually, how exactly a lot of the nervous system works is pretty fuzzy, but the basic ideas seem to be sound.)
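The "hardware restructures itself" idea has a textbook toy version: a Hebbian-style rule where a connection gets stronger whenever the neurons on both ends are active together. The learning rate and starting weight here are arbitrary illustration values.

```python
# A minimal Hebbian-style sketch of learning as rewiring: the connection
# weight itself changes, so the "hardware" and the stored information
# are the same thing.
def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen the connection only when both neurons fire together."""
    if pre_active and post_active:
        weight += rate
    return weight

w = 0.2
for _ in range(5):                    # five co-activations...
    w = hebbian_update(w, True, True)
# ...leave a noticeably stronger connection: w is now 0.7
```

Contrast this with a computer, where learning would mean writing new data into memory that is entirely separate from the circuitry doing the processing.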

Now, I'm not saying that it's impossible for some other machine to do all of the things a human brain does. I think it is most definitely possible, and I also think that the work currently being done in the field of artificial intelligence is great. We get a lot of useful insights and tools from it. Someday, I think humans will be able to create artificial intelligence (if we manage to stay alive long enough), but it won't be running on any machine that even remotely resembles our current computers. No, the machine doesn't have to model every neuron in the brain, as Saige suggests. In fact, it had better not, because that is probably about the hardest way to create an artificial intelligence. Not only is it a friggin' GARGANTUAN amount of information, but you also stick your head directly into the thick of chaos theory when you try to model a system that detailed and complex.

My point is that the human brain is a machine built and refined by evolution. Evolution does not, in general, tend to create machinery which is unnecessarily complex. Our brains are this complicated for a reason. It's hard to process all the things that we process. I think we're pretty damn far from having a machine that can do the equivalent.

Back to How your brain works


Some notes/disclaimers:

The idea of quantum mechanics playing a significant role in information processing in the brain is just silly.
I'm not a neuroscientist (yet. Gimme about 5 years.)
Please argue with me if you know a lot about AI and feel so inclined.
