In my opinion, artificial
consciousness is not
artificial intelligence. Artificial intelligence is a field of
computer science that aims to bestow upon a machine abilities that were previously solely the domain of human beings: for instance, the ability to
converse, play
chess, or make complex, high-level decisions. These are tasks that we would not expect an
inanimate object or a lesser life form (possibly excluding the more developed
primates) to be able to perform.
Consciousness is different. Consciousness is the ability to know that you are. If you are aware of your mental processes, and can monitor and reason about them, you can be considered conscious. Observing animals suggests that mammals are almost certainly conscious, at some level.
A dog, for instance, may be aware of what it is thinking, but only in terms of its own experience: images, sounds, smells, and so on. It is unlikely to think "I am hungry", but it can think of where its bowl is and whether there is any food in it at the moment.
The advantage that humans have over other species is a high-level language that can be used to describe and categorise abstract concepts. We have an 'inner voice' that we use to think about concepts in the same language that we use to communicate with other members of our species.
Whether this can be recreated in a man-made object is unclear; so is how we would tell whether the creation was conscious. We couldn't just build such a machine, turn it on and ask it "What is the answer to Life, the Universe and Everything?", for the same reason we couldn't ask that of a dog, or even a newborn baby. To be conscious of our world, it would need the ability to observe and interact with it in the same way we do. It would need language, vision, motion, etc. in order to develop an awareness of the world we are aware of. It would need to learn in the way we learn, to gain the experience and knowledge necessary to interact with our world effectively.
However, we could make a machine aware of its own world. It might be possible to write a software agent that knows where it is in a network. It might be able to interact with its surroundings, reason about them, and alter them to suit its purpose. But this brings us back to the question of whether the machine is actually conscious, since it would be conscious in a different sense to the one that we know.
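As a loose illustration of the kind of self-situated agent described above, here is a minimal sketch in Python. The class name and its behaviour are my own invention, and it demonstrates only environment-awareness, not anything resembling consciousness:

```python
import os
import platform
import socket


class SituatedAgent:
    """A toy agent that can report facts about its own situation.

    This is only environment-awareness, not consciousness: the agent
    holds a model of where it is, but has no awareness that it is
    doing the modelling.
    """

    def observe(self):
        # Gather simple facts about the agent's "world": the host it
        # runs on, the operating system, and its working directory.
        return {
            "host": socket.gethostname(),
            "os": platform.system(),
            "cwd": os.getcwd(),
        }

    def reason(self):
        # "Reasoning" here is just a trivial rule applied to the
        # observations: can the agent modify its surroundings?
        world = self.observe()
        world["can_modify_cwd"] = os.access(world["cwd"], os.W_OK)
        return world


agent = SituatedAgent()
print(agent.reason())
```

The point of the sketch is the gap it exposes: the agent can answer questions about its world, yet nothing in it corresponds to knowing that it exists.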
There is much debate as to whether artificial consciousness is possible, or even desirable. There would almost certainly be moral issues with creating something that is aware of itself. Would we have the right to make it perform tasks of our choosing, which could constitute slavery? Would we have the right to turn it off, which could amount to murder?
This is my view on the subject. Part of me hopes that it is possible, but many leading figures, notably Roger Penrose, claim that it isn't.