This is an argument in the philosophy of mind which aims to show that consciousness is not fully described by a description of a brain-state, however detailed, whereas a brain-state is (asymptotically) so describable, and that therefore a conscious experience is not a brain-state. In other words, the argument is aimed against the physicalist mind-brain identity theory, which holds the converse, and against the strong computationalist variants of it that have emerged, which hold that a pattern of information of a certain form and level of complexity is sufficient for consciousness.
The argument is usually given by considering questions like:
"How do I know your experience of red is not qualitatively more similar to my experience of violet than to my experience of red?"
"How do I know that the spectrum you see is not the same as the spectrum I see, but inverted?"
Note that nothing in the argument presupposes that we do not agree on testable data such as the frequency of red light, the names for the various colours, etc.; at issue is only our direct experience of the colours - the nature of the so-called qualia.
Because of the difficulty of constructions such as 'my experience of red', and also to avoid the question of minor physiological details that might affect our perception of colour, I prefer the argument in a slightly different form.
Suppose we construct an impossibly high-powered computer that can model, in as much detail as we like, the structure of a living brain. We then model the effect of red light hitting the retina to which the model of the optic nerve is connected, and ask the question: how can we extract from our model a description of the ensuing experience which is sufficient to convince us that the modelled experience of 'red' is like (or unlike) our own?
I'm not prejudging the question of whether the model has had a real experience of red, qualia and all.
The point is that we stipulated that all the physical information about the brain is in the model, and yet we can't get a determination of the brain's "qualia" out of the model. What would satisfy us? A set of figures? Using those figures to put something on a screen? Running the model through a conversation and having it tell us?
Consideration of the attendant difficulties of describing our own experience of red in order to meet this criterion will show, I think, the implausibility of extracting such information from our model.
Something about "what red is like" seems to resist capture by the means we ordinarily use to communicate information, so that we need to have seen red things ourselves in order to successfully dereference "what red is like". In the case of our computer model, this seems to nullify the apparent advantage of having all the information we could wish for available at our fingertips.
The proponents of strong computationalism will sometimes at this point mention the possibility of psycho-physical laws
and attempt to defer the question for consideration by some future science. Presumably this would be called psycho-physics, or perhaps neuro-qualiology; but whatever the name, it doesn't exist at present.
That approach begs the question of how to bridge the gulf in describability between a brain-state and a conscious experience. Brain-states, like any 'physical' (or informational) state of affairs - spatial arrangements, neural firing patterns, and measurable properties of photons are other examples - are in principle describable in the formal way required (because in this sense physics is no less a formal discipline than computer science), in as much detail as we like (and our knowledge of physics/IT will allow), whereas the familiar components of our conscious experience are not. (Try it!)
In my view, this version of the inverse spectrum argument is not a demonstration of the existence of some weird nonphysical mental substance or ontologically unique feature of mental events, though it is often taken as such.
The real import, it seems, is that the existence of that which is not exhaustively describable by formal systems (such as the mathematics used in physics and computer science) presents an immediate knockdown argument against naive physicalism and its computationalist cousins, which seem bound to hold that a formal approach can be ontologically complete. Rather than "mental substance", this view merely implies that descriptions of "what is" available through formal means give out at a certain point, and might be better thought of as exhaustive about one aspect of existence than about existence per se.
The only convincing counter-argument I can conceive would be a design for a system which could in principle enable someone who has never seen to know what it is like to experience colour - and to convey this in an exact way, which we don't seem able to manage even for sighted people, without reference to samples.
Of course, you can describe in detail the mathematical properties of the colour space, the physics of light, the behaviour of the neurons - any causal property you can name. Is it realistic to expect that this type of description would suffice?
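To make the contrast concrete, here is a minimal sketch of the kind of formal description that is available. The coordinate and wavelength values are illustrative assumptions, not claims about any particular colour standard; the point is that every relational fact about the colour space is computable, yet none of the numbers says what red looks like.

```python
import math

# A toy formal description of colour: RGB coordinates and dominant
# wavelengths in nanometres (illustrative values, assumed for the sketch).
colours = {
    "red":    {"rgb": (255, 0, 0),   "wavelength_nm": 700},
    "yellow": {"rgb": (255, 255, 0), "wavelength_nm": 580},
    "violet": {"rgb": (148, 0, 211), "wavelength_nm": 400},
}

def rgb_distance(a, b):
    """Euclidean distance between two named colours in RGB space."""
    pa, pb = colours[a]["rgb"], colours[b]["rgb"]
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(pa, pb)))

# Every relational, causal fact is on tap: distances, orderings,
# wavelengths, similarity structure...
print(rgb_distance("red", "yellow"))  # → 255.0
print(rgb_distance("red", "violet"))
# ...yet nothing in these figures conveys what red is like to see.
```

The model of the brain imagined above would hand us vastly more such figures, but figures of the same formal kind.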
The writeup below presents one contrasting view. Regarding psychophysics, I'm inclined to ask: "intensity of what?" The reply "you'd have to ask them" essentially concedes that our "psychophysics" doesn't provide the information that we're after, else why would we have to ask them? The sort of psychophysics needed should enable us to go from the brain-states to the "qualia" directly.
Gritchka gives a pleasing summary of the orthodox position as regards an invertee, in color does not exist:
If their pulse goes up in a bright red room, and mine does
too, if their threshold of perception is the same (say it's easier to detect a very faint red than a very faint yellow)... concede all the physiological inwardness, all
the brain states, and there's nothing left [to invert].
But tell us of this inwardness, Gritchka! I'm not sure I see a physiological sense defined in that node. (Of course, the mysterious disappearance of "what there is to invert" - something that we know damn well is there - once we restrict our search in the way G. suggests is exactly what my argument points out; so, again, the objection fails.)
There was also an interesting writeup by rabidcow in color does not exist, maintaining:
Color exists in the same sense that "inches" exist.
I have some sympathy with this, but would add "and inches exist in the sense that objects have extension".