There is a continuum in cognitive science between explicit and
implicit knowledge. Explicit knowledge is held as data, as information
stored as such in the mind -- "knowing that." Implicit knowledge
is "knowing how" -- knowledge of how to go about doing something,
knowledge that we may or may not be able to describe explicitly. For instance,
if I ask you: "How do you breathe?" you may have no explicit
idea how you do it, but you nonetheless continue breathing. Depending on how
far along the scale something is, implicit knowledge may be converted into
explicit, and vice versa. This often happens in learning -- we are told
in words how to do something, like writing HTML, and these explicit rules
are eventually converted into implicit habits and actions, like instinctively
adding <p> tags before a new idea.
But just how far does this scale take us? We know how to read and write
-- this is an implicit thing that we were taught explicitly. Do we know
how to breathe? Probably. Let's go deeper: Do we know how to beat our heart?
How to digest our food? It's getting fuzzy. Do we know how to move the electrons
in the atoms in our body? This is not as silly a question as it may
sound. There is nowhere, along this route towards deeper and deeper
levels of implicitness, where we can stop and say for sure: "This
is where it stops being something we know how to do and starts being
something that just happens."
What does this mean? It can be interpreted in a lot of ways. Some scientists
would say that the above paragraph illustrates nothing more than a small
problem with the idea of implicit knowledge. Some mystics would say that
it illustrates a deep problem with the way we define self. How you interpret
it is up to you. But consider this:
We know from the Heisenberg Uncertainty Principle that it is impossible
for an experimenter to measure both the position and the momentum of a given
electron simultaneously with complete accuracy. But the electron knows. It moves happily
along, bumping into things with a certain force and from a certain direction,
without once stopping to calculate either of these values. It knows implicitly.
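For precision's sake, the relation that paragraph leans on is usually stated like this (a standard textbook form, included here only for reference):

```latex
\Delta x \, \Delta p \ge \frac{\hbar}{2}
```

where \(\Delta x\) and \(\Delta p\) are the uncertainties in position and momentum, and \(\hbar\) is the reduced Planck constant. Note that this bounds what can be determined about the electron; it says nothing about what, poetically speaking, the electron "does."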
This all links up with Kurt Gödel's Incompleteness Theorem. Some
systems are stupid enough that they cannot be Gödelized. Just as the
electron is stupid enough that the Uncertainty Principle doesn't apply to
it. Just as the homunculus problem is solved by reducing the homunculi
to a certain level of stupidity.
Interesting, isn't it, that there are some things that you can only know
if you are stupid enough to see them.
Ariels: Well, you are probably right. But I wasn't making a scientific statement -- I hope you have succeeded in correcting anyone's interpretations along those lines. I was just playing with a parallel. And I think it's a good one. The electron doesn't really know any more than clouds really fly. But it's still fun to think about.
And to respond to the Gödel thing: I wasn't talking about a physical problem in this node -- I was talking about a logical one -- read it again, with that in mind, to see what I mean. And in this case, Gödel and the homunculus problem really do apply. It's really a matter of semantics, whether something knows or is.