Subj: Good Feelings.
Date: 10/1/99 12:01:12 PM Pacific Daylight Time
From: (Victor S. Johnston)

Hi Mike, First let me congratulate your students on their excellent
questions..I wish my students were equally perceptive. Also, I am very
happy to see young minds thinking about these important issues. I have no
doubt that the process is worth the effort, and they will be better prepared
to make a major contribution to our understanding of human nature. Here are
your excellent questions, and my answers.

1. Given that the "sniffer" program can work without
consciousness, why is consciousness needed? That
is, if hedonic tone can be represented mathematically, or
symbolically, in a purely unconscious algorithm, consciousness
seems to be redundant, or irrelevant. Why should evolution go to the
trouble of generating consciousness if its functionality can be
modeled with numbers?

A: The answer to this question is "cleverly" hidden away in end-notes 22 and
25. Here they are together. I think they address the question.
The reader should be aware that Sniffer's feelings are merely a simulation.
The only reason why the model works is that a conscious being (the author)
assigned a meaning to the numbers being manipulated by the computer program.
Indeed, without such an assignment of meaning by a conscious being, the
Sniffer program, and every other computer program for that matter, computes
nothing at all. Complex biological organisms possess real feelings, not
simulated feelings, so meaning is an inherent attribute of their conscious
experience.

Without feelings - affects and emotions - the world around us is a
meaningless conglomerate of energy/matter, and it cannot provide the
source of meaning required for a computational theory of mind. Indeed, this
is the central problem with all computational theories of mind that treat
the human brain as a general-purpose computer. A computer does not compute
anything unless a human operator assigns a meaning to the different states
of the system; meaning is not an inherent property of algorithms. Computers
simply change states according to programmed rules but unless someone
assigns a meaning to these symbolic states, then no computation is possible.
Who or what assigns a meaning to brain states? In his computational theory
of mind, Pinker has proposed that the "solution" to this problem rests in
the "causal" and "inferential roles" of symbols. He argues that symbols can
stand for things when "the unique pattern of symbol manipulations triggered
by the first symbol mirrors the unique pattern of relationships between the
referent of the first symbol and the referents of the triggered symbol." Of
course, by this inferential role argument, a symbol cannot acquire meaning
by triggering other symbols if the latter have no meaning in the first
place. It doesn't matter whether an internal pattern of symbols is
isomorphic with the world or not, if neither the patterns nor the symbols
have any inherent meaning. Pinker uses the "causal" argument to provide
meaning for the meaningless internal symbols. This argument states that "a
symbol is connected to its referent in the world by our sense organs." Here
Pinker implies that the meaning of an internal symbol can be supplied by its
relationship with the external world. The problem, however, is that meaning
cannot be acquired by association with the external world because there is
no inherent importance in the events in the external world and no inherent
purpose or intention in the laws of physics or chemistry that govern the
behavior of such events. When a rock falls, there is no desire, intent, or
purpose in its behavior and the event itself possesses no inherent value or
importance. So even if humans possessed a symbolic mental world that was an
exact mirror image of the external world it would still be meaningless
because the external world is without meaning. Meaning can only arise
through the evolutionary process whereby conscious emergent properties of
the brain - affects and emotions - come to reflect the importance of
physical or social events that have consistently enhanced, or posed a threat
to, biological survival; such events have inherent value.
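The endnotes' point about assigned meaning can be made concrete with a toy sketch. This is not Johnston's actual Sniffer program; the stimulus names and hedonic values below are invented for illustration:

```python
# A toy "sniffer": hedonic tone is just a number, and the number means
# nothing until the programmer decides what it stands for.
# (Hypothetical stimuli and values -- not from the actual Sniffer program.)
HEDONIC_VALUE = {"sugar": 1.0, "salt": 0.2, "quinine": -0.8}

def hedonic_tone(stimulus):
    """Return the programmed 'feeling' for a stimulus.

    The float itself carries no inherent meaning; only the programmer's
    labeling convention (positive = pleasant, negative = unpleasant)
    makes it a 'feeling' at all.
    """
    return HEDONIC_VALUE.get(stimulus, 0.0)

def respond(stimulus):
    # The program "approaches" or "avoids" purely by rule.
    # No experience occurs anywhere in this computation.
    return "approach" if hedonic_tone(stimulus) > 0 else "avoid"
```

Nothing in the program distinguishes "hedonic tone" from any other float; relabel the dictionary "temperature" and the same state changes occur, which is exactly the point of the endnotes.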

2. If consciousness is an emergent property that results
from the interactions of component parts of a system,
then are all emergents conscious? For example, acceleration
is an emergent -- is the car consciously aware of acceleration?
If not, then why do some emergents result in consciousness,
and others don't?
A: Not at all. Digestion is an emergent property that depends on the
interactions between many components. Digestion is not a conscious
experience. Conscious experiences, as we know then, appear to arise from
the interactions of nerve cells. The second part of your question is more
difficult to answer. An equivanent question would be "why does water (H2O)
form ice at zero degrees (an emergent property) whereas hydrogen sulfide
(H2S) does not. Clearly the answer depends upon the interactions between
the components, but we still don't know what specific types of interactions
give rise to conscious experiences. Maybe your generation will be able to
explain the "how" question, which is the second part of the "hard problem of
consciousness." In the words of David Chalmers "It is widely agreed that
experience arises from a physical basis, but we have no good explanation of
why or how it so arises". My book, I believe, addresses the "why"
question..maybe you can solve the "how" question.

I regret to say that my software is not a commercial product; it has been
patented by the university, which intends to use it on a web site (generating
faces for police forces throughout the world). However, if you get a
copy of Goldberg's book (see reference in my book) it will provide you with
pascal code for a genetic algorithm. It is not difficult to go from there
to a working FacePrints program.
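A minimal sketch of the kind of genetic algorithm Goldberg's book covers, written here in Python rather than his Pascal (the parameters and the "max ones" fitness function are illustrative stand-ins; in a FacePrints-style program the fitness would be a witness's rating of the face a bit string encodes):

```python
import random

def evolve(fitness, length=16, pop_size=30, generations=60, rate=0.05):
    """Minimal genetic algorithm over bit strings.

    fitness: callable mapping a list of 0/1 bits to a score. In a
    FacePrints-style program this would be a human rating of the face
    the string encodes; here any scoring function will do.
    """
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the better half as parents.
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)          # one-point crossover
            child = a[:cut] + b[cut:]
            # Flip each bit with small probability (mutation).
            child = [bit ^ (random.random() < rate) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Example: evolve toward the all-ones string using "count the ones"
# as a stand-in for a human rating.
random.seed(1)
best = evolve(fitness=sum)
```

The FacePrints idea is simply to replace `sum` with a person's preference ratings, so the population of encoded faces evolves toward the face the rater has in mind.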

Keep up the good work..send more questions, and most of all, enjoy exploring
new ideas. Victor.

Be a volunteer in our beauty study. Go to
