virus: Lakoff lecture part 2

Eva-Lise Carlstrom (eva-lise@efn.org)
Wed, 12 Mar 1997 00:51:31 -0800 (PST)


Another source of evidence is the study of basic-level categories (Rosch
and Berlin). We can readily picture a generalized 'chair' or a generalized
'car', but not a generalized 'piece of furniture' or 'vehicle'. This is
because we have motor patterns for 'sitting in a chair' and 'driving a
car' but not for 'interacting with a piece of furniture' or 'utilizing a
vehicle'. The basic-level concept is motor-based.

Another is spatial relations (Len Talmy). Languages have different
spatial relations systems. English has lots of prepositions: on, around,
near, through, with, between, over, etc. Mixtec doesn't have a word for
'on' (the English word covering the combination of contact, verticality,
and support). Instead, for "The cow is standing on the hill", in Mixtec
you say "The cow is standing the hill's head". For "The cat is sitting on
the house", you say "The cat is sitting the house's animal-back". When we
talk about something being 'in front of' an object like a glass, which has
no natural front, we project a 'front' onto the glass based on its
orientation to us. All languages have spatial relations systems, and they
all involve certain key elements such as paths, contact, parts and wholes,
orientation, boundaries, etc., but different languages use different
combinations. They all derive from bodily experience.
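
To make the idea of primitives concrete (this is my own toy sketch in
Python, not anything from the lecture, and the feature names are my own
guesses), you can treat a spatial term as a bundle of primitives, and a
language as a particular way of bundling them:

    # Toy sketch: a spatial term as a bundle of primitives (the feature
    # names are illustrative assumptions, not an actual analysis).
    ENGLISH_ON = {"contact", "support", "verticality"}

    def term_applies(term, scene):
        """A term fits a scene if the scene shows all of the term's primitives."""
        return term <= scene

    scene = {"contact", "support", "verticality"}   # cow standing on a hill
    print(term_applies(ENGLISH_ON, scene))          # True: English says 'on'

    # Mixtec bundles things differently, projecting body parts onto the landmark:
    def mixtec_on(figure, landmark, body_part):
        return f"{figure} is standing {landmark}'s {body_part}"

    print(mixtec_on("The cow", "the hill", "head"))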

Terry ___ {sorry, I missed the surname} did research in neurocomputation,
on the question "how can neurons compute meanings of spatial terms?" He
showed bitmaps of shapes in various relationships to neural nets, and
trained the neural nets to learn the spatial terms of various languages.
The only way he could do this, he found, was by providing the neural nets
with structures analogous to those in the human brain that represent
topologies, preserving the layout of scenes as if projected on a screen in
the back of the brain.
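
I can only guess at the details, but a minimal sketch of that kind of
setup (my own reconstruction, with a made-up two-word vocabulary and toy
scene sizes, not the actual model) would keep the scene as
layout-preserving bitmaps, one map per object, and train a small
classifier to attach spatial terms to them:

    # Minimal sketch (my reconstruction, not the model from the lecture):
    # scenes are layout-preserving 2-D maps, one per object, and a tiny
    # linear classifier learns to name the relation between them.
    import numpy as np

    TERMS = ["above", "below"]                  # toy vocabulary (assumed)

    def make_scene(figure_pos, ground_pos, size=8):
        """Two aligned 2-D maps, preserving the spatial layout of the scene."""
        fig, gnd = np.zeros((size, size)), np.zeros((size, size))
        fig[figure_pos] = 1.0
        gnd[ground_pos] = 1.0
        return np.concatenate([fig.ravel(), gnd.ravel()])

    X = np.array([make_scene((1, 4), (6, 4)),   # figure above the ground
                  make_scene((6, 4), (1, 4))])  # figure below the ground
    y = np.array([0, 1])                        # indices into TERMS

    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.1, (X.shape[1], len(TERMS)))
    for _ in range(200):                        # a few steps of gradient descent
        scores = X @ W
        probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
        W -= X.T @ (probs - np.eye(len(TERMS))[y]) / len(y)

    print(TERMS[int(np.argmax(X[0] @ W))])      # recovers 'above' for scene 0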

We use spatial relations in our reasoning (for instance: If A is inside B,
and B is inside C, then by the nature of the spatial relationship involved
I know that A is inside C). Therefore, our reasoning is embodied--based
on the fact that we have bodies and brains of the kind we do.
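
Put as a toy inference (my gloss, not Lakoff's, using nested rectangles
to stand in for the containment schema):

    # Toy version of the containment inference:
    def inside(inner, outer):
        """True if box `inner` = (x0, y0, x1, y1) lies entirely within `outer`."""
        return (inner[0] >= outer[0] and inner[1] >= outer[1] and
                inner[2] <= outer[2] and inner[3] <= outer[3])

    A, B, C = (2, 2, 3, 3), (1, 1, 4, 4), (0, 0, 5, 5)
    assert inside(A, B) and inside(B, C)
    assert inside(A, C)   # the transitivity falls out of the relation itself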

More examples:

Aspect. In English we use auxiliary verbs to express aspect (the finished,
one-time, ongoing, or repetitive quality of a verb): "he is running", "he
runs", "he has run". Can a computer program learn to associate the right
aspect (in language) with visual images of motions taking place?
Researchers on this task used a program that encoded a full mapping of body
parts (muscle by muscle). Verbs have a natural event structure; some have
an endpoint ("pick up"), some iterate ("tap"), some are ongoing ("run").
The researchers found they needed a model of motor control with that event
structure built in. In the course of achieving that for motor actions,
they found a system that worked for non-motor verbs as well.
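
As I understood it, the point is that the event structure does the work.
A toy encoding of that idea (my own, with made-up names, not the
researchers' model):

    # Toy sketch: each verb carries an event structure, and the aspectual
    # interpretation reads off it.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class EventStructure:
        has_endpoint: bool   # telic: "pick up" is done once the object is held
        iterates: bool       # punctual and repeatable: "tap"

    VERBS = {
        "pick up": EventStructure(has_endpoint=True,  iterates=False),
        "tap":     EventStructure(has_endpoint=False, iterates=True),
        "run":     EventStructure(has_endpoint=False, iterates=False),
    }

    def aspect_of(verb, still_in_progress):
        """Map a verb's event structure and time course to a coarse aspect label."""
        ev = VERBS[verb]
        if still_in_progress:
            return "iterative" if ev.iterates else "progressive"
        return "completed" if ev.has_endpoint else "ceased"

    print(aspect_of("tap", True))       # iterative  ("he is tapping")
    print(aspect_of("pick up", False))  # completed  ("he has picked it up")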

Framing. Charles Fillmore published on "semantic fields"--clusters of
associated words related by a frame, such as [waiter, menu, tip, table],
in the 'restaurant' frame. We have frames we use unconsciously,
understanding the background from the assumed frame. For instance, if I
say, "We went to a restaurant, and after half an hour the waiter brought
our food", you assume that we had ordered the food, from your
understanding of the restaurant frame. This is not done consciously.
In fact, most of our reasoning is unconscious.
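
A crude way to put the frame idea in code (purely illustrative; the role
and default lists are mine, not Fillmore's):

    # A frame as a set of roles plus default inferences that fill in the
    # unmentioned background.
    RESTAURANT_FRAME = {
        "roles": ["customer", "waiter", "menu", "table", "food", "tip"],
        "defaults": [
            "the customer was seated at a table",
            "the customer ordered the food from the menu",
            "the customer will pay, and probably tip",
        ],
    }

    def understand(utterance, frame):
        """Return the utterance plus the background the frame silently supplies."""
        return [utterance] + ["(assumed) " + d for d in frame["defaults"]]

    for line in understand("after half an hour the waiter brought our food",
                           RESTAURANT_FRAME):
        print(line)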

Prototypes. Some things are better examples of a category than others.
When someone says 'bird' we usually think of a small songbird type. If I
say "There's a bird on the porch" and you look out and see an ostrich you
will feel somewhat misled. The prototype effect has to do with us, not
the world.

All these results conflict with traditional ideas about people and reason.

(more in part 3)