Church of Virus BBS: Evolution and Memetics
  Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong
Hermit (Archon)
Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong
« on: 2006-11-01 16:10:29 »

An Evolutionary Theory of Right and Wrong

[Hermit: From the review, it seems to me that this book possibly supports some very virian stances, particularly the class of position articulated at [link missing]. A caveat: the Publishers Weekly review (on the Amazon site) suggests that this is "a study that is by turns fascinating and dull." That review also speaks of "universal absolutes," for example "incest," the reality of which is based purely on assertion and is contradicted by evidence; but it is not at all clear to me whether this speaks to the contents of the book or of the reviewer's skull. If anyone manages to read it, a review from a Virian perspective would be appreciated.]

Source: New York Times, Books on Science
Author: Nicholas Wade
Dated: 2006-10-31
Review of:  
"Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong", Marc Hauser, 2006-08-22, Ecco, ISBN 0060780703


Who doesn’t know the difference between right and wrong? Yet that essential knowledge, generally assumed to come from parental teaching or religious or legal instruction, could turn out to have a quite different origin.

Primatologists like Frans de Waal have long argued that the roots of human morality are evident in social animals like apes and monkeys. The animals’ feelings of empathy and expectations of reciprocity are essential behaviors for mammalian group living and can be regarded as a counterpart of human morality.

Marc D. Hauser, a Harvard biologist, has built on this idea to propose that people are born with a moral grammar wired into their neural circuits by evolution. In a new book, “Moral Minds” (HarperCollins 2006), he argues that the grammar generates instant moral judgments which, in part because of the quick decisions that must be made in life-or-death situations, are inaccessible to the conscious mind.

People are generally unaware of this process because the mind is adept at coming up with plausible rationalizations for why it arrived at a decision generated subconsciously.

Dr. Hauser presents his argument as a hypothesis to be proved, not as an established fact. But it is an idea that he roots in solid ground, including his own and others’ work with primates and in empirical results derived by moral philosophers.

The proposal, if true, would have far-reaching consequences. It implies that parents and teachers are not teaching children the rules of correct behavior from scratch but are, at best, giving shape to an innate behavior. And it suggests that religions are not the source of moral codes but, rather, social enforcers of instinctive moral behavior.

Both atheists and people belonging to a wide range of faiths make the same moral judgments, Dr. Hauser writes, implying “that the system that unconsciously generates moral judgments is immune to religious doctrine.”

Dr. Hauser argues that the moral grammar operates in much the same way as the universal grammar proposed by the linguist Noam Chomsky as the innate neural machinery for language. The universal grammar is a system of rules for generating syntax and vocabulary but does not specify any particular language. That is supplied by the culture in which a child grows up.

The moral grammar too, in Dr. Hauser’s view, is a system for generating moral behavior and not a list of specific rules. It constrains human behavior so tightly that many rules are in fact the same or very similar in every society — do as you would be done by; care for children and the weak; don’t kill; avoid adultery and incest; don’t cheat, steal or lie.

But it also allows for variations, since cultures can assign different weights to the elements of the grammar’s calculations. Thus one society may ban abortion, another may see infanticide as a moral duty in certain circumstances. Or as Kipling observed, “The wildest dreams of Kew are the facts of Katmandu, and the crimes of Clapham chaste in Martaban.”

Matters of right and wrong have long been the province of moral philosophers and ethicists. Dr. Hauser’s proposal is an attempt to claim the subject for science, in particular for evolutionary biology. The moral grammar evolved, he believes, because restraints on behavior are required for social living and have been favored by natural selection because of their survival value.

Much of the present evidence for the moral grammar is indirect. Some of it comes from psychological tests of children, showing that they have an innate sense of fairness that starts to unfold at age 4. Some comes from ingenious dilemmas devised to show a subconscious moral judgment generator at work. These are known by the moral philosophers who developed them as “trolley problems.”

Suppose you are standing by a railroad track. Ahead, in a deep cutting from which no escape is possible, five people are walking on the track. You hear a train approaching. Beside you is a lever with which you can switch the train to a sidetrack. One person is walking on the sidetrack. Is it O.K. to pull the lever and save the five people, though one will die?

Most people say it is.

Assume now you are on a bridge overlooking the track. Ahead, five people on the track are at risk. You can save them by throwing down a heavy object into the path of the approaching train. One is available beside you, in the form of a fat man. Is it O.K. to push him to save the five?

Most people say no, although lives saved and lost are the same as in the first problem.

Why does the moral grammar generate such different judgments in apparently similar situations? It makes a distinction, Dr. Hauser writes, between a foreseen harm (the train killing the person on the track) and an intended harm (throwing the person in front of the train), despite the fact that the consequences are the same in either case. It also rates killing an animal as more acceptable than killing a person.
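To make the foreseen/intended distinction concrete, here is a minimal toy sketch in Python. It is not Dr. Hauser's model; the dilemma fields and the decision rule are invented for illustration, chosen only to reproduce the two trolley verdicts above:

[code]
# Toy sketch of the foreseen/intended distinction (illustrative only).
from dataclasses import dataclass

@dataclass
class Dilemma:
    lives_saved: int
    lives_lost: int
    harm_is_means: bool  # True if the victim's harm is itself the instrument of rescue

def permissible(d: Dilemma) -> bool:
    """Permit a net-positive trade-off only when the harm is a
    foreseen side effect, not the intended means of saving the others."""
    if d.harm_is_means:
        return False  # intended harm: blocked regardless of the arithmetic
    return d.lives_saved > d.lives_lost  # foreseen harm: allow the trade-off

# Switch case: the single death is a side effect of diverting the train.
print(permissible(Dilemma(5, 1, harm_is_means=False)))  # True
# Footbridge case: the man's body is itself what stops the train.
print(permissible(Dilemma(5, 1, harm_is_means=True)))   # False
[/code]

The point is only that a single structural feature, means versus side effect, flips the verdict even though the count of lives saved and lost is identical in both cases.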

Many people cannot articulate the foreseen/intended distinction, Dr. Hauser says, a sign that it is being made at inaccessible levels of the mind. This inability challenges the general belief that moral behavior is learned. For if people cannot articulate the foreseen/intended distinction, how can they teach it?

Dr. Hauser began his research career in animal communication, working with vervet monkeys in Kenya and with birds. He is the author of a standard textbook on the subject, “The Evolution of Communication.” He began to take an interest in the human animal in 1992 after psychologists devised experiments that allowed one to infer what babies are thinking. He found he could repeat many of these experiments in cotton-top tamarins, allowing the cognitive capacities of infants to be set in an evolutionary framework.

His proposal of a moral grammar emerges from a collaboration with Dr. Chomsky, who had taken an interest in Dr. Hauser’s ideas about animal communication. In 2002 they wrote, with Dr. Tecumseh Fitch, an unusual article arguing that the faculty of language must have developed as an adaptation of some neural system possessed by animals, perhaps one used in navigation. From this interaction Dr. Hauser developed the idea that moral behavior, like language behavior, is acquired with the help of an innate set of rules that unfolds early in a child’s development.

Social animals, he believes, possess the rudiments of a moral system in that they can recognize cheating or deviations from expected behavior. But they generally lack the psychological mechanisms on which the pervasive reciprocity of human society is based, like the ability to remember bad behavior, quantify its costs, recall prior interactions with an individual and punish offenders. “Lions cooperate on the hunt, but there is no punishment for laggards,” Dr. Hauser said.

The moral grammar now universal among people presumably evolved to its final shape during the hunter-gatherer phase of the human past, before the dispersal from the ancestral homeland in northeast Africa some 50,000 years ago. This may be why events before our eyes carry far greater moral weight than happenings far away, Dr. Hauser believes, since in those days one never had to care about people remote from one’s environment.

Dr. Hauser believes that the moral grammar may have evolved through the evolutionary mechanism known as group selection. A group bound by altruism toward its members and rigorous discouragement of cheaters would be more likely to prevail over a less cohesive society, so genes for moral grammar would become more common.

Many evolutionary biologists frown on the idea of group selection, noting that genes cannot become more frequent unless they benefit the individual who carries them, and a person who contributes altruistically to people not related to him will reduce his own fitness and leave fewer offspring.

But though group selection has not been proved to occur in animals, Dr. Hauser believes that it may have operated in people because of their greater social conformity and willingness to punish or ostracize those who disobey moral codes.

“That permits strong group cohesion you don’t see in other animals, which may make for group selection,” he said.
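The disagreement can be made concrete with the textbook "trait-group" arithmetic (a hedged sketch; all numbers are invented and nothing here is Dr. Hauser's own model). Within every group the altruists' share falls, because altruism is individually costly, exactly as the critics say; yet the population-wide share of altruists rises, because altruist-rich groups out-grow the rest:

[code]
# Toy illustration of group selection via Simpson's paradox (invented numbers).
BENEFIT, COST = 2.0, 0.2   # group benefit per altruist share; altruist's own cost

groups = [(90, 10), (10, 90)]  # (altruists, selfish) in two founding groups

def next_generation(altruists, selfish):
    frac = altruists / (altruists + selfish)
    w_selfish = 1.0 + BENEFIT * frac   # everyone benefits from the altruists
    w_altruist = w_selfish - COST      # ...but the altruists pay a cost
    return altruists * w_altruist, selfish * w_selfish

grown = [next_generation(a, s) for a, s in groups]

for (a0, s0), (a1, s1) in zip(groups, grown):
    print(f"within-group altruist share: {a0/(a0+s0):.3f} -> {a1/(a1+s1):.3f}")

before = sum(a for a, s in groups) / sum(a + s for a, s in groups)
after = sum(a for a, s in grown) / sum(a + s for a, s in grown)
print(f"global altruist share:       {before:.3f} -> {after:.3f}")
[/code]

Whether the conditions this toy model assumes (large between-group differences, frequent replacement of losing groups) actually held in human prehistory is precisely what the critics dispute.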

His proposal for an innate moral grammar, if people pay attention to it, could ruffle many feathers. His fellow biologists may raise eyebrows at proposing such a big idea when much of the supporting evidence has yet to be acquired. Moral philosophers may not welcome a biologist’s bid to annex their turf, despite Dr. Hauser’s expressed desire to collaborate with them.

Nevertheless, researchers’ idea of a good hypothesis is one that generates interesting and testable predictions. By this criterion, the proposal of an innate moral grammar seems unlikely to disappoint.
« Last Edit: 2006-11-02 08:51:12 by Hermit »

With or without religion, you would have good people doing good things and evil people doing evil things. But for good people to do evil things, that takes religion. - Steven Weinberg, 1999
Blunderov (Archon)
Re:Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong
« Reply #1 on: 2006-11-02 02:48:43 »

[Blunderov] This has the potential to be huge! Thanks very much Hermit.

"The last temptation is the greatest treason;
To do the right deed for the wrong reason."
T.S. Eliot
Salamantis (Neophyte)
Re:Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong
« Reply #2 on: 2006-11-02 16:45:27 »

[[ author reputation (0.00) beneath threshold (3)... display message ]]

Blunderov (Archon)
Re:Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong
« Reply #3 on: 2007-03-21 15:16:59 »

[Blunderov] The convergence of neuroscience and philosophy gains yet more ground: more on the subject of biology and ethics. (It doesn't answer the question "so what do we do now?", however...)

Source: New York Times

Scientist Finds the Beginnings of Morality in Primate Behavior

[Illustration by Edel Rodriguez, based on source material from Frans de Waal. Caption: Social order. Chimpanzees have a sense of social structure and rules of behavior, most of which involve the hierarchy of a group, in which some animals rank higher than others. Social living demands a number of qualities that may be precursors of morality.]

By NICHOLAS WADE
Published: March 20, 2007
Some animals are surprisingly sensitive to the plight of others. Chimpanzees, who cannot swim, have drowned in zoo moats trying to save others. Given the chance to get food by pulling a chain that would also deliver an electric shock to a companion, rhesus monkeys will starve themselves for several days.

Biologists argue that these and other social behaviors are the precursors of human morality. They further believe that if morality grew out of behavioral rules shaped by evolution, it is for biologists, not philosophers or theologians, to say what these rules are.

Moral philosophers do not take very seriously the biologists’ bid to annex their subject, but they find much of interest in what the biologists say and have started an academic conversation with them.

The original call to battle was sounded by the biologist Edward O. Wilson more than 30 years ago, when he suggested in his 1975 book “Sociobiology” that “the time has come for ethics to be removed temporarily from the hands of the philosophers and biologicized.” He may have jumped the gun about the time having come, but in the intervening decades biologists have made considerable progress.

Last year Marc Hauser, an evolutionary biologist at Harvard, proposed in his book “Moral Minds” that the brain has a genetically shaped mechanism for acquiring moral rules, a universal moral grammar similar to the neural machinery for learning language. In another recent book, “Primates and Philosophers,” the primatologist Frans de Waal defends against philosopher critics his view that the roots of morality can be seen in the social behavior of monkeys and apes.

Dr. de Waal, who is director of the Living Links Center at Emory University, argues that all social animals have had to constrain or alter their behavior in various ways for group living to be worthwhile. These constraints, evident in monkeys and even more so in chimpanzees, are part of human inheritance, too, and in his view form the set of behaviors from which human morality has been shaped.

Many philosophers find it hard to think of animals as moral beings, and indeed Dr. de Waal does not contend that even chimpanzees possess morality. But he argues that human morality would be impossible without certain emotional building blocks that are clearly at work in chimp and monkey societies.

Dr. de Waal’s views are based on years of observing nonhuman primates, starting with work on aggression in the 1960s. He noticed then that after fights between two combatants, other chimpanzees would console the loser. But he was waylaid in battles with psychologists over imputing emotional states to animals, and it took him 20 years to come back to the subject.

He found that consolation was universal among the great apes but generally absent from monkeys — among macaques, mothers will not even reassure an injured infant. To console another, Dr. de Waal argues, requires empathy and a level of self-awareness that only apes and humans seem to possess. And consideration of empathy quickly led him to explore the conditions for morality.

Though human morality may end in notions of rights and justice and fine ethical distinctions, it begins, Dr. de Waal says, in concern for others and the understanding of social rules as to how they should be treated. At this lower level, primatologists have shown, there is what they consider to be a sizable overlap between the behavior of people and other social primates.

Social living requires empathy, which is especially evident in chimpanzees, as well as ways of bringing internal hostilities to an end. Every species of ape and monkey has its own protocol for reconciliation after fights, Dr. de Waal has found. If two males fail to make up, female chimpanzees will often bring the rivals together, as if sensing that discord makes their community worse off and more vulnerable to attack by neighbors. Or they will head off a fight by taking stones out of the males’ hands.

Dr. de Waal believes that these actions are undertaken for the greater good of the community, as distinct from person-to-person relationships, and are a significant precursor of morality in human societies.

Macaques and chimpanzees have a sense of social order and rules of expected behavior, mostly to do with the hierarchical natures of their societies, in which each member knows its own place. Young rhesus monkeys learn quickly how to behave, and occasionally get a finger or toe bitten off as punishment. Other primates also have a sense of reciprocity and fairness. They remember who did them favors and who did them wrong. Chimps are more likely to share food with those who have groomed them. Capuchin monkeys show their displeasure if given a smaller reward than a partner receives for performing the same task, like a piece of cucumber instead of a grape.

These four kinds of behavior — empathy, the ability to learn and follow social rules, reciprocity and peacemaking — are the basis of sociality.

Dr. de Waal sees human morality as having grown out of primate sociality, but with two extra levels of sophistication. People enforce their society’s moral codes much more rigorously with rewards, punishments and reputation building. They also apply a degree of judgment and reason, for which there are no parallels in animals.

Religion can be seen as another special ingredient of human societies, though one that emerged thousands of years after morality, in Dr. de Waal’s view. There are clear precursors of morality in nonhuman primates, but no precursors of religion. So it seems reasonable to assume that as humans evolved away from chimps, morality emerged first, followed by religion. “I look at religions as recent additions,” he said. “Their function may have to do with social life, and enforcement of rules and giving a narrative to them, which is what religions really do.”

As Dr. de Waal sees it, human morality may be severely limited by having evolved as a way of banding together against adversaries, with moral restraints being observed only toward the in group, not toward outsiders. “The profound irony is that our noblest achievement — morality — has evolutionary ties to our basest behavior — warfare,” he writes. “The sense of community required by the former was provided by the latter.”

Dr. de Waal has faced down many critics in evolutionary biology and psychology in developing his views. The evolutionary biologist George Williams dismissed morality as merely an accidental byproduct of evolution, and psychologists objected to attributing any emotional state to animals. Dr. de Waal convinced his colleagues over many years that the ban on inferring emotional states was an unreasonable restriction, given the expected evolutionary continuity between humans and other primates.

His latest audience is moral philosophers, many of whom are interested in his work and that of other biologists. “In departments of philosophy, an increasing number of people are influenced by what they have to say,” said Gilbert Harman, a Princeton University philosopher.

Dr. Philip Kitcher, a philosopher at Columbia University, likes Dr. de Waal’s empirical approach. “I have no doubt there are patterns of behavior we share with our primate relatives that are relevant to our ethical decisions,” he said. “Philosophers have always been beguiled by the dream of a system of ethics which is complete and finished, like mathematics. I don’t think it’s like that at all.”

But human ethics are considerably more complicated than the sympathy Dr. de Waal has described in chimps. “Sympathy is the raw material out of which a more complicated set of ethics may get fashioned,” he said. “In the actual world, we are confronted with different people who might be targets of our sympathy. And the business of ethics is deciding who to help and why and when.”

Many philosophers believe that conscious reasoning plays a large part in governing human ethical behavior and are therefore unwilling to let everything proceed from emotions, like sympathy, which may be evident in chimpanzees. The impartial element of morality comes from a capacity to reason, writes Peter Singer, a moral philosopher at Princeton, in “Primates and Philosophers.” He says, “Reason is like an escalator — once we step on it, we cannot get off until we have gone where it takes us.”

That was the view of Immanuel Kant, Dr. Singer noted, who believed morality must be based on reason, whereas the Scottish philosopher David Hume, followed by Dr. de Waal, argued that moral judgments proceed from the emotions.

But biologists like Dr. de Waal believe reason is generally brought to bear only after a moral decision has been reached. They argue that morality evolved at a time when people lived in small foraging societies and often had to make instant life-or-death decisions, with no time for conscious evaluation of moral choices. The reasoning came afterward as a post hoc justification. “Human behavior derives above all from fast, automated, emotional judgments, and only secondarily from slower conscious processes,” Dr. de Waal writes.

However much we may celebrate rationality, emotions are our compass, probably because they have been shaped by evolution, in Dr. de Waal’s view. For example, he says: “People object to moral solutions that involve hands-on harm to one another. This may be because hands-on violence has been subject to natural selection whereas utilitarian deliberations have not.”

Philosophers have another reason biologists cannot, in their view, reach to the heart of morality, and that is that biological analyses cannot cross the gap between “is” and “ought,” between the description of some behavior and the issue of why it is right or wrong. “You can identify some value we hold, and tell an evolutionary story about why we hold it, but there is always that radically different question of whether we ought to hold it,” said Sharon Street, a moral philosopher at New York University. “That’s not to discount the importance of what biologists are doing, but it does show why centuries of moral philosophy are incredibly relevant, too.”

Biologists are allowed an even smaller piece of the action by Jesse Prinz, a philosopher at the University of North Carolina. He believes morality developed after human evolution was finished and that moral sentiments are shaped by culture, not genetics. “It would be a fallacy to assume a single true morality could be identified by what we do instinctively, rather than by what we ought to do,” he said. “One of the principles that might guide a single true morality might be recognition of equal dignity for all human beings, and that seems to be unprecedented in the animal world.”

Dr. de Waal does not accept the philosophers’ view that biologists cannot step from “is” to “ought.” “I’m not sure how realistic the distinction is,” he said. “Animals do have ‘oughts.’ If a juvenile is in a fight, the mother must get up and defend her. Or in food sharing, animals do put pressure on each other, which is the first kind of ‘ought’ situation.”

Dr. de Waal’s definition of morality is more down to earth than Dr. Prinz’s. Morality, he writes, is “a sense of right and wrong that is born out of groupwide systems of conflict management based on shared values.” The building blocks of morality are not nice or good behaviors but rather mental and social capacities for constructing societies “in which shared values constrain individual behavior through a system of approval and disapproval.” By this definition, chimpanzees in his view do possess some of the behavioral capacities built into our moral systems.

“Morality is as firmly grounded in neurobiology as anything else we do or are,” Dr. de Waal wrote in his 1996 book “Good Natured.” Biologists ignored this possibility for many years, believing that because natural selection was cruel and pitiless it could only produce people with the same qualities. But this is a fallacy, in Dr. de Waal’s view. Natural selection favors organisms that survive and reproduce, by whatever means. And it has provided people, he writes in “Primates and Philosophers,” with “a compass for life’s choices that takes the interests of the entire community into account, which is the essence of human morality.”

Blunderov (Archon)
Re:Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong
« Reply #4 on: 2007-03-23 04:13:33 »

[Blunderov] Investigations have continued into what has become known as the 'Trolley Problem'. Apparently the universal inclination to wait until someone is in a good mood before asking for a favour is a sound strategy. The only question that remains is, as Douglas Adams once remarked, "what should we have for lunch?"

Boston Globe: Blood on the Tracks

David Hume wrote that reason is a "slave to the emotions." But new research suggests that in our moral decision-making, reason and emotion duke it out within the mind.

By Christopher Shea  |  August 6, 2006

MORAL PHILOSOPHERS and academics interested in studying how humans choose between right and wrong often use thought experiments to tease out the principles that inform our decisions. One particular hypothetical scenario has become quite the rage in some top psychological journals. It involves a runaway trolley, five helpless people on the track, and a large-framed man looking on from a footbridge. He may or may not be about to tumble to his bloody demise: You get to make the call.

That's because in this scenario, you are standing on the footbridge, too. You know that if you push the large man off the bridge onto the tracks, his body will stop the trolley before it kills the five people on the tracks. Of course, he will die in the process. So the question is: Is it morally permissible to kill the man in order to save five others?

In surveys, most people (around 85 percent) say they would not push the man to his death.

Often, this scenario is paired with a similar one: Again, there are five helpless people on the track. But this time, you can pull a switch that will send the runaway trolley onto a side track, where only one person is standing. So again, you can reduce the number of deaths from five to one, but in this case most people say, yes, they would go ahead and pull the lever. Why do we react so differently to the two scenarios?

Moral philosophers, if not the man on the street, can offer a few subtle logical distinctions between the cases. In the first, the fat man is being used essentially as a tool, or instrument, toward another goal. That violates the Kantian principle that human beings are "ends" in themselves and should never be treated as mere instruments. Also, in the second scenario, the death of the innocent man can be viewed as a lamentable side effect of the chief goal, which is getting the train off the main track. This explanation is sometimes called the doctrine of the double effect: You'd pull that switch whether or not someone was on the track.

In a well-known paper that appeared in Science in 2001, however, Joshua D. Greene, then a post-doc in the Princeton psychology department, and four coauthors proposed that, whatever the philosophers said, for ordinary people the main issue was simply that pushing someone to his death (touching him and perhaps looking into his eyes) ignited an intense emotional response, whereas flipping a switch did not.

Greene, now an assistant professor at Harvard, administered MRI scans to subjects who were weighing both scenarios. While both groups showed increased activity in areas of the brain associated with intense reasoning, only in the case of those considering the footbridge scenario did the regions of the brain associated with emotion "light up."

Greene and his colleagues described the finding as a partial victory for David Hume, the British philosopher who wrote that reason was a "slave to the emotions." But more precisely, they described moral decision-making as a process in which reason and emotion duke it out within the mind. The finding, they added, was also a blow to older theories of human development, which held that as we become adults, we stop making moral decisions with our emotions, as children do.

In the June issue of Psychological Science, Piercarlo Valdesolo, a Northeastern University graduate student in psychology, and David DeSteno, a Northeastern professor, tightened the link between our emotions and our morals. They asked 79 subjects to consider the two trolley scenarios. But first, they had about half the subjects view a five-minute clip of "Saturday Night Live" to put them in a good mood. The others watched a clip of a dry documentary on a Spanish village.

Valdesolo and DeSteno found that the SNL-watchers were more likely to say they would push the large man off the bridge. What seemed to be happening, they wrote, was that the happy mood caused by the video clip partly offset the negative emotions caused by the idea of directly killing a man. "By changing the emotional response," says DeSteno, "I can change your moral judgments."
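As an aside for readers curious about the statistics, a difference like this between two groups of roughly 40 subjects each can be checked with a two-proportion z-test. The sketch below uses invented counts; the paper reports 79 subjects, but the exact "would push" splits quoted here are hypothetical:

[code]
# Hedged sketch: two-proportion z-test on hypothetical "would push" counts.
from math import sqrt, erf

def two_proportion_z(yes1, n1, yes2, n2):
    """z statistic and two-sided p-value for H0: p1 == p2."""
    p1, p2 = yes1 / n1, yes2 / n2
    pooled = (yes1 + yes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * (1 - Phi(|z|))
    return z, p_value

# Hypothetical: 14 of 40 comedy-watchers vs 4 of 39 documentary-watchers.
z, p = two_proportion_z(14, 40, 4, 39)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")  # roughly z = 2.62, p = 0.009
[/code]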

Philosophers often caution that how we act in real life, never mind the laboratory, shouldn't determine how we ought to act. But Greene, Valdesolo, and DeSteno point out that, at the least, the results should lead us to be skeptical about our snap moral decisions, however natural and obvious they seem, as they may be very much affected by the mood we happen to be in.

Greene and the Northeastern scholars stress that the up-close-and-personal aspect of pushing the fat man off the bridge, as opposed to philosophical principles, is the most important factor influencing the decisions of ordinary people. But in a forthcoming article in the journal Mind & Language, the Harvard psychologist and biological anthropologist Marc Hauser and four coauthors argue that people are actually very subtle philosophers, at least at a subconscious level.

Hauser and his colleagues have found that people are sensitive to the doctrine of double effect even in thought experiments that don't push their emotional buttons. Even when the dirty work of actually doing the pushing is taken out of the equation, most test subjects say they are more willing to kill someone as a side effect of saving others than to kill that person as a direct means toward that end. And they make this distinction even when they can't explain their preferences afterward.

In his forthcoming book, "Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong" (Ecco), and in other recent papers, Hauser suggests we may have a moral "faculty" in our brains that acts as a sort of in-house philosopher, parsing situations quickly, before emotion or conscious reason come into play. Hauser compares this faculty to the mental quality that allows human beings to acquire and use language naturally and effortlessly.

It's a suggestive analogy, inviting questions about just how far the similarities run. Is human morality, like language, largely universal (gratuitous killing is bad) but with plenty of room for local variation (in some cultures, killing your daughter if she loses her virginity before marriage is not considered gratuitous)? Is it easy for children to adapt to these local differences, depending on where and how they are raised, but difficult for adults, just as it's hard to learn French at 40?

Whether the analogy to language is "airtight" or "useful because it allows you to ask good questions" is an open issue, Hauser says. But scholars think the answers to these questions are of more than academic interest. "My hope is that by better understanding how we think," Greene writes on his personal website, "we can teach ourselves to think better."

Christopher Shea's column appears biweekly in Ideas. E-mail critical.faculties@verizon.net.

Cognitive Daily: When a neutral face isn't neutral (23 March 2007)
The Kuleshov Effect, discovered nearly a century ago by Soviet filmmaker Lev Kuleshov, posits that the context in which we see an image of an actor's face will determine the emotion the face portrays. For example, take a look at this short little clip I made (QuickTime required). First you'll see a gray screen, then a photo, then a second gray screen, and another photo of a face, taken just after that person looked at the first photo:

What emotion would you say characterizes the second face? Is it neutral, subtly happy, or subtly sad?

Kuleshov's work suggests that most viewers will see that second face as happier in the context of the happy photo preceding it, compared to if they had seen it on its own. If the identical photo had been preceded by a negative image such as an aggressive dog, people would rate it more negatively.

Research during the 1980s and 1990s confirmed the effect, but it has been exploited by filmmakers countless times during the intervening decades -- the neutral face of a heroine is seen as sad if she's just witnessed her lover's death, but happy if she's anticipating his arrival on the next train.

Most recently, a team led by Dean Mobbs replicated the Kuleshov Effect while viewers' brain activity was monitored via fMRI. Mobbs' team showed movies similar to the example above to 14 volunteers. Sometimes the first (context) image was neutral, sometimes it was positive (happy), and sometimes negative (fearful). While the fMRI monitored brain activity, viewers rated the second (neutral) image as positive or negative. [A chart in the original post shows the results; it is not reproduced here.]

kenneth (Neophyte)
Re:Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong
« Reply #5 on: 2007-03-25 14:50:49 »

[[ author reputation (0.00) beneath threshold (3)... display message ]]
