How to judge right and wrong?
Conscience is sometimes described as that voice inside your head. It’s not literally a voice, though. When a person’s conscience is telling them to do — or not do — something, they experience it through emotions.
Sometimes those emotions are positive. Empathy, gratitude, fairness, compassion and pride are all examples of emotions that encourage us to do things for other people. Other times, we need to not do something. The emotions that stop us include guilt, shame, embarrassment and a fear of being judged poorly by others.
Scientists are trying to understand where conscience comes from. Why do people have a conscience? How does it develop as we grow up? And where in the brain do the feelings that make up our conscience arise? Understanding conscience can help us understand what it means to be human.
Often, when someone’s conscience gets their attention, it’s because that person knows they should have helped someone else but didn’t. Or they see another person not helping out when they should.
Humans are a cooperative species. That means we work together to get things done. We’re hardly the only ones to do this, however. The other great ape species (chimpanzees, gorillas, bonobos and orangutans) also live in cooperating groups. So do some birds, who work together to raise young or to gather food for their social group. But humans work together in ways no other species does.
Our conscience is part of what lets us do so. In fact, Charles Darwin, the 19th-century scientist famous for studying evolution, thought conscience is what makes humans, well, human.
When did we become so helpful? Anthropologists — scientists who study how humans developed — think it started when our ancestors had to work together to hunt big game.
If people didn’t work together, they didn’t get enough food. But when they banded together, they could hunt large animals and get enough to feed their group for weeks. Cooperation meant survival. Anyone who didn’t help out didn’t deserve an equal share of food. That meant people had to keep track of who helped — and who didn’t. And they had to have a system of rewarding people who pitched in.
This suggests that a basic part of being human is helping others and keeping track of who’s helped you. And research supports this idea.
Katharina Hamann is an evolutionary anthropologist, someone who studies how humans and our close relatives evolved. She and her team at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, worked with both children and chimpanzees.
She led one 2011 study that put both children (two- or three-year-olds) and chimps in situations where they had to work with a partner of their own species to get some treat. For the kids, this meant pulling on ropes at either end of a long board. For chimpanzees, it was a similar but slightly more complicated setup.
When the children started pulling the ropes, two pieces of their reward (marbles) sat at each end of the board. But as they pulled, one marble rolled from one end to the other. So one child got three marbles and the other got just one. When both kids had to work together, the children who got the extra marbles returned them to their partners three out of four times. But when they pulled a rope on their own (no cooperation needed) and got three marbles, these kids shared with the other child only one time in every four.
Chimpanzees instead worked for a food treat. And during the tests, they never actively shared this reward with their partners, even when both apes had to work together to get the treat.
So even very young children recognize cooperation and reward it by sharing equally, Hamann says. That ability, she adds, probably comes from our ancient need to cooperate to survive.
Children develop what we call conscience in two ways, she concludes. They learn basic social rules and expectations from adults. And they practice applying those rules with their peers. “In their joint play, they create their own rules,” she says. They also “experience that such rules are a good way to prevent harm and achieve fairness.” These kinds of interactions, Hamann suspects, may help children develop a conscience.
It feels good to do good things. Sharing and helping often trigger good feelings. We experience compassion for others, pride in a job well done and a sense of fairness.
But unhelpful behavior — or not being able to fix a problem we’ve caused — makes most people feel guilt, embarrassment or even fear for their reputation. And these feelings develop early, showing up even in preschoolers.
Robert Hepach works at the University of Leipzig in Germany. But he used to be at the Max Planck Institute of Evolutionary Anthropology. Back then, he worked with Amrisha Vaish at the University of Virginia School of Medicine in Charlottesville. In one 2017 study, the two studied children’s eyes to gauge how bad they felt about some situation.
They focused on a child’s pupils. These are the black circles at the center of the eyes. Pupils dilate, or get wider, in low light. They also can dilate in other situations. One of these is when people feel concerned for others or want to help them. So scientists can measure changes in pupil diameter as one cue to when someone’s emotional state has changed. In their case, Hepach and Vaish used pupil dilation to study whether young children felt bad (and possibly guilty) after thinking they had caused an accident.
They had two- and three-year-olds build a track so that a train could travel to an adult in the room. Then the adults asked the kids to deliver a cup of water to them using that train. Each child put a cup filled with colored water on a train car. Then the kid sat in front of a computer screen that showed the train tracks. An eye tracker hidden below the monitor measured the child’s pupils.
In half of the trials, a child hit a button to start the train. In the other half, a second adult hit the button. In each case, the train tipped over, spilling the water before it reached its destination. This accident seemed to be caused by whoever had started the train.
In some trials, the child was allowed to get paper towels to clean up the mess. In others, an adult grabbed the towels first. A child’s pupils were then measured a second time, at the end of each trial.
Kids who had a chance to clean up the mess had smaller pupils at the end than did children who didn’t get to help. This was true whether or not the child had “caused” an accident. But when an adult cleaned up a mess that the child thought they had caused, the child still had dilated pupils afterward. This suggests these kids may have felt guilty about making the mess, the researchers say. If an adult cleaned it up, the child had no chance to right that wrong. This left them feeling bad.
Explains Hepach, “We want to be the one who provides the help. We remain frustrated if someone else repairs the harm we (accidentally) caused.” One sign of this guilt or frustration can be pupil dilation.
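To make the logic of that comparison concrete, here is a minimal Python sketch on made-up numbers. The condition labels and pupil-diameter values are illustrative assumptions, not the researchers' actual data or analysis code; the point is only how a larger average dilation in one condition could be read as a cue that those children still felt bad.

```python
# Minimal illustrative sketch: compare average pupil dilation across conditions.
# All labels and numbers below are hypothetical, NOT the study's data.
from statistics import mean

# Change in pupil diameter (after minus before each trial), in millimeters.
dilation_by_condition = {
    ("child caused spill", "child cleaned up"): [0.02, 0.05, 0.01, 0.03],
    ("child caused spill", "adult cleaned up"): [0.21, 0.18, 0.25, 0.19],
    ("adult caused spill", "child cleaned up"): [0.03, 0.04, 0.02, 0.05],
    ("adult caused spill", "adult cleaned up"): [0.12, 0.10, 0.14, 0.11],
}

# A larger average dilation is read as a cue of lingering concern or guilt.
for (cause, repair), deltas in dilation_by_condition.items():
    print(f"{cause} / {repair}: mean dilation {mean(deltas):+.3f} mm")
```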
“From a very young age, children have a basic sense of guilt,” adds Vaish. “They know when they have hurt someone,” she says. “They also know that it’s important for them to make things right again.”
Guilt is an important emotion, she notes. And it starts playing a role early in life. As kids get older, their sense of guilt may become more complex, she says. They start to feel guilty about things they haven’t done but should. Or they might feel guilty when they just think about doing something bad.
What happens inside someone when she feels pangs of conscience? Scientists have done dozens of studies to figure this out. Many of them focus on morality, the code of conduct that we learn — the one which helps us judge right from wrong.
Scientists have focused on finding the brain areas involved with moral thinking. To do this, they scanned the brains of people while those people were looking at scenes showing different situations. For instance, one might show someone hurting another. Or a viewer might have to decide whether to save five (fictional) people by letting someone else die.
Early on, scientists expected to find a “moral area” in the brain. But there turned out not to be one. In fact, there are several areas throughout the brain that turn on during these experiments. By working together, these brain areas probably become our conscience. Scientists refer to these areas as the “moral network.”
This network is actually made up of three smaller networks, says Fiery Cushman of Harvard University in Cambridge, Mass. This psychologist specializes in morality. One brain network helps us understand other people. Another allows us to care about them. The last helps us make decisions based on our understanding and caring, Cushman explains.
The first of these three networks is made up of a group of brain areas that together are called the default mode network. It helps us get inside the heads of other people, so we can better understand who they are and what motivates them. This network involves parts of the brain that become active when we daydream. Most daydreams involve other people, Cushman says. Although we can only see a person’s actions, we can imagine what they’re thinking, or why they did what they did.
The second network is a group of brain areas often called the pain matrix. In most people, a certain part of this network turns on when someone feels pain. A neighboring region lights up when someone sees another in pain.
Empathy (EM-pah-thee) is the ability to share someone else’s feelings. The more empathetic someone is, the more those first two brain networks overlap. In very empathetic people, they may almost completely overlap. That shows that the pain matrix is important for empathy, Cushman says. It lets us care about other people by tying what they are feeling to what we ourselves experience.
Understanding and caring are important. But having a conscience means that people must then act on their feelings, he notes. That’s where the third network comes in. This one is a decision-making network. And it’s where people weigh the costs and benefits of taking action.
When people find themselves in moral situations, all three networks go to work. “We shouldn’t be looking for the moral part of the brain,” Cushman says. Rather, we have a network of areas that originally evolved to do other things. Over evolutionary time, they began to work together to create a feeling of conscience.
One long-standing idea is that we know the ethical value of right and wrong by listening to our conscience. That still, small voice inside is what tells us whether something is right or wrong. Practical advice for exercising that judgment fairly, especially when judging other people, often includes points like these:
- Assess values instead of beliefs.
- Understand their journey.
- Resist equating outcomes with intent.
- Acknowledge the universal bias.
To understand how we acquire moral knowledge, we first need to understand what sort of thing we are talking about when we speak of right and wrong. I want to propose a non-naturalist account of morality as first put forth by G.E. Moore in his Principia Ethica (1903). Following Moore, we can conceive of morality as a sort of universal dimension. All actions fall somewhere in this moral dimension, from extremely good through a neutral middle to extremely bad.
Let me now liken morality to time. There is no physical aspect of reality to which we can point that shows time itself. But we don’t need something physical to point at to know that the passage of time occurs. Rather, time seems to impress itself upon us because our mental faculties are designed to experience its passing. This seems true of morality too. When we witness a murder and say that it’s wrong, we aren’t pointing to a physical entity of ‘wrongness’; instead we are highlighting a value that is inherent in the witnessed action. The moral dimension impresses itself on us in such a way that we can perceive moral properties.
One may wonder how, if we can apprehend moral facts in this way, there can still be widespread disagreement on moral matters. But moral facts aren’t all as simple as ‘killing is bad’ and ‘being helpful is good’. Killing can’t be absolutely wrong, since someone may rightly kill a person to stop the detonation of a bomb in a school. Actions have a range of different motivations and unseen background facts. To know if something complex is moral, we need to know not only the action but the cause, the mind-set of the person taking the action, and the intended effect. Moral knowledge can be derived from measuring the impressions a person has about an action, and investigating the thinking of the person who performed the action. Some people are better at receiving these impressions and thus turning them into knowledge. This isn’t to turn ethicists into priests of morality. It is, as my metaethics professor said, like space: someone may constantly bump their head due to a lack of spatial awareness. We can all gain better knowledge of morality by learning how to better read our moral impressions.
Julian Shields, Manly, Auckland, NZ
There is no magic formula, but there is a pathway which may help in situations of doubt. First, ascertain the facts of a situation. Ignorance never promotes good decisions. Let others thrust on you facts you would rather overlook. Second, and more difficult, try to predict the consequences of the actions you might take. Unfortunately even correctly predicted consequences themselves cause unforeseeable consequences. But even the most dedicated non-consequentialist must consider consequences because actually conferring benefit on others is an important moral principle, if not an overriding one. Third, look at the moral principles which tell you to do one thing or the other. Those principles must be both valid and relevant, which is often arguable. Catholics think that divorce is wrong, but Islam makes divorce easy for men. You think that we must respect the sanctity of even a murderer’s life; I think the principle of sanctity of life has been forsaken by murderers. Finally take the decision.
Unfortunately valid and relevant moral principles clash, and we may have to decide which of two equally pertinent claims we should follow. My utilitarian approach is that the most important objective is usually the one that brings the most good into the world; but that is not always the case. I have a greater duty to some than to others, which clashes with the duty to save more lives rather than fewer: but I will save my own child rather than ten strangers. Morality started as care of kin and we should not stray too far from its roots. Also some principles may be intrinsically more important than others. Perhaps it is more important not to take life than to save it, so I should refuse to kill one to save two. But what if I can save fifty by killing one? Morality can be relative to circumstances, not absolute, and at some point the utilitarian principle wins. Analysing analogous situations where the answer is clear is useful; seeing how they differ from the current situation clarifies thinking. And always discuss problems both with those you respect and with those who disagree with you. When you get it wrong, forgive yourself, and try to do better next time.
Allen Shaw, Harewood, Leeds
Perhaps the best way to answer this question is to take commonly accepted ethical notions and appraise them for the case at hand, since accordance with a central ethical principle often appears a sound basis for ethical action. One such principle is the Golden Rule (‘do unto others as you would have them do unto you’), variously occurring in many religious and belief systems. The idea that notions such as this one are reliable indicators of ‘rights’ and ‘wrongs’ is persuasive. Some moralists believe ethical action arises from a sense of duty, and not from a natural predisposition to good behaviour. Recognising responsibilities to others, not self-interest, does seem morally positive. Furthermore, following Kant, some theorists believe we must not treat others ‘merely as a means to an end’ but rather as ‘ends in themselves’, acknowledging their capacity for ethical thought. Treating people as ends in themselves rather than merely as means seems ethically sound: it is altruistic and respectful of others; arguably very important qualities in right ethical behaviour.
However, rigid application of ethical rules may lead to seemingly unethical conclusions. The majority of people would believe it wrong to lie in most circumstances yet right to lie in specific situations, such as to save a life. Secondly, an emphasis upon the importance of duty can give the impression that ethics is demanding and counter-intuitive, which is not entirely convincing: it seems difficult to criticise a naturally generous person for not being truly ethical because they do not act out of a sense of duty. Finally, although most would agree we should respect and value other persons, we may accept treating others as a means if the end is liable to have significantly more favourable consequences. For example, many people would agree it is right to sacrifice the life of one person if it saves many lives, and in fact wrong not to do so. So it seems that although people often have clear sentiments which tell them when behaviour is right or wrong, they also accept that there are times when rigid adherence to the same principles is problematic and/or unethical, making ethics as uncertain as any other branch of philosophy. This means absolute ethical judgements on right and wrong are difficult, so important ethical debates remain unresolved.
Jonathan Tipton, Preston, Lancashire
Philosophers can quibble over many different theories, but in the end I would advocate a simple boo-hurrah approach to discerning right from wrong. Okay, I’m not accounting for psychopaths. Nevertheless, I would argue that the majority of human beings have an innate sense of disgust at immoral acts, stemming from empathy. If you want to know if your actions towards another individual are right or wrong, just ask yourself if that’s how you would want to be treated. That’s the objectivity: we’re living, aware creatures. Why complicate it more than that?
Morgan Millard, Urmston, Manchester
It might be inferred from the question that discerning right from wrong is essentially cognitive. Thus, employing the terminology of Benjamin Bloom’s taxonomy of educational objectives in the cognitive domain, I am able to recall things deemed right or wrong and I can understand why they are so. I can apply my recall and understanding of right and wrong to act appropriately in specific circumstances; I can analyse behaviours and determine which are right and wrong; I can evaluate why some are right or wrong; and I can create more finely nuanced conceptions of rightness or wrongness. This learning is acquired by trial and error, and inferred from the reactions of other people to what I do or say.
But, it is an affective issue too: the reactions of others to what I say or do evoke feelings in me. To use Bloom in this domain: initially, I attend to or note particular actions that evoke responses from others or feelings in me. I learn to respond to some actions in some circumstances by others. I feel, too, that some responses are more valued by others or by myself. I organise some of these valued responses according to some principles. Eventually, these principles interlink so that my conduct is characterised by them.
For example, when my mother first put me to her breast I followed an innate need for sustenance. However, I felt pleasures of satiation, of warmth, of security. I cried when I felt hunger, or cold and, later, fear. I learned that this woman provided for these needs, on demand. Then, without intent, my toothless gums squeezed the nipple too hard. My mother flinched, drew away, withdrawing food. I cried, and supply was restored. I attended to those things and remembered: I responded to maternal actions, noted that for some of my actions she would provide things which gave pleasure and for others her response provided less pleasure. I learned which things my mother valued and which led to her supplying pleasure to me. She was thus defining right and wrong. As I acquired language, I conceptualised these ideas and, in dialogue with her, and, increasingly, with others, refined these concepts. Right and wrong are defined socially by interactions amongst other people and me. They are learned. My desire for acceptance into society made me learn and conform to its ideas of rightness or wrongness.
Alasdair Macdonald, Glasgow
As an individual I am born into a society requiring adherence to a set of rules and values by which I did not choose to be bound. I am expected to behave in a certain way and live by certain rules in order to live in harmony with my fellow citizens. Assuming I have no psychological disorder, I begin to learn these societal expectations from an early age, from associations with groups, which form my cultural identity. As a member of a family, a religion, a country, a school, a workplace, I am taught the practices, values and rules of those associations. For example, as a young family member, I learn through guidance by parents that it is bad to be spiteful to siblings, and that the right behaviour sets a good example to younger siblings who may learn right from wrong from me. As an adult, I am bound by an employment contract, losing my job if I breach it. As an autonomous being, I take responsibility for my actions regarding my choice of associations. With exposure to other cultures, moralities and belief systems, I may start to question my learned behaviours and morals, reasoning as to whether or not I wish to maintain those associations, weighing up the consequences of discontinuing with what I know, and attaching myself to new associations and groups – for example, changing religion and the effect this may have on my family and friends. But in general, I can know right from wrong through my identity associations, sanctioning any resultant punishment concerning the choices I make as an adult. There may be conflicts: for example, some cultures advocate honour killings, whereas others maintain it is never right to kill another person. So what to do if you associate with a culture that advocates honour killings, but the laws of the society in which you live do not allow this? Choosing to stray from your original associations may result in penal punishment.
Sharon Painter, Rugeley, Staffs
Basically, I can’t. Not in any definitive way. Unlike laws of physics, which govern regardless of human understanding, concepts of right and wrong are constructions, products of a developing self-awareness. Reason, as Nietzsche suggests, was a late addition to our animal instincts. To highlight the implications of this, look at attitudes towards killing. For early humans, the crime of ‘murder’ would be a nonsensical idea. One had to kill to survive, making ‘murder’ an accepted hazard of daily life. Only the move from hunter-gatherer lifestyles to settled communities lessened the need to slaughter in self-defence, thus beginning the slow march to recognising murder as immoral. However, there is a problem. Many believe killing can be justified in some circumstances. Such ambiguities mean that knowing right from wrong in any absolute sense is impossible, even in seemingly clear-cut instances. But the same applies in other areas. No matter how abhorrent and objectionably wrong I believe various crimes to be, an example of historical permissibility can be found. Humans, at some point, have accepted rape, theft and persecution without question.
As right and wrong do not exist outside the collective consciousness of the planet’s population at a particular moment, it is only possible to pass judgement in hindsight. We could argue that changing attitudes are evidence of an inherent ‘wrongness’ in certain acts, perhaps pointing to a natural order of right and wrong similar to discovering laws of physics. But such convictions have proved false before. For millennia it was thought that religious texts gave definitive answers; yet if a Creator were to reveal themselves and say, ‘Same sex marriage is wrong’, or ‘Capital punishment is right’, a lot of people, including me, would have tremendous difficulty accepting it. Suddenly, we’d irrefutably know right and wrong, but feel that many ‘right’ things were ‘wrong’, and vice versa.
Some aspects of right and wrong may seem given, but for the most part we have to follow our conscience. For this reason, nothing is certain. I simply have to do my best.
Glenn Bradford, Sutton In Ashfield, Nottinghamshire
The short answer is, I can’t. Dr Oliver Scott Curry of Oxford University has essentially cracked the problem of morality, based on empirical evidence from sixty cultures, present and historical. What follows is my take on his original thoughts, so the random book should go to him.
Like Rome and its hills, morality is built on seven naturally evolved values, held to varying degrees, whose functions are promoting cooperation or resolving conflict. The greatest of these is Possession, held sacrosanct by nine tenths of cultures and the law. Next come Kinship, Loyalty and Reciprocity, espoused by three quarters. Over half of cultures rate Respect (for the powerful) and Humility (of the powerless). Last and least comes Fairness, valued by only 15%. So do svidaniya socialism, and never give a sucker an even break. The punch line is, there are no other moral values. Each individual can claim their peculiar principle, plus aesthetic judgment; but only these seven values can be truly shared.
Cultures and societies differ in the scope and priority they ascribe to these seven pillars of morality. Right is what helps achieve some conscious or unconscious goal, be it reproduction, social cohesion, long life, prosperity, or conquest. Wrong is what obstructs the goal, and evil is interpreted as doing so intentionally. Values may be incompatible, one negating another with traumatic results. What if the goal is to wield absolute domination over absolute submission, forever?
Dr Nicholas B. Taylor, Little Sandhurst
What can we say about the question? First, we must already to an extent know the answer: we must already have some idea what ‘right’ and ‘wrong’ mean. If we didn’t, we wouldn’t understand the question. But at the same time, we disagree with others about ‘right’ and ‘wrong’. But surely, if we know ourselves what is right and wrong, all we need to do is explain what those words refer to when we use them, others can explain what they are referring to, and our apparent disagreement will be resolved?
Yet we cannot do this. We can all look at an action, be in total agreement about the facts, about what the action consists of, about what effects it has, yet still disagree about whether or not it is right. If that is the case, then we cannot be arguing about the nature of that action. Our disagreement – and thus what we each mean by ‘right’ – must lie elsewhere. This helps explain why we sometimes cannot agree about the rightness of an action: its degree of rightness can only be judged comparatively, against other actions. Then which actions? If we could name the property that distinguished ‘right’ actions from the rest, we would have also named what we meant by rightness and wrongness. But if we could do that, then we would be back to rightness and wrongness referring to some fact, and any apparent disputes would be revealed as simply misunderstandings. But again, our failure to agree suggests this cannot be the case. If right and wrong are gradations of a single system, and if we cannot place boundaries on that system, then that system must contain everything. What sorts of systems contain everything, or try to? Philosophical ones. So I would argue that our individual understanding of right and wrong is determined by our own philosophy. In so far as we have such a general philosophy, then we already know right and wrong. If we are unsure of them, it is because our philosophy remains unformed in our own minds.
John White, London
Why should we expect to be able to know right from wrong? Morality isn’t written into the universe the way facts of nature seem to be: it’s a matter of human choice, and people choose to respond to moral issues in different ways. Systems such as Bentham’s utilitarianism or Kant’s deontology have important insights but they all have drawbacks – the first for its wilful disregard of innocent people’s (assumed) rights, the second for its disregard of consequences. But what is the yardstick against which we judge the apparent failings of these two systems? For positivists, it’s a matter of psychology based on evolution and upbringing. Does this lead to relativism, with its apparent contradiction that we should never intervene in another culture or criticise a psychopath? I don’t think so. Within most polities the idea of inflicting unnecessary pain on the innocent is abhorrent. Through some inner instinct or psychological preference, we know (or is it believe?) that such cruelty is wrong. And we know if we follow certain rules that our society will give us outcomes that more or less accord with our moral preferences. In many countries enough people share enough of these values to give a sense of common purpose in pursuit of morality. Why shouldn’t we seek to convince others that ours is a way of life that suits human psychological preferences, both theirs and ours?
However, that cohesive set of common instincts breaks down in more problematic cases such as abortion or various versions of Philippa Foot’s ‘trolley problem’. For these there may be no agreement on what is right and we don’t have a method of deciding in some formulaic way what the correct action is. Any solution will cut across someone’s inner instinct, and there is no other way of testing the decision-making process. We agonise over these difficult problems. Perhaps the important question is not Did we get the morally right solution? – where there may be none – but Did we agonise enough? Did we grapple and make sure we looked at the problem from all possible sides?
Peter Keeble, Harrow, London
If someone were to walk off with your shopping bag in a crowded marketplace, would you judge the petty thief less harshly if he or she grabbed your bag by mistake?
The answer to that question may depend on your culture, finds a study led by University of California, Los Angeles, anthropologist Clark Barrett.
The researchers tested the degree to which intentions influence the way people judge the actions of others in societies across the globe. The result? The extent to which intentions affect people's moral judgments varied across cultures.
Moral intent hypothesis
According to most philosophical and anthropological research, and according to the law in many societies, intentions affect moral judgments, Barrett told Live Science. Take, for example, the distinction between first- and second-degree murder. The difference has to do not with the actual act itself, but rather with the state of mind of the perpetrator when committing the act, Barrett said. (A first-degree murder is premeditated; a second-degree murder is not.)
More generally, "there are many cases where how harshly you might blame someone for doing something or failing to do something might depend on your judgments about whether they did it on purpose or not," he added.
In fact, the scientific literature suggested that weighing intentions when making moral judgments was a universal human trait, an idea Barrett and colleagues termed "the moral-intent hypothesis." Most of the studies supporting this conjecture, however, took place in Western, industrialized countries. Barrett said he and his colleagues wondered if the hypothesis held true in small-scale societies in other parts of the world.
Intent versus accident
The study involved 322 participants in 10 populations on six continents. These populations included two Western societies, one urban (Los Angeles) and one rural (the Ukrainian village of Storozhnitsa), as well as eight smaller-scale communities from other parts of the world.
To determine how study participants made moral judgments, researchers presented individuals with several stories in which a person, the actor, committed a harmful act of some kind; participants were then asked to rate the "badness" of the action, on a 5-point scale ranging from "very bad" to "very good." The scenarios included theft (of a shopping bag in a marketplace), physical harm (hitting someone), poisoning (a community water supply) and committing a food taboo (eating a culturally frowned-upon food).
Importantly, the scenarios also varied by whether the wrongdoings were accidental or intentional.
"The strong version of the moral-intent hypothesis would be that doing any of those things would be judged more wrong when one does it on purpose than when one does it by accident," Barrett said.
Pardonable or not?
When data from all of the societies studied were pooled, the hypothesis held up: Overall, people judged intentional actions about five times as harshly as accidental ones.
However, among the 10 societies, the extent to which intent affected moral judgments varied. In the Western societies, Los Angeles and Storozhnitsa, intent seemed to influence people's moral judgments the most. Whether an act was purposeful or inadvertent mattered much less to participants on the Fijian island of Yasawa, and to the Hadza and the Himba, two populations in Africa, than it did in other populations, Barrett said.
For example, poisoning a water supply "was judged, essentially, maximally bad by the Hadza and the Himba regardless of whether you did it on purpose or by accident," Barrett said.
"People said things like, 'Well, even if you do it by accident, you should not be so careless,'" Barrett added.
In other societies, in contrast, while people still judged the accidental poisoning as bad, they viewed it less harshly than they did the malicious one.
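As a rough illustration of how such a comparison could be summarized, here is a short Python sketch on invented ratings. The society names follow the article, but the numbers, the 1-to-5 "badness" coding, and the simple difference-of-means "intent effect" are assumptions for illustration only, not the study's data or method.

```python
# Illustrative sketch only: hypothetical "badness" ratings (1 = not bad at all,
# 5 = very bad) for one scenario, split by whether the act was intentional.
# Numbers are invented; this is not the study's data or analysis.
from statistics import mean

ratings = {
    "Los Angeles":  {"intentional": [5, 5, 4, 5], "accidental": [2, 1, 2, 2]},
    "Storozhnitsa": {"intentional": [5, 4, 5, 5], "accidental": [2, 2, 1, 2]},
    "Hadza":        {"intentional": [5, 5, 5, 5], "accidental": [5, 4, 5, 5]},
}

# A larger gap between intentional and accidental ratings means intent
# mattered more to that group's judgments of the same harmful act.
for society, r in ratings.items():
    gap = mean(r["intentional"]) - mean(r["accidental"])
    print(f"{society}: intent effect = {gap:+.2f} rating points")
```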
The researchers also examined the way other "mitigating" factors — such as whether the agent acted in self-defense, acted based on misinformation or was insane — might soften participants' moral judgments. Across the board, people viewed acting out of necessity — the example of necessity given was knocking another person down to reach a water bucket to put out a fire — and acting in self-defense as factors that would mitigate a moral judgment. There were also some cross-cultural variations in the factors that people regarded as mitigating: the factors of insanity or acting on mistaken information were seen as mitigating in L.A. and Storozhnitsa, but not on Yasawa.
"We in the West and people who have been educated in a Western scholarly tradition … think that intentions are quite relevant to moral judgments, so one of the surprises of the paper was that there were more contexts and places than we might have expected when they were less relevant than we thought," Barrett concluded. "That might mean that there are many other examples of moral variation that we have yet to discover."