Developing Ethically Coherent Characters

A good story usually demands a strong plot, and a strong plot is advanced through the skillful use of conflict.

Conflict, of course, starts with characters who think and act in specific ways; their patterns of behavior set the contours of how conflicts begin, progress and resolve over the narrative arc of the story.

Five introductory points about ethical consistency:

  1. At heart, ethics relates to the process by which people make value-laden choices. When there’s no choice, or no values at stake, then the question isn’t an ethical one. For example, personal preferences (e.g., “I like cashews more than brazil nuts”) aren’t a source of moral dispute.
  2. People aren’t always consistent, but they do tend to naturally fall into one of the broad ethical paradigms. No one does the right thing all the time, and always for the exact same reason; characters like Galadedrid Damodred in The Wheel of Time simply do not exist in the real world, so their presence in literary worlds proves especially jarring. Likewise, no one does the wrong thing all the time.
  3. When pressed, people can do the “right” thing for the “wrong” reason — with wrong merely suggesting a conformance to a different (i.e., non-dominant) moral paradigm.
  4. When pressed further, people can act against their moral principles. It doesn’t happen often, however. People who frequently make bad moral choices are inadvertently telegraphing that their ethical framework isn’t as straightforward as they claim.
  5. People rarely reset their default ethical worldview. Such a change can happen, but it happens rarely enough in the real world that it makes for a shaky plot device. Usually these changes follow from significant trauma or long-running psychological stress.

The most common “broad moral paradigms” include:

  • Egoism. In a nutshell: Egoists do what redounds to the greatest good for the self.
  • Deontology. Duty-based ethics (i.e., Kantianism) suggests that the morally correct behavior is that which meets a generalizable duty or universal moral rule. For example, people can agree to the maxim that “It’s never okay to lie” and therefore we have a duty to avoid lying. We must do our duty, no matter the consequence.
  • Consequentialism. Consequentialism subdivides into many different groups. Utilitarians, for example, divide into “act utilitarians” (actions are judged) and “rule utilitarians” (the rules surrounding the actions are judged). Regardless of their tribe, however, consequentialists generally agree that the morally correct behavior is that which generates the greatest good or the least suffering, for the greatest number of people. Duty isn’t usually a major consideration.
  • Natural Law Theory. The natural law suggests that innate patterns in human nature — discoverable through study of universal human behavior — should govern. Popular in the Middle Ages, this approach isn’t as common anymore.
  • Divine Command Theory. The morally correct behavior is that which is willed by the supreme supernatural being(s). In other words: Do what God says.
  • Virtue Theory. The virtues rely on the development of character and follow from the ethical teachings of Aristotle. A virtue theorist balances various virtues (e.g., temperance, fortitude, bravery) to arrive at a recommended course of action. The vices (sloth, envy, etc.) should be eradicated to grow in character and thus in virtue. In a sense, the ethically correct behavior is that which the virtuous person undertakes.
  • Care Ethics. A modern innovation, care ethics seeks to preserve the relationships among those affected by an ethically difficult situation. The outcome is sometimes less relevant than maintaining amity. A special consideration is extended to people disadvantaged by the dispute.

Important non-theories include:

  • Contractarianism. The idea with contractarians is that our only moral duties are those we explicitly negotiate with others. However, this line of thinking is just a variant of selective deontology (as in, I only have a duty to those for whom I agree to incur a duty).
  • Rights Theory. Someone who emphasizes rights above all other considerations is just aping a form of deontology (i.e., giving pride-of-place to the maxim that “people ought to respect the rights of others”). Depending on the justification, it’s also a variant of rule utilitarianism.
  • Honor Theory. Approaches that emphasize honor — you see it often in urban hip-hop culture, with its stress on respect — tend to loosely follow a care-ethics framework.
  • Ethical Nihilism. If you believe that there’s no such thing as morality, or that ethics can’t be universally applicable, then you’re a nihilist. But at heart, you’re really an egoist because you’re suggesting that whatever you do is, ipso facto, morally justified.
  • Hedonism. The whole “live and let live in peace and harmony, dude” mindset follows from a variant of consequentialism with a bit of egoist seasoning.
  • The Lex Talionis. The idea of “an eye for an eye” is sometimes incorrectly assumed to be a function of the natural law. In fact, natural law focuses on traits universal among humans; it’s not a surrogate for survival-of-the-fittest fetishism.

A few other points warrant mention.

First, ethical paradigms don’t relate well to the DSM-5. For example, an ethicist might classify as a “super-enlightened egoist” someone diagnosed by a psychologist as a sociopath. Many assertions of mental illness along the lines of antisocial personality disorder or borderline personality disorder can distill into a form of ethical egoism that the psychologist simply refuses to accept as being a legitimate moral worldview. There’s long been a tension between the ethicist and the psychologist.

Second, many people mix their metaphors. They’ll follow the duty-bound approach of a Kantian for most things, but resort to consequentialist thinking when they want a free pass that Kant won’t offer. Or they’ll follow their scripture in their personal life but follow a care-ethic approach in their professional life. Again, consistency isn’t common, nor is it necessarily a desirable trait. But to the degree that people are inconsistent, they’re often consistently inconsistent.

In practice, adherents of each of these schools might come (correctly! and legitimately!) to different conclusions given the same case study. Consider the following hypothetical:

Bob arrives at work at 8 a.m. He sees his co-worker, Sally, arrive at 9 a.m. — but he discovers that she wrote 8 a.m. on her timesheet. After a bit of peeking, he concludes that she’s been faking her time card for several months, bilking her employer out of hundreds of hours of wages. Bob considers what he should do with his knowledge of Sally’s behavior.

In this situation, people can legitimately arrive at different conclusions.

For each theory, the guiding consideration and the likely outcome:

  • Egoism (What’s in it for me?): Bob fundamentally doesn’t care about what Sally’s doing. He briefly considers whether to extort a payment to keep quiet or to fake his own timecards; either way, he’s not terribly invested in Sally’s theft as long as it doesn’t affect him.
  • Deontology (What’s my duty?): Bob has a duty of loyalty to his employer, so he doesn’t hesitate to report Sally to their boss.
  • Consequentialism (What’s the best outcome?): Theft of wages from an employer increases the work for others and reduces the labor budget available to others. As such, Sally’s theft is (on balance) detrimental to the company and to other employees, so Bob reports her conduct to their boss.
  • Natural Law (What would we expect a regular person to do?): By reporting Sally, Bob will uphold a universal truth that crosses cultures: that people who have been injured by theft should be made whole, and that people who violate norms of conduct should not have their transgressions ignored.
  • Divine Command (What does God will?): As a devout Christian, Bob knows that stealing is wrong, so he encourages Sally to report herself and make restitution to their boss, and to repent to the Lord.
  • Virtue (What would a good person do?): Because stealing for any reason is the mark of a weak person, Bob does not hesitate to report Sally to their boss.
  • Care (What resolution preserves our relationships?): Bob approaches Sally to ask why she’s been mismarking her timecards. He suspects that if she is struggling financially, he can help her out — but fundamentally he wants to help her stop her theft so he doesn’t have to report her to their boss.

Sometimes people get confused and think that because different people can reach different ethical decisions for different reasons, morality as a concept is therefore unworkable. Untrue. The complex moral reasoning of most ordinary people resembles the Myers-Briggs Type Indicator: One or two paradigms are dominant, another one or two sometimes crop up, and others almost never make an appearance.
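
If it helps to see that profile idea concretely, here is a minimal sketch in Python (purely illustrative; the class, the weights and the notion of sampling a “framing” are my own toy model, not anything drawn from the ethics literature). A character carries a weighted mix of paradigms, one or two of them dominant, and most of his decisions get framed through those.

```python
import random

# Toy model only: a character's ethical profile as weighted paradigms,
# echoing the Myers-Briggs analogy above (one or two dominant, the rest rare).
PARADIGMS = ["egoism", "deontology", "consequentialism", "natural law",
             "divine command", "virtue", "care"]

class Character:
    def __init__(self, name, weights):
        # weights: paradigm -> relative likelihood that this character
        # frames a moral choice in that paradigm's terms
        self.name = name
        self.weights = {p: weights.get(p, 0.0) for p in PARADIGMS}

    def framing(self, rng=random):
        # Sample the paradigm through which the character frames a dilemma.
        paradigms, weights = zip(*self.weights.items())
        return rng.choices(paradigms, weights=weights, k=1)[0]

# Bob from the timesheet example: mostly duty-bound, occasionally consequentialist.
bob = Character("Bob", {"deontology": 0.6, "consequentialism": 0.3, "care": 0.1})
print([bob.framing() for _ in range(5)])  # e.g., mostly "deontology"
```

Run it a few times and Bob reads as consistent without being a robot, which is the “consistently inconsistent” texture described above.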

If your characters consistently behave as humans would behave in the real world, then not only are your characters more plausible, but the conflicts generated by their clashes are more powerful. Never underestimate the power of base moral conflict to drive tension and keep a plot advancing. When done well, these psychological studies drive powerful reader engagement and lead to more compelling stories.

The Moral Assumptions Within Income-Inequality Arguments

Throughout all the Sturm und Drang of the politics of wealth redistribution — intensified since the 2008 financial crisis — various groups assembled to review options to moderate the gap between rich and poor. Usually, such groups issue reports filled with dismal statistics and urgent demands for sweeping economic change, couched in language that suggests, but never justifies, a moral imperative to act.
Case in point: Laura Kiesel, writing for MainStreet, quotes a recent Oxfam International report alleging that the wealthiest 1 percent of the world’s population now controls 48 percent of the world’s wealth, and that the 85 wealthiest people on earth control as much wealth as the 3.5 billion people on the bottom end of the scale. Let us assume, prima facie, that the Oxfam International report is accurate. Many commentators immediately jump to the assertion that such an imbalance of wealth is politically and morally objectionable.
Question: Why is wealth imbalance morally objectionable?
One common rhetorical strategy is to assert that a specific cohort of people find imbalance to be unfair. And if it’s unfair, then clearly it’s unethical. Recent polling suggests that about 54 percent of Americans making above $70k favor redistribution, but for households below $30k, the rate jumps to 74 percent. The less you have, the more you resent those who enjoy plenty, and the more you’re excused for your resentment. A delicious interplay of argumentum ad misericordiam and argumentum ad populum.
Resentment, though, isn’t a compelling moral justification for the confiscation of another’s assets. (Although, I suppose, it could be a perfectly valid political justification, depending on the health of the state.) We haven’t really gotten to the heart of the question, yet, so let’s come back to Oxfam. Kiesel’s article addressed the group’s “Seven Point Plan” to reduce income inequality by clamping down on tax dodging, offering free/universal health and education, shifting tax burdens from labor/consumption to wealth, moving toward so-called living wages, introducing equal-pay laws, guaranteeing a minimum basic income and agreeing to “a global goal to tackle inequality.”
The ideologically astute will no doubt observe that Oxfam’s laundry list hews astonishingly close to the default policy preferences of the Far Left and includes major policy points that aren’t central to the goal of significantly flattening the distribution curve. Either Oxfam and its coreligionists have cornered the market on the best way to make everyone’s life better, or they’re singing to the Marxist-Leninist choir from The Hymnal of the Righteous.
Righteous. A curious term. An interesting tidbit about moral philosophy: It’s the twin to aesthetics. Go to any Philosophy 101 textbook worth its salt and look at the various trees of specialization beneath philosophy as a discipline. You find theories of fact — metaphysics, epistemology, ontology, etc. — and theories of value. There are only two value theories in philosophy: ethics and aesthetics. The first addresses the question of what is right, and the second, what is beautiful. But their approaches are largely similar, and they deal with similar concerns about universality and interpretation.
Within the discipline of moral philosophy, several paradigms assert themselves. None really offers a compelling, immediately obvious justification for the assertion that income inequality is, ipso facto, a morally blameworthy scenario:

  • Divine Command: In the Christian world, the highest commandment is to “love God with all your heart, and to love your neighbor as yourself.” In practice, this commandment preaches individual generosity to the poor and the avoidance of ostentatious consumption. Significantly, Biblical norms address an individual person’s responsibility to assist the poor, not a state’s obligation to prevent poverty. It’s a big leap to claim that Scriptural injunctions to alleviate the suffering of the least well-off require the coercive power of government.
  • Natural Law: This approach is probably the least favorable to wealth distribution among all the main ethical paradigms.
  • Deontology: A good deontologist is a slave to duty. Although a person can assert some duty to help the poor, someone else can assert a counter-duty to maximize the efficiency of capital. Duty-based ethics is about process and intent rather than outcome; a duty-based claim in favor of redistribution can be countered with a duty-based claim against it.
  • Consequentialism: In the mode of moral reasoning that elevates the outcome above all other considerations, the moral nod goes to the person who can make the most sound and convincing claim about what will follow if some action is or isn’t undertaken. As such, consequentialism itself — like deontology — is indifferent to the plight of the poor, except in those cases where a person advances an argument related to the poor that’s more compelling than the counter-argument.
  • Egoism: If you’re a “have not,” you want to become a “have;” if you’re a “have,” you want to avoid becoming a “have not.” Because the locus of moral reasoning is on the self, egoism does not readily admit to compromise positions for sweeping social issues.

So the point of the bullets above is merely to indicate that there’s no obvious, inherent moral imperative to support wealth redistribution. Many, many arguments pro and con litter the rhetorical landscape, some more convincing than others, but the fundamental point is that redistribution is a conclusion, not a premise, within the broader economic debate.
Question (again): Why is wealth imbalance morally objectionable?
Many worthy arguments both favor and oppose the significant redistribution of capital. I think, though, that the real question here isn’t moral, it’s aesthetic. People look at the juxtaposition of a wealthy person like Bill Gates or Carlos Slim or a prince of the Saudi royal family, relative to an emaciated child living in the slums of an Indian metropolis or in a camp in the East African desert, and find the comparison to be not beautiful.
It takes a callous soul to argue that it’s beautiful that some people live in palaces, dining on endangered species, while other people live in rape tents, dining on a few bugs and table scraps. Inequality, in its extremes, is ugly. And because it’s ugly, we are tempted to flip from the aesthetic to the ethical side of the philosophical coin and therefore conclude that it’s also inherently immoral. (Such a move is common: Think of how many book and movie villains aren’t just evil, they’re also deformed in some physical or psychological manner.)
The thing is, many ugly things are perfectly OK from an ethical standpoint. Controlled burns of national parks, for example. And many beautiful things are morally repugnant: Look at the formal photos of a child bride on her wedding day for a case study.
The moral dimension of wealth inequality cannot be trumped with the “ugly” card. We need reasonable debate to ensure that the self-righteousness that comes from privileging our moral positions as assumptions instead of arguments yields to a degree of good-faith pragmatism that keeps us from demonizing the Other. Even when the Other is a guy worth billions of dollars and you’re left paying for a useless graduate degree in puppetry.
Because when your aesthetic sense tricks you into thinking that your moral preferences are normative, you won’t stop at income inequality. You will, like Oxfam International, subsume a whole list of policy preferences under the pristine banners of Progress, giving you the joy of righteousness while guaranteeing your efforts will come to naught.

What Would Impel You to Murder?

Statutes recognize several distinct grades of legal culpability when one human kills another. Deaths resulting from the acts of a perpetrator who didn’t intend to kill and had no ill will for the decedent — i.e., the crime lacked intent and malice — may end up with a manslaughter charge, whereas a death arising from the perpetrator’s failure to exercise due care might be charged as a negligent homicide. When a death occurs because of the willful act of the perpetrator, the charge becomes murder, which most jurisdictions divide into two or three degrees. Many crimes of passion get charged as second-degree murder. Premeditated killings earn a first-degree murder charge. Layered into the mix are a host of defenses — insanity, self-defense, accident, impairment, victim retaliation, etc. — that attempt to minimize the mix of intent and malice that leads to specific charges and specific sentences.
The law’s judgment, however, imperfectly squares with moral judgment. To many ethicists, killing in reasonable self-defense — including during combat — and killing that follows from an unforeseeable accident both carry minimal moral culpability. A person’s moral burden increases when a death results from an avoidable set of circumstances, like intoxication or reckless driving. It increases further when a killing that might legally be justified nevertheless could have been avoided with non-lethal approaches to conflict resolution. It increases still further when the perpetrator put himself into an environment where there was a known and avoidable risk of violence, like when an angry husband returns home to confront a cheating wife. When you cross the threshold into first-degree murder, an ethical distinction follows from the reason for the crime; this reason may appear in sentencing memoranda but usually not in the charge. In general, the more the act of murder depersonalizes the victim, the higher the level of ethical censure.
Let’s shift gears. I’ve been doing a lot of editing of short stories for the Brewed Awakenings anthology. As part of my prep, I’ve visited libraries and bookstores to browse recently published novels and anthologies, to get a better feel for how certain plot devices unfold or how other authors manage the flow of dialogue and contextual information within a scene. What I’ve taken away from that exercise is that for many writers — although, to my satisfaction, none in our anthology — killing is something that just seems to happen, often without malice or intent. Murder becomes a plot device that’s divorced from any real grasp of what the crime actually entails in the real world. (It’s curious how many contemporary novels rely on killing and rape as staple plot conventions, despite near-universal condemnation of the practices. Perhaps there’s something significant in that.)
For an average person, the innate prohibition against murder is so strong that the only realistic way he’d kill another is by accident or through avoidable impairment. So when authors craft tales about premeditated murder, the killer rarely works as a character when he’s a Joe Sixpack archetype. Premeditated murder by a psychologically competent offender occurs for only a small number of reasons:

  • Financial or reputational gain (contract hit men, insurance windfalls, gang violence, failed drug deals, prison murders)
  • Revenge (grudges and other personal animosities against a known victim, honor killings, failed marriages)
  • Jealousy (knocking off a rival for someone’s affections, envy over the good fortune of another, killing a scorning lover)
  • Service to a cause (ideology, religion, sociocultural tribal codes)
  • To avoid exposure (cover up other crimes, silence whistleblowers)
  • To gain exposure (school shootings, serial killing, police-assisted suicide)
  • Bias (hatred of known or unknown others who exhibit a disfavored characteristic, tribal initiations, out-of-control bullying)
  • Thrill (killing for fun by a person not psychologically compromised, BDSM snuff activity)

Of course, reasons for premeditated murder by the psychologically incompetent run the gamut — “the voices made me do it,” etc. — but that class of perpetrator is less interesting because they’re acting on disordered compulsions, so their actions are rarely voluntary in the sense that they rationally consider their motive, means and opportunity to kill another absent any legal justification for doing so. In this sense, although some serial killers are impaired, certain diagnoses within the DSM-5 don’t rise to the level of acute psychological disorder that removes moral culpability. A person with antisocial personality disorder, for example, has a diagnosis that may well be admissible at trial, but all but the most severely afflicted can still function normally and make rational choices about first-degree murder.
All of the above having been established, the question for authors is straightforward: Can you explain why a rational person willingly ended the life of another? The cultural and even instinctive taboo against unjustified homicide runs deep. A person rarely just wakes up one day and snaps into Murder One (that’s what Murder Two is for); the sequence of events leading to the pulling of the trigger or the wielding of the knife takes weeks, months or years to develop. Introducing a premeditated murder at random makes for a thin plot.
But the larger question rolls beyond authors and includes everyone. What stops us from killing? For some, it’s that pre-rational inhibition rooted in culture, religion or instinct. For others, it flows from a panhumanist love for all living things. And don’t forget the fear of arrest, trial and incarceration, and the deep loss of friends, family and freedom that follows. Or the physical difficulty that comes from subduing another and the exposure to blood and internal organs that may dissuade the squeamish. Authors rarely seek recourse to the rich literature on ethical paradigms; if they did, they’d realize that certain ethical frameworks justify the don’t-murder injunction using starkly different logic models. (Consequentialists, I think, have the hardest time with this problem.)
There’s no such thing as a random killing. Each murder has a reason for its commission that outweighs the relative risk of its consequences. For authors, there’s probably some wisdom in avoiding the rape-and-murder trope unless you can paint a compelling character sketch of the perpetrator — why did he do it, and why didn’t the fear of consequences deter him?
For everyone else, it’s a useful exercise to consider the circumstances that could lead you to cold-blooded murder. And if you find that you cannot list any, then follow up with the question: Am I deceiving myself?

Wisdom and The Law

Many moons ago, I half-justified to a friend a particular deviation from Catholic moral theology by arguing that as long as he understood the rationale behind the Church’s prohibition, he could live according to the real moral truth (imperfectly encapsulated by a behavioral norm) despite his superficial non-conformance with the letter of the law. The subtext of that somewhat Gnostic argument? That much of the thou-shalt-not discipline of Scripture and Tradition was intended to provide concrete guidance to the great unwashed masses who lack the intellectual wherewithal to properly adjudicate complex ethical problems.

We, the wise, however, ought not to labor under such crude restrictions, better suited to toddlers than adults. Ergo, as long as we could tease out a logical superstructure of principle beneath those crusty old rubrics, we could live as enlightened souls who didn’t obsess over compliance with rules intended to shepherd the children around us.

Interesting, then, to read Aaron Rothstein’s review of Steven Weitzman’s Solomon: The Lure of Wisdom published in the current issue of The Weekly Standard. Rothstein — a medical student at Wake Forest — offers a refreshing insight into the interplay of wisdom and spiritual humility:

The rabbis conceived of gezeirah, alternatively known as building a fence around the Torah. One places certain restrictions on lifestyle in order to (in Rabban Gamliel’s words) “keep a man far from transgression.” … In other words, if we understand the secrets of why we do certain things or why certain laws exist, we remove the barriers that prevent us from breaking more serious laws. Solomon’s downfall, then, demonstrates the danger of too much understanding — a biblical version of the Faustian tale.

Put differently: Laws, both secular and religious, serve as markers that delimit acceptable behavior. In many cases, those markers sit very far away from grey zones, in order to protect people from the confusion reigning at morality’s twilight. When we, the wise, treat those markers as rules for the unenlightened — when we decide that our own wisdom is sufficient to light our way through that twilight — we risk missing the next set of markers, hidden in the darkness, roping off the point of no return. Ethics becomes tautology: A given act is acceptable because I believe it’s acceptable. QED. And the wisdom inherent in the original placement of those universal markers fades from public consciousness.

Having decided that I have ascertained the real lesson of the rules that govern everyone else, I can then presume to know when those rules may safely be broken. And what’s true of one’s private life — e.g., out-thinking biblical dietary norms — works in one’s public life, too. Anyone want to wager whether all the best and brightest at the National Security Agency will always elect to follow domestic-surveillance laws, enacted in messy and haphazard fashion by a dysfunctional Congress, when the intelligence analysts think they’ve got a compelling case to skirt them in service to their version of the public good?

We break laws with impunity when we think we understand the law’s purpose and decide that some other, higher, purpose ought to trump.

Therein lies the risk. Humility — a trait that often comes easier to men of intellectually modest means — helps us to acknowledge that the law’s markers serve a valuable purpose and were erected with foresight. When we lack that humility, we treat those markers as speed bumps, yet we rarely acknowledge that some wisdom superior to our own may have played a part in setting them.

Solomon fell because he out-thought the Torah and God decided to remind him that mere wisdom isn’t a license to disobedience. Today, we see case after case after case of men out-thinking both statutory and natural law, and we must ask: Are we really that wise, after all?

On the Ethics Relating to Feral Cats

Last week, I said that I’ve got a family of ferals in my garage. These four felines have prompted quite a bit of discussion among my friends.

First, the Lenin question. What is to be done? Abbi and Brittany advocate TNR — trap, neuter, release. The idea is to crate the cats, take them to a local clinic that does free spaying and neutering for ferals, and then put the fuzzy four-legs back where they came from. The argument is that TNR is the most humane way of addressing burgeoning urban populations of feral cats: You don’t kill them outright, but you do remove their ability to procreate, thus controlling their numbers and limiting their footprint upon the bird and small-mammal populations.

Stacie, by contrast, echoes the official line from PETA, which is to trap and euthanize. The argument is that there are too many feral cats already, killing birds and otherwise disrupting the local wildlife while simultaneously leading Hobbesian lives that are nasty, brutish and short. Better to painlessly euthanize them as an invasive species and be done with it.

Of all the ethical positions to take, it appears that the least laudable is precisely the one I’ve taken: I am feeding them. I’ve been giving Snowball and her three shorties a cup of dry food per day plus a plastic dish of clean water.

My strategy leads directly to a second point, regarding the line between feral and domestic cats in general. The mama of the bunch — which I’ve cleverly named Snowball because she’s solid white — went from hissing at me if I got within 10 feet, to meowing (happily) when she sees me and letting me pet her when I feed her. She actually comes to me when she sees me in the driveway. All this, within one week. Yes, her behavior is Pavlovian. But it’s interesting, because my two indoor cats pretty much act the same way. Granted that the indoor kitties are litterbox trained and don’t scratch stuff up, the question remains whether they’re really all that different from Snowball.

Dogs domesticate. Cats don’t, really. Snowball could probably never be an indoor cat — I wouldn’t even try. My indoor cats would probably die within a week if they were released into the wild. But habituation and domestication are wholly separate concepts.

More interesting are the kittens. When I first saw them, they were old enough to eat dry food and explore on their own, but not so old that they didn’t occasionally nurse. The kittens remained afraid of me; only one let me touch him when they dared to approach the dry food I left out. And now, I haven’t seen any of them in the last two days.  So do I still feed Snowball, when she doesn’t seem to be managing a litter anymore?

Decisions, decisions. Perhaps the best insight came from Alaric, who noted that any intervention with the cats — TNR, euthanasia, even feeding — presumes to intrude upon the natural order of their lives, so right off the bat any choice violates their autonomy as creatures operating in the natural world. Every other ethical choice follows from that first-order violation.

Who would have thought that something as commonplace as a transient family of ferals could prove so ethically complex?

Expertise and Its Discontents

The hardest thing about earning a degree in moral philosophy is discussing ethics with someone who lacks any real academic formation in the subject. Interlocutors assert, based on their belief that they have a personal apprehension of “right and wrong,” that their perspectives are just as valid as the expert’s. After all, everyone’s entitled to an opinion.

Except … when they aren’t. An opinion is nothing more than the conclusion of an argument whose major premises, more often than not, are unexpressed. And an argument’s conclusion is open to criticism based on the usual criteria of logical consistency, factual coherence, etc. You’re not “entitled” to be unchallenged in your idiocy. Privilege only applies to preferences (e.g., “I like cashews”).

So you get into these positions where you’re discussing the ethical propriety of “X” and person “Y” refuses to accept that someone else has a better understanding of the subject than they do. For people unschooled in the academic aspects of moral philosophy, “ethics” is merely “deciding what’s right and wrong,” and since everyone thinks his moral compass is infallible, most people are resistant to counsel that flows from first principles.

There are parallels with other domains of expertise, too. Try being the statistician with a solid understanding of data-management theory, and then explain to the uninitiated why they can’t get the data they want, when they want it, in the format they demand. If you’re the only one in the room who understands the technical implications of a request, and everyone else in the room is a customer from the business who only knows that they want a result yesterday, then most attempts to impose methodological coherence are viewed as negative or obstructive: “Just give me what I want.” Even if what they want is mathematical nonsense.

Or, take baristas. Just because you know how to make a pot of coffee doesn’t mean you’re smarter than your barista, and it doesn’t give you the right to treat the coffee-ordering experience as if it were a Cold War treaty negotiation. You really don’t need to counsel the barista about how many pumps of syrup to use or ask irrelevant questions like whether the beans are shade-grown or whether you can substitute fermented yak milk or whether you can bring in your own antique glass cups for your hot tea. You also don’t need to comment on the names of cup sizes. Just drink the damn coffee and leave a tip.

Even physicians aren’t immune; more and more doctors lament that self-diagnosed patients are more difficult to treat and are more prone to abandoning prophylactic treatment because they think they know better.

Expertise, then, is both a blessing and a curse: You really do know better than others, but others fail to concede the point.

So next time you’re arguing with someone whose education and experience outshine your own, remember how you felt when the tables were turned and shape your response accordingly.

Bioethicist Claims Embryonic Genetic Engineering Is “Moral Obligation”

With this writer’s hat tip to Slashdot, now comes The Telegraph to report that Oxford professor Julian Savulescu — who is also editor-in-chief of Journal of Medical Ethics — argues that it’s a “moral obligation” to use genetic engineering on embryos to screen out “genetic flaws” that contribute to lower intelligence, higher aggression or sociopathic behaviors.

Attentive readers will no doubt recall that this isn’t the first time that our friends in the bioethics discipline have articulated positions that fall significantly far afield of mainstream thought; just this past February, a brief editorial in (surprise) the Journal of Medical Ethics suggested that there’s a right to after-birth abortion until the time the infant is capable of higher forms of self-awareness, because newborns — being incapable of true moral agency — have no moral standing and therefore enjoy no moral rights.

Outrage over the conclusions advanced by the high priests of bioethics is easy to muster; less simple is fixing the underlying problem. The most significant hurdle with bioethics is that it’s not a discipline of philosophy — it’s simply a normative subdiscipline of biology. Thus, the scope and methods of “pure” moral philosophy rarely seem to rate. The goal of the bioethicists, by and large, is to advance the industry, not to advance clear thinking about difficult subjects. Indeed, as I opined last week, it’s a shame that the real philosophers spend so much time playing language and logic games that the practical questions about what to do and why get outsourced to industry experts who add “ethics” to their title but otherwise espouse an industry- or ideology-specific worldview that feels no different than a press release.

Bioethics isn’t ethics. Bioethics is the practice of making value judgments about controversial or disputed behaviors, using knowledge and principles and assumptions that come from within the life sciences. Many bioethicists receive training in abstract moral philosophy, but their bread and butter comes from within medicine or biology or zoology or a cognate discipline. And it shows in what moral principles they espouse: Much of what passes as high-level, peer-reviewed bioethical thought uses the same simplistic tools taught to undergrad philosophy majors. When you set aside moral psychology, faith-based reasoning and alternative moral paradigms in favor of a set-piece analysis that focuses on “agency” and “autonomy” and “paternalism” and treats human life in any form as an instrumental rather than intrinsic good, you arrive at a default position that’s a better fit for an undergrad term paper than a serious piece of moral reasoning.

Despite Savulescu’s claims to the contrary, you cannot assume that a human person — even in embryonic form — may be tinkered with on the genetic level unless you first articulate an ironclad ethical argument that justifies what he presupposes from the outset. When you look at human life and assume you can change it without its consent, simply because you increase the odds of a good outcome for other people, you are committing an egregious error in reasoning. You’re assuming the validity of the input based on the desirability of the output.

A good philosopher would know this. Unfortunately, on bioethics questions, many of them are distressingly AWOL.

The rest of us, then, can and should reject the instrumentalism of contemporary bioethics and assert as a first principle that a person is a person and the integrity of one’s genetic code should be privileged — unless, of course, someone arrives at a thoughtful argument that justifies, rather than merely asserts, a contrary position.

Wilson, Haidt & Moral Psychology

A trek through the landscape of moral philosophy reveals an interesting bifurcation within the discipline. Undergrads learn about the history and traditional scope and methods of ethics — Aristotle, Aquinas, Hobbes, Hume, Kant, Smith, Nietzsche, Rawls — but at the graduate level, the positivist/continental dispute rears its head and in many programs, a holistic approach to the discipline collapses into academic factionalism or intellectual solipsism.

As such, contemporary moral philosophy remains bedeviled by its own internal hobgoblins such that applied moral philosophy exists as little more than an offshoot of some other discipline. The philosophers fight increasingly irrelevant battles — the positivists, about linguistic theory or higher-order mathematical logic; the continentals, about principles too abstract to apply to real-world problems — while “ethicists” in other disciplines merely dress up their ideology in moral terms. The bioethicists are notorious for this; they’re biologists first, and cloak their policy preferences in terms like “autonomy” or “justice” or “quality of life” that have astonishingly little relationship to the moral universe from which they purportedly originate.

As an ethicist, then, I’ve held a pessimistic outlook on the discipline. I agree with some prominent philosophers, like Alasdair MacIntyre, that part of the problem is that philosophy needs to get over positivism before it again will become relevant to ordinary people. Philosophers have boxed themselves into a series of dead ends; everyone knows it but too many have invested too much into their sub-sub-subspecialties for meaningful reform to occur anytime soon.

One possible exit strategery flows from … applied moral philosophy. Or rather, the import of some aspects of evolutionary biology into the realm of philosophy proper.

Consider the fascinating developments in evolutionary biology. I recall first encountering the subject with Jared Diamond’s Why Is Sex Fun? This short tome — assigned reading in an undergrad philosophy-of-science class — traced how human sexual behavior evolved alongside the biology of reproduction. Following that, Diamond’s Guns, Germs and Steel identified causal factors in why some social groups dominated and others declined.

More recently, I’ve worked through E.O. Wilson’s The Social Conquest of Earth and Jonathan Haidt’s The Righteous Mind. These books, as I read them, are correlated; Wilson outlines the long-term evolution of social behavior in humans and Haidt covers the territory of moral intuition and how pre-rational intuition leads to the group identities that function as partisanship’s precursor.

The upshot is this: While academic moral philosophy still follows trendy theories down various empty bunny holes, the social psychologists and evolutionary biologists have plausibly claimed that human moral behavior derives from the competition/altruism dynamic within groups and between groups.

Look at it this way: Our first sphere of interest is the local group — family, circle of friends, tribe, affinity group. Within this sphere, we compete for prominence and sometimes sacrifice personal goals for the good of the group. But when that sphere comes under attack, we band together to challenge the aggressor: Sometimes through overt conflict, but sometimes through engagement and compromise. By default, we identify with the local group and because of evolutionary pressure, we’re less likely to express sympathy for or understanding of The Other. The intellectual schema of inter-group disputes falls into the “me good, you bad” mindset that’s very difficult to eradicate even among otherwise educated folks.

People operate in overlapping spheres of group loyalties. We are members of families, clubs, cities, nation-states, religions, self-selected tribes (e.g., of minority groups), political affiliations, socioeconomic strata, etc. All of these memberships influence us; their overlaps force us to make choices among competing and contradictory expectations.

From this chaos of conflicting loyalties comes one logical outcome: the sovereign self — the radical individual, common in Western European civilization, who selects and rank-orders his loyalties in a deliberate way. You see this trend clearly with people who self-identify first as a member of a specific group. When you meet someone new and ask, “So, tell me about yourself?” one clear hint comes from the first sentence. Does the person tell you his job? That she’s married? That he’s gay? That she’s a Christian? This ranking of competing group claims helps a person demonstrate a self-consistent personal ethics.

But cognates matter. Some identities conflict in fundamental ways; it’s hard to be a faithful Catholic, a center/right Republican, a practicing bisexual, a writer and a son of a socially conservative family … simultaneously. These identities conflict. Many elect to pick among these identities and downplay or shed others, often with a sense of viciousness for what’s downplayed. Just think of how many “recovering Catholics” or “former liberals” you’ve met. They haven’t “evolved” — they’ve merely rank-ordered their affiliations in a manner that produces the least psychic violence. (Others, myself included, maintain these affiliations but retreat to a form of relativism in which we acknowledge the conflicts but pretend that we’re above the fray.)

Thus does Haidt’s moral psychology bring a semblance of order to the theoretical chaos spawned by 20th-century philosophy. He seems to concur with Hume’s theory of moral sentiments; the interplay of Wilson’s and Diamond’s insights fleshes out the how and the why of the evolutionary context.

When you see Republicans and Democrats unable to compromise, it’s not necessarily because they’re all just big fat meanie heads unwilling to share. The core beliefs in each group mean something to them, and just tossing group pieties aside to find compromise seems odd. If one party favors high taxes on the rich and the other party favors low taxes on the rich, a “solution” of medium taxes for the rich is incoherent for both sides. Similarly, people who support or oppose gay marriage want an absolute resolution; no one wants a scenario where half the gays can get married.

Politics used to be somewhat immune to this, inasmuch as the traditional passions in American life rarely affected party politics directly at the national level and across the board like they do now. But the divisions we see have always been there, just expressed in other forms (like religious bigotry, overt racism, and intolerance for gays, immigrants, etc.). As America moves ever-closer to a federal society instead of a federalist society, the pressures that used to vent along a hierarchy now can only vent from the top, with results as likely disastrous as they are eminently predictable.

The question for America, then, isn’t “what can we do to reduce partisan gridlock” but rather, “what can we do to manage gridlock more effectively.”

We could start by recognizing the import of moral psychology — in particular, by setting aside the pseudointellectual nonsense about “ideological echo chambers” or “false equivalence” and instead recognizing that group conflicts are the result of a successful society. We should embrace gridlock as a sign of healthy competition among various factions. The most dangerous societies are those with only one voice declaiming from the public square.

Some things do need resolution. (The Fiscal Cliff, for one.) This means that we need more skilled cat herders in politics and the media instead of elites whining that the cats refuse to be herded.

More than anything, though, we need to ensure that there are effective safety valves for intragroup disagreements at various social levels. This means more federalism, capitalism and diversity of thought. It means we need to resist the authoritarian tendencies of Right and Left and to accept that compromise isn’t always a virtue but squelching others is always a vice.

Human moral psychology evolved the way it did because it conferred real survival benefits. Although society is significantly more complex than it was in the days of hunter-gatherer tribes, those pre-rational skills we learned millennia ago remain relevant. If we try to suppress them for the sake of some golden ideal, we risk throwing the whole system into chaos.

[N.B. — Attributions or elliptical statements about any particular author are my reaction to that author’s work, and not necessarily that author’s explicit sentiment.]
