Reason vs. Vitriol: Which Shall Prevail?

I had an interesting exchange a few days ago in a political discussion group with a self-identified liberal who took issue with an off-the-cuff characterization I made about “rabid” and irrational Bush-haters.  The context of the conversation was the nature of the lies and inaccuracies of politicians.

My gentle interlocutor seems a bright enough fellow, with more understanding of the principles of critical reasoning than most I’ve encountered in my online travels.  That credit ascribed, however, he insisted that I had committed an ad hominem, and that a “generous debater” would engage the substance of the argument while assuming the best of intentions on the part of the other.

Well, OK.  On a technical level, he is right that I dismissed out of hand the claims of what I’ll call, for the sake of this entry, the “Rabid Left.”  But it’s not clear that the Rabid Left actually presents an argument of intellectual substance worthy of reasonable engagement.

We all know that there are a number of smart, informed, passionate people who inhabit all parts of the ideological spectrum.  We also know that the spectrum contains people who are, to put it bluntly, dumb as rocks but don’t quite realize it.

Any reasonable person is worthy of civil discourse, conducted in a spirit of good-faith inquiry.  Unreasonable people aren’t, by virtue of their dismissal of sound thinking.

Consider the case of the Rabid Left.  Many of these people hate George W. Bush with a passion.  They repeat the same claims as if they’re engaging in heroic truth-telling:  Bush is an idiot, Bush is a liar, Bush is a war criminal, Bush is a monkey, Bush is a puppet of the Evil Overlord Dick Cheney.

Do these people provide anything other than vitriol?  Rarely.  When they do defend their points, they often resort to assertions ungrounded in fact (e.g., “Bush lied about WMD”) or based on a most unkind character assassination (e.g., “Bush only attacked Iraq because Saddam tried to kill his daddy”).

How is a reasonable person to respond, short of dismissal?  Debate requires conformance to the basic principles of logic.  Things like the law of non-contradiction, the law of the excluded middle, syllogisms, the principles of valid inference — each of those things, employed in the analysis of objective fact, contributes to a coherent conversation.  A sequence of wild assertions does not.
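
For concreteness, here is a minimal sketch of those principles in standard propositional notation (an illustration of my own, not something drawn from the exchange itself):

\neg (P \land \neg P) \quad \text{(law of non-contradiction)}
P \lor \neg P \quad \text{(law of the excluded middle)}
P,\; P \to Q \;\vdash\; Q \quad \text{(modus ponens, a basic form of valid inference)}

A string of unsupported assertions instantiates none of these forms, which is why it gives a reasonable person nothing to engage.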

It is both a strength and a weakness of contemporary liberalism that it remains committed to some sense of process fairness.  A strength, insofar as multiple perspectives are welcomed.  A weakness, insofar as most ideas are considered inherently equal and therefore equally entitled to consideration.  It is this weakness — this tendency to draw moral equivalences — that has so confused much of the post-9/11 discourse in the West.

I have yet to decide whether I will attempt to rationally engage the Rabid Left in the future, or simply to give them a wide berth.  In the meantime, I can only pray that we move ever closer to a day when reason, and not vitriol, becomes the trump card in ideological disputes.

Saving Lives

What would you do to save a life?

Would you leap into a burning building to rescue a child?  Jump into the river to grab a man who can’t swim?  Dive on a grenade to save your squad?  Wade into a group of rioters to pull an elderly woman to safety?

We like to think that we would perform these heroic acts — at the least, in service to family or friends.  Many say they’d even do them for strangers.  And a survey of the newspapers suggests that such things happen with heart-warming regularity.

In the language of duty ethics, philosophers call these types of actions supererogatory acts.  That is, they are things done which are morally praiseworthy but not morally required.  We have no duty to expose our own lives to real danger just to attempt the rescue of another; we all applaud the bystander who dives in front of a bus to knock the foreign tourist to safety, but no one seriously suggests that the bystander had an obligation to do so.

But what if you could save a life by doing something trivial?  Would you do it?  Or, more pointedly, would you refuse to do it?

Hospital errors are believed to be the fifth most frequent cause of death in the United States.  We are aware, no doubt, of the horror stories of people having the wrong limb amputated, or the wrong surgery performed.  These things happen, but relatively rarely.  More frequent are medication errors.  Healthcare quality experts have suggested that patients face a 10 percent chance that a prescribed medication will fail to be transcribed and administered, and that for any given patient in any given hospital there is upwards of a 25 percent chance of a medication error of some sort at some point during an inpatient stay — most frequently because the bedside nurse failed to double-check the dosage or to verify the patient’s identity.

Not only that, but people can die from nosocomial infections (infections acquired in a hospital, unrelated to the reason for the patient’s admission).  The single most effective method of preventing nosocomial infections is diligent hand-washing, yet time and again, clinical staff fail to follow the basic universal precautions.  In fact, a physician of my acquaintance counted the number of times he observed hand-washing one day while sitting with his dying father:  Of more than 25 individual encounters, not once did clinical staff wash their hands.

The irony is that many healthcare workers actually would engage in heroic acts to save the life of a stranger, even though most don’t do the routine things that, statistically speaking, would be more effective at saving more lives in the long run.

Of course, the phenomenon is not limited to healthcare; negligence seems to be a universal human trait.  Yet for all the obsessing we do about the value of human life, the cavalier attitude so many of us take to minimizing unnecessary occasions of potential harm is truly distressing.

We may or may not have the fortitude to pull an infant from an auto wreck, but surely we can all wash our hands and fasten our seat belts.  It may not be heroic, but it reduces the need for heroism, which isn’t a bad place to be.

Social reality

I made my first actual trip to the gym today.  I joined a little before Christmas, but hadn’t had the time to really get into it until the urge hit in the early afternoon.  Interesting mix of people — some older folks trying to stay in shape, some younger guys trying to bulk up, and a lot of people on the treadmills.  Including me: Those things are phenomenal, with full-screen touch-sensitive displays and the whole works.  Impressive place.

Anyway, as I was jogging away on the machine, my mind wandered to thoughts about social reality.  Yes, surrounded by hot chicks jiggling away on stationary machines, I was thinking philosophy.  I am a nerd, albeit an unusually healthy one.  Nevertheless ….

A few years ago, I read John Searle’s book, The Construction of Social Reality.  Although I grasped his main argument at the time, the impact of it hasn’t really hit me until recently.  The book can best be summarized by (believe it or not) its back-cover blurb:  “… Searle examines the structure of social reality (or those portions of the world that are facts only by human agreement …), and contrasts it to a brute reality that is independent of human agreement.  Searle shows that brute reality provides the indisputable foundation for all social reality, and that social reality, while very real, is maintained by nothing more than custom and habit.”

Well and good.  What this means is that much of what we take for granted as objectively true is “true” only insofar as we all agree to think it so.  We believe, for example, that there are facts about money, or marriage, or art — but these facts have no basis in a world without humans.  What good would it do to say that a penny is 1/100 of a dollar if there were no people around to use currency?

It’s interesting to see how a localized social reality, e.g. in the home or office, can shift simply as a matter of public perception.  I have been witnessing just such a scenario play out over the last few weeks.  A generally accepted understanding about a particular person’s role within a group has shifted dramatically merely because a few key players have allowed themselves to form a different opinion about that person’s contribution to the overall effort.  Although nothing specific changed, and there were no incidents to prompt a paradigm shift, the change in the center of gravity of the group meant that the person in question went from insider to outcast in short order.  And once the prevailing winds turned, the others acted as if the new paradigm had been true all along.

Whether it’s the living room or the conference room, I think we too often take for granted that so much of what we believe to be “true” is merely a matter of convention.  As my friend Emilie so eloquently noted, people don’t take kindly to the black sheep in the flock.  When the conventional wisdom changes the social reality of any group of people, the folks clad in dark-colored wool can only rarely use reason and logic to advocate for change, since logic — that is, the art of argument — is essentially the manipulation of fact and not fact itself.  Or:  If a “social fact” becomes the conventional wisdom, then reason alone is disadvantaged against it.

[All of this commends the written works of Robert Greene.  I have read The 48 Laws of Power and am finished with 31 of The 33 Strategies of War.  Next up is The Art of Seduction.  Each of these is written in a dispassionate, almost amoral, tone unapologetically infused with power dynamics — the very essence of the popular misconception of Machiavelli’s works — and a lack of appreciation for the literary devices pervading Greene’s work can be seen in the derogatory reviews his books sometimes receive.  Yet once you peek behind the curtain, the reader encounters some rather interesting insights into managing interpersonal relationships (seduction), group dynamics (war), and personal ambition (power).]

Taking Searle’s epistemological argument to a small-group scenario prompts ethical questions about the appropriate methods of social interaction.  We like to think that being honest, rational, and direct with people is the best policy.  Best, because most noble and most effective in the long run.  But is this necessarily true? 

If the perceptions of a group ascend to the level of social fact, then logic alone is largely incapable of changing it.  Absent logic, only indirect appeals to emotion have the power to shift perception in most people.  Granted that there are rational people who will respond well to well-reasoned arguments, it seems to be the case that most people remain affixed to their social facts and will only change them when the conventional wisdom shifts (we see this herd mentality with the punditocracy, for example).  So if reason doesn’t work, must we resort to emotional manipulation?

I realize that this is not a binary proposition.  Yet the logic of it does suggest that “good boy” behavior is only truly effective for those who can afford to use it.  For the rest of us, other means of ensuring success may be more strongly indicated. 

At any rate, this has been a fun topic to ponder, and I’m not finished with it yet.

A Clash of Nihilisms

Two phenomena dominating U.S. politics are intertwined in a manner that few seem willing to appreciate.

The first is the reality of extremist Islamism, and its culture of jihad waged through means that include suicide bombing as an acceptable, even routine, tactic. The second is the moral collapse of the Left in the Western world. The wellspring, in both cases, is the same — radical nihilism, and the socially appropriate methods of finding existential meaning in a nihilist culture.

In the West, nihilism is essentially philosophical and has its roots in the logic of the Enlightenment. In reaction to the political role played by organized religion in Europe in the later Middle Ages (itself the result of social collapse whose defining moment can be sourced to the fall of Rome in 476), the trend among European elites was to move increasingly far from the Church. Thus, scientism became supreme, and with it, a toleration for difference that culminated in today’s diversity movement.

The assumption underlying classical liberalism is that only through objectivity of fact and relativism of belief could a free society advance. The only truths to which the Left can admit, then, are those it holds (as a matter of faith, ironically) to be absolutely scientific; religion must be limited exclusively to the private sphere, if it is even to be permitted at all, and any other system of belief must not make claim to objective truth.

Of course, this is a problematic position. To claim that scientism is the only approach to truth is to ignore the intellectual problems wrought in areas where scientific rationality simply cannot hold court. And, there is a logical contradiction at the core of relativism — to wit, that all things are relative except for the one absolute that all things are relative.
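
To make the second point a bit more precise, here is a minimal sketch in notation of my own choosing (an illustration only, not a formulation the argument above depends on): let $R$ be the thesis that every claim is only relatively true, i.e., $R \equiv \forall p\, \mathrm{Rel}(p)$. If $R$ is asserted as an absolute truth, then $\neg \mathrm{Rel}(R)$, which contradicts $\forall p\, \mathrm{Rel}(p)$; if $R$ is instead only relatively true, it carries no force against anyone who declines to accept it.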

This leaves today’s Left with … not much in its toolbox. Its scientism, being largely triumphant, is no longer capable of rallying the troops (despite the occasional jeremiad against “the theocons” in Washington). Its relativism, barring it from rendering definitive value judgments, leaves it incapable of responding forcefully to strategic shifts among civilizations that may imperil the very viability of Western civilization. With no core belief system that it can hold to be true (apart from scientism and relativism themselves), the Left cannot articulate a coherent defense of the West or even of liberalism. Hence the inanity of debate in Europe about Islamic immigration. Thus denuded of both sword and shield, all that remains for the Left is mere spit and bile.

This phenomenon is tellingly demonstrated in the behaviors of far-Left politicians in the United States and their cohorts abroad. They are capable, for example, of denouncing President George W. Bush and Prime Minister Tony Blair in the strongest of terms, but those terms are almost always ad hominems. Bush is stupid; Bush is evil; Bush is a monkey; Bush is a war criminal. Each claim is patently false, but for the Left, just making the claim is considered a heroic act of sophisticated truth-telling. Without core beliefs and an openness to non-scientific truths, the Left’s politics is little more than stone-throwing.

Of course, not all denizens of the Left are irrational hate-mongers. Even so, the response across the spectrum of Left-wing civility has been to rely increasingly on asserted value claims rather than on reasoned arguments. Whether your source is the Daily Kos or the New York Times editorial board, too much of the Left’s political commentary is torn between its desire to assert value-laden truth-claims about non-objective subjects and its attempt (usually inadequate) to preserve its scientific, relativistic orthodoxy. I am reminded of the trope used by a priest at my church, long ago, who punctuated an especially animated homily with the statement: “God cannot sin against Himself.” Neither can the Left betray its own basic assumptions without a fair amount of long-term psychological damage.

In the Islamic world, by contrast, the nihilism is more recent and has its locus in demography. Because Islamic philosophy embraced a fundamentally Platonic worldview, it was more comfortable with authoritative pronouncements about the world than the Aristotelian West ever was. In an intellectual milieu wherein Koranic philosophy contains the definitive delineation of metaphysics, there is less need for experimentation or even a spirit of inquisitiveness. Thus, the Islamic world fell behind as the West’s technological lead widened — and with that gap came socioeconomic disability that is fundamentally incompatible with Islamic self-perception. Nihilism is a rational response to the divergence between the ontological claims of one’s faith tradition and the oh-so-obvious reality in which that tradition is lived. In other words: If the logical validity of a person’s scripture is undermined by the discrepancy between the world and what the scripture says about the world, then that person must either abandon (or at least reinterpret) the scripture or abandon the world. Radical Islamists have chosen the latter path.

The demographic problem of the Middle East, then, is fueling the radical Islamist assault. If the Koran says that God blesses Muslims with happiness and prosperity in this world, but you live in backward squalor, then the Koran must be false. But for idealistic youths who have no other socially acceptable outlet for their natural, biological aggression, there is a second option — to assume that Muslim civilization is under assault by the Other (Christians, for example) and that therefore it is the will of God that the oppressors of Islam be brought to earthly justice. Hence the attractiveness of suicide bombing. And given the number of young males, and the high birthrates in the Arab world, the nihilism of the contemporary Islamic intellectual position is explosively aligned with a burgeoning youth culture that grasps for meaning, recognition and tribal solidarity … and finds it in radicalized religion.

The twin gorillas of contemporary American politics are the moral vacuousness of the Left in response to a civilizational assault, and the proper response to militant Islamism per se.

The Democratic Party, the standard-bearer for the American Left, simply denies that radical Islamism actually constitutes a coherent threat. Against all the objective evidence presented over the last few decades (not to mention the assertions of radical imams across the globe), the Left rationalizes its inability to respond to the threat by denying that the threat actually exists.

The Republican Party, for its part, has responded to the threat of Islamist demography but not to the ethos that feeds it. Invading Afghanistan or Iraq (or Saudi Arabia, or Iran) won’t solve the problem. True, it will mitigate it; Bush may be right that the best we can do is take the fight to the enemy on the enemy’s own soil. But the definitive resolution to radical Islamism can originate only from within Islam; Muslims need to reform from within. Until that happens, the best the West can do is simply to police the borders.

The conflict in the West between the desire to respond militarily to provocation, and the desire to ignore the basic problem by treating terrorism as a legal problem, has created a less-than-robust reaction to militant Islamofascism. This weakness, in turn, is incorrectly perceived by radicals as a sign of (a) the emasculation of the West and (b) the favor of God. Both justify the tactics and beliefs of radical Islamism.  The circle becomes vicious, and the body bags pile up.

We are left, then, with a clash of nihilisms. The politically potent and aggressive manifestation of Islamic demographic nihilism is clashing with the morally vacuous and passive manifestation of Western philosophical nihilism. Whether a shift in demography or in political reality will affect the interplay of these nihilisms remains to be seen, but the outlook if things continue as they are is not encouraging — for Islam, or for the West. Perhaps the West will rediscover its faith in its own beliefs and institutions. Perhaps moderate Muslim leaders will stem the worst of radical behavior. Or perhaps the struggle will continue for generations.

Or perhaps one side will learn the wisdom of repudiating its own nihilism, thus freeing it to respond more effectively to the other. Some of us still dare to hope.

Stewardship as metaphor

My church, a Roman Catholic parish of more than 1,500 families, is officially big on the concept of stewardship. You know the drill — you give your “time, talent and treasure” to the church and in return you will get various blessings and happiness. Fork over 10 percent of your gross income, and somehow God will give you even more in return (often in vague and undetectable ways). Something like the “Prayer of Jabez” with a distinctly Catholic spin. It’s a wonder Wall Street hasn’t been more bullish on the concept.

Anyway, we are subjected to relentless preaching about the virtues of stewardship. We are unceasingly exhorted to give, give, give in order to improve our faith lives. Whether one accepts the hidden premises here is irrelevant; what is interesting, from a philosophy-of-religion perspective, is the sequencing of stewardship.

If we concede the religious principle, that acts of mercy or acts of charity are morally good and spiritually beneficial, we must ask: Which comes first? The faith or the act?

Stewardship as an organized program presupposes that good stewards are already good Catholics of strong faith. Yet everyone is encouraged to be a steward. It’s trivially true that not every Catholic is a good, practicing Catholic with strong faith. So what gives? Is this a form of Pascal’s Wager, wherein a life of faith is to be cultivated through habituation? You act like you believe in order to gain faith?

The role of the church is to work to ensure the salvation of souls. Although the parable of the good steward is a great metaphor, the metaphor cannot substitute for reality. Nor can a metaphor, no matter how applicable it might be to some aspects of our lives, serve as a guiding principle for the totality of our existence.

There is more to being a good Catholic than merely following the formal precepts of stewardship, but many of the parishes aren’t teaching these other aspects to the same degree. It’s as if “stewardship” is the one-size-fits-all method for living an authentically Catholic existence.

I have no objection to occasional reminders to give more. But if the link between faith and practice is as strong and as logically necessary as that presupposed by the theory of stewardship, then instead of incessant exhortations, perhaps the church should focus, as I once told a former pastor, on helping the faithful to center their lives on Christ. If, after all, the faith is there, then the act should follow. If the faith is lacking, then no amount of nagging will achieve the desired outcome.

Focusing on the act to build the faith seems backward, but it’s the central (if unspoken) conceit of stewardship. Although this may be excusable at the periphery, to elevate this concept to a position of centrality in parochial catechesis seems detrimental to the long-term spiritual health of the faithful.

Trends

As I noted in a previous post, I delivered a lecture this afternoon at the National Association for Healthcare Quality’s annual educational conference.  My topic was the application of ethical thinking to the cultivation of a culture of quality in healthcare.

During the presentation — which, I fear, bored most of my audience — I found myself stressing again and again a problem that spreads far beyond ethics and quality: that most of contemporary moral philosophy is dangerously out of sync with an average person’s thinking.

I made the point mostly to emphasize that, unlike most other disciplines, today’s theoretical ethics doesn’t translate well into practical ethics.

But the point runs deeper than that.  Before the “linguistic turn” in philosophy, most philosophical problems could be understood by a reasonably well-educated person.  But to get through cutting-edge philosophy, one needs advanced training in either mathematics (to handle symbolic logic) or linguistics and mathematics (to handle language). 

I saw this during my graduate seminars.  The world of philosophy that I thought I knew from private reading and from undergraduate coursework was almost wholly unlike the complex beast that lurked in the seminar room.  There, just about everything — ethics, aesthetics, metaphysics, epistemology — seemed to reduce to formal logic and linguistics, which in turn presupposed an expert-level grasp of calculus.  Math = logic = basis of philosophy.

There are some reassuring blips on the radar screen that suggest the tide is turning, but the damage done not just to the discipline, but also to a world that (whether it knows it or not) depends on philosophy, is incalculable.  Even with symbolic notation.

In parallel fashion, there is a growing understanding that string theory as the end-all, be-all of theoretical physics may be a fool’s errand.  In the current edition of The Economist, there appears a review of two recent books that attack string theory as being non-scientific and a detriment to the advancement of bleeding-edge physics.

Let us hope the trend continues.

Yet more errata

Time for another update. Woo hoo!

1. I passed the National Healthcare Quality Board’s certification exam on Friday — thus, I am now a “certified professional in healthcare quality,” with the ability to add CPHQ after my name in professional correspondence.

2. Had dinner last week with my mother, grandmother and aunt. Quite pleasant. Granny seems to be settling in well to her new condo.

3. A friend put me into an interesting ethical dilemma last week. She and I had lunch one day, and she casually mentioned what should have been a highly confidential HR matter at work — a matter that could potentially impact me and my boss and my entire division. What to do? I didn’t solicit the information, but once it was in my possession, I incurred a duty to act on it so as to avoid a potential problem down the road. I occasionally hear things that I shouldn’t know, and I just file them away without relaying them further. But this was different. Ultimately, I mentioned it to my boss (who was quite correctly horrified that I knew); she is better positioned than I to minimize the potential of a future problem. That said, I absolutely hated to tell her. Workplace gossip has its place, and sometimes sharing certain types of information can serve a useful social purpose. But some things should never be the subject of gossip.

4. It feels like Washington is slowly merging with Hollywood. Showboating, superficiality and irrationality are the rule of the day, and even traditional sources of wisdom (e.g., National Review) are becoming predictable in tone as well as substance. The increasing polarization of the ideological spectrum is making the political space more shrill and less interesting. While the egos fight, the mild voices of reason (no matter their place on the axes) are being shouted down. What a shame; some of today’s political controversies are not insignificant.

5. A while ago, I stopped my participation in all of the political simulations in which I had played (some, for years). I broke with that trend a few weeks ago to assist in the development of a simulation called the “Commonwealth of Antibia” — a constitutional monarchy based on a completely made-up nation-state, with its own history, laws, and culture. My role was to serve as the first High Lord Treasurer and Antibian Economic Director, building the game’s economy. I have since resigned from Antibia because of irreconcilable differences with one of the founders, but the experience has prompted some reflection:
– The desire for control is often rooted in the very best of intentions. However, no person can control everything, and the less willing people are to give up control for the sake of the greater good, the more likely it is that the greater good will suffer. Sometimes, there must be an environment where no one has control, in order to maximize the odds that the free marketplace of ideas will promote the wisest course of action or development — think, for example, of a river. You can dam it to control it, or you can let it run its course and accommodate whatever waterways should result. Western Michigan University’s first president, Dwight Waldo, understood this. After the first buildings were erected, he decided to wait to lay the sidewalks — he wanted to see where students and faculty actually walked, and then he paved those trails. He did NOT pave what he wanted and expect that people would follow those paths.
– Authority without responsibility is meaningless. Those tasked with action must have the ability to complete that action on their own initiative, without being micromanaged by those who are not part of the process. While it’s certainly possible for authority to be centralized in a small, highly functioning group, authority cannot be decentralized to the point that the process itself confers authoritative legitimacy. PEOPLE, and not processes, hold authority.
– Proceduralism is not a guarantee of fairness. Just because a system has a series of checks and balances doesn’t mean that the right outcome will be inevitable, or even better than the alternative. A system that depends on consensus can be better than a system that relies on individual power — or not. It all comes down to who sits in the majority. And if the majority is a cohesive group that does not welcome outside input, then no amount of procedural recourse will be enough to ensure an outcome that wasn’t preordained by that majority.
– Competition is healthy. Stifling the competitive urge in order to foster a spirit of cooperation will remove a critical aspect of community that provides the more cooperatively minded with a foil and a dynamic that keeps the community moving.
– Complexity can lead to richness, but it can also lead to disorientation. In general, a system should tolerate only as much complexity as is needed to promote a goal; complexity has decreasing marginal utility and can become counterproductive if unchecked by common sense.
– Limiting access to power means that there is less of an incentive for competitive-minded people to participate in a system.

I wish Antibia well, whatever should happen.

Evil

This morning, I picked up a copy of Susan Neiman’s “Evil in Modern Thought: An Alternative History of Philosophy,” and have so far gotten through the introduction. [Amazon]

The book appears to hold promise; it has had some interesting and favorable reviews, and despite some regrettable and unfair swipes at the Bush administration, the early pages are easy to digest. But one thing troubles me, already: Neiman’s work seems to assume something that perhaps cannot be so easily accepted. To wit — that descriptions of the moral content of actions must admit of either/or classification.

I am not criticizing the good professor’s work, after having completed less than 10 percent of the volume. I may soon learn that my ascriptions of her positions are either incorrect or imprecise; such is the joy of philosophical investigation. But the introduction alone presupposes a false dichotomy between “morally good” and “morally evil” that warrants some reflection here.

The challenge is wrapped in Neiman’s firm insistence that the Holocaust is evil, and the actions attendant to the Final Solution are “evil” on an unspeakably horrible level.

I do not deny this, for the record. But ….

Why is it not possible, from the context of pure philosophy, to evaluate much of the Holocaust as an exercise in amorality? Granted that the mass murder of millions is not morally praiseworthy, must it nevertheless follow (as a matter of logical necessity) that it’s unequivocally evil? This dovetails with Hannah Arendt’s thesis that much evil is mere banality — a point that Neiman spends much early ink addressing.

Can we characterize as good or evil any action not intended to be good or evil? If I return a lost wallet because I wish to bring some benefit to its owner, isn’t the virtue of my action greater than if I returned the wallet out of a sense of reluctant duty, or because a police officer saw me pick it up? And likewise, if I engage in any action without any specific intent to bring harm or benefit to another, to what extent can conscious moral responsibility be ascribed to that action?

These questions are relevant to Neiman’s undertaking, methinks. There seems to be a sense shared by most people that actions performed with nefarious intent are evil. But what about actions performed without specific intent? Yes, there is some degree of culpability for those who suspected that an action could be harmful but performed it anyway (e.g., cases of avoidable negligence or moral cowardice) — yet what about when a good end is desired as an outcome to an action perceived by others to be evil? Or, what about playing a fungible and non-essential role in the perpetration of evil? Were the secretaries at Auschwitz moral fiends?

This is tap-dancing around a profound challenge to ethicists. Consider an executioner at Auschwitz. To any properly liberal contemporary thinker, including Neiman, the actions performed by that executioner constitute pure and unmitigated evil. But what if the executioner truly believed that Jews were parasites who deserved to be killed to make room for the Aryan master race? Surely, he is convinced that his actions are, in fact, morally proper — even necessary. Can we nevertheless condemn him as being evil, instead of merely misguided?

This challenge is not intended to be an exercise in relativism. Rather, it’s intended to raise the all-important question of intentionality.

It’s a commonplace of moral philosophy that the moral status of an action depends on several factors — among them, whether the actor intended for an action to result in an outcome that, on balance, he believes to be substantially harmful. Thus, my attempt to render aid to a choking victim is nevertheless laudable even if, being inexperienced in resuscitation, I crack his sternum and inadvertently puncture his lung, leading to his death. Since I intended to do good, I cannot be condemned for my action.

Neiman’s thesis presupposes a universal morality that recognizes that some actions are intrinsically and objectively evil.

I don’t necessarily disagree with this. But I’m not sure it can be assumed as unquestioningly as Neiman seems to do. And I’m not sure that it’s safe to side with Arendt in assuming that banal actions that contribute to evil actions must therefore be evil-in-themselves.

It all comes ’round to the secretaries. Did the women who processed the death statistics at Auschwitz engage in morally blameworthy activity? And, similarly, did the janitors? The cooks? It would be hard to ascribe to these people an intention to do evil, absent individual testimony to that effect. And yet, their actions contributed — however marginally — to the extermination of more than six million Jews.

Moral judgment is a complex endeavor. It seems, so far, that Neiman is willing to accept the conventional-wisdom judgments about political evils (the Holocaust, 9/11, etc.) without seriously considering whether those judgments are fair. Again, I could find that I’m mistaken about this as I dive deeper into her book.

A second point bears discussion. Moral reasoning is not an either/or proposition. There are not just two good/evil states that can be attached to any given action, because some actions lack substantial moral content. They are, in short, amoral.

Those secretaries at Auschwitz — could we not reasonably characterize their efforts as being intrinsically amoral? Surely, there is no lasting and direct moral outcome attendant to typing up a death certificate or requisitioning additional rations for the camp guards. To hold otherwise is to impose an impossible requirement of moral analysis on people.

To be sure, Neiman seems to be raising some fascinating questions. And perhaps I’m nit-picking a bit. Yet Neiman is insistent that the Holocaust represents objective and unquestionable moral evil. I don’t disagree — but the philosopher in me recognizes that this position is the conclusion to an argument. And any argument is subject to criticism. By pretending that the argumentative conclusion is actually a fact, Neiman does a disservice to the cause of moral inquiry that she seems so eager to engage.

I hold out hope that my judgments, thus far, are in error.

Synergy in theoretical and practical ethics

I am fascinated by moral philosophy. This is partly because an introduction to ethical theory provided me with a life-altering “Aha!” moment, and partly because the discipline is one of the purest expressions of pure thought divorced from “linguistic turn” esoterica that contemporary philosophy still permits.

I got a significant way through an M.A. in philosophy, with a concentration in theoretical and practical ethics, before I stopped taking classes two years ago. My reasons for leaving the university were many, but a big part of the decision revolved around doubts about my ability to function well within the graduate program.

I had been assured by one of the faculty that I was in great shape relative to my fellow grad students, but I had my doubts. Classes were filled with people making references to ideas and people I had never before encountered. It seemed that every assigned reading assumed that the reader was already conversant in moral theory. In short: It was frustrating. I’m not dumb, but I felt as if I wasn’t prepared for the experience.

A year and a half has elapsed since I last set foot in a classroom. In that time, I’ve become the secretary of my hospital’s biomedical ethics committee and I’ve been accepted to deliver a presentation on quality and ethics at this fall’s annual educational conference for the National Association for Healthcare Quality.

Meanwhile, I’ve been reading more. Trying to get a better understanding of the “lay of the land” for moral philosophy.

And I’ve come to a conclusion — Dr. Pritchard was right. I am better prepared than I realized. I’ve learned a few things:

1. That many of the arcane tangents raised by other grad students amounted to little more than B.S. — no one wants to publicly admit to not knowing something, so people “invent” things, or proceed from wildly defective memories of something encountered briefly in the past, and no one (including most professors) is willing to challenge them on it.

2. That intellectual rigor in philosophy, as in much of the social sciences, is in sharp decline. As such, expectations of what someone SHOULD know differ greatly from what he is REQUIRED to know. And that’s a shame; no good will come of this trend.

3. That the publish-or-perish syndrome is leading to increasing specialization within the social sciences, such that everyone can be an expert in his very own little corner of the discipline. This permits a greater degree of false authority than is healthy for free and sustained debate. Perhaps I’m biased, being more of a generalist by disposition, but I’m not sure that a thousand little experts are necessarily the right way to go.

These points aside, my foray into practical ethics through the hospital and NAHQ is strengthening my understanding of the challenges of reconciling theoretical and practical ethics in the real world.

I am approaching healthcare quality and bioethics from the perspective of someone trained academically in moral philosophy but who has no formal training in quality or in medicine. Yet most of those “doing” ethics in these fields are quality professionals or licensed clinicians who lack formal training in moral philosophy. The experience of seeing their dabbling in my discipline, mixed with my dabbling in their disciplines, suggests interesting pedagogical opportunities within philosophy.

I’ve already written about my concerns with some of the ethics-and-quality material I’ve seen from the American Society for Quality and other groups. Perhaps I was too uncharitable. After all, moral philosophy is sufficiently important that it ought not be the exclusive domain of academicians; better to have superficial ethics from non-professionals than to hope that the professionals will descend from the tower long enough to instruct the masses.

I think, though, that serious moral philosophers face a dilemma. Much of the work of practical ethics is being advanced by those with little or no formal instruction in moral philosophy. The situation is analogous to what might happen if the nation’s auto mechanics all retreated to vocational skills centers and left the work of car repair to metaphysicians who manage to get by on a wing and a prayer. If the cause of effective and non-superficial moral reasoning is to be advanced among non-philosophers, then the philosophers need to do a better job of getting out of their seminar rooms and offices to help the people. Because the people seem to be drowning, despite their best intentions.

This 18-month odyssey has renewed my desire to complete my M.A. and push for a greater and more nuanced understanding of the basics of moral philosophy by those who struggle with ethical problems in the real world.

It’s good to have a sense of purpose. 🙂

Declining intellectual rigor in the social sciences

The conservative press has been targeting academia with greater diligence in recent years. Part of this is, I think, the “low-hanging fruit” phenomenon — it’s easy to attack the defenseless. Another part is the perception of some that America’s classrooms are becoming places where correct thinking is more important than thinking correctly.

For my part, I’m happy to occasionally dabble in bromides against the academic Left, but I’ve never believed in a vast conspiracy. I have occasionally been worried about professors being too lenient toward their students, but I never really feared that our academic disciplines are sliding into intellectual incoherence.

Until recently, that is.

I’ve written earlier about articles in “Quality Progress” and the “American Political Science Quarterly.” But the trend continues — another recent publication of the American Political Science Association featured an “analysis” of the voting patterns of the Catholic cardinals who selected Joseph Cardinal Ratzinger to be Pope Benedict XVI.

Without question, an analysis of the politics and procedures surrounding the election of a Supreme Pontiff would be a welcome opportunity for discussion and debate. But the attempt at analysis published in the APSA journal was laughable.

My issue wasn’t with the theories presented by the two professors who wrote the article. I have a BA in political science, not a doctorate, and I will not presume to elevate mere disagreement on my part to the level of an indictment of their competence.

No, what was troubling were the assumptions to which the authors quite freely admitted. They clearly predicated their analysis on the belief that regional blocs among the cardinals would be a major factor in the decision-making process; the theory rested on an assumption that the cardinals in each bloc wanted to see one of their own elevated to the Throne of St. Peter — which might be an interesting approach, had the authors bothered to learn anything about Cardinal Ratzinger’s tenure as Prefect of the Congregation for the Doctrine of the Faith.

It is a methodological error of the first rank to assume whatever you wish in order to make your analysis work, even if you have to assume facts that are (a) not in evidence or (b) too inconvenient to research before publishing. The authors made so many assumptions about Church politics that even an amateur Vatican watcher must cringe to see it. Analogously, the work of these professors would be no different from the work of Chinese political scientists analyzing FDR’s four presidential wins without knowing anything about the Great Depression or the Progressive movement. When you supply a hypothesis and assume it to be fact, any theory that results is simply meaningless. Hell, if I can assume what I wish, then with little effort I can construct a theory of physics that will permit the transmutation of lead into gold with nothing more than a toothpick and a piece of cork. But such assumptions don’t necessarily compel reality to fall into strict conformance.

So, OK. APSA has once again published something that someone with even a limited degree of specialization in the field of study (i.e., the Vatican) could spot as flawed. Does this mean anything?

I think it does. I think there is something significant that ought to be said about junior academics operating on a “publish or perish” tenure track, or senior academics jockeying for greater prestige. And that something is: Quantity is not quality, and any theory is not as good as the right theory.

Political scientists (and philosophers, for that matter) do not serve their disciplines well when they toss out theories uninformed by facts not directly related to the theory. In the case of the Vatican analysis, it makes a very big difference, when accounting for the balloting results for Ratzinger, that in the 1980s the Iron Cardinal had almost by himself destroyed the philosophical bedrock upon which stood so-called liberation theology. It makes a difference that the other leading contenders for the papacy shared liturgical beliefs that stood at odds with those of many African bishops.

The social sciences are turning into silos, generating theories and texts that make sense from within (provided you don’t ask too many questions about the theories), but which tend to be increasingly uninformed by facts from without. The inevitable result is a growing inability to differentiate between wheat and chaff — and given the climate of academic politics, this may well mean that nonsense will be given carte blanche on our nation’s campuses. To the detriment of future students.

The problem on our college campuses isn’t that the faculty is overwhelmingly Leftist. The problem is that the intellectual rigor of the disciplines — especially in the social sciences — has “gone wobbly,” and there has not yet been a correction. As long as the liberal arts continue to operate on assumption and posturing, students and faculty alike will continue to play the game by the rules in effect at the roll of the dice. We should not be surprised by an all-encompassing relativism that motivates the academic Left, since relativism is the one virtue that protects the status quo from the sorely needed correction. Quite the vicious circle, eh?

Alas, too many opportunists on the Right (save, perhaps, the sainted Harvey Mansfield of Harvard) don’t see the forest for the trees. The would-be slayers of Campus Liberals are focusing on the effects of intellectual decline, and not the causes, so their efforts are unlikely to amount to much.

Long story short … I guess I’m going to read a lot of things in social-science journals that will make me want to cry.

Better buy stock in Kleenex.