Tuesday 20 February 2018

Why Moral and Philosophical Disagreements Are Especially Fertile Grounds for Rationalization

Today's post is by Jonathan Ellis, Associate Professor of Philosophy and Director of the Center for Public Philosophy at the University of California, Santa Cruz, and Eric Schwitzgebel, Professor of Philosophy at the University of California, Riverside. This is the second in a two-part contribution on their paper "Rationalization in Moral and Philosophical Thought" in Moral Inferences, eds. J. F. Bonnefon and B. Trémolière (Psychology Press, 2017) (part one can be found here).

Last week we argued that your intelligence, vigilance, and academic expertise very likely don't do much to protect you from the normal human tendency towards rationalization – that is, from the tendency to engage in biased patterns of reasoning aimed at justifying conclusions to which you are attracted for selfish or other epistemically irrelevant reasons – and that, in fact, you may be more susceptible to rationalization than the rest of the population. This week we'll argue that moral and philosophical topics are especially fertile grounds for rationalization.

Here’s one way of thinking about it: Rationalization, like crime, requires a motive and an opportunity. Ethics and philosophy provide plenty of both.

Regarding motive: Not everyone cares about every moral and philosophical issue, of course. But we all have some moral and philosophical issues that are near to our hearts – for reasons of cultural or religious identity, or personal self-conception, or for self-serving reasons, or because it's comfortable, exciting, or otherwise appealing to see the world in a certain way.

On day one of their philosophy classes, students are often already attracted to certain types of views and repulsed by others. They like the traditional and conservative, or they prefer the rebellious and exploratory; they like confirmations of certainty and order, or they prefer the chaotic and skeptical; they like moderation and common sense, or they prefer the excitement of the radical and unintuitive. Some positions fit with their pre-existing cultural and political identities better than others. Some positions are favored by their teachers and elders – and that’s attractive to some, and provokes rebellious contrarianism in others. Some moral conclusions may be attractively convenient, while others might require unpleasant contrition or behavior change.

The motive is there. So is the opportunity. Philosophical and moral questions rarely admit of straightforward proof or refutation, or a clear standard of correctness. Instead, they open into a complexity of considerations, which themselves do not admit of straightforward proof and which offer many loci for rationalization.

These loci are so plentiful and diverse! Moral and philosophical arguments, for instance, often turn crucially on a “sense of plausibility” (Kornblith, 1999); or on one’s judgment of the force of a particular reason, or the significance of a consideration. Methodological judgments are likewise fundamental in philosophical and moral thinking: What argumentative tacks should you first explore? How much critical attention should you pay to your pre-theoretic beliefs, and their sources, and which ones, in which respects? How much should you trust your intuitive judgments versus more explicitly reasoned responses? Which other philosophers, and which scientists (if any), should you regard as authorities whose judgments carry weight with you, and on which topics, and how much?

These questions are usually answered only implicitly, revealed in your choices about what to believe and what to doubt, what to read, what to take seriously and what to set aside. Even where they are answered explicitly, there is no clear set of criteria by which to answer them definitively. And so, if people's preferences can influence their perceptual judgments (possibly including judgments of size, color, and distance: Balcetis and Dunning 2006, 2007, 2010), what is remembered (Kunda 1990; Mele 2001), what hypotheses are envisioned (Trope and Liberman 1997), and what one attends to and for how long (Lord et al. 1979; Nickerson 1998), then it is no leap to suppose that they can also influence the myriad implicit judgments, intuitions, and choices involved in moral and philosophical reasoning.

Furthermore, patterns of bias can compound across several questions: with so many loci for bias to enter, a person who is only slightly biased at each of a variety of junctures in a line of reasoning can ultimately reach a very different conclusion than someone who was not biased in the same way. Rationalization can operate by way of a series or network of “micro-instances” of motivated reasoning that together have a major amplificatory effect (synchronically, diachronically, or both), or by influencing you mightily at a crucial step (Ellis, manuscript).

We believe that these considerations, taken together with the considerations we advanced last week about the likely inability of intelligence, vigilance, and expertise to effectively protect us against rationalization, support the following conclusion: Few if any of us should confidently maintain that our moral and philosophical reasoning is not substantially tainted by significant, epistemically troubling degrees of rationalization. This is of course one possible explanation of the seeming intractability of philosophical disagreement.

Or perhaps we the authors of the post are the ones rationalizing; perhaps we are, for some reason, drawn toward a certain type of pessimism about the rationality of philosophers, and we have sought and evaluated evidence and arguments toward this conclusion in a badly biased manner? Um…. No way. We have reviewed our reasoning and are sure that we were not affected by our preferences....

3 comments:

  1. Postmodern thought is always trying, now, to restore the meaning of words to their origins, like 'rationalization' is from a ration of thought...

    That mentations, emotions, sensations, and instincts could be seen as means-meaning rationed for an evolving mind to work with...

  2. To the extent it is knowable, most humans tend to significantly distort or completely deny uncomfortable truths or facts. To the extent they do accept the uncomfortable, they apply heavily biased reasoning or logic to what they do perceive, i.e., people rationalize. If human history is relevant, that kind of objective reality- and reason-detached mindset and thinking has led both to modern civilization and to all-out wars, severe environmental damage, racial and religious conflict, and so on. Given modern technology, the reality of that biological mindset suggests we're going to end up in an all-out nuclear war and/or continue to cause environmental degradation. Therefore, we're going to destroy modern civilization, or maybe even self-annihilate and go extinct.

    Arguably, the seeds of self-destruction include our biological need to rationalize. The best way to rationalize is to make fact and logic as personal and thus subjective as possible.

    If that assessment of the human condition is mostly correct, then it would seem that adopting a mindset whose highest, core moral values are (i) fidelity to seeing less biased fact/reality, and (ii) fidelity to applying less biased reason/logic to what is perceived, ought to be helpful. That mindset should at least somewhat narrow the vast differences in perception and belief that people arrive at in doing politics, which is the source of long-term human success or failure. Those differences are usually mutually incompatible. It's therefore reasonable to believe that on an issue-by-issue basis, one side has to be closer to objective truth than the other, where 'objective truth' is viewed through the lens of what ought to foster long-term human well-being and survival.

    So, one question is this: Are the mental processes that lead to rationalization and the attendant reality- and reason-detachment so deeply rooted that no mindset or morals that try to reduce that detachment will make any difference? Is our innate rationalizing biology so deeply rooted that what we have now is about the best it can get? If so, we might as well just sit back, enjoy, do nothing and wait for the rapture in the form of nuclear Armageddon and/or environmental disaster (or maybe a nasty plague or two).

    Is there an expert in the house? The issue of rationalization reduction strikes me as a rather interesting question, since the stakes are potentially civilization and/or human survival. But being an amateur, maybe I'm naive at best to ask about things like this.

  3. Can the question of rationalization be part of a process of forces-movement for Observation...
    ...to challenge presumptive acknowledgements about presence of one's being Here...

    That movement-change becomes singularly compelling Value...

