All opinion is partial. It always treats as the whole truth an aspect that merely seems true. But other aspects are always possible. A particular opinion is prone to wander away from its seeming-true to reveal another face of reality. Opinion harbors a residue — a marker, usually some nagging feeling like doubt or hesitancy — of what has been abstracted away in order to make opinion seem true. In belief there is focal confidence and excluded, affective doubt. In thinking, both confidence and doubt are included and made focal. In any case, opinion is a dual phenomenon — it is always only ambiguously true. This duality is often masked from the one who holds the opinion. It takes a special effort to “see” what is outside its frame.
Socrates often refers to those moments in experience where the repressed other of opinion becomes unveiled and duality revealed. The paradoxes of optical illusions provide good examples. But even our everyday experiences harbor dualities. In one passage in Republic, Book VII (523c-e), Socrates turns to an experiment that I like to call the “three-finger exercise.” Look at three fingers of your hand: specifically the pinky, ring-finger and middle-finger. Compared to the other fingers the pinky is small and the middle-finger large. So far this seems unambiguous. But now look at the ring-finger. Is it small or large? It is both small and large, depending on which of the other fingers serves as the ground of comparison. “Large” and “small” are definite features of appearance, and yet they depend on their context, on what is proximate to them. Another favorite example of Plato’s is the one and the two: each of a pair is a one, but within the pair it is a half. Each one only brings oneness to the table, yet when combined with another one, there are emergent properties of two-ness and half-ness.
These examples may seem uninteresting, but they are special cases manifesting the ambiguity present in all opining. Usually, what appears true *seems* to be a property of the focal thing, but the property shifts when the thing is placed in different contexts. These thought experiments of Socrates demonstrate that something else is going on, that seeming depends on context. The examples Socrates gives are trivial, no doubt. However, if we turn our attention to debatable (and debated) social goods like justice and goodness, isn’t it likely that something similar is going on — that what *seems just* from our perspective, may *seem unjust* to another and vice-versa? And isn’t it also clear that the criterion of “seeming-just” is not sufficient to adjudicate between these competing visions? Could it be that we are just adroit at repressing aspects that disturb our comfortable self-assessment?
Socrates calls moments of paradoxical appearance (as in the three-finger exercise) parakletikai, “provocatives,” in that they provoke thought to come to one’s aid:
The experiences that do not provoke thought are those that do not at the same time issue in a contradictory perception. Those that do have that effect I set down as provocatives (parakletikai), when the perception no more manifests one thing than its contrary, alike whether its impact comes from nearby or afar. (Rep. 523b-c)
Our usual dealings with the world hide duality behind a veil of taken-for-granted belief. Aporia and paradox are useful for bringing thinking to bear on an issue:
“Yes, indeed,” he said, “these communications to the soul are strange and invite reconsideration.” “Naturally, then,” said I, “it is in such cases as these that the soul first summons (parakalousa) to its aid the calculating reason and tries to consider whether each of the things reported to it is one or two.” (Rep. 524b)
Thinking is the attempted adjudication between competing visions of the true. Thinking begins by summoning into focal presence the otherwise tacit aspects of opinion. The duality that haunts opinion and is avoided in belief (pistis) becomes thematic in thinking (dianoia). Perhaps Socrates has so much interest in sophists because they are expert in exploring this duality and relativity present in all opinion. (The Euthydemus is a particularly good dialogue to take as an example.) Skilled sophists are able to manipulate the seeming-true of opinion by creating the contextual conditions for their preferred seeming-true to gain force. Manufacturing opinion is particularly easy when the job is simply to reinforce the seeming-true of the vulgar, since only a patient exercise of difficult thinking is sufficient to dislodge it. The mob doesn’t think. If it did, it wouldn’t be a mob.
“Partiality” is when we privilege one aspect of this duality to the exclusion of the other. For instance, in giving reasons for a favored political policy, a partisan concentrates on the benefits of the policy to the exclusion of the costs. (Just listen to any partisan debate: one side will speak only of benefit, the other only of cost.) In evaluating our own virtue, our seeming-virtuous will be quite favorable if we contrast ourselves with the morally challenged. This is partiality. We tend to pay excessive attention to villainy, attention mirrored by the scandalizing obsessions of the press, in order to seem good to ourselves and others. Albert Camus wrote that “Each of us, in order to justify himself, relies on the other’s crimes.” There is also a bias called a “halo effect” in which a single fact or characteristic of a person or circumstance colors one’s opinion about the matter as a whole. Politicians who “look the part” have a leg up on those who don’t, even if the latter have superior political acumen. (We might also call this the Warren G. Harding effect.) Clearly, there are enormous political and moral implications at work here.
We can easily see the biases of others; we are much more blind to our own. This asymmetry creates the common-sense illusion that seeming-true is sufficient evidence for being-true. Thinking requires that we confront this bias, not just in others, but most importantly in ourselves. In order to have any possibility of overcoming the deficiencies of the seeming-true, we must account for our own self-deceptive tendencies. Transcendence of one’s opinion in favor of knowledge requires knowledge of our own biases. There is no knowledge of moral or political matters that can free itself from this demand for self-knowledge. What Plato claims we need is a “conversion” (metastrophe) away from accepting seeming-true as true, so that we can begin the slow process of liberating ourselves from our bondage to mere seeming. As Bernard Lonergan puts it, “Objectivity is the fruit of an authentic subjectivity” — i.e. a subjectivity that takes ownership of its own bias. We have to understand the chains that hold us fast before we can ever escape the prison of partiality.
FYI — One book that I have found useful for understanding the various forms of bias that plague our thinking is Rolf Dobelli’s The Art of Thinking Clearly. Most of the biases have their roots in the partiality of opinion as I have articulated above.
9 thoughts on “The duality of opinion”
“There is also a bias called a “halo effect” in which a single fact or characteristic of a person or circumstance colors one’s opinion about the matter as a whole. Politicians who “look the part” have a leg up on those who don’t, even if the latter have superior political acumen. (We might also call this the Warren G. Harding effect.)”
Woody, I enjoyed your post and thanks for continuing to update your blog. With respect to the “halo effect”, it seems to me that it depends on the environment whether this short decision rule is a “bias” or whether it is an adaptive response by organisms who are constrained by time pressure and limited by finite information storage and processing. It may be, given specific environmental parameters, that “looking the part” is statistically correlated with a distal criterion such as “superior political acumen”. Given the structure of this information environment, it wouldn’t be an occurrence of bias or irrationality to depend on this cue to infer the political acumen of a politician; it would be heuristically-based rationality.
Also, on a slightly unrelated note, at some point could you do a post that discusses Plato’s concept of “polypragmosyne” within the context of the critique of “specialization” by thinkers such as Ellul and Ortega y Gasset? A discussion of how these concepts are different (and/or similar) would, I think, be very helpful.
Thanks for your comment. You are right in the sense that the seeming-true of opinion is a rough-and-ready substitute for knowledge when knowledge is lacking. Most seeming-true probably *is* true. As a first hack, in the absence of any better evidence, it makes sense to trust in our opinions. Where the bias comes in is when we take this seeming-true not as a hypothesis we are testing, alert for counter-evidence, but as a sufficient criterion for the real.
All things being equal, the one who comes well-dressed to the interview gets hired, and I am sure there is a statistical correlation between well-dressed interviewees and good employees. But all things are not equal, and dressing well is a far from infallible sign of professional competence. Our natural preference for the well-dressed may blind an employer to other important aspects of a prospect’s resume. That is the halo effect.
Michael Lewis, in the book Moneyball, talks about how baseball scouts have traditionally been dupes for prospects who “looked the part” while overlooking statistically superior prospects whose body-types don’t fit the mold of what a baseball player “ought” to look like. Body-type surely correlates with athletic prowess. The scout’s bias is in letting this heuristic occlude what ought to be a better indicator, i.e. proven success actually playing baseball.
I’ll see about writing something up on the issue of Plato’s condemnation of polypragmosyne and modern thinkers’ concerns with the evils of over-specialization. I don’t know when yet…
Thanks for the response. That makes sense. I agree that relying on the halo effect (from one point of view) can be considered a bias in the sense that the organism depends on a single cue as a strong indicator of a criterion while ignoring the rest. My point is that whether this is negative depends on a set of environmental parameters. Is there time pressure? Is there information load? Does the cue in fact strongly correlate with the criterion in the specific environment? If one answers yes to all these questions, then relying on one cue (or sign) and ignoring the rest may prove ecologically rational. In other words, to call reliance on simple decision rules instances of “bias” because under some environmental conditions they ignore better indicators is to neglect to fully account for the cognitive and ecological limitations of the organism.
I appreciate the discussion. 🙂
Even in quick decision-making, using a rough-and-ready heuristic rationally appropriate to the circumstances, a halo effect is still a bias, in the sense that the errors stemming from that mode of decision-making will be systematic — they will conform to a type. Even if there is a correlation with success, that very success will tend to blind us to the capacity for error in judging the whole based on a part. In fact, every heuristic is probably biased toward a certain type of success and failure, and a large part of learning from experience is in figuring out ways to achieve the systematic success while curbing the systematic errors. Every opinion, every habit of decision-making, is biased in a particular way. If it weren’t, it would be knowledge.
Calling a heuristic “biased” doesn’t depend on whether it is rationally chosen or not. Adopting a rule-of-thumb under time pressure, for instance, is often the best we can do, but it is under such circumstances that the bias in the heuristic will show itself. We judge better when we can apply critical pressure to our own biases — to the extent we are aware of them. Decision-making under constraints of time and information is usually worse than when such constraints are lessened. I know from my Navy days that a distracted pilot does much worse than one who is not. Public service announcement: don’t text and drive!
Right, but systematic inaccuracy in subjective judgment can only be determined by comparing the judgment distributions to the actual statistical regularities in the environment. For example, using the “trust the expert” heuristic when making healthcare choices may produce systematic judgment errors because within the specific environment the cue-criterion correlation is low. However, in most other environments the correlation may be high and the habit (bias) of relying on this heuristic is ecologically rational. The more one learns the correlations between cues and criteria in a specific environment, the more judgment accuracy will tend to increase while the need for excess cognitive expenditure will tend to decrease. Being blinded to the capacity for error is not necessarily a bad thing if the error is small (statistically speaking) and when reliance on the heuristic provides the benefit of decreased cognitive load. I think heuristic processing is our default mode of judgment and decision making, not something we adopt because it is the best we have given the circumstances. Of course, there are situations where a more controlled, maximizing type of processing is required, but this also depends on the environment and whether one cue would predict more accurately with less effort. I think we are on the same page. I think it is the pejorative connotation of bias that I am having an issue with. Hexis is a much better term IMHO!
I’m not sure I disagree with what you are saying about the value of heuristics. When heuristics are highly correlated to practical success, they are equivalent to what Plato would term “true opinion.” In the Meno, Socrates says that for practical purposes true belief is as good as knowledge. Where it goes wrong is that it tends to “wander” from such correlation, due to changes of circumstances.
The economist Arnold Kling has coined the term “intention heuristic” to describe when the morality/desirability of a given policy is judged based on its intentions and not on its unintended consequences. All things being equal, we should indeed side with those who have better intentions. But all kinds of political kaka can be enfolded within a bill that appeals to a high-sounding intention. Machiavellian political actors know this and exploit this bias, with terrible results. But even sincerely well-intentioned partisans often avoid facing unintended consequences of high-minded action, e.g. foreign aid that augments the power of bad autocrats. The intention heuristic is useful in some cases and horribly biased in others. This bias is a latent feature of it at all times; circumstances will decide which is which, not the heuristic itself.
I agree with you that we should not lose sight of the practical benefits of heuristics. In my own thinking about opinion, I too want to emphasize the positive role opinion serves, rather than just contrast it pejoratively with knowledge. (This is why I describe opinion as manifesting a duality.) Our practical life is necessarily dominated by opinion; it is our default, as you say. But it is vitally important that such enthusiasm be moderated by a prudent understanding of its limits. That is the movement from pistis to dianoia on the Divided Line.
Kling essay mentioning the intention heuristic: http://american.com/archive/2014/october/more-profits-fewer-nonprofits