Recently, I watched this TED-Ed presentation by Trevor Maber. It’s a fascinating talk about how we make assumptions about the behavior of others that aren’t grounded in actual evidence—and how these assumptions can lead us to mistreat others. The implication, of course, is that we should carefully weigh the evidence before acting. We should be careful to ask, “Am I misjudging or mis-evaluating this person, based on little or no evidence?”
The scenario he describes is very much like anecdotes of self-betrayal, as described by agency theory. In agency theory, we have a moral sense of how we ought to treat others, and we can respond to or ignore that moral sense. When we ignore it, we come to see the world in a way that justifies or rationalizes our choices. So an agency theorist would describe the person in the video as having a moral sense of how he ought to treat the other driver. Ignoring that moral sense requires him to rehearse to himself all sorts of horrible narratives about the other driver, in order to justify the intense anger and rage he feels. Hearing the other driver’s story reawakens him to a sense of the other driver’s humanity, calming his heart. At that point, he is once again complying with his moral sense.
The difference between the two narratives is at first glance minimal, but at second glance very stark:
In Maber’s narrative, where things went wrong is that the individual held beliefs—and then acted on those beliefs—without evidence that they were true. The problem is that the individual was being a bad scientist. He made assumptions about the world unwarranted by the evidence available. The solution, of course, is tentative belief + more knowledge. Whatever beliefs he holds, he should hold only tentatively, and he should seek more knowledge before acting. Like a responsible scientist.
In agency theory, where things went wrong is that the individual violated his moral sense, which is wrong regardless of whether his beliefs about the other person were true or false. That is, the right thing to do didn’t depend on a correct knowledge of the other person’s motives. The problem wasn’t that he was being a bad scientist, but that he was acting against his moral conscience. It’s a moral concern, not an epistemological one. He needs to change the state of his heart, rather than engage in more epistemologically sound practice.
I think this is fascinating—the difference between these two perspectives seems so small at first, but only because it is subtle. It’s all about the lens through which we see human beings: are people, at their core, scientists, or moral beings?