A Response to the Bayesian Analysis of Book of Mormon Historicity

Recently, an article by Bruce Dale and Brian Dale was published in The Interpreter that uses Bayesian analysis to explore correspondences between the Book of Mormon and Michael Coe’s book on ancient Mesoamerica, and the likelihood that Joseph Smith guessed those correspondences. The article has generated a lot of attention; the LDS Living Facebook post featuring it had been shared (last I looked) over 450 times. I think the idea behind the article is neat and worth exploring, but now that I’ve delved into the study, I have grave concerns about it.

I want to preface this by saying that I believe that the Book of Mormon is a true historical record, and that of all the potential locations, it likely took place in Central America. So I agree with the conclusions of the authors. But I do not agree with how they arrived at these conclusions. In this article, I explore some reasons why.

A Cool Idea…

The central idea of this study is fascinating: we might credit someone with a lucky guess, or even several lucky guesses, but the credibility of our “smart guesser” hypothesis becomes more and more strained with each new accurate guess. The Bayesian approach mathematizes this reasoning: we start with the hypothesis that Joseph Smith was a smart guesser, and with each accurate guess, we update our hypothesis. As the number of accurate guesses grows, the statistics look more and more dimly on the original hypothesis, to the point that we must jettison it.

One assumption that critics make is that a single wrong guess invalidates the project as a whole. That is, Joseph Smith could have gotten 100 things right and 1 thing wrong, and that 1 wrong thing casts the whole Book of Mormon as a fraud. The authors of this article (rightly) dismiss this assumption. Just because there’s something in the text that doesn’t match up with what current researchers believe about ancient America, doesn’t mean that the entire text must be thrown out — and much less so if the weight of the evidence actually supports the text.

In other words, even if we conclude that there are errors in the text, we must still grapple with the many things that were correct. If those many right guesses are astronomically improbable, it could be more likely than not — regarding those things we think are mistaken — that Joseph Smith was truthful but made a mistake in the translation, or that the Book of Mormon narrators (primarily Mormon and Moroni) made an error, or that we are in error about what is true and false about ancient America.

… but it has problems.

The authors rightly reject the idea of throwing out the Book of Mormon text based on one piece of (seemingly) contrary evidence. They explain: “These practices of cherry-picking or overweighting/underweighting evidence cannot be allowed in scientific enquiry.” They are correct when they say, “No piece of evidence has infinite weight.” But the authors seem overconfident that theirs is a legitimate scientific inquiry, immune (by virtue of being made numerical) to overweighting/underweighting. By virtue of being “disciplined” and “formal” (their words), they seem to see their analysis as clean of personal bias and methodological flaw. This is far from the case.

Likelihoods are not assigned in any empirical way

The authors state several times that Bayesian analysis is used in the field of medicine, in fraud detection, and in other areas, presumably to assure readers that this is an established statistical method. But that doesn’t matter: I absolutely grant that it is a valid statistical method, and I can still argue that it is the wrong one for this project, for a host of reasons. The primary reason is that we have no clear way of estimating the probability of an accurate guess. Consider an example the authors use from the field of medicine:

For example, if a disease is somewhat rare, then a randomly selected individual might have “skeptical prior odds” of 1:1000 against them having the disease. If the test has a likelihood ratio of 100 (a good medical test for screening), then our posterior odds following a positive test for the disease would be 1:1000 x 100 = 1:10 against the person actually having the disease.

When they talk about a medical test that has a “likelihood ratio of 100”, that likelihood ratio isn’t pulled out of nowhere. It is estimated rigorously through thousands of applications of the test, by tallying the verified false positives, verified false negatives, and so forth, and then calculating the accuracy of the test. This likelihood ratio is an empirical value — or, at least, a value calculated from a stack of empirical evidence. Those likelihoods are not guesses made by the medical researcher based on his intuitions about the accuracy of the test.
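To make the arithmetic in the quoted medical example concrete, here is a minimal sketch of Bayes’ rule in odds form (the function name is my own; the numbers are the ones from the quoted passage):

```python
def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio (Bayes' rule in odds form)."""
    return prior_odds * likelihood_ratio

# Prior odds of 1:1000 against having the disease, written as a plain ratio.
prior = 1 / 1000

# A screening test with an (empirically estimated) likelihood ratio of 100.
posterior = update_odds(prior, 100)

print(posterior)  # 0.1, i.e. odds of 1:10 against actually having the disease
```

The key point is that the likelihood ratio of 100 is itself measured, not intuited; swap in a guessed ratio, and the posterior odds simply inherit the guess.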

The authors state that their Bayesian approach overcomes the problems of the method of parallels (e.g., parallelomania) in part because, “by using a numerical Bayes factor, the person performing the analysis explicitly estimates the strength of any given piece of evidence.” On what basis does the person perform this estimate? Personal intuition? They choose 50, 10, and 2 (and conversely, 0.50, 0.10, and 0.02) as the likelihood ratios they assign each piece of evidence. Where did they get these numbers? Are they standard when using Bayesian methods to estimate likelihoods that cannot be measured directly?

To their credit, the authors craft a sort of scale to use here: Specific correspondences are given a likelihood of 0.50, specific and detailed correspondences are given a likelihood of 0.10, and specific, detailed and unusual correspondences are given a likelihood of 0.02. Using this metric, they can give some rhyme and reason to their probability estimates. But my considered opinion is that this merely gives a veneer of rigor to what is ultimately their own guessing game: guessing how likely it is that someone guessed. And quite frankly, I think they missed the mark.
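Mechanically, the authors’ scoring scheme amounts to multiplying one Bayes factor per correspondence. A minimal sketch of that mechanism (the tier values 0.50, 0.10, and 0.02 are theirs; the function name and evidence list are hypothetical illustrations):

```python
# The paper's three-tier scale of likelihood ratios under the
# "Joseph Smith was a smart guesser" hypothesis:
#   specific = 0.50; specific + detailed = 0.10;
#   specific + detailed + unusual = 0.02.
TIERS = {"specific": 0.50, "detailed": 0.10, "unusual": 0.02}

def combined_bayes_factor(evidence_tiers):
    """Multiply per-item likelihood ratios, assuming (as the paper does) independence."""
    factor = 1.0
    for tier in evidence_tiers:
        factor *= TIERS[tier]
    return factor

# Three hypothetical correspondences, one from each tier:
print(combined_bayes_factor(["specific", "detailed", "unusual"]))  # roughly 0.001
```

Every number fed into this product is an intuition, so the product, however small it gets, is only as empirical as its inputs.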

And I’d venture a guess that most use cases of Bayesian analysis don’t require researchers to “guess,” based on their own intuitions, the likelihoods included in the analysis. (Though I could be wrong; I’ve been wrong before. Edit: And I’ve been told that I am, apparently, wrong on that — Bayesian analysis does often involve arbitrary assignments of probability. This doesn’t change my critique here, however.) My suspicion is that — to whatever extent these likelihood assignments actually matter to the analysis (spoiler: they don’t) — the Bayesian analysis returns not the likelihood that Joseph Smith guessed these things, but the extent to which the researchers believe Joseph Smith guessed these things.

The authors make tons of unwarranted and sometimes untrue assumptions

Consider the first example in the paper of something they rate as extremely unlikely as a guess:

One example of Bayesian “strong” evidence is the remarkably detailed description of a volcanic eruption and associated earthquakes given in 3 Nephi 8. Mesoamerica is earthquake and volcano country, but upstate New York, where the Book of Mormon came forth, is not. If the Book of Mormon is fictional, how could the writer of the Book of Mormon correctly describe a volcanic eruption and earthquakes from the viewpoint of the person experiencing the event? We rate the evidentiary value of that correspondence as 0.02. We assume a piece of evidence is “unusual” if it gives facts that very probably were not known to the writer, someone living in upstate New York in the early 19th century, when virtually nothing of ancient Mesoamerica was known.

I simply have no reason to believe that someone living in New York had never heard of a volcano, nor of the earthquakes that accompany one, nor of the ash clouds that darken the sun when a volcano erupts. In fact, the explosion of Mt. Tambora in 1815 was a defining event in the history of Joseph Smith’s life: the resulting changes in weather and decreased sunlight caused crop failures in New England, which are what forced the Smith family to move to Palmyra. And I’ve seen no evidence that people at the time were unaware of what had happened and why. Were the authors entirely unaware of this? What else are they unaware of? It doesn’t take a lot of imagination to see how a young boy of that time could have extrapolated what the event looked like through the eyes of those who lived nearby, and it doesn’t take an expert (or someone with personal experience) to guess correctly.

(Point of fact: Some argue that this is evidence of Joseph Smith guessing wrongly — he describes a dark so dark that people could not light a candle for days, which some argue is not actually how ash clouds work; and if ash clouds did work that way, there would be no survivors to report the tale.) (Another point of fact: The Book of Mormon never states that there was a volcano during this event; that itself is a guess, which may or may not be right. We assume it is true, but we cannot be certain of it.)

In other words, the authors are merely guessing at these probabilities, and they are just as liable to get their guesses wrong as anyone else. In the case of the volcano, I believe they overweighted this evidence by a long shot. Would it be fair to say, as they did, “This practice of overweighting evidence cannot be allowed in scientific enquiry”? If this is how sloppy the authors are with the very first correspondence they chose to highlight, it casts a shadow on the rest of them. The authors are clear that these probability estimates are at times subjective and based on the intuitions and judgments of the researcher, and yet the tone of their paper rides on the credibility that a mathematical analysis offers them. They are using quantitative analysis to evaluate qualitative judgments. This can be done well. I am not sure it was done well here.

Some identified correspondences are particularly weak and flimsy

Again, I want to reiterate here that I believe that the Book of Mormon is historical and true. I just don’t see evidence presented in the paper that there is only a 1/10 chance that Joseph Smith would have included examples of “repopulating old or abandoned cities” if he were merely guessing, or why someone living in New York could not have dreamed up the idea that a community might return to a city they’d fled some time before (e.g., the people of Zeniff). Joseph Smith wasn’t asserting this as some characteristic of ancient American society. He was simply telling a story of a particular people.

The counterargument might be, “Well, sure — but we aren’t looking at any one piece of evidence for or against, but the weight of all the evidence stacked up.” Well, sure! But the weight of all the evidence is a function of the weights assigned to the individual pieces of evidence. And while I have not gone through the 100+ correspondences in the paper individually (it turns out I don’t have to; more on that later), the few I have examined seem like they were weighted based on the biases and intuitions of the researchers, intuitions that often differ from my own. Maybe I am more cautious, but I just can’t justify assigning a number — no matter how arbitrary or considered — to the likelihood that Joseph Smith would have included a group of people repopulating an abandoned city if he were merely inventing this story.

In addition, some of their correspondences are frankly strained. Consider, for example:

“Clear enough,” the authors say. I’ve never once read the text that way, nor do I see it as plainly obvious that the text is referring to homosexuality. And even if I did, I have no reason or basis to assign this an evidentiary value of 0.5 as opposed to any other number. They recognize that this is a flimsier correspondence, so they gave it their lowest evidentiary value. But even their lowest evidentiary value still doubles the overall case against the hypothesis of “smart guesser.” On another point, the certainty with which the authors conclude that this text is talking about homosexuality betrays their own analytical biases and unwarranted confidence in their own analysis and interpretation.

The authors aren’t measuring what they think they are…

But it turns out that it doesn’t matter which of the three probability estimates they assigned to this (or any) of their evidences. The authors themselves admit that even if they gave every piece of evidence in favor of the text the weakest likelihood ratio (and every piece of evidence against the text the strongest), the conclusions would still have come out in their favor (by trillions). So all this hand-wringing about ensuring that some evidences are weighted more strongly than others based on their relative evidentiary value is pointless; they could all be weighted the same, and the conclusions would be the same. The conclusions of this analysis turn far more on the difference in the number of evidences for and against than on the relative strengths of those evidences. And there were simply more correspondences than contradictions included in the analysis.
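The arithmetic behind this point is easy to check. With hypothetical counts standing in for the paper’s actual tallies (the numbers below are illustrative, not taken from the paper), even the weakest weighting of every item produces an astronomically lopsided result:

```python
# Illustrative counts (not the paper's exact tallies): far more
# correspondences than contradictions made it into the analysis.
correspondences = 130   # each given the weakest favorable ratio, 0.50
contradictions = 15     # each given the strongest unfavorable ratio, 2.0

# Combined Bayes factor against the "smart guesser" hypothesis:
factor = (0.50 ** correspondences) * (2.0 ** contradictions)

print(factor)  # about 2.4e-35: the count gap alone drives the conclusion
```

However the individual weights are shuffled within the allowed range, a 130-to-15 count gap guarantees an extreme result; the weights are almost decorative.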

Why? The authors explain that in order for something to be counted as evidence against the Book of Mormon, it has to be mentioned in both texts. So if, for example, Coe’s text discusses elements of religious practice that bear no resemblance to what we see in the Book of Mormon, the analysis is silent — because we cannot say that those elements didn’t exist in the Book of Mormon. Only direct, explicit contradictions are counted as evidence against. And my intuition tells me that this is automatically going to stack the “in favor” pile higher than the “against” pile, especially if one text goes into far more detail about political, cultural, and social practices than the other.

This is because the standard for explicit contradiction is — whether we want it to be or not — higher than the standard for correspondences. The authors may disagree with me on this, but consider: suppose person A and person B each tell a story about an evening they spent together. Person A mentions sitting on the grass and watching the blue sky. Person B mentions sitting on the grass and watching the red and purple sunset. Have they contradicted each other? No, because both stories could be true. They could have sat and watched the sunset, which started with a blue sky and ended with a red and purple one. We cannot mark this as a contradiction.

But consider: we do have a correspondence. Both persons have discussed watching the sky together. We can mark this as a parallel in their stories, and potential evidence that they are the same story. But we ignore the differences, because Person A does not explicitly state that the sky was never red, and Person B does not explicitly state that the sky was never blue. So we cannot use those differences as evidence against their stories being the same story.

In this way, we can start to see how a correspondence is easier to find than a contradiction. Contradictions between two stories are most likely to arise when one person knows the contents of the other story. Most storytellers naturally assert things that were the case, not things that were not. We would not expect Mormon or Moroni to explicitly contradict most elements of Coe’s analysis, even if it turns out that the Book of Mormon is true and Coe is a fraud. (I don’t think he is, just making a point.)

In short, all this Bayesian analysis measures is the relative likelihood of finding correspondences between two stories versus finding explicit contradictions. And it turns out to be far more likely to find correspondences than contradictions. My considered opinion is that this is about the only thing the study actually (successfully) measured. And this is especially true because, on top of this, there was an additional double standard: contradictions had to be explicit in the text, but correspondences did not. Even merely “veiled” correspondences could be included in the analysis (consider the homosexuality example). So in addition to the natural propensity for correspondences between two texts to be easier to identify than direct contradictions, the authors artificially tilt the ratio even more.

… and what they are actually measuring, we can observe elsewhere too.

To make my point further, what of these parallels are unique to ancient America? How many ancient civilizations had “political factions organized around a member of the elite”? How many ancient civilizations had “foreigners move in and take over government, often as family dynasties”? How many ancient civilizations had “city administrative area with bureaucrats and aristocrats”? How many ancient civilizations required tribute of conquered peoples? How many ancient civilizations had “political power is exercised by family dynasties”? I could go on. These all seem like the bread and butter of heavily peopled regions of the ancient world, both in the new world and in the old world.

In short, if we were to find a detailed textbook about the ancient peoples and civilizations of India, and run a Bayesian analysis on the correspondences between ancient India and the Book of Mormon (weighed against explicit contradictions between the two books), could we draw the same conclusions the authors did? I’m certain we could find 100+ such correspondences. And of those things that didn’t line up, I’m certain we could leave them out of the analysis for the reasons mentioned above: unless the texts explicitly contradict each other, we ignore anything that doesn’t correspond. And so when we stack these against each other and discover that there’s an umpteen-trillion-trillion-to-one probability against Joseph Smith having merely guessed his way to a match with ancient India, we might start to wonder a bit at whether this analysis is telling us what we think it is.

Yes, I know, Joseph Smith never claimed it took place in India, and so there’s no reason to pursue such analysis. We wouldn’t conclude that Joseph Smith wasn’t guessing about something he never claimed to be talking about. But my point is that what we are actually measuring is how much easier it is to find correspondences between two descriptions / stories than it is to find explicit contradictions — not whether or not Joseph Smith was guessing.

And perhaps worst of all, the study treats as “independent” correspondences that are not independent

An anonymous redditor (/r/mfchris) recently brought this issue to my attention:

Another egregious issue is that the Dales treated the probability of each correspondence as being statistically independent, which in the context of someone telling a complex political/religious story isn’t reasonable at all. The authors list 33 correspondences that fall under the political umbrella, and the majority all seem to fall under the basic umbrella of “the Nephite civilization was well developed and politically complex relative to the Native American populations in the northeastern US, and it turns out that some central American civilizations were too.”

While I am willing to grant that this may constitute some level of positive evidence in favor of the historical reality of the Book of Mormon, by treating each correspondence as independent the Dales are making the overarching political correspondence a magnitude of 33 times as important as it is, even if you accept their Bayesian priors. It’s like saying the odds of someone being as tall as me are 100:1, and the odds of someone being as heavy as me are 100:1, so the odds of someone being as big as me are 10000:1, except that this study does it to a far worse degree.

The fact that their conclusion is that the probability that Joseph Smith made it up is 2.69 × 10^-142 indicates that they did something horrifically wrong in their analysis, as such extreme values would almost never occur, even in empirically grounded applications; such studies in the social sciences should almost always lead to less extreme conclusions, given the higher uncertainty in quantification.

In short, many of the correspondences in the article are related to each other. For an actual example, here are two separate correspondences: (1) “‘Capital’ or leading city-state dominates a cluster of other communities,” and (2) “Some subordinate city-states shift their allegiance to a different ‘capital’ city”. The first is assigned a likelihood of .02, and the second a likelihood of .1. But these are not independent observations; the existence of #1 dramatically increases the likelihood of #2. And yet, as used in the Bayesian analysis, the authors treat the likelihood of both being true as an order of magnitude less likely than one or the other.

Add on top of this the also highly connected observations that “many cities exist” (.1) and that there were “complex state institutions” (.02) — both of which are bound up with and implied by the earlier observations — and of course you are going to get increasingly ridiculous numbers in your final analysis. (Note: you can’t have “a capital city dominating a cluster of other communities,” and not have “many cities exist.” These should not have been treated as independent observations. Most of these shouldn’t have been.) Add further “parts of the land were densely settled” (uh, yeah, cities), and on we go.

Other examples are similar: “Royalty exists, with attendant palaces, courts and nobles” (.1) and “Some rulers live in luxury” (.5) are not independent observations. Even if the likelihood that Joseph Smith could have guessed that “royalty exists with attendant palaces, courts, and nobles” was 1 in 10, it is simply wrong to conclude that the likelihood that he could have guessed that and that “some rulers live in luxury” is, combined, 1 in 20. One tends to imply the other. Add on top of that the closely related observations of “elaborate thrones” and “gifts to the king for political advantage”, and we start to see the magnitude of the statistical errors here. Each one of these adds an order of magnitude to the final results that shouldn’t (necessarily) be there.

This is not just an error in the priors of the Bayesian analysis. This is a fundamental error in statistical reasoning.
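A toy calculation shows how the independence assumption inflates the result. The tier values 0.02 and 0.1 come from the paper’s two city-state correspondences discussed above; the conditional probability of 0.8 is a made-up illustration:

```python
# As treated in the paper: two "independent" guesses.
p_capital_dominates = 0.02   # correspondence (1), rated 0.02
p_allegiance_shifts = 0.10   # correspondence (2), rated 0.10

# Independence assumption: multiply the two ratings.
naive_joint = p_capital_dominates * p_allegiance_shifts   # 1 in 500

# But if guessing (1) makes (2) very likely -- say P(2 given 1) = 0.8,
# a hypothetical number -- the true joint probability is much larger:
correlated_joint = p_capital_dominates * 0.8              # 1 in 62.5

print(correlated_joint / naive_joint)  # the independence assumption overstates by 8x
```

Repeat that overstatement across dozens of interrelated correspondences and the final product is off by many orders of magnitude, which is exactly the redditor’s point.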

And in this case, we are really only measuring against Coe himself and his book.

Coe’s book undoubtedly makes thousands of statements of fact about Mesoamerican history, culture, and people that have no counterpart in the Book of Mormon. Yet none of those things count as “contradictions,” because they aren’t contradicted explicitly by the Book of Mormon text. And that’s where this analysis falls apart completely. What authors include and don’t include in an academic text depends on their audience and purpose. And Coe didn’t write his book with the Book of Mormon in mind. If he had, he could simply have quintupled the number of statements that explicitly contradict the Book of Mormon text, and thrown off the analysis. If a numerical analysis relies on something as fundamentally non-numerical as this, it’s a bad numerical analysis.

Furthermore, this analysis doesn’t compare the Book of Mormon against Mesoamerica. It compares the Book of Mormon against what Coe believes about Mesoamerica. That’s a cool “gotcha” against Coe, perhaps, but as a numerical analysis, it doesn’t reveal anything at all about Book of Mormon historicity per se. More to the point, it doesn’t even compare the Book of Mormon against what Coe believes about Mesoamerica; it compares only against what Coe decided to include in this particular book. After all, by the researchers’ own admission, Coe believes more facts about Mesoamerica that contradict the Book of Mormon than he included in the text. Which just shows that “contradicting the Book of Mormon” wasn’t the purpose of his text. So we wouldn’t expect the text to be full of contradictions. And that is exactly what the analysis measured.


I have spent a number of years analyzing the research of social scientists (mainly psychologists), and have come to the conclusion that the discipline is filled with excellent researchers who are horrible theoreticians. In other words, psychologists are rich in data but poor in theory. They often do not even know what it is they are actually measuring with their measurement instruments, and more to the point, they often do not know how to make a case that what they are actually measuring is what they intend to.

Furthermore, the social sciences are filled with dogmas that are taken as givens by researchers, and this leads to lopsided scrutiny of their research. Studies that seem to support conclusions favored by the academic establishment are given far less scrutiny than studies that draw those conclusions into question. This means that less careful research can make it into even top-tier journals if the conclusions line up with the dogmas of the discipline. I’m currently writing a book on the importance of epistemic humility in our research, and the dangers of assuming that our methods are more rigorous than they are.

Methodological rigor requires more than merely turning things into numbers. It requires using wisdom and experience to know when to turn things into numbers, and when not to. It requires being clear-headed about what we are measuring and what we are not. It requires that social scientists don’t rely on the veneer of objectivity that numerical analysis provides, especially when evaluating fundamentally non-numerical things. This propensity has wreaked havoc on the social sciences. Bruce and Brian Dale are not social scientists. But I’m seeing inklings of many of the same patterns here.

I believe the Book of Mormon is true. I cannot say the same for this analysis. I would have preferred a qualitative analysis instead, where all the correspondences they list can be presented as valuable and faith promoting, without the pretense of numerical objectivity.


  1. I think you are right in regard to some of the weaker of the claimed correspondences. However, some of them are particularly strong and not always given the numerical strength I believe they deserve. It is interesting and valuable in that it works with no regard for matching a presupposed geographic model, but actually defines the most likely setting within Mesoamerica. And the correspondences for this are very strong. Compared to Sorenson’s model, it identifies a plausible ‘land northward’ that is actually to the north, with no necessity to skew the compass points. Restricting the analysis to Coe’s book also limits the available correspondences quite a bit.

    1. I agree that some of the convergences are very strong. Many of them are. I like their lists. What I disagree with is that the weights assigned to them have any bearing on the analysis, or that anything can be said about the relative number of convergences vs. direct contradictions.

      1. It depends on what you rate as a contradiction; I’m not sure what they are, other than what they list. I’m not sure what would be appropriate to add, given the objective of the study, or even the objective of finding the land of Zarahemla. I don’t think ‘looking’ for contradictions is the way to do either.

        It’s a difficult field, compounded by two major things. One is that the Book of Mormon is a translation of a record of a civilization cut off from the roots of our modern world in 600 BC. It may be translated into English, but neither English literature nor even Latin had any bearing on that civilization; the reality is likely as foreign to us as people on another planet.

        Something I’ve seen with LDSs looking at the ancient Maya structures relates to the interpretation of them. You are probably familiar with El Mirador and its massive structures. The biggest and probably most important structures, after the highway network connected to all the cities in the region, are the triadic temples. Some LDSs look at them and wonder why they were built; some think it was through vanity; others look at the great masks by the stairways and think how far from Christianity these pagan-looking things must be. So, making a judgement, we could assign them a value as a massive contradiction… but are they? We have to know what they mean and how they functioned; they are alien to us. Anthropologists can work on deciphering what the masks are about, but if they are Nephite, we LDSs know what they are about; we also know how they had to function, and we have some indication of what is what with the structure. This info is in the Bible and is applicable to test, provided we imagine them being Nephite. And with just these we can probably add another dozen convergences.

        What I can see is that the result the Dales get, the conclusion, is correct. How they get there may be a little imperfect in places, but by and large it hits the nail on the head and indicates where these lands are. If you look at just the primary things, and if you go deeper than just Coe’s book, the match is very exacting, more so than is possible with any Old World comparison you could make, even if that were something worth doing.

      2. Hi Jeff,

        Definitely agree with the broad strokes of your analysis. You can see my own thoughts in the comment section of the Interpreter article. I’m working on a rather different Bayesian take on Book of Mormon authenticity, and I’m wondering if you’d be willing to take a look at it before I start publishing things. If you’re interested let me know and maybe we can find a way to connect.



  2. Thank you for your analysis — I agree with everything you have pointed out. Any interest in writing a response to the editor asking for a retraction? Publications of this sort delegitimize the entire field of Mormon Studies. Happy to collaborate!

    1. I don’t see the benefit of asking for a retraction. Why not just prepare your rigorously reasoned and researched response and submit it for publication with the Interpreter? Asking for a retraction seems like an easy way out. The Dales have been very responsive to critiques posted in the comments of the article. To state my bias, I liked the original article. And I actually don’t see much fault in lds philosopher’s critique. So I guess you could say that it is not clear to me who has the stronger argument. Since you seem to believe you and lds philosopher have the slam-dunk debunking view, it seems only appropriate that you attempt to publish a response in the Interpreter.

    2. Hi Scott:
      You are welcome to ask for a retraction if you wish. However, neither my son nor I will retract the article voluntarily and Interpreter has already said that the article will not be retracted.
      Bruce Dale

    1. Arrowheads were found at Aguateca in the Peten that date to the classic period. Obsidian tips were also found at El Mirador dating to the 4th century (atlatl or spear points; I’m not sure if they are arrowheads). There is no question about the Book of Mormon’s description of slings anymore, as mounds of sling stones have been found in the big fortifications recently discovered between Tikal and El Zotz; these fortifications are mostly thought to be 4th century.

  3. I recently wrote a blog post on the very subject of subjectivity in science.
    The problems you cite are far from unique, and they betray a rather common misconception that science is ultimately objective (even when all of the deterministic rules are followed rigorously). The true answer is that everything in the purely scientific realm is ultimately based on subjectivity, bias, and, yes, mere guesswork: the fact that we ourselves find some of these claims more credible than others reveals either (a) our own subjective bias, or (b) that we know something through means external to science. See the examples here:

  4. I am also a believing member of the LDS church, and I couldn’t help but laugh out loud at a number of the statements in the paper. Their tone is one of wide-eyed wonder and a casual sort of relationship to “science”. Perhaps the hope was that using statistical-sounding language would give this the sheen of acceptable science. They failed.

    What would be really nice is if people were actually somewhat “academic” about this stuff. Start small, establish a bunch of small things, work slowly through a bunch of papers covering any one of the hundred-plus issues that here merit maybe a sentence. Then, as those are “solid”, build from there. I feel like most LDS scholarship wants to jump in right at the end with amazing conclusions, and nobody wants to do the real shovel-work of building up a body of actual scholarship. So we get all these big papers with huge flaws, and it mostly amounts to hot air.

    How about we start with, say, at least one paper on the suitability of this statistical approach to analyzing correspondences in texts in general? Say, for instance, multiple news reports of the same incident or something. I have zero reason to believe, coming into this article, that anything they’re about to embark upon actually makes sense in the context where it’s being used.

    1. Andrew and Thayne:
      You are welcome to believe anything you want about our paper, “hot air” or “huge flaws” or “veneer” of scholarship or “wide-eyed wonder”, as you view these issues. As you can imagine, we don’t agree.

      However, I would like to point out three things.

      First, the scholarship here involved reading Coe’s book at least a half dozen times, the Book of Mormon hundreds of times, View of the Hebrews and Manuscript Found each a couple of times and listening to all six of the podcasts.

      We then listed subject areas where Coe’s book and either a) the Book of Mormon, or b) View of the Hebrews or c) Manuscript Found said something specific about a particular fact claim, for example: the presence of writing, cities, wars, agriculture, metallurgy, and so forth. We then compared those fact claims for all three books versus Coe’s book for both positive and negative correspondences. We describe this process in some detail in our paper.

      The bottom line is that the Book of Mormon fares very well in this comparison with Coe’s book, while the two “controls” do not.

      Second, Andrew and Thayne both appear to believe that our methodology is inherently biased toward finding positive correspondences. Well, if our methodology by its very nature is biased toward uncovering positive correspondences (i.e., evidence in favor of the Book of Mormon), why did it not do so for either of the other books? It did not. The skeptical prior for Manuscript Found was made much stronger by the accumulated evidence against that book, while the skeptical prior for View of the Hebrews was essentially unchanged.

      Neither of you has yet engaged with that fact. How do you explain it if our methodology is biased toward finding positive correspondences? Are there any academic studies that support your “considered opinion” that our methodology is biased in this way?

      To try to compensate for our possible bias, we did several sensitivity analyses to provide very rigorous tests of our assumptions and data. None of these sensitivity analyses changed our conclusions. You do not engage with that fact either.

      One of the commentators did yet another sensitivity analysis in which he removed all of the correspondences that we had weighted as 0.5, and downgraded all the other correspondences. The correspondences weighted as 0.1 were “downgraded” to 0.5 and the correspondences weighted as 0.02 were downgraded to 0.1. Even with these rather severe changes, our conclusions were unchanged.
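      [Editor’s note: the arithmetic behind that sensitivity analysis can be sketched in a few lines of Python. The per-tier counts below are hypothetical placeholders; the paper’s actual tier assignments for the 131 correspondences are detailed in its Appendix A.]

```python
from math import prod

# Sketch of the likelihood-ratio arithmetic described above.
# The per-tier counts are HYPOTHETICAL placeholders, not the
# paper's actual assignments (those are in its Appendix A).
tiers = {0.02: 40, 0.1: 40, 0.5: 51}   # likelihood ratio -> count

def combined_ratio(tiers):
    """Multiply each correspondence's likelihood ratio together,
    treating the correspondences as statistically independent."""
    return prod(ratio ** count for ratio, count in tiers.items())

original = combined_ratio(tiers)

# The commenter's "severe" variant: drop the 0.5-weighted tier
# entirely and downgrade the rest one step (0.02 -> 0.1, 0.1 -> 0.5).
downgraded = combined_ratio({0.1: 40, 0.5: 40})

print(f"original:   {original:.2e}")
print(f"downgraded: {downgraded:.2e}")
```

      The sketch only shows that when well over a hundred small ratios are multiplied together, the product is so small that even severe reweighting leaves it overwhelming; the result is driven almost entirely by how many ratios get multiplied, which is also why the independence assumption matters so much.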

      I suppose someone could respond that we, because we are believing Latter-day Saints, intentionally (or subconsciously) selected evidence in favor of the Book of Mormon and suppressed contrary evidence. We did not.

      But at that point, the responsibility for proving their point would pass to the critic. It is inadequate to call someone a liar or deluded.

      The critic would have to do the same work we have done, and either show negative points of evidence against the Book of Mormon that we have missed and/or point out where our positive evidence is either overstated or non-existent.

      But as the sensitivity analyses show, that is quite a hill to climb.


      ps. Spoiler alert: in my last few readings of the Book of Mormon, I have found three more positive correspondences with Coe’s book, but no additional negative correspondences. The additional positive correspondences are: 1) widespread literacy, 2) people living in homes/houses (not teepees or wigwams) and 3)… I will let you find that one. 🙂

      1. Bruce,

        First, you have not indicated that you understand, nor have you responded to, the independence-of-observations critique (and how it affects your analysis). Second, it’s still not entirely clear that you understand my main critique either. If either of these critiques is correct, your sensitivity analysis is insufficient, and so is your commenter’s sensitivity analysis.

        That you have yet to acknowledge the first, or demonstrate that you understand the second, simply shows that you aren’t taking your critics very seriously. You clearly have a very high estimation of your own work. But lots of smart people are looking at this with concern — and not because they disagree with you on the Book of Mormon, or even disagree with your broad conclusions. But because they think the statistical reasoning is flawed.

        Epistemic humility would require that you try to understand what they are saying, correct? So far you have not. You’ve rebutted, but you haven’t actually responded. That’s concerning to me.

  5. Jeffrey:
    I plan to give you a more complete response in the Interpreter comments section in a week or so, maybe sooner if I can get my mission responsibilities done quickly. In the meantime, here are a few points you might want to consider.

    1) My son and I are engineers, trained more in the physical sciences and mathematics than in the social sciences. You may be correct that some of the social/cultural correspondences are less strong than we have claimed. But others, as Mark Parker notes and we also note in Appendix A, are probably much stronger than we have claimed.
    2) That said, there are many, many correspondences regarding technology, war, geography, etc. between the two books that have little or nothing to do with political or social correspondences. I think these are much less likely to exhibit the overlap that might occur in some social/political correspondences.
    3) In either event, we tried to deal with concerns over the strength of the evidence and our personal bias by means of sensitivity analysis, which you do not mention in your article. Even giving the positive correspondences the minimum statistical weight and the negative correspondences the maximum statistical weight, the overall conclusion is unchanged. So we made a strong effort in the paper to deal with our personal bias and strength of the evidence…but you do not mention that effort at all.
    4) We had a much more limited scope for our paper than you seem to imply. Our objective was simply to compare the truth claims of the Book of Mormon with facts stated by Dr. Coe in his book. Coe has repeatedly stated that the Book of Mormon has very little to do with ancient Indian cultures, “in spite of much wishful thinking.” But it turns out that the Book of Mormon has a great deal to do with ancient Mesoamerican Indian cultures according to the facts summarized in Coe’s book. And you can entirely set aside the Bayesian analysis and just look at the correspondences without weighting them, if our Bayesian analysis is giving you heartburn.
    5) Now here is the really key disagreement between us. You state above “that in order for something to be counted as evidence against the Book of Mormon it has to be mentioned in both texts.” That is true. It is also true that in order for something to be counted as evidence FOR the Book of Mormon, it has to be mentioned in both texts. Please explain to me how that approach is unfair or slanted in favor of the Book of Mormon. It is not.
    6) I am not sure you have read our paper carefully. For example, you state explicitly that you have not read all of Appendix A (a detailed treatment of the 131 correspondences). I wonder also if you have read the article itself carefully and thoroughly. If you did, then you missed how we attempted to deal with personal bias issue.
    7) You also appear to have missed how the Bayesian likelihood ratios were assigned. They were not “arbitrary,” as you state above. They were assigned based on one of three essentially subjective weightings given in the highly cited Kass and Raftery paper we also referenced. In Appendix A, we justify how those numerical weightings were assigned to each correspondence.

    1. Bruce, thanks for your response!

      #2 – If you are referring here to the independence-of-observations critique, I’m willing to look closer at those other sections. However, the fact that this wasn’t addressed in the original signals that it wasn’t a concern in the original analysis, even though it is a fundamental statistical consideration. Furthermore, it’s not merely “overlap” that is the concern; it is correlation — if two observations are highly correlated, then they cannot be treated as statistically independent.

      Let me invent an example. Imagine you are trying to determine the likelihood that Person X smokes. It turns out that people from Utah are 50% less likely to smoke. It also turns out that Latter-day Saints are 90% less likely to smoke. You can’t just put both of these variables into the same model, especially one (like a Bayesian model) that will simply multiply them, because even though one is “religion” and the other is “state” (geography and religion are two very different things), they are in this case fundamentally connected, and you are just going to inflate your analysis. This is the issue of multicollinearity.
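      [Editor’s note: that toy scenario can be put in numbers. Everything below uses the hypothetical rates from the example itself, not real survey data.]

```python
# Toy odds calculation for the smoking example above. All numbers
# are hypothetical, taken from the example, not from real data.
base_rate = 0.20                       # assumed base rate of smoking
prior_odds = base_rate / (1 - base_rate)

lr_utah = 0.5   # "people from Utah are 50% less likely to smoke"
lr_lds = 0.1    # "Latter-day Saints are 90% less likely to smoke"

# Naive update: treat the two observations as independent and
# multiply both likelihood ratios into the prior odds.
naive_odds = prior_odds * lr_utah * lr_lds

# If "lives in Utah" carries little information beyond "is LDS"
# (the two are strongly correlated in this toy population), the
# defensible update counts the stronger factor only once.
corrected_odds = prior_odds * lr_lds

# The naive model ends up twice as confident as the corrected one,
# and the overstatement compounds with every additional correlated
# variable multiplied in.
print(naive_odds, corrected_odds)
```

      The design point is that a product-of-ratios model silently assumes every factor contributes fresh, independent information; each correlated pair double-counts the same evidence.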

      #3 – You say I didn’t mention this at all, and I thought I had, but it must have been in comments elsewhere. Yes, I do mention the sensitivity analysis indirectly, since what it demonstrates is that your actual probability assignments turn out not to matter at all; the analysis hinges entirely on the sheer number of correspondences. This might seem like a strength, but you do not do any work to ensure that you are balancing against the relative scarcity of direct contradictions generally.

      #4 – I wish you would set aside the deeply problematic numerical analysis and just look at the correspondences, if that is your goal. Furthermore, your comments on the article and elsewhere — and the way that the article is promoted by LDS Living and Daniel Peterson — betray the fact that you do see this as evidence of the Book of Mormon’s historicity, and are using it in that way. Despite your protestations of limited scope, the article is being promoted broadly as irrefutable mathematical evidence that the Book of Mormon is true.

      #6 – I haven’t yet read through every single one of the 131 correspondences (though I’ve read through many of them), but I did read the main part of your paper closely.

      #7 – I did mention this in my response, contrary to what you have stated.

      #5 – You show no evidence that you understand my argument here at all. Yes, both correspondences and contradictions need to be mentioned in both texts, but this still results in a higher standard for contradictions, in two ways: (1) a contradiction had to be direct, whereas even veiled and implied correspondences were accepted (see the homosexuality one), and (2) there are inherently more correspondences than direct contradictions in any two texts, and you failed to acknowledge or account for this in your model. Here’s another restatement of my argument, from a comment I posted elsewhere:

      In order to be included in the analysis, a contradiction had to be something mentioned in both texts, and directly contradict each other. So if Coe’s text said, for example (making this up), that the Mayan worshipped cats and every Mayan household had a cat, this was not included in the analysis because the Book of Mormon said nothing about cats. It could only be included if the Book of Mormon explicitly said there were no cats. Heck, Coe could say that the Mayan were visited by aliens, and it wouldn’t be a contradiction because the Book of Mormon is silent on aliens.

      This is sorta as it should be, because we can’t call something a contradiction when one text is silent. Silence is not absence. So we should not use something one text is silent on as evidence of contradiction. But what this means is that direct contradictions are going to be much harder to come by when comparing two texts — not because civilizations are similar, but rather precisely because writers don’t normally talk about what is not true of their culture and people. (Ditto for historians — they fill their books with what they believe is the case, not with what they believe is not.)

      If you compare any two historical narratives / books by this standard, you are going to find many more convergences than contradictions. Again, not because of the similarities in history or story, but because of the nature of human storytelling and writing. Heck, the Chronicles of Narnia and The Hobbit have orders of magnitude more convergences than contradictions when you set it up this way. The Chronicles of Narnia are silent on hobbits, and The Hobbit is silent on benevolent, all-powerful felines.

      But they both have walled cities, wars between nations, commerce, various races, dragons, magic, kings, dwarves, etc. The convergences are going to stack high, but the direct, explicit contradictions will be far fewer. Not because the two books are remotely similar, but because direct contradictions between two stories or historical narratives just don’t happen all that much. Unless C.S. Lewis was explicitly trying to contradict Tolkien, he’d have no need to comment on the absence of hobbits or a forest called Mirkwood. Unless Tolkien was explicitly trying to contradict C.S. Lewis, he’d have no need to comment on the absence of a lamp post in the woods, or a period of winter and a white witch.

      It’s precisely this reason that the contradictions that are included tend to be those rare moments where Coe actually asserts a negative. And since asserting a negative is not a very empirical thing to do, he doesn’t do it very much, hence we have fewer of them. Coe clearly had a number of things he could assert to contradict the Book of Mormon, that he didn’t include — and the authors pull a few of these extra things in. But this just illustrates the point: Coe wasn’t trying to contradict the Book of Mormon, so he didn’t include many things he could have in his text.

      So to then run a numerical analysis that turns on the number of convergences and contradictions you find is circular logic. You haven’t established anything about the civilizations in question. You’ve only discovered something about human writing and storytelling (or in this case, the way historians write history): unless we are protesting our innocence of a crime, we are usually silent about things that didn’t happen (and also silent about much that did), and unless we are trying to contradict another historian’s narrative, we probably won’t assert very many negatives in our account of history.

      This, plus the independence-of-observations piece, is what makes your final numbers so ridiculously large. Imagine that you included in your analysis of Narnia and The Hobbit correspondences like “Both stories have a number of fantasy races” and “Both stories include dwarves”: you’ve added two variables that are highly correlated, and just inflated your number by an order of magnitude or two. Do that several dozen times over, and you get results that are well outside normal Bayesian ranges.

      These are both matters of statistical reasoning that you are being very dismissive about, in ways that damage your credibility. Yes, you are very informed and seem like you know a lot about Bayesian statistics. But I’ve seen lots of other people who also work with Bayesian analyses who say that these concerns are very, very damaging to your analysis. I’ve yet to see you actually address these concerns rather than blithely dismiss them.

      1. Jeffrey:
        I am preparing a more extensive reply to you that I hope to finish later this week. In the meantime, it will help me to know how you think the facts summarized in Coe’s book might be correctly compared with the fact claims of the Book of Mormon.
