This is the 3rd post reviewing By the Hand of Mormon, by Terryl Givens. I’ve taken a bit of an interest in wordprint studies. Givens explains wordprint studies on page 156.
Computational stylistics is based on the premise that all authors exhibit subtle, quantifiable stylistic traits that are equivalent to a literary fingerprint, or wordprint. The method has been used to investigate other instances of disputed authorship, from Plato to Shakespeare to the Federalist papers. Analyzing blocks of words from 24 of the Book of Mormon’s ostensible authors, along with nine nineteenth-century writers including Joseph Smith, three statisticians used three statistical techniques (multivariate analysis of variance, cluster analysis, and discriminant analysis) to establish the probability that the various parts of the Book of Mormon were composed by the range of authors suggested by the narrative itself. They found that all of the sample word blocks exhibit their own “discernable authorship styles (wordprints),” even though these blocks are not clearly demarcated in the text, but are “shuffled and intermixed” throughout the Book of Mormon’s editorially complex narrative structure (wherein alleged authorship shifts some 2,000 times). Emphasizing the demonstrated resistance of these methods to even deliberate stylistic imitation, they further conclude that “it does not seem possible that Joseph Smith or any other writer could have fabricated a work with 24 or more discernible authorship styles.” The evidence, they write, is “overwhelming” that the Book of Mormon was not written by Joseph Smith or any of his contemporaries or alleged collaborators they tested for (including Sidney Rigdon and Solomon Spaulding).4 A subsequent, even more sophisticated analysis by a Berkeley group concluded that it is “statistically indefensible to propose Joseph Smith or Oliver Cowdery or Solomon Spaulding as the author of 30,000 words…attributed to Nephi and Alma…The Book of Mormon measures multiauthored, with authorship consistent with its own internal claims. These results are obtained even though the writings of Nephi and Alma were ‘translated’ by Joseph Smith.”5
Ok, let me talk about multivariate analysis of variance, cluster analysis, and discriminant analysis. These are advanced, graduate-level statistical techniques. Ronald Fisher was a famous English statistician (ok, famous only to statisticians) who pioneered many of these techniques. Danish professor Anders Hald said Fisher “almost single-handedly created the foundations for modern statistical science.” Fisher died in 1962. These techniques are relatively new, and frankly aren’t discussed in typical bachelor’s-level statistics courses.
Givens’ book was published in 2002. From reading this paragraph, one would think wordprint studies come down solidly in favor of the Mormon position. However, in Dec 2008, Oxford Journals published a new study called “Reassessing authorship of the Book of Mormon using delta and nearest shrunken centroid classification.” I have a master’s degree in statistics, and until I saw this article, I had never heard of nearest shrunken centroid classification. I must say I have always been impressed with Wikipedia when it comes to math articles, but Wikipedia doesn’t even have an article on shrunken centroid classification. I found this Stanford University article that describes the technique. Apparently it is used in cancer gene analysis. The authors of this Book of Mormon authorship article are three Stanford University professors: Matthew L. Jockers (English), Daniela M. Witten (Statistics), and Craig S. Criddle (Civil and Environmental Engineering). They claim that “Our findings support the hypothesis that Rigdon was the main architect of the Book of Mormon and are consistent with historical evidence suggesting that he fabricated the book by adding theology to the unpublished writings of Spalding (then deceased).”
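Since the paper itself is paywalled, here is my own rough sketch of the nearest shrunken centroid idea in Python (illustrative only; the word list, shrinkage threshold, and distance measure are simplifications of mine, not the authors’ actual procedure): represent each text as a vector of function-word rates, average those vectors into a per-author centroid, soft-threshold each centroid toward the corpus-wide mean (the “shrinkage”), and attribute a disputed text to the nearest surviving centroid.

```python
from collections import Counter

FUNCTION_WORDS = ["the", "and", "of", "to", "that"]  # real studies track ~100 such words

def freq_vector(text):
    """Relative frequencies of the tracked function words in a text."""
    words = text.lower().split()
    counts = Counter(words)
    return [counts[w] / len(words) for w in FUNCTION_WORDS]

def centroid(vectors):
    """Coordinate-wise mean of a list of frequency vectors."""
    return [sum(col) / len(col) for col in zip(*vectors)]

def shrink(author_centroid, overall, threshold=0.01):
    """Soft-threshold each coordinate toward the corpus mean (the 'shrinkage')."""
    shrunk = []
    for a, o in zip(author_centroid, overall):
        d = a - o
        if abs(d) <= threshold:
            shrunk.append(o)  # this word no longer distinguishes the author
        else:
            shrunk.append(o + (d - threshold if d > 0 else d + threshold))
    return shrunk

def classify(sample, shrunken_centroids):
    """Attribute a sample to the author with the nearest shrunken centroid."""
    def sq_dist(author):
        return sum((s - c) ** 2 for s, c in zip(sample, shrunken_centroids[author]))
    return min(shrunken_centroids, key=sq_dist)
```

Trained on toy texts where one “author” overuses “the” and another overuses “and”, `classify` recovers the right author for new samples in those styles. In the real method the shrinkage is also standardized by within-class variance, and words whose centroids collapse onto the overall mean drop out of the comparison entirely, which is what makes the technique workable in high-dimensional settings like gene-expression data.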
(The abstract is found here, but you have to pay $28 to actually view the article.) FAIR has criticized the methodology of the study because they didn’t include Joseph Smith as a possible author. Why isn’t he as likely as Spalding to have written it? It appears the Stanford professors decided that the true author of the Book of Mormon was one of only seven possible authors: Oliver Cowdery, Parley P. Pratt, Sidney Rigdon, Solomon Spalding, Isaiah/Malachi, Joel Barlow, and Henry Longfellow. Barlow and Longfellow are poets thrown in as controls, so it shouldn’t be a surprise that they didn’t match. Since the Book of Mormon includes writings of Isaiah and Malachi, these portions should easily match, and the Jockers study concludes that they do.
I guess my biggest problem with Jockers is this. The corrected abstract refers to a correction on one chapter: “With the corrected data, NSC ranked Rigdon at 0.4626 and Spalding at 0.46525.” If I am understanding this correctly, these numbers are probabilities. So the probability that Sidney Rigdon is the real author of this chapter of the Book of Mormon is less than 50%–not exactly a ringing endorsement, I’d say. I’d like to see probabilities for the other chapters, especially the Isaiah and Malachi chapters, which I expect will show a pretty strong correlation.
Now, to be fair, I don’t have the probabilities that Givens is referencing–perhaps they are suspect as well. But I expect that Isaiah and Malachi have much higher probabilities than 0.4626 in the Jockers study. So, what do you think of wordprint studies?
We went round and round about this before and I don’t think we accomplished much. 🙂
Is it possible to discuss the Jockers/Criddle study with an open mind, or will people just gravitate to their already presupposed bias about the origins of the Book of Mormon and fight anything that doesn’t line up?
I ask because according to Criddle, this same methodology has been tested against known texts with a subject group of authors with amazing accuracy. In other words, if I feed the computer data from 10 different authors with similar writing styles and then insert a new text written by one of the 10, can the computer tell you which one wrote it? The answer is yes about 98% of the time. So could we at least agree as a baseline that, properly conducted with the right controls and the right amount of data, this methodology has a mathematically solid foundation?
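The validation protocol Doug describes can be sketched as a leave-one-out test: hold out one text by a known author, train on the rest, and check whether the method recovers that author. This is purely illustrative Python with a bare nearest-centroid classifier and an invented three-word list, not Criddle’s actual setup.

```python
from collections import Counter

WORDS = ["the", "and", "of"]  # real studies track ~100 function words

def rates(text):
    """Relative frequency of each tracked word in the text."""
    c, n = Counter(text.lower().split()), len(text.split())
    return [c[w] / n for w in WORDS]

def nearest(sample, centroids):
    """Author whose centroid has the smallest squared distance to the sample."""
    return min(centroids, key=lambda a: sum((s - x) ** 2
               for s, x in zip(sample, centroids[a])))

def leave_one_out_accuracy(texts_by_author):
    """Hold out each text in turn; train centroids on the rest; count recoveries."""
    hits = trials = 0
    for author, texts in texts_by_author.items():
        for i, held_out in enumerate(texts):
            centroids = {}
            for a, ts in texts_by_author.items():
                rest = [t for j, t in enumerate(ts) if not (a == author and j == i)]
                vecs = [rates(t) for t in rest]
                centroids[a] = [sum(col) / len(col) for col in zip(*vecs)]
            hits += nearest(rates(held_out), centroids) == author
            trials += 1
    return hits / trials
```

On toy corpora with strongly distinct function-word habits this recovers every held-out text; the reported ~95–98% figures for real methods come from exactly this kind of held-out testing, just with far more words and longer samples.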
“I guess my biggest problem with Jockers is this. The corrected abstract refers to a correction on one chapter: ‘With the corrected data, NSC ranked Rigdon at 0.4626 and Spalding at 0.46525.’ If I am understanding this correctly, these numbers are probabilities. So the probability that Sidney Rigdon is the real author of this chapter of the Book of Mormon is less than 50%–not exactly a ringing endorsement, I’d say. I’d like to see probabilities for the other chapters, especially the Isaiah and Malachi chapters, which I expect will show a pretty strong correlation.”
I think you’re playing right into the study authors’ hands with this statement. For the chapter in question, no author comes out as a clear winner, and therefore it’s not possible to discern who actually wrote “that” chapter. If all the chapters came out like that, then the study would not be helpful at all in assessing who actually wrote the book…. Fortunately, that wasn’t the case with all of them… 🙂
“In other words, if I feed the computer data from 10 different authors with similar writing styles and then insert a new text WRITTEN BY ONE OF THE 10, can the computer tell you which one wrote it? The answer is yes about 98% of the time”.
But what is the point of the numbers if you insert a text written by an 11th person? The study doesn’t test Joseph Smith because the authors asserted they had no access to ANY usable documents they could attribute to Joseph. You have to solve that problem before you HAVE proper controls and methodology.
So the proper conclusion to draw from the numbers quoted in the OP is that IF any of the seven people tested wrote the chapter, Rigdon and Spalding were about equally likely, and far more likely than any of the other five.
IF any woman is elected President of the United States in 2012, Nancy Pelosi, Hillary Clinton, or Sarah Palin are far more likely than any other woman. But it is illogical to infer anything about the absolute probabilities for any of them from that statement.
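The logic of that analogy can be made concrete with a toy calculation (entirely my own illustration; the distances are invented): a closed-set method hands out 100% of the attribution among whoever is in the pool, so excluding a candidate simply renormalizes the scores among the rest.

```python
def relative_scores(distances):
    """Turn per-author stylistic distances into shares that sum to 1,
    which is what a closed-set attribution implicitly does."""
    inv = {author: 1.0 / d for author, d in distances.items()}
    total = sum(inv.values())
    return {author: v / total for author, v in inv.items()}

# With the true author in the pool, the match dominates:
full = relative_scores({"true author": 0.1, "B": 1.0, "C": 1.2})

# Drop the true author: the method still hands out 100% among the rest.
partial = relative_scores({"B": 1.0, "C": 1.2})
```

Either way the shares sum to 1, so a 0.46-ish share by itself says nothing about whether the true author was in the pool at all.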
You’re such a good straight man! Jockers and company will be publishing an update later this year, but here’s the short version of what they’ve done.
From Craig Criddle on RFM:
“A few things have happened since we published our article in the Journal of Literary and Linguistic Computing. For one, Matt Jockers carried out a further analysis of texts attributed to Smith and compared them to texts in Smith’s own handwriting. About 20% of the texts attributed to Smith clustered with texts in Smith’s handwriting. So these documents might be authentic Smith-authored text.
Also, since I gave the Ex-Mo conference talk in October, we have re-tested the Book of Mormon, including this new material as a “signal” for Smith. This new analysis continues to indicate that the major authors were Rigdon and Spalding. However, a fair number of chapters are now attributed to Smith.
Some of the chapters attributed to Smith are ones you might guess, just based on their content. Here are a few of them:
2 Nephi 3 – the chapter in which the ancient patriarch Joseph prophesies of a future seer who would be named Joseph and whose father would be named Joseph.
Alma 37 – the chapter describing a seer stone “which shall shine forth in darkness unto light” prepared for “Gazelam”, a name adopted by Smith in the Doctrine and Covenants.
Helaman 13 – the chapter that describes “slippery” treasures that are buried in the ground.
Jacob 2 – the chapter that proclaims the importance of chastity, but has an “escape-clause” that would permit polygamy.
Bottom line… the new attributions continue to support the Spalding-Rigdon theory but also suggest a minor contribution from Smith.”
Minor note, while awaiting the paper. Jacob 2’s escape clause is in the punctuation — the original HAD no punctuation or versification — and the punctuation was added long after the original manuscript was handwritten.
“Is it possible to discuss the Jockers/Criddle study with an open mind or will people just gravitate to their already presupposed bias about the origins of the Book of Mormon and fight anything that doesn’t line up?”
I doubt it Doug, but we’ll give it a try! 🙂 Seriously, I picked the title for a purpose. We have two sets of researchers who come to completely opposite points of view here. So, in all seriousness, what is the best way to settle this? Let me quote the opening post again. First, the BYU researchers say,
The Stanford researchers say,
Now, Doug, how do we settle this really?
I note that the BYU researchers included Rigdon, Smith, and Spaulding in their sample, yet the Stanford researchers did not include Smith in the original study. Don’t you find that a bit problematic methodologically?
I’ll get into some more details in a bit, but first, I would like to see where you stand on the Stanford researchers sample not including Smith. Also, have you been able to read either the Jockers study, or the BYU studies? I haven’t been able to get my hands on any BYU studies to see what flaws they might suffer from.
I will certainly say that the Neal Maxwell Institute ought to be looking to fund openings for techies prepared to dig into the mathematical weeds. We won’t settle this here, but this is an area that seems ripe for progress, pro or con, over the next several years.
And, despite the subject matter of some of my posts, I am very straight. 😀
“I note that the BYU researchers included Rigdon, Smith, and Spaulding in their sample, yet the Stanford researchers did not include Smith in the original study. Don’t you find that a bit problematic methodologically?”
I actually think this is just good science. The Jockers study didn’t dodge the issue of including JS in the first run; they plainly stated that they didn’t believe a good enough sample set was available to give his “voice”. Had they included, say, parts of the D&C or other things written by scribes for JS and he didn’t show up as a possible contributor, FAIR would have blasted the study, exclaiming that the writings used weren’t actually written by JS. Even today there seems to be very little in the way of actual writings from JS’s own hand. Assembling a group of letters and short correspondences won’t yield the same flow as writing a long narrative. Perhaps as a result of the original paper they now have enough data to give the computer a good sample of JS’s signal. As I posted in number 3 above, he is now being considered.
“Also, have you been able to read either the Jockers study, or the BYU studies? I haven’t been able to get my hands on any BYU studies to see what flaws they might suffer from.”
I read the Jockers study last year when it was available for free at the website. (I wish I had copied it to my computer.) I haven’t seen the BYU study, but I did hear that other folks at BYU had real problems with it–most notably, I think, those who subscribe to the “loose” translation theory. 🙂
I think it’s important to note that MH and others in the apologetic community are correct in pointing out that if the author of the actual text is not available to the computer, then the results aren’t going to tell us directly that there isn’t a match. However, what does come out might give some good insight into how the program reacts to this type of scenario.
The study used 7 authors at first, and now 8 if we include Joseph Smith. Therefore, if the text being sampled doesn’t match up with any of the 8, you would expect the output to assign each possible author around a 12% chance of being the actual author. Chapters in the book that have a fairly balanced percentage between three or more possible contributors would not work very well for discerning who actually wrote them, if any of the candidates wrote them at all. On the other hand, if the compilation puts one particular author high in percentage relative to the other 7, then it becomes more and more likely that he wrote the given section, depending on how high the percentage goes. For the statisticians out there, we should be able to come up with the pool size that virtually eliminates false positives from occurring. For Jockers that number was 7; perhaps it’s really 10 or 20. The point being, with a big enough pool of possible authors, the chances of giving an incorrect author over a 50% chance of being the actual author should be reducible to near zero.
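Doug’s intuition about pool size can be checked with a quick simulation (my own sketch, assuming spurious match scores are just random noise that gets normalized to sum to 1):

```python
import random

def spurious_over_half(n_authors, trials=10_000, seed=0):
    """Fraction of trials in which one of n_authors random, meaningless match
    scores exceeds 50% after the scores are normalized to sum to 1."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        scores = [rng.random() for _ in range(n_authors)]
        if max(scores) / sum(scores) > 0.5:
            hits += 1
    return hits / trials
```

Under this toy model a 3-author pool produces a spurious >50% attribution about half the time, while a 7-author pool almost never does, which matches the intuition that a bigger pool suppresses that kind of false positive.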
As I stated at the beginning, given the right controls, shouldn’t we be able to test the ability of the “Delta” and “NSC” methodologies to see if we could fool them? And once we’ve developed a process that yields over 98% correct answers to known scenarios, why can’t it then be applied to unknown texts to determine authorship?
One last thought… Jockers and company broke the BoM up by chapters, as they believed it wasn’t written by just one person. They could just as easily have broken the samples up by testing every 1,200 words or any other grouping that someone felt was appropriate. The chapters were added later and are therefore somewhat arbitrary. In other words, it is possible that in a given chapter the first part could carry one signal and the end another, creating a scenario where two authors could be equally scored by the program and both be right.
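The re-slicing suggested above is mechanically simple. A sketch in Python (the 1,200-word block size is just the figure mentioned above; any window size would work):

```python
def word_blocks(text, block_size=1200):
    """Split a text into consecutive blocks of block_size words;
    the final block keeps whatever words remain."""
    words = text.split()
    return [" ".join(words[i:i + block_size])
            for i in range(0, len(words), block_size)]
```

Overlapping or sliding windows would additionally let one look for the mid-chapter style shifts described here.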
I’m in total agreement with you. I think this study is appropriate and deserves to be dissected by both LDS scholars and non-LDS scholars. The biggest problem I see is the lack of interest from the non-LDS scientific community. How do we get some very smart people outside of the church to weigh in? I have no doubt that BYU will engage with all its might and its accompanying bias, but if the outside community feels like it has better things to do than try to destroy the faith of the LDS religion, we may never get the other side of the coin.
Scientific controversies always have this feature, even when we are talking about things as seemingly apolitical as the Hubble constant. People study what is most important to them. At the moment, LDS scholars have the resources and interest. Evangelicals and secularists seem to be trained on each other, if not on other world religions.
So BYU will point out weaknesses, and if they point out enough of them, the “other side” will address them. That’s actually how science is supposed to self-correct and grow.
i’m typing on my phone and trying to remember a few things. doug, your 98% is overstating things quite a bit. I was able to review jockers study today for the first time and learned a few things about the techniques. jockers and company use a very different statistical technique than the byu studies.
the jockers study is interesting because it compares the delta method with the nearest shrunken centroid (nsc). the 2 methods generally agree but the nsc seems to be a bit more reliable. they mention nsc is a technique used in cancer gene analysis rather than wordprint studies, so this is a new and interesting use of the technique. nsc requires known handwriting samples, so this is a bit of a problem for proponents of the idea that mormon, nephi, alma, etc wrote the BoM since no known handwriting samples exist. so, this technique can’t be used to validate unknown authors, which is a bit of a limitation. jockers et al make this clear by emphasizing the fact that writing samples of joseph smith haven’t been authenticated sufficiently. these authors have an assumption that the BoM was written by 19th century people and didn’t attempt to ask if ancient writers could have written it, and though they mentioned the ethan smith view of the hebrews theory, they didn’t test ethan smith as a possible author either.
the byu researchers used 3 different techniques that I mentioned above. jockers summarizes some of these studies. from what I can tell, byu used the existing text itself to see if there were stylistic differences. the byu researchers compared samples with known samples from 19th century writers and did not find a match with joseph smith, sidney rigdon, or solomon spaulding. the studies seemed to confirm that the book was composed by multiple (more than a dozen) authors as the text claims.
it seems differences in the researchers’ opinions come from different assumptions about the BoM itself–byu assumes ancient origins while jockers et al assume 19th century writers.
now, one interesting note in jockers was a false positive. for one of the 21 isaiah/malachi chapters, longfellow was the most likely author (above 80%, as I recall). so the quick math says your 98% is at least closer to 95, and I think we all agree longfellow didn’t write the isaiah/malachi chapters. I bring this up not to find fault with jockers’ numbers, but rather doug’s overstated statistic. I think jockers makes some interesting points, but his method is impossible to use to prove ancient origins without a handwriting sample. byu may have a flaw in their methods too, but I don’t have any documents from byu to review.
That is where a good discussion of Jockers should start 😉
MH, I may be just restating some of what you said in #10, and I haven’t read either study, but from your description, it sounds like they were asking entirely different questions.
The BYU study was testing for one author versus multiple authors. They found evidence to suggest that multiple authors were more likely than one author. This suggested that the Book of Mormon wasn’t written by Joseph Smith.
The Jockers study it sounds like was agnostic on the question of one versus multiple authors, but was trying to pin down who the author or authors were.
Thanks for posting this. Now you’ve got me curious to look the studies up and see if I can understand them.
I’ve looked at the Jockers study briefly. From an apologetic perspective (and I’m sure the FAIR people mentioned this), the biggest weakness is that it assumes that one of the authors from the set was the author. In other words, the analysis in effect asks this question: Given that one of these seven men was the author of this chapter of the Book of Mormon, which one was it? The strength of the analysis, though, is in the inclusion of the control authors who were contemporaries not linked in any way to the Book of Mormon.
To me, an interesting next study would include a lot more contemporary control authors. Rather than novelists, I would be interested to see a comparison with contemporary religious writers not associated with Mormonism.
And this goes back to my previous comment, but the real weakness of the BYU study (as I understand it) is in setting the bar too low–in just looking for multiple authors as evidence that the Book of Mormon is of ancient origin.
Really, though, Doug G. nailed it in #1, I think. I find it difficult to imagine a result of a wordprint study that is going to convince anyone that they’re wrong, regardless of where we all start out.
I can think of additional research questions to ask. Instead of taking chapters, break the text up by the multiple “narrators” of the various portions of the BofM (with all the internal switches) and compare those sections to the modern control group, for example. I think there is a lot to be learned.
“your 98% is overstating things quite a bit.”
Ok MH, yep you got me! I read the study over a year ago and my memory is not as good as it used to be. However, if you’re implying that the Delta and NSC methodology hasn’t been tested with controlled text and a determination made about how accurate it is, then you’re not being fair either. If you have the study in front of you, perhaps you’d be so kind as to tell us how accurate the authors stated it was.
I’m not nearly as educated as you in statistics, so I’m not even going to try to discuss the BYU study on a technical level when, as you pointed out, it’s well beyond my level of understanding. For the layman, I see big problems with statements like this one:
“They found that all of the sample word blocks exhibit their own “discernable authorship styles (wordprints),” even though these blocks are not clearly demarcated in the text, but are “shuffled and intermixed” throughout the Book of Mormon’s editorially complex narrative structure (wherein alleged authorship shifts some 2,000 times).”
This statement makes no sense to me at all even if I were still a believing member for the following reasons.
1. The BoM itself claims that the gold plates were written by Mormon as he abridged many different records of his people from the time of King Benjamin until his delivering them to his son. He even states that he can’t write a hundredth part of the history that’s available to him. So shouldn’t at least Mosiah through Moroni, excluding the Book of Ether, be in Mormon’s voice?
2. I’m not a translator, but I’m told that translation is done by converting phrases from one language to another. It doesn’t work to try to translate word for word, as the resulting text would be nonsensical. Therefore, however you want to suppose JS “translated” the text on the plates, he would have had to put the sentences into English and reword them to make sense in our language. So shouldn’t the whole book be in his voice?
3. Apologists have worked very hard to convince the inquirer that JS would have had to put the text into the language he understood. Therefore, they explain away a lot of problems critics have raised by insinuating that the book was a loose translation. So how does one do a loose translation and still retain “the authorship style” of the original writers?
You tell me MH, does this make any sense to you? I know that Jockers, Criddle, and Witten wrote about previous studies done with “wordprints” and explained why their methods were more accurate. I just can’t remember the argument at present. Perhaps you could enlighten us with that as well from the paper they wrote?
Lastly, even though this is a peer reviewed paper from a major university with two non-members putting their professional reputations on the line, are you insinuating that the Delta and NSC methodology is without merit?
“I think jockers makes some interesting points, but his method is impossible to use to prove ancient origins without a handwriting sample.”
Actually MH, I agree with this statement to a point. Proof would be 100%, and obviously we don’t get that from anything in this study. However, I believe we do have good writing samples from all the possible modern authors of the BoM, so coming up with who most likely wrote it is not impossible. I think you’re going to have to do better than that to convince anyone on my side of the fence that the BYU study is much more accurate in deciding whether the text is ancient or modern.
Stephen, I did find Jockers’ information about false positives quite interesting, and there were other interesting tidbits there. I found footnote 18 particularly interesting. Jockers notes that Stanley Fish and Roland Barthes believe the whole concept of authorship is flawed. This would seemingly throw doubt on both the Stanford and BYU studies. However, Jockers (and BYU for that matter) is operating under the idea that wordprint studies are a valid technique.
Yes, I agree Ziff: The researchers were asking different questions, which may explain why their conclusions are different. However, it is interesting that the BYU studies and the Stanford study seem to agree on two things: (1) there were multiple authors of the Book of Mormon; (2) going off Doug’s update in #3, it would seem that BYU and Stanford agree that Joseph Smith was not the primary author of the Book of Mormon.
“2. I’m not a translator, but I’m told that translation is done by converting phrases from one language to another. It doesn’t work to try to translate word for word, as the resulting text would be nonsensical. Therefore, however you want to suppose JS ‘translated’ the text on the plates, he would have had to put the sentences into English and reword them to make sense in our language. So shouldn’t the whole book be in his voice?”
I’m a translator, so I’ll address this a little. The way (good) translation usually works is to understand concepts in one language and then express those concepts as nearly as possible in another language. But I don’t think what Joseph (purportedly) did was actually translation per se, since he wasn’t reading the plates and then writing or dictating the English. After all, apparently the plates were covered and/or his face was buried in a hat much of the time, so whatever he was doing, it wasn’t reading the plates. Joseph would have been working more by some sort of inspiration rather than actually translating. That said, I have no idea how that should affect authorial voice.
“Lastly, even though this is a peer reviewed paper from a major university with two non-members putting their professional reputations on the line, are you insinuating that the Delta and NSC methodology is without merit?”
That’s always somewhat possible. Methods are picked up and discarded all the time. I think that’s just how science works.
“If you’re implying that the Delta and NSC methodology hasn’t been tested with controlled text and a determination made about how accurate it is, then you’re not being fair either.” I don’t think I made that implication. Jockers states that the delta method has been “well-documented in the literature of computational stylistics.” NSC is a very new statistical technique, and to my knowledge, this is the first use of the technique in computational stylistics or wordprint studies. From Jockers’ study, he seems to believe NSC is more accurate, and it does seem to have fewer false positives and fewer false negatives than the delta method in this study, so I would tend to agree with Jockers that NSC seems more promising than the delta method.
“If you have the study in front of you, perhaps you’d be so kind as to tell us how accurate the authors stated it was.” I’m not sure I understand the question, but I think you are asking me to compare the error rates of Delta and NSC. There are many ways to measure accuracy. Jockers used a training set. Let me try to put this in layman’s terms. I don’t have his exact methodology, but let’s say, for example, he took a long sample of say 500 words known to be written by Parley P. Pratt. Then he takes a 90-110 word sample from this 500-word piece to see how well delta and NSC stack up. They check these 2 samples against a certain 101-word list (‘again’, ‘before’, ‘earth’, etc.). When comparing the 500-word and 90-word samples, the error rate for delta was 11.1%, and the error rate for NSC was 8.8%.
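For readers who haven’t seen the delta method, here is a bare-bones sketch (my own toy example, not Jockers’ code; real studies use on the order of 100 function words and much longer samples). Each tracked word’s frequency is converted to a z-score against the corpus, and a candidate’s delta is the mean absolute z-score difference from the disputed text; a smaller delta means a closer stylistic match.

```python
from statistics import mean, pstdev

def zscores(freqs_by_text, word_list):
    """z-score each tracked word's frequency in each text against the whole corpus."""
    stats = {}
    for w in word_list:
        vals = [f[w] for f in freqs_by_text.values()]
        stats[w] = (mean(vals), pstdev(vals) or 1.0)  # guard against zero spread
    return {name: {w: (f[w] - stats[w][0]) / stats[w][1] for w in word_list}
            for name, f in freqs_by_text.items()}

def delta(z_a, z_b, word_list):
    """Burrows' Delta: mean absolute difference of z-scores over the word list."""
    return mean(abs(z_a[w] - z_b[w]) for w in word_list)
```

Given toy frequency profiles where a disputed text leans the same way as candidate A, `delta` comes out smaller for A than for B, which is the whole attribution criterion.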
Jockers notes that 20 of the 21 Isaiah/Malachi chapters were attributed to Isaiah/Malachi with a probability of 91% or better. However, as I mentioned before, the other chapter was attributed to Longfellow. This is where Jockers talks about false positives. Quoting from the article on page 473,
Once again, this seems to show the superiority of NSC over delta. So, as you can see, there are different ways to measure accuracy.
“Are you insinuating that the Delta and NSC methodology is without merit?” Not at all, and I’m not sure where you felt I insinuated that. NSC is used in cancer gene analysis and seems to be promising. Delta has been used in these types of studies before.
However, I do question Jockers’ sample selection and his application of the method. According to the study, the combined authorship of Rigdon, Isaiah/Malachi, and Spalding accounts for 85% using NSC (93% using Delta). Now this sounds pretty impressive, doesn’t it? Let’s look at the sample again. Quoting from page 470,
So, let’s say that Jockers picked me, Doug, FireTag, Stephen, Ziff, BiV, and Hawkgrrrl. It’s entirely possible that Doug, FireTag, and I (or any 3 on this list) would have writing styles most similar to 85% of the Book of Mormon. Does that really mean that we 3 wrote it?
Now granted, Pratt, Cowdery, and company are much more logical choices than Doug and me. But the fact of the matter is that if Doug and I were the only ones on the list, one of us would have a more similar style to the Book of Mormon than the other, whether our writing style actually represents it or not. We’d expect Doug to have a 50% chance, and I’d have a 50% chance. If Doug came up 85% and I came up 15%, that would be interesting, but would it tell us anything? The fact that Longfellow actually matches Isaiah is something to wonder about, isn’t it?
If none of the candidates had anything to do with authoring the BoM, then by chance the combined probability for any three of them would be 1/7 + 1/7 + 1/7 = 43%. The fact that it is 85% is almost twice the rate it would be by chance, so it is an interesting finding. The Ethan Smith View of the Hebrews theory is a competing theory with the Spalding-Rigdon theory, and Jockers mentions that it would be an interesting project to study. Is it possible that Ethan Smith’s writing is more like the Book of Mormon than Spalding’s? Sure, it’s possible, but it wasn’t tested. There is no way to test Mormon’s wordprint, or Nephi’s, or Alma’s, or anybody else’s. If we could add these others in there, do you think Jockers’ big 3 would still turn up 85%? Jockers states that 15% of the Book of Mormon is Isaiah-Malachi, so I’d put my money on those chapters still showing a strong correlation even with 100 possible authors, but I’m not so sure I’d go there with Rigdon and Spalding if we could find samples from Nephi, Mormon, and/or Ethan Smith.
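The chance-baseline arithmetic here is easy to check (assuming seven equally likely candidates, with the top three counted together):

```python
from fractions import Fraction

# Three of seven equally likely candidates win by pure chance:
chance = 3 * Fraction(1, 7)
assert round(float(chance), 3) == 0.429  # i.e. about 43%, as stated above
```

So the observed 85% is roughly double the chance rate, which is why it reads as a real signal rather than noise, conditional on the pool being the right one.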
All this makes me wonder if Stanley Fish was right…. As I mentioned in my post last week, sometimes people can unintentionally misapply statistics. I’m not saying Jockers misapplied statistics in this case–he could very well be on to something here, and perhaps we do need to reassess Book of Mormon authorship. But I do think further study needs to be made before we conclude that either BYU or Stanford has all the answers to this question. They can’t both be right about Rigdon/Spalding, but they could both be wrong. Maybe it’s Ethan Smith?
The original study by the three BYU statisticians had substantial methodological problems, documented here:
David I. Holmes critiqued the BYU study and performed his own analysis, based on vocabulary richness. Holmes found, to the contrary, that the Book of Mormon does not form prophet-by-prophet clusters; its authorship appeared to be unitary. The Book of Mormon texts also clustered stylistically with the Book of Abraham and some of the D&C texts. They did not cluster with Joseph Smith's own personal writings, but Holmes concluded this was due to Smith's use of a "prophetic voice" in his scriptural writings. See the summary here:
Hilton argued, against Holmes, that vocabulary richness is not a good measure of authorship. He published his own study, which updated the BYU study’s methodology and worked out a number of the methodological kinks. Hilton’s study compared three texts from within Alma and three from within 1 Nephi, and found that each book had an internally consistent style, but was not consistent with the other book. He also found that Smith, Cowdery, and Spalding didn’t match up well with either book.
Although I’m not aware of any published critique of the Hilton study, there are a few problems. First, Rigdon was not included. Second, the samples from within Nephi and Alma were assumed to have unitary authorship. Third, the study was based largely on non-contextual words that in some cases would not have been present in the original Hebrew. Fourth, my understanding of the method is that it requires identifying the ends of sentences. This is a somewhat arbitrary endeavor in Smith’s corpus, and especially in the Book of Mormon. And fifth, when we look at the results for the Smith-Alma comparison, they are mostly within the tolerance for Smith having been the author of that book. The style of Nephi is more distant from Smith, but I’m not convinced that we should expect any different when comparing Smith’s private letters and personal history to the Book of Mormon, which is a very different kind of text. In sum, there appears to be something to this study, but I am not entirely satisfied.
And now, of course, we have the Jockers et al. study, which also seems to have some methodological problems, but again posits multiple authorship (split among 5 or 6 authors, if I recall correctly), and finds that the stylistic differences do not follow the text's internal authorship divisions.
All of this looks to me like an incomprehensible cacophony, with emphasis on the “phony”. I don’t quite know what to do with all these studies, but I’m not really convinced by any of them, at the end of the day. I rather doubt that statistical methods of authorship attribution can be reliably applied to a text with the characteristics of the Book of Mormon. I’ll stick to more traditional modes of analysis until and unless the math guys can come to some kind of agreement.
Ok, I've had all night to think about this and I want to add a few points about Biblical authorship. I don't know if anyone has read my posts on the Documentary Hypothesis. Briefly, there are scholars who believe the first 5 books of Moses (Genesis, Exodus, Leviticus, Numbers, and Deuteronomy) were written by 4 different sources at different times. They refer to these sources as J (Yahwist), E (Elohist), D (Deuteronomist), and P (Priestly). They believe that the J source was a Kingdom of Judah (southern) source, while the E source was a Kingdom of Israel (northern) source. Contrary to LDS theology, proponents of the Documentary Hypothesis say that Yahweh and Elohim were used interchangeably in Old Testament times (and FARMS has conceded this point as well).
Some scholars believe that simply by looking at the choice of Elohim or Yahweh in the original Hebrew, we can determine whether the original source was J or E. Others, such as Lawrence Schiffman of NYU, disagree with the Documentary Hypothesis completely. Documentary Hypothesis proponents believe several redactors, or editors, combined the sources into the narrative of the Bible we have today.
I bring this up to show that there are similar questions about the authorship of the Bible, and it appears scholars have used some type of wordprint study to identify these original ancient authors. The jury is still out on the Documentary Hypothesis among scholars, though Daniel Smith-Christopher of Loyola Marymount University says the Documentary Hypothesis takes care of many problems, and seems to indicate it is the best hypothesis out there.
I don't know what statistical techniques biblical scholars have used to determine authorship, and I doubt they are sophisticated enough to use NSC. But lest Doug think I'm throwing cold water on Jockers, I want to add that peer-reviewed articles are open to criticism by peers. In reading Jockers' review of the BYU studies, I think Jockers asks some legitimate questions of the BYU researchers, and it is entirely appropriate to bring these types of questions up so future studies can address possible weaknesses. I note that Jockers' NSC technique would be unable to settle the Documentary Hypothesis questions either, since there are no J, E, D, or P writing samples. It would be interesting if Longfellow came up positive in the Documentary Hypothesis too.
I believe your last post (#18) to be one of the most balanced and fair assessments I’ve seen you write on this board!
“There is no way to test Mormon’s wordprint, or Nephi’s, or Alma’s or anybody else.”
Here is where I lose your logic, as it doesn't fit the story of the coming forth of the BoM from the official account. We know that Oliver Cowdery wrote down the original text as it fell from the mouth of JS. Therefore, we do know who wrote the text. We also know from JS himself that each character on the plates equated to 20 or 30 words in English. To be fair to the story, each reformed Egyptian character would probably need to carry about that much to fit in the space provided. Let's consider this quote from Wikipedia:
“Joseph Smith did not provide his own published description of the plates until 1842, when he said in a letter that “each plate was six inches [15 cm] wide and eight inches [20 cm] long, and not quite so thick as common tin. They were…bound together in a volume, as the leaves of a book, with three rings running through the whole. The volume was something near six inches [15 cm] in thickness”. ”
If each plate was about the thickness of a piece of tin and the writings were on both sides of each plate, we should be able to come to a close approximation of how many plates were in the un-sealed portion of the book. Two-thirds of 6 inches is 4 inches. So even if the plates were much thinner than the standard tin that could be made in 1830, the best you could hope for might be as thin as 1/20 of an inch. I suspect that to be able to engrave on both sides, a plate would need to be twice as thick as that, but for the sake of argument, let's give Mormon the benefit of the doubt and say we could get 40 plates into the space of 2 inches. That's 80 writing surfaces at 6 by 8 inches to hold the 531 pages of the BoM plus the lost 116 pages.
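For what it's worth, the back-of-the-envelope arithmetic above can be laid out explicitly. Every input here is the commenter's estimate, not an established measurement:

```python
# All inputs are the estimates from the comment above, not established facts.
plates = 40                    # benefit-of-the-doubt plate count
surfaces = plates * 2          # engraved on both sides: 80 surfaces
english_pages = 531 + 116      # printed 1830 text plus the lost 116 pages

pages_per_surface = english_pages / surfaces
print(surfaces, round(pages_per_surface, 1))  # → 80 8.1
```

That works out to roughly 8 printed English pages per 6-by-8-inch surface, which is the compression the 20-to-30-words-per-character claim is supposed to explain.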
I don't believe anyone in apologetics will argue about reformed Egyptian characters being whole phrases instead of words, so how would we possibly get a wordprint from symbols that express thoughts instead of words? When JS translated these characters into a narrative story, it would have to be in his voice. I would therefore expect the Jockers study to show that most of the text was written by JS. The fact that they've now added him to the pool of authors and rerun the tests should have produced the expected results. The fact that it didn't is very interesting indeed.
Now we've come to the part of the discussion where we must believe that Jockers et al. deliberately manipulated the data inputs to get the expected results they wanted. In other words, they deliberately input different writing samples from the author pool until the computer output a strong Spaulding/Rigdon signal. I'm not saying that it's not possible, but for a peer-reviewed paper that they knew would be subject to rigorous scrutiny by LDS scholars, it would seem unlikely to me.
As for the "View of the Hebrews", Ethan Smith didn't write that as a narrative story, but rather like a modern white paper explaining why he thought his theory of the origin of the American Indians had merit. A wordprint test would seem useless, as critics have always maintained that "View of the Hebrews" simply outlined the story for Joseph Smith, not that he copied the book.
Doug, I think you’re starting to tell me I’m actually being objective about this–thanks for the compliment. Now you’ve mentioned a similar comment on my polygamy post (see your comment 12), so I hope I’m gaining some street cred. 🙂
"Now we've come to the part of the discussion where we must believe that Jockers et al. deliberately manipulated the data inputs to get the expected results they wanted." Why must we believe this was deliberate? I never said that, nor do I believe they deliberately did this. To deliberately do this would be academic fraud, and I don't believe these 3 people would put their names to academic fraud.
Probability is probability; it is not fact. Flip a coin 7 times and count the number of heads. The probability that you get 7 heads is 0.0078. Repeat the experiment 1000 times, and you will likely see about 8 runs of 7 heads. The event of 7 heads is unlikely, to be sure, but over many repetitions it almost certainly will happen. If we focus on those 8 runs of 7 heads, we might conclude that the coin is biased, when in fact it is not.
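A quick simulation makes the point concrete (a sketch of the coin experiment above, not anything from the Jockers paper):

```python
import random

random.seed(1)

# 1000 experiments of 7 fair coin flips each; count the all-heads runs.
all_heads_runs = sum(
    all(random.random() < 0.5 for _ in range(7))
    for _ in range(1000)
)
print(all_heads_runs)  # expected around 1000 * 0.5**7, i.e. about 8
```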
My point about Doug, me, and Firetag having writing styles similar to 85% of the Book of Mormon operates on the same principle. We could put together lots of people who weren't alive in 1830 who might have writing styles similar to the Book of Mormon. The fact that Longfellow came up positive for Isaiah is really interesting, because it shows that someone can essentially fool the test. Now, the NSC method was right 95% of the time, but Longfellow was spectacularly wrong. If a drug test were right 95% of the time, many employees would fail drug tests when they hadn't taken drugs, and we would all complain about the accuracy of drug tests. Drug tests are much more accurate–approaching 99.99999% accuracy, I believe. A 95% drug test would be deemed a grand failure on the market.
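The drug-test point is the classic base-rate problem: when most people tested are clean, even a small false-positive rate swamps the true positives. A sketch with made-up numbers (none of these figures come from any real test):

```python
# Hypothetical numbers, purely for illustration.
employees = 10_000
true_users = 100            # suppose 1% actually use drugs
sensitivity = 0.95          # test flags 95% of actual users
specificity = 0.95          # test clears 95% of non-users

true_positives = true_users * sensitivity                       # 95
false_positives = (employees - true_users) * (1 - specificity)  # 495

share_of_positives_wrong = false_positives / (true_positives + false_positives)
print(round(false_positives), f"{share_of_positives_wrong:.0%}")  # → 495 84%
```

Under these assumptions, a positive result would be wrong more often than right, which is why a "95% accurate" test is useless in practice.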
“As for the “View of the Hebrews”, Ethan Smith didn’t write that as a narrative story, but rather a like a modern white paper explaining why he thought his theory of origin of the American Indian had merit. A word print test would seem useless as critics have always maintained that “View of the Hebrews” simply outlined the story for Joseph Smith, not that he copied the book.”
This is inaccurate on a couple of levels. (1) I remind you that Longfellow and Barlow were poets, and did not imitate novelistic or biblical writing. A wordprint wouldn't care whether something was written as a white paper, or else I don't think Jockers would have included 2 poets–and one of those poets came up positive for Isaiah. (2) Jockers seems to indicate that Rigdon added parts in his own style, unrelated to Spalding, so these parts would not have been in the Spalding manuscript. So, Jockers seems to believe Sidney was an actual source, Spalding was a source, and Isaiah/Malachi was a source. The new update shows the Joseph portions were from Joseph, so Joseph is now a source too. Perhaps Ethan Smith is a source as well, and it wouldn't matter that his book was a white paper. Under this theory, it seems Joseph's contributions were small, while Rigdon/Spalding/Isaiah's were much larger. This raises the question of why Sidney would choose to let Joseph take the credit when Sidney was much more responsible for the final product. We've been down that road, and I've never heard a satisfactory answer. I don't think a statistical paper will ever answer that question.
““As for the “View of the Hebrews”, Ethan Smith didn’t write that as a narrative story, but rather a like a modern white paper explaining why he thought his theory of origin of the American Indian had merit. A word print test would seem useless as critics have always maintained that “View of the Hebrews” simply outlined the story for Joseph Smith, not that he copied the book.”
This is inaccurate on a couple of levels.”
What I was simply saying is that no one I know is claiming that "View of the Hebrews" was plagiarized to produce the BoM. So a wordprint match to Ethan Smith as a possible contributor would seem unlikely, but then again, perhaps he's another good author to use in the control group.
“My point about Doug, me and Firetag having writing styles similar to the 85% of the Book of Mormon operates on the same principle. We could put together lots of people who weren’t alive in 1830 that might have similar writing styles to the Book of Mormon. The fact that Longfellow came up positive for Isaiah is really interesting, because it shows that someone can essentially fool the test.”
Let me take you back to post #7, I made this statement:
“For the statisticians out there, we should be able to come up with the sample size that virtually eliminates false positives from occurring. For Jocker’s that number was 7, perhaps it’s really 10 or 20. The point being, with a big enough pool of possible authors, the chances of giving an incorrect author over a 50% chance of being the actual author should be reducible to near zero.”
You see I agree with your assessment of probabilities with the caveat that including more control authors will raise the accuracy of the study. So, as I asked at the beginning, what’s the number?
Just as a side note, while I wouldn’t care how many times in a row I could get heads on a coin, I’ll bet you dollars to donuts that if I did flip it a thousand times, it would come up very close to 500 heads and 500 tails. So there again, bigger sample size should increase the predictability of the outcome…
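That intuition is exactly the law of large numbers, and the expected spread can even be quantified: the standard deviation of the head count in 1000 fair flips is sqrt(1000 * 0.5 * 0.5), about 15.8, so nearly all runs land within roughly 500 ± 32. A one-line check:

```python
import random

random.seed(7)
# 1000 fair flips; the total almost always lands near 500.
heads = sum(random.random() < 0.5 for _ in range(1000))
print(heads)  # almost always between about 468 and 532
```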
Christopher, I just found your comment in the SPAM folder and released it. As an FYI, when there are more than 1 or 2 links in a single comment, it will trip the filter, so next time you may want to break up your comment. It appears you agree with Stanley Fish too… Thanks for the reviews–they seem to agree with Jockers' assessment of them.
Doug, "we should be able to come up with the sample size that virtually eliminates false positives from occurring"…"including more control authors will raise the accuracy of the study. So, as I asked at the beginning, what's the number?"
Sample size calculations can be really tricky. Since I’ve never heard of NSC prior to this, I would have no idea how to calculate a sample size for such a study. I note that as Jockers was putting his list of 7 authors together, he created a matrix of dimension 456 x 110. Imagine how large the matrix would be if we included hundreds of authors… I don’t recall how many authors there are in the Book of Mormon, but if we are to test that idea of ancient authors (which is impossible without verified samples from Nephi) then it could be in the hundreds, perhaps thousands. If you want to use NSC to rule out ancient authors, they have to be included in the analysis somehow.
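For readers unfamiliar with NSC (nearest shrunken centroids, a classifier borrowed from genomics): each text block becomes a vector of word-frequency features, each candidate author's training blocks are averaged into a centroid, small per-feature differences are shrunk toward the overall mean, and a disputed block goes to the nearest surviving centroid. Here is a toy sketch of the unshrunken nearest-centroid core; the three "function word" frequencies and the author names are made up, nothing like the real 110-feature setup:

```python
# Toy nearest-centroid classifier: the unshrunken core of NSC.
# Feature vectors are invented relative frequencies of 3 function words.
train = {
    "author_a": [[0.08, 0.05, 0.03], [0.07, 0.06, 0.03]],
    "author_b": [[0.03, 0.09, 0.06], [0.04, 0.08, 0.07]],
}

def centroid(vectors):
    # Average the training vectors feature by feature.
    return [sum(col) / len(vectors) for col in zip(*vectors)]

centroids = {name: centroid(vecs) for name, vecs in train.items()}

def classify(vec):
    # Assign the block to the author with the nearest centroid.
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(vec, c))
    return min(centroids, key=lambda name: dist(centroids[name]))

print(classify([0.075, 0.055, 0.03]))  # → author_a
```

The catch Doug is pointing at: a disputed block always gets assigned to *somebody* in the pool, so the composition of the pool matters enormously.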
Doug, one other point. Dale Broadhurst, the leading authority on the Spalding Theory, doesn’t believe Joseph copied from Spalding either, but rather used it as a source (see this comment from my blog). As background, Jockers says that Riley Woodbury (1902) believes that Joseph used View of the Hebrews as well as other local sources to produce the Book of Mormon. So, really, Spalding and Ethan Smith advocates make similar claims about 19th century authors as the true inspiration of the Book of Mormon, differing mainly on Spalding vs Ethan Smith as the primary source.
I haven't studied the Jockers et al. study at all. But I can tell you that I don't buy it. In my view, you can't somehow apply the magic of statistics in ignorance of actual historical study. If we put aside the traditional faithful understanding of the BoM as an inspired translation of an ancient work, how many serious Mormon historians accept a Rigdon/Spaulding hypothesis? Zero. That's a fringe theory at best. By far the mainstream view of those scholars who look for a specifically 19th-century origin of the BoM is that Joseph himself was the author. And yet this study was written from a Spaulding-theory bias and excluded Joseph as a possible author. In that situation, I really don't care how sophisticated the statistics are; it really doesn't matter. This is garbage in, garbage out. After this study, how many serious historians of Mormonism (let's even limit it to non-LDS scholars) accept the Rigdon/Spaulding conclusion? Zero. You can't prop up bad history with lots of fancy graphs.
I simply have never accepted any application of wordprints to the BoM at all. They’re all fatally flawed AFAIC. Below are my comments on this topic from my Documentary Hypothesis article in Dialogue:
Statistical Linguistics. In 1985 the results of the Genesis Project were published in English. This project involved a combination of biblical studies, linguistics, statistics, and computer science in an analysis of the authorship of the book of Genesis, concluding that the book was a unified composition. As with chiasmus, informed Latter-day Saints are familiar with statistical linguistic studies due to their application to the Book of Mormon. I happened to be present at the BYU forum assembly where the initial results of Wayne A. Larsen’s, Alvin C. Rencher’s, and Tim Layton’s study of computerized stylometry, or “wordprinting,” of the Book of Mormon were presented, finding that the Book of Mormon was written by multiple authors as opposed to a single author. That early work has been elaborated on by the late John L. Hilton, who went to great pains to immunize the methodology from criticism. Wordprinting involves the measurement of non-contextual word rate usages of different authors and noting their statistical differences. The great hope and promise of wordprinting lies in the possibility of bringing a certain scientific “objectivity” to author identification and differentiation, a judgment that is otherwise profoundly subjective.
I remember being impressed by all of the charts and graphs used in that forum assembly. I am similarly impressed by those used since by Hilton, as well as those used in the Genesis Project. But while the charts look impressive, I have always felt that the basic assumptions underlying Book of Mormon wordprint studies are faulty. I concur in the assessment of John Tvedtnes, who points out that (1) an English translation should reflect the language of the translator more than that of the original author, and (2) the particles used in wordprint studies (such as “of”) are often non-existent in Hebrew, which tends to use syntax to express the meaning of English particles. An additional concern I have is with the naive assumption that speeches were perfectly transcribed. The reality, as seen in the work of such ancient historians as Herodotus and Josephus, is that such speeches were often composed by the historian himself as approximations of what the historical character would have said under the circumstances. Generally, historical speeches were not attended by court reporters making transcriptions of precisely what was said on the occasion.
Part of the problem with computerized stylometry is that the hoped-for "objectivity" does not seem to have been achieved yet and may be unachievable. Yehuda Radday rejects the Documentary Hypothesis, and so his team finds unity, while other scholars who accept the hypothesis utilize statistical linguistics to find the very diversity they had expected to find all along. It appears to me that there is still (unwitting) manipulation of the data going into the black box of the statistical construct (or unwitting manipulation of the statistical construct itself) so that the hoped-for result indeed emerges from the other side. I frankly do not understand the statistics well enough to offer a useful critique of such studies. All I can do is report that I remain open minded about their possibilities, but I have not yet been convinced of their validity. As this fledgling bit of science develops, however, it does have the potential for making a legitimate contribution to the problem of the authorship of the Pentateuch. My own stance with regard to wordprinting is one of "watchful waiting."
Yehuda T. Radday and Haim Shore, et al., Genesis: An Authorship Study in Computer-Assisted Statistical Linguistics (Rome: Biblical Institute Press, 1985).
See Larsen, Rencher, and Layton, “Who Wrote the Book of Mormon? An Analysis of Wordprints,” BYU Studies 20 (Spring 1980): 225-51.
For a summary, see John L. Hilton, “On Verifying Wordprint Studies: Book of Mormon Authorship,” in Noel B. Reynolds, ed., Book of Mormon Authorship Revisited: The Evidence for Ancient Origins (Provo: FARMS, 1997), 225-53.
John A. Tvedtnes in Review of Books on the Book of Mormon 6, no. 1 (1994): 33 [here critiquing Edward H. Ashment, “‘A Record in the Language of My Father’: Evidence of Ancient Egyptian and Hebrew in the Book of Mormon,” in Brent Lee Metcalfe, ed., New Approaches to the Book of Mormon: Explorations in Critical Methodology (Salt Lake City: Signature Books, 1993), 329-93, but agreeing with Ashment on this point].
See the excellent comments of Shemaryahu Talmon, “A Bible Scholar’s Evaluation,” in Genesis: An Authorship Study, 225-35.
For a useful survey of such studies articulating common weaknesses, see A. Dean Forbes, “Statistical Research on the Bible,” in David Noel Freedman, ed., The Anchor Bible Dictionary (New York: Doubleday, 1992), 6:185-206.
Thanks for rescuing my comment from the spam catcher, MH.
Kevin, I have to agree. I don’t think I’d go so far as to say, “Garbage in, garbage out,” but certainly these authors seem to find mainly whatever it is they’re looking for.
In fairness to the discussion, I think I should point out some of the problems with Kevin's thinking.
“I haven’t studied the Jockers et al. study at all. But I can tell you that I don’t buy it.”
Good clue about your objectivity on the issue… 🙂
“If we put aside the traditional faithful understanding of the BoM as an inspired translation of an ancient work, how many serious Mormon historians accept a Rigdon/Spaulding hypothesis? Zero”
And that should impress me because…? Hasn't science repeatedly found answers that overturned commonly held beliefs no one thought could be wrong, and completely changed the world as we understand it?
“By far the mainstream view of those scholars who look for a specifically 19th century origin of the BoM is that Joseph himself was the author. And yet this study was written from a Spaulding theory bias and excluded Joseph as a possible author.”
If you'd read the early posts on this thread you would see that this statement is out of date. Jockers et al. have included JS in the latest run of the study. (See note #3 above)
“This is garbage in, garbage out. After this study, how many serious historians of Mormonism (let’s even limit it to non-LDS scholars) accepts the Rigdon/Spaulding conclusion? Zero”
Again, I think for a guy who hasn't even read the study, this is a typical name-calling tactic used by apologists to make someone feel stupid for thinking a particular theory may have merit. (I thought you were above that, Kevin.)
“ I concur in the assessment of John Tvedtnes, who points out that (1) an English translation should reflect the language of the translator more than that of the original author,”
Here we agree as I pointed out in comment #21 above. As a matter of fact, I think I made a fairly good case for why Joseph Smith should have shown up as the author.
“Part of the problem with computerized stylometry is that the hoped for “objectivity” does not seem to have been achieved yet and may be unachievable.”
I think that's why we have this thing called "peer review". If you can find problems with the methodology employed, then by all means present them. The fact that Stanford University put this study up for peer review tells me they feel confident in its findings.
“It appears to me that there is still (unwitting) manipulation of the data going into the black box of the statistical construct (or unwitting manipulation of the statistical construct itself) so that the hoped for result indeed emerges from the other side. I frankly do not understand the statistics well enough to offer a useful critique of such studies. All I can do is report that I remain open minded about their possibilities, but I have not yet been convinced of their validity”
Mormon Heretic and I already addressed this possibility in the comments above. I made the same observation and was quickly shut down by him. Of course it's possible that it's happening unintentionally, but adding more authors to the pool of possible contributors should help increase its accuracy. Other than that, I do appreciate your willingness to be open minded… 🙂
Look Kevin, don't take this the wrong way. The Spaulding/Rigdon theory certainly has plenty of missing puzzle pieces that may never be found. I've made my career by putting aside personal preconceived notions and letting the evidence lead me where it leads. Do yourself a favor and read the 27 or so comments already posted on this topic and then see if you can add something to the discussion. 🙂
Doug G., I was referring to the formally published paper, not to an internet add-on patch to add Joseph after the fact. That’s encouraging that the authors recognize the fatal flaw of excluding Joseph from the analysis. But they still appear to conclude that the primary influences in the authorship of the BoM were Rigdon and Spaulding. I’m sorry, but that’s just plain ridiculous, and Stanford University should be embarrassed that its name is associated with that study.
I stand by my opinion.
I have no problem with you having an opinion. Thanks for participating in the discussion! I had thought you might want to take back the comment of “garbage in, garbage out”, given that the garbage “in”, as you call it, included the works of Longfellow, Barlow, and the entire text of the Book of Mormon… Peace
Not to push you, but I'd like to hear how you believe JS could have translated the characters into the English language and not gotten his "signal" in as the predominant author. The first edition of the BoM even lists him as the "Author and Proprietor". I would buy the 5% error thing if it were just a few chapters that missed the mark, but most of the text? That's pushing it, I think, unless we concede, as Kevin has, that all wordprint studies are useless. I'm not asking to start a fight; I think lots of folks on the non-believing side of the house also support the idea that JS wrote the text, and therefore the study should show his "voice". (Not a good place for me to be in–both LDS scholars and non-LDS scholars think it's ridiculous to believe such a silly thing as Rigdon writing some of the BoM.) So I'm hoping you have a satisfying answer for me, because I would really like to join one of these esteemed groups. 🙂
Kevin, I think you bring some good points to the discussion. Questioning the validity of wordprints in relation to the Documentary Hypothesis is certainly useful, but the "garbage in, garbage out" comment was a bit over the top, and not particularly helpful to the conversation. I'm trying to avoid the typical name-calling and motive-questioning that tend to dominate these discussions.
Doug, I’ll get to your comment later tonight–I gotta run for now, but in brief, I agree with you that Joseph’s wordprint should be a better match. Perhaps wordprint studies are not mature enough at this point to avoid researcher bias.
Have any of the studies, be it BYU or Stanford, addressed how translation affects the validity of this kind of test? To me it throws the whole thing out the window. How do you detect a "fingerprint" that has been carried across time, space, language, and culture? Studies such as Jockers', which operate under the assumption that the Book was written by 19th-century authors, will confirm their expectation that Joseph Smith and company wrote the Book of Mormon, because technically he did. Even among Mormon scholars there is debate regarding the translation, its method, and its mechanics. What is not debated is the fact that Joseph spoke and others wrote. On the other hand, research such as that done at BYU needs to take into account not just the word styles, but the alleged history of the Book of Mormon also. Realistically, how many authors are contained within the Book of Mormon as we have it today who provide a large enough sample for detection? I count three–Nephi, Jacob, and Moroni–maybe Mormon. From Words of Mormon to the end, Mormon (and later Moroni) was supposedly abridging the text, which makes the abridged authors' fingerprints questionable. Enos, Jarom, and particularly Omni (which reads more like a series of yearbook entries) are too short to give a reliable fingerprint. So to be frank, if that study weren't of questionable bias, it would be saying more for plagiarism than the Jockers study. The point is, and I agree with all the others who have said this: there is no way to develop a reliable wordprint study of the Book of Mormon given the explanation for its origins.
The sections of Givens that MH is quoting here are near Givens’ discussion of structural Hebraisms that are also contained in the text, and that observation also makes understanding of the translation mechanism difficult. I presume MH will get to that topic in one of his future posts.
In the meanwhile, I'm still trying to figure out how the TARDIS translates Ood for the Doctor. 😀
Let me try to explain what I meant by “garbage in, garbage out.” The dominant non-LDS theory of the origins of the BoM is that JS authored it. So to undertake a wordprint study trying to prove the fringe Rigdon/Spaulding theory while at the same time excluding the dominant possibility of authorship is beyond the pale. The peer review must have been strictly on the statistics side; I can’t imagine a serious scholar of Mormonism signing off on the way this study was originally constructed. Would Douglas Davies or Laurie Maffly-Kipp or Jan Shipps have signed off on this formulation? Unthinkable. So that is what I meant by characterizing the study rather harshly. It doesn’t matter how good the statistics are if there isn’t a sound historical sense guiding them.
So yes, that was an over the top characterization, but then I’m a blogger, and that’s what bloggers do. (g)
I’d like to hear how you believe JS could have translated the characters into the English language and not got his “signal” in as the predominate author.
Well, that is an interesting question, and I think Cowboy’s comments are very appropriate here, as well as Kuri’s. The Stanford study did not address the “editorship” position of Joseph or Oliver from a statistical point of view. They do give some background info at the beginning of the paper noting that Spalding advocates note that Oliver Cowdery had editing experience. Since Cowdery came up pretty low on the wordprint study, this does make one question how much editing Cowdery did, and why his editing doesn’t show up more. I haven’t seen the BYU studies yet, so I don’t know if they addressed how editing would affect the statistical analysis, but I tend to believe BYU didn’t address this idea either.
Biblical scholars have noted that there aren't just multiple authors, but multiple editors of the Bible as well. The Documentary Hypothesis holds that an editor compiled J and E together. Then another editor added P. Another nameless editor added D, though there is some speculation that the D editor was either Baruch or King Josiah (scholars lean toward Baruch). For those unfamiliar with the hypothesis, King Josiah (who lived just prior to Lehi) implemented many religious reforms in the southern Kingdom of Judah. He instructed the priests to clean the temple, and these priests discovered the Book of Deuteronomy, a previously unknown book of scripture. Deuteronomy contains many more references to the abomination of idol worship than the other 4 books of Moses, and some scholars believe Josiah planted the book in the temple to be discovered, in order to emphasize that idol worship was sinful. Baruch was the editor of this new book of scripture, so how much of a role does he play?
This story of editorship is just the beginning–as we know, the Bible has been translated from Hebrew and Aramaic into Greek, Latin, English, German, and many other languages. Is it really possible to tease out the original J, E, D, and P authors after so many editors and translations? Proponents of wordprint studies seem to think so, but I think this is a question that deserves serious study.
In my job, I do quite a bit of writing of conference call summaries for medical studies that we are working on. Some of these summaries I write and distribute myself; others are reviewed and edited by my boss and a co-worker. As I go back and review these summaries, I can tell that the ones I write myself are a bit less formal. The ones edited by co-workers are more formal. While 75-80% of these edited summaries are still my words, would a wordprint study be able to tell the difference between my edited and unedited summaries? Additionally, my boss is much more picky about grammar than I am used to, so I have made conscious decisions to change wording to meet her tastes. So, if we were to compare my edited call summaries to these posts on my blog or Mormon Matters, would they really be able to tell how much influence the editors had over my writing style? I don’t know. Would we be able to identify paragraphs edited by my coworkers that are really their words, and not mine? The Jockers study seems to conclude that Mormon chapters 5 and 7 were written by Rigdon, but chapter 6 was written by Spalding. While I understand why the Stanford group used modern chapter divisions, I really question whether wordprints can be as precise as Jockers seems to indicate.
So, to go beyond Doug’s question a bit, if the LDS church and Spalding advocates both name Cowdery as the man with the pen, why doesn’t Cowdery’s wordprint show up more prominently? I’m at a loss. I’m certain that Rigdon’s handwriting is not part of the original manuscript, so Smith would have had to read Rigdon’s manuscript to Cowdery verbatim, and all 3 agree that Joseph is “Author and Proprietor” of the work. As I mentioned in my previous Spalding post, Oliver was offered the opportunity to translate in D&C 9 and failed. If Sidney wanted to be a translator, it seems Joseph certainly would have let him do it, and Sidney could have read his manuscript to Smith or Cowdery or Emma for that matter. Joseph offered many the opportunity to translate the Book of Abraham, so if anyone was really working from Spalding, they could have pretty easily introduced a manuscript behind a curtain as Joseph did, and translated through the curtain. They could have done the same thing with the Abraham papyrus–why didn’t they just find another of Spalding’s many novels (or Ethan Smith’s, or the Dartmouth Library, with access to maps of Nahom, etc) to produce the Book of Abraham?
I just don’t understand a rationale for Cowdery, Emma, and Rigdon to all lie about their involvement in the translation process in making Joseph the front man here. (Oh, we should look at Emma’s wordprint as well.) Rigdon and Cowdery had ample opportunity to assist in the translation–Joseph was very open to the idea of collaboration. Why did Sidney let Joseph get all the glory in translating? Wouldn’t Sidney’s stature have increased immensely to be a co-translator with Joseph? If we believe Richard Van Wagoner’s biography of Rigdon, Sidney was a man who craved publicity and attention. Why would he consent to Joseph getting all the credit when he was really the brains behind the whole thing? It just doesn’t make sense to me.
Obviously these questions about motives can’t be answered by statistics, but I do wonder how a translator would affect a wordprint–if someone translated the Federalist Papers into another language and back, would Madison’s and Hamilton’s wordprints survive the process? I don’t have an answer. Documentary Hypothesis proponents apparently haven’t addressed how editors affect a wordprint either. Perhaps Kuri could weigh in with his opinion.
I’ll be honest with you and say that I don’t understand how the Nearest Shrunken Centroid (NSC) works. So you could be right about certain sections of the BoM not having enough words to find the “voice” of the author. However, I wouldn’t throw the baby out with the bathwater just yet. I don’t think the issue is whether or not Joseph and Oliver wrote the text, but why their “voice” shows up so seldom in the text. Now if the BoM had been actually written in a language like Hebrew, then some arguments could be made about structure and perhaps a diluted style from the original author. That’s the rub, isn’t it: there just isn’t enough room on the plates to write that kind of narrative, so the translator would be looking at a character and coming up with one or two verses to put the picture being painted into English. Therefore, a wordprint should show the translator’s voice in that kind of scenario. The fact is Joseph doesn’t show up in either the BYU or the Jockers study in any significant way (about the only thing both studies seem to agree on), so I think it’s fair to conclude that he probably wasn’t the principal author.
Again, I don’t see the logic in throwing out both studies as worthless, because I believe they provide data in the quest for who really did write the book. I think the biggest problem right now is that the direction it’s pointing is just not palatable to either the believers or the non-believers. Perhaps the folks who say each word was delivered directly from god have the best argument, but then that opens up a whole host of new problems that apologists just don’t want to deal with. To be fair, though, that’s off topic for this discussion…
The fact that the Jockers study made it into a peer-reviewed journal is a big point in its favor. But mainly what that means is that presumably competent people in the journal’s field, “literary and linguistic computing,” found the methodology and conclusions sound enough to be worth publishing. Essentially what it means is that people who know enough to tell the difference concluded “This article makes a good point.” That’s really as far as it goes.
OTOH, the publication of the Book of Mormon is not just a literary and linguistic event. It’s also a historical one. It’s been studied by historians, who have published the results in their peer-reviewed journals. And in those journals, as Kevin mentioned, the Spalding-Rigdon hypothesis is (I understand) considered largely debunked. It’s quite likely that the “peers” who reviewed the Jockers article are completely unaware of this, since they are linguists rather than historians and probably have no special interest in Mormonism. It’s also quite possible that if the Jockers paper had been submitted to a Mormon history or Mormon studies journal, it would have been rejected out of hand for relying on a debunked theory.
So basically what we might end up with is dueling peer reviews that reach differing conclusions. Who’s right? Beats me. I lean towards accepting the historians’ findings, but I don’t know. Maybe they’re “all wrong together.” But here’s something I think is important to remember about the peer-review process: It’s very valuable, and it serves several important functions, but it’s no guarantee of truth or accuracy or even the absence of utter crapitude. Most of what you find in peer-reviewed journals is sound as far as it goes, but things that are wrong and sometimes even pretty stupid make it into peer-reviewed journals too. It all needs to be taken with a grain of salt.
My understanding is that Joseph Smith was not included in the original Jockers study, and they are working on a revision right now to include him. Have those results been published yet?
“Why did Sidney let Joseph get all the glory in translating? Wouldn’t Sidney’s stature have increased immensely to be a co-translator with Joseph? If we believe Richard Van Wagoner’s biography of Rigdon, Sidney was a man who craved publicity and attention. Why would he consent to Joseph getting all the credit when he was really the brains behind the whole thing? It just doesn’t make sense to me.”
I understand that the big problem with the Spaulding-Rigdon theory is that there is no evidence that Joseph Smith and Sidney Rigdon were acquainted prior to 1830, when they met as a result of Parley P. Pratt’s influence. Even so, I think secondary arguments such as what you are suggesting come across as fairly intuitive. I recently listened to the Mormon Stories Podcast featuring Richard Bushman. According to Bushman, prior to 1835 Joseph Smith was very underemphasized, to the point where many people were joining the Church without even knowing who he was. I had always had the expectation that Joseph Smith was a front-and-center feature of the Church, but Bushman would apparently disagree with this.
I think some people don’t fully appreciate the purpose of peer review when calling it crapitude or garbage. In the ’60s and ’70s, there were several studies that showed a link between sugar and heart disease. It was not known that sugar was hiding another problem (known in statistical terms as a ‘confounder’).
surprising to researchers was the fact that smokers consume high amounts of sugar. apparently smoking dulls taste buds and smokers pour high amounts of sugar in their coffee. when smoking was taken into account, sugar was eliminated as a cause of heart disease. essentially sugar was a confounder for smoking. so, is it appropriate to call these initial studies crapitude or garbage? could it be that they were simply blind to the idea that smokers consume high amounts of sugar? there is probably a confounder in these wordprint studies we have not recognized. (did you know smokers consume high amounts of sugar?)
the more I think about it, the more concerned about these error rates I am. 90% is good enough on a math test, but for a drug test, it is terrible. 27% false positive is really bad if you were accused of a crime by a dna sample. if we assume the big 3 also have a false positive rate of 27%, that could potentially reduce their probability from 85% down to 62%. there were no confidence intervals calculated for these probabilities in the jockers study, so there could easily be this much or more variation. if the confidence interval contains 43%, then this 85% value for the big 3 isn’t at all a precise number. I suspect that if confidence intervals were calculated, then these results would not be distinguished from chance, perhaps even for longfellow or barlow.
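for what it’s worth, attaching a confidence interval to an attribution rate isn’t hard. here’s a rough python sketch using the standard Wilson score interval–the chapter counts are hypothetical, just to show the idea (jockers published no such counts for the “big 3” as a group):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# hypothetical: suppose the "big 3" were picked as most likely
# authors in 202 of 239 chapters (about 85%)
lo, hi = wilson_interval(202, 239)
print(f"({lo:.3f}, {hi:.3f})")
```

of course, an interval like this only covers sampling error in the attribution rate itself; it says nothing about the false positive problem, which is a separate (and bigger) issue.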
I would be surprised if any byu studies included confidence intervals in their paper either.
cowboy, I agree with your statement that joseph smith was a bit more minor in the conversion process prior to 1835, and givens seems to support bushman’s contention as well. however, givens seems to suggest that angels and the translation of the book of mormon were very important during and after this time period. while joseph was not emphasized and the theology of the book of mormon was of little importance during these early days, miracles, speaking in tongues, and the gold bible were important. as such, I would think that rigdon would have had a strong motive to show his miraculous gift of translation along with joseph.
Is it fair to say that these dueling stylometric analyses may call into question the utility of stylometry as a tool for testing the authenticity of the Book of Mormon?
yes, I think it is a fair question. however, as I mentioned about the sugar study above, there is probably a confounder of some sort in play here. the purpose of peer review is to allow others to identify better methods to reduce confounders, and with better methods we should be able to come up with better answers. i’m not sure it is time to throw the baby out with the bathwater yet, but there does need to be better accuracy in these tests. it appears nsc is an improvement over delta, but more improvement needs to be made before we can make firm conclusions about authorship. (note delta’s 93% for the big 3 would be reduced by half with the 47% false positive rate, making it nearly indiscernible from the 43% by chance. certainly nsc shows promise over the delta method, but it too has a ways to go.)
Assume for the moment that Joseph (or somebody) did set out to fabricate a work of ancient scripture, and consciously strove to imitate “Biblical style.”
You would expect that literary voice to differ from the author’s ordinary writing style, would you not?
Moreover, shouldn’t you expect the author’s “true” voice to make its presence felt more or less forcefully, depending on the author’s mood, or even fatigue level? (Arguably, if the writer were fresh, or feeling particularly motivated, he would take greater care to preserve the “biblical” literary style, while on the other hand when he was tired or less motivated, he might write in a style closer to his natural style.)
“And it came to pass in the fourteenth hour of the day, that a certain lawyer did become an hungered, and did cast his eyes covetously towards his secretary’s desk. For behold, she did have there a jar of jelly beans, and I beheld that they did appear sweet unto me, yea, that they were desirable above all other jelly beans. And it came to pass that he did arise, and stole forth through the secret pass on the left of the printer, and did obtain the jelly beans. And because that he was in disguise at the time he took the jelly beans, his secretary knew not of what he had done.”
I imagine that if I were going on like this for several pages, I would probably get less careful about maintaining the right voice toward the end, and my own voice would creep in. Have Book of Mormon stylometric studies ever taken this possible factor into account? Has it ever been done for any work?
I respect the peer review process, I just don’t believe in fetishizing it. Things often get published in peer-reviewed journals and are later recognized as completely wrong. (Of course, that happens far more often with things that aren’t peer-reviewed.) I reject the idea, which has been implied a bit in this thread, that getting published in a peer-reviewed journal somehow guarantees that an idea is not mistaken or even dumb.
“Garbage in, garbage out,” BTW, is an old computing expression. It’s not as pejorative as it sounds. It just means that if you input bad data, your output will be bad too. The sugar and heart disease studies you mentioned are perfect examples. False data (“garbage”) went in, so false conclusions (“garbage”) came out. Kevin was saying the same thing about the Jockers study. The debunked Spalding-Rigdon theory (bad data, “garbage”) went in, so the unwarranted conclusion that Spalding and Rigdon are the most likely authors (“garbage”) came out. That’s his argument.
Rigdon was a dominant feature of Mormonism throughout Joseph Smith’s life, particularly up to 1835. Even after that period he remained dominant and influential; he just had more competition in the late Missouri and Nauvoo periods. So questions of why Rigdon wouldn’t have wanted to be Joseph Smith kind of fall flat for me. In many respects he was. If one wanted to argue for the Spaulding-Rigdon theory, it really isn’t that difficult if your basis doesn’t take into account the major anachronism in the theory, particularly when using the old Moses and Aaron analogies. Again, the problem isn’t human nature; it’s conflicting chronology. Just to be clear, unless new evidence can demonstrate that Joseph Smith and Sidney Rigdon did meet much earlier than 1830, I agree that the Spaulding-Rigdon theory seems implausible. That being said, I find the theories which make Joseph Smith the principal author, as either creative genius or divine translator, just as unlikely. So, statistically I guess the Book of Mormon doesn’t really exist.
You would expect that literary voice to differ from the author’s ordinary writing style, would you not?
My understanding is that stylometrics is supposed to pick up unconscious language use, so the results don’t actually change much due to deliberate stylistic changes.
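As I understand it, the “unconscious” features are mostly humble function words–how often an author uses “of” or “unto”–rather than the vocabulary choices an imitator would think to fake. A toy sketch (the word list and the sample sentence are made up; real studies track far larger feature sets):

```python
from collections import Counter

# a small, made-up list of function words; real stylometric
# studies track dozens to hundreds of such features
FUNCTION_WORDS = ["the", "and", "of", "to", "that", "it", "in", "unto"]

def wordprint(text):
    """Rate of each function word per 1,000 words of text."""
    words = text.lower().split()
    counts = Counter(words)
    return {w: 1000 * counts[w] / len(words) for w in FUNCTION_WORDS}

sample = "and it came to pass that the people did gather unto the city"
print(wordprint(sample))
```

An imitator can consciously sprinkle in “and it came to pass,” but keeping dozens of these background rates consistent with someone else’s habits over hundreds of pages is supposed to be much harder–that’s the theory, anyway.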
I tend to take a skeptical view of purportedly scientific methods which yield more ambiguous results than “the chemicals go *boom* if you mix them in the right proportions” or “if you don’t put ‘x’ tons of steel in the bridge, it falls down.” Philistine, I know.
I’m kind of skeptical of the whole thing too. I don’t really know enough to evaluate the technique, but it doesn’t appear to be accepted as definitive. It doesn’t seem to be the case that when people apply stylometrics, everyone else looks at the results and says, “Well, that settles that.”
at this stage, I agree. it seems to me that wordprint studies are too immature of a technique to make any strong conclusions one way or the other.
See comment #3. I’m being told that the updated study will be available this fall…
Let me get this straight: because nobody (and I mean nobody) likes the data from either of these studies, we’re just going to say that they’re too immature. Sorry if this sounds a little condescending, but just like MH stating that he’s never gotten a satisfactory answer for why Rigdon let JS be his front man, I’m just as dissatisfied with this answer. Let me explain why:
1. We have a method in the Jockers study that has been tested to be 95% accurate at identifying authors correctly. The study tells us that for 93 of the chapters in the BoM, Sidney Rigdon is the most likely author out of the eight possible authors. For 15 chapters his signal puts him above the 90% mark for most likely. Statistically, it would seem impossible to have that many misses. I totally agree that the study should include more control authors, but I’m not at all convinced that adding more will change the results.
2. As I tried so hard to establish earlier, because the plates were written in reformed Egyptian, there is no wordprint or style or anything else on the plates to begin with. With Egyptian hieroglyphics, each character tells a story; it doesn’t denote a word. Therefore, whoever translated the plates would have to put the narrative in his own voice and style. In other words, even if you believe the BoM is historical, someone in the nineteenth century had to write it, and their style should be evident in the translation. Jockers et al. compiled a list of possible authors from the 1800s and ran their tests. I totally understand why they didn’t include JS at the beginning, as they explained in the original study. Now that they have a better sample of his writing, the study has been updated to include him. Who else do you believe should be put into the pool?
3. I appreciate MH’s thoughts on how editing could mess up a wordprint study and believe that could have some merit. The biggest problem I see with his theory is that editing should just dilute the sampled text so that more of the authors become possible matches. I can’t believe this editing by Cowdery and/or Smith would make the text match Rigdon’s voice so closely. It’s a matter of probabilities. If it happened once or twice, sure, but 93 times? Come on.
4. Kevin Barney thinks it’s silly to include Rigdon and Spaulding as those theories have already been debunked. Why have they been debunked? Because no one can find a connection between Smith and Rigdon before 1830. Let me see if I get this right: when critics say that the BoM has already been debunked because of a lack of evidence supporting it, the first argument out of Kevin’s and other apologists’ mouths is “lack of evidence is not evidence of lack.” Really???? And he’s embarrassed for Stanford… I’m embarrassed too, but it isn’t for Stanford.
5. Someone brought up that because JS used biblical-style writing, his voice print might not show up. If it were that simple to fool the computer, then the Old Testament authors who were included in the study should have shown up as the most likely contributors for the majority of the text. After all, the book was deliberately written to imitate their writing style, and yet the study put Isaiah and Malachi near the bottom for most of the chapters. Again, I have the same problem that I stated for MH: if anything, the diluted wordprint should make for higher percentages for the rest of the pool, or even favor the Old Testament authors, not Rigdon. And certainly not 93 times.
Lest anyone miss my motives here, I would like to put this theory to bed just like everyone else. To be honest, most non-believers think JS wrote the book, period. No help. No ghost writer. No conspiracy. Just him with the information that was available within a couple hundred miles of his home. That’s easier for me to believe as well, except for this nagging problem of not only the “Delta” method putting Rigdon as the primary author, but also the NSC method. And although I think you folks have hammered the BYU study in these comments, they also came away saying that JS probably was not the author.
“To be honest, most non-believers think JS wrote the book, period.”
I think this is a fitting dismissal for a disinterested critic wishing to move on, and Fawn Brodie. does anyone else really believe this though?
1. We have a method in the Jockers study that has been tested to be 95% accurate at identifying authors correctly. The study tells us that for 93 of the chapters in the BoM, Sidney Rigdon is the most likely author out of the eight possible authors. For 15 chapters his signal puts him above the 90% mark for most likely. Statistically, it would seem impossible to have that many misses.
My problem with these figures like “95% accurate” or “90% mark” is that I simply have no way of fully evaluating such claims. I don’t have the necessary background in statistics, linguistics, or computing. I have to rely on experts. And, as far as I can tell, experts have not reached a consensus that stylometrics is dead-on, can’t-miss science. As far as I can tell, conclusions reached with stylometrics are often disputed.
What I do have enough background to know, though, is that Exciting New Methods and Bold Claims come and go, in the hard sciences as well, but especially in the humanities and the social sciences (“the very hard sciences”). Often they pan out and become widely accepted, but sometimes they just don’t. Sometimes, they’re simply wrong. Another thing I know is that even the best methods are subject to human error. People can input the wrong data, or input it in the wrong way, or misunderstand the results. Peer review will catch many such errors, but not all of them. Not by any means.
So, for now, I can’t see the Jockers results as anything more than “interesting.” It raises interesting questions, but I don’t think it answers them definitively. It’s a data point, but there are many other data points out there, and some of them contradict Jockers.
Doug, your response in 52 seems intent on showing the strengths of the Stanford study, but I haven’t heard you or anyone else on this post espouse the strengths of the BYU study. Could you provide some BYU strengths to show a bit of objectivity on why wordprints are a valid technique?
I think Kuri addressed your point #1 right on. Many people don’t understand probability very well. Frankly, I much prefer statistics to probability–statistics are much easier for me–I have no desire to be a probabilist. As I mentioned in the previous Mormon Academics/Evangelical post, many people unintentionally misuse statistics, and your comments in point #1 completely ignore the error rate.
2. Your characterization of Egyptian hieroglyphics is probably quite inaccurate. I addressed this in my previous post talking about Reformed Egyptian on my blog. Demotic Egyptian and Meroitic are two examples of Egyptian scripts dating to the time of Lehi. A 6th century BC papyrus discovered at Arad and Kadesh-Barnea “contains a scriptural text in Northwest Semitic tongue written in an Egyptian script.”55 Coptic is another example of Egyptian script, though it is probably not a candidate for Reformed Egyptian since it comes after the time of Lehi; still, the Gospel of Judas was written in Coptic, and that certainly isn’t King Tut-type hieroglyphics.
3. You mention that Sidney comes up 93 times. We’ve noted the false positive for Longfellow, and the authors stated that the false positive rate for NSC was 27% (much worse than it turned out to be for Isaiah/Malachi). So, if we assume the false positive rate for Rigdon’s 93 hits could be wrong by 27% (or more–27% was the average false positive rate), then we reduce these hits by 25, down to 68. Who was the real author of these 25 chapters? Was it Emma, Longfellow or Barlow, Nephi, Alma, Ethan or Joseph Smith, Cowdery, Pratt, Isaiah/Malachi, or is it conveniently Spalding?
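The arithmetic behind that reduction is simple enough to show. (Treating Isaiah’s 27% false positive rate as if it applied to Rigdon’s hits is my assumption, not something Jockers calculates.)

```python
# assumption: apply Isaiah's 27% false positive rate to the
# 93 chapters the NSC method attributed to Rigdon
rigdon_hits = 93
false_positive_rate = 0.27

expected_false = round(rigdon_hits * false_positive_rate)
surviving = rigdon_hits - expected_false
print(expected_false, surviving)  # 25 68
```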
Once again, if you took a drug test for your employer, and there was a 27% chance you came up positive for drugs when you did not use drugs, how confident are you in the drug test? Would you happily abide by the results, especially if you had to be tested every quarter? Odds are, you’d be fired within a year for drug use. Are you sure you’re satisfied with NSC to accurately tell the wordprint of these 7 or 8 authors? Are you really satisfied to ignore this false-positive rate?
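To put some hypothetical numbers on the drug test analogy (the 5% usage rate and the perfect detection rate are my assumptions; only the 27% comes from the NSC discussion):

```python
# hypothetical base rate and sensitivity; the 27% false positive
# rate is the only figure taken from the NSC discussion
base_rate = 0.05        # fraction of employees who actually use
sensitivity = 1.0       # assume the test always catches real users
false_positive = 0.27

# Bayes: what fraction of positive results are real users?
p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive
p_user_given_positive = base_rate * sensitivity / p_positive

# chance a clean employee fails at least one of 4 quarterly tests
p_fired_in_year = 1 - (1 - false_positive) ** 4

print(f"{p_user_given_positive:.0%} of positives are real users")
print(f"{p_fired_in_year:.0%} chance a non-user fails within a year")
```

Under these made-up numbers, only about one positive in six is a real user, and a clean employee has roughly a 72% chance of failing at least once in a year of quarterly tests–which is the “you’d be fired within a year” point.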
4. I’ll leave you alone here, because I think you have a fair point, though nothing on this point addresses the statistical aspects of Jockers. That was more payback for some of Kevin’s hyperbole than a real argument; I’m not trying to engage in hyperbole myself.
5. I already addressed this in point #3, but I will mention that NSC can be fooled. I am curious to see how well a wordprint can be faked, but since Longfellow did it unintentionally, it certainly seems that someone could intentionally fool NSC as well.
So, to be fair, if we’re going to claim that wordprint studies are a valid technique, can anyone tell why the BYU studies got it right? Or is that not PC here?
Doug, I want to mention one other thing that I find very interesting. Both BYU and Stanford agree that Joseph wasn’t the author of the Book of Mormon, but for completely different reasons! BYU says it is because he is a translator, and therefore his wordprint isn’t supposed to be there–rather Mormon’s, Moroni’s, and Nephi’s wordprints are there. Stanford says Joseph’s wordprint isn’t there because he copied it from Spalding/Rigdon. Both claim Oliver was the scribe. Neither Joseph nor Oliver shows up in the wordprint, so this would seem to show that translators or scribes can’t be detected by wordprint studies, whether one believes the LDS version or the Spalding version. Documentary Hypothesis proponents apparently can’t identify wordprints of translators/editors over the years either. Perhaps it is no big deal that Joseph’s wordprint isn’t there (or is it really there in small quantities)?
#52: “With Egyptian hieroglyphics, each character tells a story, it doesn’t denote a word.”
Not entirely. Champollion’s breakthrough was his discovery that most hieroglyphic writing was phonetic, with each character standing for a sound or syllable. Only some characters stood for whole words.
Reformed Egyptian would have been quite different from ordinary Egyptian hieroglyphics if it were able to pack a whole story into one character.
It’s not that I don’t appreciate your efforts, but I just can’t wrap my mind around reformed Egyptian characters being representative of words instead of stories. Did you miss my point about the size of the plates and the amount of room to write on? I was actually being very generous in letting you have 20 plates per inch. Even at that thickness, you would have to get nearly 10 characters in the space of one English word in our BoM. Add to that, you’re not writing, you’re engraving. So look, if you all are going to go down that road, then you’ve lost me. As an experiment, I tried to engrave with an electric pencil on .0015 brass, only to poke holes through it and distort the characters on the other side. Sorry, it’s not my fault that the guy said it was 6 inches thick and 2/3rds of it was sealed. He also said that each plate was about the thickness of a tin plate…
Anyway, I’m off on another trip, so I’ll have to read your response when I get back. Best of luck and thanks for taking the time to engage with me in this thread!
I rebutted all 5 of your points and now you’re off on Egyptian writing? Look, I’m no archaeologist, and I don’t know the exact particulars on how thick the plates were, or how many characters were on the plates. You said they were hieroglyphics, but that wasn’t the case. Beyond that, I have no idea how thin they were or how hard to engrave. If memory serves me correctly, Nephi said they were hard to engrave, so I’m sure he probably poked a few holes through the metal pages too, though he did have metallurgical skills in making the swords, iron tools, etc. But this has nothing to do with wordprints anyway. I digress.
In the interest of accuracy, my comment in 55, point 3, wasn’t precise–I was going off memory on the false positive rate, though my overall conclusions about the false positives remain intact. I decided to go back and re-read the paragraph comparing the 47% delta false-positive with the 27% NSC false positive, and discovered some more interesting things about Isaiah (I’m going to shorten it to Isaiah from here on, but I am referring to Isaiah-Malachi.)
As I’ve discussed before, the NSC test correctly identified 20 of 21 Isaiah chapters; the other chapter was attributed to Longfellow (this is known as a false negative). This means NSC correctly identified Isaiah 95.2% of the time. However, Jockers states that 16 other chapters contain parts of Isaiah. I’m going to quote here from pages 472-3, to ensure I am not misrepresenting Jockers.
So, of these “mixed chapters” with some resemblance to Isaiah, both the NSC and delta tests were fooled on the authorship. Sure the error rate was better for NSC than delta, but you’d never accept a drug test with a false positive rate that bad.
So, when I referred to “the average false positive rate”–that’s where I was imprecise. The false positive rate of 27% is for Isaiah. Technically, it is an average, but the way I said it made it sound like they could figure the false positive rate for other authors–that’s not the case. If NSC incorrectly attributed Isaiah 27% of the time on these 16 chapters, that’s not very good. It’s certainly reasonable that many of the 93 chapters attributed to Rigdon could also be incorrectly attributed by 27%. The nice thing about Isaiah is that it is pretty easy to figure out the author. When attributing chapters to others, it’s not nearly that easy.
I hope you enjoy your trip.
I do appreciate your point — there are size (and weight) limitations inherent in the story of the plates that imply a definite limit to the data that can be physically transcribed on them, and that does offer a clue about the “translation” mechanism which any model would have to explain.
Doug, the thickness of metal plates and how much data they hold is a bit off topic. I don’t want to go down the road of metal plates since I want to stay on topic of Wordprint studies, but there have been other plates found. I wonder how thick they were and how many words they handled. I know the Dead Sea Scrolls contained a rolled up copper scroll, and other metal plates have been discovered. Perhaps you could research that topic?
I do want to mention one other thing regarding these false-positives. If one wants to take the position that either Joseph Smith or Sidney Rigdon composed the Book of Mormon and it is not ancient, then these false positives seem to indicate that a wordprint can be disguised/faked/fooled (whatever term you feel is appropriate) just by following Thomas’ example in 44. That’s something else to think about.
I’m a little late to the discussion but I guess I’ll say something because I’ve read the paper, and I understand NSC and have used it myself (in biology). I know next to nothing about the field of stylometry, although I have been skeptical, from a distance, about its application to BoM authorship, for example in the earlier BYU studies. This particular paper does nothing to increase my confidence in it for that purpose, just so everyone knows my general outlook. Also, I should say I don’t know what is meant by the “delta method”–there’s a vocabulary issue, and although I can guess I’m too lazy to check. I just want to say something about NSC.
One of stylometry’s successes–or so I’ve heard–was in identifying authors of certain of the Federalist papers with uncertain authorship. Oversimplifying a little, there are three possible authors (Hamilton, Madison, and Jay), 85 essays, and for most of the essays the author is known. There are about a dozen where there are conflicting attributions. Several independent statistical studies have generally agreed in assigning these to Madison (I think).
NSC makes sense in situations like this one: it takes a collection of essays with known authors (73 of those) and comes up with a mathematical rule to classify an essay as belonging to Madison, Hamilton, or Jay. You can apply the rule to the essays of unknown authorship and it will tell you who it thinks the author is in terms of “probabilities” (85% Madison, 10% Hamilton, 5% Jay, for example). But the probabilities should not be taken very literally; they are just a gauge of confidence–if it’s 51/49 Mad/Ham, you know it’s not giving a very clear classification. It is *only* a mathematical rule. You could give it something written by an entirely different author, like J. K. Rowling, and it would still give you Mad/Ham/Jay probabilities. Contrary to what has been suggested in some of the previous comments, there is no reason to expect that essays by Rowling will be assigned equally often to each of Mad, Ham, and Jay. In terms of whatever word frequencies are in use, she is likely to be closer to one of the three and, depending on the details of her style, her essays might tend to come out as Madison, with a high probability, too. The Mad/Ham/Jay probabilities are constrained to add up to 100%, so for Rowling you can’t get a result like 1%/2%/1% Mad/Ham/Jay; at least one of the numbers has to be much larger to make the total 100%.
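Badger’s point about closed-set probabilities can be illustrated with a toy nearest-centroid classifier. To be clear, this is not the actual NSC algorithm (which uses shrunken centroids and per-feature standardization), and the word frequencies, author centroids, and distance-to-probability rule below are all invented for illustration; the sketch only shows why the panel probabilities must sum to 100% even for a sample by an outside author.

```python
import math

# Hypothetical per-1000-word frequencies of three function words for
# each candidate author's known writings (the "centroids"). All numbers
# here are invented for illustration.
centroids = {
    "Madison":  [22.0, 8.0, 15.0],
    "Hamilton": [18.0, 12.0, 10.0],
    "Jay":      [25.0, 6.0, 20.0],
}

def classify(sample):
    """Turn distance-to-centroid into probabilities over the panel ONLY."""
    dists = {author: math.dist(sample, c) for author, c in centroids.items()}
    # Smaller distance -> larger weight. Renormalizing forces the
    # probabilities to sum to 1 no matter who really wrote the sample.
    weights = {author: math.exp(-d) for author, d in dists.items()}
    total = sum(weights.values())
    return {author: w / total for author, w in weights.items()}

# A sample from an author NOT on the panel (say, Rowling) still comes
# back as Madison/Hamilton/Jay probabilities totaling 100% -- and one
# panel author can soak up nearly all of the probability.
outside_sample = [30.0, 3.0, 25.0]
probs = classify(outside_sample)
print(probs)
```

With these made-up numbers, the outside sample happens to land closest to Jay’s centroid, so the classifier confidently “attributes” it to Jay, which is exactly the failure mode described above for a true author who isn’t in the candidate pool.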
For the BoM, there is no pool of known authors, and it’s not clear what constitutes the equivalent of an “essay.” Here they’ve chosen 7 authors (or 5, for some calculations), and run the BoM through in chapters. Rigdon is picked as the most probable author by a wide margin for a large number of chapters. This is exactly as would be expected if he had written them. What’s not clear–at all–to me is that Rigdon authorship is the only possible explanation for the finding. As I said with the hypothetical Rowling example, NSC has to pick someone from the 5 or 7 choices available to it as the most likely author for each chapter, it has to assign 5 or 7 probabilities that add to 100%, and it will favor certain panel authors and disfavor others depending on the style of the outside author. So, for the sake of illustration, if Joseph Smith were the true author of all chapters (excluding the Isaiah excerpts), maybe NSC just thinks his word use looks a lot like Rigdon’s.
So I don’t get it.
Oh, and they CONVENIENTLY left Emma out of the possible authors.
Thanks Badger. It makes sense to use NSC if you can narrow it down to 2 or 3 candidates, one of whom was definitely the author. But when authorship is disputed, as it is in the case of the Book of Mormon, it doesn’t seem to be nearly as effective as it needs to be.
“I rebutted all 5 of your points and now you’re off on Egyptian writing? Look, I’m no archaeologist, and I don’t know the exact particulars on how thick the plates were, or how many characters were on the plates.”
I’ve been out of town and not able to get at a computer. (Sorry for the delay.) This thread is probably dead by now, but in fairness, you’re either deliberately ignoring the point I was trying to make or you just don’t get it. You can’t claim to rebut my points and ignore the crux of the whole debate.
The fact that the plates have a limited writing area based on how much data you can cram into 2-inch-thick 6×8 sheets is something concrete that can be simulated. Even if Mormon had our technology for making the perfect alloy (soft enough to engrave on both sides, yet hard enough to keep the engraver from pushing his tool through and destroying the writing on the reverse side), there is a limit to how many plates you can physically get into 2 inches. If you made a set today, anything less than 50/1000 of an inch would become impossible to work with using hand tools.
Do you really not understand why I think that putting whatever reformed Egyptian is on 40 sheets of metal and translating it into approximately 640 pages of English makes it impossible to have any kind of word print for the authors who engraved it? The BoM would be in JS’s words, since you can’t get a writing style from the previous authors when he had to get at least 10 English words out of each of Mormon’s reformed Egyptian characters.
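Doug’s data-density argument can be made concrete with some rough arithmetic. The figures below (a 2-inch stack, a 50/1000-inch practical minimum plate thickness, ~640 English pages, at least 10 English words per character) come from this thread; the ~400 words per printed page is an assumption added here for the sake of the estimate.

```python
# All figures below are from the thread except words_per_page, which is
# an ASSUMED round number for an average printed English page.
stack_thickness_in = 2.0      # stated thickness of the stack of plates
plate_thickness_in = 0.050    # stated practical minimum per plate
plates = round(stack_thickness_in / plate_thickness_in)

words_per_page = 400          # assumption, not from the thread
english_words = 640 * words_per_page
words_per_char = 10           # "at least 10 English words" per character
chars_needed = english_words // words_per_char

sides = plates * 2            # engraved on both sides
chars_per_side = chars_needed / sides
print(plates, chars_needed, chars_per_side)
```

Under those assumptions, roughly 25,600 characters would have to fit on 80 engraved sides, about 320 characters per 6×8 side, which is the degree of compression the argument is pointing at.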
So what does all this mean for these word print studies? It means that someone in the 19th century had to write the BoM, and that person’s word print should show up. As you said earlier in the discussion, the NSC method is fairly accurate if the actual author is in the pool of possible authors. Well, I think Jockers did a reasonably good job of rounding up the possible authors now that Joseph Smith is included. Who else do you think should be in the mix?
Now if you want to say that the plates were never really used in the process and JS channeled the ghosts of the past to write it, fine, but most LDS folks are going to have real issues trying to figure out why God would have these guys write on the plates, preserve them for 1400 years, provide a translating device, and yet use none of it. For me, I can’t make that leap based on what I have been taught for over 40 years…
When I first read this, it looked like the r value was .48 and the r^2 was .17 for the conclusions of the study. Now I understand that what we really have is a built-in confirmation bias:
i.e., the study asks “assuming that one of the following was the author, which one was it?” and goes from there. Basically, it begs the question to start with.
Slick trick, all in all.
Stephen, I just did another scan through the Jockers document, and never saw a mention of r^2 values. Where did you see that?
“It means that someone in the 19th century had to write the BoM and that persons’ word print should show up.” Please explain to me why J, E, D, and P are so prevalent in the Hebrew Bible, yet none of the editors in 3000 years has been able to be identified–let alone King James. Scholars believe Baruch may have written Deuteronomy, but no wordprint study has ever fingered him definitively. Please explain that.
Doug, pretty much nothing you said in 66 has anything to do with statistics or wordprint studies. You’ve sidetracked the discussion. I’ve asked you to tell me what the BYU guys did right, and you’ve conveniently ignored that question too. Let me ask it again. If you feel wordprint studies are such a good technique, please tell me why I should believe the BYU studies. If you tell me their techniques are bad, why aren’t the Stanford techniques bad? Really, the BYU guys must have done something right. If you’re going to say that the BYU guys are just a bunch of idiots and the Stanford guys are a bunch of geniuses, then I don’t think you’re being objective about this at all.
In all fairness, this post was about wordprint studies, not the dimensions of the plates. Jockers didn’t mention a single word about the thickness of the plates. You’re the only person talking about it. Yes I agree with you that it is a valid question, but it’s NOT a valid question regarding the statistics of wordprints and is therefore off topic. That’s why I’m “deliberately ignoring the point”. It has nothing to do with Jockers or wordprints or BYU wordprint studies. If you want to talk about r^2 values like Stephen, or probabilities, or some other reason to believe the statistics, be my guest. Jockers didn’t deal with the thickness of the plates, and neither am I. As I said before, if you want to research the thickness of the copper Dead Sea Scroll and make a comparison, go for it–but don’t try to tie the results to a statistical wordprint study, because the issues are apples and oranges.
“Who else do you think should be in the mix?” I’ve answered this question many times–are you forgetting? But if you want me to get really comprehensive, here are a few to expand on (many of which I’ve already mentioned above): Emma, Martin Harris, Ethan Smith, Hyrum Smith, Joseph Sr, William McLellin, everyone in attendance on April 6, 1830, Alma (Jr and Sr), Mosiah, Nephi, Benjamin, Enos, Jarom, Omni, Moroni, Mormon, Zenos, Zenock. How can you rule out ancient authors if you don’t test for them? Just because you can’t get a handwriting sample? Well, isn’t that convenient? Your little scenario says it had to be 19th century, so we can’t test anyone from a prior time period? The Documentary Hypothesis seems to think ancient authors can be discovered–why can’t Jockers handle this scenario?
Now here are some more far-fetched ones, but it would be interesting to see similarities: Alexander Campbell, Isaac Hale, Joseph Sr, Lucy Mack Smith, Samuel Smith, William Smith, Ann Lee, EB Grandin, Brigham Young, the Expositor authors, Mark Twain, WW Phelps, the 3 Witnesses, the 8 Witnesses, John C Bennett, John Taylor, Porter Rockwell, every LDS prophet since Joseph Smith, Doctor Hurlbut, ED Howe, James Strang, Joseph III, every author in the Dartmouth Library in 1820-1830 (since you’re sure Joseph was checking out maps of Nahom there), Matthew, Mark, Luke, John, Paul, Peter, Amos, Moses, J, E, D, P, Q, Thomas, Judas, Samuel, Jeremiah, Ezekiel, Abraham Lincoln, Stephen Douglas, John Adams, Ulysses S Grant, General Sherman, and General Lee. I’m sure I could think of more, but that’s a pretty good start, and it would be interesting to see how many false positives come up. (I believe every non-biblical person I mentioned there is a contemporary of Joseph Smith.) It would also add to the sample size that you think will solve the problem so precisely. Perhaps with so many controls, it could legitimately eliminate Longfellow as an author of Isaiah? Perhaps we could identify someone else we hadn’t considered, such as a Dartmouth author who wrote nothing about Hebrews being related to Indians? Perhaps it was the previously unknown author Joe Schmo who composed the Book of Mormon.
Additionally, it would be interesting to see how 7 modern people stack up as possible authors, such as you, me, Kuri, and others. Perhaps we could get real authors such as Stephen King, Dan Brown, Paul Dunn, LeGrand Richards, Richard Eyre, and Mark Hofmann too. Come to think of it, we should really add a good list of forgers. Is that a good list? Remember, the sample size increase solves all the problems, right?
Here are some other legitimate questions to ask. Would you agree that more contemporary controls should be on the list? Should we have as many controls as potential authors, or is 5 to 2 a proper ratio? Should we limit our controls to religious writers to see if there is more similarity? Is Longfellow a legitimate control, since he wrote poetry rather than prose?
OK. I don’t want to pile on, because I do think wordprint studies are a fertile ground for much more serious work. But here’s a serious suggestion: add Orson Scott Card. He’s a Mormon author who writes award-winning fiction about imagined cultures for a living. He can “fake it” better than anyone, which is why he’s a believer in the historicity of the BofM.
You know, I had thought you were deliberately ignoring my point, but now I realize you really just don’t understand it. Fine, I don’t think I could explain it any more clearly for you, so I’ll let it go. There are others here who understand completely why compressed data on the plates, when expanded into English, is critical to the conversation about word prints. I highly resent your statement that it’s not relevant to the studies, because that’s just plain wrong and unfair.
As for the BYU studies, I had thought that Christopher Smith, Kevin Barney, and Jockers did a pretty good job of showing the problems with their process. To be fair, I haven’t read the BYU study and therefore haven’t commented on it. I also haven’t made claims about its accuracy or the intent of those who performed it. Those are your words, not mine. I did state that I thought it interesting that both studies virtually eliminated JS as the principal author.
Your list of possible authors has some far-fetched ones in it, but why not include them? Data is data. I think there’s a lot more work that could be done with the Jockers study to derive more understanding of how the BoM came to be. Perhaps you’ve decided word prints are worthless altogether, but then you’d have to explain why they work most of the time. At the end of the day, for the believing Mormon, no amount of evidence is going to convince them that the book isn’t of ancient origin. I’m actually fine with that, just as I am with people who say we never landed on the moon, that the earth is flat, that there was a global flood, that man has only been around for 6000 years, and so on. Science has been hard on religion in the past, and it will continue to be in the future. Old ideas have to change and adapt as science discovers the secrets of the past and explodes our world views. Hasn’t DNA changed what we believed for so many years about the Indians and their forefathers?
No need to reply here MH, I can tell by your tone that this conversation has gone far enough. I’m moving on…
Doug, I think we have beaten this to death, and I’m ready to move on. I think the Documentary Hypothesis is highly relevant to the discussion, and you never seem to address the wordprints of ancient authors, or why the Biblical authors (like the King James translators) don’t show their wordprint.
FireTag, I was trying to think of Card’s name, but couldn’t remember it last night.
Sorry to drop in here, I have been following the conversation.
Doug – I’m not sure that I understand your point, given your evidence. It would seem to me that your argument regarding the thickness of the plates and compressed characters is an argument against the validity of word-print studies surrounding The Book of Mormon (and I actually think you do have a valid point on the plates’ dimensions, though I would leave a little room for error, i.e., how much compression would be required). If we assume that any of the 19th-century writers/“translators” are going to show through with high confidence intervals regardless of whether they were writer or translator, doesn’t that just invalidate the studies? If Joseph Smith, for example, shows up as the likely writer, those with a predisposition to believe Joseph Smith translated The Book of Mormon will begin to favor loose-translation theories. They will also point out, as has already been done, that word-print analyses will always be incomplete because we can never include the supposed Book of Mormon authors in the sample. On the other hand, those who believe Joseph Smith wrote The Book of Mormon will still be without the proof they’re looking for (because, after all, that’s really what this whole study is about), since they cannot test all of the claims surrounding the book’s authorship.
Now to make it more interesting: if Sidney Rigdon shows up as the “best fit” (statistician talk, I think), then again both camps will have to decide what that means. There are a number of theories and rationales that believers could come up with that would be reasonable grounds for throwing out the study’s findings. For those who want to make the Rigdon connection to authorship, you would have to make a reasonable case for the plausibility that Rigdon had met Joseph Smith prior to 1830. Even then, the analysis and the corresponding Rigdon/Spaulding theory would be far from bullet-proof.
I guess to get back to my question: in all sincerity, I don’t understand the point you are trying to make with the evidence. Are the word print studies reliable, or should they be thrown out? In answering the question, I am not referring to word-print analysis in general, but rather as it specifically relates to Book of Mormon authorship.
I’ll throw my two cents in here, although I think this thread is winding down. Coming from a person who does translation on a daily basis, I know that every person translates with a distinct style. The more you have to translate, the more it is going to be in your own words. Here is an example.
If you were to take 10 people who speak and write Spanish perfectly and were to hand them 1 word to translate, it is possible for there to be maybe 5 or 6 different answers among the 10. Now if you were to hand out a paragraph’s worth of information to translate, it is almost a guarantee that all 10 will be worded differently, each in that person’s own style. The more there is, the more that person’s “voice” will come out in the translation.
Now here is where you run into a problem: your “voice,” or how you write and word things in a different language, is different from your own. I could go on forever explaining that, but basically your brain works differently while using a learned language.
So yes, the word print should show 1 distinct style. But there is no way to know whose style that is. The fact that the word print shows multiple authors is a bit disheartening, considering I was always told JS did the translation while what he said was written down verbatim. It most definitely should show one style through the entire book.
“If we assume that any of the 19th century writers/”translators” are going to show through with high confidence intervals regardless of whether they were writer or translator, doesn’t that just invalidate the studies?”
That’s the point here, Cowboy: the plates couldn’t have a style, because the writing on them had to be extremely condensed. Joseph Smith said that each character could take as many as 20 words in English to express the thought it represented. So, those 20 words should be in the translator’s voice. I would have expected the studies to show JS as the principal author, and you’re right, that wouldn’t show whether he wrote the book or translated it. Here’s the point: both studies said he didn’t write it, so now we have a problem, as someone had to write those words. That someone should have a style and a corresponding word print. I think the list Jockers used, with the inclusion of JS, encompasses the most likely authors. Now if someone else outside the group wrote the BoM, then MH has a point: the study isn’t going to give us the answer. However, it would seem reasonable to assume it must have been written by someone originally involved with the movement. The only reason Sidney gets included in the group is that so many people, even in JS’s day, thought he must have had something to do with it. (We discussed all the parallels before, as well as those Hurlbut affidavits.) So not to include him as a possible author would be a mistake. I completely agree that Emma should be considered, as well as Ethan Smith.
At the end of the day, the only point I’m actually trying to drive home is that this science deserves more study and has the potential of exposing who actually put it all together. For the believers, it might lend proof to the theory of it actually being an ancient translation. I don’t know what science may uncover as they work this issue, but I believe it has promise…
I get it. Thanks.
I’ve been slow to respond to Doug’s last comments because his “data density” concerns have sent me down a new line of thinking. I mentioned Orson Scott Card above because of an essay he wrote in which he talked about what science fiction fans (yes, I confess to that vice, too) call “world building”.
He talks about how, when a member of a culture writes to a member of his/her own culture, much of the actual message is carried not in the words, but in the commonly shared context of the culture. What is said, and, just as importantly, what is not said, marks the writer’s culture just as surely as the words themselves may give a writer a wordprint. Call it a “culture print,” to coin a term. The very act of trying to imitate another culture in fiction, no matter how artfully, leaves more “culture prints.” Card uses the example of a Robert Heinlein descriptive phrase: “the door dilated.” The emphasis itself tells you that the writer comes from a culture where doors don’t work that way.
Card shows how that “culture print” of a non-nineteenth century culture is in the BofM. Card’s essay spoke to me because I’d seen the same internal features for myself without realizing it before I read what Card was saying. Read the Card link above and decide for yourself about his points, but this leads me back to Doug’s “data density” argument and what it says about the “translation process”.
If Joseph is getting 20 words from a character, then the process doesn’t involve embedding data in a single character and then extracting it into one’s own words in the 19th century, because the translation seems to carry the “culture print” of the author even if it’s got a 19th-century wordprint. That says that the character is the medium of the message, and not the message itself.
It’s an idea that makes me want to reexamine the roots of prophecy among the Jews and other primitive societies, where physical mediums (e.g., bones) are first used and only later do more sophisticated moral understandings grow out of those roots.
Do these media act through the Spirit to open mind-to-mind communication between writer and “translator” the way we picture inspiration opening communication between the prophet and God? (Please remember, physicists do not ask if their ideas are crazy; they ask if their ideas are crazy enough. 🙂 )
I really like MH’s suggestions for additional control authors. Frankly, the use of Barlow and Longfellow as controls really bugged me. Poetry is a highly structured genre, and thus is likely to have a very different “style” than other kinds of texts. In order to control against Rigdon, for example, I would want to test some contemporary theologians, not poets.
Here is an interesting site which discusses the issues of statistics and authorship. Dale Broadhurst has long supported this theory and has a number of sites online.