statistical illiteracy in scholarship - Daniel Wallace struggles with numbers

Steven Avery


Statistical Illiteracy in Textual Scholarship


The paper being discussed is from Bibliotheca Sacra, BibSac 148 (1991) 150-169, a journal put out by Dallas Theological Seminary.


And then in June 2004 this paper was posted on the internet.


======================================

Please note the math in this article. Daniel Wallace responding to  Wilbur Pickering. An article which has been available for reading for  over 25 years, since 1991.

This section was recently quoted on Facebook, by a respected creationary scholar, as part of an attack on the TR-AV and Majority positions. And it is frequently quoted online. The numbers given here, 98% and 99%, were a major part of the argument against the significance of the Received Text and Byzantine/Majority text positions.


======================================
The Majority Text and the Original Text: Are They Identical? (1991)
Daniel Wallace

https://bible.org/article/majority-text-and-original-text-are-they-identical

"There are approximately 300,000 textual variants among New Testament  manuscripts. The Majority Text differs from the Textus Receptus in  almost 2,000 places. So the agreement is better than 99 percent.

How different is the Majority Text from the United Bible Societies’  Greek New Testament or the Nestle-Aland text? Do they agree only 30  percent of the time? Do they agree perhaps as much as 50 percent of the  time? This can be measured, in a general sort of way. There are  approximately 300,000 textual variants among New Testament manuscripts.  The Majority Text differs from the Textus Receptus in almost 2,000  places. So the agreement is better than 99 percent. But the Majority  Text differs from the modern critical text in only about 6,500 places.  In other words the two texts agree almost 98 percent of the time "
**

**  "Actually this number is a bit high, because there can be  several variants for one particular textual problem, but only one of  these could show up in a rival printed text. Nevertheless the point is  not disturbed. If the percentages for the critical text are lowered,  those for the Textus Receptus must also be correspondingly lowered."
======================================
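To make the arithmetic explicit, here is a minimal sketch (my reconstruction, not Wallace's own working) of the calculation that yields his two figures:

```python
# Apparent method: divide the differences between two printed texts
# by the global variant count across all Greek mss.
TOTAL_VARIANTS = 300_000  # Wallace's global estimate

mt_tr_diffs = 2_000   # Majority Text vs. Textus Receptus
mt_ct_diffs = 6_500   # Majority Text vs. modern critical text

print((TOTAL_VARIANTS - mt_tr_diffs) / TOTAL_VARIANTS)  # 0.9933... -> "better than 99 percent"
print((TOTAL_VARIANTS - mt_ct_diffs) / TOTAL_VARIANTS)  # 0.9783... -> "almost 98 percent"
```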

The problem we have here is that this is not fuzzy math; it is full-blown bogus math. The methodology is totally false.

The number of textual variants calculated globally (i.e. across all Greek mss), in this case 300,000, the divisor, is unrelated to the affinity between any two specific texts. Two texts do not get closer together if the total variant count is reckoned at 1,000,000 instead of 300,000. They do not get farther apart if the total variant count is reckoned at 50,000 or 20,000. And since the number is unrelated, it offers plenty of wiggle room: one could just as well count only translatable, significant, or printed variants.

Thus, if you plugged in 20,000 as the divisor (a count, perhaps, of total printed or significant variants), your affinity number for the two texts, Byz/Maj vs. CT, would be close to 67% instead of 98%. Yet the two texts being compared have not changed in even one letter.
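A minimal sketch of that wiggle room: the 6,500 differences between the two texts stay fixed, and only the unrelated divisor moves.

```python
# Same two texts throughout: 6,500 differences between Byz/Maj and CT.
DIFFS = 6_500

for divisor in (1_000_000, 300_000, 50_000, 20_000):
    print(f"divisor {divisor:>9,}: 'agreement' = {(divisor - DIFFS) / divisor:.2%}")

# divisor 1,000,000: 'agreement' = 99.35%
# divisor   300,000: 'agreement' = 97.83%
# divisor    50,000: 'agreement' = 87.00%
# divisor    20,000: 'agreement' = 67.50%
```

And once the divisor is seen to be arbitrary, this conclusion in the paper simply would not be possible: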

"Not  only that, but the vast majority of these differences are so minor that  they neither show up in translation nor affect exegesis. Consequently  the majority text and modern critical texts are very much alike, in both  quality and quantity" - Daniel Wallace, ibid
This conclusion has other difficulties, because it is simply not true that the vast majority of the 6,500 Byz/Maj-CT differences are not translatable. So you have GIGO, with a false "very much alike.." conclusion.  This bogus conclusion was keyed off the statistically false 98% number, essentially a plug-in by choosing the unrelated 300,000 number as the divisor.

And note, this statistical problem in the paper should be easily recognized by the smell test. 6,500 variants in 8,000 verses can support various measurements of affinity (see below), and coming up with 98% is extremely unlikely under any sensible measure. About forty full verses in the Byz are omitted in the CT (a few more in the TR), along with thousands of significant variants. How could it be 98%?
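A back-of-the-envelope version of that smell test; the round verse count and the crude clustering factor are my illustrative assumptions:

```python
# Measure the differences against the size of the text itself,
# rather than against a global variant count.
DIFFS = 6_500
VERSES = 8_000  # rough NT verse count, assumed for illustration

print(DIFFS / VERSES)  # ~0.81 differences per verse on average

# Even if the differences cluster so that only a third of the verses
# are affected, verse-level agreement is still nowhere near 98%.
affected_verses = DIFFS / 3  # crude clustering assumption
print((VERSES - affected_verses) / VERSES)  # ~0.73, i.e. about 73%
```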

There is also nothing complicated in realizing that the math does not fit. Anybody who has read and understood the classic Darrell Huff book How to Lie With Statistics should be able to find the problem in a couple of minutes. And recognizing the problem here does not require any special skills or training.

Incidentally, we do not know if Daniel Wallace played with these numbers with the intent to deceive about the texts (hopefully not), or if he is simply statistically illiterate. Perhaps he did not think it through and put the statistics forth as a sort of hopeful monster. His footnote indicates that he had some second thoughts, yet he never realized that his divisor is improper: an unrelated number that has nothing to do with the textual affinity he claimed to be measuring.


========================================

What can be done?


If a very simple statistical calculation in textual science is totally wrong, and is not noticed by the writer, his reviewers, peers and students, for years, for decades ... how about graphs and more sophisticated presentations?

An example: articles on the topic of manuscripts through the centuries by Daniel Wallace and James White have used graphs built on similarly false methodologies. There is one in the article above. The purpose: to present a "revisionist history" (Maurice Robinson's phrase), the impression that the Byzantine text and its variants only entered the Greek text late. It is a sort of back-door method of keeping the Syrian (or Lucian) recensions alive, giving an impression of the AD 400-900 period that is against all accepted textual history, and giving the impression that the early centuries were massively Alexandrian. (E.g. 100+ localized papyri fragments, each one technically a ms, from gnostic-influenced Egypt, totaling a couple of NT books, can skew any statistical calculation that is based only on numbers of mss. And this is one of many problems. Some graphs do not even have an X or Y axis description, one of the tricks pointed out by Huff. This was covered separately on a textual forum.)

We can see in textual science that the goal of agitprop against a text like the Reformation Bible (TR) can outweigh scholarly study. This started with Hort ("vile" and "villainous" describing the TR, even before he began) and the beat goes on. And the math, statistics and graphic presentations will be unsound and unreliable.

And statistics can be manipulated on all sides; however, published papers are supposed to clear a high bar of correctness and examination. If a Byz or TR-AV supporter, or an eclectic, makes a similar blunder, it should be quickly caught and corrected.

Maybe SBL and ETS should have seminars teaching the basics of statistical manipulation. And should reviewers of papers be vetted for elementary statistical competence? What do we say about students educated today in such a statistically illiterate environment?

My concern here is not just Daniel Wallace; it is also what this says about a type of scholastic and statistical dullness in the textual studies realm as a whole. This should not have lasted in a paper one week without correction, much less 25 years and counting.

========================================

Similarly, the problem is not only statistics. One can look at the recent 2008 paper by Van Alan Herd, The Theology of Sir Isaac Newton, which passed as a PhD dissertation, and see elementary blunders that survived review at the University of Oklahoma. Here is one of many examples:

The Theology of Sir Isaac Newton (2008)
Van Alan Herd
https://books.google.com/books?id=nAYbLOKKq2EC&pg=PA97 (the paper can also be found at gradworks.umi.com; however, the Google URL goes right to this quote)

The error here, according to Newton, is assuming the word "God" as the antecedent to the Greek pronoun, ὃς, ("who"), as the King James translators had assumed it and replaced the pronoun with the noun, "God" in the Authorized (KJV) version. Newton questioned this translation on the grounds that it is incorrect Greek syntax to pass over the proximate noun "mystery" which is the closest noun to the pronoun ὃς in the text.
Virtually everything here is factually wrong, which anyone who has read and understood Newton's Two Corruptions would easily see.

========================================


And here is a kicker:

If a textual writer flunks the elementary logic of statistical understanding, and publishes false information as argumentation against our historic English Holy Bible, are they likely to be strong in other areas of logical analysis? Are they a good choice for deciding your variants, for choosing your version?

========================================

Sidenote: finding an agreed-upon method to measure the % of affinity between two texts, even two clearly defined printed texts, is a bit complex and dicey, since the measurements used are subjective and variable (what is the standard unit of comparison: verses? words? how many variants is a 12-verse omission/inclusion? and are you weighing variants?), and there can be a variety of results. This complexity, a bit more sophisticated than choosing the wrong divisor, is rarely mentioned when affinity numbers are given in textual literature (even when the numbers make some sense, unlike the Daniel Wallace numbers above). This is a more general critique of the use of numbers in textual science.
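A sketch of how the choice of unit alone moves the number; every count below is invented purely for illustration, not taken from any real collation:

```python
# Hypothetical collation of two printed texts. The counts are invented;
# the point is that the unit of comparison drives the percentage.
total_words      = 140_000
differing_words  = 10_000
total_verses     = 8_000
differing_verses = 3_000

print((total_words - differing_words) / total_words)     # ~0.93 word-level affinity
print((total_verses - differing_verses) / total_verses)  # 0.625 verse-level affinity
# Same two texts, same differences: about 93% or about 63%,
# depending only on the yardstick chosen.
```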

By contrast, for a three-way comparison of the nature of:
 
"The Peshitta supports a Byz-TR text about 75%, the Alex text about 25%"


it is easier to establish a sensible methodology that can be applied with some consistency and followed by the readers and statistics geeks quite easily.
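A minimal sketch of such a methodology, with hypothetical placeholder readings rather than real collation data:

```python
# At each variation unit where the Byz-TR and Alexandrian texts disagree,
# record which reading the Peshitta supports, then tally.
# All readings below are hypothetical placeholders.
variation_units = [
    {"byz": "reading A", "alex": "reading B", "peshitta": "reading A"},
    {"byz": "reading C", "alex": "reading D", "peshitta": "reading C"},
    {"byz": "reading E", "alex": "reading F", "peshitta": "reading F"},
    {"byz": "reading G", "alex": "reading H", "peshitta": "reading G"},
]

byz_support = sum(u["peshitta"] == u["byz"] for u in variation_units)
print(byz_support / len(variation_units))  # 0.75 -> "supports Byz-TR about 75%"
```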

Although even there the caution lights should be on,  especially about the weight of variants, for which I offer a maxim for  consideration:

"Variants should be weighed and not counted"


========================================

Steven Avery
 
Since when have KJVOs developed a just measurement for variants? Why does a KJVO engage in discussions related to variants when we all know they are skewed, not by an earnest searching/comparison of the Greek texts, but by a motivation to back track from the KJV to prove the KJV?

 
FSSL said:
Since when have KJVOs developed a just measurement for variants? ...

I'll take this as an acknowledgement that you understand that the paper from Daniel Wallace is statistically illiterate.
 
As Wallace clearly stated, in that article, he believed the number to be high.

The real question is why you think your textual critical approach is honest.
 
FSSL said:
As Wallace clearly stated, in that article, he believed the number to be high.
> Daniel Wallace
> Actually this number is a bit high, because there can be several variants for one particular textual problem,  but only one of these could show up in a rival printed text.

Wallace is talking about reducing the divisor, the 300,000, the unrelated number that is bogus. You could adjust the number any which way, for whatever purpose, and the resulting calculated number would remain bogus.

> Daniel Wallace
> Nevertheless the point is not disturbed. If the percentages for the critical text are lowered, those for the Textus Receptus must also be correspondingly lowered.

That was not the point of his article. 

MT-TR - 99%
MT-CT - 98%

His principal point was built around the bogus 98% number:

> Daniel Wallace
> "the majority text and modern critical texts are very much alike, in both quality and quantity."

GIGO. The 98% number is false, the conclusion is false.

And I am going to conjecture that FSSL actually understands statistics reasonably well, so that he realizes the methodology is bogus, but is simply unwilling to say so. And that will remain his position.

Steven
 
That can be taken as an acknowledgement that Scott also understands the statistical illiteracy of the Daniel Wallace writing.
 
I can't understand how anyone can tolerate reading anything you write. I couldn't get halfway through this "mess" you somehow think is a scholarly critique of Wallace.

I'm not a big Wallace fan. He is a Calvinist for starters.... However, he is very honest and very skilled in whatever he chooses to do. His works are really a "joy" to read. Easy to follow and brutally honest in their content.

You'd do well to learn a "thing or two" from him.

 
The "mess" :) makes it 100% clear that Wallace is statistically illiterate.  Whether sincere and charming or not.  And that the textual community missed this for 25 years. 

Now it is true that we are in a culture where many people are not statistically competent, so they can be snowed rather easily.  Others might understand the math, but as politicians whose main concern is that the TR and AV not be seen as God's pure word, they will not criticize the emperor's statistical wardrobe malfunction.  Diversion will be the name of the game.  Some people may have caught the problem, if they stopped and looked, but as Darrell Huff says, it is easy to glaze and daze over numbers.

================================


Notes from How to Lie With Statistics, Darrell Huff, 1993 edition

If you can't prove what you want to prove, demonstrate something else and pretend that they are the same thing. In the daze that follows the collision of statistics with the human mind, hardly anybody will notice the difference. The semi-attached figure is a device guaranteed to stand you in good stead. It always has. - p. 76, Ch. 7, The Semiattached Figure

Advertisers aren't the only people who will fool you with numbers if you let them. - p. 79

Misinforming people by the use of statistical material might be called statistical manipulation; in a word (though not a very good one), statisticulation. - p. 102, Ch. 9, How to Statisticulate

The title of this book and some of the things in it might seem to imply that all such operations are the product of intent to deceive. The president of a chapter of the American Statistical Association once called me down for that. Not chicanery much of the time, said he, but incompetence. - p. 102

the distortion of statistical data and its manipulation to an end are not always the work of professional statisticians. - p. 103

But whoever the guilty party may be in any instance, it is hard to grant him the status of blundering innocent. ... As long as the errors remain one-sided, it is not easy to attribute them to bungling or accident. - p. 103


================================

As to other efforts of Wallace, they vary. Some are quite commendable, such as his paper on the personalization of the Holy Spirit (which de facto acknowledges the strength of the heavenly witnesses in the grammatical realm). He was taking a somewhat unpopular position as well.

As an attacker of the pure Bible, and a supporter of corruptions like the abbreviated Mark ending (which Wallace really wants out of your Bibles even though the full ending is in 99.9% of the Greek, Latin and Syriac mss and has solid Ante-Nicene support), Wallace is a mess, basically the latest iteration of the Hort-Aland-Metzger-Ehrman textus corruptus apologetics, adding a minor twist or two, such as the absurd ultra-minority Mark 1:41 corruption of Jesus being angry instead of having compassion.

Writing about numbers in a public forum has its own dynamic, since the readership is so diverse in skills and positions. However, the key point here is that there is nothing at all complicated about the nature of the problem: using an ad hoc, unrelated number for calculations, then getting bogus results from the false methodology, and then reaching totally absurd and transparently false conclusions (e.g. the "vast majority" blunder) from those bogus results. And this is what Daniel Wallace did.

Steven Avery
 
Steven Avery said:
Please note the math in this article. Daniel Wallace responding to  Wilbur Pickering. An article which has been available for reading for  over 25 years, since 1991.

2015 - 1991 = 24.

Who is it that struggles with numbers, again, Stevie?
 
Thank you. When I make a little faux pas in a post, I simply correct it with thanks, and move on.

Now if Daniel Wallace will retract his paper, as being  built upon a statistical disaster, bolstering phoney arguments, with public acknowledgement and thanks for the correction, we will be on a solid path. 

One of the ironies in how this was discovered is that the quote was posted by Jonathan Sarfati, creationary scientist, physicist and chess master. (I had actually missed the methodology error earlier, when looking at the paper in the context of the graph problems.) Sarfati never defended the math when the problems were pointed out; he simply avoided the issue. The thread is on a Facebook page he hosts. (To his credit, he did not delete posts in the James White manner, as when White had factual problems with his Acts 8:37 presentation and they were pointed out by James Snapp.)

Steven Avery
 
Steven Avery said:
Thank you. When I make a little faux pas in a post, I simply correct it with thanks, and move on.

Um.

Steven Avery said:
Please note the math in this article. Daniel Wallace responding to  Wilbur Pickering. An article which has been available for reading for  over 25 years, since 1991.

And

Steven Avery said:
The "mess" :) makes it 100% clear that Wallace is statistically illiterate.  Whether sincere and charming or not.  And that the textual community missed this for 25 years.

You didn't "correct it and move on," Stevie, you doubled down on the same blunder.

Steven Avery said:
Now if Daniel Wallace will retract his paper, as being built upon a statistical disaster, bolstering phoney arguments, with public acknowledgement and thanks for the correction, we will be on a solid path.

Someone who fails at basic arithmetic is in no position to criticize someone else's statistical analysis, Stevie.
 
Old Stevie is just trying to build a reputation by taking someone down that already has a stellar reputation.

Wallace doesn't have anything to worry about.

 
praise_yeshua said:
taking someone down that already has a stellar reputation. Wallace doesn't have anything to worry about.
As I pointed out, there is a certain type of Critical Text dupe who simply will not care about bogus math.  That is not surprising. 

And the issue is not the reputation of Daniel Wallace, which is a rather subjective determination and can vary a lot depending on the specific context. The issue is why the textual establishment uses arguments that are based on statistical illiteracy, why they last for decades in a published paper, and why they are still duping people (as when Jonathan Sarfati posted the quote) today.

=====================

Scott is welcome to show my response to the correction, rather than simply fabricating.

Now, the real issue is whether Daniel Wallace will accept correction.  This has been pointed out on a few textual forums, so now we shall simply be patient and watch for the responses.

Steven Avery
 
Anyone else following Dan's work on NT manuscripts?

http://www.csntm.org/


 
Steven Avery said:
Scott is welcome to show my response to the correction, rather than simply fabricating.

I "fabricated" nothing, Stevie. In the process of criticizing Wallace's supposed mathematical blunder, you committed a far simpler one. Twice.

So whatever you might have to say about Wallace's math has been discredited by your own foolishness and lack of skills. In fact, I really didn't bother to read any farther, thinking to myself, "Hmmm. Clearly these are the ramblings of a crackpot. In fact, if this fool can't subtract 1991 from 2015 in his head, and doesn't realize that 24 isn't more than 25, I'm more inclined to cut Wallace some slack for considerably more complicated math."

You brought this ridicule upon yourself, Stevie. Don't like it? Don't be ridiculous.
 
FSSL said:
Since when have KJVOs developed a just measurement for variants? Why does a KJVO engage in discussions related to variants when we all know they are skewed, not by an earnest searching/comparison of the Greek texts, but by a motivation to back track from the KJV to prove the KJV?

This has gone unanswered.

Steve got some of the same line of questioning on the [textualcriticism] debate on yahoo groups. Jovial summarized it this way:

1) Propose an alternative method for measurement.
2) Explain the logic behind the alternative method and why it is better.
3) Show that this alternative measurement demonstrates your conclusion is the most likely.

See the rub! Steve cannot (and will not) provide an honest measurement. He is quite comfortable trying to muster accusations when it is painfully obvious that a KJVO does not have an honest measurement in a discussion of variants.

He can only charge his opponent with a "lie" even when Daniel said, quite clearly, that the figure may be high.

The KJVO should just say, "The KJV is the measurement upon which all variants must be measured." KJVO textual discussions always begin in 1611 (or 1903... or whatever edition), and work back from there.
 
The Authorized King James Bible, purified seven times and counting.
 