The Insufficiency of Literary Analysis Unaccompanied by Other Tools
New Testament critics generally assume that our gospels are the product of a scribe who had two or more editions before him and combined them to produce a new version containing material from the old sources. They say this because when you examine the text of the gospels closely, you find evidence that the text was edited. They use the word ‘recension’ to mean what we would call an ‘edition.’ Scholars usually suppose a chain of editors, but readily agree that one person can produce several recensions by re-editing his own text.
The existence of recensions does not in itself demand more than one editor, nor does it require anything more than the authors revising their own texts. The existence of recensions only proves that there are recensions.
No study has been done of the breadth of stylistic differences that can be found within a single writer’s works. The Bible critics assume that differences in style mean differences in authorship, but they have not proved that this is necessarily the case. For example, Isaac Asimov wrote science books for children, academic papers in chemistry, fiction, salacious limericks, and college textbooks—all in different styles. Does that mean there were five people who called themselves Isaac Asimov? Biblical scholarship would be constrained by its present methods to say yes, so I feel we have reduced this principle to the absurd and refuted it.
Even before the New Testament canon was established, the scribes were dealing with the traditions of the apostles. Though they had no reverence for the precise wording, they did have reverence for the sense of the wording, so the scribes would be more constrained than the scholars suppose. The scholars also suppose that the gospels (particularly Matthew) had liturgical use. If this is true, it would have constrained the hypothetical editors even more, because people do not like changes in the liturgy. The 1979 revision of the Book of Common Prayer sparked splinter groups from the Episcopal Church. In the fourth century, Jerome translated the Bible into Latin. The older Latin versions mistranslated the type of vine that shaded Jonah’s head as he pouted over Nineveh, so Jerome corrected it. When the passage was read in church in North Africa, the people rioted to protest the change! So even before canonization, and even in an era in which most people did not own personal copies of the scriptures, the copyists were more tightly constrained than the critics admit. The disharmonies in the New Testament can be completely explained by the fact that the sources had different viewpoints and even include disclaimers that the accounts are not complete, as John 21:25 states outright. There is no compelling need to posit any other cause.
Either the New Testament consists primarily of first-century documents, or it consists of documents written a couple of hundred years later. The hard evidence that they are first-century documents turned up after most of the scholars of the nineteenth century had completed their work, so nineteenth-century-style higher critical methods work best if we ignore this evidence. If the New Testament documents are early, then eyewitnesses would have been around to object to monkeying around with the text, and that would explain why it contains a lot of unflattering stories about the apostles. The apostles are not depicted as paragons of virtue whom Jesus selected for their goodness; in particular, Peter and Paul are not glorified. If the New Testament documents were written in later centuries, and if there were a problem with the harmony of the gospels, we would find four sects in the early centuries of the church, each of which held to one gospel and deprecated the others; then there would have to be a council of the proportions of Nicaea to work out a compromise. These events are missing from history. Therefore the New Testament documents must have been early, and the hard evidence backs this up. It then follows that there was no time for this complicated process of harmonization to take place at all. This explains why the gospels can be harmonized, though not seamlessly. The ‘oral tradition’ consists of eyewitness accounts; the recensions were done by the authors themselves or by editors under their supervision. All the gospels were complete in their modern form by AD 85.
The basic rule of interpreting ancient documents is to take them at face value unless they can be proved to be false. This is a non-paranoid theory that holds that no ancient person contrived to deceive archaeologists in the distant future. Personal correspondence is especially trustworthy, since incidental references to circumstances and events must be accurate for the letter to have any value at all for the recipient. Biblical scholarship reverses these principles, assuming that the account is false unless it is verified. If Biblical letters were processed by the same rules as all other ancient letters, the epistles of Paul would be taken at face value and the authorship, where it is asserted, would be beyond dispute. If the principles of Biblical scholarship were applied to all ancient documents, we would say that we have no knowledge of the past, because the attestation for nonbiblical documents is impoverished by comparison. There are Greek philosophers that we know about only because they are quoted 1,000 years after they lived, yet we accept them as historical personages. We have more documentary evidence for Jesus Christ than any other person of that age, yet scholars assume the personal correspondence in the New Testament is unreliable and go searching for the ‘historical Jesus.’ They do not go searching for the ‘historical Plato’ or the ‘historical Homer,’ even though those people are more poorly attested.
There are also efforts to analyze the text of biblical documents by computer. In the 1960s, a computer decided that Isaiah 40-66 was written by a different person than Isaiah 1-39. Actually, the idea that Isaiah is the compilation of two or three different prophets is much older than that, but I think that was the watershed event that emboldened preachers to speak of ‘Second Isaiah’ and ‘Third Isaiah’ from the pulpit as if they were books in the Bible with those titles. Even though this viewpoint is likely to be right, I’m not sure that this is very wise, because when parishioners look in their Bibles and find that such books do not exist, the preacher loses credibility. There are some low-level details of scholarship that are appropriate for a seminary but are out of place in a sermon, and I think this is a good example. If you are trying to impress the congregation with your expertise at the cost of unnecessarily unsettling their faith, you have the wrong priorities—in my view.
The evidence for the theory that Isaiah is really two or three books is compelling, but not conclusive, because it rests on assumptions that are not necessarily valid:
- That any reference to historical events places the prophet after those events—in other words, that prophecy doesn’t happen. This is the logical fallacy of begging the question.
- That differences in vocabulary and style mean different authorship, which is unproved.
Scholars often overlook the fact that most authors edit their works to smooth out their style and that editing was difficult in those days. Therefore, we are less likely to find stylistic smoothness in ancient authors than in a modern author with recourse to word processors or at least to a typewriter and a bottle of white-out.
I am skeptical of the ‘literary analysis’ of the New Testament, not just because the New Testament is too small to be a valid sample (Isaiah is longer than any New Testament book), not just because of the uncertainties of the authorship of the pieces, but because of the underlying assumptions. The uncertainty of authorship is important: Literary analysis to determine authorship can only work if we have a large corpus of undisputed works and one disputed work to test. In the New Testament, there are few undisputed works and they are not large enough to stand on for this purpose. Literary analysis basically has to posit the results before beginning the investigation!
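To make the sample-size objection concrete, here is a minimal sketch (my own illustration, not a tool used by the scholars under discussion) of the kind of comparison stylometry performs: build relative-frequency profiles of common function words for two texts and measure the distance between the profiles. The word list and the toy texts are invented for the example.

```python
from collections import Counter
import re

# A handful of common English function words, chosen arbitrarily
# for illustration; real stylometric studies use longer lists.
FUNCTION_WORDS = ["the", "and", "of", "to", "in", "that", "is", "for"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(text_a, text_b):
    """Manhattan distance between two frequency profiles."""
    return sum(abs(x - y) for x, y in zip(profile(text_a), profile(text_b)))

# Two different five-word texts with identical profiles:
print(distance("the lord is my shepherd", "the lord is my light"))  # → 0.0
```

Note what the last line shows: with samples this small, genuinely different texts are stylometrically indistinguishable, which is exactly the problem with applying the method to short, disputed books.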
Breadth of Style
I have written lots of stuff for the web, but my essays do not spring from my fingertips like Minerva from the head of Jupiter! Most essays start out snooty and preachy and have to be edited severely to meet my own personal guidelines. If you compared the first draft to the final, you’d think two different people wrote them, because the entire tone and vocabulary have changed. I have found stylistic differences within the books of the New Testament, but nothing as great as what I find in my own stuff over the course of editing or over the course of my life. (Of course, the process of translation tends to smooth over stylistic differences. Style can only be analyzed in the original language.)
Even when there is a large undisputed corpus…
Isaac Asimov is another good example. While he was preparing for his doctorate, he wrote a spoof of a doctoral thesis on a fictitious chemical compound. He invented a few imaginary chemical names and festooned the document with footnotes referring to nonexistent authorities. There were a lot of words and phrases unique to that document, and he wrote it in the style of a doctoral thesis rather than in his usual conversational style. In a few of his works, Asimov referred to this spoof and claimed authorship, but suppose those works were lost, leaving us with the works that do not refer to the spoof and the spoof itself. Or suppose some doctoral candidate wrote a well-received thesis arguing that Asimov was taking credit for a competitor’s clever work. (I can think of a way to make a convincing case for that.) How could we detect through computer analysis of his remaining works that Asimov wrote the spoof? We could not. The spoof is written in a different style from the entire corpus of his published works, except perhaps a college textbook. However, the college textbook had a co-author who may account for any differences. The spoof contains a lot of words that occur in no other document in the English language and in no other works of Asimov. If, after two thousand years, some of Asimov’s works were lost, computer analysis would lead some scholar to say the spoof was written by someone else, say, Pseudo-Asimov.
So what if you prove that one book had more than one author?
Computer analysis depends on analyzing sentence structure and phenomena such as hapax legomena, that is, words (legomena) that appear only once (hapax). For example, it is easy to show that the long ending of Mark contains phrasing that is not characteristic of the rest of the gospel of Mark and that it contains a few hapax legomena. Some, based on their personal theology, say it shows a theological break with the body of the gospel, but that is an improper intrusion of exegesis into the text before the text is established, which is circular reasoning. Therefore, many critics conclude that the long ending is not part of Mark and was written by another author. However, they do not have a large enough corpus of Mark’s writings to determine whether the long ending fits within the author’s stylistic range. Even with as large a sample as we have with Asimov, stylistic analysis can produce invalid results! Even if it turns out that the long ending was written by someone else, it does not matter, because authorship and canonicity are two entirely different things.
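Finding hapax legomena is a purely mechanical exercise; here is a minimal sketch in Python (the sample sentence is my own toy text, not a biblical corpus):

```python
from collections import Counter
import re

def hapax_legomena(text):
    """Return the set of words that occur exactly once in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    return {word for word, n in counts.items() if n == 1}

sample = ("the sower went out to sow and some seed fell on the path "
          "and some fell on rocky ground")
print(sorted(hapax_legomena(sample)))
# → ['ground', 'out', 'path', 'rocky', 'seed', 'sow', 'sower', 'to', 'went']
```

The counting itself is trivial; the disputed step is the inference drawn from the counts, since any short passage on a fresh topic will naturally contain words that appear nowhere else in the author’s corpus.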
Even if the long ending of Mark was not written by the author of Mark, it is still canonical. And even if the long ending is proven to be by a different author, it does not prove that the long ending is false. For example, Mozart died before he completed his final masterpiece and someone else completed it. We do not have to use computer analysis to determine that the ending was written by a different composer, because that is only a curiosity interesting to a musicologist, not to the concert-goer. Proving that someone else wrote the long ending to Mark is not necessarily relevant to the exegete, for whom the canon is more important than authorship. The gospel of Mark is still canonical, even if it was written by Fred.
So what if you prove that someone else edited a gospel?
I write for publication, and sometimes the publisher changes my wording here and there. They like to change the title to fit the theme of the magazine issue, they sometimes change my ending, and now and then they change a grammatical construction. I can easily read through and find the changes, because the phrasing just isn’t me. However, the meaning hasn’t changed. I am still the legal author. The whole thing is attributed to me. So if you analyze one of my articles and detect where the magazine made a change, it would tell you about how magazines publish, but it would shed no light on me or my meaning. It would make no difference to the person who sets out to exegete me, to analyze my thought.
So my problems with literary analysis as an exegetical tool for analyzing the New Testament are as follows:
- We do not have a large enough undisputed sample of any of the writers’ works to know what their breadth of style is.
- The state of scholarship is shaped by the necessity that doctoral theses be original and therefore by the necessity of discovering controversy and difficulties.
- No one gets a doctorate by writing an apologetic work or by proving that Paul wrote Galatians. No one was ever catapulted to the New York Times best-seller list by writing a boring book that vindicated orthodoxy. We must take these pressures into account before we take scholarly opinions at face value.
- Many textual problems do not impact the interpretation of the text.
- For example, the fact that the pericope of the adulterous woman does not belong to John is no big news; it was accepted as canonical by people who were aware of the textual problem. Hebrews was accepted into the canon even though its authorship was unknown at the time. Who wrote something is quite a different question from whether it is historically reliable and whether it is canonical.
Conclusion
The literary analysis of the New Testament is valuable and it is interesting detective work. However, the sample is too small, the methods are untested, and the academic pressures are too strong. Even when the findings are objectively valid, they are not necessarily relevant to other disciplines, such as exegesis.