Shakespeare Apocrypha | Penny's poetry pages Wiki | FANDOM powered by Wikia
In all, Foster got only four of his hardballs over the plate, a disturbingly high error rate for someone as intolerant of errors as he professes to be. He, by contrast, has made over 40 errors of his own, about half in each response, and about half of them major. The exchange highlighted important differences between our approach and his. We have explained and supported every step of our analysis. When good evidence shows we have made a substantive mistake, we have admitted it and fixed it.
Silver-bullet tests are orders of magnitude more reliable, both in theory and in practice.
You could just as well be Little Miss Muffet. The other good news is that, after two years of determined bashing by an authorship blackbelt who was not pulling any punches, our work did not shatter.
Did Shakespeare Write the Funeral Elegy? What about the other half of the debate, the one arguing whether or not Shakespeare wrote the Elegy? It now seems so to him as well (see below). Each of these is a silver bullet against the Shakespeare ascription. The patient might survive two or three such hits, but a dozen such hits are far too many for a cheerful prognosis. Brian Vickers (forthcoming) and Gilles Monsarrat now argue from smoking-gun evidence that FE is also much more loaded with features shared with John Ford than with features shared with Shakespeare.
Our studies show that the odds of Shakespeare authorship are thousands of times worse than the odds for Ford. In June, without having read Vickers and with no direct mention of us, but supposedly convinced by Monsarrat, Foster finally publicly conceded that Ford was the obvious author. We once likened our controversy with Foster over the Elegy to a land war in Asia over the literary equivalent of the Spratly Islands. The bad part was scraping off the mud.
It also got our work an immediate, thorough, highly adversarial going-over by an authorship blackbelt, which it weathered with astonishingly low erosion. But we still feel like the man who was tarred, feathered, and ridden out of town on a rail.

References

Crain, C.
Elliott, W.
Foster, D. PMLA, p. CHum, 30a, p.
Foster, D. Author Unknown. New York: Henry Holt and Company.
Monsarrat, G. Review of English Studies, 53, p.
Vickers, B. Counterfeiting Shakespeare. Cambridge University Press, forthcoming.
It, too, has six other rejections by our tests. Its whenas equivalent, likewise, makes a seventh rejection, further weakening the case for sole Shakespeare authorship. These are enough to make Cym a Shakespeare outlier on this test, but, like the whereas equivalents in 2H6, not quite enough to justify changing our profile.
Foster has faithfully tried to do so, with commendable attention to substantiation, and gotten much of it right -- apart from what looks like two major problems. Computer-defined sentences are a step or two more complicated than word counts. Our favorite sentence counter, Grammatik 4 for DOS, produces one count for 1H6, while whatever Foster is using (which probably does not use his own computer-resistant definition of a sentence) gets substantially fewer. The same play, counted by WordPerfect 8 and Word97 for Windows, respectively, gets two further readings, both in the 2,000s!
Textcruncher counts sentences implicitly but does not report its results. Where the sentence counters are in the same ballpark, as they normally are and appear to be in this case, the prudent course is to pick one and stick to it throughout the test run. If the test has quirks, as most do, at least they are the same quirks for the entire test run. If Foster sees a pressing need to use manual counts and a different sentence-counter, he is welcome to do so.
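The consistency point can be illustrated with a toy sketch of our own (these rules and texts are hypothetical illustrations, not the authors' actual tools): two plausible machine definitions of "sentence" disagree on the same passage, so the prudent move, as argued above, is to pick one rule and apply it uniformly across the whole test run.

```python
import re

def count_sentences_loose(text):
    """Loose rule: every run of '.', '!', or '?' ends a sentence."""
    return len(re.findall(r"[.!?]+", text))

def count_sentences_strict(text):
    """Stricter rule: the terminator must be followed by whitespace
    and a capital letter, or end the text, so 'etc.' mid-sentence
    does not count as a break."""
    return len(re.findall(r"[.!?]+(?=\s+[A-Z]|\s*$)", text))

sample = ("We counted plays, poems, etc. and found broad agreement! "
          "Does the exact rule matter? Not if it is applied consistently.")

print(count_sentences_loose(sample))   # 4 -- 'etc.' counted as a break
print(count_sentences_strict(sample))  # 3
```

Neither count is "right" in the abstract; what matters for a comparative test is that the same quirks apply to every play in the run.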
It would be a different, but probably sufficient, way of doing the same thing, getting similar, but not identical, results. But it would also be much more vulnerable to mistakes, judging from what we have seen of it in his article. This mistake is badly compounded by another, which makes the differences between his counts and ours look larger than they actually are. Our published figures are standardized to rates per 20,000 words and are clearly and repeatedly so described; see, for example, our earlier papers.
Despite all the warnings, Foster has persisted in reading them as if they were raw numbers and then telling us, erroneously, that we have miscounted (b). But what we actually reported was 15 hark adversions per 20,000 words, which amounts to 12 in a play the length of The Tempest (about 16,000 words). More important, you could get usable results either way, as long as you do it consistently. These are manual counts; they do involve some exercise of judgment, and it should not be surprising if different people, or even the same person at different times, got slightly different results.
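The standardization at issue is simple proportion. A minimal sketch (function names are ours; the figures are those of the hark example above) converts between raw counts and per-20,000-word rates:

```python
def rate_per_20k(raw_count, play_length):
    """Standardize a raw feature count to a rate per 20,000 words."""
    return raw_count * 20_000 / play_length

def raw_from_rate(rate, play_length):
    """Convert a per-20,000-word rate back to an expected raw count."""
    return rate * play_length / 20_000

# 15 hark adversions per 20,000 words, in a play of about 16,000 words:
print(raw_from_rate(15, 16_000))   # 12.0 -- the figure for The Tempest
print(rate_per_20k(12, 16_000))    # 15.0 -- and back again
```

Reading the standardized 15 as a raw count, as Foster did, inflates the apparent discrepancy by exactly this conversion factor.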
Discrepancies between two different, but sufficient, counting regimes are not per se proofs that one is unimpeachably right and the other intolerably wrong, only that the test, like many, is not immune to interpretational wobble. We also suggested one refinement of our own. In retrospect, we should actually have given him four hits, counting his identification of the Textcruncher glitch as his biggest hit. We are grateful to Foster for these Foster-inspired refinements.
Cumulative Foster score for entire debate: five hits, no runs. What about errors? Let us start with our own. We thought he was completely wrong on the other 16 charged errors, but we now know from checking his Second Response that he was partially right about our whenas counts.
Hence, he was actually completely wrong on only 15 of the 19 substantive errors he originally charged us with. We were troubled by three features of the first Foster Response: the seriousness of his charges of incompetence and malfeasance on our part, the flimsiness of his supporting evidence, and his strange practice of ignoring both the concessions and the non-concessions we had sent him in our April letter, discussing five of our tests which rejected the Elegy.
At this point we could accurately count and assess our errors and see how much difference each one in fact had made. Whereas and whenas were counted as one test and, hence, also counted as one error. In other words, this time Foster got only one of his 14 hardballs anywhere near the plate (seven percent), an error rate even worse than that of his First Response. Total substantive Elliott-Valenza errors in entire debate: four, all minor -- and all fixed.
In his Second Response, he responded, again erroneously, to one of our minor error charges, and perhaps to as many as half of the serious ones.