If you are going to lie about a person, it’s best not to lie about something when the proof to the contrary is so easily accessible. That is the situation Jaaron Wingo, of Christ Before Jesus, has found himself in.
In a recent TikTok Live, Wingo attacked me personally and claimed that I had been asking them questions about stylometry: that I went to them to learn about the subject and that, in turn, they told me everything I needed to know.
Now, I get it. When cornered, lashing out with ad hominems seems to be the tactic Wingo resorts to. In the interest of full transparency, though, I’ve decided to publish our email exchange and add some commentary along the way.
To make it easy to follow, my emails will be in bold, their emails in italics, and any commentary from me in brackets. So let’s dive in with the email that started it all.
I just finished reading your book, Christ Before Jesus, and I was wondering, what Greek New Testament texts did you use for your stylometry results? Did you use the newest version of the Nestle Aland text, or did you rely on a different set of texts?
Thank you,
Dustin White
[The way this email exchange began was just me trying to get some clarification. My intent was always to offer a critique of their book, but in order to be thorough, I wanted to have all of the details.]
Hello there,
Thanks for reaching out and reading the book!
We have used various Greek versions and tend to point people to the version available via Tufts University Perseus Library, which is the WH for both ease of access and copyright purposes. Most people going into stylometry from the biblical studies/humanities end of things often aren’t familiar with data cleansing, parsing the texts, etc. but the Scaife Reader of the Perseus Library makes it pretty straightforward. It’s also a lot easier regarding copyright issues.
[A quick point: Britt and Wingo don’t actually do anything to prepare the text, which is something one should do when running a stylometric analysis. I explain elsewhere why this is a major problem: it distorts the results. Some actual data cleansing, parsing of the text, or even just a bit of knowledge of Greek is required here.]
For a while we ran tests using the SBLGNT version since we are SBL members and have access to that for free, but it comes in PDF format and is harder for most people to navigate and extract the text from so we don’t tend to recommend it. We’ve also run selections from UBS5 (has identical text to NA28) and another random version we found online, but ultimately the choice is inconsequential in terms of the results. All these acted the same way when we were running our tests and that’s why we were fine in the book directing people to the version at the Perseus Library. We can break down why they always acted the same and why the version isn’t particularly consequential for the results.
[This is either a lie, or they don’t know what’s in their book. They do not direct people to the Perseus Library in the book. I’ve done multiple searches of the text, and Perseus and Tufts never appear. As far as I can see, they never direct people to any particular Greek New Testament or set of texts. There is no mention of the Westcott-Hort edition, which is the text used in the Perseus Library, either. In their book, they don’t direct people to any version.]
If you check out the Tyndale Bulletin 71.1 (2020) 43-63 article by Lanier, he estimates the similarities of the NA28 and WH at 98.5% of the content being similar for the NT overall. 99.2% of the Pauline Epistles were identical on the high end, and Acts was on the low end of 97.2%. Now, his math is actually kind of poor in that article because he just adds up the divisions he made without consideration for percentages of the texts (he has Gospels, Acts, Pauline Epistles, Catholic Epistles, and Revelation as categories, adds up the totals for each and divides by 5). If he actually had weighted them correctly, his overall of 98.5% would be a bit higher because the low end, Acts, makes up only ~13.4% of the NT, whereas the Pauline Epistles make up over 27% of the text.
Using a different approach, we can get another idea of the differences. NA25 and WH had 558 words different in the main text, which contains over 138,000 words. That’s ~0.4% of the versions differing from each other and 99.6% agreement. In NA26, the NA editors estimated a 500 word change from NA25 but didn’t clarify which direction that was in (some could be toward the WH, some could be away). If we round to 1,000 words different from the WH, that’s around 0.72% of the text that is different, or 99.28% that is the same.
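[For what it’s worth, the arithmetic here checks out. You can verify it yourself in R, which you’ll have installed anyway if you’re running Stylo:]

558 / 138000 * 100    # ~0.40% of the text differing between NA25 and WH, i.e. ~99.6% agreement
1000 / 138000 * 100   # ~0.72% under their rounded-up 1,000-word estimate, i.e. ~99.28% the same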
Anyways, generally speaking we’re talking about less than 1% difference, and much of that’s in Acts. We did cross check results with other editions just to safeguard against concerns, but even at some of our smallest-size runs we’re talking maybe 1 word per file different on average, which isn’t going to make or break a test when the results are as clear as they are. We’ve scaled down to chapter level and below in some cases all the way up to 4 chapters at a time, as well as whole texts, and we continue to test in different ways just to be sure.
The creators of the software have actually run tests in a peer-reviewed article where they corrupt samples progressively and have found that up to ~30% of a text can be different before it really starts becoming an issue, so less than 1% is pretty solid (and that’s assuming that the NA28 is actually more representative of the “original” versions of the texts). You can also adjust your methodology to account for minor spelling variations, changes in compounds or conjugation, etc. (minor things which most textual variants of the NT are) to a degree, as well. One such option is to run character n-grams as opposed to word n-grams, then instead of taking whole words you can actually catch a lot of those minor variances and the frequencies of their roots rather than the specificities and it would possibly weight it more accurately. We did both word and character n-grams and the results were very similar and there were no significant variations from what we present in the book. Another option, if you’re concerned, is to run the built-in bootstrap consensus tree feature, and run it one word or character n-gram at a time up to whatever level you like, and you’ll be able to create a consensus of each word’s frequency (or you could do this manually and just have the computer export the results for each word at a time and then you look but that kind of defeats the purpose). If you were worried about a ~1% difference you’d be able to account for that easily at very low consensus levels, like probably the lowest possible setting.
[Just to point out, Britt and Wingo later claim that they do none of the things they suggest could be done. In fact, their method is just to run Stylo in essentially its default manner. I probably should have caught on to something here, because they were overcomplicating the process, but it wouldn’t be until later that I realized they were making the topic seem far more complicated than it needed to be. I believe the reason for this is that they are overcompensating for their lack of knowledge.]
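[For readers curious what those suggested options would actually look like in practice, here is a rough sketch of the corresponding calls in the stylo R package. This is my own illustration based on the package’s documentation, not anything Britt and Wingo provided; the corpus folder and parameter values are placeholders, and parameter names can vary slightly between package versions.]

library(stylo)

# character 4-grams instead of whole words, to blunt minor spelling and inflection variants
stylo(gui = FALSE,
      corpus.dir = "corpus",             # placeholder folder of plain-text files
      analyzed.features = "c",           # characters rather than words
      ngram.size = 4,
      mfw.min = 100, mfw.max = 100,
      distance.measure = "dist.simple")  # Eder's Simple Delta

# the built-in bootstrap consensus tree they mention, run across a range of feature counts
stylo(gui = FALSE,
      corpus.dir = "corpus",
      analysis.type = "BCT",             # bootstrap consensus tree
      mfw.min = 100, mfw.max = 1000, mfw.incr = 100,
      consensus.strength = 0.5,          # lower values = looser consensus
      distance.measure = "dist.simple")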
We definitely weren’t going to try and present anything that’s demonstrably false, especially when we have an entire chapter dedicated to trying to help people do the tests themselves to check our work. We’ve actually had quite a few people come up on our TikTok lives who have run the tests themselves, finding the same results.
[The chapter dedicated to their methodology, chapter 7, doesn’t actually list their methodology. They don’t list what texts they use, how they run the data, whether they clean up the data or not, or really anything. As I’ve made super clear in other articles, the whole claim that they were trying to make it easy for others to check their results is just nonsense.]
Fortunately, our research has been increasingly supported by recent scholarship, such as Dr. Nina Livesey’s new book which also puts the Pauline Epistles all in the second century with the Marcionite school, and she took an entirely different route independent of ours. It’s cool how different methodologies can find the same thing despite different tools and techniques.
[There are currently three newish books on this subject. They all represent fringe positions on the edge of modern scholarship. They do not show increasing support for a second-century dating, and for the most part they have been brushed off. I will be going over these books later on, because even though their overall conclusion is wrong, they still offer some great insights.]
Anyways, the short answer to your question is: WH, SBLGNT, UBS5, and a random version we found online against each other/against key texts, with an early internal preference for SBLGNT and WH for ease of access once we realized they all acted the same given how minute the differences were in relation to the level of analysis. We tend to recommend the WH version found at the Tufts University Perseus Library on their Scaife Reader because it lowers the barrier of entry for most people even though it might vary ~1% from modern scholarship.
[So after all of that, the answer is that they used a host of different versions, including one they can’t even name. To reproduce their results, we need to know exactly what they ran so we can test it. Simply listing off a number of different texts doesn’t really work.]
Our next book will be covering the Old Testament, Quran, and Book of Mormon and we’re wrapping up writing the final sections on that. It’s been an interesting journey across varying manuscripts and traditions. For the most part, though, the general reliability has been the same and the results equally as interesting.
Thank you,
Matthew and Jaaron
Thanks for getting back to me and clarifying that data. I have another question for clarification. What do you guys mean when you state that this is a first-of-its-kind analysis? As you guys mention in the book, computational stylometry analysis on the GNT goes back to at least Kenny in the 80s. Are you guys referring more to this being a first in using stylometry in proving a second century origin? Just curious here.
Also, thank you for the book recommendation; I just picked it up and am excited to see what conclusions they reach.
Dustin
[I do plan on offering a critique of Livesey’s book in the future.]
No problem!
As far as we know, we are the first to do stylometric analysis on the New Testament at a chapter level or even multi-chapter level. Likewise, we found few studies that have any reliably dated controls and when they do it tends to be much later authors such as Clement of Alexandria. Many New Testament scholars acknowledge that there are likely multiple layers of redaction or even entire letters or chapters added later, but none have broken these down to lower levels, added in hundreds of control texts across 3 centuries, and done computerized stylometric analysis on all of those together as well as subsets of them.
[I actually discuss this in my first article. There is a reason no one has run these stylometric analyses at the chapter level: it doesn’t work. You need much more data than that. And the “control texts” are just silly here as well, as I showed quite clearly in my articles. I believe this was all meant to make their approach seem more scientific than it actually is.]
Thanks again for getting back to me. I think I have just one final question. In Chapter 7, you mention that part of the approach you developed was based on different academic papers on Stylo. I’ve been searching for such papers, but I’m coming up short. I did find some papers by Eder, but those were more about stylometry in general, and they have been fascinating. But I haven’t found any specifically about Stylo, besides the one in the R Journal that gives an overview of how to use the program.
Any help would be appreciated.
Thank you again,
Dustin White
No problem!
It was a lot of Eder’s papers, as well as various other papers on the topic that we read. We have a huge folder on the desktop full of various things we used to understand stylometry, Stylo, and other analyses of the texts that we had to source through a variety of means. We’ve had people try to catch us on copyright violations for sharing and using articles, so we can’t share things here but we can link to public places where some of them are posted, and I’m sure you know of other ways to find them if they are paywalled.
[This response is nonsense. You can’t commit a copyright violation by simply saying, hey, these are the studies I used, and listing them. Claiming to have all of this information while insisting it has to be hidden only screams, I don’t actually have any studies. Again, the suggestion that simply naming an article could get you in trouble is just dumb.]
Here’s a helpful collection of pre-prints from the Computational Stylistics Group that created Stylo. It’s a bit old, but has quite a few helpful resources.
https://github.com/computationalstylistics/preprints
They also have a more updated one on their website:
https://computationalstylistics.github.io/publications
[I do cite a number of these studies in the articles I wrote. I had in fact already read them. The issue is that they don’t use Stylo, which makes me think Britt and Wingo didn’t read those articles. And this is a big problem, because if there are no actual academic studies that use Stylo and detail their methodology, we have to conclude that Britt and Wingo lied in their book when they said they based their own methodology on such studies; you cannot base a methodology on something that doesn’t exist. They really should read some of the articles at those links, though, as there are many in there that debunk what Britt and Wingo are attempting to do.]
Another good article but not using Stylo (though he must be aware of it since he cites one of Eder’s other papers), which largely validates our findings is Jacques Savoy’s “Authorship of the Pauline Epistles Revisited,” preprint here:
http://members.unine.ch/jacques.savoy/Articles/StPaul.pdf
[There is no reason to assume that the author, Savoy, was aware of Stylo. Citing a paper by Eder doesn’t mean one knows everything Eder has done. I won’t go over this much here, because I dive into it in my other articles, but it shows how Britt and Wingo jump to conclusions.]
He finds that the 7-letter hypothesis doesn’t particularly work. The 10-letter and 13-letter hypotheses are certainly out. What he does find is a link between the “four core letters” that we find, and that these have some connections to 1 Thes. and Philippians. But, keep in mind, he runs whole texts in this test.
What he struggles to explain is that 1 Thes. matches with 2 Cor., but not the other three of the four core letters. Likewise, he finds connections between Galatians and Philippians, but Philippians doesn’t match the other three of the four core letters. Because we run at the 1, 2, 3 chapter levels and various other divisions (such as the scholarly divisions of 2 Corinthians given its composite nature), we are actually able to explain this, and you can even replicate this yourself.
The reason 1 Thes. matches 2 Cor. but not the others is because 2 Cor., as we explain in the book, has large amounts of writing from at least 2 authors – the group responsible for the bulk of the four core letters and then also work from the 1 Thes. group. No other letter of the four core letters has parts that match 1 Thes. either in whole (as Savoy’s peer-reviewed study shows) or in part (as we show in the book). As such, when you run whole texts, 1 Thes. matches 2 Cor. but not the rest because 2 Cor. contains work and thus stylometric signals from the 1 Thes. author/group.
The Galatians and Philippians connection can be explained similarly. Galatians is heavily edited and is very possibly from two (or more) authors, and Philippians is also a composite letter, like 2 Cor. (this is widely agreed upon in scholarship). Pieces from one author or group could very well be matching parts of the other when you run them at whole texts, which very easily explains why Philippians might match Galatians but not Romans, 1 Cor., or 2 Cor.
[Savoy’s article suffers from a lot of problems, which is why it’s often ignored: it doesn’t really add anything. And again, there are just so many issues with Britt and Wingo’s own methodology. So recommending such a poor study as a model to copy or emulate is nonsense.]
Even if one were worried about sample sizes being too small, with the four core letters particularly there’s enough in them to divide them up several ways and meet even more conservative sample size expectations.
Keep in mind that a lot of times it isn’t looking just for a paper on Stylo, but looking for studies that used Stylo and then reading the paper’s methodology. Peer review in data science and related journals heavily focus on the methodology and its repeatability, so when someone uses Stylo for something and publishes in a relevant data science or technology journal, their methodology should be relatively sound.
[Such studies, ones that actually used Stylo, don’t really exist. That was the problem, and that’s why I asked them what they suggested: I had found nothing. The fact that they dance around the question and never actually list a few articles says so much here, especially when they supposedly had those articles at their fingertips.]
It’s not that there is some existing secret knowledge or key in a document that unlocks all of this. It took a year or two of studying the data science literature, sourcing and cleaning files, understanding the program both from a theoretical perspective and a hands-on approach, reading the program documentation, and even digging into the codebase in order to develop our approach (we did not modify the software in any way). To get it up and running and test it is easy (which is what we recommend people do in the book and why we say it’s easy), but to actually work to validate and promote specific methods took a ton of work.
[Again, this is just more overcomplicating of the process. It’s not that hard. And honestly, if they spent two years doing all of this, they should have spent some of that time reading the relevant literature on the topic.]
We were only comfortable publishing the book after we had thoroughly approached stylometry, Stylo, and other aspects of the topic from a variety of approaches. We tested every hypothesis we could come across, and knew we would be encouraging people to check our work themselves so we wanted to be sure we were doing it right. Something we say in the book and in interviews a lot is that we really expected the seven-letter hypothesis for Paul’s letters to be true and up to that point we believed that was the case. It’s just that there’s absolutely no way one can draw that conclusion while using modern tools, which is why we had to abandon it. We felt like following the data was the right thing to do, which is why we present what we do in the book and encourage others to test these tools themselves.
[And yet they don’t spell out their methodology, and when questioned about it, they do everything they can to avoid spelling it out. I highly doubt they want people to actually replicate their results, because if they did, they would have been transparent about their methods.]
On the other hand, we’ve seen a number of people, including some scholars whose past books and reputation have been (in part) about upholding things like the seven-letter hypothesis, not only reject the findings, but attempt to discredit the entire practice of stylometry on New Testament texts. It’s unlikely the same people would reject the findings on things like Homer, Plato, Aristotle, Shakespeare, and other authors we (and others) have run tests on. Ironically, if you look at Shakespeare studies (a surprisingly contentious field), you see some authors who have built careers around certain hypotheses and claims also reject stylometry specifically on Shakespeare but not in other cases because it contradicts their personal views. It’s a pattern of people selectively accepting new tools depending on when they clash or complement their existing beliefs. We have done our best to do the opposite – rework our understanding of New Testament authorship, Homer, Plato, and others we’ve tested based on the data rather than sticking to any past beliefs that contradict the data.
[Who are these scholars? They don’t exist. This is a common tactic with Britt and Wingo: making claims that scholars are trying to keep this hidden, while being unable to name a single scholar. Stylometry has been used for decades within Christian studies. Bart Ehrman has done amazing work with it, and virtually any discussion of Josephus and his mention of Jesus uses stylometry. So they are once again spouting nonsense.]
Really, it would be hard for us to go against the data when we have tested Stylo on ourselves not only at the chapter level of our book (you can actually see that run in the book itself) which got it 100% right, but also having tested it on our own writings at levels as small as 330 word samples and it still getting it 100% right. We definitely don’t expect 100% accuracy in most cases, and these are specific scenarios, but our Greek tests of known authors were in the range of 96-100% accuracy as well. It’s an uncomfortable thing to have to make an unpopular claim, but we think the truth matters and that’s why we have put so much work into this project and taken on the criticism and attacks.
[When they manipulate the data and misread their dendrograms, it’s not hard to go against the data, because the data is useless at that point.]
As just a side project we’re working on coding a similar software in Python since it has a variety of analysis tools and the ability to be packaged in a desktop app, hopefully increasing accessibility to these tools. We think a big part of why this type of approach is just now catching on is because there isn’t a lot of overlap between people who are interested in classics and biblical studies and also have the background/skills to do computerized stylometric analysis. Usually if you can work R or code at any level, you’re usually going to take on a higher-paying job than apply the skills to ancient texts, plus it’s unlikely people with the coding background will be familiar with the nuances of these texts and the history of scholarship on them, which is also needed. A big part of this gap is that Stylo is “gated” behind having to install R and RStudio – a task that isn’t all that hard, but is something beyond what most people, even humanities scholars, do on computers on a regular basis.
[The whole Python route has been done quite a few times, and it has been used for Biblical Studies. Not to mention, computational stylometry has been used in Biblical Studies for some 60 years. And it’s not as if having a coding background, or learning how to code, is necessarily hard, nor does it mean you’re going to land some high-paying job. My 10-year-old son is learning how to code through a program offered at our library. I learned how to code in high school, and I’ve done coding for years, largely for websites and even for some video games. If you look at the gaming community, at a game like Farming Simulator, there are many people coding in a variety of ways, and they are doing it for free because they want cool things in the game. It’s not a high bar anymore. More to the point, many scholars also look at areas outside of their expertise. My academic advisor, when I was majoring in Religious Studies and History, had spent a great deal of time learning about quantum physics. That sort of thing was encouraged, because well-informed scholars produce better work. And again, that’s why we have six decades of biblical scholars, and others in relevant fields, who have studied stylometry and implemented it in their own studies. This whole section just screams that Britt and Wingo haven’t done their own research and have no idea what they are talking about.]
We appreciate the good questions! We definitely recommend checking the methodology sections on papers in the links above. You can also search for other works by the authors listed in those collections since they go on to use Stylo elsewhere.
Thank you again. I greatly appreciate your willingness to answer all these questions. And I guess I do have one more.
So I have Stylo downloaded, I’ve got the Greek texts broken into chapters, and I’ve run a few analyses. I’m running some on my own, and then I’m attempting to also replicate your setup. My question is about your methodology.
So, I’m assuming you’re running 100 MFW, using Eder’s Delta (or Eder’s Simple, as you mentioned in the book those worked best). I’m guessing you’re not culling any words. But my big question is surrounding n-grams. Are you running either character or word n-grams?
Also, are you doing any pre- or post-processing? When I look at the word list that is created, for instance, I see that for the term Christ, there are a couple of forms listed in Greek. So while they all mean Christ in English, they are in different forms in Greek based on grammar. Do you factor this in, or is it unimportant? I do notice, looking at the word frequency list, that these other forms are used much less, so maybe they don’t really skew anything. So I’m just curious if there are any sorts of processing that you do to potentially minimize issues, or if it’s even necessary.
Again thank you for being open to answering these questions.
Dustin
[At this point, I was just asking bluntly what their methodology was. I did not expect to get any real answer back; I expected a lot more runaround, as they had already given multiple times, and for them to say a lot without saying anything. I was actually surprised here.]
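[As an aside, for anyone following along at home: this is roughly how you can inspect that word list yourself with the stylo package. It’s my own sketch based on the package documentation, not their procedure, and the corpus folder name is a placeholder for wherever your chapter files live.]

library(stylo)

# load and tokenize the chapter files sitting in corpus/
parsed <- load.corpus.and.parse(files = "all",
                                corpus.dir = "corpus",
                                markup.type = "plain",
                                corpus.lang = "Other")

# ranked list of word types across the whole corpus
freq.list <- make.frequency.list(parsed)

# every inflected surface form of "Christ" shows up as its own entry
grep("^χριστ", freq.list, value = TRUE)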
No problem, we’re happy to help!
Yes, we usually use 100 MFW on Eder’s Simple, which we’ve tended to lean more toward since publishing the book. Eder’s Delta works well too. We don’t do any culling, and we primarily do 1 word n-grams. It also doesn’t hurt to run it a few different ways, because what’s important is the overall trends you see. It’s not an all-knowing software but rather a tool like a microscope that needs to be focused.
[So after all of this, their answer is: we run Stylo in the most basic manner, and the only change we make is the distance measure. All of the overcomplicating above was just word vomit, because they literally took the easiest route here. The only way it could have been easier is if they hadn’t changed the distance measure.]
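[For reference, here is roughly what that “most basic” run looks like as a single call in the stylo R package. Again, this is my sketch of the settings they describe, based on the package documentation; the corpus folder is a placeholder and parameter names can differ slightly between versions.]

library(stylo)

# 100 most frequent words, Eder's Simple distance, no culling, single-word features,
# standard cluster analysis (a dendrogram) over the chapter files in corpus/
stylo(gui = FALSE,
      corpus.dir = "corpus",
      analysis.type = "CA",               # hierarchical clustering / dendrogram
      mfw.min = 100, mfw.max = 100,       # 100 MFW
      analyzed.features = "w",            # word tokens...
      ngram.size = 1,                     # ...taken one at a time
      culling.min = 0, culling.max = 0,   # no culling
      distance.measure = "dist.simple")   # Eder's Simple; "dist.eder" for Eder's Delta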
If you wanted to control a little more for Greek morphology and inflection then you could do short character n-grams (I’d say no more than 4 or 5 at most, no less than 2). Usually it’ll come up pretty similarly to the 1 word n-gram tests because, like you mentioned, a lot of the variations come far down the list and are weighted a lot less than the more frequent words.
[This just shows they don’t know Greek. Part of the problem, which I explain elsewhere, is that Stylo doesn’t normalize accents. For instance, the Greek word for “for” is gar, and gar can carry different accent marks over the alpha depending on context. Running character n-grams here wouldn’t achieve anything, because Stylo will still see the word differently based on the accent. So they are spouting nonsense here because they don’t understand Greek.]
For the book we didn’t really do much, if any, pre or post processing, primarily because we don’t want to complicate the process for non-experts and various people who might be trying it out. While it’s not unimportant, we didn’t want to do anything extra that would draw more criticism than we’d already knew we’d be getting. You could argue either way for needing to control for various forms of words, but ultimately we came down on the side that it’s probably best 1) to just keep it simpler for wider adaptation of the tool and let more talented and skilled scholars hone in on the specifics once the tool is more widely applied, 2) we think there’s a fair argument that grammatical changes are also part of an author’s style. Yes, it’s limited to the situational use of the language such as tenses, etc. but we wanted to treat the authors on their own grounds – they wrote what they wrote and chose to write what they did, and 3) like you said, a lot of the grammatical variations tend to be low down on the list, and given it’s a frequency-based formula they tend to have less impact. The literature is also divided on culling’s value, so we just decided to keep it as simple as possible and try to primarily promote adoption of the tool.
[This is such a copout. Their argument is basically: we didn’t do a proper study because we didn’t want to make it too complicated for others to replicate. This is insane. And I will be very honest here: I asked a very leading question when I brought up the other forms of Christ that popped up. I suspected that they did no processing, and as I show in my other articles, that has a huge impact. But I also didn’t want them to simply dismiss me, which is why I used only the example of Christ; with that term specifically, the lack of processing generally doesn’t create a massive issue. One could argue I was being a little dishonest here, and maybe that’s true. But this also really confirms that they are out of their depth. The argument that grammatical changes are part of an author’s style makes little sense. I mentioned the term gar above and how it can carry different accent marks; that is not a matter of authorial style, since the accent on gar changes purely based on the word that follows it. In other cases the accent marks an entirely different word: ho written without an accent is the article “the,” while ho written with an accent is the relative pronoun “who/which.” Claiming this is just part of an author’s style shows they have no clue about Greek, and that causes massive problems here.]
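[To make that concrete, here is a tiny R illustration, entirely my own and not part of their workflow, of why the accents matter to the software. Unless you normalize the text first (here with the stringi package), differently accented forms of the same word are counted as separate word types. Whether stripping accents is even appropriate for a given word is exactly the sort of judgment call that requires knowing Greek, which is my point.]

library(stringi)

# the same conjunction ("for") written with an acute and with a grave accent
gar.acute <- "γάρ"
gar.grave <- "γὰρ"
gar.acute == gar.grave   # FALSE: to the software these are two different "words"

# one common normalization step: strip the diacritics before counting
stri_trans_general(c(gar.acute, gar.grave), "NFD; [:Nonspacing Mark:] Remove; NFC")
# both come out as "γαρ" and would now be counted together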
When you get to running the gospels we recommend only running one at a time at first. Given the Synoptic Problem, things can get messy pretty fast (as we’re sure you’ve seen).
Also, it might not hurt to just download the full texts too since that’s only 27 texts (technically less since 1-chapter texts like Philemon are already downloaded at the chapter level). That way, if you want to kind of just lighten the interpretive load you can. There are 260 chapters in the NT, and when that graph pops up it’s daunting to try and figure out what’s going on the first time. You can also just do subsets – for example, maybe just run the Pauline letters by themselves just so you can hone in on that and get grounded in where things show up. We’ve found that, especially at first, it’s easier to start with fewer texts and get a grasp on that rather than tackle the whole thing at first, but still having the larger picture on hand to compare.
For example, we had this issue with the Old Testament for our new book. There are over 900 chapters in the Old Testament, plus we have a few works from the Dead Sea Scrolls in Hebrew that were large enough to run or divide and run. We combined some chapters to increase sample size, but still ended up with early tests of >400 files. That’s more than we were able to take in all at once starting from scratch, even though it largely followed some known trends already established in the scholarship. Plus, Chronicles likely used Kings and other canonical sources, meaning it’s kind of like a Synoptic Problem there. So we made runs that excluded the Deuteronomic History, runs that excluded Chronicles, runs that only ran prophetic texts, runs that had upwards of 10-15 chapters at a time, etc. to compare to the large chapter-level run.
So when you get the time and if you stay interested in it, we’d recommend grouping some chapters together once you start to see trends. The easiest way to do this is go back to Scaife or your preferred source and just download them again rather than trying to group them with your existing files (unless you’ve done some cleaning or editing that would be lost). Also, groupings of 2-4 chapters at a time also make for a much easier interpretation effort early on. That’s not to try and make you do more work, but just a suggestion to help get an interpretive framework to work with if the 260 file (or more) runs are hard to parse.
[My intention here was always to run the analysis as Britt and Wingo did, to try to replicate their results. I ran tests by chapter as well as by book, and running by chapter really isn’t that much more difficult. The fact that they continually try to make this all seem harder than it is just boggles my mind. They say they want people to try this on their own, and then they spend so much time hampering that by making it sound far more difficult than it really is.]
Some other texts you might want to grab are Ignatius’ letters (the 7 letter recension), 1 Clement, the Epistle of Barnabas, and then some easy non-Christian controls like Philo, Josephus, Lucian of Samosata, and a few other semi-contemporary authors. Now, if the file size of Josephus’ works are kept significantly larger then we’ve noticed that basically by default they’ll show up further away. A simple Python script could cut these larger texts into specific file sizes to more closely match NT chapters (maybe 750-1000 words) if you wanted. That might be beyond the scope of what you’re wanting to do, though.
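[They mention a “simple Python script” for this. Since anyone running Stylo already has R installed, here is an equivalent sketch in R; it’s my own illustration, and the file names are hypothetical.]

# cut a long text (e.g. Josephus) into ~1,000-word chunks so the file sizes
# roughly match New Testament chapters
words <- scan("josephus_antiquities.txt", what = character(), quote = "")
chunk.size <- 1000
chunks <- split(words, ceiling(seq_along(words) / chunk.size))
for (i in seq_along(chunks)) {
  writeLines(paste(chunks[[i]], collapse = " "),
             sprintf("corpus/josephus_ant_%03d.txt", i))
}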
Also, keep in mind you’ll be looking for broader trends. Each test will look a little bit different, so you’re looking for overall trends. We discuss these trends in the book, so whenever we make a claim it’s almost always about larger trends. We use single test examples for images in the book because it’s a book for a broad audience and images of 50 runs of mostly the same texts at varying sizes, controls, etc. would make for a mess of a book especially when that’s done for each topic (Paul, gospels, etc.). Plus, book pages are only so large so we can’t usually put the whole test on a single page (or even two).
[They have a website. They could put all of their tests on there. They could have shown the entire dendrograms if they wanted to; I did. Once you have a website, it’s not hard to upload some files to it. They could have hosted those files virtually anywhere and just directed people to them. This is another copout.]
So, for example, you want to look for trends as to “how do these texts/chapters act across a bunch of tests and does it usually/always line up.” With Paul’s letters, you’ll see this across everything from whole texts down to chapter level analysis – 1 Cor, 2 Cor, Romans, and Galatians, with the exception of some chapters, show up together. Likewise, the Pastorals show up together but far away from most/all other Pauline content.
Anyways, hopefully this was helpful! Let us know if you find anything different or have any questions or results you’d like to bounce back and forth. We definitely acknowledge we could be doing something wrong, but we’ve run thousands of tests and even done live demonstrations on multiple YouTube channels, at public universities, and private organizations. We’ve done our best to be careful and thorough but that doesn’t guarantee perfection by any means. We appreciate you taking the interest in testing all this, it’s always fun when someone picks up the tools to advance the field. We truly believe that even if our work is flawed somewhere that the use of tools like Stylo are worth it all. We’re all fortunate to be in a time where new tools are being developed and pioneered to allow us to dig into history in a way that couldn’t easily be done not too long ago.
[This is their biggest lie. They don’t want to be shown they are doing anything wrong. Wingo makes this very clear on his lives. You question him, he shouts you down or mutes you. Then he blocks you, because he doesn’t want to be challenged. It’s why he never faces critiques. Instead, he acts like a victim, and resorts to ad hominems. In the live I did with him, he refused to even let me get my point out and then outright denied what was in his book. And just a couple days ago, while I was listening to one of his lives, he even called me out, invited me to join, and then instead of allowing that, blocked me. Because he doesn’t want any actual discussion or push back. As my critique clearly shows, they didn’t try to be careful, they weren’t thorough, and honestly, it seems like they don’t care about the facts at all.]
Matthew Britt and Jaaron Wingo,
I’m including a link to my full breakdown (in a series of articles) of your methodology and results (chapters 7 and 8 of your book). It shows the major flaws in your methodology, and the results that came from that methodology. In those articles, I also dive into the points I tried to make with Jaaron on the live discussion, which may be of interest as some of the points are rather major, and it seems as if Jaaron is unaware of various arguments that are made in your book.
I send this link because Jaaron made such a big deal about all of this during the live discussion. And I think that if you two choose to go through with a second book, getting your methodology correct is highly important. Here is the link: https://thecuriouschristian.org/category/history/jesus-mythicism/christ-before-jesus/page/2/
Dustin White
[In the interest of being as transparent as possible: this was the final email I sent them, and I never received a response. I shared my results. I honestly don’t expect them to ever really deal with them, at least not any more than they “dealt” with Richard Carrier’s critique, which was basically to lie about him and resort to name-calling.]
