Blog RSS Feed

A JavaScript Bible Passage Reference Parser

November 18th, 2011

Browse the GitHub repository of a new Bible-reference parser written in CoffeeScript / JavaScript (it understands text like “John 3:16”), try a demo, or review the annotated source. You can use the parser as-is or as a starting point for building your own–the source code includes 200,000 real-world passage references to give you a head start. It’s designed to handle how people actually type Bible references (typos and all) and tries hard to make sense of any input you give it.
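To give a flavor of the core idea, here’s a bare-bones sketch in JavaScript (an illustration only, not the library’s actual API; the regex and field names are mine, and the real parser handles abbreviations, typos, and many more formats):

```javascript
// Parse a simple reference like "John 3:16" or "John 3:16-17" into
// structured data. Illustrative only; the real parser is far more robust.
function parseReference(text) {
  const match = /^(\d?\s?[A-Za-z]+)\s+(\d+)(?::(\d+)(?:-(\d+))?)?$/.exec(text.trim());
  if (!match) return null;
  return {
    book: match[1],                                        // "John"
    chapter: parseInt(match[2], 10),                       // 3
    startVerse: match[3] ? parseInt(match[3], 10) : null,  // 16 (or null)
    endVerse: match[4] ? parseInt(match[4], 10) : null,    // 17 (or null)
  };
}
```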

From the readme:

This is the fourth complete Bible reference parser that I’ve written. It’s how I try out new programming languages: the first one was in PHP (2002), which saw production usage on a Bible search website from 2002-2011; the second in Perl (2007), which saw production usage on a Bible-related site starting in 2007; and the third in Ruby (2009), which never saw production usage because it was way too slow. This CoffeeScript parser (at least on V8) is faster than the Perl one and 100 times faster than the Ruby one.

I chose CoffeeScript out of curiosity–does it make JavaScript that much more pleasant to work with? From a programming perspective, the easy loops and array comprehensions alone practically justify its use. From a readability perspective, the code is easier to follow (and come back to months later) than the equivalent JavaScript–the tests, in particular, are much easier to follow without all the JavaScript punctuation.

My main interest in open-sourcing and thoroughly documenting this code lies in giving future programmers data and code that they can use to build better parsers. While this code reflects my experience, it’s hardly the last word on the subject.

Jim LePage’s Illustrations of Every Bible Book

November 11th, 2011

Jim LePage has just finished a two-year project in which he’s created an illustration for every book of the Bible. The always-underappreciated Obadiah is my favorite:

A giant hand reaches for a bird, with the caption, “Though you soar like the eagle, I will bring you down. Obadiah.”

Jim also runs Gettin’ Biblical, a site that showcases non-schlocky Christian-themed artwork. I particularly enjoyed The Savior collage and the papercut-esque Burning Bush. Good examples of “Christian art” (a difficult term to define if you’ve ever talked to artists who are Christians) are hard to come by, and I appreciate Jim’s efforts to collect them.

Update September 2016: Removed outdated link to Gettin’ Biblical.

Applying Sentiment Analysis to the Bible

October 10th, 2011

This visualization explores the ups and downs of the Bible narrative, using sentiment analysis to quantify when positive and negative events are happening:

Sentiment analysis of the Bible.
Full size download (.png, 4000×4000 pixels).

Things start off well with creation, turn negative with Job and the patriarchs, improve again with Moses, dip with the period of the judges, recover with David, and have a mixed record (especially negative when Samaria is around) during the monarchy. The exilic period isn’t as negative as you might expect, nor the return period as positive. In the New Testament, things start off fine with Jesus, then quickly turn negative as opposition to his message grows. The story of the early church, especially in the epistles, is largely positive.

Methodology

Sentiment analysis involves algorithmically determining if a piece of text is positive (“I like cheese”) or negative (“I hate cheese”). Think of it as Kurt Vonnegut’s story shapes backed by quantitative data.

I ran the Viralheat Sentiment API over several Bible translations to produce a composite sentiment average for each verse. Strictly speaking, the Viralheat API only returns a probability that the given text is positive or negative, not the intensity of the sentiment. For this purpose, however, probability works as a decent proxy for intensity.

The visualization takes a moving average of the data to provide a coherent story; the raw data is more jittery. Download the raw data (400 KB .zip).
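The moving average itself is simple; here’s a sketch in JavaScript, assuming one sentiment score per verse in canonical order (the main chart uses a radius of 150 verses on either side):

```javascript
// Centered moving average: each verse's value is averaged with the
// `radius` verses on either side, truncating the window at the ends of
// the data so early and late verses still get a value.
function movingAverage(scores, radius) {
  return scores.map((_, i) => {
    const start = Math.max(0, i - radius);
    const end = Math.min(scores.length, i + radius + 1);
    const window = scores.slice(start, end);
    return window.reduce((sum, s) => sum + s, 0) / window.length;
  });
}
```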

Update October 10, 2011

As requested in the comments, here’s the data arranged by book with a moving average of five verses on either side. (By comparison, the above visualization uses a moving average of 150 verses on either side.)

Sentiment analysis of the Bible, arranged by book.
Full size download (.png, 2680×4000 pixels).

Update December 28, 2011: Christianity Today includes this visualization in their December issue (“How the Bible Feels”).

Bible Annotation Modeling and Querying in MySQL and CouchDB

September 1st, 2011

If you’re storing people’s Bible annotations (notes, bookmarks, highlights, etc.) digitally, you want to be able to retrieve them later. Let’s look at some strategies for how to store and look up these annotations.

Know What You’re Modeling

First you need to understand the shape of the data. I don’t have access to a large repository of Bible annotations, but the Twitter and Facebook Bible citations from the Realtime Bible Search section of this website provide a good approximation of how people cite the Bible. (Quite a few Facebook posts appear to involve people responding to their daily devotions.) These tweets and posts are public, and private annotations may take on a slightly different form, but the general shape of the data should be similar: nearly all (99%) refer to a chapter or less.

Large dots at the bottom indicate many single-verse references. Chapter references are also fairly prominent. See below for more discussion.

Compare Bible Gateway reading habits, which are much heavier on chapter-level usage, but 98% of accesses still involve a chapter or less.

The Numbers

The data consists of about 35 million total references.

Percent of Total  Description                                                          Example
73.5              Single verse                                                         John 3:16
17.1              Verse range in a single chapter                                      John 3:16-17
8.4               Exactly one chapter                                                  John 3
0.7               Two or more chapters (at chapter boundaries)                         John 3-4
0.1               Verses spanning two chapters (not at chapter boundaries)             John 3:16-4:2
0.1               Verses spanning three or more chapters (not at chapter boundaries)   John 3:16-5:2

About 92.9% of posts or tweets cited only one verse or verse range; 7.1% mentioned more than one verse range. Of the latter, 77% cited exactly two verse ranges; the highest had 323 independent verse ranges. Of Facebook posts, 9.1% contained multiple verse ranges, compared to 4.2% of tweets. When there were multiple ranges, 43% of the time they referred to verses in different books from the other ranges; 39% referred to verses in the same book (but not in the same chapter); and 18% referred to verses in the same chapter. (This distribution is unusual; normally close verses stick together.)

The data, oddly, doesn’t contain any references that span multiple books. Less than 0.01% of passage accesses span multiple books on Bible Gateway, which is probably a useful upper bound for this type of data.

Key Points

  1. Nearly all citations involve verses in the same chapter; only 1% involve verses in multiple chapters.
  2. Of the 1% spanning two or more chapters, most refer to exact chapter boundaries.
  3. Multiple-book references are even more unusual (under 0.01%) but have outsize effects: an annotation that references Genesis 1 to Revelation 22 would be relevant for every verse in the Bible.
  4. Around 7% of notes contained multiple independent ranges of verses—the more text you allow for an annotation, the more likely someone is to mention multiple verses.

Download

Download the raw social data (1.4 MB zip) under the usual CC-Attribution license.

Data Modeling

A Bible annotation consists of arbitrary content (a highlight might have one kind of content, while a proper note might have a title, body, attachments, etc., but modeling the content itself isn’t the point of this piece) tied to one or more Bible references:

  1. A single verse (John 3:16).
  2. A single range (John 3:16-17).
  3. Multiple verses or ranges (John 3:16, John 3:18-19).

The Relational Model

One user can have many rows of annotations, and one annotation can have many rows of verses that it refers to. To model a Bible annotation relationally, we set up three tables that look something like this:

users

user_id  name
1

annotations

user_id  annotation_id  content
1        101
1        102
1        103

annotation_verses

The verse references here are integers to allow for easy range searches: 43 = John (the 43rd book in the typical Protestant Bible); 003 = the third chapter; the last three digits = the verse number.

I like using this approach over the alternatives (a sequential verse integer, or separate columns for book, chapter, and verse) because it limits the need for a lookup table. (You just need to know that 43 = John, and then you can find any verse or range of verses in that book.) It also lets you find all the annotations for a particular chapter without having to know how many verses are in the chapter. (The longest chapter in the Bible has 176 verses, so you know that all the verses in John 3, for example, fall between 43003001 and 43003176.) The main disadvantage is that you don’t necessarily know how many verses you’re selecting until after you’ve selected them. Separate columns, unlike this approach, also let you run group-by queries to get easy counts.

annotation_id  start_verse  end_verse
101            43003016     43003016
102            43003016     43003017
103            43003016     43003016
103            43003019     43003020
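Packing and unpacking these verse ids is simple arithmetic; a quick sketch (helper names are mine):

```javascript
// Pack (book, chapter, verse) into the BBCCCVVV integer scheme:
// John 3:16 → 43 * 1,000,000 + 3 * 1,000 + 16 = 43003016.
function encodeVerse(book, chapter, verse) {
  return book * 1000000 + chapter * 1000 + verse;
}

// Unpack an id back into its components.
function decodeVerse(id) {
  return {
    book: Math.floor(id / 1000000),
    chapter: Math.floor(id / 1000) % 1000,
    verse: id % 1000,
  };
}
```

For example, encodeVerse(43, 3, 1) and encodeVerse(43, 3, 176) give the John 3 bounds (43003001 and 43003176) used for chapter lookups.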

Querying

In a Bible application, the usual mode of accessing annotations is by passage: if you’re looking at John 3:16-18, you want to see all your annotations that apply to that passage.

Querying MySQL

In SQL terms:

select distinct annotations.annotation_id
from annotations
join annotation_verses
  on annotations.annotation_id = annotation_verses.annotation_id
where annotation_verses.start_verse <= 43003018
  and annotation_verses.end_verse >= 43003016
  and annotations.user_id = 1
order by annotation_verses.start_verse asc, annotation_verses.end_verse desc

The quirkiest part of the SQL is the first part of the “where” clause, which at first glance looks backward: why is the passage’s last verse compared against the start_verse column, and its first verse against the end_verse column? Because start_verse and end_verse can span any range of verses, you need to make sure that you get every range that overlaps the verses you’re looking for: in other words, the start_verse must fall on or before the end of the passage, and the end_verse on or after its start.

Visually, you can think of each start_verse and end_verse pair as a line: if the line overlaps the shaded area you’re looking for, then it’s a relevant annotation. If not, it’s not relevant. There are six cases:

Start before, end before: John 3:15 / Start before, end inside: John 3:15-17 / Start before, end after: John 3:15-19 / Start inside, end inside: John 3:16-18 / Start inside, end after: John 3:17-19 / Start after, end after: John 3:19
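The where clause is a standard interval-overlap test, which reduces to a one-line predicate (a sketch; the function name is mine):

```javascript
// An annotation range [start, end] overlaps a queried passage
// [qStart, qEnd] iff it starts on or before the passage's last verse
// and ends on or after the passage's first verse.
function overlaps(start, end, qStart, qEnd) {
  return start <= qEnd && end >= qStart;
}
```

Of the six cases, only “start before, end before” and “start after, end after” fail this test.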

The other trick in the SQL is the sort order: you generally want to see annotations in canonical order, starting with the longest range first. In other words, you start with an annotation about John 3, then move to a section inside John 3, then to individual verses. In this way, you move from the broadest annotations to the narrowest. You may want to switch up this order, but it makes a good default.

The relational approach works pretty well. If you worry about the performance implications of the SQL join, you can always put the user_id in annotation_verses or use a view or something.

Querying CouchDB

CouchDB is one of the oldest entrants in the NoSQL space and distinguishes itself by being both a key-value store and queryable using map-reduce: the usual way to access more than one document in a single query is to write JavaScript to output the data you want. It lets you create complex keys to query by, so you might think that you can generate a key like [start_verse,end_verse] and query it like this: ?startkey=[0,43003016]&endkey=[43003018,99999999]

But no. Views are one-dimensional, meaning that CouchDB doesn’t even look at the second element in the key if the first one matches the query. For example, an annotation with both a start and end verse of 19001001 matches the above query, which isn’t useful for this purpose.
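For reference, the naive view might look something like this (the document schema and field names are assumptions; a stub emit lets the sketch run outside CouchDB):

```javascript
// Stub emit so the map function can run and be tested outside CouchDB.
const rows = [];
function emit(key, value) {
  rows.push({ key: key, value: value });
}

// Naive map function: one [start_verse, end_verse] complex key per
// verse range attached to an annotation document. As explained above,
// a one-dimensional view over this key can't answer the overlap query.
function mapAnnotation(doc) {
  (doc.verses || []).forEach(function (range) {
    emit([range.start_verse, range.end_verse], doc.annotation_id);
  });
}

mapAnnotation({
  annotation_id: 102,
  verses: [{ start_verse: 43003016, end_verse: 43003017 }],
});
```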

I can think of two ways to get around this limitation, both of which have drawbacks.

GeoCouch

CouchDB has a plugin called GeoCouch that lets you query geographic data, which actually maps well to this data model. (I didn’t come up with this approach on my own: see Efficient Time-based Range Queries in CouchDB using GeoCouch for the background.)

The basic idea is to treat each start_verse,end_verse pair as a point on a two-dimensional grid. Here’s the above social data plotted this way:

A diagonal line starts in the bottom left corner and continues to the top right. Large dots indicate popular verses, and book outlines are visible.

The line bisects the grid diagonally since an end_verse never precedes a start_verse: the diagonal line where start_verse = end_verse indicates the lower bound of any reference. Here are some points indicating where ranges fall on the plot:

This chart looks the same as the previous one but has points marked to illustrate that longer ranges are farther away from the bisecting line.

To find all the annotations relevant to John 3:16-18, we draw a region starting in the upper left and continuing to the point 43003018,43003016:

This chart looks the same as the previous one but has a box from the top left ending just above and past the beginning of John near the upper right of the chart.

GeoCouch allows exactly this kind of bounding-box query: ?bbox=0,43003016,43003018,99999999

You can even support multiple users in this scheme: just give everyone their own, independent box. I might occupy 1×1 (with an annotation at 1.43003016,1.43003016), while you might occupy 2×2 (with an annotation at 2.43003016,2.43003016); queries for our annotations would never overlap. Each whole number to the left of the decimal acts as a namespace.
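The namespacing arithmetic, assuming eight-digit verse ids (a sketch; the function name is mine):

```javascript
// Fold a verse id into the fractional part of a user-id coordinate, so
// user 1's annotations fall inside the 1×1 unit square, user 2's inside
// 2×2, and so on. Bounding-box queries for one user never touch another's.
function geoPoint(userId, startVerse, endVerse) {
  return [userId + startVerse / 1e8, userId + endVerse / 1e8];
}
```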

The drawbacks:

  1. The results aren’t sorted in a useful way. You’ll need to do sorting on the client side or in a show function.
  2. You don’t get pagination.

Repetition at Intervals

Given the shape of the data, which is overwhelmingly chapter-bound (and lookups, which at least on Bible Gateway are chapter-based), you could simply repeat chapter-spanning annotations at the beginning of every chapter. In the worst-case annotation (Genesis 1-Revelation 22), you end up with about 1,200 repetitions.

For example, in the Genesis-Revelation case, for John 3 you might create a key like [43000000.01001001,66022021] so that it sorts at the beginning of the chapter—and if you have multiple annotations with different start verses, they stay sorted properly.
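The sort-key construction for one repeated entry might be sketched like this (the function name and layout are mine, not from any library):

```javascript
// Build the sort key a multi-chapter annotation gets at the head of one
// chapter it spans: the integer part is the chapter's verse-000 id, and
// the fractional part carries the annotation's true start verse, so
// repeated entries still sort by where the annotation actually begins.
function chapterKey(book, chapter, trueStartVerse, endVerse) {
  const chapterStart = book * 1000000 + chapter * 1000; // e.g., 43003000
  return [chapterStart + trueStartVerse / 1e8, endVerse];
}
```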

To get annotations for John 3:16-18, you’d query for ?startkey=[43003000]&endkey=[43003018,{}]

The drawbacks:

  1. You have to filter out all the irrelevant annotations: if you have a lot of annotations about John 3:14, you have to skip through them all before you get to the ones about John 3:16.
  2. You have to filter out duplicates when the range you’re querying for spans multiple chapters.
  3. You’re repeating yourself, though given how rarely a multi-chapter span (let alone a multi-book span) happens in the wild, it might not matter that much.

Other CouchDB Approaches

Both these approaches assume that you want to make only one query to retrieve the data. If you’re willing to make multiple queries, you could create different list functions and query them in parallel: for example, you could have one for single-chapter annotations and one for multi-chapter annotations. See interval trees and geohashes for additional ideas. You could also introduce a separate query layer, such as Elasticsearch, to sit on top of CouchDB.

Holy Week Timeline: Behind the Music

April 16th, 2011

It’s always fun for me to learn the process people use to create visualizations, and especially why they made the decisions they did. So please forgive me if you find this post self-indulgent; I’m going to talk about the new Holy Week Timeline that’s on the Bible Gateway blog:

Holy Week timeline

The idea for this visualization started in November 2009 when xkcd published its movie narrative charts comic, which bubbled up through the Internet and shortly thereafter became a meme. Although the charts are really just setting up a joke for the last two panels in the comic, they’re also a fantastic way of visualizing narratives, providing a quick way to see what’s going on in a story at any point in time. The format also forces you to consider what’s happening offstage—it’s not like the other characters cease to exist just because you’re not seeing them and hearing about them.

My first thought was to plot the book of Acts this way, but Acts presented too broad a scope to manage in a reasonable timeframe. Holy Week then came to mind—it involves a limited amount of time and space, it doesn’t feature too many characters, and the Gospels recount it in a good bit of detail: one Gospel often fills in gaps in another’s account.

Now I needed data. (Data is always the holdup in creating visualizations.) Fortunately, Gospel harmonies are prevalent, even free ones online. The version of Logos I have includes A. T. Robertson’s Harmony of the Gospels, so I started transcribing verse references from the pericopes listed there into a spreadsheet, identifying who’s in each one and when and where it takes place. I plowed halfway through, but then other priorities arose, and I had to abandon hopes of completing it in time for Holy Week 2010.

It lay dormant for a year (there’s not a lot of reason to publish something on Holy Week unless Holy Week is nigh). A few weeks ago, I finished itemizing the people, places, and times in Robertson. Justin Taylor last year published a harmony of Holy Week based on the ESV Study Bible, which had a slightly different take on the timeline (one that made more sense to me in certain areas), so I moved a few things around on my spreadsheet. I also consulted a couple of other study Bibles and books I had readily available to me.

With data in hand, it was time to put pencil to paper.

Version 1: Paper

Hand-drawn prototype

I wanted to make four basic changes to the xkcd comic: use the vertical axis consistently to show spatial progression, provide close-ups for complex narrative sequences, include every character and event, and add the days of the week to orient the viewer in time. Only the last of these changes wound up in the final product, however.

The vertical axis in this version proceeded from Bethany at the top, through the Mount of Olives and various places in Jerusalem, and ended at Emmaus. On a map of Holy Week events, this axis approximates a line running from east (Bethany) to west (Emmaus). Using the vertical axis this way encodes more information into the chart, allowing you to see everything that happened in a single location simply by following an imaginary horizontal line across the chart. Unfortunately, it also leads to a lopsided chart that progresses down and to the right, creating huge amounts of whitespace on a rectangular canvas. I didn’t see that problem yet, however.

I did see that the right half of the chart (Friday to Sunday) was much denser than the left half—I’d need to space that out better when creating a digital version.

Version 2: Drawing Freehand in Illustrator

Mouse-drawn prototype

I have a confession: I’d never used Adobe Illustrator before this project. Most of my image work uses pixels; Photoshop is my constant companion. But this project would have to use vectors to allow for the constant fiddling necessary to produce a decent result at multiple sizes. So, Illustrator it was.

My first goal was to reproduce the pencil drawing with reasonable fidelity. I used my mouse to draw deliberately wobbly lines that mimicked the xkcd comic. Now, if I’d had more experience with Illustrator, the hand-drawn effect might have worked. But making changes was incredibly annoying; I had to delete sections, redraw them, and then join them to the existing lines. It took forever to make minor tweaks; what would I do when I needed to move whole lines around (as frequently happened later in the process)? Indeed, if you look closely, you’ll see entire swaths of the chart misplaced. (Why are the disciples hanging out in the Temple after Jesus’ arrest?) No, this hand-drawn approach was impractical for someone of my limited Illustrator experience. I needed straight lines and a grid.

Version 3: The Grid

Grooving with a 1970s grid style

My wife says that this version reminds her of 1970s-style album covers. She’s right. Nevertheless, it formed the foundation of the final product.

So, what are the problems here? First, the lines weigh too much. Having given up a pure freehand approach, I wanted a more transit-style map (used for subways / the Underground) with straight lines and restricted angles. I’m most familiar with Chicago’s CTA map and thought I’d emulate their style of thick lines that almost touch. This approach leads to lots of heavy lines that don’t convey additional information—it’s also tricky to round the corners of such thick lines without unsightly gaps appearing (again, for someone of my limited Illustrator experience).

The second problem is the extreme weight in the upper left of the chart, far out of proportion to the gravity of events there. The green, brown, and black lines represent Peter, James, and Judas, who don’t play prominent roles until later in the story. They’re adding lots of “ink” to the chart and not conveying useful information. They had to go.

Why not simply lighten the colors–after all, why is Judas’s line black? Simple: black often represents evil. Similarly, Jesus’ line is red to represent his blood. The Jewish leaders are blue because it contrasts well with red, and most of the chart involves conflict between Jesus and the Jewish leaders (with the pink crowd usually acting as a buffer to prevent Jesus’ arrest). Pilate and Herod are imperial purple. Orange is similar in hue to Jesus’ red, so the disciples are orange. I tried not to get too heavy-handed with the symbolism, but there it is.

Most of the other colors are arbitrary (i.e., available in Illustrator’s default palette and of roughly the same saturation as the symbolic colors). John would be sharing a lot of space on the chart with Mary Magdalene and the other women, so I tried to give them colors (green, olive, yellow) that worked well together. The only future change from the color scheme in this version involves the guards, who change from cyan (too bright) to a light purple.

Version 4: Less Technicolor

Lighter lines open up the image considerably

This version reduced the line weight and introduced Peter, John, and Judas only when they needed to appear as independent entities in the story. It works better, but there are still two problems with it.

First, look at the giant areas of whitespace in the bottom left and top right (especially the top right). Using the vertical axis to indicate absolute travel through space is a nice idea, but I couldn’t figure out how to do it without creating these huge gaps. In the next version, I abandoned the vertical-axis-as-space idea—it now indicated travel between places, but you could no longer follow a horizontal line to see everything that happened in a single place.

Second, I realized that I wouldn’t be able to incorporate every event and person, as they added clutter. I could have added close-ups to illustrate these details—obviously there was enough space for them. However, I felt that including them would distract from the main point: to show Holy Week at a glance. I’m still a bit torn over omitting them, but I think it was a better decision to reduce the total space used by the chart.

I also abandoned the idea that Jesus went to the Temple on Wednesday. Some commentators think he did; others disagree. From a story-structure standpoint, I like the idea that Judas slipped away from the other disciples to bargain for his thirty pieces of silver while Jesus was teaching in the Temple. However, the text is ambiguous on when exactly Judas agreed to betray Jesus and what Jesus was doing on Wednesday.

Version 5: Text

Final version with text

This is the final version. It condenses a lot of vertical and horizontal space; moves some lines around so they overlap less; and, most importantly, adds text: titles for major events; shading and place names for major locations; verse references; line labels; and a short explanation.

The xkcd chart is brilliant in that it doesn’t need a key: following recent trends in UI design, all the labels are inline. I definitely wanted to keep that approach, which meant making lots of labels and placing them on the lines. Again, my lack of experience with Illustrator showed up: I couldn’t get the text to center on the lines automatically, and I had trouble creating an outer glow on the text to provide some contrast with the background and make sure that the text was legible. (Black text on a bright blue background is an unpleasant combination.) But the glow always ate into the letters. Thus, I ended up creating lots of pixel-perfect, partially transparent rectangles as backgrounds for the labels. Some of the person lines had somehow slipped out of alignment with the grid, so I had to do a lot of clicking to get things back into order. In retrospect, it was good that I had to make the rectangles; I might not otherwise have noticed that the lines weren’t all where they were supposed to be.

The shaded boxes to indicate places are straight-up rounded rectangles (though I’m not sure why the corner radius is a global application preference in Illustrator). These boxes, borrowed from xkcd, replace the vertical-axis idea I earlier toyed with.

Finally, I added event titles and verse references. Here I tried to be comprehensive, including references even when I didn’t have a title to put with them. For example, there are two fig tree stories in the Gospels, but I only titled one of them. The references are available to you if you want to read both, though.

Conclusion

This project was fun, if time-consuming. In total, it took somewhere between forty and sixty hours (much of it spent climbing Illustrator’s learning curve). The chart ended up looking less like the xkcd comic and more like a transit map than I was expecting at the outset, but that’s OK. I’m now a whole lot more familiar with the Holy Week timeline, and I hope that others find the chart useful, too. If it helps improve Bible literacy even a little bit, then I consider it a success.

What Twitterers Are Giving up for Lent (2011 Edition)

March 10th, 2011

The top 100 things that people on Twitter are giving up for Lent in 2011.

Congratulations, I guess, go this year to Charlie Sheen, who came in at both #23 and, with “tiger blood,” at #90. Justin Bieber is up several spots this year, so he hasn’t quite crested yet. The next-highest celebrity, who didn’t make the top 100, is British boy band One Direction.

“Trophies,” at #69, refers to the English soccer club Arsenal‘s recent defeat, or something.

The later start to Lent this year means that “snow” doesn’t appear on the list–last year, it was #48. Myspace hangs on at #99, dropping 48 places.

This list draws from 85,000 tweets from March 7-10, 2011, and excludes retweets.

Rank Word Count Change from last year’s rank
1. Twitter 4297 0
2. Facebook 4060 0
3. Chocolate 3185 0
4. Swearing 2527 +1
5. Alcohol 2347 -1
6. Sex 2093 +3
7. Soda 1959 -1
8. Lent 1493 -1
9. Meat 1352 -1
10. Fast food 1303 0
11. Sweets 1252 0
12. Giving up things 778 +7
13. School 768 +27
14. Religion 745 +1
15. Coffee 707 -3
16. You 675 +6
17. Social networking 665 +15
18. Chips 664 +3
19. Junk food 594 -1
20. Bread 571 +6
21. Smoking 555 -4
22. Candy 541 -8
23. Charlie Sheen 511  
24. Work 482 +4
25. Stuff 467 -2
26. Catholicism 436 -10
27. Food 395 +3
28. Shopping 363 +1
29. Marijuana 358 +31
30. Beer 346 -10
31. Fried food 307 -7
32. Homework 306 +27
33. Cheese 297 +4
34. Cookies 293 +11
35. Red meat 285 -10
36. Masturbation 285 +8
37. Virginity 253 +26
38. Pancakes 252 +20
39. Rice 236 -5
40. Booze 235 +2
41. Coke 234 -3
42. Boys 229 +24
43. Sugar 229 -16
44. Sobriety 226 +10
45. Procrastination 226 -10
46. Nothing 219 +21
47. Winning 219  
48. Ice cream 211 -7
49. Caffeine 203 -16
50. McDonald’s 195 +27
51. Church 188 +28
52. Wine 188 -3
53. TV 184 -7
54. Starbucks 183 -15
55. Texting 182 -12
56. Liquor 181 -1
57. Negativity 180 +26
58. Carbs 179 +10
59. Christianity 177 -12
60. Justin Bieber 176 +9
61. Pizza 175 -11
62. French fries 159 +2
63. Me 157 +9
64. Losing 155  
65. Men 152 -13
66. Fizzy drinks 151  
67. Porn 147 +4
68. Lint 147 -11
69. Trophies 144  
70. Tumblr 144  
71. Desserts 142 -15
72. Chicken 140 +15
73. Pork 139 -3
74. Cake 132 +8
75. Tea 127 +19
76. Sarcasm 127 +14
77. Diet Coke 119 -16
78. Laziness 118 -13
79. Sleep 117 -6
80. Jesus 115 -4
81. College 111  
82. Internet 110 -46
83. Complaining 108 -9
84. Breathing 103  
85. Takeout 98  
86. Beef 98 -8
87. People 96 +11
88. New Year’s resolutions 96 +1
89. Him 94 -5
90. Tiger blood 92  
91. Makeup 91  
92. Juice 90 -7
93. Clothes 89  
94. My phone 88  
95. God 87 -15
96. Abstinence 85 -15
97. Stress 84  
98. Chipotle 82  
99. Myspace 81 -48
100. Eating out 81 -25

Image created using Wordle.

Quantifying Traditional vs. Contemporary Language in English Bibles Using Google NGram Data

December 27th, 2010

Using data from Google’s new ngram corpus, here’s how English Bible translations compare in their use of traditional vs. contemporary vocabulary:

Relative Traditional vs. Contemporary Language in English Bible Translations
* Partial Bible (New Testament except for The Voice, which only has the Gospel of John). The colors represent somewhat arbitrary groups.

Here’s similar data with the most recent publication year (since 1970) as the x-axis:

Relative Traditional vs. Contemporary Language in English Bible Translations by Publication Year

Discussion

The result accords well with my expectations of translations. It generally follows the “word for word/thought for thought” continuum often used to categorize translations, suggesting that word-for-word, formally equivalent translations tend toward traditional language, while thought-for-thought, dynamically equivalent translations sometimes find replacements for traditional words. For reference, here’s how Bible publisher Zondervan categorizes translations along that continuum:

A word-for-word to thought-for-thought continuum lists about twenty English translations, from an interlinear to The Message.

I’m not sure what to make of the curious NLT grouping in the first chart above: the five translations are more similar than any others. In particular, I’d expect the new Common English Bible to be more contemporary–perhaps it will become so once the Old Testament is available and it’s more comparable to other translations.

In the chart with publication years, notice how no one tries to occupy the same space as the NIV for twenty years until the HCSB comes along.

The World English Bible appears where it does largely because it uses “Yahweh” instead of “LORD.” If you ignore that word, the WEB shows up between the Amplified and the NASB. (The word Yahweh has become more popular recently.) Similarly, the New Jerusalem Bible would appear between the HCSB and the NET for the same reason.

The more contemporary versions often use contractions (e.g., you’ll), which pulls their score considerably toward the contemporary side.

Religious words (“God,” “Jesus”) pull translations to the traditional side, since a greater percentage of books in the past dealt with religious subjects. A religious text such as the Bible therefore naturally tends toward older language.

If you’re looking for translations largely free from copyright restrictions, most of the KJV-grouped translations are public domain. The Lexham English Bible and the World English Bible are available in the ESV/NASB group. The NET Bible is available in the NIV group. Interestingly, all the more contemporary-style translations are under standard copyright; I don’t know of a project to produce an open thought-for-thought translation–maybe because there’s more room for disagreement in such a project?

Not included in the above chart is the LOLCat Bible, a non-academic attempt to translate the Bible into LOLspeak. If charted, it appears well to the contemporary side of The Message:

The KJV is on the far left, The Message is in the middle, and the LOLCat Bible is on the far right.

Methodology

I downloaded the English 1-gram corpus from Google, normalized the words (stripping combining characters and lowercasing them), and inserted the five million or so unique words into a database table. I combined individual years into decades to lower the row count. Next, I ran a percentage-wise comparison (similar to what Google’s ngram viewer does) for each word to determine when it was most popular.

Then, I created word counts for a variety of translations, dropped stopwords, and multiplied the counts by the above ngram percentages to arrive at a median year for each translation.
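The pipeline above can be sketched roughly like this. Everything here is illustrative: the frequencies and decades are invented placeholders, not real Google ngram values, and the real version runs against a database of roughly five million words rather than an in-memory dict.

```python
# Hypothetical per-decade corpus share for a few words, standing in
# for the Google 1-gram percentages described above.
NGRAM_SHARE = {
    "thee": {1800: 9e-5, 1850: 6e-5, 1900: 3e-5, 1950: 1e-5, 2000: 5e-6},
    "you":  {1800: 1e-3, 1850: 1.2e-3, 1900: 1.5e-3, 1950: 2e-3, 2000: 3e-3},
}

def peak_decade(word):
    """Decade in which the word's share of the corpus peaked."""
    shares = NGRAM_SHARE[word]
    return max(shares, key=shares.get)

def weighted_median_year(word_counts):
    """Median peak-decade across all word occurrences, weighted by count.

    Words absent from the ngram table (e.g., dropped stopwords) are ignored,
    mirroring the stopword-dropping step in the text.
    """
    pairs = sorted((peak_decade(w), c) for w, c in word_counts.items()
                   if w in NGRAM_SHARE)
    total = sum(c for _, c in pairs)
    running = 0
    for decade, count in pairs:
        running += count
        if running * 2 >= total:
            return decade
```

For example, a text that uses “thee” five times and “you” twice would score as 1800, since most of its word occurrences peak in that decade.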

The year scale (x-axis on the first chart, y-axis on the second) runs from 1838 to 1878, largely, as mentioned before, because Bibles use religious language. Even the LOLCat Bible dates to 1921 because it uses words (e.g., “ceiling cat”) that don’t particularly tie it to the present.

Caveats

The data doesn’t present a complete picture of a translation’s suitability for a particular audience or overall readability. For example, it doesn’t take into account word order (“fear not” vs. “do not fear”). (I wanted to use Google’s two- or three-gram data to see what differences they make, but as of this writing, Google hasn’t finished uploading them.)

I work for Zondervan, which publishes the NIV family of Bibles, but the work here is my own and I don’t speak for them.

Evaluating Bible Reading Levels with Google

December 11th, 2010

Google recently introduced a “Reading Level” feature on their Advanced Search page that allows you to see the distribution of reading levels for a query.

If we constrain a search to Bible Gateway and restrict URLs to individual translations, we get a decent picture of how English translations stack up in terms of reading levels:

According to this methodology, the Amplified Bible is the hardest to read (probably because its expanded, bracketed renderings produce long sentences), and the NIrV is the easiest.

Caveats abound:

  1. URLs don’t have a 1:1 correspondence to passages, so some passages get counted twice while others don’t get counted at all.
  2. Google doesn’t publish its criteria for what constitutes different reading levels.
  3. These numbers are probably best thought of in relative, rather than absolute, terms.
  4. Searching translation-specific websites yields different numbers. For example, constraining the search to esvonline.org results in 57% Basic / 42% Intermediate results for the ESV, massively different from the 18% Basic / 80% Intermediate results above.

Download the raw spreadsheet if you’re interested in exploring more.

Venn Diagram of Google Bible Searches

October 25th, 2010

Technomancy.org just released a Google Suggest Venn Diagram Generator, where you enter a phrase and three ways to finish it: for example, “(Bible, New Testament, Old Testament) verses on….” It then creates a Venn diagram showing you how Google autocompletes the phrase and where the suggestions overlap.

The below diagram shows the result for “(Bible, New Testament, Old Testament) verses on….” The overlapping words–faith, hope, love, forgiveness, prayer–present a decent (though incomplete) summary of Christianity.

A Venn diagram shows completions for (X Verses on...): Bible (courage, death, friendship, patience), New Testament (divorce, homosexuality, justice, tithing), Old Testament (Jesus), NT + Bible (hope, strength), OT + Bible (faith), OT + NT (marriage), and all three (forgiveness, love, prayer).

Procedurally Generating Archaeological Sites

October 12th, 2010

Walking around an archaeological site–whether an active dig or excavated ruins–makes you wonder what it would be like to see the site in its glory days. Existing computer tools make it possible to model small-scale sites virtually (a building, perhaps), but anything larger than a city block would take a long time to create. Even a small city is beyond the capabilities of any but the most dedicated team.

One solution is procedural generation, where a human designer lays down a few rules–a basic city plan, for example–and a computer fills in the rest according to those rules. The result is a complete rendering of a city filled with buildings that plausibly inhabit the space, with a human only having to set up the initial parameters. Consider this reconstruction of Pompeii:

The creators of this video started with street plans and a variety of historically correct architectural models. A computer then generated buildings that fit the excavated ruins, resulting in a city that you can tour virtually. While it undoubtedly has inaccuracies, the result is compelling.

Pompeii is better-preserved than most ancient cities, but you can apply a similar technique to any archaeological site. Archaeologists have partially excavated many biblical cities and know at least part of each city’s layout. Even where the full plan is unknown, they can guess what features a city of a given size needs; starting from what archaeologists know, a computer can extrapolate a plausible street plan for the rest of the city. (I suppose that you could run the simulation many times and generate a probability of where a certain building–such as a synagogue–is likely to be.)
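That run-it-many-times idea is just a Monte Carlo estimate. Here’s a toy sketch under entirely invented rules: fix the excavated part of a small city grid, randomly place the remaining buildings many times, and tally where a hypothetical synagogue most often lands. A real reconstruction would use historically informed placement rules rather than uniform randomness.

```python
import random

GRID = [(r, c) for r in range(4) for c in range(4)]
EXCAVATED = {(0, 0): "house", (0, 1): "market"}  # known from the dig

def random_layout(rng):
    """One plausible fill-in of the unexcavated cells (toy rules)."""
    layout = dict(EXCAVATED)
    unknown = [cell for cell in GRID if cell not in EXCAVATED]
    rng.shuffle(unknown)
    layout[unknown[0]] = "synagogue"  # exactly one synagogue per city
    for cell in unknown[1:]:
        layout[cell] = rng.choice(["house", "shop"])
    return layout

def synagogue_probabilities(trials=10_000, seed=42):
    """Estimate, per cell, how often the synagogue lands there."""
    rng = random.Random(seed)
    counts = {cell: 0 for cell in GRID}
    for _ in range(trials):
        layout = random_layout(rng)
        site = next(c for c, b in layout.items() if b == "synagogue")
        counts[site] += 1
    return {cell: n / trials for cell, n in counts.items()}
```

Excavated cells never receive the synagogue, so their probability is zero; the remaining probability spreads across the unknown cells, giving exactly the kind of likelihood map described above.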

These projects don’t often generate interior spaces or simulate objects like furniture, both of which dramatically increase the complexity of the simulation for only a modest benefit. But there’s no reason why we couldn’t model interior spaces. A forthcoming game called Subversion, for example, uses procedural generation on both macro and micro scales: it generates both complete cityscapes and architectural floorplans of the buildings that it creates.

A screenshot from Subversion shows a building's procedurally generated floorplan.

Recreating interiors for ancient houses is fairly straightforward: floorplans weren’t nearly as complicated as they are today. Imagine walking around ancient Capernaum, for example, and visiting the house where people lowered a paralytic through the roof. Architecture plays a crucial role in the story, a role that a virtual-reality model would help illuminate.

Further Reading

  1. Procedural, Inc. creates software for procedurally generating cities, both modern and ancient.
  2. Rome Reborn from the University of Virginia recreates ancient Rome using a combination of hand-modeled buildings (for thirty models and 250 elements) and procedurally generated buildings (for the remaining 6,750 buildings). Academic papers provide more technical detail, especially the one by Dylla, Kimberly, Frischer, et al. (PDF). They use the Procedural, Inc. software.
  3. A Subversion video shows the steps a computer goes through to generate a cityscape.
  4. Procedural 3D Reconstruction of Puuc Buildings in Xkipché demonstrates an academic application of the technology applied to archaeology.
  5. Magnasanti talks about the “ultimate” SimCity city and was the inspiration for this post.