Nano Banana Pro does a plausible job of turning a real photo of an archaeological site into what the photo might have looked like if you’d taken it from the same vantage point thousands of years ago. You can imagine an app running on your future phone that lets you turn your selfies at historical sites into realtime, full-blown reconstructions (complete with changing your clothes to be historically appropriate).
Here’s a reconstructed view of Ephesus (adapted from this photo by Jordan Klein). I prompted it to add the harbor in the distance, which no longer exists in the modern photo.
Leviticus probably isn’t your favorite book of the Bible, with its long lists of cleanliness regulations and priestly procedures. But I’ve long thought that the natural format for Leviticus is the flowchart: do this, then this, then this. A flowchart makes the prose much easier to follow. So I spent about thirty minutes a week over the past year turning Leviticus into a series of flowcharts by hand.
However, with Nano Banana Pro, I was able to make more progress in an afternoon than I had in a year—going from raw Bible text to finished flowcharts in four hours. I didn’t even use any of the work I’d done over the past year.
Here are some examples of finished flowcharts:
Methodology
I first generated some test flowcharts to get a visual style I liked. I wasn’t planning on the illustrations being so friendly, but Nano Banana Pro came up with a clear and pleasing style, so I went with it.
My first thought was to display all the Bible text—NBP could actually handle it—but the summary view I ended up with was easier to follow, visually.
From there, it was mostly a matter of choosing logical verse breaks for each flowchart, which ChatGPT helped with. I then used this prompt and gave it a previously generated flowchart as a style reference:
Create an image of a flowchart for Leviticus [chapter number] (below). Use the image as a stylistic model. Match its styles (not content or exact layout), including text, arrow, box, and imagery styles. Structure your flowchart so that it fits the content. Integrate the images into the boxes themselves where appropriate; they’re not just for decoration. Present a summary, not all the text. Indicate relevant verse numbers, and include the specific verse numbers in the title, not just the chapter number. Never depict the Lord as a person.
[Relevant Bible text]
Often it took two or more tries to get the look I wanted, or to ensure that it got all the logic right. I originally wanted to have all the clean/unclean animals on one flowchart, for example, but I couldn’t get the level of detail I was going for. So they’re broken up by animal type into multiple flowcharts.
On the other hand, even when I forgot to adjust the chapter number in my prompt, NBP would still show the correct chapter number in the output—it knew the chapter I meant, not the chapter I said.
All the image resizing and metadata work on my side to prepare the final webpage was vibecoded. It wasn’t hard code, but it was even easier just to explain to ChatGPT what I wanted to do.
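To give a flavor of that vibecoded prep step, here's a minimal sketch of the kind of script involved: resize each generated flowchart to a web-friendly width and record its dimensions in a JSON manifest. The paths, the 1200px target, and the Pillow-based approach are my assumptions for illustration, not the actual script.

```python
# Hypothetical sketch of the image-prep step: resize each flowchart PNG
# and record its final dimensions in a JSON manifest for the webpage.
import json
import tempfile
from pathlib import Path

from PIL import Image

MAX_WIDTH = 1200  # assumed target width for the webpage


def prepare_images(src_dir: Path, out_dir: Path) -> dict:
    """Resize PNGs wider than MAX_WIDTH and return a {name: (w, h)} manifest."""
    out_dir.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for path in sorted(src_dir.glob("*.png")):
        img = Image.open(path)
        if img.width > MAX_WIDTH:
            ratio = MAX_WIDTH / img.width
            img = img.resize((MAX_WIDTH, round(img.height * ratio)))
        img.save(out_dir / path.name)
        manifest[path.name] = img.size
    (out_dir / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return manifest


# Demo with a dummy 2400x3000 image standing in for a generated flowchart
with tempfile.TemporaryDirectory() as tmp:
    src, out = Path(tmp) / "src", Path(tmp) / "out"
    src.mkdir()
    Image.new("RGB", (2400, 3000), "white").save(src / "leviticus-01.png")
    manifest = prepare_images(src, out)
    print(manifest)  # {'leviticus-01.png': (1200, 1500)}
```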
Discussion
These flowcharts are better than I could have executed on my own and only took about four hours to create, from start to finish. By contrast, my earlier, manual process involved taking notes in a physical notebook, and I’d only made it to Leviticus 21 after twenty hours of work. Turning those notes into a finished product would’ve taken perhaps another 100 hours. So I got a better product for 1/30 the time investment, at a cost of $24 to generate the images.
Those twenty hours I spent with Leviticus weren’t lost, as ultimately any time spent in the Bible isn’t. By the time I generated these flowcharts, I already had an idea of what the content needed to be and knew that it worked well in flowchart form.
But still, I didn’t add much value to this process. Anyone with a spare $24 could’ve done what I did. I expect that people will create custom infographics for their personal Bible studies in the future—why wouldn’t they?
The main risk here involves hallucinations. NBP sometimes misinterpreted the text, and the arrows it drew didn’t always make sense. I reviewed all the generated images to cut down on errors, but some could’ve slipped through.
As you can tell from my recent blog posts, I think that Nano Banana Pro represents a step change in AI image-generation capability. It unlocks whole new classes of endeavors that would’ve been too costly to consider in the past.
In April, I had GPT-4o create a bunch of maps of the Holy Land based on an existing public-domain map. My chief complaint at the time was that GPT-4o “falls apart on the details”—it gives the right macro features but hallucinates micro features (such as omitting specific hills and valleys and creating nonexistent rivers).
Nano Banana Pro changes that. It preserves features both big and small and doesn’t alter the location of features you give it, which means that you can hand it a map, have it transform the look, and then export it back out of Nano Banana with the correct georeferencing. You can completely change the appearance of a map and just swap it out for your purposes.
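One simple way to carry the georeferencing across, assuming the restyled image has the same pixel dimensions as the original: reuse the original map's plain-text "world file" (the six-line sidecar that GIS tools read for pixel size and origin). The filenames and coordinates below are illustrative, not from my actual workflow.

```python
# Sketch: since Nano Banana Pro keeps features in place, a restyled map of
# the same pixel dimensions can reuse the original's world file (six lines:
# x pixel size, two rotation terms, y pixel size, origin x, origin y).
import shutil
import tempfile
from pathlib import Path


def reuse_world_file(original: Path, restyled: Path) -> Path:
    """Copy the original's .pgw next to the restyled image so GIS tools
    place it identically. Assumes identical image dimensions."""
    dst = restyled.with_suffix(".pgw")
    shutil.copy(original.with_suffix(".pgw"), dst)
    return dst


# Demo with a made-up world file (0.001 degrees per pixel, origin near the Levant)
with tempfile.TemporaryDirectory() as tmp:
    orig = Path(tmp) / "holy-land.png"
    new = Path(tmp) / "holy-land-restyled.png"
    orig.with_suffix(".pgw").write_text("0.001\n0\n0\n-0.001\n34.0\n33.5\n")
    dst = reuse_world_file(orig, new)
    content = dst.read_text()
    print(dst.name)  # holy-land-restyled.pgw
```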
This time, I started with the same public-domain map but had Nano Banana Pro extend it so that it would have the same 2:3 aspect ratio as the GPT-4o images. It did a phenomenal job. If you’ve heard of the “jagged frontier” of AI, this work is an example of “sometimes it’s amazing.” There’s no reason why it should be so good at creating a map this accurate. But here we are. (You can download the 4K version of the generated image.)
Then I ran the same prompts on Nano Banana Pro that I used for the earlier GPT-4o images. The results preserve all the details but apply the appropriate style. While the Nano Banana Pro images are more accurate, I feel like the GPT-4o images were, on the whole, more aesthetically pleasing for the same prompt. On the other hand, the NBP images followed the prompts way better. Only a few of the more heavily stylized NBP images inserted the nonexistent river between the Red Sea and the Dead Sea.
Compare the “shattered crystal” look between GPT-4o and Nano Banana Pro. GPT-4o is more conceptual, while Nano Banana Pro is more literal.

Compare the “painter’s impression” look between GPT-4o and Nano Banana Pro. To my eye, the GPT-4o one captures Impressionism better.
Below are some of my favorite Nano Banana Pro images. The first two recreate the Shaded Blender look that’s so hot right now. The second two show how NBP can change up the style while preserving details. I especially love how the last one makes the Mediterranean Sea feel vaguely threatening, which captures ancient Israelites’ feelings toward it.
This image (made with Nano Banana Pro) recreates one of my favorite views of the Holy Land. The original (by Hugo Herrmann) dates from 1931 and is in the public domain. The use of forced perspective makes the topography of the region clear, especially the relationship of the Jordan rift valley to both the Mediterranean Sea (to the west) and the hilly terrain (to the immediate east and west). Mount Hermon in the far north makes clever use of the horizon line to show its dominance.
A view like this also illustrates why biblical writers talked about going “up” to Jerusalem (which is on the peak nearly due west from the northern end of the Dead Sea near the bottom).
The original uses an older style that’s less immediately accessible to the modern eye. Nano Banana Pro is the first AI image generator to do a good job at updating the original’s appearance while removing text and other modern features. Nano Banana Pro also preserves topographic details (which are stylized in the original and not completely accurate) amazingly well. You can tell that it’s AI-generated if you zoom in on the high-resolution version linked above, though—its details feel imprecise compared to what a human would create.
I wanted to have Nano Banana Pro draw Saul’s path from Jerusalem to Damascus using a map reference, but all its attempts were wrong in various ways. So it does have limits. But those limits probably won’t exist in six months.
Google this week launched Nano Banana Pro, their latest text-to-image model. It far outshines other image generators when it comes to historical recreations. For example, here’s a reconstruction of ancient Jerusalem, circa AD 70:
I gave it this photo of the Holyland Model in Jerusalem and told it to situate it in its historical and geographical context. Some of the topography isn’t quite right, but it’s pulling much of that incorrect topography from the original model. It can also make a lovely sketched version.
It also does Beersheba. Here I gave it a city plan and asked it to create a drone view. The result is very close to the plan; my favorite part is the gate structure and well.
It was somewhat less successful with Capernaum (below). I gave it a city plan and this photo of the existing ruins. It’s kind of close, though it doesn’t exactly match the plan. It’s almost a form of archaeological impressionism, where the image gives off the right vibes but isn’t precisely accurate. Also try a 3D reconstruction of this image using Marble from World Labs.
Finally, I had it create assets that it could reuse for other cities for a consistent look:
I then had it create a couple of typical hilltop shepherding settlements using the assets it created (again using “drone view” in the prompt):
Itiner-e is a new and free (CC-BY) dataset of Roman roads, supplanting AWMC as the most-extensive and highest-resolution road data available. The announcement article in Nature describes the labor-intensive process of creating the 14,769 road segments that constitute the dataset.
Compared to past datasets, it more-extensively fills out roads in the Roman province of Judea, which is relevant to much of the New Testament. Here, for example, is a possible route that Saul took between Jerusalem and Damascus for his “road to Damascus” moment. The Itiner-e tool also tells you that it would have taken about 68 hours to walk this distance.
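That 68-hour figure passes a back-of-envelope check. Assuming a typical walking pace of about 5 km/h, it implies a route of roughly 340 km; the real Itiner-e estimate models terrain and road type, so the numbers below are just a sanity check, not its method.

```python
# Back-of-envelope check on the Itiner-e walking estimate.
# The route length and pace are assumptions, not Itiner-e's inputs.
route_km = 340        # assumed road distance, Jerusalem to Damascus
pace_km_per_h = 5.0   # typical flat-ground walking pace
hours = route_km / pace_km_per_h
print(hours)  # 68.0
```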
Last month’s release of GPT-4o’s image-generation capabilities brought a huge improvement in instruction following—specifically, it can now make maps that (more or less) match real geography.
The results match what James Farrell found in his similar cartographic explorations: GPT-4o creates “generally accurate topography” but falls apart on the details. In these maps, for example, it really likes to connect the Dead Sea and the Red Sea with a nonexistent river. And it includes the Sea of Galilee only when it feels like it. The topography itself—hills, valleys—is broadly correct but wrong in the specifics.
It tends to do better at geographically accurate reproduction when it’s generating something close to what it likely saw in its training data. Sometimes modern features, like country borders, leak through into the generations.
This kind of “vibe cartography” is different from what JJ Santos describes when using a similar term, where you can use Claude to automate map creation inside QGIS. In that process, you should end up with geographically “correct” results, but you’d have to spend a lot of time to achieve the artistic effects in the more conceptual maps here.
Evan Applegate at the Very Expensive Maps podcast likes to say that “you should make your own maps.” I don’t know that he’d consider this process to be “making” a map so much as vibing it into existence. I can imagine a cartographer using an AI to explore a certain look and then polish and execute that look using a more-traditional cartographic workflow.
Methodology
I started by uploading to Sora the finest map of the Holy Land ever created, which is in the public domain, and using that image as a base. From there, I started with this prompt:
Turn this hand drawing of the natural vegetation and topography of the Middle East into something different while maintaining the physical features (especially note that everything south of the Dead Sea is desert; there’s no river), without labels, human features, or political borders:
And followed it up with the specific style, with wording suggested by ChatGPT. For example:
A pure, traditional Swiss-style shaded relief map of ancient Israel — delicate shading for terrain, clean coastline, classic colors, masterful light sourcing.
You can find all the prompts by hovering over (or long-pressing) the images on the AI Maps page.
Posted in AI, Art, Geo | Comments Off on Doing Bible “Vibe Cartography” with GPT-4o
They’re all basically the same concept, with a happy sheep coming toward the camera. Prompting for a video is different from prompting for an image; I struggled to get good results in the limited number of generations available to me. I had more failures than successes.
Here are a couple of fails where I tried to get a video of Moses parting the Red Sea. The first one looks like a video game cutscene, but revealing a giant wall is the opposite of what I’m going for. In the second one, Moses decides to take a quick dip in the Red Sea before popping back out. Both of them are trying (and failing) to create the “wall of water” effect popularized by the movie The Ten Commandments.
If I had more credits available, I’d share more. We’re in the earliest days of text-to-video generations—the DALL·E 2 era of AI videos: they’re amazing but limited, advanced but (in retrospect) basic.
Posted in AI, Art, Video | Comments Off on Making Short Bible-Story Movies with Sora
Acts 27 recounts Paul’s shipwreck as he travels from Crete to Malta after Yom Kippur (September 24 in AD 60, approximately when this story is set). For the shipwreck portion of the voyage, his ship starts in Fair Havens on the southern coast of Crete. They’re trying to make port in western Crete but are blown by a strong wind from the northeast. The sailors are concerned about being driven into sandbars in the gulf of Syrtis, so they let the ship be blown along and eventually end up in Malta.
On November 11, 2021, Storm Blas set up this wind pattern almost exactly, connecting Crete to Malta (the strong white line represents my interpretation of a possible path):
This wind pattern comes from the mesmerizing earth.nullschool.net, where you can also play around with an animated version. (It’s way more exciting than this static image). This image reflects a point in time, while Paul’s shipwreck narrative takes two weeks. So this wind pattern would change during the voyage; this image just happens to show the appropriate wind pattern for the whole voyage.
Arguably, the wind should blow them farther south, closer to Syrtis. Cyclone Zorbas from September 27, 2018, shows an even-more-intense flow that would take a ship nearer Syrtis. It doesn’t connect to Malta, but, again, the wind patterns would change over the course of several days.
Earlier in the story, Luke describes sailing from Sidon “under the lee of Cyprus, because the winds were against us.” Then they “sailed across the open sea along the coast of Cilicia and Pamphylia” on the way to Myra. Bible maps don’t entirely agree what “the lee of Cyprus” implies for the route (some take it to mean sailing along Cyprus’s southern coast, though that interpretation creates tension with “Cilicia and Pamphylia” to the north). This image from October 29, 2023, illustrates the lee along Cyprus’s eastern coast:
Finally, the trip from Myra to Cnidus (“with difficulty”) and then to Salmone on Crete (“the wind did not allow us to go farther”) could find an expression on October 13, 2024. In this image, the winds during the segment from Myra to Cnidus are coming from the west or northwest, against the direction of travel. The strong winds from the north through the Aegean make westward travel difficult, pushing the ship south. This wind pattern appears to be typical for this time of year.
Again, I’m not arguing that these images reflect the actual wind patterns involved in Paul’s shipwreck voyage; I’m just showing that it’s possible to find modern analogues to the winds described in the story.
Posted in Geo, Visualizations | Comments Off on Visualizing the Wind Patterns Leading to Paul’s Shipwreck
Did you know that different translations insert section headings at different places in the Bible text? Some translations might want shorter sections to break up the text into more-easily digestible units, while others may prefer fewer sections to better preserve the flow of thought.
This project takes twenty English Bibles (BSB, ERV, ESV, ISV, NCV, CEB, CEV, CSB, GNT, GW, LEB, NABRE, NASB, NCB, NET, NIV, NKJV, NLT, NRSVue, and REB), identifies where each section starts and ends, and presents the aggregated data.
Specifically, it uses Sankey diagrams to plot section breaks for each book of the Bible. For example, here’s the diagram for Ruth (also in png format):
Here’s how to read this diagram: The height of each solid bar indicates the number of translations with a heading at that verse. Lighter bands emanate from each bar to where the section ends. For example, from 1:1, you can see a small band that ends at 1:7, larger bands that end at 1:14 and 2:1, and a much-larger band that ends at 1:6. The size of the bands shows the number of translations. So we can see that most translations treat 1:1-5 as a single section, and they start a new section at 1:6. Then, starting in 1:6, there’s much more variety in how long the sections are (you can see that the bands fan out to five different vertical bars).
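The underlying data for a diagram like this reduces to flows of the form (section start, number of translations, section end), which map directly onto SankeyMATIC's plain-text flow syntax. Here's a sketch with illustrative counts for the Ruth 1:1 bands described above; the numbers are made up for the example, not the real tallies from the twenty translations.

```python
# Sketch: turning aggregated section-break data into SankeyMATIC's
# "Source [count] Target" flow syntax. Counts are illustrative only.
flows = [
    ("Ruth 1:1", 12, "Ruth 1:6"),   # most translations: 1:1-5 is one section
    ("Ruth 1:1", 2,  "Ruth 1:7"),
    ("Ruth 1:1", 3,  "Ruth 1:14"),
    ("Ruth 1:1", 3,  "Ruth 2:1"),
]

lines = [f"{src} [{count}] {dst}" for src, count, dst in flows]
print("\n".join(lines))
```

Pasting lines like these into SankeyMATIC produces the bars and bands directly, with bar heights summing the counts at each verse.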
What can we learn from this visualization? The high bars at 1:1, 2:1, 3:1, and 4:1 indicate that translations insert headings at the chapter breaks in Ruth. (Ruth is unusual in this respect; most books don’t break so cleanly and unanimously.) In chapter one, you can see somewhat-large divisions at verses 6 (Naomi hears about God’s work) and 19 (Ruth and Naomi arrive in Bethlehem). But other translations pick different divisions in chapter 1: verse 7 (Naomi starts heading out to Bethlehem), verse 8 (Naomi asks her daughters-in-law to go back), verse 14 (Ruth clings to Naomi), verse 16 (“Where you go I will go”), and verse 18 (Naomi stops asking Ruth to go back). And still other translations don’t break up chapter one at all. So different translators see different moments as deserving headings, which shapes how you read the text.
Similarly, in chapter four, many translations see 4:13 as a turning point (when Boaz officially marries Ruth). The bar at 4:18 shows that some translations have a heading for David’s genealogy, but most don’t.
Lamentations is another favorite. Some translations make the acrostic structure visible to the English reader through headings, but most don’t:
Is this kind of analysis helpful? I’m not really sure. And the data complexity for most books—Ruth is manageable, but longer books are less so—is perhaps pushing Sankey diagrams past where they’re useful. But explore and decide for yourself. As usual, the data is freely available to download under a CC-BY license. I used SankeyMATIC to generate the Sankey diagrams; you can click through to SankeyMATIC to interact with the diagrams by highlighting certain bands and moving things around.
Update: to follow on with my previous post, here are two AI-generated podcasters discussing these diagrams. The part where they discuss Exodus is especially interesting to me, since I don’t discuss it in the text. The only way they’d draw their conclusions is by looking at and understanding the Sankey diagram for Exodus, knowing that Exodus 32 is about the golden calf, and interpreting it as they do. It’s impressive. Listen here.