This week, I’d like to take a brief look at something different. I recently read through the amazing webcomic, Stand Still, Stay Silent (here’s page one), set in a unique post-apocalyptic Scandinavian setting. There are a billion good things about SSSS—fascinating usage of Nordic folklore, detailed worldbuilding, delightful character interactions, and an engaging storyline among them—but something I’ve legitimately never seen anywhere else is its use of languages. It’s worth exploring these dynamics to create more linguistically vibrant worlds.
In the world of SSSS, the only areas that have survived an apocalyptic plague are the Nordic nations of Iceland, Norway, Sweden, Denmark, and Finland (at least, those are the only inhabited places we’re aware of). Unlike similar settings that take place in other locations, like the United States, the Nordic region is very linguistically diverse. Each of the five nations has its own language, and this hasn’t changed during the 90 years between the end of the world and the comic’s events. The author, Minna Sundberg, grew up in Sweden and Finland, so she brings a lot of personal experience to the table.
As tempted as I am to do a deep dive of the languages involved, a quick overview is enough to extract some valuable lessons. Minna has a great in-world graphic to explain the dynamics:
Essentially, there are two main groups: Norse languages and Finnish. The Norse languages (Icelandic, Danish, Norwegian, Swedish) all come from Old Norse. The three Scandinavian languages (all the Norse languages except Icelandic) are “mutually intelligible,” meaning that someone who speaks one of them can understand the others with some effort. (As an aside, English isn’t really mutually intelligible with any other languages; the only thing that comes close is Scots, and that’s a stretch. The lines in that clip are “If he was a wee bit closer, I could lob a caber at him, ken?” and “It’s just nae fair making us fight for the hand of a queen that doesn’t want a part of it, ken?” (“ken” here meaning “you know?”), which you can just barely make out.)
The odd one out is Finnish, which is very different from the others. Finnish isn’t even part of the Indo-European language family, which practically every other language in Europe is a member of. In technical terms, Finnish is “linguistically distant” from the Norse languages. This, along with deep cultural and religious differences, makes it very hard for Finns and other Nordic people to relate to each other.
These real-world dynamics provide lots of fodder for worldbuilders. Most people think of languages only in terms of extreme linguistic distance, like Finnish and the Norse languages. Mutual intelligibility is rarely considered. By creating settings with varying degrees of distance, you can inject both realism and nuance into your worlds.
I hope you enjoyed this brief diversion! Feedback and suggestions are always welcome.
Now that the first article, covering the basics of the early modern era, is out of the way, we can get started dissecting the elements worldbuilders can use.
One of the features that defined the early modern period was globalization. The world went from being a collection of regional powers that only interacted with neighbors to being relatively connected worldwide. Unfortunately, the vehicle for that increased connection was European imperialism and colonialism. These trends hurt a lot of people for the benefit of a few, but worldbuilders can use these events to create realistic empires of their own.
Let’s talk about colonial motivations, charters and companies, managing empires, and imperial conflicts.
To simplify greatly, the Ottoman conquest of Constantinople was the catalyst for the Age of Discovery (which led directly to colonialism and imperialism). Christian Europe was nervous that Silk Road trade was now controlled by a Muslim empire, so they began looking for ways around it. Once Europe started to move ahead militarily and technologically, they began subjugating the locals—the first modern colonies.
Historians use the phrase “Glory, God, and gold” to describe the motivations behind colonialism. (And you thought that was something Disney made up for Pocahontas.) “Gold” is the easiest to understand. Governments were looking for revenue, both through setting up trade routes and by extracting local resources. “God” refers to the push to convert locals to Christianity. The religious fervor of the Protestant Reformation and Catholic Counter-Reformation was a huge factor behind this. Lastly, “glory” captures both the drive for individual explorers to make a name for themselves and for governments to one-up each other. Of course, “glory” on the governmental level often boiled down to “gold” anyway.
One last point to consider is how these motivations affected the nature of the colonies themselves. There are a few ways to classify colonies, but the simple system we’ll use differentiates between “settler” and “exploitative” colonies. The vast majority of colonies were exploitative, meaning that their main goal was to extract local resources. These resources were used to maximize the wealth of the “metropole,” or the colonists’ homeland (this is where we get “metropolitan”). A minority were “settler” colonies, meaning that the focus was providing a place for immigrants to live. The North American, South African, and Australian colonies were all mostly settler, while the rest were mostly exploitative (though South American colonies were kind of mixed).
Charters and Companies
This isn’t a massive point, but it’s an interesting phenomenon that would make sense in many settings. For much of Europe’s colonial history, governments lacked the resources and organization to fund and manage a true empire. To remedy this, they would provide companies with “charters,” or official permissions to found and run colonies. The largest and most famous examples were the British and Dutch East India Companies. There were many others, some with incredibly small jurisdictions permitted by their charters.
Chartered companies were less common at the beginning and end of the early modern period, for different reasons. At the start, the commercial innovations of proper corporations and joint-stock companies hadn’t yet developed, limiting the for-profit sector’s ability to contribute meaningfully to colonization efforts, so empires like Spain and Portugal managed their colonies directly. At the end, nations became strong and bureaucratized enough that the companies were no longer necessary, so they started being reabsorbed into their governments.
The classic way to classify colonial administration systems is a spectrum from “direct” to “indirect.” This scheme is awkward both in timing (it was devised in the early 20th century by British and French theorists advocating for ways to manage their nations’ empires) and accuracy (the spectrum’s applicability has been challenged in recent decades), but it works well enough for worldbuilding purposes.
Direct administration, typified by the French Empire, involved immigrants from the metropole taking active charge of local government. It understandably required large relocation efforts to supply the necessary manpower, and could consume a lot of resources. The benefit was that the imperial government had a lot of control over how things were run in the colonies.
Indirect administration, typified by the British Empire, left local officials in power, but made them subservient to the imperial government. Direct systems often left some locals in government, but reduced them to figureheads intended to provide the illusion of representation to keep the locals in check. Under indirect rule, the locals genuinely governed themselves, though they could only do so within the bounds that the metropole set. This was significantly cheaper, though it could get hairy if the locals decided to try to defy their overlords.
Another drawback of the indirect system was that it fell apart if the local culture didn’t have a centralized government to begin with. This was the case in several African regions. There, the British would arbitrarily pick a local and put them in charge. These “warrant chiefs” (named after the warrants that gave them authority) were poorly received and rarely respected, for several reasons. The most obvious is that people didn’t feel the need for such an authority figure, so why would they obey? Another was that the colonizers would usually pick warrant chiefs that matched their image of a good leader, which often didn’t align with the qualities that locals actually valued. Warrant chiefs were also almost universally men, artificially creating a patriarchal structure in areas where there wasn’t one before. These areas tended to see the most resistance to imperial rule.
It’s worth mentioning that the direct-indirect spectrum applies only to exploitative colonies. Settler colonies don’t want to rule the locals; they want to displace them and settle their land.
A final consideration is how the colonizers related to the locals. This was a debate within French circles after the French Revolution, with the two sides representing “assimilation” and “association.” Those in favor of “assimilation” believed that the ideals of the Revolution were universal human constants. They advocated for culturally reeducating the locals and forcing them to adopt the metropole’s customs and language. While this was the dominant theory in the French Empire for a long time, it wasn’t the only one. Dissidents favored “association,” which said that the local society should be kept mostly intact, though usually separate from the colonists.
When colonial empires clashed, the results could be ugly. While empires were often hesitant to fight in their homelands, it was easier on the conscience to dictate distant bloodshed. As such, the colonies were often the sites of horrible proxy wars, frequently using locals as conscripts and mercenaries. In the most extreme examples, all the colonies would get involved, as in the Seven Years’ War (which is considered to be the first true global conflict; sorry, World War I).
Even when empires weren’t officially at war, there was a constant hum of low-level conflict. One of the simplest vehicles for this was the privateer system. Privateers (also called “corsairs”) were pirates that were given permission by specific governments to raid the vessels of their enemies. The “letters of marque” they were given as symbols of this permission theoretically made them immune from prosecution by other nations—even their targets—though some (notably Spain) didn’t honor these letters. (Random trivia: the US Constitution explicitly gives Congress the power to grant letters of marque.)
Privateers often reverted to piracy once the war ended or their letters expired. Pirates could range much farther than popular culture remembers. One short-lived route, called the Pirate Round (used by pirates known as “roundsmen”), ran all the way from the New York colony, down around the southern tip of Africa, past Madagascar, to the ports of India. Just something to keep in mind—your pirates don’t have to stick to areas the size of the Caribbean.
And there you have it! Looking forward to feedback and suggestions in the comments.
Not all stories have to have detailed and lifelike worldbuilding. It’s fine for something to be simple or unrealistic; these things don’t have to detract from the story. Sometimes, though, you’re expecting something to be shallow-but-enjoyable and find it to be surprisingly deep. That happened to me when I played Ni No Kuni 2.
The first Ni No Kuni was a whimsical and moving journey through an alternate world, with visuals designed by people from Studio Ghibli (My Neighbor Totoro, Castle in the Sky, etc.). The second game took the same world and decided to take players through a story rife with political intrigue and international complexity. It was completely unexpected from something I think is meant to be a kids’ game.
A lot of this works because of the nature of the plot and characters. The story follows Evan, a young king ousted by a coup, as he founds the kingdom of Evermore to fulfill the dying wish of his mentor. He’s helped in this by Roland, a president from our world who has found himself in the land of Ni No Kuni. This gives us a dynamic where players can see Evan, pure of heart but politically inexperienced, tutored by someone with an imposing amount of political acumen. I enjoyed watching Roland come to the same conclusions about the situation and make the same recommendations that I would.
I’d like to explore the geopolitics of Ni No Kuni 2’s world and story. This will be slightly spoiler-y, but I’ll try to avoid disclosing too much. If any of what I’ve described sounds enjoyable, I highly encourage you to play the game. The script can be overdramatic and the voice acting cringey—they try to mimic accents including Welsh, Scottish, pirate, and others. There are puns everywhere, some of which I find unpleasant (they make a lot of jokes involving stereotypical Chinese names, which I don’t think were meant to be insulting). Still, I believe the positives far outweigh the negatives, and I thoroughly recommend the game to everyone.
With that out of the way, let’s meet the world of Ni No Kuni.
Intro to the World
While the world of the game is never explicitly called Ni No Kuni, that’s the name I’ll use. Ni No Kuni is a fantasy world parallel to ours. For the purposes of our discussion, the most important feature is the dominant political system. The most defined political entities are kingdoms that appear to function as city-states—that is, each is made up of one populous settlement that has very little territorial control outside its borders. One kingdom does control a small coastal village that is described as its “vassal,” but that’s it. There are a few people living outside the kingdoms, but the plot doesn’t focus on them much.
There’s a clear difference between kingdoms and every other group in the world. Kingdoms are ruled by a king or queen, which is someone who has bonded with a special being called a “kingmaker.” We’ll discuss kingmakers and their characteristics later, but what’s important is that as long as the ruler has a kingmaker, the kingdom can be structured in essentially any way. There’s even a large for-profit company that counts as a kingdom, solely because its CEO has a kingmaker.
At the time of the game’s events, there are five kingdoms. The narrator informs us that there was a recent string of wars between the kingdoms, though we don’t see any evidence of this in-game (except for the kingdoms’ general unwillingness to cooperate). This recent history serves as the backdrop for Ni No Kuni 2.
Coups d’Etat and Ethnic Tension
When Roland is transported to Ni No Kuni, he finds young King Evan in the middle of being deposed in a coup (though he’s initially oblivious to this) in the kingdom of Ding Dong Dell. It takes very little time for Roland to see that the coup was planned ahead of time and falls along the line between species: the ratlike “mousekind” are ousting the catlike “grimalkin.” He is informed that for a long time, the grimalkin have been in power and have oppressed mousekind. Evan’s father attempted to repair the rift by taking on a mousekind chancellor, but the chancellor poisoned the king and is now using force to take the crown of Ding Dong Dell from Evan.
This type of ethnic conflict is sadly very common. We usually see these situations in areas like Africa and the Middle East, where externally-designed borders artificially lump different ethnicities into the same political entity. While we don’t have a clear idea of the origin of Ding Dong Dell’s situation, the results are very similar to what we see in our world. One ethnicity (or species, in this case) ends up having slightly more power than others, then uses its advantage to consolidate its position and repress its rivals.
Ironically, the steps Evan’s father took to address the problem were probably what led to the coup in the first place. Most coups require key people in power to switch sides, since they hold the resources that would be required for the insurgents to succeed. By placing a member of mousekind in a powerful role (it seems like the chancellor might have control of the military, which is especially dangerous to leave in the hands of someone with questionable loyalty), he opened the door for a coup to happen without the grimalkin changing sides.
Evan is forced to flee the kingdom with Roland. The aftermath is also in line with what we would expect from an ethnic coup. There is an immediate exodus of grimalkin refugees, fleeing the mousekind’s retaliation. The borders are soon closed to keep the grimalkin in, after which they are confined to ghettos (here, underground and monster-infested slums). Mousekind replace grimalkin in all positions of authority, and the average grimalkin faces harassment from mousekind police and citizens alike. It’s implied that this is significantly harsher than what the mice endured before the coup, but this kind of disproportionate response is also common.
Legitimacy and Social Contracts
Once outside Ding Dong Dell, Evan decides to found a new kingdom. However, he can’t just find an empty plot of land and declare himself king; he first needs to bond a kingmaker. We learn that without a kingmaker, no one will take a would-be king seriously. In political terms, the kingmaker is the source of a king’s “legitimacy.” After some adventures, Evan manages to bond a kingmaker. Well, kind of…
There are several reasons a king needs a kingmaker to be considered legitimate. First, a kingmaker will only bond with a ruler who meets certain moral criteria. The bond can even become stronger if the king is willing to sacrifice more on the people’s behalf. Second, we’re told that a kingmaker can act as a powerful soldier on the battlefield, though we don’t see this happen on a large scale. Late in the game, we also see that subjects of a kingdom can be protected from powerful magical attacks, simply because of the land’s kingmaker. Third—and most importantly—if a ruler begins mistreating his people and becomes unfit to rule, the kingmaker will rebel against the misbehaving monarch. This is a key part of the plot, as the game’s villain manipulates rulers into betraying their subjects’ trust, then steals their kingmakers from them.
This falls in line with a theory of rulership from the European Age of Enlightenment called the social contract. According to Thomas Hobbes in his book Leviathan, mankind is naturally uncivilized and brutish, so people implicitly agree to cede some of their freedoms to a government (an arrangement he calls a social contract) in exchange for order and protection. Interestingly, Hobbes concludes that the only form of government that can meet the needs of the people is an absolute monarchy. The idea that the contract can be broken comes from later theorists like John Locke: if the government violates the people’s natural rights, the contract is void and citizens are no longer required to submit to the government.
There’s also a bit of the theory of “divine right to rule.” By this philosophy, a ruler gains legitimacy by being chosen by a supernatural power. Proving their chosen status is a vital goal for rulers with this type of legitimacy. Kings in Ni No Kuni get their blessing from a magic dragon instead of a god, but it still counts.
As mentioned, we see this in action several times. A king or queen breaks the people’s trust in one way or another, severing their bond with the kingmaker. One thing I wish the game had handled differently is the effect of losing this bond. For all intents and purposes, the kingdom seems to function exactly as it did before. In some cases, the change of heart that the ruler experiences after the kingmaker is stolen can even lead to renewed loyalty from the citizens. This isn’t what would happen in real life. In divine right to rule systems, a ruler that has lost the blessing of heaven in the public’s eye can face real consequences. Opportunistic nobles can lead rebellions, for example. Ni No Kuni’s kingdoms should be breaking down, not growing stronger.
Not everyone in Ni No Kuni lives in a kingdom. On their way to retrieve their kingmaker, Evan and Roland meet and gain the loyalty of the Sky Pirates. After learning of Evan’s mission, they quickly sign up and become the first citizens of Evan’s new kingdom, Evermore.
In a world where there are key benefits to being in a kingdom, this is completely realistic. Existing kingdoms might have refused to give the Sky Pirates citizenship, since they’re dangerous outlaws. Evermore doesn’t have this luxury, since it needs residents in order to function. Batu, the Sky Pirates’ leader, claims to see Evan’s pure heart and believe in his mission, but he’s also securing a lot of benefits for his people that he would never be able to get otherwise. (It also helps that it’s implied that Evan and Batu’s adopted daughter marry after the game ends, ensuring Batu’s posterity will be part of the royal line.)
Things are different in real life, mostly because there isn’t an objective test of legitimacy. Early Rome absorbed Italian tribes by pitting them against each other, not by convincing them of its right to rule. This is usually how things work. Regardless, the way things are depicted in Ni No Kuni is a natural result of the kingmaker system, and lines up with expectations.
Once Evan has his kingmaker, it’s time to actually found Evermore. Roland stresses the importance of a kingdom’s location, prompting Evan to consider his choice carefully. In the end, they settle on an area known as the Heartlands, which seems to have everything. It’s an area of rolling plains, which look to be agriculturally fertile. In addition, they’re close to a large forest for lumber and the world’s main ocean for fishing and trade. Once they start building facilities, they discover that there are ore deposits underground as well.
There’s one important problem: the area isn’t very defensible. There aren’t any natural barriers that Evermore can use to set up a fortress. The locals are hostile to travelers, and especially hostile to Evan and his fledgling kingdom. These bandits and the area’s openness are likely why existing kingdoms haven’t been able to claim the Heartlands for themselves. (A DLC also reveals that the ghostly previous kingdom of the Heartlands will attempt to eliminate any kingdom on their turf, but that’s slightly outside the scope of this article.)
There are clear real-world parallels here, not all of them good. The story of the Heartlands and Evermore closely mirrors that of the North American Great Plains and the United States. The problematic part is the comparison of Native Americans to the Heartlands’ bandits. The US committed atrocity after atrocity, driving the Native Americans off of their ancestral lands. With this agricultural breadbasket under its control, the US’ production and population growth took off. The nation’s wealth encouraged a massive wave of immigrants, strengthening the pool of human capital and further increasing its advantage.
The Great Plains are essentially identical to Ni No Kuni’s Heartlands, though the waterways are rivers instead of the ocean. The bandits are depicted as savages, and Evan’s warring against them is framed as unambiguously good. Like the US, Evermore’s use of these precious lands leads to prosperity, bringing a flood of immigrants from the other kingdoms (these immigrants quickly eclipse the Sky Pirates in numbers). The game never acknowledges the moral grey area of building this success on forcing locals off their lands, which I would’ve liked to see.
Coalitions and Supernational Organizations
Now that the kingdom has been formed, Evan and the others are able to fully turn their attention to the plot of the game. Roland encourages Evan to come up with a national strategy, pointing out that Evan’s initial goal of “make a place where everyone can live happily ever after” is nice, but not very actionable. After some discussion, they decide to create a treaty between all the kingdoms of Ni No Kuni prohibiting military conflict. They call this treaty the Declaration of Interdependence (warned you about the puns), and collecting signatories is a major goal throughout the game.
This is the area where Ni No Kuni sacrifices verisimilitude for the sake of the story. It’s a good plot, but it would absolutely never happen in real life—especially in the premodern or early modern period that influences the game’s world. One of the biggest obstacles is that the Declaration seems to place Evermore in a dominant position, in charge of coordination and conflict resolution. Whenever a nation has taken this role, it has almost always been a regional hegemon that can essentially force other states to listen to its decisions. The notable exception is Switzerland, which managed to deal with being surrounded by powerful neighbors by committing itself to neutrality and acting as a mediator. It had a few things going for it that Evermore doesn’t, which are a bit too complex to go into now.
Instead of having one nation dominate others, something that works a bit better is having a separate, non-state organization that is (theoretically) politically independent. The United Nations serves that role now, and in a very weird way, the Catholic Church tried to accomplish this in medieval Europe. This arrangement is rare, and usually very fragile, especially in the early modern or premodern eras. It doesn’t make much difference here, though, since that’s clearly not what Evermore is trying to do.
There is one thing that the Declaration has going for it. At the same time that Evan is looking for signatories, Ni No Kuni’s kingdoms are facing an existential threat from the game’s villain. Several kingdoms sign the Declaration explicitly to join forces against him. This does happen in the real world, and is called a “coalition.” Several times in history, there have been alliances against common enemies. The classic example is Revolutionary and Napoleonic France, which faced seven successive coalitions. The difference is that coalitions are temporary by nature. Once the threat is gone, the nations naturally split up as their conflicts surface again. The Declaration is apparently meant to be permanent, and events in the epilogue make this clear.
All that said, I still think this is an area where the writers weren’t trying to follow real life. The dynamics of the plot were more important, and I think that was the right call. The game is supposed to be an idealistic exploration of basic, childlike morality, and it’s fair to focus on that in this case.
In the end, I have to congratulate the devs on making a surprisingly sophisticated political story. I had a great time, and I hope any readers will, too.
Do you have any works of fiction that surprised you with their deep worldbuilding? Let me know—maybe I’ll have a look and do an analysis!
A few people have asked for info on the early modern period. While the premodern era has many lessons worldbuilders can draw from, the rest of history definitely has its share of inspiration. This is the introduction to a series collecting what I consider to be some of the more interesting features.
I think that in order for this series to be useful, we have to lay down a foundation of what I mean when I say “early modern.” Looking at this can help put the other articles into context. Be warned: this post is more opinionated than most others.
The sections today are the definition, the Great Divergence, early modern Europe, and elsewhere in the era.
In general, the early modern era follows the Middle Ages and stops at the Age of Revolutions—a very Eurocentric definition, but one that works well enough. This time saw the world transform into a global system with gunpowder militaries and scientific inventions.
First, let’s set some start and end dates. The traditional start date of the early modern era is the Fall of Constantinople in 1453. Losing Constantinople to the Muslim Ottoman Empire forced Christian Europe to look for alternate ways to access the Silk Road (Venice still had decent access, but that wasn’t enough for most powers), spurring the Age of Discovery. You could argue for c. 1440 (Gutenberg’s printing press) or 1492 (Columbus’ arrival in the Americas), but I believe that Constantinople is the best demarcation.
For an end date, let’s use 1789, the start of the French Revolution. This event kicked off a massive chain of European wars that resulted in mostly-modern borders and the end of most absolute monarchies. Again, you could easily argue for 1776 (the American Declaration of Independence) or 1799 (the coup that put Napoleon in power).
You may notice that a lot happens in this time. Even though it covers only about 350 years, the period saw a massive amount of social, technological, political, and military change. Ironically, it’s almost easier for me to treat the premodern era as a single entity than the early modern period: even though there are several thousand years of premodern history, various constraints meant that many elements stayed largely the same throughout. By contrast, the world looks almost unrecognizable at the end of the early modern period compared to the beginning.
For this reason, it’ll be next to impossible for me to describe features that were common across time and space. I’ll have to talk about developments rather than constants. With luck, this will allow readers to make the same sort of informed worldbuilding decisions that the For Your Enjoyment series did.
Another thing that you might notice is that this time period does not include the Industrial Revolution. Factories, railroads, and everything else associated with industrialism aren’t covered.
There’s something very important about this time that has to be addressed. At the start of the early modern era, Europe was essentially a mediocre backwater. The most powerful nation was probably Ming China, though they did relatively little to project that power. However, by the end of the period, European nations were by far the strongest in the world, spreading their influence through colonies, conquest, and trade to almost all continents and regions. This development makes any serious analysis of this time period relatively Eurocentric almost by necessity.
I want to be clear here. I do not like being forced to look at history like this. One of the reasons I like premodern history is that most of the time, everyone is at the same level of power. Empires are unusual, and they occur across plenty of different regions—the Mediterranean, the Middle East, South or East Asia, etc. Modern thinkers are already too prone to looking at Europe and America while ignoring the rest of the world. However, it’s inescapable: by the end of this era, almost every region in the world was defined by its relationship with Europe (to varying degrees). We need to acknowledge this and move on.
This phenomenon—where Europe went from obscurity to global power over the course of a few hundred years, roaring past plenty of other, stronger powers—is called the Great Divergence. Trying to uncover the causes behind it has stumped historians and analysts for a long time.
Time for another thing to be very clear about. While we’re not sure exactly what was behind the Great Divergence, we are very sure that it wasn’t because of any special qualities of European people. Europeans, Christians, or white people (“whiteness” being a modern concept that would have been alien and bizarre for most of history) are not more intelligent, hardworking, cultured, etc. than other people. Even if this weren’t a morally repugnant theory that flies in the face of common sense, it has been thoroughly disproven through modern experiments and historical analysis. This is another thing that we need to acknowledge and move on from.
There are a few plausible theories for what led to the Great Divergence. The most likely rely on two factors: fragmentation and firearms.
In this context, “fragmentation” refers to when a region is split up between many governments instead of one unified one. Europe’s geography is unique in a couple key ways. While travel of individuals and small groups is easy, movement of large groups like armies can be difficult. This means that widespread domination is hard, while cultural diffusion is encouraged. Another key factor is that Europe has fairly fertile agriculture, leading to decent population growth.
This fragmented geography created an immensely competitive political environment. Local powers were always looking for more resources to support their growing populace, but had a hard time securing regional dominance because of the geography. At the same time, the fast cultural transmission meant that whenever a state achieved a key advantage, other nations would have access to it and copy it relatively quickly. This forced the governments to be constantly innovating in order to survive, and if any group made a breakthrough, the others had to adopt it quickly or risk falling behind. Because of this, important philosophical and social developments—like experimental science, for example—spread very quickly and were rapidly improved on.
These factors forcing competition and innovation came to a head throughout the early modern era. After a long period of obscurity, rising population levels meant that states couldn’t afford to be ineffective anymore.
This leads into the second factor leading to the Great Divergence: firearms. Gunpowder was not a European invention; it was a Chinese one. While it led to modest military gains in China, it wasn’t seriously invested in for a couple reasons. For one, Chinese military architecture was naturally resistant to cannon fire. For another, China was enjoying a period of stability under the Ming Dynasty. Military innovation wasn’t necessary or encouraged.
These factors were precisely reversed for the Europeans. Castles and other common fortifications were very vulnerable to cannons. In addition, like we said, European nations were forced to be constantly experimenting and innovating. They latched onto gunpowder and made effective artillery, eventually leading to portable hand-held firearms. These proved to be a massive game-changer, providing a weapon with which Europe could gain dominance over cultures the world over and extract resources to fuel their ravenous citizens.
Together, fragmentation forced Europe to evolve and firearms provided a tool to conquer. Over the course of 350 years, these gave the region the advantages needed to loom over large portions of the globe.
Early Modern Europe
Because of the Great Divergence, getting a very general, overall view of the early modern era is easiest if we break it up into “Europe” and “everywhere else.” (Still makes me grumpy.) Let’s have a brief look at some of the key developments. To make things easier, I’ll stick to familiar names and events, though readers should understand that there are a lot of very interesting people to learn about. “Most famous” doesn’t necessarily mean “most important.”
Let’s look at religion, politics, philosophy, science, economics, military, and imperialism.
Religion was one of the most influential areas. The printing of the Gutenberg Bible (c. 1455) contributed to widespread religious literacy, challenging traditional Catholic doctrine. This set the stage for Martin Luther’s Ninety-Five Theses against the Church (1517), which led to the Protestant Reformation splitting European Christianity in two. There was a long series of incredibly violent religious wars, culminating in the catastrophic Thirty Years’ War. The Peace of Westphalia at the war’s end (1648) firmly broke the Catholic Church’s political hold over Europe, led to a period of relative religious tolerance, and established the nation-state as the fundamental unit of political power (no longer answering to the Church). Here, we can see that religious developments led directly to geopolitical adaptation.
This leads into another important field: politics. Contrary to popular thought, premodern European royalty was largely dependent on the consent of the nobility. Following Westphalia, monarchs rapidly consolidated power to secure control over the new nation-states. This led to a phenomenon called absolutism, where the king or queen effectively held all governmental power (much more like what we imagine monarchs were). The greatest example was the French king Louis XIV, the Sun King, who famously said, “L’Etat, c’est moi”—“I am the state.”
Political philosophy also blossomed during this period, a time called the Age of Enlightenment. Thinkers like Thomas Hobbes, who published Leviathan (1651), argued that government was a “social contract” where the governed agreed to cede some freedoms to a ruling authority in exchange for services. In his view, the only practical government was an absolutist monarchy. Other philosophers like John Locke and Voltaire expanded these ideas into religious tolerance and anti-slavery rhetoric. Some absolutist monarchs became “enlightened despots,” using their power to enact Enlightenment policies. By the end of the period, the growing power of absolutists conflicted with Enlightenment thinkers, leading to calls for democratic rule. These inspired the American Revolution, which then inspired the French Revolution.
Science flourished here as well. The early part of the early modern era contained the Renaissance, a period of renewed interest in Greco-Roman arts and sciences. Classic figures like Leonardo da Vinci and Michelangelo were influential. This contributed to an interest in experimental and data-based science, spurred on by Copernicus’ works (1543) and Isaac Newton’s research (1687).
Finally, the early modern period saw extreme advancement in economics. The commercial revolution at the start of the era came about due to increased trade. Mercantilism, a faulty economic theory that stressed maximizing exports, became popular as imperialism began to dominate European foreign policy. At the end, the foundations of capitalism were laid as Adam Smith published The Wealth of Nations (1776).
Many of these developments happened behind the scenes as far as the rest of the world was concerned. The military and imperialist elements of European policy were what everyone else experienced.
The European rush for exploration and colonization started with the Fall of Constantinople in 1453. As mentioned, Christian Europe didn’t like the Muslim Ottoman Empire in charge of trade through the Silk Road. Governments started to look for a way around the Ottomans to get to India and China. Once the Portuguese invented the caravel and carrack, the first large ships that were really suited for long-distance ocean travel, the Age of Discovery began. Columbus, Vasco da Gama, Magellan, and others hit important milestones for global exploration. With exploration came colonization, which leads us into military advancements.
The military revolution came about almost exclusively due to gunpowder. At the start of the early modern era, warfare had an advanced medieval feel—full plate armor for knights and horses, great pike formations, massive sieges, etc. The first sign of change came with portable artillery, which could tear through castle walls as well as soldiers. By the end of the period, castles were replaced with massive, complex “bastion forts” that could withstand sieges for a very long time. Portable firearms made plate armor (mostly) irrelevant. While cavalry persisted for a while, resulting in infantry “pike and shot” squares to ward them off, it was eventually phased out. Infantry formations mostly became simple lines that fired in volleys. Sailing ships were equipped with artillery, creating the first real combat navies (before then, ships were mostly troop transports, with naval engagements more accident than anything). All these changes came at the same time as increasing state bureaucracy and resources allowed for massive professional armies, making war even more deadly.
Europeans’ growing access to the world, increased military ability, and desperation for resources led to a new, global imperialism. At the start, relations with locals were mostly equitable, with exchanges of firearms for goods. The Europeans started setting up small “port and fort” outposts in key coastal areas. Eventually, large-scale conquests of native inhabitants commenced, sometimes leading to the complete destabilization of regional powers. For most of the world, this is what the early modern era looked like: foreigners arriving and irrevocably changing the life you’ve lived, often for the worse.
Elsewhere in the Era
Finally, let’s look at what was going on in the rest of the world. Just so I’m completely clear, I want to say again that we can’t ignore what the vast majority of humanity experienced. While events in Europe had a disproportionate amount of impact on world events, they were still brought about by a small minority of people. It’s important to study and learn about what things were like on the receiving end of imperialistic expansion. I regret that this article has to be divided in this way to give a fair overview of the period.
(I’m definitely overstating the degree to which most of the world was confined by European imperialism. That’s intentional; many sources focus on how the era looked from the perspective of Europe itself. Pushing things in the other direction can provide a fresh perspective.)
I should say that many historians consider Russia and the Ottoman Empire to be effectively European. During this era, Russia established a Tsardom and expanded outwards from its base in Eastern Europe, conquering Siberia. Its hold over Siberia would only become firm with the completion of the Trans-Siberian Railroad (though that won’t come until after the Industrial Revolution).
The Ottoman Empire is one of three “gunpowder empires” that stretched through the Middle East and South Asia. Like the European powers, the Ottomans expanded throughout the period, widening their influence until it eclipsed most others in the region by the era’s end.
The Ming Dynasty of China was arguably the world’s strongest power at the start of this period. Over time, it weakened until it was overthrown by peasants (something that, despite the stories, is very rare). The Qing Dynasty that took over after the peasants were ousted tried to remain isolationist, but Europe began undermining their sovereignty by encouraging widespread opium addiction among their citizens. This came to a head shortly after the end of the early modern period in the Opium Wars of the mid-1800s, in which European powers were victorious.
Japan was arguably the only area that effectively resisted European influence. At the start of the era, a shogunate emerged with a governmental structure that was similar to Middle Ages European feudalism. After some initial interactions with Europeans, Japan also became isolationist, remaining so until 1853, when America—a non-European, but still Western nation—forced them to open their borders to trade.
India’s strongest force of the time was the Mughal Empire. Unfortunately, it dissolved just in time for the Europeans to arrive. The British and Dutch East India Companies forced coastal powers to submit to European rule. Eventually, the British would conquer the subcontinent, though that wouldn’t be completed until after the early modern period ended.
Africa was the home of several empires during the era, some of them Islamic. The most notable interaction with the Europeans involved their holdings in West Africa. The Europeans offered firearms in exchange for slaves (which West African nations often captured in their wars). The increased military strength and desire for more slaves these transactions created resulted in violent wars across the region. Africa’s interior was mostly free from European influence until the Scramble for Africa in the late 19th and early 20th centuries (a phenomenon some more provocatively call the Rape of Africa).
I would argue that the greatest disaster in the early modern era occurred on the American continents. Prior to European arrival, the Americas had several developed societies, including the formidable Aztec Empire. The Europeans swept across the area with guns and swords, but brought something even worse than their military might: smallpox. The disease completely ravaged American peoples, killing as much as 90% of the population. No epidemic in history was this deadly. In the newly-emptied lands, the Portuguese, Spanish, British, and French set up large colonies.
These are the real stories of the early modern period. We’re often enamored with what things looked like within Europe, but we can’t ignore that these advancements came about at the expense of a massive amount of people. Worldbuilders may understandably focus on the more palatable elements of the time, but we can’t forget the way things really were.
There you have it! Perhaps a bit more depressing than some of my other articles, but that has to be addressed. Let me know if you have any requests!
Well, here we are. As far as I can tell, this is the last topic I can helpfully cover about premodern life. Everything else is too varied or too complex to discuss in an article of reasonable length.
But, that leads into two new upcoming series! For Your Enchantment will revisit the topics covered in the For Your Enjoyment posts, adding magic and other fantasy elements to the mix to speculate about what might change. I’m also thinking about a For Your Enlightenment series that would look at the early modern period in the same way that we’ve examined the premodern era. Let me know what you think!
Same conditions as always: there’s an attempt to present principles that are true across most premodern societies. If my European- and Mediterranean-heavy education shows through, feel free to let me know what I’ve missed. You could also make an argument that most fantasy settings are early modern instead of premodern; I’ll look at early modern in the next series (assuming people are interested). Lastly, fantasy elements like magic and monsters change things drastically. I’ll leave that for the other series, too.
We will look at medical theory, doctors, and ailments and treatment. Most of this article is dedicated to understanding the philosophies behind premodern medicine, but I do have some comments in other sections, too.
The same perspective that helped us gain empathy regarding premodern religion can shed light on premodern medicine. These people weren’t stupid. They were trying to be scientific, but they didn’t have the tools and perspective to do it well. In addition, the high price of failure discouraged innovation and encouraged people to follow tradition; if it works, why risk killing someone to try something new? Of course, “works” is very relative, and without large sample sizes and regular experimental procedures, it’s very hard to tell whether one practice performs better than another.
Cultural norms could also make things difficult. One frequent factor is dissection taboos. Many, many societies viewed dissection as desecration of the body. Many even considered the dissection of animals to be morally wrong. This understandably made research into the nature of the body incredibly difficult. Without this information, people had to make guesses based on philosophy or religious doctrine about what was actually inside the body and how it worked. If scholars were able to dissect animals, they could try to draw parallels—the Greek physician Galen used monkeys for this. If a culture didn’t have dissection taboos—like the Egyptians—it frequently moved drastically ahead of its neighbors in medical knowledge.
Lastly, religions could be either a help or a hindrance. Some encouraged medical research and practice, while others actively repressed the medical fields, believing that these things were the domains of the divine.
Now that we understand why premodern physicians believed the things they did, let’s talk about what they believed. While a lot of different theories proposed models of how the body worked, almost all of them had one key philosophy in common. They believed that phenomena in nature, society, and morality all paralleled what happened within the body. Seasons, elements, personalities, types of food, and other things had corresponding organs or bodily functions. In general, illnesses were caused by imbalances or corruptions of these elements, and treatment involved restoring balance and/or purging corruption.
Anthropologist Charles Leslie described this by saying that traditional medical theories focused on an “all-encompassing order of things.” I call it “microcosmic medicine,” since it suggests that the body and soul are microcosms of what we see in the rest of the universe.
Many, many theories followed this model, but we can briefly look at the two that had the most wide-reaching influence. These are the four-humors model and traditional Chinese medicine.
Humorism suggested that there were four “humors,” or fluids, that controlled everything in the body: blood, phlegm, black bile, and yellow bile. There’s a possibility that this theory came from watching blood coagulate over time. If left in a container, blood will settle into four layers: black platelets, red blood cells, clear-ish white blood cells, and yellow serum. The humors had parallels to seasons, stages of life, classical elements, organs, and personalities.
Like other microcosmic medicine theories, illness was often seen as due to an imbalance between the humors. This is where bloodletting and leeches came from—it was theorized that too much blood was a cause of disease, so getting rid of the excess was a form of treatment.
While the phrase “traditional Chinese medicine” glosses over a lot of regional and temporal differences, the fundamental theory behind it is mostly the same wherever you go. The model of the body draws from the Yin-Yang model as well as the Wu Xing, or “five phases” model of elements (fire, earth, metal, water, wood). These meet to form a series of functions in the body that are named after organs, but aren’t entirely the same thing (the function “triple burner” doesn’t have any corresponding physical organ, for example).
Energy flows through these organs/functions along paths called meridians. Again, illness is seen as an imbalance of the functions or blockages of energy (“chi”). Treatments involve restoring balance, sometimes through stimulating energy by inserting needles in specific points along meridians—acupuncture.
Having covered all this, there’s one last thing I should mention. Among non-scholars, it often didn’t matter why treatment worked, only that it worked. The technical phrase is that folk medicine is “non-explanatory,” meaning that it doesn’t necessarily try to provide explanations. Formally trained doctors might be familiar with these theories, but local physicians might not know about them, and might prescribe treatments that go against the prevailing scientific consensus.
Different cultures assigned the role of “doctor” to different members of society. Physicians could be religious practitioners, they could be graduates of medical universities, they could be local leaders with no other qualifications, or they could be something else entirely.
Some societies would blend medical professions in unusual ways. Many areas in Middle Ages Europe had their barbers perform surgery. Surgery was considered a lesser art than the rest of medicine, so cutting hair was apparently enough preparation for amputating limbs. (There clearly was some training, but I’m exaggerating because I personally find this concept so bizarre.)
Generally, there were two models of education for physicians: apprenticeships or universities. As civilizations developed, they tended to move towards standard education, certifications, etc. One Islamic government required all aspiring doctors to successfully cure three members of an opposing religious group before they were clear to practice on their own.
Despite all this, in most cultures, most people would go to local, untrained physicians before resorting to trained professionals. This naturally led to a lot of inconsistency between diagnoses and treatments.
Ailments and Treatment
As a society becomes more advanced, the illnesses that people deal with change. The epidemiological transition model suggests that more developed societies experience more chronic illness (like heart disease), while less developed ones experience more infectious illnesses (like tuberculosis) and child mortality. This essentially describes the perspective shift that we need in order to imagine what premodern societies experienced. More diseases, more sick children, and fewer far-reaching and man-made difficulties like smoking and obesity.
A few illness areas that are often underrepresented are dental, ocular, and digestive issues. Large portions of the premodern medical texts I’ve looked at dealt with these issues, suggesting that they were a source of a lot of the burden that physicians experienced.
One key fact was that diagnosis (determining exactly what was wrong with the patient) and prognosis (predicting what the patient will experience in the future) were both innovations that had to be thought up. Physicians might try to learn what the patient was experiencing in order to prescribe effective treatments, but classifying illnesses into recognizable patterns was unusual. Predicting what was going to happen was also usually considered next to impossible.
Tools to diagnose illnesses or suggest treatments could be very interesting. A surprising number of cultures diagnosed based on the patient’s pulse. Other common techniques included tongue examination, smelling the breath, and other things that would be extremely unusual for us. People would also look at the person’s general physical appearance to get a sense for general predilections, though the logic won’t make sense to us—you’re thin and have a large Adam’s apple, so you’re prone to breathing difficulties.
Herbs and herbal concoctions were almost always at the forefront of medical treatment. In Europe, practitioners would often keep “physic gardens” of medicinal herbs at their homes. (These are actually the precursors to modern botanical gardens.) Middle Ages Europe had a religious theory called the “doctrine of signatures.” This doctrine stated that God had provided a cure for every illness, and the observant could tell what each plant was made for by looking for physical “signatures.” Skullcap flowers looked like skulls, so they were prescribed for headaches; lungwort looked like lungs, so it was prescribed for respiratory infections. Many premodern cultures had similar philosophies, though usually not so explicitly stated.
Often, another dimension of microcosmic medicine was morality. People who were sinful were imagined to upset balances within their body, leading to sickness. According to folklore, an ailing Mongolian warlord was advised to give amnesty to everyone in his domain; when he did this, he recovered. Similar treatments could be suggested in other cultures.
In a related vein, disease could also be related to spiritual issues. This could be essentially the same as the moral illnesses described above, or they could deal with problems in the spiritual realm. A classic example is exorcism, since the influence of malign spirits was a common thread in many religions. Here again, we can see the intersection of religion and medical science.
And there we go! If you have any other requests for the premodern era, any feedback on this article, or something you’d like to see in the upcoming series, feel free to post a comment!
Assuming your world will feature a variety of fascinating places, your characters will probably spend a lot of time travelling between them. Today we’ll look at all the factors affecting transportation in the premodern world.
The usual conditions apply. Magic changes a lot, so we’ll address that in a different article. You could argue that many fantasy settings are early modern, but we’ll focus on the premodern world today. Lastly, if my Mediterranean- and European-heavy bias of historic knowledge shows through, let me know. As much as possible, I’ll be focusing on elements that should remain common across all premodern societies.
We’ll look at travelers, transportation, roads, waterways, and banditry.
While travel was fairly common, it was a relatively small portion of the population that traveled regularly. Commoners (as defined in the first article of the series) generally lacked the resources to travel, and usually felt no need to do so. Travel for leisure is a fairly recent phenomenon—premodern peoples traveled because they had to. They might enjoy the experience, but they rarely traveled for its own sake.
There were five varieties of premodern traveler. The first is governmental travel. Officials frequently needed to go from place to place as part of their duties. Nobles may want to travel between their estates, attend official events, or visit their superiors. There are plenty of non-nobles who act on the government’s behalf as well. Several ancient empires developed postal services with couriers who served official needs.
Another category is military travel. We know from the second article of this series that an underappreciated domain of military action is operations, or getting the army from place to place. Traveling armies are unique in several ways. As discussed in the Operations section, they need to keep moving and keep taking resources from locals or risk running out of supplies. Military travel can significantly stress transportation infrastructure. This was the fundamental reason why the famous Roman road system was constructed: to make military movement easier. Any benefits to trade or other travel were secondary.
Perhaps the largest category of travelers was merchants. Again, this was mentioned in the third article on economics. There was a constant hum of small-scale, local merchant traffic. Larger trains were also common, and the Silk Road saw massive convoys called caravans. Similar trends could be seen in water transport.
The last regular type of traveler is migrants. There are many reasons why people would want to look for a new place to live, usually boiling down to economic hardships or opportunities. Economic migrants often head for cities, though the pull is less strong before the Industrial Revolution. A less common, but far more impactful, force for migration is war. Conflict can create massive numbers of refugees looking for a safer place to call home. Some cultures of war left civilians in relative peace, but collateral damage is inescapable.
Some religions encouraged an additional variety of traveler: pilgrims. Many religions ascribe special importance to particular places. This encourages worshippers to make pilgrimages to visit these holy sites. Specific festivals could make these pilgrimages more regular. Some developed religious institutions provided infrastructure to encourage these treks, as we’ll discuss in the Roads section. We can also include the journeys clergy go on in their duties here. Religions with central authorities often require their priests to travel to assignments.
While poor migrants, pilgrims, and other impoverished travelers usually traveled on foot, anyone who could afford it looked for other options. Animals and vehicles could be hired for specific trips, though wealthier travelers usually preferred to own and maintain their own.
Animals pulling carts preceded mounted travel by several thousand years. Several innovations made carts more effective. Harnesses that placed more weight on the shoulders instead of the neck helped, as did spoke-and-rim wheels (more expensive than solid wood wheels, but lighter and far easier to maintain—people went for spoke-and-rim wheels whenever possible).
Another set of discoveries that made carts better were the successful domestications of various draft animals. Horses are useful, but the breeds that were available for most of history were relatively weak and couldn’t pull large loads. Donkeys were a bit sturdier relative to the amount of food they needed to eat, but they were famously stubborn and pain-resistant, making them hard to motivate (like it or not, whipping and other physical punishments were key motivators for much of history).
When people realized horses and donkeys could be bred together, the world of mules opened many doors. Mules share the best of both worlds (though they can still be stubborn at times). They became the animal of choice for many traveler types, especially merchants. Mules’ sterility meant that people couldn’t keep personal farms of just mules—they needed to either have horses and donkeys, have one and arrange with someone else for breeding, or purchase mules from a seller.
Another option was to go with oxen. Once proper yokes were designed, oxen could pull truly massive loads. The downside was that they were both slow and hungry. Because of this, they were really only used by merchants, who would use the oxen to carry lots of goods to sell, paying off their exorbitant cost.
I should briefly mention that people could pull carts themselves. This was very difficult, and was often restricted to very short trips within cities or between a city and its hinterland. In extreme cases, very impoverished people might need to use handcarts to travel very long distances. The classic example is the Mormon pioneer experience, though the 1800s could hardly be called “premodern.”
You might notice that I’ve made lots of references to things being in carts, but not people. That’s because for a lot of history, actually riding in carts was tremendously uncomfortable. It took until the very end of the European Middle Ages for any kind of suspension to be discovered, and that was just straps of leather. If you wanted to sit in a vehicle, it usually had to be for short trips within a settlement (which usually had slightly gentler roads) or actually carried by horses or people, no wheels involved. Chariots are an exception, but people stood in those—only for brief periods—and they were never built for comfort in the first place.
In addition to using animals for draft (pulling things), there were three other purposes: pack (carrying goods in bags), mount, and as the goods themselves. Pack animals don’t require too much explanation. Mules were favored when available; oxen’s backs are so broad and awkward that panniers (bags slung over the back of a pack animal) don’t fit well. Livestock was often cargo in its own right, transported in large groups between pastures or to marketplaces. Some areas had menageries, a kind of traveling zoo.
This brings us to the final use of animals: mounts. My horse-loving wife has a long list of pet peeves about how riding is often portrayed in fiction, so I had a lot of help with my research here. (And yes, it is easiest to just talk about horses here. You could ride mules and donkeys, as well, but you can figure out what those are like by learning more about horses.)
The biggest thing is that horses will not be galloping the whole way. Races are short for a reason. In the best conditions, a horse could gallop for about two miles at once, but then it will need to stop and rest a while. Shadowfax doesn’t count. He was very, very magic, and was extremely tired after galloping for a day and a half. My wife specifically wrote in italics: “Your horse will die if you gallop too long, and that’s assuming it doesn’t yank you out of the saddle with its teeth first.”
Instead, your horse will be walking most of the time. Horse walking is still faster than human walking, so you will still get to your destination faster and less tired, but you won’t be running the whole way. You can go faster than a walk if you want, but that means you’ll have to stop and rest every once in a while—especially if you’re trying to save your horse(s) up for a battle.
We should expand on that and point out that if your horse is going to be doing something important at the end of a journey—a battle, race, or hunt—then you don’t want to be riding it during the actual trip. You’ll probably need a different horse to ride along the way, along with another pack animal (probably). This is another reason why cavalry and recreational horse events are reserved for the wealthy.
In the end, riders make about 25-30 miles each day. On foot, people can make about 20-25. It doesn’t seem like much, but that adds up when most journeys worth hiring animals for are multi-day affairs.
The last thing to consider about animals for transport is food and water. These will be among your largest expenses and a key determinant of the route you take. You will want to stay close to bodies of water whenever possible, and ideally you will want to travel where there’s enough grass or other food for your animals to forage. You can carry grain with you, though it’s very bulky. Long story short: using animals is invaluable, but it doesn’t solve everything.
There isn’t a better place to put this, but readers should be aware that almost every mode of transport was very seasonally dependent. Even a heavy rain could make a route useless for days at a time (one benefit of Roman roads was that they were designed so rain flowed off of them, making this less of a concern). Roads and waterways both could be completely impassable in Winter (in a European climate). This meant that most settlements had to be self-sufficient for those seasons, since trade at any distance would be cut off. Local governments would need to be able to function without reliable communication with higher-ups. Wars were fought during non-Winter seasons, since Winter made both travel and foraging much harder.
This section describes all land travel, even though it’s called just “roads.” Despite the image of people roughing it through the rugged countryside, roads were there for a reason. They were next to water and other resources, they were easy to traverse, and they would have settlements for resupplies and other amenities. Even armies stayed on roads for the most part, since they had to forage supplies off of the locals.
Investment in roads was a key part of many developed civilizations. The most classic example was the Roman Empire, though their roads weren’t primarily intended for citizens. Roads were built behind marching legions, making military mobility extremely efficient. Any benefits for traders and other travelers were a happy side effect. Indeed, there are a couple reports that the pavement was awful for carts (no suspension, remember) and uncomfortable for some animals, which is why the roads often had unpaved areas to either side. They were designed for marching soldiers accompanied by baggage trains.
Romans were also known for their milestones, small roadside monuments that recorded the distance from nearby settlements (and Rome itself). These were so regular that historians started using milestones to record locations.
One thing that many worldbuilders ignore is navigation. It’s generally assumed that most people didn’t have access to maps. If maps were available, they were usually low-quality, inaccurate, and very local. How did people know where they were going? Aside from memorizing common routes (and we have some records of songs used to remember roads and landmarks), one important tool was the “itinerary.” Instead of a map, you would have a list of directions, distances, and landmarks or settlements. “Five miles south to A, then six miles southeast to B, etc.” Itinerary hawkers could make a decent living selling directions. In Rome there was a master itinerary on the Pantheon that listed distances and directions to far-off settlements, allowing travelers to copy down their own itineraries from its lists.
It’s worth spending some time on inns, since they’re such a staple of fantasy. I’ve heard some people say that inns are almost completely fictional, and that most people slept in their own tents. This is definitely not true. Inns were almost everywhere except for smaller villages (where travelers could often pay local families for a bed). Most inns were about a day’s travel apart, allowing travelers to almost always spend the night under a roof.
Inns were regarded differently in different cultures, though. Inns in Roman times were very seedy; many that archaeologists have uncovered are full of graffiti and references to prostitution. Traveling officials would instead stay in a “mansio,” a villa set aside for them. More reputable inns called “tabernae” would show up eventually.
The mansio was an example of something several societies did: provide formal inns for traveling government agents. They would require passports for use and were, naturally, far more luxurious and convenient than regular inns.
Occasionally, religious orders would support pilgrims by offering their properties as inns. Monasteries served this purpose in Middle Ages Europe.
I found something very interesting in my research for this article. I looked at inn layouts in Europe, the Middle East, Central Asia, and elsewhere, and was very surprised to find that the design was very similar in all of them. We usually imagine something similar to modern hotels, where travelers would come into a common area and rent rooms upstairs, all accessible via hallways. Real inns were usually structured around a courtyard, with rooms opening directly into the courtyard itself. Larger inns would have multiple galleries of rooms (still facing the courtyard and accessible via balconies), and fancier ones would have a fountain in the courtyard as well. Stables and storage were offered either behind the bedrooms or at the ends of the wall. Less like a hotel, more like a motel. I guess this layout was best for convenience, allowing residents to come and go without bothering others.
Another inn variant that was culture-specific—but very influential—was the “caravansary.” We’ve mentioned that traders traveling the Silk Road moved in large convoys called caravans. These caravans would stop in places called caravansaries, which were effectively small forts. They were structured just like regular inns, but they featured watchtowers and reinforced gates that closed at sunset. Caravansaries were also located outside towns rather than within them. These precautions were necessary to protect the caravan’s riches from bandits.
One last thing that I was interested to learn about inns was that they often didn’t provide food. Residents had to eat whatever they’d brought. When food was available, there wasn’t a menu; everyone had to eat the same thing. Some civilizations did have restaurants—Rome had something similar to fast-food places for the poor people who didn’t have personal kitchens—but inns didn’t serve that purpose.
Waterways—any body of water that a boat can use—were of immense importance in the premodern world. We’ve mentioned this in other articles, but the cost and time efficiency of water travel is immense. In Roman times, river travel was five times cheaper and sea travel twenty times cheaper. To put this in perspective: say you’re a farmer on the coast. You need to travel 21 miles to a town, also on the coast. It would be easier to walk almost 19 miles in the other direction, then take a boat to your destination. (I think I did the algebra right there…)
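That back-of-the-envelope algebra can be double-checked with a quick sketch. This is purely illustrative: land cost is normalized to 1 per mile, the function names are made up for this example, and the 20x ratio is the Roman-era figure cited above.

```python
# Normalize land travel to a cost of 1 per mile; per the Roman-era
# figures above, sea travel is ~20x cheaper per mile.
LAND = 1.0
SEA = LAND / 20

def walk(miles):
    """Cost of walking the whole way overland."""
    return miles * LAND

def walk_then_sail(walk_miles, dest_miles):
    """Walk AWAY from the destination to catch a boat, then sail back
    past your starting point to the destination on the coast."""
    sail_miles = walk_miles + dest_miles  # backtrack plus the real distance
    return walk_miles * LAND + sail_miles * SEA

print(walk(21))                # 21.0 -- walking the 21 miles directly
print(walk_then_sail(19, 21))  # 19 + 40/20 = 21.0 -- the break-even point
print(walk_then_sail(10, 21))  # 10 + 31/20 = 11.55 -- detour is much cheaper
```

So any backtrack shorter than 19 miles makes the detour cheaper; “almost 19 miles” is exactly the break-even point, which suggests the algebra in the paragraph above checks out.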
Because of this, you are much more likely to see settlements next to waterways. Those that aren’t near one probably won’t be able to grow very large, and will have to be more self-sufficient. If there are important inland resources, like ore deposits, roads and other land infrastructure will be a top priority to get the resources to where they could be processed and/or sold. Lumber is especially easy to transport by river; it can just be tied up into rafts and floated down, a practice called “timber rafting.” (This is where the North American sport of logrolling comes from.)
Canals, or man-made waterways, showed up earlier than you might expect. The first canals were for irrigation, and it was a happy coincidence if a canal was large enough for riverboats. Once technology advanced enough for more reliable canal construction, they could be built to connect important waterways or provide access to important inland areas.
River barges had to be very flat-bottomed in order to get them past shallower portions. They were often smaller, since they might have to be pulled on land a short ways in the event of low bridges or very shallow fords. When technology made it possible, seagoing vessels could have much deeper bottoms to make room for more cargo.
For almost all premodern cultures, seafaring boats had to stay pretty close to shore (the portion of ocean known as the “littorals”). The tools to navigate open waters were a while off, the weather could be harsher, and the vessels often needed to resupply frequently. The exception was the region of the Pacific Islands, whose peoples developed incredibly sophisticated methods and technology for dealing with these challenges. For the rest of the world, everything stayed coastal for a long time.
If a traveler couldn’t afford to own a boat or rent one—which was the majority of travelers—traveling by water was essentially the premodern equivalent of hitchhiking (though you’d need to catch a ride at settlements rather than at any place en route). You needed to find a vessel that was going in your direction, then buy or barter for a seat. There weren’t regular passenger lines that consistently traveled between key settlements—there might be merchants that made regular rounds like that, but nothing geared specifically for travelers.
Travelers had to constantly watch out for robbers. These robbers are usually called bandits if they operate on land and pirates if they work the waterways. Both bandits and pirates almost always worked in groups, which distinguished them from “footpads”—lone muggers who usually stuck to settlements. For simplicity, I’ll use “bandits” to describe anyone who robs travelers, on land or water. Obviously many factors change when you move from land to water, but fewer than you would expect, and the differences are intuitive.
The most common motivation for banditry was personal need rather than desire for riches. Consequently, practical items like grain, livestock, or household tools were more likely to be stolen than trade goods like gems or silks. Another consequence was that if someone didn’t have something the bandits specifically wanted, they would usually be left alone. The Roman writer Juvenal noted that travelers with empty pockets were safe on the roads.
In general, bandits were disorganized, with every group operating independently. Corrupt nobles could run bandit operations in their territory to extract extra revenue or terrorize opposition. Merchants and clergy could do this, too, though it was less common. Sometimes, governments would even hire bandits to harass enemy territory.
Sometimes in history, a group of bandits would become folk heroes. Robin Hood’s Merry Men are the archetypal example of “social banditry,” but they’re not the only one. Like Robin, social bandits often restricted their victims to wealthier travelers, earning popular support. Most weren’t organized groups like the Merry Men; social banditry was more like a common occupation. In extreme cases, these social bandits could grow in power and become the vehicle of a peasant revolt against the ruling class, though this usually didn’t end well.
One of the most common sources of bandits was out-of-work mercenaries. If your only skillset was hurting people, what else would you do between jobs? Refugees and forced migrants often turned to banditry to survive, meaning that war and economic distress contributed significantly to bandit activity. Surprisingly, shepherds also frequently became bandits. They were separated from the economic structures that would meet their basic needs, so they sometimes needed to steal to survive.
The most common banditry tactic was the toll road. Bandits would station themselves on a thoroughfare—road or waterway—and demand payment from passers-by. Whenever possible, they would hide, revealing themselves only at the last moment so victims would have less time to escape. One way that authorities could combat this was by regularly removing trees and undergrowth that got too close to the road in common ambush spots.
Banditry was often legally defined by its use of lethal violence. The penalty for banditry was severe, so bandits were incentivized to get rid of witnesses. Travelers would travel far out of their way to avoid “badlands”—areas where bandits were known to operate.
Banditry could be a serious limiting factor for long-distance travel. Fighting bandits and making travel safer was often a key goal for governments. Powerful authorities like the Roman or Han Empires could create a golden era of relative peace for travelers—the Romans established police checkpoints and watchtowers. More fragmented governments like those of Middle Ages Europe usually lacked the resources and coordination to effectively deal with the problem.
Inversely, rising banditry could be an early-warning sign that a government was losing its power. During one period, the Roman province of Syria was so plagued by pirates that it couldn’t afford to pay its taxes to Rome. This was another reason why bandits were considered a top priority and banditry was so harshly punished.
And there we go! Let me know if you have any comments regarding this article or suggestions for future ones.
Another week, another ramble about the intersection between history and worldbuilding! This week, it’s the commonalities between the upper classes of premodern cultures. I should note that while I use the word “nobility,” I just mean an upper class that is given special rights and/or responsibilities by the government. In general, the concepts used in this article have many different terms used in various cultures, so feel free to mix things up.
The usual conditions apply. Fantasy magic and cosmology change a lot, though less for this topic than for others we’ve looked at. The usual “most fantasy is early modern” caveat also matters less here. Finally, if my unfortunate European- and Mediterranean-heavy education shows here, please let me know and point me to places to learn. This is another area where there’s a significant amount of variation between societies, but I’ll do what I can to fish out general trends.
We’ll look at power bases, courts, and wealth displays.
One of the simplest considerations for your culture’s upper class is, “How did they get their power?” There are a few common routes to nobility.
The first way is military conquest. Naturally, an area’s conquerors tend to become the nobles of the new society. Because of this, aristocrats in general tend to have a very pro-war and militaristic attitude. Military triumphs will often come with significant social rewards, earning the respect of noble peers. These warrior values can trickle down to the common folk, reinforcing a general culture of warfare. In general, the aristocracy of every culture has a military power base; other nobles have to find their way into this preexisting group using other methods.
Another road is simple wealth. Prominent merchants can sometimes enter the ranks of nobility simply by being too rich to ignore. Traditional aristocrats almost universally dislike these upstarts, and sometimes pass laws to limit the power of wealthy merchants. If kept out of the traditional nobility, merchants may create a middle class with power that rivals the aristocracy.
A final method is political manipulation. On its own, this tactic is pretty ineffective. An aspiring noble usually has to have an additional source of power to rely on. One sometimes-successful idea is to become a member of a noble’s household—the higher-ranking, the better—and endear yourself to them in the hope that you’ll be granted a minor title.
Once a family is official nobility, another factor becomes relevant: inheritance. In the first section of the “Power and International Relations” article, we looked at the idea that everyone in power relies on a group of people called the “coalition” to stay in power. These coalition members expect benefits for their support. In order to keep the coalition loyal, the leader needs to be able to guarantee that these rewards will keep flowing. This includes planning for what will happen after the leader leaves, either as part of the system (e.g. a different president being elected) or through death. If the coalition thinks that their position is in jeopardy because a leader may be leaving soon—for example, if the leader starts looking old and/or sick—they might back a different claimant to keep their benefits going.
One way to keep the coalition loyal by ensuring smooth succession is to develop clear inheritance customs. If members know exactly who will be coming next, they’ll get less worried if the current leader starts to look ill. Keeping power in the family is best, since the leader can teach the heir from birth about their obligations. Awarding ruling positions based on parentage instead of skill is less effective, but far more predictable, which is what the coalition values.
In practice, this can get difficult. While we’re most familiar with the inheritance system called male primogeniture—the firstborn man gets everything—it hasn’t always been this way. Primogeniture makes the heir’s siblings very unhappy, which can lead to resentment at best and succession wars at worst. If the central government isn’t strong enough to handle these tensions, other inheritance systems might be used that split up the parent’s lands and titles more evenly among the children (unfortunately usually leaving the daughters out of it).
Splitting things up makes the kids happy, but makes the coalition nervous. Even if they know who they’ll be reporting to and are sure that their benefits will be coming, there will be fewer rewards coming their way because the properties get split up. This makes a tough balancing act. More division keeps the kids happy, more consolidation keeps the coalition happy.
In general, less developed cultures err on the side of greater division and happier heirs. Once things get more organized, simpler systems like primogeniture become more common.
In my search for aristocratic elements that are common across premodern cultures, I found one that surprised me: courts. One definition that I liked was that a court was “when a ruler’s household (family) and bureaucracy blend personnel.” In other words, it’s when a ruler’s son manages the treasury and their butler is considered part of the family, and everyone lives in the same house. (Simplified, obviously.)
The stereotypical image of a court is a monarch sitting on their throne, watching sycophants mingle and listening to their concerns. This is accurate in some ways and inaccurate in others. Courts were usually centered on the highest authority in the government, like the king—lower nobles might have small courts, but they weren’t very important. However, these people wouldn’t be just in the throne room muttering to each other. All these people have jobs, and will be working around the palace (generic term for governmental residence) most of the day. This includes the monarch, who will go to offices, churches, and other areas to do their work. The throne room is reserved for special occasions and ceremonies, such as coronations or receiving an important visitor.
Monarchs would frequently expect important nobles to live at their palace as courtiers. A noble would leave someone in charge of their estate—there are lots of titles for this position, such as chamberlain or seneschal—and join the royal court. The largest courts in history, such as that of the Byzantine Empire, could reach over a thousand members.
The court was an important cultural focus for the region. The monarch and their courtiers would set the standard that everyone else aspired to. One element that had strong political importance was fashion. The ability of aristocrats throughout the area to mimic the attire, customs, and values of the royal court was viewed as a measure of how politically savvy they were. If a new practice emerged among the courtiers, it would spread quickly among the nobility.
In some special cases, the court itself would form the center of government, and not a capital city. These itinerant courts would essentially take the government with them as they traveled. The Holy Roman Empire had an itinerant court that traveled to cities in its member states to help them feel included, while the Mongolian court was itinerant because the society itself was nomadic, making a capital settlement impractical.
In a few cultures, the harem or concubines were also considered to be part of the royal court. Frequently, male courtiers and workers who worked in these areas were eunuchs to ensure that no “relations” would lead to illegitimate heirs (complicating the all-important inheritance system).
Some court offices were hereditary, passing from parent to child. This served to ensure that rewards would continue to flow to the courtier’s coalition—this system continues all the way down.
In general, a courtier’s prestige was often tied to how physically close they were to the monarch. The highest positions in Middle Ages Europe would sleep in the same room as the monarch. The term “privy council”—“privy” being another word for private chambers—still refers to important officials in several countries. In Tudor England, there was even an office called the Groom of the Stool that would actually help the king go to the bathroom—disgusting for us, highly sought-after for them.
One common misconception is that the relationship between courtier and ruler was one-sided. Remember, the leader needs the coalition’s (here meaning the court’s) support in order to be effective. As such, courtiers frequently received gifts and favors to ensure their loyalty.
Some cultures rewarded loyal nobles with “sinecures”—titles with great benefits but few responsibilities. Imagine if the US president rewarded a key supporter with the job of “Head Manager of the Kitchen Light Bulbs” with a salary of $300k. The term “sinecure” comes from religious positions, but can refer to secular ones as well. In some cases, sinecures came with a position at court.
The last thing to cover is the variety of court appointments. The French system, the Maison du Roi, had an effective division of offices between domestic, military, and religious courtiers. I’ll add administrative positions to this list, since they were present in several courts (though absent in the Maison du Roi).
Domestic courtiers were in charge of the maintenance of the palace (or, in the case of itinerant courts, whatever other domestic needs were relevant). These included duties like the butler, cupbearer, head chef, etc. It might be surprising to consider that these appointments could have just as much power in court as other positions, but remember that what was truly important was proximity to the ruler. Managing the palace could be both lucrative and important. Domestic court appointments were sometimes given to non-nobles, which was one way to get on the ruler’s good side and potentially earn a noble title in time.
Military appointments had significant weight in the warlike cultures of most aristocracies. There’s little to discuss here, since we already have a fairly clear picture of what generals, admirals, and other high-ranking military officers do.
Religious courtiers served as both advisors and managers. They could be liaisons between the ruler and church bureaucracy, if there was a central religious authority. Alternatively, they could be central religious authorities themselves if there was a strong blending of church and state. Note that even when religious courtiers report to an external church authority, they are often more loyal to their liege than their religious manager. King Henry IV protested the Pope’s reforms with the support of “all [his] bishops.” Examples of religious positions include chaplains, who see to the courtiers’ spiritual needs, and almoners, who manage the monarch’s charitable efforts. There were also territorial positions (in charge of religious activities within a geographic area) and departmental ones (in charge of specific areas of church activity, like temple maintenance).
I add administrative appointments to the list of categories. These positions are in control of various executive functions. In some societies, the administrative courtiers formed a special group of advisors called the “cabinet”—another reference to how access to the ruler’s private chambers was a symbol of status. Like some religious positions, administrative ones could be territorial or departmental. Departmental administrators could include some very interesting duties, such as managing national industries (the Russian court included a position that was in charge of the federal pottery factory).
I said that most nobles don’t have a court; that’s technically true. Every noble has a sort of “court lite” called a “household.” The household comprises the noble’s family and retinue (important servants or lower nobles). Things usually aren’t nearly as formal as an official court. There really isn’t much to say here—I just wanted to mention that households are something to keep in mind.
Putting on a show was very important for nobles. Potential claimants to your titles were always watching for weakness, and we’ve already talked about how willing coalition members are to support opposition if the ruler seems like a risky bet. Even the noble’s followers might be unwilling to follow orders if it seems the noble doesn’t have the ability to deal out punishments or rewards.
Obvious displays of wealth were invaluable for signaling economic strength. We tend to think that all aristocrats were pompous, spoiled idiots for wearing fancy clothes, investing in art, and engaging in exotic meals and pastimes. Often, this was true. However, even if a noble wanted to be frugal and didn’t care for these sorts of things, they usually needed to do them anyways. If they didn’t, others might wonder if the treasury had secretly run dry, or that the noble wasn’t willing to spend on loyal followers either. In the same way that fashion was an important sign of social and political aptitude, wealth was a sign of financial strength.
There was an interesting institution that emerged in various forms among several cultures as a way to create these displays: patronage. Patronage was the practice of nobles sponsoring artists or other specialists in exchange for a degree of control over what was created.
The technical term for someone sponsored by a patron is a “client,” and the patron-client relationship can be a powerful dynamic shaping the culture of an area. Patronage encourages art, literature, and other creative works to reflect the aristocracy’s values and preferences.
The level of control a patron had over a client varied between cultures and patronages, with some offering only slight direction while leaving most decisions up to the artist, and others making the artist work only for the patron. In some cases, the client actually moved in with the patron as a member of the noble’s household.
Another way to display wealth was a little more productive for society: public works. Often, if a noble wanted not only to prove that they were rich, but also that they cared for their subjects, they would invest in something big. This could be as small as a chapel or as large as a city-wide sewage system.
Naturally, there were often catches to these projects. One catch is that they usually benefitted the noble in a practical way: improved roads encourage trade, a sewage system leads to a healthier workforce, a chapel earns support from the clergy, etc.
Another catch is that often, access to these resources is restricted to members of the upper class—though perhaps not directly. A new theater might require proof of nobility to attend, or it might be priced well above what a commoner could pay. These not-so-public works still serve to improve the noble’s reputation, though society at large doesn’t benefit as much.
There are plenty of other ways to show your wealth. Lavish events are a classic way. Giving one of your supporters an extravagant party for a special occasion can go a long way. Wear fine apparel whenever possible, show off art and other riches, and do whatever you can to prove you’re not broke, and people will be more willing to stay on your side.
There we go! Let me know if you have thoughts on this article or future ones.
I’m running low on topics that can effectively be addressed in these “focus on cross-cultural trends” articles. I’ve got this one, then nobility, then travel, then I’m out. I’m open to moving on to For Your Enchantment, which will revisit these topics with magic and monsters added in, but I’d be fine with returning to cover any other requests that I think would make a good post.
Today we’ll be looking at everything associated with law: legal systems, enforcement, courts, and sentences. In some areas, it’s difficult to define features that separate premodern from modern societies. When this happens, we’ll just talk about more general theory that will hopefully help you as worldbuilders.
The usual conditions apply: I’ll be trying to hold to things that are true across most premodern civilizations, so there’s a lot of variation to account for. Fantasy magic and cosmology change a lot, though less than you’d expect for this topic. The usual “most fantasy is early modern” caveat also matters less here. Finally, if my unfortunate European- and Mediterranean-heavy education shows here, please let me know and point me to places to learn.
The phrase “legal system” describes the basic philosophy behind a government’s laws. There are two main families used to classify modern legal systems. These don’t map to premodern cultures perfectly, but they can still help worldbuilders to think about how laws might work in their societies.
The first family is civil law, also called statutory law. In this system, all laws are defined by a legislative body or other authority. You might expect this to be the only possibility, but there are others.
The second is common law. In a common law system, past judicial decisions can have just as much weight as legislative laws. This is the system that is used in America; it’s why landmark court cases like Roe v. Wade still have so much power. When there is no judicial precedent, courts must fall back on traditional written laws.
There are lots of variations. An extreme version of common law is customary law, where instead of relying on previous cases, judges use established traditions or cultural norms. Customary law systems are only possible in small, relatively simple societies like independent villages. The amount of wiggle room is too great for complex cultures to handle.
Similarly, an extreme case for civil law is something I would call autocratic law, though that’s not an established term. In today’s civil law systems, laws are created by a legislative body—usually elected. In autocratic systems, these laws come from a single person. This is often the head of the government, such as a monarch, but other systems have regional or local laws made by a local authority (for a while, Roman praetors served this role). This is another one that gets more difficult as a state gets larger. Eventually, leaders have to delegate legislative power, though they may reserve the right to veto.
Another special case is religious law. Religious law can layer over the systems we’ve discussed. A detailed work of scripture can serve as the basis for civil law, while spiritual authorities can be relied on for common, customary, or autocratic systems.
Aside from legal systems, one useful distinction is between civil and criminal law. Criminal law deals with actions that harm society itself, while civil law (not to be confused with civil legal systems) deals with actions that harm specific people. Individuals will go to court to look for justice concerning civil laws, while the government itself will prosecute people that violate criminal law. Murder is criminal, slander is civil.
Even in modern times, the line between these categories can be hazy. Many premodern societies didn’t make the distinction at all. Several Greek city-states, for example, considered almost all law to be civil. Even in extreme cases, such as murder, the government wouldn’t seek justice on its own unless the victims’ families specifically sought redress. If no one acted on the victim’s behalf, the murderer could easily go unpunished. The benefit of this system is that the government doesn’t have to invest many resources in unearthing crimes; your citizens will bring any relevant violations to your attention. The obvious downside is that a lot of crime will fall through the cracks.
I haven’t seen any societies that went to the other extreme—making all laws criminal instead of civil. I can’t see it being practical, or even possible. Citizens wouldn’t be able to pursue litigation on their own; it would all be up to the government. The state would have to develop extensive surveillance programs to find violations. This is almost certainly beyond the capabilities of real-world premodern societies. If anyone knows of such a culture, I’d love to hear about it.
Professional police forces are relatively rare in premodern societies. They represent a significant investment of resources and manpower. Because of this, there were two cheaper types of enforcers that were explored first.
The first enforcer type was the citizens themselves. In very simple societies, residents could often deal with criminals on their own. I mentioned one interesting example in the first article on premodern societies. Villages in several medieval cultures had something called the “hue and cry.” If anyone was in distress (usually from an assailant), they would give a special shout—we don’t know what it sounded like. Everyone who heard one of these shouts was socially and legally obligated to drop what they were doing and come to help.
The second enforcer type was off-duty soldiers. We’ve already mentioned that standing armies were very rare. One way that governments could offset the expense of a professional army was to have the soldiers perform additional services when not actively fighting. Law enforcement was an excellent option, since the combat training professional soldiers had would help them be more effective. Infrastructure construction and maintenance were other popular options for employing off-duty soldiers.
Only when other options had been exhausted would governments resort to a dedicated police force. The first areas to get proper police would be high-security locations like temples and governmental residences.
Non-citizen enforcers could have a very interesting toolkit. Ancient Egyptian enforcers supplemented trained dogs with trained monkeys, though I don’t know what those were used for. Swords were more common than spears, since they can more easily be carried around and used in closer quarters.
One thing to note is that purpose-built jails were very uncommon. Again, they were a significant investment. Imprisonment as punishment was rare, since feeding and housing criminals at the government’s expense is only the sort of thing modern states can afford. One of the only times imprisonment was used was to confine offenders before their trial. Even then, jails usually weren’t separate buildings, but were repurposed portions of existing ones, like the cellar of a castle.
It’s easiest to look at courts by covering modern judicial systems and then discussing how premodern systems were different. Again, the two contributing factors for the differences are less complexity and fewer resources.
There are a few ways that modern courts can vary. One of the easiest is how the trial itself is conducted. There are two main methods: the adversarial procedure and the inquisitorial procedure.
The adversarial method is what we’re familiar with in America. Representatives for the plaintiff and defendant essentially argue with each other while the judge acts as referee. The main benefit to this system is that there is theoretically a balance of presented evidence and arguments. Judge and jury remain impartial through the entire process, only rendering judgement at the end of the proceedings.
The inquisitorial method places much more power in the hands of the judge. Instead of being a passive recipient of information, the judge actively calls witnesses, asks questions, and seeks evidence. They act kind of like an in-court detective. Lawyers serve a less prominent role in these courts, mostly serving as expert intermediaries for the participants. The inquisitorial system was invented in direct response to the adversarial one. Instead of waiting for people to report crimes, judges (inquisitors) would seek them out on their own. If I’m honest, it seems like this system would be prone to abuse and false convictions, but my lack of familiarity with it might be to blame.
In modern courts, most common law systems have adversarial courts while civil legal systems have inquisitorial courts. As far as I can tell, this is due to cultural history rather than practical considerations. I would say that worldbuilders could mix these systems without worrying about realism.
There are also variations in how court systems are organized. In most modern judicial systems, criminal and civil cases are tried in separate courts (though this is one area of bureaucracy that could be ignored for less complex societies). With criminal cases, the government or citizenry is represented by a state representative called a prosecutor.
In most systems, there’s a kind of “pre-trial trial” where officials can determine whether a full trial is necessary. There are lots of names for this—inquests, grand juries, etc.
Now we can look at how premodern societies cope with fewer bureaucratic resources and exploit legal simplicity. One way is how evidence was gathered. In order to elicit confessions, many governments regularly used torture, unaware of how consistently it produces false testimonies. In addition, many groups used trials by combat or ordeal (subjecting the defendant to dangerous conditions to see if they survive) to see if the culture’s god(s) were on the defendant’s side.
One way to deal with legal cases without stressing the bureaucracy too much is to officially sanction nongovernmental courts—sometimes called “popular courts.” For example, guilds were often permitted to hold trials for their own members (at least in commercial issues), and ancient Indian offenders were tried first by courts organized by their families. Official courts would only be used in criminal cases, or if there was an appeal of the decision of a popular court.
Another way to deal with caseloads without overly stressing governmental systems was to use itinerant courts. Judges, along with all relevant support staff, would travel from town to town. In each settlement, they would hear all the cases that had been collected since their last visit, render judgement, and move on. This was used when there wasn’t too much demand for judicial services, but popular courts couldn’t be trusted for whatever reason.
There is one last modern institution that can be done away with if laws aren’t too complicated: lawyers. If almost everyone understands the laws in question, such as in customary legal systems, then it’s perfectly reasonable for parties in a case to speak for themselves. Professional lawyers are only needed when laypeople can’t reasonably be expected to understand the law themselves.
There are three main philosophies behind sentences of guilty parties: retaliation, restitution, and rehabilitation. These lend themselves to very different punishments.
Retaliation-based sentences focus on punishing the criminal. Flogging and mutilation would fall under this heading, as well as most “eye for an eye” laws. Fines or confiscation of property are also common. In premodern societies, slavery is also frequently seen. The ultimate retaliation sentence is execution, which is more common in premodern cultures—though some cultures specifically forbade it. Social shunning and banishment are also options.
Restitution-based sentences focus on compensation for the victim. This is frequently financial, though some ancient cultures would make the perpetrator the slave of the victim for a period of time. Smaller governments or clever judges might be able to think of some inventive ways for criminals to right their wrongs.
Finally, rehabilitation-based sentences focus on reforming the criminal to ensure they’re less likely to offend again. This is a very recent philosophy—I haven’t been able to find any premodern polities that based their sentences off of this theory. Rehabilitation sentences often use therapy, training, and employment to reform offenders.
Once again, the boundaries here are vague and subjective. Many sentences can serve multiple purposes as listed here. Several things that give restitution also provide retaliation—if the offender is forced to pay the victim, for example.
We mentioned in the Enforcement section that jails were very rarely used for punishments. There was one exception: debtors’ prisons. If someone was sentenced to a fine they weren’t able to pay, they would be sent to special jails where they would provide forced labor until their debts were worked off. This system was obviously susceptible to abuse. If a government needed workers, they could just heavily fine a bunch of poor people, condemning them to years of servitude.
And there you go! Let me know if you have feedback on this article or suggestions for future ones.
This is my first post directly on this fantastic blog. Hello, world!
Also, I’d like to announce a new series I’d like to start: For Your Enchantment. At the start of every post, I mention that these are only to address real-world characteristics. Fantastic elements like magic and monsters can change things dramatically, and I don’t want to make these posts longer than they already are. However, people have consistently requested that I talk about these aspects, so For Your Enchantment will revisit every post from the original series and discuss how these might change in fantasy.
This post will talk about political dynamics within and between nations. You may notice I dropped “premodern” from the title. This particular topic is one of a few I’ve been asked to tackle that require this kind of treatment. Sometimes, it’s hard to find things that premodern societies all had in common that separate them from modern ones. This is one of those fields. There’s a lot of variation in governments and international relations, and what few things premodern civilizations had in common with one another are things that modern civilizations also share.
Because of that, I’ll be using general theory to address these areas. This should effectively cover most societies you’ll be designing.
Our sections will be internal power, international anarchy, trinity of war, and diplomacy.
This section is largely inspired by Bueno de Mesquita et al.’s “selectorate theory,” which you can learn about through three resources. The book The Dictator’s Handbook contains a lot of great information on power within and between countries. You can also find the same material in its original, scholarly form in The Logic of Political Survival, or in the more accessible YouTube video The Rules for Rulers by CGP Grey.
There are two questions you should ask yourself when thinking about power dynamics within an organization: “Who’s in charge?” and “Who’s really in charge?” The answer to the first question is the “leader” (which doesn’t have to be a single person; we’ll stick to “leader” to simplify things) and the answer to the second is the “coalition.”
The leader is the person or group that technically has the most power in the country or organization. There is one overpowering motivation behind every leader: they have to stay in power. This is true regardless of their alignment or intentions. Even a benevolent ruler who wants to help their people will be unable to do so if they can’t hold their position of authority. Sometimes, this incentivizes good leaders to do questionable things so they can retain their ability to serve their people. This is the central idea behind Niccolò Machiavelli’s The Prince (though it’s argued that he wrote the book to get in the good graces of Italian nobility).
The coalition—who’s “really” in charge—may not have authority in most areas, but they can do one very important thing: remove the leader from power. Maybe these are key voters in a democracy, aristocrats with military might in a feudalistic nation, or anything else in between. Because they have the ability to do the one thing that the leader is truly afraid of, most of the leader’s time will go into keeping the coalition happy. Everything else is secondary. The constant struggle between these two groups has a significant impact on the organization’s activities.
The leader’s main tool to limit the power of the coalition is their ability to replace coalition members. Coalition members have the same fundamental fear that the leader does: if they lose their position, they won’t be able to coerce the leader to act in their interest. If the leader can expand the pool of people the coalition can be picked from (called the “selectorate”) and/or limit the number of members in the coalition, this gives the leader greater power to switch members out if they misbehave. The greater the leader’s ability to replace coalition members, the more power they’ll have and the longer they’ll stay in office. Democracies have huge coalitions (the voting population), so leaders have relatively little power; autocracies have small coalitions (the few elites the leader has to keep friendly) and large selectorates, so their leaders enjoy long, secure reigns.
If the leader can’t replace the coalition members, they have only one option left: bribery. They need to spend resources to buy coalition loyalty. If the coalition is small, then the most efficient way to do this is with “private goods,” like personal riches and favors. If the coalition is very large, then it’s too hard to single members out to give them private goods. In this case, the leader must turn to “public goods” like education, infrastructure, and healthcare, which are more expensive but blanket almost everyone in the organization. This is why autocracies have relatively poor people and rich elites, while democracies have relatively equal conditions between the rich and poor. (When democracies start seeing the rich get richer and the poor get poorer, selectorate theory suggests that this is due to the formation of a new coalition that has the power to get rid of the leader, or otherwise have resources the leader can’t function without.)
These two dynamics spiral out into most of the things we see governments doing. A monarch is encouraging the growth of new noble families? They’re making their aristocratic allies more expendable. A fascist who served the people became the victim of a coup? The leader’s benevolent spending spree left less for the coalition, who then sponsored a revolution. A politician makes grand promises on campaign, but doesn’t follow through once in office? The large coalition needed to get elected switched to a much smaller coalition needed to stay elected. You can even follow the chain down and see the coalition that keeps coalition members in power. The possibilities are endless.
Because it fits nowhere else, I’d like to briefly discuss the “separation of power” theory of governance. To my surprise, I haven’t been able to find a better framework to describe different governmental structures than what Americans learned in high school. (Other countries probably learn about this, too, but I don’t know since I’m an ignorant American.) Put simply, a government performs three main activities: legislative (makes laws), executive (enforces laws), and judicial (judges cases where laws are broken). Understanding the relationship between these branches is a simple way to visualize governmental characteristics. The judicial branch is frequently separate from others to encourage objectivity, though it has had executive powers in the past (see the reign of the judges in ancient Israel). A presidential system keeps legislative and executive branches independent and places the power of the executive in a single person. A parliamentary system makes the executive leader a special member of the legislature. There are far too many variations to list here, but this is the simplest way to describe your government. Who writes? Who enforces? Who judges?
One very brief note on premodern countries: nations as we think of them are actually a very modern concept dating back to the Peace of Westphalia in 1648. Through most of history, there weren’t states with clearly defined borders and a monopoly on power within those borders. Instead, there were constantly-shifting lands defined only by which group held the most control over them. To paraphrase Bret Devereaux (I can’t find the exact reference), there was no “nation of France” any more than there’s a specific “library of Bret”—there are just the books he happens to have at a given time, the same way there are just the lands that happen to be under the control of the French monarchy. We can still look at historic international relations through a state-based lens, but we need to acknowledge that things were muddier than that in real life.
There are a few theories that can be used to describe international relations, but the one that I find to be most useful is “realism.” This starts on the same assumption that selectorate theory does—just as the most important thing for a leader (regardless of their motives) is to stay in power, the most important thing for a nation to do is survive. Here, this means that its government must retain authority over its lands. The other basic assumption behind realism is that there is no power above nations that can effectively control states’ actions. This hasn’t always been strictly true (the Roman Catholic Church and the modern United Nations are examples), but these are extreme exceptions. Even when such super-national forces exist, they usually only work because states all agree to let these institutions control them and not because they have any power by themselves. The failure of the League of Nations to stop World War II made that evident.
The lack of super-national institutions is called “international anarchy” and it, along with the state goal of survival, forms the basis of realist theory. The result of these two principles is that nations will always seek to increase their security by trying to grow more powerful than their neighbors. This is usually done through military might and conquest, which give them the resources to become more resilient against outside threats. The issue is that the stronger a nation gets, the more threatening it becomes, inspiring neighbors to invest in militaries themselves and wage preventive wars. This is called the “security dilemma” or the “Red Queen effect”—nations will always try to out-compete their rivals, but will usually not become any safer.
If a nation doesn’t try to increase its security through conflict and military might, it will often be taken over by a state that’s more pragmatic in its policies. This forces even well-meaning states to become military powerhouses if possible. Just like leaders, they can’t help anyone if they’re not in charge.
Nations can usually only escape the security dilemma and grow stronger in general if they have a special advantage that their neighbors lack. This is usually geographic in nature (better waterways for transportation, better farmlands that can feed more soldiers, a rare resource that gives them more revenues through taxes), but can sometimes be cultural (like a more robust military culture or a religion that encourages fervor in its citizens). When a nation has these advantages, it can grow into an empire and last longer than most other countries. A nation like this is called a “hegemon,” and this system of international domination is a “hegemony.” Most hegemonies are regional, but there have been one or two worldwide hegemonies before.
One important thing to consider is the fate of the small nations. If survival is based on military prowess, what can a country do if it just can’t achieve that kind of dominance? The answer is to ally with stronger nations, gaining shelter at the expense of some of its freedom and resources.
Alliances tend to form to curb the power of a threatening neighbor. The neighbor then forms alliances of its own. This leads to a complex system of constantly-shifting allegiances, roughly trending towards alliances of vaguely similar strengths. This is called the “balance of power” theory, and can be best seen in the dizzying network of allegiances in European nations prior to World War I.
In rare cases, an extremely asymmetrical alliance network can form if nations decide to work together to fight an especially dangerous nation. These temporary alliances are called “coalitions.” The most amazing example of historical coalitions is the Napoleonic Wars. Napoleon was so terrifying that he inspired Europeans to form coalitions against him seven times. This is, obviously, rare; an upstart nation usually can’t survive a single coalition, let alone several.
I should mention that there are a few other theories of international relations out there. Liberalism says that a super-national authority can hold power by itself, and constructivism says that culture, not self-interest, is the motivating force behind state actions. I would argue that both of these theories go against historical record and—more important for us—are less useful for worldbuilders.
Trinity of War
We’ve discussed warfare before, but we’re now going to look at it through the lens of “grand strategy,” which is the realm that most foreign policy takes place in. Put simply, if there’s an option of “don’t have a war,” we’re discussing grand strategy.
The most influential military theorist throughout history is probably Carl von Clausewitz, writing during the time of the Napoleonic Wars. Clausewitz’s ideas have had a huge impact on military thought. For the purposes of worldbuilding, we’re going to focus on the idea of his “trinity of war”—government, people, and army. For those of you already familiar with Clausewitz’s work, you may note that I’m using his “secondary trinity” instead of his “primary” one.
Clausewitz argued that any serious attempt to study warfare has to go beyond the effectiveness of its armies. We’ve already discussed the characteristics of armies and militaries extensively, so feel free to look at the second article in this series if you’d like to learn more (or just want a refresher). We’re going to focus on the other two elements here: government and people.
Government describes the administrative, relatively rational element of a society. In general, in order to wage war effectively, a nation needs a strong government with clearly defined and sensible goals. It needs to be able to utilize non-military tools as well, such as diplomacy, espionage, and economic persuasion. If a government is fractured, disorganized, or starved of resources, its wars will probably end in defeat.
People describes the popular, relatively irrational element. The greatest tool a nation’s people bring to a war is their resources. This includes economic strength and manpower for armies. However, one of the best things they can offer is their determination to fight. The will of the people can keep a war going for a very long time—or cut it short. Military defeats and victories have a strong impact on popular support, which is one reason why nations that are on the losing side of a conflict tend to push towards unrealistic, desperate victories. They need to keep the people on their side, or they’ll lose what little momentum they have.
In practice, a nation at war doesn’t need to check all these boxes in order to function well. If one of these elements is lacking, however, the others have to compensate if the country is to have any hope. A loose or nonexistent government requires strong coordination and determination on the part of the people. An unwilling populace requires a very authoritarian government to keep the war effort moving. Military ineffectiveness is hard to deal with, but a well-organized resistance can at least make it hard for enemies to secure their gains.
Speaking of “securing gains”—an often-overlooked step in conquest is how the conqueror turns temporary control of lands into permanent ownership. One useful resource is Reaping the Rewards: How the Governor, the Priest, the Taxman, and the Garrison Secure Victory in World History, a talk by Wayne Lee. He argues that each of these roles is necessary for premodern success after war. The priest uses religion and culture to integrate conquered peoples, the taxman extracts local resources for the victor, and the garrison is a small military force stationed locally to discourage resistance. The governor’s role is often filled by local authorities who are encouraged to ally with the region’s new rulers. This allows the victor to assert control without expending too many resources on setting up a local bureaucracy.
I’m including this section because I feel duty-bound to cover it, but to be honest, there isn’t much for me to say here. The logic behind alliances has been discussed in the “International Anarchy” section, and there’s a lot of variation in diplomatic systems. There’s honestly too much to find overarching trends. I’ll do my best to convey what little I’ve found.
In general, it seems that extensive diplomatic systems form in two main scenarios. The first is when a tenuous assortment of states with roughly-equal power need to ensure communication to prevent catastrophic, all-out war (as in much of Indian history and the Warring States period in China). The second is when a hegemon wants to extend its reach beyond its borders, either preparing for war with neighbors or enticing them into peaceful unification (as in the Roman Empire and Imperial China). If things are more disorganized than either of these scenarios, it seems that diplomacy tends to be more informal and less widely-utilized.
The role of diplomats has varied across cultures and eras, though they were usually granted a protected status to ensure peaceful communication (sometimes this was enshrined in local religions). In China’s Warring States, diplomats were essentially hostages. If a state acted up, its diplomats in rival states would be killed. In India, diplomats were expected to act as spies and thieves, though I’m not sure how this worked with the norm that diplomats were to be unharmed—if you knew who was stealing your secrets and treasures, why would you let them go free? Roman diplomats acted mostly as archivists, documenting local trends for imperial records. In many areas, diplomats acted as religious missionaries or economic intermediaries.
There’s also a lot of variation in the types of diplomatic positions. Messengers or heralds simply conveyed information, lacking the authority to do anything else. Envoys tended to stay in the target nation’s lands in order to learn more and build a relationship, expressing the general views of their home country’s leadership. Ambassadors were long-term envoys who usually had more authority to negotiate on behalf of their home nation. The most extreme on this spectrum were called “plenipotentiaries” (“full powers”), who had the right to enter into treaties and other agreements even without their leader’s permission. Plenipotentiaries became necessary when diplomats had to go far from their home, making constant communication for confirmation impractical.
And that’s all I’ve got! I hope this article was useful. Please let me know if there’s anything else you’d like to see me cover!
After general society, warfare, and economy, people have been asking for religion. So here we go! Right at the start, I’d like to recommend Bret Devereaux’s “Practical Polytheism” series on his blog, A Collection of Unmitigated Pedantry. That series inspired a lot of this, though I’ve added some insights and resources as well.
Alrighty, the usual conditions: I’ll be trying to hold to things that are true across most premodern civilizations, so there’s a lot of variation to account for. Fantasy magic and cosmology change a lot, though less than you’d expect for this topic. The usual “most fantasy is early modern” also affects less here. Finally, if my unfortunate European- and Mediterranean-heavy education shows here, please let me know and point me to places to learn.
In addition, while this post focuses on polytheistic religions, almost all the points can apply to monotheistic systems as well. It could be argued that Medieval Catholicism followed most of the following points, with two main exceptions: other gods definitely didn’t exist, and God is morally right. This’ll make more sense once you read the rest of the article.
I’ve realized that these posts are too long for many people to read through, so I’m going to add a brief summary here:
Religion was less about beliefs and morals and more about achieving real benefits through rituals; deities and myths were mostly explanations on why rituals worked.
Think pantheons, not individual gods; your characters need someone to turn to for every situation. You can use existing pantheons to make sure you’ve got everything covered. Also, alignments don’t matter; people can’t afford to offend a god, no matter how much they disagree with what the god says, does, or wants.
For ideas, you can use the Thompson Motif-Index of Folk-Literature; A0-A599 are great for gods, A600-A2599 for creation myths, and everything else for more general myths. (Details on how to use this fantastic resource in the article.)
This article has sections on origins, pantheons, rituals, myths, worldly matters, and religious relations.
The biggest lesson you can learn here is that ancient religion was about practicality, not morality. Religion wasn’t for doing what was morally right, but for keeping the gods on your good side to get real benefits in your life. What follows is the generally-accepted explanation for how premodern religion came to be.
B. F. Skinner, the psychologist who discovered operant conditioning (basically positive reinforcement), made another, less well-known discovery called “pigeon superstition.” He divided pigeons into two groups. For one group, each pigeon was placed in a cage where they could push a button and a door would open, revealing a treat. As expected from his previous experiments, the pigeons were incentivized to push the button. The second group’s cages had treat doors that would open at random. These pigeons still tried to figure out how to make the door open, but in the absence of reliable feedback, they ended up making incorrect associations about what was working. They ended up creating very complex behaviors (flap twice, hop three times, spin, hop two more times) that they would repeat, trying to make the door open on purpose. Psychologists call this behavior “superstition,” the belief in causal relationships where they don’t really exist.
So far as we can tell, this is what happened for premodern religionists as well. They wanted something good to happen (e.g. crops to grow), and started trying things to make it happen (e.g. pour some wine on the ground). If it worked, they would keep doing it; over time, experimentation would lead to very complex rituals. However, because premodern societies are so risk-averse (see my first article), consistency was more important than innovation. Later came attempts to explain why the rituals worked (e.g. an earth goddess was drinking the poured wine and she encouraged the crops in gratitude). These explanations were ultimately less important than the ritual results, but they formed an important cultural backbone.
This is important: premodern people didn’t have complicated religions because they were stupid. They had these things because they were trying to be scientific in an environment that made progress effectively impossible. These beliefs eventually morphed into the sort of religious fervor that we know and love from relatively recent history, but they didn’t start out that way.
Now, a lot of the reasoning behind this section doesn’t hold as well if the gods are actually real, as in most fantasy settings. However, a lot of the results of these forces do apply, so I’m including it anyway.
This doesn’t really go anywhere else, but as an aside, atheism didn’t exist in the premodern world. It’s a very recent invention. Without adequate scientific tools, there isn’t a good way to explain natural phenomena without religion. “There are no gods” makes about as much sense as “There is no sky.” We’ll touch on this in the final section, but most religious wars weren’t saying that enemy gods didn’t exist, but that the enemy gods were weaker than yours.
I’ve mentioned that gods probably came after rituals in real-world religious reasoning, but since they’re where most worldbuilders begin, we’ll address them first.
The most important thing to remember is, again, practicality trumps morality. There are two main effects of this. The first is that the most vital thing your gods can do is solve problems for your world’s denizens. Critically, they need to be able to help your denizens in all areas of their lives. Real religions do this in two ways: either they have an all-powerful single god, or a pantheon that collectively can do everything a worshipper could want.
Many fantasy settings have individuals or cultures pick a third option that makes no sense: the person or society will worship one or two gods that can’t help them everywhere. It’s all well and good to say your orcs serve Gorshnakh the Bloody, God of Conquest, but what will they do when their crops need rain? When they need to secure an important alliance? When there’s a problematic childbirth? Gorshnakh probably won’t be able to help too much there. Your orcs need to be able to get help for whatever problems they encounter. The same holds true for individual characters. If your paladin worships only the Gentle Lady of Dreams, then they’re sunk if they need anything not sleep-related. Real-world priests still paid homage to other gods.
In your settings, it’s perfectly reasonable to have different pantheons for different societies and ancestries. They can even have overlapping domains. Premodern polytheists generally held this view: other gods existed, they were just weaker. We’ll return to this point later.
The second effect is that morality is completely irrelevant. Many RPG systems’ deities have alignment restrictions: Gorshnakh will only accept chaotic evil acolytes, while the Gentle Lady only takes neutral good followers. This isn’t at all how premodern religions worked. In the end, it didn’t matter whether you agreed with a god’s ideas or requirements; their power over you meant that you didn’t have much choice but to do what they wanted. What do you do if you’re an Aztec citizen who thinks that cutting out the heart of your neighbor’s daughter is a bad idea? You suck it up, because if that sacrifice doesn’t happen, the moon eats the sun and then teams up with the stars to devour the earth and everyone you ever loved.
This isn’t to say that there’s no correlation between a god’s character and a culture’s or character’s morals. For one thing, the explanation that a society comes up with for why its rituals work usually flows from what it values. For another, the power of cognitive dissonance encourages people to rationalize and justify actions they’re forced to take; over time, our Aztec will probably come up with a reason why human sacrifice is fine after all, and then teach that to their children.
We now have two general rules: think pantheons, not deities, and alignment doesn’t matter. (I’m placing this as its own bullet to make it easier to find for readers; hope that helps with these text walls.)
I have one technique that I use to make sure I’ve covered every need a group has. You can take a real-world pantheon—the twelve Olympians are low-hanging fruit, but they work just fine—and make sure your pantheon can do everything the Earth deities can. That doesn’t mean your gods have to be based directly on the “real” ones, but they do have to be able to accomplish the same things. If none of your gods can help with family matters, like Hera can, you may need to add a new god or give that power to an existing one. You can lump these domains into a few gods or spread them out over many; it doesn’t matter. Some civilizations may have different requirements: a purely underground dwarven society won’t need a weather god, but they might need a god of subterranean creatures.
One thing that almost every premodern polytheistic religion had was “little gods.” The big guys (like the Greek Olympians) were extremely powerful, but they might have their hands full with big matters. Because of this, polytheistic systems usually had very minor gods over specific domains (the Romans had a god of hinges), places (this river, that hill), people (your family), or events (a god of marriages, business deals, etc.). The premodern person would spend most of their religious attention on these little gods, while acknowledging the superiority of the big ones.
At this point, I’d like to introduce a fantastic—and somewhat overwhelming—resource for religious worldbuilding. A folklorist named Stith Thompson composed a massive, six-volume classification for folklore and myths. There’s… a lot there. You can find a summary of the Thompson Motif Index here; you can click the red codes on the left to see the even more detailed sub-classifications. For ideas for deities, I suggest using A0-A599. As an example, I just clicked on A280 for Weather Gods, then scrolled down and saw A287.0.1: “Rain god and wind god brought back in order to make livable weather,” which apparently comes from an Indian myth. I’ve already got two deities and an idea for a myth. It’s great stuff, guys.
Rituals, or standardized rites of worship, are really what premodern religion is all about. A decent analogy is the average car owner: you don’t really need to know what’s going on under the hood, because most of your time is spent driving, not studying the car’s history or mechanics. In general, rituals are grossly underrepresented in fictional works. Putting rituals in your setting is one way to really flesh out your religions.
The fundamental idea behind rituals is called do ut des, Latin for “I give that you might give.” The supplicant does something for the deity—maybe a sacrifice, or at least an acknowledgement of the god’s power—in the hope that they will receive something in return. It’s a transaction, though an unequal one. This is a good thing to keep in mind for designing your own rituals.
A quick note about real rituals: obviously there will be times when a ritual doesn’t work. You pray for rain and there’s a drought. There are two classic explanations: either you did the ritual wrong, or the god just decided that it didn’t feel like accepting the ritual this time.
I’ll be using Victor Turner’s ritual categorization system, though I’m changing the names because the original terms seem counter-intuitive to me. In studying African rituals, he identified a few main types that I’ll call regular, irregular, divination, and consecration. If you read the descriptions and decide that other terms make sense, I’ll gladly rename them.
When I say that some rituals are regular, I don’t mean they’re ordinary—I mean that they happen regularly. These are rituals that happen consistently at specific times in the year, month, day, or other time increment. Seasonal rituals (solstices/equinoxes, harvest and planting festivals, etc.) fall under this category. There might also be rituals for lunar phases, as well as daily events like sunrise and sunset. Cultures could come up with rituals tied to more arbitrary points in their calendar, like the Sabbath in Abrahamic religions.
Irregular rituals are those that are brought on by specific events in one’s life. Turner further divided these into life-event and affliction rituals. Life-event rituals are used at key points of transition in a person’s life: birth, puberty, marriage, pregnancy, death, etc. Affliction rituals are used when people have a very specific need. A general needs success in an upcoming battle, a husband seeks aid for an ailing wife, a lovelorn teen needs a divine wingman, etc. One important variety of affliction ritual is the exorcism, where the ritual focuses on banishing a wicked being responsible for the problem.
Divination, when it comes to ritual theory, does not refer to seeing the future (although foreknowledge might be one result). Divination is when people want to learn what the gods have to say. “Is this marriage a good idea?” “Should I attack today?” “Why is my horse sick?” There are a lot of ways to let the gods speak. Classic divination uses random phenomena (the flight of birds, the appearance of animal organs, etc.), though drug- or trance-induced visions from oracles work too. Romans would sometimes overturn consular elections based on the results of a divination ritual; as Bret Devereaux says, “The gods get a vote, too.”
The final kind of ritual is consecration. We’ll be discussing this in greater detail in the “Offerings” section, but the essence of these rituals is to dedicate something to the god in question.
Unfortunately, I don’t have much to say here. In the real world, myths are the results of people trying to explain things: why rituals work, why natural phenomena exist, where a civilization came from, even the origins behind place names. The story of Theseus and the Minotaur seems to be an attempt by the Greeks to explain why ancient Minoans liked bulls and had a labyrinth-goddess. Other myths may be for trying to come up with fables to justify the society’s values. This is anthropologically interesting, but generally not too useful for worldbuilders, since myths are usually supposed to be things that actually happened, not invented stories.
All I can really offer here is another callout to the Thompson Motif Index. It’s useful for deity ideas, and you can get some creation myths from A600-A2599, but it goes all the way to Z356. There’s just… so much there. Another random click (H1250, “Quest to the other world”) and scroll brought me to H1252.4, “King sends hero to otherworld to carry message to king’s dead father.” That could even be a real historical event or a quest hook.
(I struggled with a name for this section; if you think up a better one, let me know.)
In premodern religions, the gods could own things just like everyone else. The gods could claim things on their own (Mount Olympus is a very real mountain that the Greeks decided the gods owned), but most of the things the gods possessed were the result of worshippers giving them willingly. Temples, for example, were places the gods genuinely lived in (in premodern societies’ perspective) when they weren’t in their normal homes.
The term for something owned by a god is “sacred.” Technically, the word “sacrifice” comes from the act of giving the offering to the god (sacer facere, “to make sacred”), not the act of killing the victim or giving something up in general.
One very important category of property the gods owned was people. The priesthood—the group of priests—was usually considered sacred itself. Religious workers belonged to the god for as long as they served (not always for life; even the famed Vestal Virgins of Rome only had to be devoted virgins for 30 years, which isn’t that bad compared to what Christian monks dealt with).
The act of offering something—person, place, or thing—to a deity usually involves a ritual of its own. These are the consecration rituals I mentioned earlier.
Two brief notes: there are a lot of ways that cultures handle their priesthoods. It can be a full organization with a developed hierarchy, like the Catholic Church; it can be a diffuse group of actors, like the stereotypical medicine man; it could even revolve around people who aren’t actually offered to the god at all, like household leaders. There’s too much variety here to establish general trends.
The other thing I’d like to address is the idea of state religions. Given the amount of power that gods were understood to have in the premodern world, it’s understandable that governments almost universally sponsored religion in one way or another. The degree and nature of integration with the worship in question varies a lot, but “state cults” are everywhere.
To simplify things dramatically, we can say that there are two basic attitudes one religion can have about another: friendly and hostile.
When one polytheistic religion is friendly towards another, this can create some significant cultural merging. Remember, what’s important for premodern peoples is results, not “truth.” If another group’s gods seem to be more powerful—maybe their civilization has been around for longer, or they’re more successful in battle—it’s perfectly reasonable to start worshipping their deities. They’d usually add their own touches, since their gods clearly weren’t worthless; they’d gotten them this far, hadn’t they?
Hostile relations are generally easier to understand, with one caveat we’ve mentioned before. Usually, polytheistic cultures acknowledged that other gods existed, but they were certain their own gods were stronger. There might be contests to see which god was better; one classic example is the Biblical story of Elijah and the priests of Baal in 1 Kings 18. Elijah challenged the opposing priests to get Baal to accept an offering of a bull; when no divine event occurred, Elijah mocked that Baal might be powerless, saying “Maybe he’s asleep? Shout louder!” When Elijah made the same offering, holy fire consumed the altar and everything around it. In response, the people put the offending priests to death in an attempt to appease the clearly-stronger God Elijah served.
Religiously-motivated wars and violence were often justified by similar logic. Our gods might be offended by those who worship others, so we’d better stamp out the heretics. Interestingly, if wars were waged for secular reasons, then there was plenty of room for the religions themselves to be friendly to each other. The Romans had a ritual before they attacked a large settlement where they would invite the enemy’s gods to switch sides and join the Romans; if they won, it was a sign that the gods had indeed changed allegiances and could reliably be worshipped.
And that’s what I have for you guys! Let me know if you have any additions or corrections, and if you have something else you’d like for me to talk about next. Have fun!