Literal Yearly Cider

About a month ago, I had the pleasure of attending a literal yearly cider pressing. It seemed like something I should write about on this blog.

In terms of equipment, this cider pressing was very different from the last one I’d attended. That one was back in fall 2016, when I was adventuring in the woods of North Carolina. I was visiting some friends at Wild Roots, an off-the-grid community with no electricity or indoor plumbing. They get their food from hunting, foraging, and gardening; they also dumpster-dive, and collect food and materials that the rest of society is throwing away.

On the day of the cider pressing, we all climbed into the back of Todd’s truck and drove to a nearby organic apple farm. (Todd is the person who’s been at Wild Roots the longest; you’ll see him in the video at that link.) Wild Roots had an arrangement with the apple farm, where after the season was over, they’d come and clean up the apples that had fallen on the ground, taking home as many as they wanted. So we spent the morning salvaging apples, and then drove over to a friend’s house to use his big wooden cider press. We sorted through apples, cutting out the especially rotten or wormy bits, and cutting them into chunks so we could press them more easily. Then we threw them into the wooden press and crushed them into juice.

The cider pressing I attended this year, in Longmont, Colorado, had much more modern machinery, but a similar sense of community. People drove in from nearby towns, bringing crates of apples that they had picked. Some had gotten them from trees in their own yards; others had gone out to public spaces and collected apples there. One woman told me about a volunteer organization that picked apples in order to save the bears. Apparently, there are a lot of apple trees in town, and many of the people who own the properties don’t care about harvesting the apples. So they just ripen and fall on the ground, and the bears come into town to eat them. And if a bear is sighted a certain number of times in town (I think it was three), then the bear gets shot. So these volunteers drive around, picking the unwanted apples in hopes of keeping the bears away.

So everyone arrived, and we finished cleaning the equipment, and then the cider pressing began. Everyone who was there pitched in to help. People seemed to settle into jobs. Someone needed to load the apples into the washer, and then someone else needed to help guide them into the grinder after they were washed. And once they were ground, someone needed to carry the buckets of ground apples over to the tables where people were packing them into the pressing cloths. I ended up working with two boys who were putting buckets under the grinder, and making sure that, when one bucket got filled up, a new bucket could be efficiently slid into place. The younger boy, who was maybe eight years old, seemed like a future engineer. He designed an assembly line process for getting the buckets slid into place, and then stayed there for hours, making sure that everyone was doing the exact job they were supposed to. But he wasn’t strong enough to slide the full buckets himself, so I helped with that part.

Here are some pictures of the process. These are some of the apples that people brought to be pressed:

This is the apple washer and grinder. The part on the left spins the apples and sprays water on them, in order to rinse them. (The water drips down into the bathtub and then recirculates.) Then a chute opens and the apples spill out into the red part, which is the apple grinder.

The second picture shows the inside of the apple spinner, so you can get a sense for how it works. It wasn’t spinning, so the two gentlemen in the photo had opened it to fix it. They’re the ones who own and built the cider press; they used to host this cider pressing every year, until a fire destroyed their barn and all of their equipment. They’ve spent the last few years rebuilding, and this year was the first cider pressing with their new equipment. The concrete slab we were standing on was the place where the old barn used to be.

Here are some photos of the grinding process. The first two show the apples rolling into the grinder. The last photo shows the grinder itself; that cylinder spins, and the bits of metal (maybe pieces of nails?) shred the apples.

Once the apples were ground, they were put in cloths to be pressed. The wood frames were used to get the filled cloths to be the right size. A cloth was placed in the frame, and then filled with apples, and then the cloth was folded over the top, and the wood frame was removed.

The finished cloths full of apples were stacked in between sheets of plastic to press them in this hydraulic press.

Here’s the cider coming out of the press! It was filtered through a mesh bag to remove any solids, and then it flowed down a pipe (the cider-colored one) to a refrigerated tank.

When the cider pressing was finished, my friend Trevor and I took about 12 gallons home to brew into alcoholic beverages. (Trevor is the one who invited me to the cider pressing; he’s also been teaching me, over the last year or so, how to brew various things.)

So the next day, I went over to Trevor’s house to actually do the brewing. Trevor wanted to make a cyzer (which is a combination of cider and mead), while I wanted to see what the cider would taste like if fermented with its natural yeasts. So we did both.

It turns out it’s extremely easy to make hard cider. In fact, it practically makes itself. If you’ve ever picked a wild apple, you’ll know that it’s not shiny; instead, it looks kind of matte, because it’s covered in a thin white film. That film is the yeasts. The yeasts want to eat the apple, but they can’t, because they can’t get through its skin. (That’s the whole point of the apple skin — to protect the fruit from microorganisms that might want to eat it.) But even though the yeasts can’t eat the apple yet, they stay on the skin anyway, waiting for the day when the apple will fall off the tree and the skin will get broken open.

So when you take a whole apple, skin and all, and you crush it for cider, you’re doing exactly what the yeasts were waiting for: you’re giving them access to the delicious apple flesh within. And the cider you press will be full of yeasts as well. So basically, all you need to do in order to make wild cider (that is, cider made with wild yeasts) is to put the freshly-pressed cider in a nice clean container, and let the yeasts do their work. (The alcohol they produce will keep any other microorganisms from growing in the beverage.)

Here are some pictures of that process.

The first (and most time-consuming) step in any brewing project is cleaning and sanitizing all the equipment. For cleaning, we just wash stuff. For sanitizing, we use something called StarSan, which sanitizes the equipment without affecting the flavor of the beverage. Here’s a picture of Trevor washing the equipment, and me spraying the fermenting bucket with StarSan.

We then poured some cider from the tank where we were storing it, into the bucket. At this point, the cider had been sitting in the tank for almost 24 hours, and it had already started to ferment. It was slightly fizzy.

Once the cider was in the bucket, we aerated it. Aerating introduces oxygen, which makes the yeasts extra excited about fermenting. We used Trevor’s immersion blender for this step.

Then we transferred the cider from the bucket to the carboy, and left it to ferment. (We could have fermented the cider in the bucket, but we wanted to use the bucket for the cyzer project. So we moved the cider to the carboy. The reason we didn’t just pour it into the carboy to begin with is that you can’t aerate it when it’s in the carboy.)

Then Trevor took a specific gravity reading. I don’t quite understand how this works, but somehow it measures the amount of sugar in the cider, which gives a sense for how much alcohol it might eventually produce.
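
As I understand it, the idea is roughly this: dissolved sugar makes the cider denser than water, so the hydrometer reads above 1.000 before fermentation and drops as the yeasts convert sugar into alcohol. Homebrewers turn the before-and-after readings into an alcohol estimate with a simple rule-of-thumb formula. Here’s a minimal sketch of that arithmetic; the gravity values in it are made-up examples, not our actual readings.

```python
# A rough sketch of the rule of thumb homebrewers use to turn two hydrometer
# readings into an alcohol estimate. The gravity values below are
# illustrative examples, not our actual measurements.

def estimated_abv(original_gravity: float, final_gravity: float) -> float:
    """Approximate alcohol by volume (%) from hydrometer readings.

    Sugar makes the liquid denser than water, so the reading starts above
    1.000; as fermentation converts sugar to alcohol and CO2, the gravity
    falls, and the size of the drop tracks how much alcohol was produced.
    """
    return (original_gravity - final_gravity) * 131.25

# Fresh cider often starts somewhere around 1.050; fermented dry, it can
# finish near 1.000, which works out to roughly 6.6% ABV.
print(f"{estimated_abv(1.050, 1.000):.1f}% ABV")
```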

The last step was putting the airlock on the carboy and then leaving it to ferment. The airlock solves the following problem: Fermentation produces a lot of carbon dioxide, and you need to let the carbon dioxide out of the carboy, or else it will explode. But if you just leave the carboy open, then all sorts of bugs and microorganisms can get in. So you use an airlock, which lets air out but not in (at least assuming that the pressure inside the carboy is higher than the pressure outside it).

So we put on the airlock, and we filled it with liquid (we used StarSan, but water would have worked), and then we left it there for nature to take its course.

So that’s our cider! It’s been sitting there for about a month now. It’s probably done fermenting, or almost done, but Trevor and I have both been busy (including with other homebrewing projects), and haven’t had time to rack or bottle it. (Racking is when you transfer it from one fermenting vessel to another. That stirs up the cider, which causes it to start fermenting more; it also helps get rid of some of the sediment, which is the solid gunk that sinks to the bottom of the carboy.)

So I can’t tell you how the cider tastes yet; I’ll have to report back later. But I can tell you that I’m looking forward to tasting some real Colorado cider, made from apples that we picked and pressed ourselves, and using the yeasts that grow naturally in this part of the world.

 


The Fragility of Knowledge

A couple years ago, I was taking an outdoor class in West Virginia. I had brought a book to read, and I had put it down on a tree stump as I worked on my project. And while I wasn’t looking, one of the resident goats came by and ate the book’s cover.

This got me thinking about the fragility of knowledge. A book is such a fragile object, and the knowledge it contains is fragile as well. If our society collapsed, how much of our knowledge would be saved, and how much would be lost? What would it take to preserve that knowledge?

I.

We have such an abundance of knowledge these days that we forget how easy it is to destroy. We store our knowledge in physical media (books, hard drives, etc.) and those physical objects are remarkably fragile.

Digital media are fragile in a few ways. For one thing, they’re physically fragile; if you’ve ever dropped a hard drive and broken it, you know how easy it is to lose hundreds of gigabytes of data. They’re also fragile in that they rely on electricity; if the electric grid goes down, we’ll lose access to all the knowledge on our DVDs, hard drives, and the internet. And they’re fragile in the sense that digital technologies quickly become obsolete; I remember watching my dad laboriously copying our family’s home videos from VCR to DVD when that technology shifted.

We can work to preserve our digital data. We can guard against physical damage by making backups, and if the electric grid goes down, we can bring it back up. And there are data librarians, carefully transferring knowledge into new storage media and file formats as the old technologies become obsolete.

But it takes a lot of work to maintain our digital data, and already, so much of it has been lost. Essential data from the Apollo missions is inaccessible because we’ve lost the ability to read the tapes it’s stored on. And the last programmer who really understood the code for the Voyager spacecraft has now retired. As time goes on, more and more scientific data and instruments will be lost in the same way.

II.

We don’t think about books as fragile, because we live in a carefully climate-controlled environment where they’re easy to preserve. But imagine trying to preserve a library full of books in a more primitive dwelling place, say, a humble log cabin with a dirt floor.

Animals will eat your books if you’re not careful. It should be easy to keep out the larger ones, like goats, as long as your cabin has a door. But mice and rats are notoriously good at sneaking through the cracks. They’re a problem in modern dwellings, so I can only imagine they’d cause that much more trouble in a primitive house.

And if you somehow manage to keep out the mice and rats, then you still have bugs to contend with: silverfish and all the other creepy-crawlies that like chewing on paper.

Now suppose you somehow manage to eliminate the rats, mice, and bugs. You still have to worry about dampness, which will rot your books if you’re not careful.

And you also need to worry about humans. Presumably, you’re going to have people handling these books (since otherwise, why bother storing them?). And people are going to cause wear and tear, ripping pages and bending covers and so on. The oils from people’s fingers will damage the books. And if you’re living in a less civilized society, where people work outside and don’t wash their hands as often, they’re going to end up smearing dirt, grease, and animal residue on the pages.

(To give you a more concrete picture: a couple years ago, I spent a week visiting some friends who had run away to live in the woods. When you spend all day outdoors, your hands get dirty. There was no easy access to soap, so you’d use the bathroom and then just not wash your hands (presumably following the left-hand right-hand rule). If you wanted to cook meat, you’d just grab some raw meat that was sitting out (they didn’t have a refrigerator), throw it in the pot or pan, and then wipe your hands on your pants. I helped them chop up some bear fat to render, and then I wiped my greasy, smelly, meaty hands on my pants, and picked up my book and started reading it again. By the time I got back from that trip, the cover had fallen off. That’s also part of what made me want to write this post.)

Anyway, even if you solve all these problems, even if you are perfectly and inhumanly careful with your books, they still only have a finite lifespan. Eventually, the pages will get brittle, and will crumble to dust in your hand. No matter how carefully you tend to your books, you can’t hold onto them forever.

If you want to preserve books for hundreds of years, you have to copy them. If you have a printing press, you can use that. If you don’t, then someone has to sit down and copy the books by hand.

III.

“So what,” you say to all this. “It’s an interesting thought experiment, but we’ve got people archiving all the important digital knowledge. And if something happens to our books, we can just print new ones. It’s all well and good to think about previous civilizations, but this is 2018. We’ve got climate control. We’ve got very technologically sophisticated libraries.”

And yes, we’re fine for now. As long as we can preserve our current levels of technology, then we can also preserve our knowledge. But with one little blip, all of that could be destroyed (nuclear apocalypse, destruction of the electrical grid, major plague that kills 90% of people, you name it).

The thing is, knowledge and technology are mutually reinforcing. As we’ve discussed, you can’t preserve knowledge without the technology to read, store, and reproduce the media that contain that knowledge. And without the knowledge, you can’t recreate the technology… because the instructions for building a printing press were (you guessed it) written in one of the books that got destroyed.

So suppose you have a nuclear war that kills large numbers of people and throws society into chaos for a couple hundred years. People are too desperate for food and survival to bother preserving books, or building printing presses, or doing much else besides struggling to survive. And by the time things have stabilized enough for people to start wanting to print books again, all the instructions on building a printing press (or a computer, or the electrical grid) have been lost, and all the people who once knew how to do it are dead.

If you fail to preserve knowledge, if you fail to save the printing presses and copy the books as they start to fall apart, then you only have a short window — maybe 40 or 50 years — before anyone who might have been able to recreate the technologies will have died off.

IV.

Let’s think about a non-industrial, non-electric society (either pre-modern or post-apocalyptic), and let’s ask: what would it take for such a society to preserve knowledge?

If you don’t have the printing press, then the only way to pass written knowledge down through the ages is to have scribes laboriously copying books by hand.

And to make the books at all, you need certain technologies. You need something to write on: paper or parchment, or maybe clay tablets. If you’re using paper or parchment, you need ink. And you need something to write with: a pen or stylus. You need the raw materials to make these things from, and you need the knowledge of how to make them. For the raw materials, there may only be so much to go around — only so many animal skins to use as parchment, only so many berries from which to get ink. So you’ll be limited in the number of books you can store, since you can only copy a finite number as they start to disintegrate.

And let’s not forget the most important technology of them all — written language. We take it for granted, but writing was only invented around 3200 BC. Without writing, all of your knowledge must be transmitted orally (or through non-verbal pictures). Oral histories can be surprisingly high-fidelity, but they’re still very limited in the amount of knowledge they can store. When all you have is the oral tradition, you’re limited to the amount that your tribe can memorize and pass on.

Could we lose our writing system? Could we lose the concept of writing altogether, and forget that our ancestors had ever stored words in a physical object? I think that’s unlikely — even in the most dire post-apocalyptic setting, scraps of writing will remain: words inscribed into monuments and gravestones, ancient fading street signs. It will be a long time before all traces of our writing are gone.

But if we did lose it, could we recover it? How long would it take? It took most of our evolutionary history to develop the concept in the first place. How long would it take us to figure it out again?

V.

So those are some of the technological requirements for storing and copying knowledge. But what about the social requirements?

Well, first of all, you need a society that values the preservation of knowledge. If all your society cares about is warfare and cattle herding, then it’s not going to devote its resources to creating and preserving books. Why waste human resources training scribes when you could send those people out to wage war or raise cattle?

And even if your society cares about preserving knowledge, it has to be physically structured in such a way that creating and preserving books is possible. For instance, if you belong to a nomadic tribe, and you have to carry all your possessions on your back or in small wagons, a large library just wouldn’t be feasible.

And preserving knowledge takes time and resources. Someone has to create the paper or parchment, and someone has to make the ink to write with. Then someone has to laboriously copy the books, letter by letter.

If you belong to a small tribe of only 50 or 100 people, it might not make sense to train someone as a scribe (or an ink-maker, or a paper-maker). And even if you do have someone skilled in these roles, they might not have much time to devote to it in addition to all the everyday tasks around the village.

I may be wrong, but the way I see it is, book-making (at least at a scale larger than one or two books per tribe) only starts to become viable when your society gets large enough to support division of labor. You need enough people taking care of the basic subsistence needs that a few others can devote their time to specialized crafts like being a scribe.

VI.

So what might we expect to see, in a society that values the preservation of knowledge?

Well, for one thing, we’re likely to find scholars: people whose job it is to produce and preserve knowledge. And it’s likely that these people will receive high status, or some other kind of special recognition, in the society.

And this is indeed what we have in our society today! We have academics, a whole group of people whose entire job is to study the knowledge that other people have generated, and to add to that body of knowledge.

Think how weird academics might seem to a stone-age civilization. “You mean there are people who just sit and read books all day? And they never go outside and work with their hands? They do nothing but read and write, and yet society still gives them food and shelter? What freeloaders!”

If you don’t understand the fragility of knowledge, and the importance of knowledge to society, it’s easy to see “academic” as a worthless profession. There are historians who do nothing but read ancient texts and add a little commentary to their interpretation. There are mathematicians who climb through ever-higher realms of abstraction with less and less connection to the actual world.

And it’s unclear whether the knowledge produced by any specific historian or mathematician will be useful down the line (or even what “useful” means to a given society). But if a society has reverence for knowledge in general, and gives a special class of people the freedom to pursue knowledge, then you end up with a large body of information and a group of people that understand what it means and how to use it.

Another thing I’d expect to see, in a society that valued knowledge, is a cult of sacredness surrounding books. The need for this cult arises directly from the fragility of the physical media: if books are fragile, and books are important, then it’s essential to treat them with reverence and care, in order to preserve them as long as possible.

Books are easily destroyed, so you store them in a special room or container to keep them from being damaged. People are dirty, so you make them undergo a ritual purification before they are allowed to touch the book. Scribes treat their jobs as holy, reverently copying character after character from the old to the new parchment.

And indeed, you can find this in Judaism, a religion whose worship centers around a holy book, the Torah, which makes it necessary to preserve that book carefully. The Torah is stored in the Ark, and only taken out during special religious occasions. You don’t touch it directly with your hands, but with a special metal pointer. The Torah can only be copied by a qualified scribe.

In general, if you want your culture to treat books with respect and care, it makes sense to teach people that books are sacred. The individual rituals might do nothing to preserve the books, but they teach people that books are important and worthy of reverence. For instance, in Judaism, if you accidentally drop a holy book or set it on the floor, it is customary to kiss it. When the Torah is taken out of the Ark and carried around, the whole congregation stands. See this article for more details of Jewish ideas and practices surrounding holy books.

VII.

So you can make a society that values knowledge. You can make a society that treats books with reverence, and which has the technology and manpower to copy its books over as the old ones decay.

But knowledge isn’t just a physical object. You need a human to interact with what’s stored in that object. You need a reader to look at those written words and understand what they mean. Fun thought exercise: if you have a book written in a dead language that no one can read, can it still be said to be storing knowledge?

Even if you manage to preserve the books, even if you have the technology to copy them as needed, and your society wants to devote its resources to doing so… if no one can understand what the books mean, then they’re worthless.

Here are some issues that make knowledge fragile even if there’s no apocalypse or loss of technology:

Language change. Anyone who’s tried to study Greek or Roman texts knows this one. If you want to access the knowledge written by Ovid and Cicero, you either need to learn Latin well enough to read the originals, or you have to rely on a translation (which will necessarily introduce some inaccuracies).

If you rely on translations, eventually the language is going to change enough that you need a new translation. So if you have Cicero in 21st-century English and the original Latin, and no one in 2400 AD understands Latin, then they’re going to have to translate Cicero from the English, introducing even more inaccuracies.

So in practice, if you really want to preserve knowledge, then you’re going to need scholars of ancient languages, who are willing to devote their time to studying the meaning of these texts.

Loss of context. Even if you have scholars who are fluent in some ancient tongue, they won’t necessarily be able to understand everything that’s written in it. Life has changed so much over the last few thousand years (both culturally and in terms of material existence) that it’s hard to understand what people were writing about back then.

For example, I have a friend who’s a historian. At one point, he and some colleagues were studying fight scenes in ancient literature. The fights were described as very short, and my friend’s colleagues were speculating about why the authors had chosen this stylized representation. My friend (who has done a fair bit of fighting) had to point out that no, these fight scenes weren’t stylized; real fights tend to be quite short.

This seemed like a perfect example of how, without the lived experience, it’s impossible to tell what’s an accurate description and what’s artistic flair; it’s impossible to tell what’s normal behavior (on the part of the characters or environment) and what’s out of the ordinary.

As another example, I recently read a translation of The Voyage of St. Brendan, a 10th-century Latin text written by Irish monks. I found a lot of it difficult to understand, because the narrative is organized around Christian holidays and the different prayers recited at different parts of the day, and I’m not familiar enough with Christianity to understand their significance. It’s clear that these times and holidays had symbolic meaning in the text, which the reader was expected to understand.

(Note that modern writers omit just as much context as ancient ones. For a silly example of what stories might look like if we included more of the context, see “If All Stories Were Written Like Science Fiction Stories” by Mark Rosenfelder.)

In general, in order to understand a piece of ancient writing, you need to understand the material life that people led back then, and you also need to understand the culture. You also need to understand the canon of previous literature that the writer was drawing on. If you’re not familiar with the Bible, you’re going to have a lot of trouble with European literature from the last 1000 years, since most of it is sprinkled with biblical allusions.

VIII.

One of the biggest threats to knowledge is the people who actively want to destroy it.

I’ve said that knowledge and technology are mutually reinforcing, but knowledge and society reinforce each other too. The society preserves the books, and the books contain the ideals that form the foundational principles of the society.

If you want to destroy a society, and make sure it never returns, then you need to destroy everything that society has ever written. If you don’t, then people in your culture might discover those books and be influenced by them.

This is why acts of destruction so often focus on the intellectual content of a society. The library at Alexandria was burned by Muslim conquerors who wanted their ideology to be the only one. Legend states that:

“John the Grammarian” asked Amr to spare the library, and Amr contacted the caliph Umar for authorization. Umar replied that if the books agreed with the Quran they were redundant, and if they did not, then they were forbidden. Amr handed the books over to Alexandria’s heated bath houses, where they were burned as fuel for six months.

And there have been many occasions, throughout history, where conquering societies have burned the temples or religious relics of the people they conquered. That, along with burning books, is an effective way of destroying a culture.

Even in our own society, there are those who want to destroy knowledge. Environmentalists, who think our technologies are wasteful and destructive, might be happy to get rid of the knowledge that we need to construct industrial machines. Those who hate warfare might be happy if we lost the knowledge for making nuclear weapons. Those who are afraid of a future of designer babies might be glad if we lost the knowledge for genetic engineering.

It’s a common theme in post-apocalyptic fiction that knowledge was destroyed on purpose, because it was that knowledge that people used to create the weapons that wiped out society.

So if you want to preserve knowledge, it’s not merely a matter of protecting the physical objects, and making sure you can still understand the knowledge they contain. You also must defend your books or inscriptions against those who will want to destroy them.

IX.

I’ve written this all as a hypothetical, but we really have lost a lot of knowledge over the ages. Some of it has disappeared “naturally”, the books simply getting lost or disintegrating as the years went by. Some of it has been deliberately destroyed.

We will never know how much has been lost. But we can catch glimpses of the past’s intellectual wealth from the few hints that survived through the ages. In ancient literature, we find stray references to books that have been destroyed: lost pieces of mythology, lost works by Archimedes and Plato, lost books of the Bible, and non-canonical gospels from the early days of Christianity.

We also know about lost knowledge from ancient technologies that, even in the modern world, we struggle to recreate. No one knows how the Egyptian obelisks were raised onto their bases. The secret of Roman concrete was only recently rediscovered. Scientists are trying to recreate the recipe for garum, a fish sauce that ancient Romans used as a condiment. No one knows exactly what Greek fire (an incendiary that could float on water) was made of, and nobody knows how they produced Damascus steel.

Some technologies got lost during the dark ages, and weren’t rediscovered until a thousand years later during the Renaissance. The ancient Greek Antikythera mechanism predicted astronomical events with a sophistication unrivaled until the 1300s, and the Romans built their aqueducts to a level of precision that wasn’t equalled until the modern age.

And there are some technologies that we’ve lost simply because they no longer seem useful. Canning was only invented in 1810, after Napoleon offered a large financial reward to anyone who could more effectively feed his troops. Before that, people preserved meat by drying it, smoking it, or home-curing it with nitrates. People preserved vegetables using lacto-fermentation, a process which may be more nutritionally beneficial than canning is. Who knows what other food preservation techniques we’ve lost, simply because they seemed useless compared to modern technology?

We will never know how much has been lost to the vaults of history. We only have these few scraps that have come down to us through the archaeological or literary record. How much has simply disappeared, never to be seen again? If you think that nothing of real substance could be lost so easily, consider that for thousands of years, everyone simply forgot that the Sumerians existed.

X.

So if we value knowledge, what can we do to prevent its loss in the event of an apocalypse? How can we ensure that as many books as possible are carried into the future, and that people retain enough understanding to use the knowledge that they store?

Well, for one thing, we can teach people that knowledge is sacred, and that without knowledge, we could not have the way of life that we currently enjoy. We can teach people that knowledge is what lifted us up out of past ages, what allowed us to create civilization. We can teach people to hold onto knowledge even in the face of calamity.

But we can also write books with the fragility of knowledge in mind. We can write books that store the essential knowledge of our civilization, and we can write them in a way that makes them accessible to the people of the future.

In Finland, there is a book called the Taitokirja (“skill book”), whose pages contain detailed instructions on how to build various tools and perform various tasks. If I remember correctly, the tools range from simple wood and stone contraptions to modern industrial machinery. The book is intended to allow people to teach themselves these skills, in the event that they are forgotten.

I hear that there is a similar book in English, The Knowledge: How to Rebuild Civilization in the Aftermath of a Cataclysm by Lewis Dartnell. But I haven’t seen this book, so I can’t tell you exactly what it contains.

This is not a book, but the Global Village Construction Set contains instructions for building machines that can be used to bootstrap our way back to modern society in the event that our technology is lost.

On a different note, it is eerie to contemplate this inscription that was written for people 10,000 years in the future, warning them of radioactive waste. There’s no reason to believe that people in that time period would understand what radiation is, so they hired anthropologists to word the message in a way that would ward off people from various cultures. And in case the inscription itself was unreadable, they designed the landscape to be foreboding and to contain ominous forms that would threaten people away from the area.

I recently read the classic novel A Canticle for Leibowitz, which centers around an order of monks, living in a post-apocalyptic future, whose holy task is to collect and preserve any textual remnant of the past. That’s the sort of reverence that you need for preserving knowledge — meticulously copying ancient texts, even if you don’t understand what they mean, in hopes that they’ll be valuable to people of the future. Or simply because the texts themselves, as relics of the past, are sacred.

XI.

I have no real conclusion to this post. I simply wanted to convey the awe I have felt ever since I realized how precious our knowledge is, and how easy it is to destroy. I hold each book with reverence now, and I look at our world with a new sense of wonder. We are living in an era of unprecedented technology. Will our society continue to grow and expand? Or will it collapse, the way so many societies have before, leaving us to pick up the broken pieces from the ruins of our world? And if our society does collapse, how much of what we hold dear — our technologies, our philosophies, our novels and legends and stories — will be forgotten so entirely that future societies will never even know they existed? It makes me want to treasure the knowledge we have, for as long as we have it.

Acknowledgements

I’ve been obsessed with this topic for over a year now. Thanks to everyone at work who has put up with my incessant conversations about it. Particular thanks to my friend Kirby, who sent me many of the links included here.

I haven’t read any of these yet, so I can’t tell you whether they’re any good, but people have recommended the following books about the fragility of knowledge:

  • How the Irish Saved Civilization, by Thomas Cahill
  • 1491: New Revelations of the Americas before Columbus, by Charles C. Mann
  • Bread, Wine, and Chocolate: The Slow Loss of Foods We Love, by Simran Sethi
  • The Wake, by Paul Kingsnorth

For a visceral understanding of how much a culture can change in a few thousand years, and how much knowledge can be lost, I highly recommend the novels Riddley Walker by Russell Hoban and Engine Summer by John Crowley. Both are set in the post-apocalyptic future, and both contain a lot of very reasonable, but completely incorrect, misinterpretations of the artifacts of our time.

(If you have other books to recommend, please leave a comment! I would love to add more books to this list.)


Shattered Dichotomies

A couple years ago, I was browsing Facebook, and I stumbled across this quote from Marvin Minsky:

Could Computers Be Creative? I plan to answer “no” by showing that there’s no such thing as “creativity” in the first place. I don’t believe there’s a substantial difference between ordinary thought and creative thought.

I found this very interesting, because at the time, I was reading The Way We Think by Gilles Fauconnier and Mark Turner, and they also claimed that we use the same cognitive mechanisms for ordinary thought as we do for creative thought. Except instead of concluding that creativity doesn’t exist, they used this to argue that all thought is creative.

So which is it?

I think the answer is: the question is meaningless.

As a society, we rely on this conceptual distinction between ordinary and creative thought. Ordinary thought is the kind we do all the time, when we’re thinking about our shopping lists or putting gas in our cars. Creative thought is rarer; it appears in moments of inspiration, and results, perhaps, in art or poetry.

Or at least, that’s what our folk models lead us to believe. Our culture has an almost mystical view of creativity, where only a few gifted souls are able to produce art. And we look on them with a sort of reverence, thinking of them as different, lost in their own rarefied realm of shape and color and rhyme. Artists are sensitive, our folk models tell us. They think in a different way than other people.

And we use this model when we’re making inferences. When an artist forgets to pay the rent, we assume she’s so lost in creative thought that she can’t be bothered with earthly concerns. We view her disorganized nature as intimately tied to her artistic abilities, and we forgive her accordingly. But when a dental hygienist forgets to pay the rent, we just assume she’s lazy, and we wonder why she can’t keep track of what day it is.

I could go on for pages talking about the difference between “ordinary people” and “creative people” in our culture’s folk model of the world. But the important thing is, we view them as two separate groups of people. We view ordinary and creative thought as two separate phenomena. And we use this distinction when we’re reasoning about the world.

So when someone comes in and says “ordinary thought is the same as creative thought”, what do we do? How does this affect our reasoning process?

Well, if we really believe that these two things are the same, then we should throw out the dichotomy altogether, since it means our whole framework is wrong. It means there’s no such thing as “ordinary thought” or “creative thought”, there’s just “thought”, and so we can’t trust either of the original categories for inference.

But in practice, that’s not what happens. In practice, we keep the dichotomy, but resolve everything to one side of it or the other. We either decide that “all thought is creative”, in which case all thought is special and all people are artists in some sense, or we decide that “no thought is creative”, in which case all thought is ordinary and artists aren’t any different from the rest of us after all.

And this happens all the time, with all sorts of shattered dichotomies.

For instance, back in high school philosophy class, I used to argue that “all people are selfish”. If you’re hurt, and I go to help you, it’s not because I’m altruistic. It’s because the sight of you in pain causes me to feel pain, and I, selfishly, want to relieve my own pain (or I want to avoid the guilt I’d feel for not helping). Similarly, if I give you a gift, it’s not because I’m altruistic; it’s because I selfishly want the pleasure and satisfaction that comes from gift-giving.

In high school, I thought this was a great argument. As an adult, I roll my eyes. It’s not that the argument is wrong, per se; based on the definition of “selfish”, it really is possible to classify all actions as selfish. I just don’t think it’s useful. Our folk concepts of “selfless” and “selfish” might be fuzzy and imprecise (as all concepts are), but they help us navigate a complicated world. When you realize that your friend Mike is selfish, you might decide to hang out with him less, or to avoid doing him favors because you know they won’t be reciprocated. And when you’re deciding whether to give your friend Steve a ride to the airport, you might agree to do it, because you don’t want him to think you’re selfish.

(Though maybe the shattering of this dichotomy can be useful for some people! If someone suffers from scrupulosity, and is wracked with unnecessary guilt that they’ve chosen the selfish option too often, then completely removing the distinction between “selfish” and “selfless” could be exactly what they need.)

Another shattered dichotomy I’ve encountered is the people who argue “A city is just as natural as a pristine forest, because cities were made by humans, and humans are part of nature. A city is just as much a natural structure as a bird’s nest or an anthill.” I neither agree nor disagree with this argument; it’s really just a matter of what you want the concepts to mean. And that, in turn, will depend on what you’re using them for. A lot of people (myself included) find natural landscapes beautiful, but also find industrial complexes ugly. And, while I’ll always probably find refineries ugly on a visceral level, this argument helps me appreciate them as part of the ecosystem of human activity, which I do in fact find beautiful. So in that particular instance, I appreciate the shattering of the dichotomy. But when someone says “pollution in the Shenandoah river isn’t a big deal, because industrial waste is just as natural as fish poop”, then I’m going to object, because they’re relying on the standard inference “natural => harmless”, and they’re trying to get you to classify pollution as natural so you’ll think of it as harmless as well.

Anyway, there’s no bigger point to this post. I just find this to be a really interesting cognitive phenomenon, both in terms of how human concepts work, and how we use them in framing and rhetoric.


You can change people’s minds without changing their beliefs.

(This is not part of the Postrationality series. It’s just an isolated thought that I wanted to share.)

In order to change someone’s mind, you don’t have to change their beliefs. You just have to change their associations.

Let me unpack that a bit. By “change someone’s mind”, I mean change it in a way that affects their actions. In the rationalist community, we tend to see beliefs as the be-all and end-all of decision making. Based on our beliefs, we should choose our actions to maximize expected utility, and that’s all there is to decision-making. But in practice, beliefs are only part of our reasoning and decision-making processes.

Let me give you a (pretty obvious) example of how we can fail at decision-making despite having correct beliefs. Suppose your friend is coming over tonight, and you’re planning to make dinner. You know that your friend is a vegetarian, but when you’re at the supermarket, you forget this and buy chicken. This mistake leads to a loss of utility, since either you serve your friend chicken for dinner, or you eventually remember and have to run back out to the store. In either case, a failure of reasoning occurred, and it led to a loss of utility. But the problem here wasn’t with your beliefs; you knew about your friend’s dietary preferences. The problem was with your memory, and which beliefs you actually used when you were making the decision.

So that’s the point I’m trying to get at here: your decision doesn’t just depend on your beliefs; it also depends on which specific beliefs you actually use when you’re deciding. And we can’t just use every belief, because there are too many to reason with efficiently. So we have to do approximate inference, and restrict ourselves to a subset of our beliefs.

Fortunately, most beliefs will be totally irrelevant to a given decision. If you’re choosing what to buy for dinner, then it doesn’t really matter that DNA is stored in the nucleus of the cell. This means that in order to make good decisions, you need to figure out which of your beliefs are most relevant, and use those and only those when reasoning. In the example above, where the person bought chicken to serve to a vegetarian friend, that was a failure at retrieving and using a relevant belief. In general, when you forget something important, you are failing to retrieve a relevant belief.

This is where associations come in, because we use them in our belief-retrieval systems. That is, let’s say your vegetarian friend’s name is Steve, and he plays in a metal band, he frequently wears mismatched socks, and one time he went bungee jumping off the Eiffel Tower. These are all things you associate with Steve; that is, when you think of Steve, these are the facts that might spring to mind. If we want to get slightly more formal, we can imagine that the mind contains a network of facts/entities/ideas, and that every pair of these has a strength of association between them. These strengths will be mediated by context, so that in the context of cooking Steve dinner, your association score might grow stronger to facts about his food preferences, and weaker to facts about his socks. This means that in order to remember that Steve is a vegetarian, you either need a strong base association score between “Steve” and “vegetarian”, or you need the context of cooking Steve dinner to increase the score enough that it passes some threshold of relevance.
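
To make that slightly more concrete, here’s a toy sketch of the retrieval idea: each fact carries a base association strength to a cue, the current context boosts or suppresses some of those strengths, and only facts above a relevance threshold get used in reasoning. Every name, number, and threshold below is invented purely for illustration; this isn’t meant as a model of how the brain actually does it.

```python
# A toy version of context-mediated belief retrieval. All values are made up.

base_associations = {
    "is a vegetarian": 0.3,
    "plays in a metal band": 0.7,
    "wears mismatched socks": 0.6,
    "went bungee jumping off the Eiffel Tower": 0.65,
}

context_boosts = {
    "cooking Steve dinner": {
        "is a vegetarian": 0.5,          # food preferences become more salient
        "wears mismatched socks": -0.3,  # ...and the socks less so
    },
}

def retrieve_beliefs(context: str, threshold: float = 0.75) -> list[str]:
    """Return the facts whose context-adjusted strength crosses the threshold."""
    boosts = context_boosts.get(context, {})
    return [
        fact
        for fact, strength in base_associations.items()
        if strength + boosts.get(fact, 0.0) >= threshold
    ]

# In the dinner context, "is a vegetarian" scores 0.3 + 0.5 = 0.8 and gets
# retrieved; with no relevant context, nothing crosses the threshold, and
# you come home with chicken.
print(retrieve_beliefs("cooking Steve dinner"))      # ['is a vegetarian']
print(retrieve_beliefs("browsing the supermarket"))  # []
```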

This is why, if you want to change how someone acts, you don’t need to change their beliefs. You just need to change their associations. Change which facts they (subconsciously) decide are relevant to the situation. Change what comes to mind when they think of a person or organization. The media does this all the time; it doesn’t even have to lie. It just has to broadcast information selectively. Suppose there’s a politician running for office, Senator Dick Head. You know that Senator Head once donated $5,000 to protecting the short-snouted snail, a cause that is dear to your heart. But he also cheated on his wife, and you find this morally repugnant. The media doesn’t care about the short-snouted snail, so it never reports on his donation. But the news channel you watch is constantly telling you what a horrible, awful cheater Senator Head is. So your association between “Senator Head” and “cheated on his wife” gets stronger, while your association to “cares about the short-snouted snail” remains weak. This means that by voting time, you are truly disgusted with Senator Head, and you vote for his competitor, Congressman Mike Rotch, instead.

So, in conclusion, reasoning is not just about which beliefs you possess. It’s also about which beliefs you actually use during a specific reasoning task.  Thus, if you want to change someone’s mind, you don’t have to change their beliefs.  You just need to change which beliefs they’re likely to use when reasoning.

Note: this post is not science. I cannot cite research that supports anything I just said (though I do think it’s reasonable, or I wouldn’t have written it). And I don’t know any mathematical models that reason by choosing beliefs according to strengths of association. I do know some researchers are working on how to choose which beliefs to use, but I have no idea how they’re going about it. So please don’t take anything I’ve said here as scientific fact. This is just informed speculation.


Postrationality, Table of Contents

A couple of weeks ago, Scott Alexander posted a map of the rationalist community, and much to my delight, I’m on it! Specifically, I’ve been placed in the country of Postrationality, alongside Meaningness, Melting Asphalt, Ribbonfarm, and A Wizard’s Word. This is truly an illustrious country, and I’m honored to be a member of it.

But anyway, as a result of this map, a lot of people have been asking: what is postrationality? I think Will Newsome or Steve Rayhawk invented the term, but I sort of redefined it, and it’s probably my fault that it’s come to refer to this cluster in blogspace. So I figured I would do a series of posts explaining my definition.

As you might imagine, postrationality has a lot in common with rationality. For instance, they share an epistemological core: both agree that the map is not the territory, and that concepts are part of the map and not part of the territory, and so on. Also, the two movements share some goals: both groups want to get better at thinking, and at achieving their object-level goals.

But the movements diverge in the way that they pursue these goals. In particular, rationality tends to give advice like “ignore your intuitions/feelings, and rely on conscious reasoning and explicit calculation”. Postrationality, on the other hand, says “actually, intuitions and feelings are really important, let’s see if we can work with them instead of against them”.

For instance, rationalists really like Kahneman’s System 1/System 2 model of the mind. In this model, System 1 is basically intuition, and System 2 is basically analytical reasoning. Furthermore, System 1 is fast, while System 2 is slow. I’ll describe this model in more detail in the next post, but basically, rationalists tend to see System 1 as a necessary evil: it’s inaccurate and biased, but it’s fast, and if you want to get all your reasoning done in time, you’ll just have to use the fast but crappy system. But for really important decisions, you should always use System 2. Actually, you should try to write out your probabilities explicitly and use those in your calculations; that is the best strategy for decision-making.

Postrationality recognizes that System 1 and System 2 (if they even exist) have different strengths and weaknesses, and what we need is an appropriate interplay between the two. Postrationality understands that emotions and intuitions are often better at decision-making than explicit conscious reasoning (I’ll discuss this in more detail in the second post). Therefore, postrationality tends to favor solutions (magick, ritual, meditation) that make System 1 more effective, instead of trying to make System 2 do all the work.

Here are some other things that seem to be true of postrationalists:

  • Postrationalists are more likely to reject scientific realism.
  • Postrationalists tend to enjoy exploring new worldviews and conceptual frameworks (I am thinking here of Ribbonfarm’s “refactoring perception”).
  • Postrationalists don’t think that death, suffering, and the forces of nature are cosmic evils that need to be destroyed.
  • Postrationalists tend to be spiritual, or at least very interested in spirituality.
  • Postrationalists like (and often participate in) rituals and magick.
  • When postrationalists are trying to improve their lives/the world, they tend to focus less on easily quantified measures like income, amount of food, amount of disease, etc., and instead focus on more subjective struggles like existential angst.
  • Postrationalists enjoy surrealist art and fiction.

This may seem like a rather disjointed list, so one of the purposes of this series will be to show how these tendencies all fit together, and in particular how they all derive from the basic postrationalist attitude towards life.

My current plan is to include three posts in this series (which I’ll link to as they become available):

  • A post explaining the rationalist perspective, including the System 1/System 2 model of the mind, the need to overcome bias using our analytic reasoning skills, and a strange form of Bayesianism where people actually try to do explicit calculations with their subjective probabilities.
  • A post explaining why the rationalist perspective is misguided.
  • A post examining the attitudes held by the two communities. This will be the most important post, since at the heart of it, rationality vs. postrationality is not a factual disagreement, but a disagreement of attitude. I will try to show how the postrationalist attitude (one of accepting the world and our own humanity) gives rise to the bullet-pointed list of tendencies that I showed above.

As a final note, I should probably mention: this definition of postrationality is purely my own. In particular, it does not necessarily represent the viewpoint of the other Postrationalists on Scott’s map. So if you’re on that map, and you think the definition of postrationality should be different than the one I’m giving here, then I hope you will leave a comment and let me know!


Even the Ugliness of the Universe Is Beautiful

The universe is a chasm of inconceivable space, surrounding us dizzily from all directions. We are afraid of the distance between adjacent stars and we are afraid of the distance between adjacent atoms; any open space is a breeding ground for phantoms. When confronted with the unknown and the wild, we have two choices: to build strength enough to join the wilderness, to revel in its fathomless wonders; or to hide within our fear, to tear down everything we can’t control and build walls to insulate ourselves against the sky.

I am trying to follow the path of strength, but it’s such a steep and narrow road. I want to look at the universe unflinchingly, to meet the eyes of God and hold his gaze. But the two ravens, fear and desire, circle above me; they try to push me off the pathway into the endless black abyss.

Our eyes were not meant for the universe in its rawness. Cognitive science reduces the human mind to mechanical computation. Evolutionary biology shows us that everything we do is rooted in selfishness. Quantum physics is maddeningly impossible to interpret. If we dwell on these things too long, we may find ourselves swallowed by insanity.

Cognitive science was my own personal bane. I got caught in the trap of watching each of my thoughts unfold, seeing how the analogical links I made were shaping my understanding of the world. It became impossible to believe in any thought or reason I concocted, because I could easily see how each thought arose and how many alternatives were possible.

And so I was almost ready to turn back, to retreat to the ancestral forest and abandon my quest for knowledge. But now I understand: if the discoveries of science seem ugly, if they warp our minds into madness, it is only because this knowledge was not meant for Man. We are digging deeper into these questions than evolution has prepared us for, and we’re finding that the universe is stark and alien and Other. If I’m disheartened by the knowledge that I’ve gained, it’s because I have started to pierce through the veil of human illusions; I am starting to see the universe as it truly is.

And so I will continue on my quest, armed with this understanding: even the ugliness of the universe is beautiful; even my descent into madness is beautiful. We are dealing with cosmic mysteries that were not meant for the eyes of Man. I will climb this steep and narrow road, even though the abyss still yawns before me. For now, when I look into its depths, I see that it is full of stars.


Identity and Bureaucracy

I.

Lately, the internet has been awash with new gender and sexual identities. On the gender side, the strict dichotomy of male and female has given way to a proliferation of possibilities, including agender, transgender and gender fluid; these categories have entered the public consciousness to the point where Facebook recently changed the way it handles gender, allowing users to pick from fifty-six different options instead of just the usual two. As for sexuality, the choices are no longer limited to heterosexual and homosexual; the list has grown to include asexual, sapiosexual, and demisexual as well.

As usual when society changes, we see a lot of people lauding this trend as the next big step towards freedom, equality, and acceptance, and we also see a lot of people condemning this trend as a sure sign that society is headed straight to hell. Both views have their merits, and I don’t want to argue about which one is right. Instead, I want to ask a different question: why is society changing in the first place? What sorts of cultural and environmental pressures are causing people to be dissatisfied with their default genders?

All sorts of explanations have been proposed. One is that people have always longed for this freedom to choose their own gender, but up until now, society has been too bigoted and close-minded to allow it. Another explanation blames plastics and other industrial chemicals for interfering with our hormones, causing many people to feel at odds with their biological gender.

In this post I’d like to put forth another potential explanation, which is that our cultural obsession with fine-grained gender identities is a natural consequence of living in a rigid bureaucratic society.

II.

As Ribbonfarm explains, in order to function, bureaucracies require the world to be legible.

The idea of legibility is rooted in our human need for order. The neater and more organized the world is, the easier it is for us to process and interact with it. From Ribbonfarm:

In Mind Wide Open, Steven Johnson’s entertaining story of his experiences subjecting himself to all sorts of medical scanning technologies, he describes his experience with getting an fMRI scan. Johnson tells the researcher that perhaps they should start by examining his brain’s baseline reaction to meaningless stimuli. He naively suggests a white-noise pattern as the right starter image. The researcher patiently informs him that subjects’ brains tend to go crazy when a white noise (high Shannon entropy) pattern is presented. The brain goes nuts trying to find order in the chaos. Instead, the researcher says, they usually start with something like a black-and-white checkerboard pattern.

The idea of legibility is as follows: when a system is so complex that we can’t process it, we change that system to make it simpler. Ribbonfarm gives the example of “scientific” forestry:

The early modern state, Germany in this case, was only interested in maximizing tax revenues from forestry. This meant that the acreage, yield and market value of a forest had to be measured, and only these obviously relevant variables were comprehended by the statist mental model. Traditional wild and unruly forests were literally illegible to the state surveyor’s eyes, and this gave birth to “scientific” forestry: the gradual transformation of forests with a rich diversity of species growing wildly and randomly into orderly stands of the highest-yielding varieties. The resulting catastrophes — better recognized these days as the problems of monoculture — were inevitable.

Bureaucracies are known for being rigid, dehumanizing, soul-sucking things, and it’s easy to see how legibility is responsible for this. Every piece of paperwork makes the world more legible, by distilling the complexity of our lives down to a few discrete fields. When you apply for a job, your application will be reviewed by some guy hunched over his desk, reading through 500 similar applications while drinking his third cup of coffee that morning. He doesn’t care about you as a person, in all your glorious uniqueness and complexity. He just wants to get through your application as quickly as possible. The job application form makes his life easier because he can see at a glance where you went to school, what your previous work experience is, and so on. It lets him decide very quickly whether you’re qualified for the job. The paperwork makes you legible to him.

But it also means there’s no room for individual differences and special cases. If you never went to college and you have no work experience in that field, the guy at the desk might throw away your application, even if you’re self-taught and brilliant and you really would be the best person for the job. And thus all of us have learned: the system does not reward people who are brilliant and capable. The system rewards people who are brilliant and capable and willing to play by its rules. The system is dehumanizing because it reduces a whole, complicated, intricate human being down to a handful of statistics.

As this example hopefully makes clear, bureaucracies don’t do this because they’re evil. They do it because they’re in a hurry and they don’t have time to consider the complicated details of everyone’s individual lives. Bureaucracies are like assembly lines. Before assembly lines, you had a whole bunch of craftsmen each making just a few items at a time. If a woodworker made chairs with two different shapes of legs, that was fine, because he could build two different chair-seats, one to go with each shape of leg. But in a factory, it’s essential that all parts be identical; that’s what makes the assembly line run quickly.

Analogously, when the world was smaller and less centralized, it used to be that a few individuals could gather together and work problems out for themselves. Because these people were operating on a small scale (three or four people rather than hundreds, thousands, or millions), it was possible to deal with each person at high resolution; it was possible to take everyone’s complex personalities into account when devising a solution. But a bureaucracy is dealing with thousands and thousands of cases at a time; it doesn’t have time to devise a unique solution for every individual problem. So instead, it pattern-matches each problem to some general class of problems, and applies a one-size-fits-all solution.

III.

Now let’s return to the original topic of the post. Why on earth would rigid bureaucracies cause people to develop new gender identities?

To answer that, let’s consider the following situation. Suppose you’re a grad student, and your life has been pretty stressful lately. Maybe your girlfriend just broke up with you, or maybe your mom is in the hospital. Whatever the reason is, you’re having trouble focusing on your schoolwork, and you decide you want to take a semester off. Well, if your school is anything like mine, then in order to take a semester off, you have to apply for a leave of absence, which they’ll grant if you have a medical condition or a family hardship, or if you need to do military service. Family hardship covers the “mom is in the hospital” case, but what about the guy whose girlfriend just broke up with him? Well, it turns out there’s a medical condition that corresponds to his problems, and it’s called “depression”. So all he needs to do is go to a doctor, explain what’s going on in his life, and get a diagnosis.

I have a lot of problems with how our culture views “mental health issues”, but that’s not the point I’m trying to get at right now. I don’t want to debate whether this hypothetical student is actually depressed, or whether depression is actually a medical condition. Instead, I want to point out that pasting a label of “depression” onto this guy’s life didn’t change his situation in any way. He was just as depressed about his breakup before a psychologist filled out an official-looking form as he was afterwards. And yet, prior to receiving that label, this guy was not qualified for a leave of absence. After receiving the label, he was. The label didn’t change his problem; it just made it visible to the bureaucracy.

The point I’m trying to get at is that our bureaucratic society is sending us a powerful message: until your problem has a name, it doesn’t exist.

IV.

And this is grad school, where people are treated as individuals to the point where every student is personally mentored by a successful researcher in the field. Grad students have it easy compared to the elementary, middle, and high school students at your average public school. If a 10-year-old with Asperger’s gets overwhelmed by all the noise and commotion in gym class, the gym teacher can’t just notice this and allow the student to sit out. The parents need to take their child to a psychologist, procure a diagnosis of autism, and bring this to the school; only then can any action be taken.

I don’t mean to say that this system is all bad. If the gym teacher is ignoring the problem, a note from the doctor can force her to take it seriously. And on the other hand, requiring a doctor’s note keeps the kid from faking an illness, or the teacher from playing favorites.

But this system does have its consequences. Once the child is labeled as “autistic”, the diagnosis can never be taken back. It will color how the parents view their child’s behavior, and ultimately influence how the child views himself. This makes a diagnosis of autism different from, say, a diagnosis of diabetes. Both are permanent conditions, and knowing about either of them will change how the child interfaces with the world. But the symptoms of autism, unlike those of diabetes, cover aspects of one’s personality and preferences that have traditionally been included as part of the self. This makes autism compelling as an identity label in the way that diabetes is not.

V.

Psychiatric diagnoses are everywhere these days. We are faced with a generation of children and young adults who have received these diagnoses, and who see them as a fundamental part of their identities. And it was the act of getting diagnosed, the act of having these identities recognized officially, that allowed their individual differences to finally be taken seriously.

In our label-driven society, receiving the right classification is essential for ensuring that you are treated in a manner that befits you as a person. That’s why it’s important to find a set of labels that fit you well, and to make sure that those labels are accepted by society at large.

So is it any surprise that people are seeking out finer-grained gender identities, ones which describe their personalities better than “male” or “female” could? Is it any surprise that people consider these labels so incredibly important?

My prediction is that the new gender identities will be embraced most strongly by people who have also strongly embraced one or more psychiatric diagnoses. And this prediction seems to be borne out by the number of people on the internet who introduce themselves by some combination of gender and psychiatric identities. “Hi, I’m a non-binary asexual submissive with anxiety and depression.”

VI.

So that’s my answer. Why do we create these identity labels, these ever-finer-grained descriptions of who we are? Maybe it’s because we live in a rigidly bureaucratic society, where our individual differences will only be noticed if they have a label attached. Maybe it’s because we’re used to having our dimensionality reduced down to a few searchable keywords. Maybe we’re trying to make ourselves visible by making ourselves legible. Maybe we, as a culture, have internalized the idea that if something doesn’t have an official label, it might as well not exist.


Sneaking Past the Gatekeeper

Over thinking, over analyzing separates the body from the mind,
Withering my intuition, leaving all these opportunities behind.
— Tool, “Lateralus”

Overanalysis

I think most readers of this blog will agree that analytical thinking is a very good thing. Without careful, analytical thinking, it would be difficult for us to reason about the world or figure out which actions to take. Also, thinking analytically is just plain enjoyable; that’s why I’m in academia, where I get paid to do it.

And yet, I think most of us can also agree that it’s possible to be overanalytical: thinking too much can drain our experiences of emotional vividness and make them feel less real. In this post, I’d like to explore overanalysis a bit. Why is it that analytical thought can dissociate us from the world and prevent us from really experiencing our lives?

One answer that I’ve heard, particularly from the mindfulness meditation people, is that thinking is just really distracting. If we get caught up in our thoughts, it means that our attention is directed inwards at the contents of our mind, instead of outwards at the world around us. From what I understand, the point of mindfulness meditation is to quiet our thoughts enough that some of the raw power of experience can get through. Then, we can live in the moment instead of getting caught up in memories of the past and worries about the future.

I think this is part of the explanation, but not all of it. It’s not just that experience and analysis are competing for our limited attentional resources. I think it’s much more deliberate than that, and in fact one of the purposes of analytical thinking is to form a protective barrier that shields us from the full emotional impact of our experiences.

I’ll give some examples of this, but first, it should come as no surprise that we would build a shield against powerful emotions. After all, emotional experiences are dangerous: they have the potential to change and transform us dramatically, and they have far-reaching impacts on our thoughts and behavior. That’s why we all know to be careful of things, like advertising and political propaganda, that are designed to appeal to our emotions.

So let’s take advertising as an example. We all know that ads are trying to manipulate us. By showing us pictures of successful, attractive people using their product, they try to persuade us that buying it will bring us friends, sex, and popularity. In order to resist the ads’ allure, we are taught from an early age to use our analytical faculties, or “critical thinking skills”, when dealing with them. By deconstructing advertisements to understand how they’re trying to manipulate us, we can defuse and deflect their emotional messages before they can get through. This is what I mean when I say that analytical thinking can act as a shield.

It seems pretty clear to me that we should be wary of advertisements and their emotional hold on us. But I think many intellectuals extend that suspicion to any stimulus that appeals to us on an emotional level, even if it might be beneficial. We’re overly defensive; we raise our shields even when it’s not appropriate.

A good example of this is ritual. Many intellectuals seem wary of ritual because of its emotional hold on us, even though ritual is known to have very positive effects. Ritual creates group solidarity, and can help us to emotionally reinforce our existing beliefs and goals. What this feels like from the inside is often euphoria and a heightened sense of connection to those around us.

Now, sometimes we have good reasons to avoid a specific ritual. A religious ritual, for instance, might draw on and strengthen beliefs that we reject. But ritual is much broader than religious ceremonies. In college, for example, I attended a CS competition that could easily be described as ritualistic. Each school sent a team of ten to the weekend-long event, typically dressed up in some costume related to that year’s theme. There were parties with dancing, and whole rooms of people singing bawdy songs together, and a giant trophy cup for the winners that we all drank beer out of.

There was nothing about this ritual that I might rationally want to avoid. “CS student” was a huge part of my identity in college, so I felt like I belonged there, and had a lot in common with the other participants. And there was nothing about the specific rituals that bothered me; actually, there are few things I love more than shouting bawdy songs along with hundreds of other computer scientists. And yet, when I first attended this competition, I felt a huge amount of resistance towards just letting go and participating in the ritual. In some sense, it required surrendering myself, relinquishing control to the collective energies surging through the room, and letting myself be swept up in the excitement of the event. It involved letting down my normal, analytical defenses against powerful emotional experiences. Once I did manage to let go, I had the time of my life. But I can still remember just how difficult it was to do it.

I suspect a lot of rationalists feel this way when encountering rituals. Intellectually, we may understand the benefits of ritual, but emotionally, we have trouble letting go. In a brilliant LessWrong comment, Viliam Bur wrote: “A ceremony is a machine that uses emotions to change people. For some people this may be a sufficient reason to hate ceremonies. Because they are typically designed by someone else, and may support goals we don’t agree with. Or the idea of some process predictably changing my mind feels repulsive.” As rationalists, we tend to be wary of anything that changes our mind, but isn’t backed up with a rational argument. We want to make sure our beliefs are justified, so when we encounter things like ritual, which have a profound emotional impact on us but which are difficult to understand rationally, we very naturally approach these things with caution and even suspicion. But if we avoid rituals, or block out their emotional effects, then we are ignoring a very powerful tool, which could instead be used to our advantage.

Maybe ritual is a bad example to use here, if I’m trying to explain why overanalysis can be a problem, because I think a lot of my readers will say “yep, ritual is dangerous; I’m glad I have critical thinking skills to keep me from falling into its traps”. But I would expect that even the most wary of rationalists wants to let some things affect him emotionally, like powerful music or well-written fiction. The emotional content is precisely what makes these things enjoyable, but overanalyzing them can often severely diminish their emotional effect.

A good example of this is high school English classes. I’ve heard lots of people say that they might have actually enjoyed the books they had to read in high school, if only their teachers hadn’t forced them to dissect every little detail of the author’s symbolism.

I suspect that a lot of us overanalyze our lives the way English teachers overanalyze books. Many of us do this for the reason I described above: we’re suspicious of things that affect us emotionally, so we make sure to keep our critical thinking skills turned on constantly. And some of us do it just out of habit; we work in professions that require us to think analytically for eight hours a day, and once we get home it’s hard to turn that off.

As I said above, some amount of critical thinking is necessary to prevent us from getting seduced by advertisements, or otherwise taken advantage of. But I think that we tend to err on the side of too much analytical thinking, and don’t spend enough time just allowing ourselves to experience our lives. This is a big problem, because overanalysis squeezes our experiences dry of emotional vividness, making life drab and dreary.

One solution to this problem is to learn how to turn off our analytical thinking minds once in a while. But this can be quite difficult, especially for those of us whose professions train us to think analytically all the time. Personally, I’ve been trying for years and years to quiet my mind and just “live in the moment”, and I still find this incredibly difficult.

But fortunately, I think there’s another solution. There are some emotional stimuli which are so subtle that no matter how much we try to analyze them, they still manage to slip through our defenses and affect us emotionally. Even the most intransigently analytical among us can take refuge in stimuli such as these. They are the stimuli that are able to sneak past the gatekeeper.

The Gatekeeper

I propose the following metaphor: we can think of analytical reasoning as a gatekeeper that prevents ideas or experiences from entering the inner courtyards of our minds. When we use our analytical thinking “the right amount”, or apply it to the right things, then the gatekeeper serves us well. It keeps out nasty travelers, like advertisements, that want to pollute the inner places of our minds, or scatter the seeds of weed-like desires in our imaginations. And it lets in the nice travelers, like music and literature and perhaps even ritual; these travelers bring with them gifts that enrich our inner courtyards. But when we overanalyze, this is like having an overzealous gatekeeper, a paranoid and suspicious guard that turns away all guests, even the ones who are clearly carrying invitations. If you have a gatekeeper like this, then no new ideas or experiences will be able to visit your inner courtyard. It will grow dry and barren and the only thoughts you will have will be old, tangled, gnarled ones that circle through your mind like tumbleweed.

If you have a gatekeeper like this, then the only experiences that will be able to get through are those that can sneak past the gatekeeper. At the risk of alerting your (perhaps hypervigilant) gatekeeper to these usually-invisible travelers, I would like to spend the next section exploring what types of stimuli are able to sneak past.

Sneaking Past

It should be fairly straightforward to characterize the stimuli that can sneak past the gatekeeper: if analytical thinking blocks out our emotional experiences, then the things that affect us emotionally will be the ones we don’t think analytically about. Of course, in order for these things to affect us at all, some part of our mind has to process them; I’ll call that part the intuitive or subconscious mind.

So, the things that can sneak past the gatekeeper are the ones we interpret using our intuitive rather than analytical minds. I’ll try to give you some examples.

(1) Narrative

Narrative is something we tend to interpret more intuitively than analytically. This may be part of why stories have such a powerful effect on us. We read stories for entertainment, but as we read them, we are unknowingly learning more about how the world works. Stories, even fictional ones, have a kind of truth to them, because we can relate them to our own experiences, and they thereby give us insight into our lives.

The less we analyze stories, and the more we just allow ourselves to experience them intuitively, the more powerful their effects will be. The deepest emotional experiences will come when we blindfold the gatekeeper, suspend disbelief, and allow the story to engulf us.

Relatedly, here is a beautiful quote from David Chapman’s Buddhism for Vampires, where a character within the frame story explains why we might tell frame stories in the first place:

[W]hen you listen to a story, you enter a new world, created out of words. And you are willing to let the world be as the teller tells it. But that can only go so far, and if the world does not make sense, you will interrupt the tale and argue. By putting the story within a story, the teller of the inner story becomes only a character himself, so you cannot argue with him. Then the inner story can be less realistic. If you wrap it in enough layers of indirection, you can tell a completely ridiculous story and have it seem somehow believable.

And then, a story always works some transformation in the hearer. It is not ‘information’; it works on the heart. Although it is made of words, the true meaning of a story cannot be put in words. So the story teller has to stop the hearer from using their ordinary mind to listen. When the teller says ‘once upon a time…’, the listener knows it is time to listen with the heart. But the listener’s mind may still get in the way. To confuse ordinary mind, the story-teller wraps worlds in worlds, until the hearer gets lost, and can listen without judgment.

(2) Mythology

Mythology is a kind of narrative, and so what I said about stories is true for myths as well: they affect us deeply because we interpret them more intuitively than analytically, and they teach us about our lives because we’re able to connect them to our own experiences. But I think mythology deserves its own category, because myths communicate in archetypes and imagery, which do a particularly good job eluding our conscious minds. We can usually understand a narrative analytically if we try hard enough, distilling its plot down into themes and moral lessons. But archetypes are harder to interpret. Submersion in water might symbolize rebirth, for instance, but we’re not usually conscious of this fact as we read the story of Noah. Some deep, intuitive part of our mind does understand this symbol, however, and so the myth is able to convey its message in terms that only the subconscious mind can understand. It bypasses the conscious mind altogether and speaks directly to our intuitions.

Sometimes I think all myths have two parts: a comprehensible narrative, which keeps the conscious mind occupied, and archetypal imagery, which carries the true meaning of the story without us realizing it.

It’s also worth mentioning surrealist art here, since much like mythology, surrealist art communicates in symbols, archetypes, and dreamlike imagery.

(3) Sigils in Chaos Magick

Many techniques in magick are specifically designed to sneak past the gatekeeper and elude the conscious mind. I’m going to describe one such magickal technique, called a sigil, but first, let me try to explain what magick actually is. There are a bunch of different interpretations of magick, including ones that treat gods, angels, and demons as real, but the one I’ll focus on here avoids any supernatural explanations. It says that magick provides a set of techniques for altering our minds, to make them better at doing what we want them to do. According to this interpretation, then, magick is a lot like rationality, except that rationality typically focuses on altering the conscious mind, while magick focuses on altering the subconscious mind. Since magick and rationality are dealing with two different parts of the mind, they naturally use very different toolkits. Magick’s toolkit typically involves arcane rituals, complex webs of symbols, magickal objects, and the like.

Personally, I’ve dabbled in Chaos Magick, which focuses on techniques rather than beliefs, and encourages practitioners to choose whatever worldview suits their purposes best at any given moment. The standard text on Chaos Magick is a book called Liber Null, by Peter Carroll. Interestingly, the first chapter is basically just meditation exercises: since the goal is to alter your mind, you first need to control your mind. Once you’ve managed to do that, you can begin making sigils.

Sigils are a method of planting a suggestion in your mind, and then forgetting you planted it there. You have some wish or desire, and so you make an image, called a sigil, representing that desire. Then, you deliberately forget the connection between the image and the desire. Or at least, you forget it consciously. Your subconscious mind remembers, and so you look at the sigil frequently to remind your subconscious to carry out its appointed task. Thus, the sigil bypasses the conscious mind and its gatekeeper, and allows the subconscious mind to operate without interference.

In Liber Null, in the section on sigils, I found the following paragraph on the importance of eluding the conscious mind:

The magician may require something which he is unable to obtain through the normal channels. It is sometimes possible to bring about the required coincidence by the direct intervention of the will provided that this does not put too great a strain on the universe. The mere act of wanting is rarely effective, as the will becomes involved in a dialogue with the mind. This dilutes magical ability in many ways. The desire becomes part of the ego complex; the mind becomes anxious of failure. The will not to fulfill desire arises to reduce fear of failure. Soon the original desire is a mass of conflicting ideas. Often the wished for result arises only when it has been forgotten. This last fact is the key to sigils and most forms of magic spell. Sigils work because they stimulate the will to work subconsciously, bypassing the mind.

(4) Cognitive Science

(This is not an example, but an anti-example.)

I’ve been reading a lot about cognitive science lately, but for the longest time, I avoided studying it, since I thought it would be very dangerous. After all, the whole point of cognitive science, in some sense, is to use conscious, analytical reasoning to understand the workings of the subconscious mind. This does two things: it gives the conscious mind access to subconscious processes that are usually hidden, and it might actually interfere with the subconscious mind’s functioning.

Regarding the first: if we give the conscious mind access to subconscious processes, this is like strengthening the gatekeeper, or perhaps equipping him with better security tools. Now, in addition to eyes, he has security dogs and infrared scanners and so on, which means that much less can sneak through. The average person’s gatekeeper is not very well-trained at security, but the skeptic’s is, and the cognitive scientist’s even more so.

Regarding the second: studying cognitive science might actually alter the mind’s functioning. If this sounds odd, I think it’s because our standard cultural models for understanding science blind us to the possibility. The scientific mindset perpetuates a division between the self, who is doing the studying, and the other, which is out there, and must be studied. Even with all the reminders from quantum physics that observing something can alter it, we still think of the scientist and the object of study as distinct. But this is very much not the case in cognitive science, where the mind studies itself.

It’s not unreasonable to think that, by studying ourselves, we could alter ourselves in the process. Watching our own cognitive mechanisms tick requires a mental contortion of sorts, turning our eyes backwards into our heads to watch our thoughts as they unfold. Sometimes I fear that this vivisection of my thoughts will leave me unable to think, like pulling apart the fibers of a muscle as it’s trying to run. Could it be that studying cognitive science because I find the mind’s workings beautiful is like dissecting my own eyes to understand how they comprehend beauty, only to find that I have blinded myself?

I feel like studying cognitive science has already damaged me. I think it’s exacerbated my hyperanalytical tendencies, and armed my gatekeeper so well that even narrative and mythology can no longer make it through. As a result, the world has become hollow and meaningless; I’ve drifted towards nihilism.

So I find myself asking, how can I reverse the damage that cognitive science has done to me so far? And how can I continue to study cog sci without letting it destroy me? I’m drawn to the field like a moth to a flame. As usual, I think the answer probably involves learning meditation. That way, I could clear my mind of unwanted, overanalytical thoughts when necessary, but allow them at times when I’m actually studying cognitive science.

What kind of analysis is overanalysis?

Overall, I think this gatekeeper metaphor is pretty sound. But I’m worried I’m committing a dangerous oversimplification here: I’ve treated all types of analytical thought the same way, when in fact, some might lead to a deadening of experience, while others might not. As far as I can tell, the type of analysis that most drains the vividness from experience is a sort of “dissective”, deconstructive analysis, the kind that takes a third-person approach to a first-person phenomenon.

For instance, when I was a kid, and I got hurt, my dad would say to me, “Don’t worry, pain isn’t real; it’s just a nerve signal that your body sends to your brain.” This argument has all sorts of flaws (define “real”, for instance), but that’s not the point. The point is that when I was a kid, this argument worked on me. Somehow, thinking about the pain in that sort of detached, analytical way actually lessened its effect.

I suspect that in general, thinking about our own experiences this way dulls them. And thinking about other people’s experiences in this way probably reduces empathy. I seem to feel much more empathy for other people when I understand their actions in terms of their subjective experiences, instead of understanding their actions in terms of e.g. brain chemicals or evolutionary psychology. I’m talking about stuff like “she destroyed a bunch of his stuff because she was angry at him for cheating on her”, vs. “she destroyed a bunch of his stuff to disincentivize him from investing emotions and resources in other potential mates”.

So why does this kind of analytical thinking decrease empathy? This article gives one explanation: we have one brain circuit for empathy, and another for analysis, and the two inhibit each other’s activity. We can also think about it in terms of concepts activating one another: when we describe this woman’s response as “anger”, it activates the concept of anger in our minds, which activates the feeling of anger, leading us to empathize. The evolutionary psychology explanation, on the other hand, does not activate any of our emotions, so we don’t have an empathetic response.

(Yes, I recognize the irony of using a detached analytical explanation to explain why detached analytical explanations might be bad.)

A Final Thought

A final thought, before I conclude. This essay has played on a popular dichotomy which pits reason (and particularly analytical thinking) against intuition and emotion. It’s not unreasonable that this dichotomy should have arisen in popular thought. After all, strong emotions often prevent us from thinking rationally, and (as we have seen in this essay), thinking analytically can prevent us from feeling emotions.

But the popular dichotomy is harmful, because it suggests that reason and emotion cannot coexist, and so we need to pick one and completely ignore the other. Thus, many of us have opted to join “team reason” and ignore our emotions, while others have joined “team emotion” and refused to listen to reason.

What we really need is a balance between these two things. Reason and emotion are not opposites or enemies; we need both of them in order to function in the world. The problems arise when we favor one over the other. Too much emotion, and our actions will become erratic and irrational. Too much analytical thinking, and we’ll lose the ability to feel.

So I’ve written this post not because I think we need to abandon analytical thinking altogether, but because I think the readers of this blog probably err on the side of overanalysis, and need to be pushed in the other direction.

I suspect a lot of readers of this blog probably come from the LessWrong rationalist community. To their very great credit, the rationalists of LessWrong are not Spock-clones at all, and they fully acknowledge the need for balancing emotion and reason. They emphasize that rationality is about “winning” (that is, actually achieving one’s goals). Whatever method helps us achieve our goals, whether it’s emotional or rational, that’s the one we should follow. And on LessWrong there’s also a widespread recognition of the importance of emotion in our lives.

And yet, reading LessWrong and talking to members of the community, I often get the sense that while rationalists think it’s perfectly sensible to embrace emotion, they just don’t know how to do it. I should say “we”, because this description has definitely applied to me too, at many points in my life. It’s a constant struggle for me to embrace emotional experiences without trying to analyze or control them. I think that a lot of us, for whatever reason, have armed our gatekeepers very thoroughly, and it’s hard for anything to sneak through. And so I hope that this post, this metaphor, and these examples will help to give people who want to embrace their emotions a foothold into changing how they interface with the world.

In conclusion, then, I leave you with the following two pieces of advice:

  • Do not let your emotions drown out your ability to think.
  • Do not let your thoughts drown out your ability to feel.

Acknowledgements

Particular thanks to Justin and to Aaron for many enlightening discussions on this topic.


Open Source Mythology Project?

Before writing was invented, stories were transmitted orally. This means that myths and fairy tales evolved organically as they were passed from one storyteller to the next. Since there was no canonical version, storytellers were free to modify their tales to suit the occasion, the audience, or their personal taste. For this reason, the stories were very much alive, able to grow and change along with the culture. This allowed them to retain their relevance over long periods of time.

The written tradition has changed this, of course. A story, once set in writing, becomes official and canonical. In the case of religious texts like the Torah or Bible, this is because the text is taken to be the word of God, which needs to be preserved faithfully. In the case of modern fiction, it’s because the story is taken to be intellectual property, whose plot and characters belong to the single author (or small group of authors, or corporation) which created it. This contrasts with myths and fairy tales; since their origins are often unknown, we tend to think of them as written by and belonging to the entire community, to be retold and modified for personal use as people please.

It’s hard to think of modern strategies for writing, and attitudes toward creation, which resemble the oral tradition. One possibility I can think of is fanfiction, where although there’s a single canonical story, fans are able to modify it and use its characters and world in stories of their own creation. But in fanfiction, the original story always dominates; the modified versions rarely leave the enclave of the fandom or gain anything close to the same status as the original. Perhaps closer to the oral ideal of collaborative storytelling is Scott Alexander’s conworlding community, which has in fact generated a bunch of mythology. But if I’m understanding that community correctly, the members all write pieces of the story and sew them together like a patchwork quilt; thus it differs from the oral tradition, where a single story is told in many different ways by many different storytellers.

Of course, we still have all the original myths and fairy tales that have flowed down to us from antiquity, and we are free to revise and retell these as we please. The proliferation of modern adaptations ensures that these stories stay alive and relevant to the 21st century. But I would really like to see some new myths and fairy tales emerge organically through time as well, for instance myths that reflect our scientific worldview.

Fortunately, in modern times we have developed a method of collaboratively creating content, and that is the open source movement. An open source project seems like the perfect way to implement something resembling the oral tradition. It would allow people from all over the world, who may not even know each other personally, to develop a story together. And if I understand open source culture correctly (having never actually contributed to an open source project), the resulting program does not have an individual owner or creator, but is thought to belong to and have been created by the entire community. Also, version control systems like git allow variation instead of one single canonical version; different versions of the story can develop in separate branches.

So, does anyone else think this is a good idea? Would anyone reading this be interested in contributing to such a project? Note that there’s room for people with all sorts of talents, everything from developing the characters and plot to improving the wording of a near-final draft. One of the nice things about this approach is that you don’t have to be good at every aspect of storytelling in order to contribute.

So if you might want to join such a project, or even if you don’t want to join but still think it’s a cool idea, please leave a comment below! If enough people are interested, then we can commence with the storytelling.


Increased Choices and Existential Crises

It seems to me that the more choices we have in life, the more we will suffer from existential crises.

During existential crises, we find ourselves asking philosophical questions like “What is the meaning/purpose of life?”. But these questions often arise from more practical concerns, like “What should I do with my life?”. It’s through seeking answers to the latter question that we are led to the former. This explains why existential crises are particularly common during our early-to-mid twenties, when many of us are first forced to confront the question of what to do with our lives.

Up until our early twenties, life is laid out in a clear, straight line. If you are an elementary school student, your task in life is to prepare for middle school. If you’re a middle school student, your task is to prepare for high school. A high school student prepares for college. But once you get to college, the tree trunk ends and the choices branch out in all directions. Suddenly you need to pick a major and thereby decide what to do with your life. It’s only natural, then, that existential crises should begin to arise during college and soon after graduation.

I’ve been no stranger to existential dilemmas, and while trying to resolve them and determine what to do with my life, I’ve often bemoaned the profusion of choices spread out before me. After all, the more choices we have, the more difficult it is to pick between the competing options. So I’ve tended to attribute our society’s epidemic of existential crises to the number of choices we have available.

In this essay, I’ll examine our culture’s obsession with choice. Then I’ll explain why it’s based on unsound principles, and thus contributes to existential crises. Finally, I’ll explain how we use identity to cope with the overabundance of choices available to us.

Increased Choices

In terms of what to do with our lives, we seem to have more choices these days than ever before. In the past, the number of options was limited, both because society was simpler and because strict social stratification constrained the set of roles available to any single individual.

In the distant past, all societies were unstratified subsistence cultures where most of one’s time was spent hunting, gathering, or growing food. These societies might have had some gender-based division of labor, and perhaps there would be a few specialists such as shamans. But for the most part there were not many social roles to choose from, and the people in these cultures led very similar lives in terms of their daily activities.

As society complexified, division of labor increased, which also increased the available choices. Now not everyone had to be farmers; some people could be blacksmiths or carpenters or traders or statesmen. But the choices in Ancient Greece or Rome were still far more limited than the ones available today, as there were fewer professions to choose from. And in many societies, strict social stratification also limited the choices available to any individual person: for the most part, you took your father’s vocation, or followed some profession that was fitting for your social station.

Thus, compared to modern Western societies, past cultures gave people a far more restricted set of choices for what to do with their lives. It seems that in recent centuries, the number of choices has exploded, both because there are more professions to choose from, and because our liberal society tries to ensure that all of these choices are available to all people.

Tradeoffs

Past cultures, with their limited set of choices, presumably worked wonderfully for people who liked their allotted positions in society, but caused great inner conflict for those who felt themselves at odds with the position they were assigned. Our current system works wonderfully for people who have a clear preference for one of the many choices (or who just don’t care and are happy with wherever they end up), but it causes great inner conflict for those who are uncertain about which life-path to choose.

We tend to view an abundance of choices as a good thing, and even consider it a moral imperative to provide people with as many choices as possible. But I don’t share this moral sense; instead I get the impression that societies with different amounts of choice each have their own strengths and weaknesses, and that choosing between them is a matter of balancing tradeoffs.

Of course, it’s easy for me to say this from the comfort of my ivory tower. How can I evaluate the tradeoffs between different cultures when the only culture I’ve experienced is my own? Perhaps it’s utterly naive of me to think I could get along in a culture with fewer choices, given the intensity of my individualist tendencies. I’ve always adamantly done things my own way instead of following established rules or traditions, and it’s hard to imagine what life would be like if that option weren’t permitted to me. But increasing the number of choices isn’t the only way to make room in society for outliers. As long as a culture has some designated place for outliers, where they are respected as members of society (shamans are an example of this), then I’m not sure that increasing the choices of social roles is actually necessary for providing outliers with good lives.

My perspective on this issue seems to be fairly unusual. In order to understand why it might not be completely unreasonable, I’d like to look at some of the assumptions underlying the usual worldview, and the flaws in these assumptions. In particular, I want to examine the assumptions that lead us to view choice as a moral value.

Choice as a Moral Value

Most people I talk to seem to have strong moral intuitions that choice is important, and that denying it to people is wrong. In America, this viewpoint seems to be particularly common among liberals and libertarians. Lack of choice is seen as a great injustice, because it means that people can be forced into roles that are unsuited for them. The solution to this is to give people as many choices as possible (as long as these choices don’t violate even more basic ethical principles, like not hurting anyone). Choice is equated with freedom, and denying people choices is a matter of denying them freedom.

The importance of choice is apparent in many causes that liberals feel strongly about. The most obvious example is “pro-choice”: people support abortion being legal because they want to give women more choices for what to do with their bodies. And quite a few social justice/equality issues can be framed in terms of choice. Feminism increases the number of choices for people by removing gender-based division of labor: women shouldn’t be forced to be housewives, because many women find careers more fulfilling. Conservatives promote strong social norms about family organization (one man and one woman, until death do them part), but liberals encourage choice in family organization (gay marriage, divorce, polyamory): you pick whichever family style is right for you, whichever one makes you happiest.

Most importantly for the purpose of this discussion, we find that gender equality, decreases in social stratification, and an emphasis on happiness rather than prestige in choosing a career all lead to more choices in professions. In modern times, you certainly don’t have to pick the same job your parents did. And as long as you have the economic means, you are not in principle restricted to jobs associated with your social class. In practice, there’s still a ton of social stratification, hence the white/blue collar divide. But in an ideal liberal world, all of this would go away and everyone would be able to pick whatever job they wanted. And in this ideal world, no profession would be thought of as any better or more prestigious than any other; you simply choose the job that’s right for you. Your choice is evaluated on how well it fits your individual personality, rather than on its impressiveness or its ranking in some objective hierarchy.

Hence we get a lot of people asking “What profession is right for me? How do I choose?”

Choice and the Conception of the Self

The key assumption here is that this question has an answer, that there really is some profession that’s right for you. You seek the profession that’s in greatest alignment with your “true self”. Thus, the whole ideology surrounding increased choices rests on our understanding of the self.

If I had to define the “true self”, I’d say it’s the aspects of personality that persist over time; it’s the fixed, static, core components of who we are. It’s a sort of model of our own minds that we can draw on when making decisions and predictions of our future behaviors. In addition to assuming the existence of a true self, we also assume that any decision we make can either be in alignment with it or at odds with it (or be at some non-binary point between these two extremes). The individualist strives to “be himself” and “be true to himself”, to obey the impulses of his true self instead of just blindly following some path laid down by society.

These ideas aren’t completely wrong. I certainly don’t mean to claim that the self doesn’t exist, or anything like that. People definitely seem to have some innate personality that persists over time, and our culture’s conception of the “true self” is not an unreasonable model of this. And I can speak from experience that this innate personality can sometimes be so at odds with society that it causes conflict. I’m glad that our society gives us a lot of freedom to be ourselves. But I object to the assumption that we need to give people this freedom by increasing the number of choices available. Increasing choices often seems to make life more complicated, without providing a substantial benefit.

It’s important to realize that our personalities, preferences, and “true selves” are not simply things we are born with. Instead, they form out of the interaction between our environments and our innate predispositions. Our innate dispositions specify some possibilities for the kind of person we can be, and our cultures/environments also specify possibilities for the kind of person we can be. The person you end up becoming will depend on how your culture channels your specific predispositions. For instance, if you are born with an innate tendency towards being aggressive and competitive, you might become a warrior in one culture and a Wall Street banker in another. What you end up being depends on what your culture values, since your culture’s values get incorporated into your self. Your culture, as well as your innate tendencies, shape your desires about what your life should contain.

If we lived in a culture with a completely different set of choices, we’d presumably still find ways to be happy. I mean, I’m a computer scientist, and I chose this job because I like analytical thinking and problem-solving. But there was no computer science in the Roman empire, so if I had been born there, I would have had to find some other outlet for my analytical tendencies. Or maybe they wouldn’t have developed at all; maybe they’re a product of my schooling. I was born with the potential to become an analytical person, but if that trait had never been encouraged or rewarded, then maybe it wouldn’t have developed at all. It’s hard to say. At the very least, it seems unreasonable to claim that I was born to be a computer scientist, and had I lived in the Roman empire, I would have been forever unfulfilled.

Even in modern times, it’s hard to believe that of all the choices available to me, computer science is my one true calling. I don’t think I’m doing computer science because it’s inherently the best fit for me; I don’t think that out of all the careers available, it’s the one that’s best in line with my personality. When I was growing up, my dad was a programmer; if he had been a physicist, maybe I would have ended up studying that instead. And I majored in computer science because I felt at home in the CS department at my school. If I had gone to a different school where the faculty weren’t as awesome and the students weren’t my kind of people, I might have easily majored in something different, like English or Anthropology.

So I don’t think we have true callings. I think we all have a fairly wide set of professions that could be fulfilling to us, and it won’t matter all that much which one we end up in. It’s for this reason that I don’t think increasing our choices for professions helps at all to increase our happiness. It only increases confusion over which choices we should make, since the choices become so fine-grained that we have trouble picking between them. And our cultural insistence that we should follow the urgings of our “true selves” makes us that much more confused, since instead of realizing there are many equally valid choices, we spend long hours agonizing over which choice is “right”, which choice is “best”, which choice is “most meaningful”.

Identity and Self

So far I’ve talked about increased choices leading to existential crises, but for many people, the number of choices leads to identity crises instead. Since our culture teaches us to be true to ourselves, identity crises will be particularly common for people who view the self as a matter of identity.

The idea of identities assumes that there are distinct clusters of selves; determining your true self (and what you should do with your life) thus becomes a matter of determining which identity cluster you fit into. Once you’re secure in your identity, it tells you who to be and how to behave. But figuring out which identity fits you best can be hard, since many might fit, or none might fit perfectly. So people have identity crises about all the different choices they need to make in life. There are identity crises around gender and sexuality, and about the type of relationship you want (monogamy? polyamory?), and so on. Interestingly, I don’t think that “what career should I pick?” generates the same kind of identity crises, maybe because the career you end up with is seen as less of an essential part of who you are.

It’s interesting to contrast two different approaches to being true to yourself. One approach says to act according to your inner urges without following any of the rules or categories laid down by society. The other approach says to view your self as defined by your identity. If you follow the first strategy, you will pick choices and actions that seem right for you specifically. If you follow the second strategy, you will pick choices and actions that seem right for a person who belongs to the categories you belong to. It’s a tradeoff between efficiency and accuracy. The second strategy only approximates your actual desires, but it’s more efficient: you can appeal to fixed rules and categories, which alleviates a lot of the difficulty in making decisions. Instead of choosing among all possible actions at every step, you just choose a few identities at the beginning, and then at every step you go along with the “rules” of that identity. Instead of asking yourself “What do I want to do right now?”, you can ask yourself “What would a scientist do?” or “What would a liberal do?” These questions often have much clearer answers, since you can look at what other members of the group are doing, and then do that thing. Note that in practice, we probably alternate between these two strategies for decision-making, with some people tending towards one more strongly than the other.

To summarize this section, if you think of the self in terms of identity, the increase in choices might give you an identity crisis instead of an existential one. With existential crises, you try to answer the question of what you should do with your life by figuring out what’s meaningful. With identity crises, you try to answer the question of what to do with your life by determining what kind of person you are.

Conclusion

Living in the 21st century, we are faced with a truly incredible number of choices for what we can do with our lives. These choices arise partly from societal complexity and extreme division of labor, and partly because we view choice as a moral imperative. Having all of these choices gives us an unprecedented amount of freedom, but it also leaves us with a lot of uncertainty about what we should do with our lives. This uncertainty tends to manifest as existential crises and identity crises.

Personally, I’m happy with this modern state of affairs. I prefer freedom and exploration to safety and comfort. But I recognize that these things come with tradeoffs, and that it’s difficult to figure out how to act when we’re faced with so much uncertainty. There are no culture-wide authorities that can definitively tell us the right answer. But in the face of uncertainty, we often find ourselves seeking out some authority who can tell us the answer. Religious beliefs (and sometimes scientific beliefs) can serve as authorities for existential questions. For questions of identity, we often look to psychologists and psychiatrists as authorities. It’s interesting to speculate what forms of authority we will look to in the future; a friend of mine suggests that we will increasingly ask science and technology for answers, perhaps in the form of personality tests based on statistics. In addition to new cultural authorities, it will be interesting to see what kinds of worldviews and social institutions we will develop to help people who are struggling with existential and identity crises.
