You can change people’s minds without changing their beliefs.

(This is not part of the Postrationality series. It’s just an isolated thought that I wanted to share.)

In order to change someone’s mind, you don’t have to change their beliefs. You just have to change their associations.

Let me unpack that a bit. By “change someone’s mind”, I mean change it in a way that affects their actions. In the rationalist community, we tend to see beliefs as the be-all and end-all of decision-making. Based on our beliefs, we should choose our actions to maximize expected utility, and that’s all there is to decision-making. But in practice, beliefs are only part of our reasoning and decision-making processes.

Let me give you a (pretty obvious) example of how we can fail at decision-making despite having correct beliefs. Suppose your friend is coming over tonight, and you’re planning to make dinner. You know that your friend is a vegetarian, but when you’re at the supermarket, you forget this and buy chicken. This mistake leads to a loss of utility, since either you serve your friend chicken for dinner, or you eventually remember and have to run back out to the store. In either case, a failure of reasoning occurred, and it led to a loss of utility. But the problem here wasn’t with your beliefs; you knew about your friend’s dietary preferences. The problem was with your memory, and which beliefs you actually used when you were making the decision.

So that’s the point I’m trying to get at here: your decision doesn’t just depend on your beliefs; it also depends on which specific beliefs you actually use when you’re deciding. And we can’t just use every belief, because there are too many to reason with efficiently. So we have to do approximate inference, and restrict ourselves to a subset of our beliefs.

Fortunately, most beliefs will be totally irrelevant to a given decision. If you’re choosing what to buy for dinner, then it doesn’t really matter that DNA is stored in the nucleus of the cell. This means that in order to make good decisions, you need to figure out which of your beliefs are most relevant, and use those and only those when reasoning. In the example above, buying chicken to serve to a vegetarian friend was a failure to retrieve and use a relevant belief. In general, when you forget something important, you are failing to retrieve a relevant belief.

This is where associations come in, because we use them in our belief-retrieval systems. Let’s say your vegetarian friend’s name is Steve: he plays in a metal band, he frequently wears mismatched socks, and one time he went bungee jumping off the Eiffel Tower. These are all things you associate with Steve; that is, when you think of Steve, these are the facts that might spring to mind. If we want to get slightly more formal, we can imagine that the mind contains a network of facts/entities/ideas, and that every pair of these has a strength of association between them. These strengths are mediated by context, so that in the context of cooking Steve dinner, your associations to facts about his food preferences might grow stronger, and your associations to facts about his socks weaker. This means that in order to remember that Steve is a vegetarian, you either need a strong base association between “Steve” and “vegetarian”, or you need the context of cooking Steve dinner to boost that association enough that it passes some threshold of relevance.
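(If you like seeing ideas spelled out in code, here’s a minimal sketch in Python of what threshold-based, context-mediated retrieval might look like. To be clear, this is pure illustration: the facts, the numbers, and the threshold are all invented for the example, and I’m certainly not claiming the brain computes anything like this.)

```python
# A toy model of context-mediated belief retrieval. All facts, scores,
# multipliers, and the threshold below are invented for illustration.

# Base association strengths between pairs of concepts (symmetric).
base_strength = {
    frozenset({"Steve", "vegetarian"}): 0.3,
    frozenset({"Steve", "metal band"}): 0.8,
    frozenset({"Steve", "mismatched socks"}): 0.6,
    frozenset({"Steve", "bungee jumping"}): 0.7,
}

# How context amplifies or dampens particular facts: cooking dinner
# boosts food-related facts and suppresses sock trivia.
context_modifier = {
    ("cooking dinner", "vegetarian"): 3.0,
    ("cooking dinner", "mismatched socks"): 0.2,
}

RELEVANCE_THRESHOLD = 0.5

def retrieve(cue, context):
    """Return the facts linked to `cue` whose context-adjusted
    association strength passes the relevance threshold."""
    facts = []
    for pair, strength in base_strength.items():
        if cue not in pair:
            continue
        (fact,) = pair - {cue}  # the other member of the pair
        score = strength * context_modifier.get((context, fact), 1.0)
        if score >= RELEVANCE_THRESHOLD:
            facts.append(fact)
    return facts

# The dinner context boosts the weak "vegetarian" link past the
# threshold; in a neutral context, that fact fails to come to mind.
print(retrieve("Steve", "cooking dinner"))
# -> ['vegetarian', 'metal band', 'bungee jumping']
print(retrieve("Steve", "walking down the street"))
# -> ['metal band', 'mismatched socks', 'bungee jumping']
```

The point of the toy model is just that the same stock of beliefs yields different retrieved facts in different contexts, which is exactly the failure mode in the chicken example: the belief was there all along, but nothing boosted it past the threshold while you were shopping.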

This is why, if you want to change how someone acts, you don’t need to change their beliefs. You just need to change their associations. Change which facts they (subconsciously) decide are relevant to the situation. Change what comes to mind when they think of a person or organization. The media does this all the time; it doesn’t even have to lie. It just has to broadcast information selectively. Suppose there’s a politician running for office, Senator Dick Head. You know that Senator Head once donated $5,000 to protecting the short-snouted snail, a cause that is dear to your heart. But he also cheated on his wife, and you find this morally repugnant. The media doesn’t care about the short-snouted snail, so it never reports on his donation. But the news channel you watch is constantly telling you what a horrible, awful cheater Senator Head is. So your association between “Senator Head” and “cheated on his wife” gets stronger, while your association to “cares about the short-snouted snail” remains weak. This means that by voting time, you are truly disgusted with Senator Head, and you vote for his competitor, Congressman Mike Rotch, instead.

So, in conclusion, reasoning is not just about which beliefs you possess. It’s also about which beliefs you actually use during a specific reasoning task.  Thus, if you want to change someone’s mind, you don’t have to change their beliefs.  You just need to change which beliefs they’re likely to use when reasoning.

Note: this post is not science. I cannot cite research that supports anything I just said (though I do think it’s reasonable, or I wouldn’t have written it). And I don’t know any mathematical models that reason by choosing beliefs according to strengths of association. I do know some researchers are working on how to choose which beliefs to use, but I have no idea how they’re going about it. So please don’t take anything I’ve said here as scientific fact. This is just informed speculation.


Postrationality, Table of Contents

A couple of weeks ago, Scott Alexander posted a map of the rationalist community, and much to my delight, I’m on it! Specifically, I’ve been placed in the country of Postrationality, alongside Meaningness, Melting Asphalt, Ribbonfarm, and A Wizard’s Word. This is truly an illustrious country, and I’m honored to be a member of it.

But anyway, as a result of this map, a lot of people have been asking: what is postrationality? I think Will Newsome or Steve Rayhawk invented the term, but I sort of redefined it, and it’s probably my fault that it’s come to refer to this cluster in blogspace. So I figured I would do a series of posts explaining my definition.

As you might imagine, postrationality has a lot in common with rationality. For instance, they share an epistemological core: both agree that the map is not the territory, and that concepts are part of the map and not part of the territory, and so on. Also, the two movements share some goals: both groups want to get better at thinking, and at achieving their object-level goals.

But the movements diverge in the way that they pursue these goals. In particular, rationality tends to give advice like “ignore your intuitions/feelings, and rely on conscious reasoning and explicit calculation”. Postrationality, on the other hand, says “actually, intuitions and feelings are really important, let’s see if we can work with them instead of against them”.

For instance, rationalists really like Kahneman’s System 1/System 2 model of the mind. In this model, System 1 is basically intuition, and System 2 is basically analytical reasoning. Furthermore, System 1 is fast, while System 2 is slow. I’ll describe this model in more detail in the next post, but basically, rationalists tend to see System 1 as a necessary evil: it’s inaccurate and biased, but it’s fast, and if you want to get all your reasoning done in time, you’ll just have to use the fast but crappy system. But for really important decisions, you should always use System 2. Ideally, you should even write out your probabilities explicitly and use those in your calculations; on this view, that is the best strategy for decision-making.

Postrationality recognizes that System 1 and System 2 (if they even exist) have different strengths and weaknesses, and what we need is an appropriate interplay between the two. Postrationality understands that emotions and intuitions are often better at decision-making than explicit conscious reasoning (I’ll discuss this in more detail in the second post). Therefore, postrationality tends to favor solutions (magick, ritual, meditation) that make System 1 more effective, instead of trying to make System 2 do all the work.

Here are some other things that seem to be true of postrationalists:

  • Postrationalists are more likely to reject scientific realism.
  • Postrationalists tend to enjoy exploring new worldviews and conceptual frameworks (I am thinking here of Ribbonfarm’s “refactoring perception”).
  • Postrationalists don’t think that death, suffering, and the forces of nature are cosmic evils that need to be destroyed.
  • Postrationalists tend to be spiritual, or at least very interested in spirituality.
  • Postrationalists like (and often participate in) rituals and magick.
  • When postrationalists are trying to improve their lives/the world, they tend to focus less on easily quantified measures like income, amount of food, amount of disease, etc., and instead focus on more subjective struggles like existential angst.
  • Postrationalists enjoy surrealist art and fiction.

This may seem like a rather disjointed list, so one of the purposes of this series will be to show how these tendencies all fit together, and in particular how they all derive from the basic postrationalist attitude towards life.

My current plan is to include three posts in this series (which I’ll link to as they become available):

  • A post explaining the rationalist perspective, including the System 1/System 2 model of the mind, the need to overcome bias using our analytic reasoning skills, and a strange form of Bayesianism where people actually try to do explicit calculations with their subjective probabilities.
  • A post explaining why the rationalist perspective is misguided.
  • A post examining the attitudes held by the two communities. This will be the most important post, since at the heart of it, rationality vs. postrationality is not a factual disagreement, but a disagreement of attitude. I will try to show how the postrationalist attitude (one of accepting the world and our own humanity) gives rise to the bullet-pointed list of tendencies that I showed above.

As a final note, I should probably mention: this definition of postrationality is purely my own. In particular, it does not necessarily represent the viewpoint of the other Postrationalists on Scott’s map. So if you’re on that map, and you think the definition of postrationality should be different from the one I’m giving here, then I hope you will leave a comment and let me know!


Even the Ugliness of the Universe Is Beautiful

The universe is a chasm of inconceivable space, surrounding us dizzily from all directions. We are afraid of the distance between adjacent stars and we are afraid of the distance between adjacent atoms; any open space is a breeding ground for phantoms. When confronted with the unknown and the wild, we have two choices: to build strength enough to join the wilderness, to revel in its fathomless wonders; or to hide within our fear, to tear down everything we can’t control and build walls to insulate ourselves against the sky.

I am trying to follow the path of strength, but it’s such a steep and narrow road. I want to look at the universe unflinchingly, to meet the eyes of God and hold his gaze. But the two ravens, fear and desire, circle above me; they try to push me off the pathway into the endless black abyss.

Our eyes were not meant for the universe in its rawness. Cognitive science reduces the human mind to mechanical computation. Evolutionary biology shows us that everything we do is rooted in selfishness. Quantum physics is maddeningly impossible to interpret. If we dwell on these things too long, we may find ourselves swallowed by insanity.

Cognitive science was my own personal bane. I got caught in the trap of watching each of my thoughts unfold, seeing how the analogical links I made were shaping my understanding of the world. It became impossible to believe in any thought or reason I concocted, because I could easily see how each thought arose and how many alternatives were possible.

And so I was almost ready to turn back, to retreat to the ancestral forest and abandon my quest for knowledge. But now I understand: if the discoveries of science seem ugly, if they warp our minds into madness, it is only because this knowledge was not meant for Man. We are digging deeper into these questions than evolution has prepared us for, and we’re finding that the universe is stark and alien and Other. If I’m disheartened by the knowledge that I’ve gained, it’s because I have started to pierce through the veil of human illusions; I am starting to see the universe as it truly is.

And so I will continue on my quest, armed with this understanding: even the ugliness of the universe is beautiful; even my descent into madness is beautiful. We are dealing with cosmic mysteries that were not meant for the eyes of Man. I will climb this steep and narrow road, even though the abyss still yawns before me. For now, when I look into its depths, I see that it is full of stars.


Identity and Bureaucracy


Lately, the internet has been awash with new gender and sexual identities. On the gender side, the strict dichotomy of male and female has given way to a proliferation of possibilities, including agender, transgender and gender fluid; these categories have entered the public consciousness to the point where Facebook recently changed the way it handles gender, allowing users to pick from fifty-six different options instead of just the usual two. As for sexuality, the choices are no longer limited to heterosexual and homosexual; the list has grown to include asexual, sapiosexual, and demisexual as well.

As usual when society changes, we see a lot of people lauding this trend as the next big step towards freedom, equality, and acceptance, and we also see a lot of people condemning this trend as a sure sign that society is headed straight to hell. Both views have their merits, and I don’t want to argue about which one is right. Instead, I want to ask a different question: why is society changing in the first place? What sorts of cultural and environmental pressures are causing people to be dissatisfied with their default genders?

All sorts of explanations have been proposed. One is that people have always longed for this freedom to choose their own gender, but up until now, society has been too bigoted and close-minded to allow it. Another explanation blames plastics and other industrial chemicals for interfering with our hormones, causing many people to feel at odds with their biological gender.

In this post I’d like to put forth another potential explanation, which is that our cultural obsession with fine-grained gender identities is a natural consequence of living in a rigid bureaucratic society.


As Ribbonfarm explains, in order to function, bureaucracies require the world to be legible.

The idea of legibility is rooted in our human need for order. The neater and more organized the world is, the easier it is for us to process and interact with it. From Ribbonfarm:

In Mind Wide Open, Steven Johnson’s entertaining story of his experiences subjecting himself to all sorts of medical scanning technologies, he describes his experience with getting an fMRI scan. Johnson tells the researcher that perhaps they should start by examining his brain’s baseline reaction to meaningless stimuli. He naively suggests a white-noise pattern as the right starter image. The researcher patiently informs him that subjects’ brains tend to go crazy when a white noise (high Shannon entropy) pattern is presented. The brain goes nuts trying to find order in the chaos. Instead, the researcher says, they usually start with something like a black-and-white checkerboard pattern.

The idea of legibility is as follows: when a system is so complex that we can’t process it, we change that system to make it simpler. Ribbonfarm gives the example of “scientific” forestry:

The early modern state, Germany in this case, was only interested in maximizing tax revenues from forestry. This meant that the acreage, yield and market value of a forest had to be measured, and only these obviously relevant variables were comprehended by the statist mental model. Traditional wild and unruly forests were literally illegible to the state surveyor’s eyes, and this gave birth to “scientific” forestry: the gradual transformation of forests with a rich diversity of species growing wildly and randomly into orderly stands of the highest-yielding varieties. The resulting catastrophes — better recognized these days as the problems of monoculture — were inevitable.

Bureaucracies are known for being rigid, dehumanizing, soul-sucking things, and it’s easy to see how legibility is responsible for this. Every piece of paperwork makes the world more legible, by distilling the complexity of our lives down to a few discrete fields. When you apply for a job, your application will be reviewed by some guy hunched over his desk, reading through 500 similar applications while drinking his third cup of coffee that morning. He doesn’t care about you as a person, in all your glorious uniqueness and complexity. He just wants to get through your application as quickly as possible. The job application form makes his life easier because he can see at a glance where you went to school, what your previous work experience is, and so on. It lets him decide very quickly whether you’re qualified for the job. The paperwork makes you legible to him.

But it also means there’s no room for individual differences and special cases. If you never went to college and you have no work experience in that field, the guy at the desk might throw away your application, even if you’re self-taught and brilliant and you really would be the best person for the job. And thus all of us have learned: the system does not reward people who are brilliant and capable. The system rewards people who are brilliant and capable and willing to play by its rules. The system is dehumanizing because it reduces a whole, complicated, intricate human being down to a handful of statistics.

As this example hopefully makes clear, bureaucracies don’t do this because they’re evil. They do it because they’re in a hurry and they don’t have time to consider the complicated details of everyone’s individual lives. Bureaucracies are like assembly lines. Before assembly lines, you had a whole bunch of craftsmen each making just a few items at a time. If a woodworker made chairs with two different shapes of legs, that was fine, because he could build two different chair-seats, one that went with each pair of legs. But in a factory, it’s essential that all parts be identical; that’s what makes the assembly line run quickly.

Analogously, when the world was smaller and less centralized, it used to be that a few individuals could gather together and work problems out for themselves. Because these people were operating on a small scale (three or four people rather than hundreds, thousands, or millions), it was possible to deal with each person at high resolution; it was possible to take everyone’s complex personalities into account when devising a solution. But a bureaucracy is dealing with thousands and thousands of cases at a time; it doesn’t have time to devise a unique solution for every individual problem. So instead, it pattern-matches each problem to some general class of problems, and applies a one-size-fits-all solution.


Now let’s return to the original topic of the post. Why on earth would rigid bureaucracies cause people to develop new gender identities?

To answer that, let’s consider the following situation. Suppose you’re a grad student, and your life has been pretty stressful lately. Maybe your girlfriend just broke up with you, or maybe your mom is in the hospital. Whatever the reason is, you’re having trouble focusing on your schoolwork, and you decide you want to take a semester off. Well, if your school is anything like mine, then in order to take a semester off, you have to apply for a leave of absence, which they’ll give you if you have a medical condition or a family hardship, or if you need to do military service. Family hardship covers the “mom is in the hospital” case, but what about the guy whose girlfriend just broke up with him? Well, it turns out there’s a medical condition that corresponds to his problems, and it’s called “depression”. So all he needs to do is go to a doctor, explain what’s going on in his life, and get a diagnosis.

I have a lot of problems with how our culture views “mental health issues”, but that’s not the point I’m trying to get at right now. I don’t want to debate whether this hypothetical student is actually depressed, or whether depression is actually a medical condition. Instead, I want to point out that pasting a label of “depression” onto this guy’s life didn’t change his situation in any way. He was just as depressed about his breakup before a psychologist filled out an official-looking form as he was afterwards. And yet, prior to receiving that label, this guy was not qualified for a leave of absence. After receiving the label, he was. The label didn’t change his problem; it just made it visible to the bureaucracy.

The point I’m trying to get at is that our bureaucratic society is sending us a powerful message: until your problem has a name, it doesn’t exist.


And this is grad school, where people are treated as individuals to the point where every student is personally mentored by a successful researcher in the field. Grad students have it easy compared to the elementary, middle, and high school students at your average public school. If a 10-year-old with Asperger’s gets overwhelmed by all the noise and commotion in gym class, the gym teacher can’t just notice this and allow the student to sit out. The parents need to take their child to a psychologist, procure a diagnosis of autism, and bring this to the school; only then can any action be taken.

I don’t mean to say that this system is all bad. If the gym teacher is ignoring the problem, a note from the doctor can force her to take it seriously. And on the other hand, requiring a doctor’s note keeps the kid from faking an illness, or the teacher from playing favorites.

But this system does have its consequences. Once the child is labeled as “autistic”, the diagnosis can never be taken back. It will color how the parents view their child’s behavior, and ultimately influence how the child views himself. This makes a diagnosis of autism different from, say, a diagnosis of diabetes. Both are permanent conditions, and knowing about either of them will change how the child interfaces with the world. But the symptoms of autism, unlike those of diabetes, cover aspects of one’s personality and preferences that have traditionally been included as part of the self. This makes autism compelling as an identity label in the way that diabetes is not.


Psychiatric diagnoses are everywhere these days. We are faced with a generation of children and young adults who have received these diagnoses, and who see them as a fundamental part of their identities. And it was the act of getting diagnosed, the act of having these identities recognized officially, that allowed these students’ individual differences to finally be taken seriously.

In our label-driven society, receiving the right classification is essential for ensuring that you are treated in a manner that befits you as a person. That’s why it’s important to find a set of labels that fit you well, and to make sure that those labels are accepted by society at large.

So is it any surprise that people are seeking out finer-grained gender identities, ones which describe their personalities better than “male” or “female” could? Is it any surprise that people consider these labels so incredibly important?

My prediction is that the new gender identities will be embraced most strongly by people who have also strongly embraced one or more psychiatric diagnoses. And this prediction seems to be borne out by the number of people on the internet who introduce themselves by some combination of gender and psychiatric identities. “Hi, I’m a non-binary asexual submissive with anxiety and depression.”


So that’s my answer. Why do we create these identity labels, these ever-finer-grained descriptions of who we are? Maybe it’s because we live in a rigidly bureaucratic society, where our individual differences will only be noticed if they have a label attached. Maybe it’s because we’re used to having our dimensionality reduced down to a few searchable keywords. Maybe we’re trying to make ourselves visible by making ourselves legible. Maybe we, as a culture, have internalized the idea that if something doesn’t have an official label, it might as well not exist.


Sneaking Past the Gatekeeper

Over thinking, over analyzing separates the body from the mind,
Withering my intuition, leaving all these opportunities behind.
— Tool, “Lateralus”


I think most readers of this blog will agree that analytical thinking is a very good thing. Without careful, analytical thinking, it would be difficult for us to reason about the world or figure out which actions to take. Also, thinking analytically is just plain enjoyable; that’s why I’m in academia, where I get paid to do it.

And yet, I think most of us can also agree that it’s possible to be overanalytical: thinking too much can drain our experiences of emotional vividness and make them feel less real. In this post, I’d like to explore overanalysis a bit. Why is it that analytical thought can dissociate us from the world and prevent us from really experiencing our lives?

One answer that I’ve heard, particularly from the mindfulness meditation people, is that thinking is just really distracting. If we get caught up in our thoughts, it means that our attention is directed inwards at the contents of our mind, instead of outwards at the world around us. From what I understand, the point of mindfulness meditation is to quiet our thoughts enough that some of the raw power of experience can get through. Then, we can live in the moment instead of getting caught up in memories of the past and worries about the future.

I think this is part of the explanation, but not all of it. It’s not just that experience and analysis are competing for our limited attentional resources. I think it’s much more deliberate than that, and in fact one of the purposes of analytical thinking is to form a protective barrier that shields us from the full emotional impact of our experiences.

I’ll give some examples of this, but first, it should come as no surprise that we would build a shield against powerful emotions. After all, emotional experiences are dangerous: they have the potential to change and transform us dramatically, and they have far-reaching impacts on our thoughts and behavior. That’s why we all know to be careful of things, like advertising and political propaganda, that are designed to appeal to our emotions.

So let’s take advertising as an example. We all know that ads are trying to manipulate us. By showing us pictures of successful, attractive people using their product, they try to persuade us that buying it will bring us friends, sex, and popularity. In order to resist the ads’ allure, we are taught from an early age to use our analytical faculties, or “critical thinking skills”, when dealing with them. By deconstructing advertisements to understand how they’re trying to manipulate us, we can defuse and deflect their emotional messages before they can get through. This is what I mean when I say that analytical thinking can act as a shield.

It seems pretty clear to me that we should be wary of advertisements and their emotional hold on us. But I think many intellectuals extend that suspicion to any stimulus that appeals to us on an emotional level, even if it might be beneficial. We’re overly defensive; we raise our shields even when it’s not appropriate.

A good example of this is ritual. Many intellectuals seem wary of ritual because of its emotional hold on us, even though ritual is known to have very positive effects. Ritual creates group solidarity, and can help us to emotionally reinforce our existing beliefs and goals. What this feels like from the inside is often euphoria and a heightened sense of connection to those around us.

Now, sometimes we have good reasons to avoid a specific ritual. A religious ritual, for instance, might draw on and strengthen beliefs that we reject. But ritual is much broader than religious ceremonies. In college, for example, I attended a CS competition that could easily be described as ritualistic. Each school sent a team of ten to the weekend-long event, typically dressed up in some costume related to that year’s theme. There were parties with dancing, and whole rooms of people singing bawdy songs together, and a giant trophy cup for the winners that we all drank beer out of.

There was nothing about this ritual that I might rationally want to avoid. “CS student” was a huge part of my identity in college, so I felt like I belonged there, and had a lot in common with the other participants. And there was nothing about the specific rituals that bothered me; actually, there are few things I love more than shouting bawdy songs along with hundreds of other computer scientists. And yet, when I first attended this competition, I felt a huge amount of resistance towards just letting go and participating in the ritual. In some sense, it required surrendering myself, relinquishing control to the collective energies surging through the room, and letting myself be swept up in the excitement of the event. It involved letting down my normal, analytical defenses against powerful emotional experiences. Once I did manage to let go, I had the time of my life. But I can still remember just how difficult it was to do it.

I suspect a lot of rationalists feel this way when encountering rituals. Intellectually, we may understand the benefits of ritual, but emotionally, we have trouble letting go. In a brilliant LessWrong comment, Viliam Bur wrote: “A ceremony is a machine that uses emotions to change people. For some people this may be a sufficient reason to hate ceremonies. Because they are typically designed by someone else, and may support goals we don’t agree with. Or the idea of some process predictably changing my mind feels repulsive.” As rationalists, we tend to be wary of anything that changes our minds but isn’t backed up by a rational argument. We want to make sure our beliefs are justified, so when we encounter things like ritual, which have a profound emotional impact on us but which are difficult to understand rationally, we very naturally approach these things with caution and even suspicion. But if we avoid rituals, or block out their emotional effects, then we are ignoring a very powerful tool, which could instead be used to our advantage.

Maybe ritual is a bad example to use here, if I’m trying to explain why overanalysis can be a problem, because I think a lot of my readers will say “yep, ritual is dangerous; I’m glad I have critical thinking skills to keep me from falling into its traps”. But I would expect that even the most wary of rationalists wants to let some things affect him emotionally, like powerful music or well-written fiction. The emotional content is precisely what makes these things enjoyable, but overanalyzing them can often severely diminish their emotional effect.

A good example of this is high school English classes. I’ve heard lots of people say that they might have actually enjoyed the books they had to read in high school, if only their teachers hadn’t forced them to dissect every little detail of the author’s symbolism.

I suspect that a lot of us overanalyze our lives the way English teachers overanalyze books. Many of us do this for the reason I described above: we’re suspicious of things that affect us emotionally, so we make sure to keep our critical thinking skills turned on constantly. And some of us do it just out of habit; we work in professions that require us to think analytically for eight hours a day, and once we get home it’s hard to turn that off.

As I said above, some amount of critical thinking is necessary to prevent us from getting seduced by advertisements, or otherwise taken advantage of. But I think that we tend to err on the side of too much analytical thinking, and don’t spend enough time just allowing ourselves to experience our lives. This is a big problem, because overanalysis squeezes our experiences dry of emotional vividness, making life drab and dreary.

One solution to this problem is to learn how to turn off our analytical thinking minds once in a while. But this can be quite difficult, especially for those of us whose professions train us to think analytically all the time. Personally, I’ve been trying for years and years to quiet my mind and just “live in the moment”, and I still find this incredibly difficult.

But fortunately, I think there’s another solution. There are some emotional stimuli which are so subtle that no matter how much we try to analyze them, they still manage to slip through our defenses and affect us emotionally. Even the most intransigently analytical among us can take refuge in stimuli such as these. They are the stimuli that are able to sneak past the gatekeeper.

The Gatekeeper

I propose the following metaphor: we can think of analytical reasoning as a gatekeeper that prevents ideas or experiences from entering the inner courtyards of our minds. When we use our analytical thinking “the right amount”, or apply it to the right things, then the gatekeeper serves us well. It keeps out nasty travelers, like advertisements, that want to pollute the inner places of our minds, or scatter the seeds of weed-like desires in our imaginations. And it lets in the nice travelers, like music and literature and perhaps even ritual; these travelers bring with them gifts that enrich our inner courtyards. But when we overanalyze, this is like having an overzealous gatekeeper, a paranoid and suspicious guard that turns away all guests, even the ones who are clearly carrying invitations. If you have a gatekeeper like this, then no new ideas or experiences will be able to visit your inner courtyard. It will grow dry and barren and the only thoughts you will have will be old, tangled, gnarled ones that circle through your mind like tumbleweed.

If you have a gatekeeper like this, then the only experiences that will be able to get through are those that can sneak past the gatekeeper. At the risk of alerting your (perhaps hypervigilant) gatekeeper to these usually-invisible travelers, I would like to spend the next section exploring what types of stimuli are able to sneak past.

Sneaking Past

It should be fairly straightforward to characterize the stimuli that can sneak past the gatekeeper: if analytical thinking blocks out our emotional experiences, then the things that affect us emotionally will be the ones we don’t think analytically about. Of course, in order for these things to affect us at all, some part of our mind has to process them; I’ll call that part the intuitive or subconscious mind.

So, the things that can sneak past the gatekeeper are the ones we interpret using our intuitive rather than analytical minds. I’ll try to give you some examples.

(1) Narrative

Narrative is something we tend to interpret more intuitively than analytically. This may be part of why stories have such a powerful effect on us. We read stories for entertainment, but as we read them, we are unknowingly learning more about how the world works. Stories, even fictional ones, have a kind of truth to them, because we can relate them to our own experiences, and they thereby give us insight into our lives.

The less we analyze stories, and the more we just allow ourselves to experience them intuitively, the more powerful their effects will be. The deepest emotional experiences will come when we blindfold the gatekeeper, suspend disbelief, and allow the story to engulf us.

Relatedly, here is a beautiful quote from David Chapman’s Buddhism for Vampires, where a character within the frame story explains why we might tell frame stories in the first place:

[W]hen you listen to a story, you enter a new world, created out of words. And you are willing to let the world be as the teller tells it. But that can only go so far, and if the world does not make sense, you will interrupt the tale and argue. By putting the story within a story, the teller of the inner story becomes only a character himself, so you cannot argue with him. Then the inner story can be less realistic. If you wrap it in enough layers of indirection, you can tell a completely ridiculous story and have it seem somehow believable.

And then, a story always works some transformation in the hearer. It is not ‘information’; it works on the heart. Although it is made of words, the true meaning of a story cannot be put in words. So the story teller has to stop the hearer from using their ordinary mind to listen. When the teller says ‘once upon a time…’, the listener knows it is time to listen with the heart. But the listener’s mind may still get in the way. To confuse ordinary mind, the story-teller wraps worlds in worlds, until the hearer gets lost, and can listen without judgment.

(2) Mythology

Mythology is a kind of narrative, and so what I said about stories is true for myths as well: they affect us deeply because we interpret them more intuitively than analytically, and they teach us about our lives because we’re able to connect them to our own experiences. But I think mythology deserves its own category, because myths communicate in archetypes and imagery, which do a particularly good job eluding our conscious minds. We can usually understand a narrative analytically if we try hard enough, distilling its plot down into themes and moral lessons. But archetypes are harder to interpret. Submersion in water might symbolize rebirth, for instance, but we’re not usually conscious of this fact as we read the story of Noah. Some deep, intuitive part of our mind does understand this symbol, however, and so the myth is able to convey its message in terms that only the subconscious mind can understand. It bypasses the conscious mind altogether and speaks directly to our intuitions.

Sometimes, I think all myths have two parts: a comprehensible narrative, which keeps the conscious mind occupied, and archetypal imagery, which carries the true meaning of the story without us realizing it.

It’s also worth mentioning surrealist art here, since much like mythology, surrealist art communicates in symbols, archetypes, and dreamlike imagery.

(3) Sigils in Chaos Magick

Many techniques in magick are specifically designed to sneak past the gatekeeper and elude the conscious mind. I’m going to describe one such magickal technique, called a sigil, but first, let me try to explain what magick actually is. There are a bunch of different interpretations of magick, including ones that treat gods, angels, and demons as real, but the one I’ll focus on here avoids any supernatural explanations. It says that magick provides a set of techniques for altering our minds, to make them better at doing what we want them to do. According to this interpretation, then, magick is a lot like rationality, except that rationality typically focuses on altering the conscious mind, while magick focuses on altering the subconscious mind. Since magick and rationality are dealing with two different parts of the mind, they naturally use very different toolkits. Magick’s toolkit typically involves arcane rituals, complex webs of symbols, magickal objects, and the like.

Personally, I’ve dabbled in Chaos Magick, which focuses on techniques rather than beliefs, and encourages practitioners to choose whatever worldview suits their purposes best at any given moment. The standard text on Chaos Magick is a book called Liber Null, by Peter Carroll. Interestingly, the first chapter is basically just meditation exercises: since the goal is to alter your mind, you first need to control your mind. Once you’ve managed to do that, you can begin making sigils.

Sigils are a method of planting a suggestion in your mind, and then forgetting you planted it there. You have some wish or desire, and so you make an image, called a sigil, representing that desire. Then, you deliberately forget the connection between the image and the desire. Or at least, you forget it consciously. Your subconscious mind remembers, and so you look at the sigil frequently to remind your subconscious to carry out its appointed task. Thus, the sigil bypasses the conscious mind and its gatekeeper, and allows the subconscious mind to operate without interference.

In Liber Null, in the section on sigils, I found the following paragraph on the importance of eluding the conscious mind:

The magician may require something which he is unable to obtain through the normal channels. It is sometimes possible to bring about the required coincidence by the direct intervention of the will provided that this does not put too great a strain on the universe. The mere act of wanting is rarely effective, as the will becomes involved in a dialogue with the mind. This dilutes magical ability in many ways. The desire becomes part of the ego complex; the mind becomes anxious of failure. The will not to fulfill desire arises to reduce fear of failure. Soon the original desire is a mass of conflicting ideas. Often the wished for result arises only when it has been forgotten. This last fact is the key to sigils and most forms of magic spell. Sigils work because they stimulate the will to work subconsciously, bypassing the mind.

(4) Cognitive Science

(This is not an example, but an anti-example.)

I’ve been reading a lot about cognitive science lately, but for the longest time, I avoided studying it, since I thought it would be very dangerous. After all, the whole point of cognitive science, in some sense, is to use conscious, analytical reasoning to understand the workings of the subconscious mind. This does two things: it gives the conscious mind access to subconscious processes that are usually hidden, and it might actually interfere with the subconscious mind’s functioning.

Regarding the first: if we give the conscious mind access to subconscious processes, this is like strengthening the gatekeeper, or perhaps equipping him with better security tools. Now, in addition to eyes, he has security dogs and infrared scanners and so on, which means that much less can sneak through. The average person’s gatekeeper is not very well-trained at security, but the skeptic’s is, and the cognitive scientist’s even more so.

Regarding the second: studying cognitive science might actually alter the mind’s functioning. If this sounds odd, I think it’s because our standard cultural models for understanding science blind us to the possibility. The scientific mindset perpetuates a division between the self, who is doing the studying, and the other, which is out there, and must be studied. Even with all the reminders from quantum physics that observing something can alter it, we still think of the scientist and the object of study as distinct. But this is very much not the case in cognitive science, where the mind studies itself.

It’s not unreasonable to think that, by studying ourselves, we could alter ourselves in the process. Watching our own cognitive mechanisms tick requires a mental contortion of sorts, turning our eyes backwards into our heads to watch our thoughts as they unfold. Sometimes I fear that this vivisection of my thoughts will leave me unable to think, like pulling apart the fibers of a muscle as it’s trying to run. Could it be that studying cognitive science because I find the mind’s workings beautiful is like dissecting my own eyes to understand how they comprehend beauty, only to find that I have blinded myself?

I feel like studying cognitive science has already damaged me. I think it’s exacerbated my hyperanalytical tendencies, and armed my gatekeeper so well that even narrative and mythology can no longer make it through. As a result, the world has become hollow and meaningless; I’ve drifted towards nihilism.

So I find myself asking, how can I reverse the damage that cognitive science has done to me so far? And how can I continue to study cog sci without letting it destroy me? I’m drawn to the field like a moth to a flame. As usual, I think the answer probably involves learning meditation. That way, I could clear my mind of unwanted, overanalytical thoughts when necessary, but allow them at times when I’m actually studying cognitive science.

What kind of analysis is overanalysis?

Overall, I think this gatekeeper metaphor is pretty sound. But I’m worried I’m committing a dangerous oversimplification here: I’ve treated all types of analytical thought the same way, when in fact, some might lead to a deadening of experience, while others might not. As far as I can tell, the type of analysis that most drains the vividness from experience is a sort of “dissective”, deconstructive analysis, the kind that takes a third-person approach to a first-person phenomenon.

For instance, when I was a kid, and I got hurt, my dad would say to me, “Don’t worry, pain isn’t real; it’s just a nerve signal that your body sends to your brain.” This argument has all sorts of flaws (define “real”, for instance), but that’s not the point. The point is that when I was a kid, this argument worked on me. Somehow, thinking about the pain in that sort of detached, analytical way actually lessened its effect.

I suspect that in general, thinking about our own experiences this way dulls them. And thinking about other people’s experiences in this way probably reduces empathy. I seem to feel much more empathy for other people when I understand their actions in terms of their subjective experiences, instead of understanding their actions in terms of e.g. brain chemicals or evolutionary psychology. I’m talking about stuff like “she destroyed a bunch of his stuff because she was angry at him for cheating on her”, vs. “she destroyed a bunch of his stuff to disincentivize him from investing emotions and resources in other potential mates”.

So why does this kind of analytical thinking decrease empathy? This article gives one explanation: we have one brain circuit for empathy, and another for analysis, and the two inhibit each other’s activity. We can also think about it in terms of concepts activating one another: when we describe this woman’s response as “anger”, it activates the concept of anger in our minds, which activates the feeling of anger, leading us to empathize. The evolutionary psychology explanation, on the other hand, does not activate any of our emotions, so we don’t have an empathetic response.

(Yes, I recognize the irony of using a detached analytical explanation to explain why detached analytical explanations might be bad.)

A Final Thought

A final thought, before I conclude. This essay has played on a popular dichotomy which pits reason (and particularly analytical thinking) against intuition and emotion. It’s not unreasonable that this dichotomy should have arisen in popular thought. After all, strong emotions often prevent us from thinking rationally, and (as we have seen in this essay), thinking analytically can prevent us from feeling emotions.

But the popular dichotomy is harmful, because it suggests that reason and emotion cannot coexist, and so we need to pick one and completely ignore the other. Thus, many of us have opted to join “team reason” and ignore our emotions, while others have joined “team emotion” and refused to listen to reason.

What we really need is a balance between these two things. Reason and emotion are not opposites or enemies; we need both of them in order to function in the world. The problems arise when we favor one over the other. Too much emotion, and our actions will become erratic and irrational. Too much analytical thinking, and we’ll lose the ability to feel.

So I’ve written this post not because I think we need to abandon analytical thinking altogether, but because I think the readers of this blog probably err on the side of overanalysis, and need to be pushed in the other direction.

I suspect a lot of readers of this blog probably come from the LessWrong rationalist community. To their very great credit, the rationalists of LessWrong are not Spock-clones at all, and they fully acknowledge the need for balancing emotion and reason. They emphasize that rationality is about “winning” (that is, actually achieving one’s goals). Whatever method helps us achieve our goals, whether it’s emotional or rational, that’s the one we should follow. And on LessWrong there’s also a widespread recognition of the importance of emotion in our lives.

And yet, reading LessWrong and talking to members of the community, I often get the sense that while rationalists think it’s perfectly sensible to embrace emotion, they just don’t know how to do it. I should say “we”, because this description has definitely applied to me too, at many points in my life. It’s a constant struggle for me to embrace emotional experiences without trying to analyze or control them. I think that a lot of us, for whatever reason, have armed our gatekeepers very thoroughly, and it’s hard for anything to sneak through. And so I hope that this post, this metaphor, and these examples will help to give people who want to embrace their emotions a foothold into changing how they interface with the world.

In conclusion, then, I leave you with the following two pieces of advice:

  • Do not let your emotions drown out your ability to think.
  • Do not let your thoughts drown out your ability to feel.


Particular thanks to Justin and to Aaron for many enlightening discussions on this topic.


Open Source Mythology Project?

Before writing was invented, stories were transmitted orally. This means that myths and fairy tales evolved organically as they were passed from one storyteller to the next. Since there was no canonical version, storytellers were free to modify their tales to suit the occasion, the audience, or their personal taste. For this reason, the stories were very much alive, able to grow and change along with the culture. This allowed them to retain their relevance over long periods of time.

The written tradition has changed this, of course. A story, once set in writing, becomes official and canonical. In the case of religious texts like the Torah or Bible, this is because the text is taken to be the word of God, which needs to be preserved faithfully. In the case of modern fiction, it’s because the story is taken to be intellectual property, whose plot and characters belong to the single author (or small group of authors, or corporation) which created it. This contrasts with myths and fairy tales; since their origins are often unknown, we tend to think of them as written by and belonging to the entire community, to be retold and modified for personal use as people please.

It’s hard to think of modern strategies for writing, and attitudes toward creation, which resemble the oral tradition. One possibility I can think of is fanfiction, where although there’s a single canonical story, fans are able to modify it and use its characters and world in stories of their own creation. But in fanfiction, the original story always dominates; the modified versions rarely leave the enclave of the fandom or gain anything close to the same status as the original. Perhaps closer to the oral ideal of collaborative storytelling is Scott Alexander’s conworlding community, which has in fact generated a bunch of mythology. But if I’m understanding that community correctly, the members all write pieces of the story and sew them together like a patchwork quilt; thus it differs from the oral tradition, where a single story is told in many different ways by many different storytellers.

Of course, we still have all the original myths and fairy tales that have flowed down to us from antiquity, and we are free to revise and retell these as we please. The proliferation of modern adaptations ensures that these stories stay alive and relevant to the 21st century. But I would really like to see some new myths and fairy tales emerge organically through time as well, for instance myths that reflect our scientific worldview.

Fortunately, in modern times we have developed a method of collaboratively creating content, and that is the open source movement. An open source project seems like the perfect way to implement something resembling the oral tradition. It would allow people from all over the world, who may not even know each other personally, to develop a story together. And if I understand open source culture correctly (having never actually contributed to an open source project), the resulting program does not have an individual owner or creator, but is thought to belong to and have been created by the entire community. Also, version control systems like git allow variation instead of one single canonical version; different versions of the story can develop in separate branches.

So, does anyone else think this is a good idea? Would anyone reading this be interested in contributing to such a project? Note that there’s room for people with all sorts of talents, everything from developing the characters and plot to improving the wording of a near-final draft. One of the nice things about this approach is that you don’t have to be good at every aspect of storytelling in order to contribute.

So if you might want to join such a project, or even if you don’t want to join but still think it’s a cool idea, please leave a comment below! If enough people are interested, then we can commence with the storytelling.


Increased Choices and Existential Crises

It seems to me that the more choices we have in life, the more we will suffer from existential crises.

During existential crises, we find ourselves asking philosophical questions like “What is the meaning/purpose of life?”. But these questions often arise from more practical concerns, like “What should I do with my life?”. It’s through seeking answers to the latter question that we are led to the former. This explains why existential crises are particularly common during our early-to-mid twenties, when many of us are first forced to confront the question of what to do with our lives.

Up until our early twenties, life is laid out in a clear, straight line. If you are an elementary school student, your task in life is to prepare for middle school. If you’re a middle school student, your task is to prepare for high school. A high school student prepares for college. But once you get to college, the tree trunk ends and the choices branch out in all directions. Suddenly you need to pick a major and thereby decide what to do with your life. It’s only natural, then, that existential crises should begin to arise during college and soon after graduation.

I’ve been no stranger to existential dilemmas, and while trying to resolve them and determine what to do with my life, I’ve often bemoaned the profusion of choices spread out before me. After all, the more choices we have, the more difficult it is to pick between the competing options. So I’ve tended to attribute our society’s epidemic of existential crises to the number of choices we have available.

In this essay, I’ll examine our culture’s obsession with choice. Then I’ll explain why it’s based on unsound principles, and thus contributes to existential crises. Finally, I’ll explain how we use identity to cope with the overabundance of choices available to us.

Increased Choices

In terms of what to do with our lives, we seem to have more choices these days than ever before. In the past, the number of options was limited, both because society was simpler and because strict social stratification constrained the set of roles available to any single individual.

In the distant past, nearly all societies were unstratified subsistence cultures where most of one’s time was spent hunting, gathering, or growing food. These societies might have had some gender-based division of labor, and perhaps there would be a few specialists such as shamans. But for the most part there were not many social roles to choose from, and the people in these cultures led very similar lives in terms of their daily activities.

As societies grew more complex, division of labor increased, which also increased the available choices. Now not everyone had to be a farmer; some people could be blacksmiths or carpenters or traders or statesmen. But the choices in Ancient Greece or Rome were still far more limited than the ones available today, as there were fewer professions to choose from. And in many societies, strict social stratification also limited the choices available to any individual person: for the most part, you took your father’s vocation, or followed some profession that was fitting for your social station.

Thus, compared to modern Western societies, past cultures gave people a far more restricted set of choices for what to do with their lives. It seems that in recent centuries, the number of choices has exploded, both because there are more professions to choose from, and because our liberal society tries to ensure that all of these choices are available to all people.


Past cultures, with their limited set of choices, presumably worked wonderfully for people who liked their allotted positions in society, but caused great inner conflict for those who felt themselves at odds with the position they were assigned. Our current system works wonderfully for people who have a clear preference for one of the many choices (or who just don’t care and are happy with wherever they end up), but it causes great inner conflict for those who are uncertain about which life-path to choose.

We tend to view an abundance of choices as a good thing, and even consider it a moral imperative to provide people with as many choices as possible. But I don’t share this moral sense; instead I get the impression that societies offering different numbers of choices each have their own strengths and weaknesses, and that choosing between them is a matter of balancing tradeoffs.

Of course, it’s easy for me to say this from the comfort of my ivory tower. How can I evaluate the tradeoffs between different cultures when the only culture I’ve experienced is my own? Perhaps it’s utterly naive of me to think I could get along in a culture with fewer choices, given the intensity of my individualist tendencies. I’ve always adamantly done things my own way instead of following established rules or traditions, and it’s hard to imagine what life would be like if that option weren’t permitted to me. But increasing the number of choices isn’t the only way to make room in society for outliers. As long as a culture has some designated place for outliers, where they are respected as members of society (shamans are an example of this), then I’m not sure that increasing the choices of social roles is actually necessary for providing outliers with good lives.

My perspective on this issue seems to be fairly unusual. In order to understand why it might not be completely unreasonable, I’d like to look at some of the assumptions underlying the usual worldview, and the flaws in these assumptions. In particular, I want to examine the assumptions that lead us to view choice as a moral value.

Choice as a Moral Value

Most people I talk to seem to have strong moral intuitions that choice is important, and that denying it to people is wrong. In America, this viewpoint seems to be particularly common among liberals and libertarians. Lack of choice is seen as a great injustice, because it means that people can be forced into roles that are unsuited for them. The solution to this is to give people as many choices as possible (as long as these choices don’t violate even more basic ethical principles, like not hurting anyone). Choice is equated with freedom, and denying people choices is a matter of denying them freedom.

The importance of choice is apparent in many causes that liberals feel strongly about. The most obvious example is “pro-choice”: people support abortion being legal because they want to give women more choices for what to do with their bodies. And quite a few social justice/equality issues can be framed in terms of choice. Feminism increases the number of choices for people by removing gender-based division of labor: women shouldn’t be forced to be housewives, because many women find careers more fulfilling. Conservatives promote strong social norms about family organization (one man and one woman, until death do them part), but liberals encourage choice in family organization (gay marriage, divorce, polyamory): you pick whichever family style is right for you, whichever one makes you happiest.

Most importantly for the purpose of this discussion, we find that gender equality, decreases in social stratification, and an emphasis on happiness rather than prestige in choosing a career all lead to more choices in professions. In modern times, you certainly don’t have to pick the same job your parents did. And as long as you have the economic means, you are not in principle restricted to jobs associated with your social class. In practice, there’s still a ton of social stratification, hence the white/blue collar divide. But in an ideal liberal world, all of this would go away and everyone would be able to pick whatever job they wanted. And in this ideal world, no profession would be thought of as any better or more prestigious than any other; you simply choose the job that’s right for you. Your choice is evaluated on how well it fits your individual personality, rather than on its impressiveness or its ranking in some objective hierarchy.

Hence we get a lot of people asking “What profession is right for me? How do I choose?”

Choice and the Conception of the Self

The key assumption here is that this question has an answer, that there really is some profession that’s right for you. You seek the profession that’s in greatest alignment with your “true self”. Thus, the whole ideology surrounding increased choices rests on our understanding of the self.

If I had to define the “true self”, it’s the aspects of personality that persist over time; it’s the fixed, static, core components of who we are. It’s a sort of model of our own minds that we can draw on when making decisions and predictions of our future behaviors. In addition to assuming the existence of a true self, we also assume that any decision we make can either be in alignment with it or at odds with it (or be at some non-binary point between these two extremes). The individualist strives to “be himself” and “be true to himself”, to obey the impulses of his true self instead of just blindly following some path laid down by society.

These ideas aren’t completely wrong. I certainly don’t mean to claim that the self doesn’t exist, or anything like that. People definitely seem to have some innate personality that persists over time, and our culture’s conception of the “true self” is not an unreasonable model of this. And I can speak from experience that this innate personality can sometimes be so at odds with society that it causes conflict. I’m glad that our society gives us a lot of freedom to be ourselves. But I object to the assumption that we need to give people this freedom by increasing the number of choices available. Increasing choices often seems to make life more complicated, without providing a substantial benefit.

It’s important to realize that our personalities, preferences, and “true selves” are not simply things we are born with. Instead, they form out of the interaction between our environments and our innate predispositions. Our innate dispositions specify some possibilities for the kind of person we can be, and our cultures and environments specify others. The person you end up becoming will depend on how your culture channels your specific predispositions. For instance, if you are born with an innate tendency towards being aggressive and competitive, you might become a warrior in one culture and a Wall Street banker in another. What you end up being depends on what your culture values, since your culture’s values get incorporated into your self. Your culture, as well as your innate tendencies, shapes your desires about what your life should contain.

If we lived in a culture with a completely different set of choices, we’d presumably still find ways to be happy. I mean, I’m a computer scientist, and I chose this job because I like analytical thinking and problem-solving. But there was no computer science in the Roman empire, so if I had been born there, I would have had to find some other outlet for my analytical tendencies. Or maybe they wouldn’t have developed at all; maybe they’re a product of my schooling. I was born with the potential to become an analytical person, but if that trait had never been encouraged or rewarded, then maybe it wouldn’t have developed at all. It’s hard to say. At the very least, it seems unreasonable to claim that I was born to be a computer scientist, and had I lived in the Roman empire, I would have been forever unfulfilled.

Even in modern times, it’s hard to believe that of all the choices available to me, computer science is my one true calling. I don’t think I’m doing computer science because it’s inherently the best fit for me; I don’t think that out of all the careers available, it’s the one that’s best in line with my personality. When I was growing up, my dad was a programmer; if he had been a physicist, maybe I would have ended up studying that instead. And I majored in computer science because I felt at home in the CS department at my school. If I had gone to a different school where the faculty weren’t as awesome and the students weren’t my kind of people, I might have easily majored in something different, like English or Anthropology.

So I don’t think we have true callings. I think we all have a fairly wide set of professions that could be fulfilling to us, and it won’t matter all that much which one we end up in. It’s for this reason that I don’t think increasing our choices for professions helps at all to increase our happiness. It only increases confusion over which choices we should make, since the choices become so fine-grained that we have trouble picking between them. And our cultural insistence that we should follow the urgings of our “true selves” makes us that much more confused, since instead of realizing there are many equally valid choices, we spend long hours agonizing over which choice is “right”, which choice is “best”, which choice is “most meaningful”.

Identity and Self

So far I’ve talked about increased choices leading to existential crises, but for many people, the number of choices leads to identity crises instead. Since our culture teaches us to be true to ourselves, identity crises will be particularly common for people who view the self as a matter of identity.

The idea of identities assumes that there are distinct clusters of selves; determining your true self (and what you should do with your life) thus becomes a matter of determining which identity cluster you fit into. Once you’re secure in your identity, it tells you who to be and how to behave. But figuring out which identity fits you best can be hard, since many might fit, or none might fit perfectly. So people have identity crises about all the different choices they need to make in life. There are identity crises around gender and sexuality, and about the type of relationship you want (monogamy? polyamory?), and so on. Interestingly, I don’t think that “what career should I pick?” generates the same kind of identity crises, maybe because the career you end up with is seen as less of an essential part of who you are.

It’s interesting to contrast two different approaches to being true to yourself. One approach says to act according to your inner urges without following any of the rules or categories laid down by society. The other approach says to view your self as defined by your identity. If you follow the first strategy, you will pick choices and actions that seem right for you specifically. If you follow the second strategy, you will pick choices and actions that seem right for a person who belongs to the categories you belong to. It’s a tradeoff between efficiency and accuracy. The second strategy only approximates your actual desires, but it’s more efficient: you can appeal to fixed rules and categories, which alleviates a lot of the difficulty in making decisions. Instead of choosing among all possible actions at every step, you just choose a few identities at the beginning, and then at every step you go along with the “rules” of that identity. Instead of asking yourself “What do I want to do right now?”, you can ask yourself “What would a scientist do?” or “What would a liberal do?” These questions often have much clearer answers, since you can look at what other members of the group are doing, and then do that thing. Note that in practice, we probably alternate between these two strategies for decision-making, with some people tending towards one more strongly than the other.
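(If you’ll forgive a computer scientist’s indulgence, here’s a toy sketch of the two strategies in Python. The actions, identities, and numbers are all invented for illustration; the point is just to make the tradeoff concrete. The first strategy consults your actual preferences at every step, while the second consults them once, when picking an identity, and follows that identity’s cached rules thereafter.)

    # Toy illustration of the two decision strategies; all values are invented.
    my_preferences = {"read a paper": 0.9, "go to a rally": 0.4, "knit": 0.7}

    # Stereotyped "rules" associated with each identity.
    identity_rules = {
        "scientist": {"read a paper": 1.0, "go to a rally": 0.2, "knit": 0.3},
        "activist":  {"read a paper": 0.3, "go to a rally": 1.0, "knit": 0.2},
    }

    def be_yourself(actions):
        # Strategy 1: consult your own desires at every single step.
        # Accurate, but you pay the full cost of deliberation each time.
        return max(actions, key=lambda a: my_preferences[a])

    def pick_identity():
        # Done once, up front: choose the identity whose rules best
        # match your actual desires.
        def fit(identity):
            rules = identity_rules[identity]
            return sum(my_preferences[a] * rules[a] for a in rules)
        return max(identity_rules, key=fit)

    def follow_identity(actions, identity):
        # Strategy 2: at each step, just ask "what would a scientist do?"
        # Cheaper, but only an approximation of what you actually want.
        return max(actions, key=lambda a: identity_rules[identity][a])

    actions = list(my_preferences)
    print(be_yourself(actions))                      # -> read a paper
    print(follow_identity(actions, pick_identity())) # -> read a paper (here the
                                                     #    approximation happens to agree)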

To summarize this section, if you think of the self in terms of identity, the increase in choices might give you an identity crisis instead of an existential one. With existential crises, you try to answer the question of what you should do with your life by figuring out what’s meaningful. With identity crises, you try to answer the question of what to do with your life by determining what kind of person you are.


Living in the 21st century, we are faced with a truly incredible number of choices for what we can do with our lives. These choices arise partly from societal complexity and extreme division of labor, and partly because we view choice as a moral imperative. Having all of these choices gives us an unprecedented amount of freedom, but it also leaves us with a lot of uncertainty about what we should do with our lives. This uncertainty tends to manifest as existential crises and identity crises.

Personally, I’m happy with this modern state of affairs. I prefer freedom and exploration to safety and comfort. But I recognize that these things come with tradeoffs, and that it’s difficult to figure out how to act when we’re faced with so much uncertainty. There are no culture-wide authorities that can definitively tell us the right answer. But in the face of uncertainty, we often find ourselves seeking out some authority who can tell us the answer. Religious beliefs (and sometimes scientific beliefs) can serve as authorities for existential questions. For questions of identity, we often look to psychologists and psychiatrists as authorities. It’s interesting to speculate what forms of authority we will look to in the future; a friend of mine suggests that we will increasingly ask science and technology for answers, perhaps in the form of personality tests based on statistics. In addition to new cultural authorities, it will be interesting to see what kinds of worldviews and social institutions we will develop to help people who are struggling with existential and identity crises.


The Fall of Nature: A Lament

(Just to forewarn you, this post will be rather different, both in style and content, from anything I’ve written here before. This is not a permanent shift in focus; I’m deliberately keeping this blog free of any theme or organization, so that I have the freedom to explore whatever subject interests me at a given moment. I’m still quite interested in the topics I’ve written about so far, and I expect to write plenty more posts in a similar vein. So if you find this particular post incomprehensible or unpalatable, don’t worry: I’ll be back to blogging about “ordinary” topics soon. For instance, the next post will discuss the interaction between rationality and intuition, and I’m also hoping to write some stuff about epistemology soon.)


The city of Baltimore imprisons me. I wake each morning to the sight of it spread beneath my window: buildings and roads stretching as far as the eye can see, roads swarming with cars and the sidewalks infested with people. Of all the things I observe, nearly all are made by man. Here nothing of the unknown, the wild, the mystery of nature remains; there is nothing but the dead fixed grid of city streets and the cold square vaults of endless buildings. We’ve banished the mystery to where it can’t meddle in our neatly-organized lives; we’ve scrubbed the world clean of all danger and uncertainty.

To the forces and beauty of nature, we give but a token acknowledgement: trees are planted between the sidewalk squares, flowerbeds placed in front of buildings. But the trees we plant here are pet trees, kept in cages; their roots curl backwards on themselves where they hit the sidewalk walls. These trees were placed here by humans; they do not grow of their own accord. Only the weeds spread by any will of their own, and their uprisings are quickly suppressed.

We build our cities to hide from the uncertainty of the world. But the wilderness has a harmony to it, a stillness that the city cannot attain. Always the forest breathes and pulses with life; always it moves with its slow and powerful rhythm. We have forgotten how to dance to that rhythm, so the pulses knock us off our feet. Then we flee to the cities where the rhythm cannot reach us; we take comfort in disordered cacophony, where no such rhythm can form. So great is our need for control that we’d rather live in dirt and noise of our own making than listen to a hauntingly beautiful melody composed by forces beyond our comprehension.

I cannot stand it here in Baltimore. Am I the only one who perceives the horrible grinding, screeching noise and stench of the city? Am I the only one who feels my senses assaulted by its sheer and constant insanity?


When the tangled chaos of the city grows too loud, I pull my senses back into the shell of my imagination, and I dream of the empty and desolate places to the North.

In the North, the Lord of Winter is still strong, and few choose to live in his domain. But where the people don’t go, the trees proliferate. For hundreds of miles they cover the ground. Always straight-backed, they keep their perpetual vigil, their black branches bearing the weight of the snow.

A year ago, the Lord of Winter summoned me, and I visited his stronghold. I loved it there; it was such a glorious refuge from the chaos of the city. Each day I looked out the window and saw the clear light of the North. I saw the spruce trees standing there, wrapped in their blankets of snow, patient and silent. Whenever I felt a disquiet within me, whenever my calm was broken, I could look out at the trees, and their stillness would calm me. Whatever drama shook my life, it could not bend their sturdy trunks. Over time I acquired their stillness; my life’s oscillating chaos slowed until it matched their steady cadence.

It wasn’t just their physical stillness that calmed me. The trees showed me that a world existed beyond my human concerns, that something was there which my problems could not touch. The city was different, because in it there was only human life. In the city, when drama sent ripples through my social life, there was no sturdy foundation that remained unmoved; this is why my human concerns seemed earthshaking and dire. But the spruce trees reminded me just how trivial and localized my concerns really were. The trees existed in a world where all of my worries were irrelevant.

I think this must be why social anxiety proliferates in urban environments. Surrounded by the world of the human, there’s nothing to remind us how little we matter in the grand scheme of the cosmos. There’s nothing to curb our delusions of grandeur.


I wanted to stay in the North forever, but the Lord of Winter sent me back to Baltimore. He told me I had a duty to fulfill in the human realm. So I went. But before I left, I asked him, how could I withstand the city that I hated with every atom of my being? How could I live in a land so desolate, stripped of any sign of the natural?

The Lord of Winter lent me his strength and armed me with his wisdom. He told me: concrete is shallow but the soil is deep. Beneath the crust of the city lies millions of years of history; beneath the dead veneer we’ve painted atop it, the earth is still alive. In order to withstand the city, I must reach downward with my awareness; I must force my perception down through the layers of concrete to rest it against the living body of the earth.

The trees may be slaves in the world of the city, the Lord of Winter told me, but they remember what it was like to be free. The trees are patient, the most patient of all the creatures of the earth. They will bear this indignity in silence; they will not lament or complain of their plight (as I am doing here). They will only stand and wait, century after century, as long as is necessary, until the cities crumble to dust and the world is theirs again. Then they will gather their age-old knowledge up from the roots of their memories. They will make new seeds and spread across the cold, dead earth, filling it with breathing life again.


But I’m afraid, so afraid, that the Lord of Winter is wrong. I’m afraid that the Age of Trees will never come again, that humanity will blot out nature entirely. Ours might be one of the last generations to witness the glory of nature. We might be the last to look upon vast uninterrupted stretches of forest and marvel at the unknown.

The ancient strongholds of the trees are diminishing. We encroach ever further on their domain; we corner the wilderness. It retreats into the uninhabitable places; it hides in the coldest, windiest reaches of the frozen and desolate north. But the population swells larger each year and floods across the land; climate change melts the ancient frost. Soon even these untouched places will be settled. Then we will look down on earth from space and see nothing but a single stretch of city lights, broken only by the ocean.

This is my worst fear, that humanity will conquer the entire world, subverting all of nature to its will. The earth will lose all of its wildness; all its unpredictability will be replaced by neat orderly rows of buildings, all the same construction, the same few stores in every town, so that the whole world will be simple and regular enough to be automated. I fear that we will turn the entire planet into one sprawling concrete and asphalt wasteland of a city.

If all this comes to pass, I’ll take bitter comfort in the knowledge that nature is cleverer than us; it will not be purged from the earth so easily. Even if we destroy all its outward manifestations, it will live on inside us. Our senses of aesthetics were crafted by nature; our senses of beauty come from thousands of years of nature imprinting itself on our perceptions. And so, even if we conquer the world in the way that I fear, I can picture a young child, growing up in a desert of asphalt, bricks, and dust, arranging her plastic playthings in the shapes of flowers and trees and animals that she has never seen… This thought gives me some hope.


But in time, even our genetic memories of nature will fade. We’ll forget our innate knowledge of flowers and trees and grass; they’ll be replaced by memories of glass and concrete and brick. We’ll learn to see those things as beautiful instead. Our aesthetics are shaped by evolution, and evolution will reshape us to fit our barren new home. We’ll forget we ever lived on a planet teeming with life.

Even now, many of us seem to have forgotten it. The human mind is an amazingly adaptable thing. People raised in cities feel at home there, are unable to sleep away from the sound of the cars. Even people raised close to the forest may come to see the cities as home, may seek them out, preferring their abrasive excitement to the quiet splendor of the forest. All the time, people are moving from small towns to cities, drawn like moths to the neon lights.

It seems I am in the minority; it seems I’m one of very few whose senses rebel against the cities’ clamor. The world is changing, humanity is changing, and I am stuck in the past. I’m one of the last representatives of a dying aesthetic.


Who am I to lament the downfall of nature? Who am I to rail against the cosmic tides? Countless times the face of the earth has changed; countless times the old order has been superseded by the new. Who am I to fight the turnings of the wheels of Fate? I am the yes-sayer; I must gaze upon the universe unflinchingly and affirm its trajectory, even if it contains my destruction and the death of all I love.

And what a small voice I am, in the echoing void of space. Even if I tried to change the future, what impact could I have? I am powerless against the great tides of Fate.

Sometimes I envision a future where humanity dies out, but in the process creates a new kind of robotic life that goes on to colonize the stars. This new life might destroy our very planet in the process; after all, it will need many resources to begin its life. It might hatch and crack the egg of the earth.

Who am I to prevent such momentous happenings? Each time I struggle against this vision of the future, I feel like one of the anaerobic organisms of the distant past, lamenting the coming of oxygen-producing life. It would be selfish of me to hold the universe back from this greater, more beautiful complexity, just to preserve my own tiny sliver of a life for a few extra seconds.


But I know that fighting this trajectory is not a matter of selfishness; it is a matter of Fate. The love of nature is ingrained too deeply within me for me to take any other path. It is my fate to struggle against this abominable future, just as the earth might be fated to succumb to it.

All of us must play our parts in life. All of us must follow the rivers that run through our blood, the deepest urges of our beings. The yes-sayer, in affirming the universe, must also affirm his place within it and the role he must play. He must find his deepest Will and follow it.

And I know that my Will yearns towards nature and the beauty of the forest. I feel myself pulled towards these things. I realize my aesthetics are subjective; I realize my life is ephemeral. But all the philosophy in the world cannot dull the pain I feel when I see a forest chopped down. All the conditioning in the world cannot make me see Baltimore as beautiful.

These are the tides of my being. These are the rivers that run through my blood; I will follow them wherever they lead. If it comes to it, I will fight for nature even if the fight is futile and I throw myself against a sky-high immovable wall. I will fight even if it accomplishes nothing and destroys me in the process. This is what it means to say “yes” to the universe. It means I must follow the path of my Will, even if it leads me into the fiery bowels of hell.

All I can hope is that, when the time comes, I will have the strength and courage to follow my destiny.


Personal Space Bubbles and the Physical Location of the Self

Personal Space, Status, and Territory

Kevin Simler has a great post about social status and its relation to space; a central aspect of this is personal space. As Kevin observes, we can understand a lot about people and the social relations between them by watching how they interact in space. People exert dominance by taking up a lot of space, or by invading someone else’s space; people express submission by huddling up, making themselves small, and trying to use as little space as possible.

So space is related to status in the following manner: having more status means having more space. Personal space can be explained very naturally in terms of territory, since having more territory makes you more powerful. If you’re trying to dominate someone, you will steal his territory. If you’re trying to submit to someone, you’ll surrender your territory to him.

The Personal Space Bubble


We can understand personal space metaphorically as a bubble. This “personal space bubble” marks the invisible boundary of your territory. As long as no one gets inside your bubble, all is well. But as soon as someone enters it unbidden, you’ll begin to feel physically uncomfortable, because your space has been invaded. Notice how this metaphor of invasion captures both the territorial aspect of personal space and the threat inherent in its violation.

So we can conceive of the personal space bubble as demarcating territory. But I’d like to explore another metaphor for understanding personal space: namely, we can view the personal space bubble as an extension of the physical body. This leads to some interesting insights, especially when we combine the PERSONAL SPACE IS BODY metaphor with the metaphor BODY IS SELF to get PERSONAL SPACE IS SELF.

Personal Space as an Extension of the Body

If we view the personal space bubble as part of the body, then entering someone’s personal space is like touching that person physically. The discomfort we feel at having our personal space invaded resembles the discomfort we experience at being touched by someone we don’t really know.

On the physical body, we view some parts as more spatially central, and others as more peripheral. Touching a central body part is much more intimate than touching a peripheral one. Someone you barely know might tap you on the shoulder, but touching someone’s chest or belly is usually reserved for romantic relationships. During courtship, physical contact proceeds from the most peripheral (least intimate) body parts to the most central (most intimate) ones. Pickup artist blogs encourage men to playfully touch a woman’s arm (a peripheral body part) in order to initiate physical contact and thereby create the beginnings of intimacy. “Escalating” the physical contact increases the intimacy. (MORE IS UP, for my fellow metaphor nerds.)

Personal Space as an Extension of the Self

Why is physical contact such a big deal? What makes it so intimate, or so threatening, depending on the context? The evo-psych answer to this is obvious: physical proximity equals vulnerability, so it makes sense that we’d have developed a very strong emotional response to it. But we can gain more insights into the significance of physical contact when we view it in terms of the metaphor BODY IS SELF, which we can combine with PERSONAL SPACE IS BODY to get PERSONAL SPACE IS SELF.

Observe that these metaphors contrast with the very space-limited conception we often have of the self. The self is frequently equated with the mind, or constrained even more narrowly to the higher cognitive functions. This means that if we grant the self a physical location at all, it will be in the brain. (A few weeks ago, I asked a friend where his self was, and he replied, with no hesitation, “my prefrontal cortex”.) Embodied cognition has begun to challenge this idea, extending the boundaries of the self to encompass the body as well. But even with the insights from embodied cognition, people usually don’t conceive of the self as extending beyond the body. (Notable exceptions include the people who study extended cognition, as well as all the mystical traditions which seek to remove the separation between self and other by asserting that all of us are one.)

Despite our dualistic heritage, though, I don’t think we really conceive of the self as restricted to the brain. We may talk about it that way in rational philosophical discourse, but linguistically and conceptually, we seem to perceive the body as part of the self. If someone touches your arm, it’s much more likely that you’ll think to yourself “someone touched me” than “someone touched my body”. This shows that the body and self are linked metonymically. And there’s an entire disorder for people who feel like their bodies are not part of their selves; it’s called depersonalization, and I’ve experienced it. If you have this condition, you will look at your body, and it won’t feel like a part of you; instead, it will feel like a machine that you operate. When you lift your arm, you’ll understand that you’re the one making it move, but it will feel like operating a robot from a distance. This suggests to me that the BODY IS SELF metaphor provides a very accurate description of our ordinary perceptions.

So what are the implications of this metaphor and its extension PERSONAL SPACE IS SELF? For one thing, it means that whenever someone invades your personal space or touches your body, they are touching and therefore altering your self. (Note that this relies on a further metaphor: to touch something physically is to alter or control it. Examples are “the film touched me deeply” and “he pushed me to publish my results”. These metaphors follow very naturally from the fact that touching things physically often allows us to influence, change, or control them.)

If someone dominates you by touching you or invading your personal space, they are imposing their self on yours; they are entering your self-space and thereby influencing your personality and identity. The closer they come, the further they intrude into your self-territory, the more their self blends with yours. This explains why couples in relationships are happy to enter each other’s space – they want to blend their selves, to unify into a single entity in some sense. And this is why sex is the most intimate of all acts, because it is the closest that two selves can come to merging with one another. It’s also why rape is so traumatic: it doesn’t just hurt your body, but invades the deepest parts of your self.

Self and Possessions

We can extend our metaphor (and our selves!) even further if we say that possessions are part of the self. I don’t know about anyone else, but I’m a territorial person, and when someone uses my stuff without asking first, it makes me feel very uncomfortable, as if someone were violating my personal space. So this metaphor makes a lot of sense to me, and I’d like to add it to my conceptualization of the world.

Conceiving of possessions as part of the self gives a new meaning to the idea that “the things you own end up owning you”. When you buy something, that object gets integrated into your self. Some objects, like fancy cars, seem to come with their own “personalities”; when you buy one of these, it will merge with and influence your own personality. If you start out with a weak personality, and you acquire a lot of powerful artifacts, you may find them taking over…

But consider objects that don’t come with a personality; or rather, since all objects presumably have some personality, consider objects whose personalities are weak enough to be negligible. I’m thinking of things like an ordinary, functional stapler, a plain white coffee mug, or a simple decorative item in your house. You probably own a lot of these objects, but not all of them will be equally important to you. Thus, we can think of more and less peripheral objects, just as there are more and less peripheral body parts. A disposable object that you don’t care about losing is likely to be peripheral. The most central objects are things like a kid’s blankie or teddy bear, or an adult’s computer or guitar.

What determines which objects are central and which are peripheral? The model I use for this is that when you first acquire an object, it gets infused with a little bit of your self, just because you own it. The longer you own the object, and the more you use it, the more of your self gets infused into it. This can happen with any kind of use, but the more emotion you feel when using the object, the stronger the bond between you and it will grow.
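(To make this model slightly more concrete, here’s a toy formalization in Python. The functional form and the constants are completely made up; only the qualitative behavior comes from the description above: a little infusion from mere ownership, more from time and use, and the most from emotionally charged use.)

    # A playful, made-up formalization of the "self-infusion" model.
    def self_infusion(days_owned, uses, emotion=1.0):
        """Rough attachment score for one possession (arbitrary units)."""
        base = 0.1  # owning the object at all infuses a little bit of self
        return base + 0.01 * days_owned + 0.05 * uses * emotion

    # A disposable stapler stays peripheral; a long-played guitar becomes central.
    stapler = self_infusion(days_owned=30, uses=10)
    guitar = self_infusion(days_owned=900, uses=500, emotion=3.0)
    print(stapler < guitar)  # True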

People say that freedom from possessions makes you happier. In terms of my metaphor, I think there are two reasons for this. One was already mentioned above: if you own a lot of objects with strong personalities, you may find that your own personality gets overwhelmed. But another is just that, if you own a lot of possessions, then your self will get spread too thin. You’ll have put bits of your self into so many objects that you won’t have any left to keep inside your body.

One way to free yourself from possessions is just to own less stuff: the fewer things you have, the less thinly you will need to spread yourself between them. And having fewer objects lets you invest more sentimental value in each of them. If you only have one spoon, it will be a lot more important to you than if you had fifteen identical spoons and used a different one each night.

But another way to free yourself from possessions is to “hold each possession lightly”. That is, you can have lots of possessions, as long as you don’t infuse much of your self into them. This can help prevent loss aversion, the phenomenon where we care more about losing something than we did about acquiring it in the first place. My explanation of loss aversion is that when you lose an object, you don’t just lose the physical possession, but the bits of your self that you’ve infused into it. So you can prevent loss aversion simply by putting less of your self into your possessions. Phrasing this in terms of a more common metaphor, the less you invest emotionally in your possessions, the less of a big deal it will be if you lose them. This is the physical possession version of keeping your identity small.


This post was inspired by Kevin Simler‘s essays exploring different metaphors (see the bottom section here), as well as some conversations I had with him. So, thanks Kevin!


Blogging and Academia

It’s been nearly four months since I blogged here last. Sorry about that. Partly it’s because I’ve been busy; such is the nature of grad school. But I’ve also been questioning whether I should blog at all. Is blogging incompatible with my academic endeavors? Do I sacrifice a piece of my academic reputation for every blog post I write? Often I feel like I’m torn between the culture of the internet and that of academia.

Privacy and Credibility

Perhaps the most obvious concern is for my credibility. In undergrad, I didn’t have to worry about this; I could proclaim my opinions as loudly as I wanted without having to fear any consequences. That’s because I didn’t have to interact with anyone in a professional setting. If I talked loudly about my political opinions, say, and someone disagreed, then I could choose not to interact with that person, or he could choose not to interact with me. We could retreat to our filter bubbles, and never have to confront the fact that we didn’t get along. But in a professional setting, one has to interact with all sorts of people from cultures all around the world. So I ask myself, why create opportunities for unnecessary tension by posting my opinions online? Why not save the politics and religion for private gatherings of trusted friends?

But it’s more than just wanting to avoid conflict. I also need to consider my academic reputation. If I write about spiritual experiences online, will that hurt my credibility as a scientist? What if I confess that I view science as a religious endeavor, a spiritual quest that brings us closer to the cosmic forces that guide our lives?

I am fortunate, though. For the most part, I don’t think my spiritual beliefs will harm my scientific reputation, because I belong to an empirical discipline. In an empirical field, it doesn’t matter where my ideas come from, or what they share space with in my brain. The only thing that matters is how well they predict the data. Why use a scientist’s character to evaluate his theories when one can just test them empirically? Furthermore, I study natural language processing, and all of my experience so far suggests that people in this field are incredibly friendly and open-minded, tolerant of strange ideas, and willing to approach even the most politically charged topics with calm rationality. If my field were not so open-minded, I doubt I would have ever started blogging, and certainly not under my real name.

But even supposing that blogging won’t hurt my credibility in NLP, I still think there is cause for concern. I also have to consider how my field is perceived by outsiders, especially outsiders to academia. Right now I’m a lowly grad student, but someday I could be a professor, a public representative of academic and intellectual life. Then all of my opinions could be seen as reflecting on my field. Less dramatically, if I am a professor someday, then I will have to worry what my students can find about me on the internet. With this in mind, I am wary of saying anything too stupid now.

But is it cowardice to want to hide my opinions? Am I betraying the things I believe in by being ashamed to speak of them publicly? Sometimes I think so, but I’m also sympathetic to the argument that one must choose one’s battles. If, as a public academic, I want to argue for unpopular models or philosophical stances, it might be best not to tarnish these things by association with my other ideas.

But isn’t this dishonest? Shouldn’t I put everything out on the internet and trust the intellectual community to use all information available to find the truth? I’ve thought about this question a lot, and alas, I don’t think that’s how truth-seeking works. Even if the academic community could be trusted to filter out and preserve the salvageable components of my philosophy (and perhaps it can), I couldn’t expect the internet to do the same. The internet is not a rational, unbiased place. One wrong move online, and I could fall into the spotlight of public ridicule and be forever associated with something stupid I once said.

Perhaps this is why most academics in my field seem to keep their online presence very professional. Looking at their Google+ pages, for instance, I see discussions of research, enthusiasm over new scientific findings and technological advances, and the occasional article supporting an uncontroversial political opinion. And academics’ webpages often have a “personal life” section describing unobjectionable hobbies. But I rarely see academics write publicly about their deeply-held philosophical or spiritual beliefs, about the innermost forces that motivate them to do research. It could just be that these topics don’t interest people who aren’t me, but I’ve met enough academics who have plenty to say about these things in private that I’m disinclined to believe they’re unusual interests.

It sounds like I’m complaining, but I really don’t mean to criticize anyone. Increased privacy is important in a professional setting. And to some extent, there are generational differences at work here; I assume most academics grew up without the internet, and the people who did grow up on the internet are only just beginning their academic careers. Often, I wonder how the extreme openness of the “internet generation” will interact with the aloofness the professional world requires. I’ve seen some of my friends slowly hide their non-academic identities from public view as they progressed through grad school, and I suppose that to some extent I’ve been doing the same. But when I think about it, I really don’t want to withdraw from online discourse. The internet is such a huge part of my life; to stop writing publicly online would be to cut off many of my closest friendships. Sure, these friends and I do communicate through private channels, but I found many of them through their blogs and forum posts, and some of them found me through my own online writings, and a lot of our interactions center around shared membership in online communities. If I withdraw from this social world out of concerns for privacy, then I will lose one of my main avenues for forming meaningful connections.

In any case, I couldn’t disappear from the internet even if I wanted to. I’ve been leaving trails there all my life; if you google me closely enough, it’s still possible to find embarrassing things I wrote when I was thirteen. So maybe my desire for privacy is hopeless, and the best I can do is bury my old online stupidity beneath my current philosophical opinions, which in turn I will bury with more writings once I decide that my current opinions are stupid too.

Conflicting Communities

So privacy is one of the main reasons I’ve avoided blogging lately. But another reason is that I’ve often felt that the blogging community I’m part of is at odds with the academic system. Sometimes I feel like I have to pick one or the other, and that any loyalty to the blogging community means I am being unfaithful to academia.

It doesn’t seem like this should be the case. After all, blogging is a hobby, and pretty much everyone has hobbies that they do outside of work. But blogging isn’t like knitting or biking or even writing fiction. These activities have different goals from academic research. But the blogging community that I’ve hoped to be a part of shares a goal with academia: both communities try to develop ideas collaboratively. And they do so in dramatically different ways, leading me to think that I can’t do both at once while remaining respectable in both communities.

Academia values depth, while what I will call “internet philosophy” values breadth. From the perspective of academia, internet philosophers are laughably amateurish; they read a couple books on a subject and then formulate grand overarching theories that are easily contested by academics in the relevant field. Internet philosophers dabble, unwilling to engage with a subject for the length of time it takes to understand it thoroughly. As a friend of mine put it, it’s called an “academic discipline” because doing high-quality academic research requires a lot of discipline.

But from the perspective of the internet philosophers, academic fields are hopelessly narrow. Academics tend to zoom in on one tiny little problem so intensely that they lose sight of the world around them. And after studying a field in such detail, it’s hard to avoid getting caught up in the subtle misconceptions that permeate it; it takes an outsider to suggest a radically new way of doing things. Thus it seems better to look at many different fields and keep the outside perspective.

But this choice between breadth and depth is a false dichotomy. As one of my professors pointed out, there’s a third option: studying two or three academic fields in great depth. Then, the focus is still narrow enough to allow for deep engagement with the subject, but broad enough to confront you with multiple perspectives and keep you from getting trapped in any one field’s way of doing things.

Another contrast I’ve struggled with is that academia is strictly hierarchical, while anyone can contribute to internet philosophy. Within the world of internet philosophy, it’s perfectly reasonable for me to write out grand theories about the workings of the mind, with nary a reference to support my claims. But it would be unspeakably arrogant of me to do so in the context of the academic system, unless I had researched the matter empirically and in great depth. And so I am torn – if I have a small insight that seems like it could be valuable, but which I don’t have time to study more deeply, should I post that idea online? If I did, I could get feedback from people who really do know what they’re talking about. And on the off chance that there’s something to my ideas, someone else could then take them up and explore them in more depth. But I fear that it’s arrogant to write when I really have no idea what I’m talking about.

To some extent, whether or not I am arrogant depends on my writing style when I propose these ideas. If I write with appropriate humility, then there shouldn’t be any problem. But so far, I’ve written all my blog posts in overdramatic internet philosophy style, wording things grandiloquently and claiming to put forth profound new theories. If I want to continue blogging without it conflicting with my academic life, I will have to change my writing style.

A final conflict between blogging and academia is that the two communities often see themselves as actively opposed to each other. Academics dismiss bloggers and their intellectual contributions, while bloggers reject the whole academic system as a useless waste of time. In particular, the LessWrong community seems to value autodidacticism and finding ways to succeed outside of traditional hierarchies and life-scripts. Why bother with an academic degree, they ask, when you can learn the material on your own, for free? A diploma is just a piece of paper, and anyone who cares about one is overly focused on appearances. But then I look at the LessWrong community, and it seems that many of its members struggle to motivate themselves to work. I have met plenty of very intelligent people on LessWrong who have avoided college and are working dead-end jobs at Walmart and despairing for their futures.


After thinking about academia vs. internet philosophy over the past few months, I’ve decided (fortunately, because it pays me) that I prefer the academic system. In general, I just find people in academia so much more admirable: everyone works so hard, and their devotion inspires me to work hard as well. And I like the feeling that, by working in academia, I’m contributing to the larger emergent structure of science and philosophy. I still greatly respect some of my fellow bloggers (especially the ones on my blogroll), but they seem like notable exceptions to the generalizations I made above.

So finally, after four months of avoiding all the online communities I once frequented and instead immersing myself as deeply as possible in the academic life, I am firmly on the side of academia in this “blogging vs. academia” internal debate. This means that I can return to blogging without worrying that I am betraying academia by doing so. My writing style will change a bit (in particular I will try to be less arrogant and presumptuous), and in the interest of privacy I will keep any especially personal details to myself. But for the most part I expect to keep blogging about the same topics I was exploring before.
