Personal Space Bubbles and the Physical Location of the Self

Personal Space, Status, and Territory

Kevin Simler has a great post about social status and its relation to space; a central aspect of this is personal space. As Kevin observes, we can understand a lot about people and the social relations between them by watching how they interact in space. People exert dominance by taking up a lot of space, or by invading someone else’s space; people express submission by huddling up, making themselves small, and trying to use as little space as possible.

So space is related to status in the following manner: having more status means having more space. Personal space can be explained very naturally in terms of territory, since having more territory makes you more powerful. If you’re trying to dominate someone, you will steal his territory. If you’re trying to submit to someone, you’ll surrender your territory to him.

The Personal Space Bubble


We can understand personal space metaphorically as a bubble. This “personal space bubble” marks the invisible boundary of your territory. As long as no one gets inside your bubble, all is well. But as soon as someone enters it unbidden, you’ll begin to feel physically uncomfortable, because your space has been invaded. Notice how this metaphor of invasion captures both the territorial aspect of personal space and the threat inherent in its violation.

So we can conceive of the personal space bubble as demarcating territory. But I’d like to explore another metaphor for understanding personal space: namely, we can view the personal space bubble as an extension of the physical body. This leads to some interesting insights, especially when we combine the PERSONAL SPACE IS BODY metaphor with the metaphor BODY IS SELF to get PERSONAL SPACE IS SELF.

Personal Space as an Extension of the Body

If we view the personal space bubble as part of the body, then entering someone’s personal space is like touching that person physically. The discomfort we feel at having our personal space invaded resembles the discomfort we experience at being touched by someone we don’t really know.

On the physical body, we view some parts as more spatially central, and others as more peripheral. Touching a central body part is much more intimate than touching a peripheral one. Someone you barely know might tap you on the shoulder, but touching someone’s chest or belly is usually reserved for romantic relationships. During courtship, physical contact proceeds from the most peripheral (least intimate) body parts to the most central (most intimate) ones. Pickup artist blogs encourage men to playfully touch a woman’s arm (a peripheral body part) in order to initiate physical contact and thereby create the beginnings of intimacy. “Escalating” the physical contact increases the intimacy. (MORE IS UP, for my fellow metaphor nerds.)

Personal Space as an Extension of the Self

Why is physical contact such a big deal? What makes it so intimate, or so threatening, depending on the context? The evo-psych answer to this is obvious: physical proximity equals vulnerability, so it makes sense that we’d have developed a very strong emotional response to it. But we can gain more insights into the significance of physical contact when we view it in terms of the metaphor BODY IS SELF, which we can combine with PERSONAL SPACE IS BODY to get PERSONAL SPACE IS SELF.

Observe that these metaphors contrast with the very space-limited conception we often have of the self. The self is frequently equated with the mind, or constrained even more narrowly to the higher cognitive functions. This means that if we grant the self a physical location at all, it will be in the brain. (A few weeks ago, I asked a friend where his self was, and he replied, with no hesitation, “my prefrontal cortex”.) Embodied cognition has begun to challenge this idea, extending the boundaries of the self to encompass the body as well. But even with the insights from embodied cognition, people usually don’t conceive of the self as extending beyond the body. (Notable exceptions include the people who study extended cognition, as well as all the mystical traditions which seek to remove the separation between self and other by asserting that all of us are one.)

Despite our dualistic heritage, though, I don’t think we really conceive of the self as restricted to the brain. We may talk about it that way in rational philosophical discourse, but linguistically and conceptually, we seem to perceive the body as part of the self. If someone touches your arm, it’s much more likely that you’ll think to yourself “someone touched me” than “someone touched my body”. This shows that the body and self are linked metonymically. And there’s an entire disorder for people who feel like their bodies are not part of their selves; it’s called depersonalization, and I’ve experienced it. If you have this condition, you will look at your body, and it won’t feel like a part of you; instead, it will feel like a machine that you operate. When you lift your arm, you’ll understand that you’re the one making it move, but it will feel like operating a robot from a distance. This suggests to me that the BODY IS SELF metaphor provides a very accurate description of our ordinary perceptions.

So what are the implications of this metaphor and its extension PERSONAL SPACE IS SELF? For one thing, it means that whenever someone invades your personal space or touches your body, they are touching and therefore altering your self. (Note that this relies on a further metaphor: to touch something physically is to alter or control it. Examples are “the film touched me deeply” and “he pushed me to publish my results”. These metaphors follow very naturally from the fact that touching things physically often allows us to influence, change, or control them.)

If someone dominates you by touching you or invading your personal space, they are imposing their self on yours; they are entering your self-space and thereby influencing your personality and identity. The closer they come, the further they intrude into your self-territory, the more their self blends with yours. This explains why couples in relationships are happy to enter each other’s space – they want to blend their selves, to unify into a single entity in some sense. And this is why sex is the most intimate of all acts, because it is the closest two selves can merge with one another. It’s also why rape is so traumatic: it doesn’t just hurt your body, but invades the deepest parts of your self.

Self and Possessions

We can extend our metaphor (and our selves!) even further if we say that possessions are part of the self. I don’t know about anyone else, but I’m a territorial person, and when someone uses my stuff without asking first, it makes me feel very uncomfortable, as if someone were violating my personal space. So this metaphor makes a lot of sense to me, and I’d like to add it to my conceptualization of the world.

Conceiving of possessions as part of the self gives a new meaning to the idea that “the things you own end up owning you”. When you buy something, that object gets integrated into your self. Some objects, like fancy cars, seem to come with their own “personalities”; when you buy one of these, it will merge with and influence your own personality. If you start out with a weak personality, and you acquire a lot of powerful artefacts, you may find them taking over…

But consider objects that don’t come with a personality; or rather, since all objects presumably have some personality, consider objects whose personalities are weak enough to be negligible. I’m thinking of things like an ordinary, functional stapler, a plain white coffee mug, or a simple decorative item in your house. You probably own a lot of these objects, but not all of them will be equally important to you. Thus, we can think of more and less peripheral objects, just as there are more and less peripheral body parts. A disposable object that you don’t care about losing is likely to be peripheral. The most central objects are things like a kid’s blankie or teddy bear, or an adult’s computer or guitar.

What determines which objects are central and which are peripheral? The model I use for this is that when you first acquire an object, it gets infused with a little bit of your self, just because you own it. The longer you own the object, and the more you use it, the more of your self gets infused into it. This can happen with any kind of use, but the more emotion you feel when using the object, the stronger the bond between you and it will grow.

People say that freedom from possessions makes you happier. In terms of my metaphor, I think there are two reasons for this. One was already mentioned above: if you own a lot of objects with strong personalities, you may find that your own personality gets overwhelmed. But another is just that, if you own a lot of possessions, then your self will get spread too thin. You’ll have put bits of your self into so many objects that you won’t have any left to keep inside your body.

One way to free yourself from possessions is just to own less stuff: the fewer things you have, the less thinly you will need to spread yourself between them. And having fewer objects lets you invest more sentimental value in each of them. If you only have one spoon, it will be a lot more important to you than if you had fifteen identical spoons and used a different one each night.

But another way to free yourself from possessions is to “hold each possession lightly”. That is, you can have lots of possessions, as long as you don’t infuse much of your self into them. This can help prevent loss aversion, the phenomenon where losing something pains us more than gaining it would have pleased us. My explanation of loss aversion is that when you lose an object, you don’t just lose the physical possession, but the bits of your self that you’ve infused into it. So you can prevent loss aversion simply by putting less of your self into your possessions. Phrasing this in terms of a more common metaphor, the less you invest emotionally in your possessions, the less of a big deal it will be if you lose them. This is the physical possession version of keeping your identity small.


This post was inspired by Kevin Simler’s essays exploring different metaphors (see the bottom section here), as well as some conversations I had with him. So, thanks Kevin!


Blogging and Academia

It’s been nearly four months since I blogged here last. Sorry about that. Partly it’s because I’ve been busy; such is the nature of grad school. But I’ve also been questioning whether I should blog at all. Is blogging incompatible with my academic endeavors? Do I sacrifice a piece of my academic reputation for every blog post I write? Often I feel like I’m torn between the culture of the internet and that of academia.

Privacy and Credibility

Perhaps the most obvious concern is for my credibility. In undergrad, I didn’t have to worry about this; I could proclaim my opinions as loudly as I wanted without having to fear any consequences. That’s because I didn’t have to interact with anyone in a professional setting. If I talked loudly about my political opinions, say, and someone disagreed, then I could choose not to interact with that person, or he could choose not to interact with me. We could retreat to our filter bubbles, and never have to confront the fact that we didn’t get along. But in a professional setting, one has to interact with all sorts of people from cultures all around the world. So I ask myself, why create opportunities for unnecessary tension by posting my opinions online? Why not save the politics and religion for private gatherings of trusted friends?

But it’s more than just wanting to avoid conflict. I also need to consider my academic reputation. If I write about spiritual experiences online, will that hurt my credibility as a scientist? What if I confess that I view science as a religious endeavor, a spiritual quest that brings us closer to the cosmic forces that guide our lives?

I am fortunate, though. For the most part, I don’t think my spiritual beliefs will harm my scientific reputation, because I belong to an empirical discipline. In an empirical field, it doesn’t matter where my ideas come from, or what they share space with in my brain. The only thing that matters is how well they predict the data. Why use a scientist’s character to evaluate his theories when one can just test them empirically? Furthermore, I study natural language processing, and all of my experience so far suggests that people in this field are incredibly friendly and open-minded, tolerant of strange ideas, and willing to approach even the most politically charged topics with calm rationality. If my field were not so open-minded, I doubt I would have ever started blogging, and certainly not under my real name.

But even supposing that blogging won’t hurt my credibility in NLP, I still think there is cause for concern. I also have to consider how my field is perceived by outsiders, especially outsiders to academia. Right now I’m a lowly grad student, but someday I could be a professor, a public representative of academic and intellectual life. Then all of my opinions could be seen as reflecting on my field. Less dramatically, if I am a professor someday, then I will have to worry what my students can find about me on the internet. With this in mind, I am wary of saying anything too stupid now.

But is it cowardice to want to hide my opinions? Am I betraying the things I believe in by being ashamed to speak of them publicly? Sometimes I think so, but I’m also sympathetic to the argument that one must choose one’s battles. If, as a public academic, I want to argue for unpopular models or philosophical stances, it might be best not to tarnish these things by association with my other ideas.

But isn’t this dishonest? Shouldn’t I put everything out on the internet and trust the intellectual community to use all information available to find the truth? I’ve thought about this question a lot, and alas, I don’t think that’s how truth-seeking works. Even if the academic community could be trusted to filter out and preserve the salvageable components of my philosophy (and perhaps it can), I couldn’t expect the internet to do the same. The internet is not a rational, unbiased place. One wrong move online, and I could fall into the spotlight of public ridicule and be forever associated with something stupid I once said.

Perhaps this is why most academics in my field seem to keep their online presence very professional. Looking at their Google+ pages, for instance, I see discussions of research, enthusiasm over new scientific findings and technological advances, and the occasional article supporting an uncontroversial political opinion. And academics’ webpages often have a “personal life” section describing unobjectionable hobbies. But I rarely see academics write publicly about their deeply-held philosophical or spiritual beliefs, about the innermost forces that motivate them to do research. It could just be that these topics don’t interest people who aren’t me, but I’ve met enough academics who have plenty to say about these things in private that I’m disinclined to believe they’re unusual interests.

It sounds like I’m complaining, but I really don’t mean to criticize anyone. Increased privacy is important in a professional setting. And to some extent, there are generational differences at work here; I assume most academics grew up without the internet, and the people who did grow up on the internet are only just beginning their academic careers. Often, I wonder how the extreme openness of the “internet generation” will interact with the aloofness the professional world requires. I’ve seen some of my friends slowly bury their non-academic identities from public view as they progressed through grad school, and I suppose that to some extent I’ve been doing the same. But when I think about it, I really don’t want to withdraw from online discourse. The internet is such a huge part of my life; to stop writing publicly online would be to cut off many of my closest friendships. Sure, these friends and I do communicate through private channels, but I found many of them through their blogs and forum posts, and some of them found me through my own online writings, and a lot of our interactions center around shared membership in online communities. If I withdraw from this social world out of concerns for privacy, then I will lose one of my main avenues for forming meaningful connections.

In any case, I couldn’t disappear from the internet even if I wanted to. I’ve been leaving trails there all my life; if you google me closely enough, it’s still possible to find embarrassing things I wrote when I was thirteen. So maybe my desire for privacy is hopeless, and the best I can do is bury my old online stupidity beneath my current philosophical opinions, which in turn I will bury with more writings once I decide that my current opinions are stupid too.

Conflicting Communities

So privacy is one of the main reasons I’ve avoided blogging lately. But another reason is that I’ve often felt that the blogging community I’m part of is at odds with the academic system. Sometimes I feel like I have to pick one or the other, and that any loyalty to the blogging community means I am being unfaithful to academia.

It doesn’t seem like this should be the case. After all, blogging is a hobby, and pretty much everyone has hobbies that they do outside of work. But blogging isn’t like knitting or biking or even writing fiction. These activities have different goals from academic research. But the blogging community that I’ve hoped to be a part of shares a goal with academia: both communities try to develop ideas collaboratively. And they do so in dramatically different ways, leading me to think that I can’t do both at once while remaining respectable in both communities.

Academia values depth, while what I will call “internet philosophy” values breadth. From the perspective of academia, internet philosophers are laughably amateurish; they read a couple books on a subject and then formulate grand overarching theories that are easily contested by academics in the relevant field. Internet philosophers dabble, unwilling to engage with a subject for the length of time it takes to understand it thoroughly. As a friend of mine put it, it’s called an “academic discipline” because doing high-quality academic research requires a lot of discipline.

But from the perspective of the internet philosophers, academic fields are hopelessly narrow. Academics tend to zoom in on one tiny little problem so intensely that they lose sight of the world around them. And after studying a field in such detail, it’s hard to avoid getting caught up in the subtle misconceptions that permeate it; it takes an outsider to suggest a radically new way of doing things. Thus it seems better to look at many different fields and keep the outside perspective.

But this choice between breadth and depth is a false dichotomy. As one of my professors pointed out, there’s a third option: studying two or three academic fields in great depth. Then, the focus is still narrow enough to allow for deep engagement with the subject, but broad enough to confront you with multiple perspectives and keep you from getting trapped in any one field’s way of doing things.

Another contrast I’ve struggled with is that academia is strictly hierarchical, while anyone can contribute to internet philosophy. Within the world of internet philosophy, it’s perfectly reasonable for me to write out grand theories about the workings of the mind, with nary a reference to support my claims. But it would be unspeakably arrogant of me to do so in the context of the academic system, unless I had researched the matter empirically and in great depth. And so I am torn – if I have a small insight that seems like it could be valuable, but which I don’t have time to study more deeply, should I post that idea online? If I did, I could get feedback from people who really do know what they’re talking about. And on the off chance that there’s something to my ideas, someone else could then take them up and explore them in more depth. But I fear that it’s arrogant to write when I really have no idea what I’m talking about.

To some extent, whether or not I am arrogant depends on my writing style when I propose these ideas. If I write with appropriate humility, then there shouldn’t be any problem. But so far, I’ve written all my blog posts in overdramatic internet philosophy style, wording things grandiloquently and claiming to put forth profound new theories. If I want to continue blogging without it conflicting with my academic life, I will have to change my writing style.

A final conflict between blogging and academia is that the two communities often see themselves as actively opposed to each other. Academics dismiss bloggers and their intellectual contributions, while bloggers reject the whole academic system as a useless waste of time. In particular, the LessWrong community seems to value autodidacticism and finding ways to succeed outside of traditional hierarchies and life-scripts. Why bother with an academic degree, they ask, when you can learn the material on your own, for free? A diploma is just a piece of paper, and anyone who cares about one is overly focused on appearances. But then I look at the LessWrong community, and it seems that many of its members struggle to motivate themselves to work. I have met plenty of very intelligent people on LessWrong who have avoided college and are working dead-end jobs at Walmart and despairing for their futures.


After thinking about academia vs. internet philosophy over the past few months, I’ve decided (fortunately, because it pays me) that I prefer the academic system. In general, I just find people in academia so much more admirable: everyone works so hard, and their devotion inspires me to work hard as well. And I like the feeling that, by working in academia, I’m contributing to the larger emergent structure of science and philosophy. I still greatly respect some of my fellow bloggers (especially the ones on my blogroll), but they seem like notable exceptions to the generalizations I made above.

So finally, after four months of avoiding all the online communities I once frequented and instead immersing myself as deeply as possible in the academic life, I am firmly on the side of academia in this “blogging vs. academia” internal debate. This means that I can return to blogging without worrying that I am betraying academia by doing so. My writing style will change a bit (in particular I will try to be less arrogant and presumptuous), and in the interest of privacy I will keep any especially personal details to myself. But for the most part I expect to keep blogging about the same topics I was exploring before.


Modeling Willpower

People talk a lot about willpower. Supposedly, it’s a kind of magical energy that gives you the ability to complete your goals, break free of bad habits, and basically solve all your problems. How many times have you heard someone say, “If I just had more willpower, I could lose these 50 pounds”?

Sometimes willpower is treated as innate: you either have it or you don’t, and if you don’t have it, there’s really no use trying to lose those 50 pounds. I’ve heard many people say things like “Oh, I’d try diet and exercise, but I just don’t have the willpower.”

Other times, willpower is treated as something you can acquire. This viewpoint is especially popular among people who are giving you advice. “You just need to get more willpower,” they tell you. “Then you can lose those 50 pounds!” Great, you think to yourself. So, what is this “willpower” thing, exactly, and where do I get it?

Willpower is something I’ve thought a lot about lately. I started grad school after a grueling four years of undergrad, and found myself incredibly burnt out. Suddenly, the willpower that had propelled me through undergrad had vanished, and I just couldn’t find the energy to work anymore. In vain, I sought my lost willpower. In vain, I tried to bully myself into working again. All failed.

Part of the problem was that my thinking about willpower was confused. And to a large extent, I think that my confusion stemmed from some general misconceptions that our culture has about willpower. So in this post, I’ll first describe our culture’s standard model of willpower. In particular, we tend to think of willpower as a matter of sucking it up and doing something unpleasant, in service of some greater goal. Then, I’ll describe an alternative model, where willpower does not fight against desire, but is simply another form of desire. Finally, I’ll argue that a full definition of willpower should encompass both viewpoints. I’ll refactor willpower into two components, determination and desire, which reflect the two models respectively, and which combine to give a more complete account of what willpower actually is.

The Usual Model: Willpower vs. Desire

In our culture, I think we often associate willpower with the word “should”. For instance, you wake up in the morning and think, “I should really start that CS homework/do the dishes/clean the house today… but I really don’t want to.” Then, a battle ensues between willpower and desire.

If willpower wins, you do the CS homework, and get it out of the way. If you’re lucky (and I’ll come back to this), you’ll enter a state of flow. Once you’ve pushed past the initial resistance, the homework assignment will fly by. When you finish it, you may be exhausted, but you’ll feel good because you’ve accomplished something. If you’re unlucky, though, the initial resistance will never disappear. You won’t enjoy the homework assignment at all, and you might spend the whole time resenting it, perhaps interleaving short bursts of working with long stretches of procrastination. When you finish the assignment, you’ll feel drained and defeated, and perhaps you’ll even feel like your day has been wasted. After all, if you hadn’t had this stupid homework assignment to do, you could have gone on a bike ride/finished reading that novel/hung out with some friends.

That’s if willpower wins. If willpower loses, then you procrastinate. Maybe you go for that bike ride, or maybe you just screw around on the internet all day. Either way, you’ll be plagued the whole time by guilt. No matter how much you’re enjoying the bike ride/internet/whatever, you’ll still feel the guilt of procrastination tugging at the back of your mind, reminding you that you should really be working right now. And later, when the assignment is almost due, you’ll write it up in a mad rush; when you hand it in, you’ll feel a flood of shame at the poor quality of your work.

In either case, the premise is the same. You are made up of two subagents, willpower and desire, and they fight with each other for control of your actions. Who wins will depend on a couple of things:

  • The strength of your willpower. In this model, it is taken for granted that different people have different strengths of willpower, though as I mentioned above, sometimes this strength is treated as innate, while other times it’s claimed to be mutable.
  • The strength of your desire. As William Blake wrote, “Those who restrain desire do so because theirs is weak enough to be restrained.”

The battle between willpower and desire is much like Freud’s classic struggle between the superego and the id. Willpower is the superego, the sensible subagent. It knows both what’s moral and what’s good for you. Cheating on your girlfriend is immoral; eating junk food is bad for you; your willpower disapproves of both of these things. I picture the Willpower subagent as a no-nonsense elementary school teacher, waggling its finger at disobedient Desire. “You shouldn’t do that,” Willpower says. “It’s wrong/unhealthy/lazy/etc.”

Desire, on the other hand, is greedy, in the computer science sense of the word. Desire cares about what will feel good now; it doesn’t care about getting a good grade on the test, or decreasing the risk of a heart attack fifty years in the future. It just wants to sit on the couch all day, playing video games and eating candy. Desire will cause all kinds of trouble if Willpower isn’t around to keep its sharp eyes on it.
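To unpack “greedy in the computer science sense”: a greedy algorithm always takes the locally best option, with no regard for the long run, and this can cost it overall. A minimal sketch (the coin denominations here are my own toy example, not anything from the post):

```python
def greedy_coin_change(amount, coins):
    """Greedy strategy: always grab the largest coin that fits --
    the locally best move, made without looking ahead."""
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            used.append(c)
    return used

# With denominations {1, 3, 4}, making 6:
# greedy grabs 4, then 1, then 1 -- three coins --
# while the patient optimum is 3 + 3, only two coins.
print(greedy_coin_change(6, [1, 3, 4]))  # [4, 1, 1]
```

Like Desire, the greedy strategy takes the candy in front of it every time, even when a little restraint would leave it better off.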

They’re also associated with different parts of the mind. Desire is low-level, intuitive, and automatic. Willpower, on the other hand, is conscious, verbal, high-level, and cerebral. When willpower wins, it’s a triumph of the rational, moral part of the mind over the base and animalistic.

To reiterate the most important contrast in this model, Desire says “I want”, but Willpower says “I should”. When people envision someone with strong willpower, they imagine someone for whom the “should” always wins. People who believe they “don’t have the willpower” think this because they see their desire constantly triumphing over the “should”, the irrational parts of their brain winning over the rational ones.

But there is another, completely different way of looking at willpower, which I’ll describe in the next section.

A New Model: Willpower Is Desire

Once, in college, a professor of mine told me “there’s no such thing as willpower”. We were driving to an event, and we’d stopped at a store on the way. As we waited in the checkout line, I found myself tempted by the candy they put right next to the register. I said out loud, “Gah, I really want that candy, but I really shouldn’t eat it. I hope I have the willpower to resist this temptation.” And my professor responded with something that surprised me greatly at the time. He said, “There’s no such thing as willpower. There’s only wanting things enough.” He explained that if I didn’t buy the candy, it wouldn’t be because I had the willpower to resist it. It would be because my desire to not buy the candy outweighed my desire to buy it. He told me that if I really wanted to stop eating candy, I should go online and read about the negative impact that candy has on my health.

This idea that there was no such thing as willpower startled me, and seemed to contradict my experience. Here, faced with this candy bar, I was confronted with two feelings: (1) “I really want that candy bar because it’s tasty”, and (2) “I shouldn’t buy the candy bar because eating candy is unhealthy”. These are, of course, the “want” and “should” of the previous section. But my professor was suggesting that I should build up associations between candy and negative things, so that the next time I saw a candy bar, I’d have the following two feelings instead: (1) “I really want that candy bar because it’s tasty”, and (2) “I really don’t want that candy bar because it would give me cavities”. There would be no “want” vs. “should”, just two different “wants” competing with each other to see which was stronger.

What exactly is the difference between a “want” and a “should”? Writing the two different versions of (2) out in words conceals the fact that these two impulses are implemented very differently in the mind. As I mentioned in the previous model, the “should” is a very high-level, cerebral thing, while the “want” is a low-level, visceral desire. So in the second scenario, (2) couldn’t just be a piece of abstract knowledge that eating candy bars results in cavities. It would have to be a visceral revulsion to the candy bar based on its negative effects.

If you wanted to implement the “do not want” of the second scenario, you would need to use methods that specifically appealed to the subconscious, intuitive parts of your mind. For instance, you might picture the sugar from the candy bar eating holes into your teeth, or think of the CEO of an evil corporation laughing gleefully at how he’d tricked you with his insidious advertisements. Pick whatever image you find most viscerally powerful. With enough reinforcement, the stimulus of the candy will get linked up to that image and the negative feelings associated with it. You’ll be reprogramming your mind so that the “should” of the first scenario won’t be necessary. You won’t be faced with a choice between the visceral desire for the candy bar and a purely intellectual understanding that you shouldn’t eat it. Instead, you’ll be caught between two visceral desires, and the stronger desire will win. No agonizing, ego-depleting blast of “willpower” will be necessary.

So according to this second model, willpower is a matter of visceral desire. All of the rational considerations in the world will get you nowhere without intuitive desire. Real willpower isn’t the voice saying “should”, it’s the intuitive motivations behind the reasoning. The thought “I should do X” doesn’t do anything but remind you of your pre-existing desire to do X. Whether you follow the “should” or not depends on the strength of your desire to do X and the strength of your desire not to do X. So in the homework example from the previous section, whether willpower “wins” or not will depend on how badly you want to do the homework vs. how badly you want to do something else. This is why it’s a lot easier for willpower to “win” if you actually enjoy the assignment. Barring that, if you care deeply enough about your grades, or you’re consumed by worry about your inability to complete the assignment on time, these things may also help you to resist procrastinating. Those of you who are prone to procrastination will have noticed that it becomes easier to work on the assignment as the due date approaches. That’s because the intuitive urgency of completing the assignment increases with proximity to the deadline.

So if you follow this second model, “increasing your willpower” is really just a matter of reconfiguring your desires. Some of this can be done using the technique I discussed above: manually associating a stimulus with an emotion (see this link too for an excellent description). Some of it can be done by altering your mind to care more about the future and the long-term effects of your actions. Once your desires are sufficiently reconfigured, there will be no need for the word “should”. In fact, there will be no need for conscious decision-making at all. Once you’ve reached this state, following your instincts will be enough, since they’ll have been honed to align with what the “should” would have said if it still existed. I think this is what Crowley was getting at when he talked about the True Will. It’s something that your whole body decides to do. There’s only the one true action, and every fiber of your being is united in completing it.

Integrating the Two Models

We’ve seen two models now: one which says that willpower is the ability to do things you don’t want to, and another which says that willpower is just another form of desire. So, which of these models is correct? The answer, of course, is “neither”.

I tend to think that a good model of willpower will integrate aspects of both of these models. Thus, in my current model, I’ve refactored willpower into two components, determination and desire. Determination is the “should” kind of willpower from the first model; desire is the “want” kind of willpower from the second. To accomplish anything, you’ll need some of both.

Unless you manage to find your True Will, desire alone will not be enough. No matter how much you enjoy doing your homework (for instance), desire will eventually falter, and you’ll need determination to get you through that moment of weakness. When I was in undergrad, I absolutely loved studying computer science, but there were times when an assignment was due in twelve hours and I had barely slept in three days. I wanted nothing more than to rest, but I knew I had to keep working in order to get the assignment in on time. It was then that I called on determination.

Determination alone is never enough either. You always need some amount of desire. When I was burnt out, I spent all my determination trying to force myself to work, and I still wasn’t able to complete my assignments. It wasn’t that my determination had decreased since undergrad; it was my desire that had vanished.

The second model insisted that determination doesn’t actually exist as a separate thing from desire. But I don’t believe this; the two things feel subjectively different to me. Here’s the best explanation I can give of that difference: determination is like a temporary power boost that strengthens one of your desires (like the desire to hand in your homework on time) so that it can overpower your other impulses (like the desire to get some sleep).

Determination is like rocket fuel; you only have so much of it. Eventually you’ll run out, and then the desire that is naturally strongest will win. Determination interacts with desire the same way that rocket fuel interacts with the space shuttle’s weight. The heavier your space shuttle (i.e. the less desire you have), the more rocket fuel (i.e. determination) it will take to get it off the ground. You can use techniques like meditation to increase your supply of determination. But no amount of determination can make up for a complete lack of desire, just like no amount of rocket fuel can lift an infinitely heavy spaceship.

This is the problem I encountered when I transitioned from undergrad to grad school and found myself burnt out. Recall the contrast I made at the beginning of this essay: that if “willpower wins over desire”, you can either be lucky or unlucky. If you’re lucky, you’ll enter a state of flow, and if you’re unlucky, you won’t. I think that whether you get “lucky” depends on the strength of your desire. If your desire is strong already, then it will just take a little bit of determination to boost it over the threshold. If your spaceship is really light, then it won’t take much rocket fuel to get it into outer space, where you no longer need to burn fuel to keep it from falling back to earth. But if your desire is really weak, then you’ll need to keep expending determination to keep yourself working, and you’ll never make it to a state of flow. This is what happens when your rocket is too heavy to make it into outer space. By burning all your rocket fuel, you can keep the thing aloft for a little while, but soon enough it will crash back down to earth.
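The rocket-fuel analogy can be made concrete with a toy simulation. To be clear, this is purely my own illustrative sketch: the threshold, the doubling rule, and the `simulate` function are all invented for the sake of the example, not drawn from the models above or from any research. The one dynamic it tries to capture is the asymmetry: a desire already near the flow threshold needs only a little determination, while a desire of zero can never be rescued by any finite fuel supply.

```python
# Toy simulation of the rocket-fuel analogy. Every quantity here
# (the threshold, the amplification rule, the starting values) is
# invented for illustration only.

def simulate(desire, determination, threshold=10):
    """Work step by step. If desire alone clears the threshold, we
    enter flow and stop burning fuel. Otherwise each step of work
    burns one unit of determination to amplify the current desire;
    if the fuel runs out before flow is reached, the rocket crashes."""
    step = 0
    while True:
        if desire >= threshold:
            return ("flow", step)       # escaped gravity: no more fuel needed
        if determination <= 0:
            return ("crashed", step)    # fuel gone, desire still too weak
        determination -= 1              # burn one unit of fuel...
        desire *= 2                     # ...to amplify the existing desire
        step += 1

# Strong desire needs only a nudge to reach flow:
print(simulate(desire=9, determination=3))    # ('flow', 1)
# Weak desire burns all its fuel and crashes:
print(simulate(desire=2, determination=2))    # ('crashed', 2)
# Zero desire never flows, no matter how much fuel you burn
# (the "infinitely heavy spaceship"):
print(simulate(desire=0, determination=100))  # ('crashed', 100)
```

Note the design choice of amplifying desire rather than adding to it: an additive boost would let enough determination compensate for zero desire, which is exactly what the model denies.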

So this concludes the model I’m currently using. Both determination and desire are necessary for accomplishing your goals. If you’re having trouble finding the willpower to do something, make sure you check which of these two components is missing. Because the first model of willpower is so prevalent, we often assume that the problem is a lack of determination. But I think that lack of desire is far more common, and can often be easier to fix.


On the “Obvious”ness of Certain Ideas

Today I was reading George Lakoff’s Women, Fire, and Dangerous Things. (Yeah, I know, I’m way behind on the required reading for writing this blog.) Anyway, I came upon a passage (pp. 58-59) that discusses the following questions: “Does language make use of general cognitive mechanisms? Or is it something separate and independent, using only mechanisms of its own?” As Lakoff observes, these are questions of great importance to the study of language and cognition. If language uses general cognitive mechanisms, then we can use our understanding of language to update our beliefs about how the mind works in general, and vice versa. But if language is something separate, then we need to keep our theories separate as well.

This is interesting, of course, but what I found especially fascinating was the following paragraph:

This issue is a profound one, because it is by no means obvious that the language makes use of our general cognitive apparatus. In fact, the most widely accepted views of language within both linguistics and the philosophy of language make the opposite assumption: that language is a separate “modular” system independent of the rest of cognition. The independence of grammar from the rest of cognition is perhaps the most fundamental assumption on which Noam Chomsky’s theory of language rests.

The reason I found this passage fascinating is that Lakoff says “it is by no means obvious”, but Lakoff’s conclusion that “language [does make] use of our general cognitive apparatus” is completely obvious to me. Whenever this sort of thing happens, I like to take a step back and ask “What are my beliefs and assumptions that make this obvious to me?” and “What are the beliefs and assumptions that make this non-obvious to other people?” (It’s also useful to ask these questions in the reverse direction: “What are my beliefs and assumptions that make their position non-obvious to me, and what are their beliefs/assumptions that make it obvious to them?”)

In this case, my assumptions are mostly based on my understanding of evolution, in particular the fact that evolution usually builds on or modifies pre-existing mechanisms rather than inventing entirely new ones. For this reason, I expect language to have evolved as an outcropping of pre-existing cognitive mechanisms; it seems highly unlikely to me that we would have just developed a whole separate grammar module with no relation to already-existing structures.

This allows me to identify a whole class of assumptions which influence my understanding of how the mind works. Specifically, I make use of my knowledge of evolution when thinking about cognitive science. Presumably, other people have different intuitions either because they just don’t use their knowledge of evolution when reasoning about cognitive science, or because they have a different understanding of how evolution works.

Now let us ask the questions in reverse. As Lakoff repeatedly observes, the cognitive scientists, linguists, and philosophers who support the other position (that language is a completely separate module) have been influenced by traditional dualist and computational understandings of the mind.

This entire discussion reinforces two major themes of my philosophy: the fact that our current models are wrong, and the importance of studying many different fields. For the former, when I see how obvious previous generations’ beliefs were to them, it gives me the perspective to realize that future generations will probably feel the same way about my own beliefs. After all, I know very little about evolution, partially because I’m not an evolution-ologist, and partially because most of that field’s models must also be incorrect or incomplete.

Regarding the study of many different fields, traditional ideas from linguistics seem quite implausible to me because I’ve read a couple of books about evolution. This suggests that the more fields I study, the more diverse perspectives I will be able to incorporate when building my models. I’m sure that my current models seem preposterous to specialists in various other fields.


The Doctrine of Original Irrationality

The Birth of Individualism

Recall how the medieval Catholic worldview transitioned to the worldview of the Renaissance.

In the middle ages, Catholicism dominated Europe, and one of its most central teachings was the doctrine of Original Sin. The church maintained that human nature was fallen, or fundamentally corrupt and inclined towards sin. Only God and Jesus could provide salvation from burning eternally in the fires of hell, and only the Catholic church could provide access to God. Thus, the only way to find salvation was to submit oneself to the teachings and the guidance of the church. Since human nature had been tainted, it followed that one could not trust one’s own bodily urges, one’s emotions and intuitions; these were corrupt, and would only lead one into sin.

This mentality, this absolute trust in the authority of the church and distrust in the urges of the body, persisted until the Catholic church grew so corrupt that people could no longer believe it was the source of the word of God. But when the church’s authority crumbled, where could people look for a sense of direction? With no external guidance from the church, there was nowhere to turn but inwards, following one’s own reasoning, and trusting one’s own intuition. Thus, we see individualism bud and begin to flourish in Europe.

First we get the Protestant reformation, which turned away from the complications of the Catholic church, returning directly to scripture. Since external authorities could no longer be trusted to interpret the word of God, it was up to each man to understand the scriptures for himself.

The Renaissance and the Protestant reformation led into the age of Enlightenment, where faith in reason prevailed. It was during this period that science as we know it now was born. For in medieval times, men could only trust truths handed down by the authority of the Catholic church. After the Protestant reformation, men could interpret the Bible themselves, but were still forced to seek truth within its pages. But now, in the age of Enlightenment, men learned to trust their own reason as a source of truth.

Movements like Romanticism, though they rejected the Enlightenment’s rationality, were an even further departure from the medieval mentality. Romanticism encouraged people to trust their emotions and intuitions over reason; these were the very parts of man once thought to be most deeply corrupted by original sin. Thus Romanticism marked even further progress toward individualism, encouraging not a trust in the universal principles of logic and mathematics, but instead a trust in one’s own deepest, most personal experience, that which is most subjective.

Empiricism and Individualism

Out of the Enlightenment grew science as we know it today, and the key to this science was its emphasis on empiricism. The philosophy of empiricism cried, “Do not trust the teachings of authority. Do not even trust your own intuitions and reasoning. Go out into the world and observe!” Logic, reason, and mathematics may be essential tools for constructing new hypotheses, but no hypothesis can be accepted until it is verified through experiment. As Feynman said, “It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is – if it disagrees with experiment it is wrong. That is all there is to it.”

Those who believe in the grand cause of Science-with-a-capital-‘S’ are often intensely individualistic. According to their worldview, Science and Reason have triumphed over old authorities. The individual is no longer subject to the whims of cruel monarchs; neither is he force-fed knowledge by the Church. Instead, individuals rule themselves by democracy, and seek truth for themselves using science. Both of these endeavors (democracy and science) rely on a certain faith in human reason. Without reason, how could people ever hope to make good decisions in elections? Without reason, how could scientists trust their hypotheses or their ability to interpret their empirical findings?

But now, despite the intimate connection between science and individualism, a new sort of orthodoxy has begun to crystallize within the scientific establishment.

The Doctrine of Original Irrationality

I will repeat: in the scientific worldview I have described, it is up to the individual to find truth for himself, guided by his faculties of reason (which gives him the ability to formulate hypotheses) and his senses (which allow him to make empirical observations). This is an empowering worldview, one which teaches the individual to trust his mind’s thoughts and his body’s perceptions.

But Mitchell Porter observes that this trust has eroded:

The fact that these materialist or computationalist philosophies of consciousness, which are supposedly empirically motivated, end up requiring us to interpret utterly elementary and ubiquitous aspects of subjective experiences (like time, like colors) as illusions, tells you how anti-empirical they actually are. Empiricism originally means based in experience. It is mildly ironic that the scientific philosophy, which started out with an emphasis on seeing everything for yourself, has given rise to this new theoretical outlook which requires the believing practitioner to denigrate the reality of their own perceptions in favor of a theoretical apriori, an apriori that is loosely justified by elaborate reasonings and highly indirect evidence.

I blame this erosion of trust on what I will call the doctrine of Original Irrationality. This set of beliefs, which saturates the rationalist community, emphasizes human fallibility. Our senses are woefully imprecise and our minds are afflicted by a plague of cognitive biases. The research of Kahneman and Tversky has shown us that man is fundamentally irrational; cognitive biases are inherent to our nature. We must always strive to overcome these biases, but we will never be completely free of their influence. And thus we can no longer trust our own minds.

But if we can’t trust ourselves, then who can we trust? We are now forced to look to an external authority for help in our quest for truth, just as we were in the middle ages. But this time, the authority is Science.

Scientific Orthodoxy

At this point, it becomes necessary to distinguish between two kinds of science. Up until now, I have only discussed science as practiced by the individual, science as a personal endeavor. But there is also Science as practiced by the scientific system as a whole. This system is concerned with maintaining an established body of scientific knowledge. According to the doctrine of Original Irrationality, it is this knowledge, and not our own reason and experiences, which we must trust. Even an individual scientist cannot find the truth on his own, but must rely on the scientific institution as a truth-discovering engine.

Consider this quote from Jonathan Haidt:

Anyone who values truth should stop worshipping reason. We all need to take a cold hard look at the evidence and see reasoning for what it is. … [M]ost of the bizarre and depressing research findings [about cognitive biases] make perfect sense once you see reasoning as having evolved not to help us find the truth but to help us engage in arguments, persuasion, and manipulation in the context of discussions with other people.

I’m not saying we should all stop reasoning and go with our gut feelings. Gut feelings are sometimes better guides than reasoning for making consumer choices and interpersonal judgments, but they are often disastrous as a basis for public policy, science, and law. Rather, what I’m saying is that we must be wary of any individual’s ability to reason. We should see each individual as being limited, like a neuron. A neuron is really good at one thing: summing up the stimulation coming into its dendrites to ‘decide’ whether to fire a pulse along its axon. A neuron by itself isn’t very smart. But if you put neurons together in the right way you get a brain; you get an emergent system that is much smarter and more flexible than a single neuron.

In the same way, each individual reasoner is really good at one thing: finding evidence to support the position he or she already holds, usually for intuitive reasons. We should not expect individuals to produce good, open-minded, truth-seeking reasoning, particularly when self-interest or reputational concerns are in play. But if you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system. This is why it’s so important to have intellectual and ideological diversity within any group or institution whose goal is to find truth.

This quote is representative of the position which I will call scientific orthodoxy. The individual cannot find truth on his own. Instead, he must trust the system to find it for him.

The Fallibility of the Scientific System

You may be wondering, what exactly am I arguing for in this post? Am I advocating a return to the naivety of the past, to a blind faith in reason? Certainly not. That faith was shattered by the research of Kahneman and Tversky, and rightly so; we cannot return to that worldview now. So let me be clear that I think it’s essential for us to recognize the limits of our own minds and senses. To do otherwise is pure folly.

But we should not replace our blind faith in reason with a blind faith in the scientific establishment. Human beings are fallible, surely, but the system is as well. The system may do better on average, but that does not mean that when an individual disagrees with the system, the individual must be wrong and the system must be right. After all, the system is made up of individuals. The system’s knowledge grows out of the contributions of individuals; its biggest growth spurts come from individuals who doubt the received scientific wisdom and dare to shift the paradigm.

I am not saying anything new here; we all know that the scientific system is fallible; we all know that our current models are most likely incomplete and incorrect. And yet many of us act as if anything written in a paper is Absolute Scientific Fact. There is a divide between our conscious recognition of the system’s fallibility and our intuitive trust in its authority. For instance, we realize that much of psychology research just surveys a few hundred undergraduates, assumes them to be representative of the human race, and proceeds to draw general conclusions about the workings of the human mind. Yet who among us takes this into account when updating his beliefs? I confess that I am guilty here. I know that when I am reading a blog post online, and the writer states some fact accompanied by a scientific paper supporting it, I almost never look at the paper. Instead, the presence of a citation is enough to fill me with trust in the truth of the fact. The fact has only to hold up its citation, and the guards of my knowledge base will welcome it in. How much false certainty have I gained in this manner?

So I exhort you: be skeptical of every scientific finding you see, especially when you only read a fact and do not consult the original paper. Do not trust blindly in the wisdom of the scientific establishment. When your intuition disagrees with the received wisdom, weigh both carefully instead of automatically discarding your intuition in favor of the teachings of the authority.

It will not be easy to resist the pull of scientific orthodoxy. As humans, we have a propensity to seek out authorities and then drink in their wisdom unquestioningly. This propensity grows even stronger when we can’t trust our own reason or intuition, as many of us will become desperate for a source of truth, for anything to close up the sucking void of uncertainty.

Let us not give in to our fears and temptations. Let us resist our tendencies towards blind trust in authorities. Let us remind ourselves of the fallibility of the scientific system. Let us not fall victim to the doctrine of Original Irrationality.


Thoughts on Creativity

(This blog post came out of a conversation I had with Scott Alexander, so I’ll be quoting him extensively.)

Jaron Lanier, Hipsters, and Hesse

I recently finished reading Jaron Lanier’s book You Are Not a Gadget: A Manifesto. There’s a lot I could say about this book, but I’ll focus on one particular point. Lanier claims that today’s youth, who are members of a digital culture, lack the creativity of previous generations. Lanier notes that most of the “artwork” produced on the internet is derivative. Popular YouTube videos are not creative new short films made by aspiring directors, but instead are often mashups of pre-existing artwork created by external, commercial sources like the film industry. Lanier claims that culture/artwork froze sometime around the beginning of the internet era, and that today’s young people have no defining style of music or fashion. They just rearrange existing pieces made by actual creative geniuses, and call these mashups works of art.

Lanier wants to see young people transform and revitalize the world of art and thought, instead of simply recombining the aesthetics and ideals of previous generations. The internet has transformed society and reshaped the world completely. Why hasn’t it given rise to entirely new forms of artwork? Modern technology has not inspired completely new artistic media – in particular, Lanier is surprised that there hasn’t been much focus on creating immersive, interactive virtual worlds as a form of art. Presumably there are many other possible artistic media that we haven’t dreamed up yet, but which would be quite popular if invented. According to Lanier, digital culture discourages the kind of creativity that would lead to such advances.

I’m not sure I agree with Lanier that modern culture in general lacks creativity, but his criticisms definitely apply to hipsters. Hipsters look on modern culture in despair, and fleeing from it, they retreat into a world of nostalgia. Because hipsters think that modern, commercialized culture lacks anything worthy of aesthetic appreciation, they reappropriate the aesthetics of past ages. But they do so ironically and haphazardly; each of their outfits is a hilariously mismatched collection of miscellaneous past items, whose conjunction succeeds at looking ridiculous. These clothing-mashups are analogous to the YouTube mashups that Lanier so disdains. (I am not free, by the way, from the barbs of this criticism. I, too, despair of the commercialized, super-stimulated, plasticized culture we live in, and look back with longing on bygone eras.)

I suspect Lanier would say that hipsters are channeling their objections to modern culture in the wrong direction. Hipsters say modern culture is lacking in aesthetic value, and so they look to the past. Instead, they should look to the future. If hipsters are dissatisfied with modern culture, then they should create an entirely new culture that better fits their sense of aesthetics. They could make new styles of clothing instead of relying on creations from the past. This would revitalize culture much more than trying to overlay outmoded aesthetics onto it. What we need is something new and fresh, something creative, something people will get excited about. But I suspect that hipsters’ irony and cynicism prevents them from such a sincere endeavor. Perhaps they doubt that such a thing can be accomplished.

All of this reminds me a bit of Hesse’s novel The Glass Bead Game. The future society he describes is a culture that has grown old and stagnated. The members of this society believe that they can’t create art of the same vitality as past ages, and so they abstain from artistic pursuits altogether. Instead, they only focus on analyzing and reanalyzing what’s already been created. Our current society may also head in this direction, if we lose faith in the human ability to create, and scorn any sincere attempt at creativity and originality.

Creativity: Does it exist?

But does creativity even exist? Is originality possible? Or have we already completed an exhaustive search of artspace? In our conversation, Scott suggested that perhaps “there are no untapped artistic primitives, and the new forms of art that can be invented are all just recombinations of existing ones, in the same way there are lots of words that don’t exist but probably many fewer completely novel human-conveniently-producible phonemes that don’t”.

I suspect that this is a natural question for readers of LessWrong. In particular, there seems to be a sense, among such intellectuals, that there’s no such thing as originality, and all attempts at it are naive. For instance, there are countless LessWrong posts/comments revealing that, although people might think they are dressing to “express themselves uniquely”, they are actually dressing in predictable manners to signal allegiance to various groups. I imagine there’s a large group on LessWrong that would claim all artistic endeavors are just attempts at signaling.

And there’s a certain perspective on creativity that I adhered to for a long time: nobody comes up with truly new ideas; we’re all just distilleries, sitting atop a confluence of influences, mixing old ideas together into new ones, with maybe a bit of Gaussian noise thrown in. Nietzsche, for example, seems profoundly new and revolutionary, but one can find pre-echoes of his ideas in Dostoevsky’s Crime and Punishment, and even as far back as William Blake’s Marriage of Heaven and Hell. There are intellectual tides, and the brilliant geniuses are the ones riding at the foremost edge of those tides, who advance them slightly further. But their ideas aren’t as new and creative as we believe them to be.

Similarly, I’ve read a lot of Joseph Campbell, and one can interpret his writings as saying that “all originality in stories is a lie”. People may write things that seem creative, but underlying all their works are the same few basic archetypes. [1]

But maybe recombination is what creativity is. As Scott points out, saying “there is no creativity, just novel recombinations of artistic primitives and existing ideas” might be like saying “there’s not really any rain, just drops of water falling from clouds”. The problem is that LWish people read things similar to the above and think “creativity is impossible”, and perhaps give up being creative or original after being exposed to this ideaplex. As a result, they produce things that are less creative! I’m speaking from experience, here, because I myself only recently escaped from this trap, and am now trying to re-cultivate my own creativity.

What are the artistic primitives?

If creativity is just the novel recombination of artistic primitives, it’s worth investigating these primitives in a bit more detail. What are the primitives, exactly?

Let’s look at primitives for writing. One possible answer was given above – the primitives of stories are archetypal characters and narrative segments, put together in a new arrangement.

For another possible answer, we look to a book that I’m reading, More than Cool Reason by George Lakoff and Mark Turner. It’s an analysis of metaphor as used in famous poetry. If someone had deliberately set out to write a book that specifically appealed to me, they could hardly do a better job than this. It’s a whole book where they take poems I love, and identify the metaphors people rely on for processing these poems cognitively! And yet… I find myself disagreeing with some things they’ve said. (Note: I’m about 40 pages into this ~200-page book, so it may be premature for me to be discussing it/disagreeing with its contents.)

One of Lakoff and Turner’s main observations is that the same metaphors appear over and over again. Poets extend and combine these metaphors in new and beautiful ways, but the same metaphors continually recur: life as a flame, or life as a year or a day, or time as a thief or devourer, for instance.

Here’s a quote from the book:

At this point, we have seen life and death understood metaphorically in terms of many different concepts – journeys, plays, days, fluid, plants, sleep, and so on. We have seen many complicated mappings of knowledge, images, reasoning patterns, properties, and relations. This diversity may be overwhelming and suggest that anything can be understood metaphorically in terms of anything else, or that all of our concepts are understood metaphorically in terms of concepts from different domains.

But that is not the case. Although human imagination is strong, empowering us to make and understand even bizarre connections, there are relatively few basic metaphors for life and death that abide as part of our culture. And there are tight constraints on how their mappings work. For example, PEOPLE ARE PLANTS gives us a basis for personifying death as something associated with plants [such as a reaper], but not just anything associated with plants will do. The structure of the metaphor exerts strong pressure against any attempt to personify death as an irrigation worker or as the baker who bakes wheat bran into muffins. There are reasons, which we will explore in chapter two, why death the reaper seems apt but death the baker does not.

What is remarkable in what we have seen so far is not how many ways we have of conceiving of life and death, but how few. Where one might expect hundreds of ways of making sense of our most fundamental mysteries, the number of basic metaphorical conceptions of life and death turns out to be very small. Though these can be combined and elaborated in novel ways and expressed poetically in an infinity of ways, that infinity is fashioned from the same small set of basic metaphors.

This tells us something important about the nature of creativity. Poets must make the most of the linguistic and conceptual resources they are given. Basic metaphors are part of those conceptual resources, part of the way members of our culture make sense of the world. Poets may compose or elaborate or express them in new ways, but they still use the same basic conceptual resources available to us all. If they did not, we would not understand them.

Both Scott and I feel very strongly that in these paragraphs, Lakoff and Turner underestimate the potential for new metaphors to be created. Just because we typically conceive of death as a reaper, or the driver of a carriage, doesn’t mean we can’t personify it as a balloon-seller or a fisherman’s wife instead.

Scott points out that “of course a casual reference like ‘the Reaper came for Jack’ will have to use commonly understood terminology; if someone said ‘the Balloon-Seller came for Jack’, that would make no sense. But if we wanted to, we could establish a metaphor for Death as a balloon-seller. Like for example we take the balloons and then float up to Heaven. It would just be something that could only fit in a novel or a longish poem that established that particular metaphor, not as a throwaway reference.”

In fact, one could argue that the most creative people are those who can establish completely new metaphors and analogies. And I’m not just talking about artistic creativity – I’m also talking about scientific creativity. A scientist can recognize the metaphors underlying his own worldview, and thereby become more aware of the paradigms constraining his thought processes. He can then explore alternatives to the traditional perspective. For me, this is one of the biggest appeals of reading Lakoff’s work.

I think that part of what Lakoff and Turner are saying is that we can’t make metaphors which conflict with how we already conceptualize something. We see death as destruction, dissolution, decay. But baking is the process of building something up. Baking is a subclass of making, and in Metaphors We Live By, Lakoff and Johnson observe that making is often metaphorically identified with birth. The creation is born from the materials. If baking is already identified with birth, that makes it much harder to identify it with death. So even though baking, like death, involves turning a living organism into a sort of food, it’s hard to conceive of Death as the Grim Baker. People who wanted to speak of death transforming something living into food would be more likely to pick “Death the Butcher” or “Death the Reaper”, since both of those focus on the dismemberment of the living, instead of the transformation of the body into sustenance. (Many apologies to Lakoff and Turner if this is what they’re saying in Chapter 2, which I haven’t read yet.)

So yes, it seems there are constraints on the metaphors we can create. But that doesn’t mean we can’t make any new metaphors! A truly creative author could come up with new metaphors that even find their way into the culture. And obviously, new metaphors arise all the time as new technology arises. For instance, the preface of Julian Jaynes’s book contains a beautiful discussion of how, in every era, prominent technologies/advances in the hard sciences provide new metaphors/models for the mind.

How to Be Creative

So here we’ve established one possible source of true creativity: the creation of new metaphors. But how does one go about creating new metaphors? One possibility is to use some kind of randomness. For instance, the balloon-seller example from earlier exists because I, looking for a concrete example, said “We often describe Death as a reaper, but we never talk about him as a balloon-seller.” And Scott replied “But we could”, and proceeded to generate an image of death as a balloon-seller. Thus, one could randomize metaphor generation by opening a dictionary, picking a word at random, and comparing a given concept (like Death) to that word. This still requires some amount of innate creativity, but it can help with thinking outside of the brain’s normal patterns. Upon thinking of Death, for instance, you might immediately think of “reaper”, and really weird metaphors like “balloon-seller” won’t even occur to you on your own.
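Since I can’t resist putting on my programmer hat: the dictionary-randomization procedure above can be sketched in a few lines of Python. (The little word list here is just a stand-in for a real dictionary; in practice you’d load a large word list from a file.)

```python
import random

# Stand-in for a real dictionary; in practice, read a large word list
# (e.g. a system dictionary file) instead of this tiny hand-picked sample.
WORDS = ["balloon-seller", "lighthouse", "locksmith", "tide",
         "archivist", "kiln", "compost", "migration"]

def random_metaphor(concept, rng=random):
    """Pair a concept with a randomly chosen word, as a prompt for
    generating an unexpected metaphor to elaborate on."""
    vehicle = rng.choice(WORDS)
    return f"{concept} is a {vehicle}"

rng = random.Random(0)  # seeded for reproducibility
for _ in range(3):
    print(random_metaphor("Death", rng))
```

The program only produces the prompt, of course; the human still has to do the genuinely creative work of elaborating “Death is a lighthouse” into something coherent.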

Another possibility is to just start paying attention to what metaphors you use in your daily conversations. If you start to write a sentence, and find that it contains a cliche, then reword it! I started doing this a few months ago. Eventually, my brain figured out what I was up to, and slightly reoptimized its sentence search procedure for weird new metaphors. Now they come far more naturally. (Or, to be less machine-learning-y about it, I practiced a skill, and then got better at it!)

Obviously, new metaphors are only one possible form of creativity, and it’s worth analyzing others. One method involves consciously identifying the dimensions of a certain artspace, and varying them. Scott gives the example that if a certain style of art can be described as “representational, serious, pointillist, with lots of bright colors, photorealistic”, then one could create a new style by picking a new value for any of those dimensions.
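Scott’s dimension-varying method lends itself to the same kind of sketch. (The dimensions and values below are invented for illustration; a real artspace would have far more of both.)

```python
import random

# Hypothetical dimensions of a painting style, each with possible values.
DIMENSIONS = {
    "subject":   ["representational", "abstract"],
    "tone":      ["serious", "playful"],
    "technique": ["pointillist", "impasto", "flat"],
    "palette":   ["bright colors", "muted colors", "monochrome"],
    "fidelity":  ["photorealistic", "stylized"],
}

# Scott's example style, expressed as a point in this space.
BASE_STYLE = {
    "subject": "representational", "tone": "serious",
    "technique": "pointillist", "palette": "bright colors",
    "fidelity": "photorealistic",
}

def vary_style(style, rng=random):
    """Copy a style and pick a new value along one randomly chosen
    dimension, leaving all other dimensions unchanged."""
    new_style = dict(style)
    dim = rng.choice(list(DIMENSIONS))
    alternatives = [v for v in DIMENSIONS[dim] if v != style[dim]]
    new_style[dim] = rng.choice(alternatives)
    return new_style

rng = random.Random(0)
print(vary_style(BASE_STYLE, rng))
```

Again, the interesting step – deciding which dimensions exist at all – is the part the human has to supply; the code just walks the space once you’ve defined it.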

Furthermore, at one point in our conversation, Scott objected to my notion of true creativity as “generating new metaphors”, because I was still relying on the preexisting abstract categories imposed by language. Generating new metaphors is still just a matter of rearranging primitives. He then described his ideas about true creativity, which I’ll quote here:

“My idea of an artistic revolution would be…well, imagine some Europeans who had never seen Asian architecture before and just had a lot of European architecture inventing something like Asian architecture. It looks completely different, it’s just as beautiful, but it’s beautiful in a totally different way. I agree that they’re both made out of things like walls and roofs and stuff, but my brain classifies them as two totally different categories, in a way that ‘Nara period Japanese architecture’ and ‘Kamakura style Japanese architecture’ aren’t. And it bothers me that even architecture in fantasy worlds seems more similar to European architecture than Asian architecture is, because that suggests either we’ve run out of architecturespace or everyone’s just incredibly boring.”

But how does one go about starting such an artistic revolution? This would seem like an impossible-to-answer rhetorical question, but Scott has a brilliant suggestion:

“Also, I’ve found that if you’re optimizing for something other than creativity, you usually end up much more creative than if you’re optimizing for creativity. Like my constructed societies were super boring elf clones until I thought ‘You know what, screw making something beautiful, how about I make a perfect society I would actually want to live in’ and then it got super weird. Actually, the same might include Asian architecture – they were optimizing for different materials and a different tech level and trying to solve the ‘make buildings that don’t fall down’ problem without optimizing for difference-from-Europe. Also, calligraphy apparently is kind of what happens naturally if you use a pen with the shape of a quill feather. I never realized that before. And Gothic lettering is what happens naturally if you try to cram as much text into as little space as possible because it’s the Middle Ages and you have to kill a calf for every sheet of parchment you want.”

And this fits with Lanier’s idea that new technology should inspire new art forms, because new technology provides vastly different starting conditions.

I am interested to hear everyone else’s suggestions for artistic primitives, and for methods of “rational creativity”.

[1] I’m pretty sure Joseph Campbell would strongly object to this interpretation of his work, by the way. He very much emphasized the role of the individual artist in the creation of myths, and he directed his writings at artists, hoping that they would help someone to create myths that are suited to the modern age. Interestingly, though, I’ve heard many people express a frustration with Joseph Campbell, since his works have inspired many unoriginal stories that simply copy the hero’s journey in a formulaic fashion.


The Aesthetics of Ideas, Part 1.5: Musings

So, I realize it’s been ages since I wrote in this blog. Sorry about that. I’ve been having trouble organizing my thoughts into a coherent series of posts, in part because I’m constantly exposed to a deluge of new information on the subject. For instance, just a week ago, I learned that there is an entire field called framing which studies how the presentation of ideas influences the way people will perceive them. Anyway, I assure you that I am working on the series, if slowly. In the meantime, I’ll share with you some musings.

I said, in the intro post, that I was going to present my theory about how to find meaning in any worldview. My plan was to discuss a bunch of research from various fields (cognitive science, machine learning, linguistics, etc.), stitch that research into a (not-especially-original) theory about symbol grounding, and then explain how that theory supported my conclusion: that it’s possible (and often easy) to reshape how you feel about abstract ideas.

For all the theory behind it, my technique for making an idea beautiful was actually almost childishly simple: to make an idea beautiful, you just have to create a description of that idea where you portray it as beautiful. I tried to give some examples of this in my previous post (and really, the entire post was an exercise in the aesthetics of ideas), in hopes of convincing you that it could be done.

There were two reactions I was expecting these ideas to receive. The first reaction was “this is impossible!” I expected people to say “My emotional reaction to the concept of reductionism is very deeply ingrained; I can’t just change it by reading a paragraph online which describes it as beautiful.” Or I expected people to go even further and claim something like “Ugliness is an inherent part of the concept of reductionism; the concept can’t be defined or described in a way that’s aesthetically appealing, while still matching up to what people typically call ‘reductionism’.”

The second reaction I was expecting was “this is obvious, but why would anyone want to do it?” Again, the technique I’m proposing is quite simple. If you want to make an abstract concept beautiful, write a paragraph about that concept where you highlight the positive aspects. When I put it this way, it doesn’t sound like a profound philosophical statement; it sounds like basic writing advice that you could find anywhere on the internet. Or worse, it sounds like a strategy for marketing. So you want to sell product X? Well, just write a commercial where you highlight all the positive aspects of X! Make sure not to mention any of its downsides. So it seems like what I’m proposing here is marketing tactics for abstract concepts.

This is, of course, exactly the sort of thing that people try to keep out of science. Scientific writing strives for objectivity, for descriptions unburdened by weighty connotations. A scientist or truth-seeker must be dispassionate and unsentimental. Emotions only get in the way of seeking the truth, by biasing us towards certain ideas that appeal to us for reasons completely separate from the ideas’ truthfulness.

So why am I advocating this technique? Well, first, I am not advocating using it on others, to convince them of ideas. And I’m not advocating it as a method of forming beliefs. I am not telling people to abandon their quests for truth, or to accept philosophical positions just because they’re pretty. I’m saying that once you do accept a philosophical position, you can make it pretty. My technique is to be used after you’ve come to a conclusion.

It’s still a bit weird: after all, people don’t usually use marketing tactics on themselves; on the contrary, people try to make themselves less susceptible to such influences. Everyone knows that the people who fall for marketing tactics are suckers, and the more aware you are of how marketers are trying to manipulate your mind, the better you can resist their attempts to do so. Yet here I am, telling you not to revile something akin to marketing tactics, but to embrace it, and to use it on yourself, to convince yourself that certain philosophical positions are more aesthetically pleasing.

Can this possibly work? Won’t your conscious awareness of what you’re doing limit the effectiveness of the technique? If you know you’re trying to manipulate your mind, won’t that shield you from the manipulation? Based on my own experiences, I don’t think so. I’ve been using my technique on myself for over a year now and it hasn’t yet lost its effectiveness.

But I’ve come to a realization about the aesthetics of ideas. My “theory” isn’t so much a theory as a lifestyle. Originally, in this series of posts, I had planned to focus on the details of the theory. I was going to describe my understanding of symbol grounding – an amalgam of ideas gathered from cognitive science, machine learning, and comparative mythology. All of this theory would justify my claim that it’s possible (and even easy) to alter your own aesthetic associations for an idea. I still think this theory is worth discussing, if only because it will be interesting to write out a full account of my current understanding of symbol grounding. And the theory may indeed help to justify the technique.

But the theory will do no good unless you embrace the lifestyle. You must allow a certain flexibility in your sense of aesthetics. This shouldn’t be too hard for truth-seekers to manage – after all, the scientifically minded are accustomed to a certain flexibility in beliefs. No belief is held so tightly that it can’t be relinquished in light of new evidence. A similar flexibility is required for your sense of aesthetics. You must not hold any philosophical aesthetic preference so tightly that it can’t be overturned by a powerful new description or experience. (Note that I am limiting this to abstract ideas. I don’t expect people’s aesthetic preferences on the smell of vomit or the death of a loved one to change.)

So this is what I am advocating. I am suggesting that you make your aesthetic preferences more flexible, that you open your mind to the simultaneous beauty and ugliness of any idea. Then, when the descriptions arrive, your sense of aesthetics can make use of them.

That’s all my musings for now, but I hope to contribute a real post to this series within the next week. I will write even though I don’t know everything yet, and even though I’m not at all certain of how to organize these posts. I can always amend this series in the future, when I know more. I may also write some posts I’ve been working on but which are not part of this series. In either case, you can look forward to some more writing soon. Thanks very much to all my readers for your patience.


The Aesthetics of Ideas, Part 1: Introduction

(This is the first in a series of posts I am writing about the aesthetics of ideas. In it, I will put forth my theory about the emotional grounding of symbols, and describe a practical method for associating beliefs with emotions.)

I Will Show You Fear in a Handful of Atoms

When I was young, it seemed that life was so wonderful,
A miracle, oh it was beautiful, magical.
And all the birds in the trees, well they’d be singing so happily,
Joyfully, playfully watching me.
But then they sent me away to teach me how to be sensible,
Logical, responsible, practical.
And they showed me a world where I could be so dependable,
Clinical, intellectual, cynical.

There are times when all the world’s asleep,
The questions run too deep
For such a simple man.
Won’t you please, please tell me what we’ve learned
I know it sounds absurd
But please tell me who I am.
 — “The Logical Song”, by Supertramp

Science has brought us many incredible things, but it’s also shattered our beliefs in God and the supernatural, which provided primitive man with beauty and with comfort in the face of the unknown. In particular, primitive worldviews infused the world with meaning and life.

Before we go any further, let us ask: what is meaning, exactly? (Here, I’m using the word in the existential sense rather than the semantic one, though this series of blog posts will try to identify the relationship between the two.) I define existential meaning as a link between external and internal experience. Meaning is a way of relating the vast and inhuman movements of the cosmos to our own individual struggles in life.

When primitive beliefs and myths were interpreted literally, they gave the universe literal meaning: the sun did not cross the sky indifferent to humanity; it was created by the same gods who carefully crafted people. Perhaps it was placed there solely to warm the earth. So when you’re raised all your life in this belief system that’s saturated with meaning, and suddenly you discover that the world actually works a different way (the sun is a flaming ball of gas flying through space, utterly indifferent to the earth), then all of a sudden the universe seems to lack meaning; vitality drains from the landscape and all beauty crumbles into dust. Note that here I mean “lack of meaning” in a very concrete way, as per my definition above. When the sun was placed in the sky by a human-friendly deity (or when the sun was an anthropomorphic, human-friendly deity itself), it was obvious how it related to the struggles of mankind. But when the sun is just a giant nuclear furnace that doesn’t have a mind with which to care about humanity, its relation to humanity becomes a lot less clear and objective. Sure, the sun is necessary for us to survive, because it casts its light on our planet – but it doesn’t do it on purpose. It doesn’t care about us. It’s not an agent.

Thus, when moving from this primitive belief system to a modern scientific one, a feeling of disorienting dissonance arises between one’s perceptions of the sun as friendly and anthropomorphic, and the knowledge that such beliefs do not reflect reality. The mind’s way of perceiving the world emotionally (e.g. looking at the sun and feeling loved) must be retrained to accommodate the new worldview; these emotions no longer correspond with the facts, and so must be discarded.

These days, we often think of primitive, animistic beliefs as childish. After all, children are likely to personify inanimate objects, and to attribute events in nature to an anthropomorphic cause. The fact that children naturally gravitate towards these primitive-style worldviews suggests that such worldviews are more in line with how our minds naturally think, and that it takes some forcible redirection to push them into a different way of thinking, such as the logical, analytical mentality that currently predominates. If this is true, then every childhood contains a loss of innocence; each individual life echoes the Fall from the Garden of Eden, when we eat from the tree of knowledge of modern science, and the world ceases to be magical as we learn to view it in mechanistic terms. I suspect that the disillusionment I described in the previous paragraph, when a primitive man suddenly learns that the sun is just a flaming ball of gas, happens to all children in our society at an early age, as they are taught the basic truths that modern science reveals to us. Perhaps this is when we first begin to trust reason over intuition, to separate the mind from the body; perhaps this is when we move from embodied to disembodied consciousness. (Note that, presumably, a similar kind of disillusionment happens to those who move from a theist to an atheist worldview, as they grapple with a newly godless world.)

This “death of God” and loss of spiritual beauty has caused many an existential crisis. In the absence of God, humanity is on its own. No longer do we have the protection of a cosmic father figure watching us from on high. Instead, we wander, alone and abandoned in a universe that lacks any objective meaning or purpose. And the more we explore this universe, the deeper we delve into the cosmic mysteries, the more mysterious they become. Searching for the clear light of truth is like trying to find one’s way out of an infinite maze, a House-of-Leaves-like labyrinth of unanswerable questions, the chasm of the unknown only widening the more information we attain, so that we can’t even see across it to the distant shore of truth.

Primitive man was spared these existential and epistemological crises by his simple, orderly beliefs, which flowed down to him from the clear and ancient fountain of tradition. But in the modern age, we don’t have that luxury. In light of our modern scientific discoveries, it’s folly to accept these comforting beliefs as literal truths. And so we must face the harsh reality of the universe (and of modern thought) and come to terms with it. As the existentialists say, it’s up to us to find meaning in our lives; society and religion can no longer provide it for us.

Two Perspectives on Atheism and Science

For centuries now, people have been grappling with atheism and modern scientific beliefs. In what ways have different people come to terms with them? In this section, we’ll look at two different perspectives that people have taken with regard to these beliefs. In my introductory post, I claimed that worldviews were not just beliefs, but beliefs paired with emotional and perceptual experiences. According to that definition, the perspectives I am about to describe are two very different worldviews, though they contain very similar beliefs.

The first worldview is what I’ll call “hard-headed realism”. Hard-headed realists are usually materialist atheist reductionists, and they often accuse religious people of using their beliefs as a “crutch” because they “can’t handle the truth”. As suggested by their name, the hard-headed realists value “realism”, which they equate with cynicism: “The world is an unpleasant place; now suck it up and deal with it.” They think that human society needs to grow up from its childhood games of religion, stop playing pretend, and “face the facts”. This is considered unpleasant, but necessary for attaining any kind of intellectual maturity. It is worth observing that this worldview acknowledges that the beliefs set forth by religions are more aesthetically pleasing than their particular brand of materialist atheist reductionism.

And yet, when we look at the people who actually devote their lives to studying science, we often find a very different worldview. These people don’t find the knowledge revealed by science to be ugly. On the contrary, many scientists and mathematicians find an almost mystical beauty in the subjects they study. Far from stripping away unpleasant illusions and shattering comforting beliefs, scientific discovery helps to reveal the glory of nature. The world is huge and beautiful and complex. Learning how things work does not make them feel less magical to the scientist. If you tell a man who adheres to this worldview that emotions correspond to chemicals in the brain, he won’t feel disillusioned with emotions because they’re “less real” somehow. Instead, he’ll be even more impressed with the brain for working in such an interesting way; this new knowledge will deepen his appreciation of the complexity of nature. For him, this new knowledge will enrich the beauty of the world, not detract from it.

The Aesthetics of Ideas

The contrast between the two worldviews highlighted above suggests the main theme of this series of posts: abstract facts, such as the absence of God or the unattainability of truth, are not inherently beautiful or ugly. It is possible to find beauty in any belief system. There’s no need to live in a disillusioned world that lacks all feeling of magic or awe. The scientific quest for truth does not require disillusionment.

Certainly, it may be easier to find meaning in some belief systems than others. For instance, the primitive beliefs described above are perhaps inherently meaningful, and need no existential interpretation. But it’s still possible to find beauty in any belief system. It may take some work, but it’s possible. And this series of posts will attempt to explain how to do it.

But perhaps you don’t believe me yet, so let’s take a look at another example.

Case Study: Materialist Reductionism

Can materialist reductionism possibly be beautiful? I picked this example because I was once firmly convinced that it couldn’t be. Materialist reductionism seemed like the epitome of ugly ideas. It’s a belief system which separates us. We’re all isolated bits of matter floating around in space; we’re the ten thousand things instead of the one. And materialist reductionism depletes the world of life and makes people doubt that their own consciousnesses exist. How could such a worldview be beautiful? To quote David Zindell’s The Broken God on the ugliness of materialist reductionism:

The first science had resigned human beings to acting as objective observers of a mechanistic and meaningless universe. A dead universe. The human mind, according to the determinists, was merely the by-product of brain chemistry. Chemical laws, the way the elements combine and interact, were formulated as complete and immutable truths. The elements themselves were seen as indivisible lumps of matter, devoid of consciousness, untouched and unaffected by the very consciousnesses seeking to understand how living minds can be assembled from dead matter. The logical conclusion of these assumptions and conceptions was that people are like chemical robots possessing no free will. No wonder the human race, during the Holocaust Century, had fallen into insanity and despair.

But I have found that even materialist reductionism can be beautiful, because it highlights the fundamental unity of all matter, and it emphasizes that humanity is part of nature and not separated from it. Regarding the fundamental unity of all matter, the multifarious objects in the world are not as different as they might seem. I am a chunk of matter and the chair I’m sitting on is a very different chunk of matter, but we’re both built out of the same fundamental particles. You and I are just pieces of the universe, constructed of the same building blocks that were used to make all the other pieces of the universe. We humans, made of matter, participate alongside the material stars and planets in the dances of the cosmos. We are of one substance with nature; we are not the sole souled creatures, isolated from the world by our unique possession of some kind of dualistic spirit.

Here’s one of many relevant passages from Joseph Campbell, this one taken from Pathways to Bliss, though he emphasizes the physical nature of the universe rather than its material nature. However, the two are closely enough connected that I consider this worth including here, especially considering the relevance of its final sentence to my definition of meaning:

The laws of time and space and causality are within us, and anything we can see or know anywhere will involve these laws. What is the universe? Space. Out of space came a coagulation that became a nebula, and out of the nebula, millions of galaxies, and within one constellation of galaxies, a sun, with our little planet circling it. Then out of the earth came us, the eyes and the consciousness and the ears and the breathing of the earth itself. We’re earth’s children, and, since the earth itself came out of space, is it any wonder that the laws of space live in us? There’s this wonderful accord between the exterior and interior worlds.

So you see, reductionism can be either beautiful or ugly, depending on how it’s depicted. The description is what matters: it connects the abstract concept (“materialist reductionism”) to its aesthetic and emotional implications. The description gives us a way of grounding the abstract concept in our emotional experiences. This is, of course, how the aesthetics of ideas relates to symbol grounding. A description of an abstract concept relies on metaphors; the metaphors we choose will define our emotional experience of the concept.

Furthermore, observe that we can use the contrast between the two descriptions above to figure out what humans find inherently meaningful. Reductionism was ugly to me when I associated it with disquieting things like isolation and aloneness, and beautiful to me when I related it to beautiful ideas like the mystical unity of everything, and our connection with nature. By analyzing many different beautiful and ugly descriptions, we can figure out which things we find beautiful, and which things we find ugly. Then, we can use this knowledge when designing new descriptions of ideas. Equipped with these tools, we can make abstract concepts as beautiful or as ugly as we choose.


This skill is necessary for truth-seekers in the age of science. Before science, knowledge was handed down through the ages. Factual and aesthetic knowledge were intermixed; the belief systems came pre-infused with meaning. But now we are in the scientific age, and science provides us with a method of discovering truth for ourselves, individually. But (as Joseph Campbell has observed) spirituality lags behind; spiritual truths are still taught by religious authorities, still handed down from on high, and they typically rely on outmoded cosmologies. This makes them unsatisfying to the scientifically minded individual; thus, many people have rejected spirituality altogether. But what we need is not a rejection of spirituality. Instead, we need a method for spirituality akin to the one we have for science: a method of individually discovering religious or spiritual truths or meaning. This series of blog posts is an attempt to provide such a method.

Furthermore, we live in a world where new knowledge is available daily. The intrepid truth-seeker must be willing to have all his beliefs shattered, and must be willing to accept strange and unpleasant things as true (at least until that model is replaced by one with even better predictive power). These blog posts are intended as an aid to such truth-seekers.

To summarize, spirituality and meaning are not a set of beliefs, but an emotional, intuitive, and aesthetic perspective on beliefs. Two people can believe the same factual claims, yet have very different emotional interpretations of those claims. This series of blog posts is about learning to frame arbitrary abstract philosophical claims in terms of specific emotions. It’s also about stripping away the accidents of culture to reveal what humans truly find meaningful.

In the next post, we will begin to explore the technical details of my model, starting with some assumptions I am making about how the mind works.



Hello! My name is Lucidian (sometimes known as Darcey Riley). Before we get on with the content, allow me to subject you to some boring introductory material. Currently, I’m a first-year PhD student in artificial intelligence, particularly in machine learning and natural language processing. Beyond my academic endeavors, I’m fascinated by comparative mythology, cognitive science, and embodied cognition.

I’m starting this blog so that I will have a place to explore ideas as they arise, in an effort to deepen my understanding. I like to explore different worldviews and perspectives, engaging fully with each one as I encounter it. In fact, I have been accused of “taking ideas seriously”. Note that since I’m cavorting wildly through ideaspace, all thoughts written here are subject to change.

I’m indebted to all of my intellectual influences. Sometimes I think the body of Western thought is like a coral reef, where each thinker is an individual coral polyp. Writers die, but they leave skeletons of books behind them, allowing the next generation to build on top of the foundations they’ve laid. In this manner, the coral reef grows into beautiful and intricate idea-structures over time, branching into fields and subfields and warring factions of academics.

Thus it’s impossible for me to properly acknowledge all of my myriad influences, but I can at least list a few of the most important: David Zindell, Robert Anton Wilson, George Lakoff, Joseph Campbell, and the whole field of natural language processing and machine learning. I’m also indebted to my many close friends, and to various blogs and other corners of the internet. (Though coral feels like the wrong metaphor for online sources, since its rigid structure conflicts with the fluid flexibility of the internet. Maybe seaweed works here. I dunno.) Anyway, check out the blogroll on the right.

Now for the content! Since this whole blog is rooted in my desire to understand, I’ll begin by describing that desire in more detail.

A Journey towards Understanding

Some people say they’re on a quest for truth – as if truth were an object you could hold in your mind – some mystical formula, some key unlocking the secrets of reality, some objective representation, free from the vagaries (and vagueries) of human thought. As if a human being could ever grasp the nature of the universe, unfettered by the limits of our perception!

Since I consider the “quest for truth” misleading, I prefer to say that I’m on a journey towards understanding: both logical, rational understanding, and deep, resonant intuition. My journey has taken me through innumerable worldviews, and I think that the best way to increase my understanding is to explore as many worldviews, perspectives, and ideas as I possibly can.

You see, we can’t observe the universe directly. We can only see it through the filters imposed by our systems of categorization. In particular, our abstract categories are artifacts of our cultures and worldviews. We seem to understand abstract concepts by grounding them metaphorically in concrete ones, and our understanding changes dramatically depending on which concrete concepts we choose for these groundings.

But abstract concepts are the building blocks of ideas; they are the very substance of intellectual understanding. If abstract concepts are non-absolute, then how can we approach any true understanding of the universe? How can we see past the limits of our perceptions, into the nature of reality as it truly is?

Each worldview forms a filter through which we can see the world. So in order to get a more complete picture of the universe, I constantly add to my inventory of filters. The more perspectives I have, the better a view I will get of the world, even if each perspective is fundamentally constrained. To use a different analogy, it’s like taking multiple 2D snapshots of a 3D object. A single 2D photo can’t capture all of 3D reality, but if you take enough photos from enough different angles, you can eventually get a reasonable approximation of what the 3D object looks like.

Thus my goal is to explore as many worldviews as I can. This includes existing worldviews, as well as ones that I create on the fly. I want to develop new systems of categorization, new ways to divide up experience into discrete clusters. Venkatesh Rao of Ribbonfarm has a great term for this – he calls it “refactoring perception”.

So we’ve been talking about worldviews. But wait – what is a worldview, anyway? The mind is not a logical, symbol-processing engine, and a worldview is not a collection of facts one can store in a knowledge base. On the contrary – as the word suggests, worldviews are about perception. A worldview is a way of experiencing life. One can experience life as profound or meaningless, as dark or radiant, as brimming with sorrow or overflowing with joy.

To be sure, a worldview does contain beliefs, and those beliefs interact with perception in fascinating ways. Obviously, perception influences belief. But belief also influences perception. If you believe that God exists, you will be more likely to experience his presence, and to see his steady hand guiding the trajectory of your life. On the other hand, if you believe the universe is meaningless and arbitrary, you might experience the events of your life as pointless and chaotic, lacking any reason or structure.

This leads us to the first topic I would like to discuss on this blog, which is the aesthetics of ideas, and the relation between beliefs and emotions. I used to think some beliefs were fundamentally beautiful, while others were fundamentally ugly. For instance, materialist reductionism always seemed fundamentally barren and ugly to me, devoid of any reverence for life. But in January 2012, I had an epiphany that any abstract fact could be made beautiful, by grounding it in the appropriate concrete concepts. Since then, I’ve been exploring this idea in greater detail, as well as investigating its implications regarding which things humans find inherently meaningful.

If this interests you, then stay tuned for the upcoming series of posts! There, I will discuss the thoughts I’ve been accumulating on this topic. Expect approximately weekly updates.
