On the “Obvious”ness of Certain Ideas

Today I was reading George Lakoff’s Women, Fire, and Dangerous Things. (Yeah, I know, I’m way behind on the required reading for writing this blog.) Anyway, I came upon a passage (pp. 58-59) that discusses the following questions: “Does language make use of general cognitive mechanisms? Or is it something separate and independent, using only mechanisms of its own?” As Lakoff observes, these are questions of great importance to the study of language and cognition. If language uses general cognitive mechanisms, then we can use our understanding of language to update our beliefs about how the mind works in general, and vice versa. But if language is something separate, then we need to keep our theories separate as well.

This is interesting, of course, but what I found especially fascinating was the following paragraph:

This issue is a profound one, because it is by no means obvious that language makes use of our general cognitive apparatus. In fact, the most widely accepted views of language within both linguistics and the philosophy of language make the opposite assumption: that language is a separate “modular” system independent of the rest of cognition. The independence of grammar from the rest of cognition is perhaps the most fundamental assumption on which Noam Chomsky’s theory of language rests.

The reason I found this passage fascinating is that Lakoff says “it is by no means obvious”, but Lakoff’s conclusion that “language [does make] use of our general cognitive apparatus” is completely obvious to me. Whenever this sort of thing happens, I like to take a step back and ask “What are my beliefs and assumptions that make this obvious to me?” and “What are the beliefs and assumptions that make this non-obvious to other people?” (It’s also useful to ask these questions in the reverse direction: “What are my beliefs and assumptions that make their position non-obvious to me, and what are their beliefs/assumptions that make it obvious to them?”)

In this case, my assumptions are mostly based on my understanding of evolution, in particular the fact that evolution usually builds on or modifies pre-existing mechanisms rather than inventing entirely new ones. For this reason, I expect language to have evolved as an outcropping of pre-existing cognitive mechanisms; it seems highly unlikely to me that we would have just developed a whole separate grammar module with no relation to already-existing structures.

This allows me to identify a whole class of assumptions which influence my understanding of how the mind works. Specifically, I make use of my knowledge of evolution when thinking about cognitive science. Presumably, other people have different intuitions either because they just don’t use their knowledge of evolution when reasoning about cognitive science, or because they have a different understanding of how evolution works.

Now let us ask the questions in reverse. As Lakoff repeatedly observes, the cognitive scientists, linguists, and philosophers who support the other position (that language is a completely separate module) have been influenced by traditional dualist and computational understandings of the mind.

This entire discussion reinforces two major themes of my philosophy: the fact that our current models are wrong, and the importance of studying many different fields. For the former, when I see how obvious previous generations’ beliefs were to them, it gives me the perspective to realize that future generations will probably feel the same way about my own beliefs. After all, I know very little about evolution, partially because I’m not an evolution-ologist, and partially because most of that field’s models must also be incorrect or incomplete.

Regarding the study of many different fields, traditional ideas from linguistics seem quite implausible to me because I’ve read a couple of books about evolution. This suggests that the more fields I study, the more diverse perspectives I will be able to incorporate when building my models. I’m sure that my current models seem preposterous to specialists in various other fields.


8 Responses to On the “Obvious”ness of Certain Ideas

  1. Kevin says:

    this was great… a few scattered thoughts.

    1. breadth over depth: I completely agree with you here that breadth brings huge advantages, although it might just be a matter of style (i.e. fox vs. hedgehog). Not sure if it’s just my own bias, but I feel like the bigger, better, more impactful thinking tends to be done by foxy types.

    2. Did you see this a couple years ago? http://blogs.discovermagazine.com/discoblog/2010/12/07/ncbi-rofl-clueless-doctor-sleeps-through-math-class-reinvents-calculus-and-names-it-after-herself/ Seems like she could have used a little more breadth :P

    3. I love the example you give here of evolution being used as a ‘razor’ to cut through some specious reasoning. An understanding of evolution is one of the most important razors in my own toolkit. The other big one is an understanding of computer science, abstraction, models of computation, etc., which helps cut through a lot of squishy philosophy written by non-technical people. Any other big razors?

    4. re language as repurposing other cognitive modules… three other domains with similar properties (nested/recursive, complex relationships between parts) come to mind: planning, spatial reasoning, and morality. I recently read a very interesting paper on the latter that gave me a nice little intellectual frisson: http://tuvalu.santafe.edu/~bowles/UniversalMoralGrammar.pdf

  2. 2obvious says:

    This sort of reminds me of some ideas I was kicking around about emotion’s relationship to logic, not too long ago. I was convinced that feelings clouded rationality. Ergo, step away from your mood, and you’ll draw the most reasonable conclusions. (Seemed pretty obvious to me, based on a cursory familiarity with psychology.)

    Somebody heard me spouting this ignorance and recommended _Descartes’ Error._ I read about folks who had the “feeling” parts of their brain destroyed, were still perfectly capable of solving mathematical equations, yet suffered from erratically inconsistent personalities.

    Turns out, without some connection between feeling and reason, they lost the ability to prioritize. The long-term benefit of a stable job couldn’t compete with the short-term benefit of sleeping in. Every place they turned was a new series of choices, with no ability to filter out the simple ones; there were no simple choices.

    Had I known anything about neurobiology beforehand, I probably wouldn’t have thought the connection I made was so obvious.


    You’re conclusions on evolution seem obvious to me, on the one hand.

    On the other hand, I know less than nothing about linguistics. So…?

    • Thanks for this comment! I had heard about people who had the “feeling” parts of their brains destroyed and couldn’t even pick which box of cereal to buy in a supermarket, but despite extensive googling, I couldn’t locate the source. I’ll have to check out Descartes’ Error; it would actually fit with my current reading because I am at the moment trying to figure out what’s up with this whole embodied cognition thing. (Does it make new predictions, or is it just a useful refactoring of agency that puts us in a better place for exploring the workings of the mind?)

      I’m curious about how your life has changed since your reevaluation of the importance of emotion, but that seems like kind of a personal question to ask here. Anyway, if you are interested in telling, I am interested in hearing.

      • 2obvious says:

        (Grrr: staring at my typos, eternally preserved.)

        Actually yes, the author does put agents in unusual places. It’s debatable whether this is constructive. (The book is a pretty terrible read, actually.)

        Analogy: any time you approach a door, myriad possibilities cross through your mind. The author theorizes that, say, thrusting your hand through the door never seems to make the list, because AS you consider it, your __ has a strong bias against firing those synapses. I believe the jargon he invents is “somatic markers”?

        He attributes these biases to emotions. (He also takes great pains to differentiate emotions from feelings. And to define “primary” and “secondary emotions.”) If you can track with the depth of his nominalization, you may wind up with something plausible.

        But then, the “emotion” he’s defining probably bears little resemblance to the “emotion” you had in mind, I’d wager?

        …Perhaps for this reason, reading it has yet to alter my personal conduct.

  3. Kaj Sotala says:

    (Read this post a while back and then forgot about it, until a thought popped to mind relating to something that you’d said…)

    In this case, my assumptions are mostly based on my understanding of evolution, in particular the fact that evolution usually builds on or modifies pre-existing mechanisms rather than inventing entirely new ones. For this reason, I expect language to have evolved as an outcropping of pre-existing cognitive mechanisms; it seems highly unlikely to me that we would have just developed a whole separate grammar module with no relation to already-existing structures.

    Does that necessarily follow? I agree that evolution usually building on pre-existing mechanisms does make it more likely that language uses general cognitive mechanisms, but then again, a person’s eyeball is a pretty distinct entity from one’s kidney. Although they do share some mechanisms (e.g. both need a steady supply of blood in order to function properly), for the most part one can say that they’re modular and separate, despite having both been created by evolution.

    Relatedly, you seem to be saying that if language doesn’t use general cognitive mechanisms, then that implies that it originally evolved without any relation to already-existing structures. I don’t think that that follows. It could have started off as an offshoot of an existing cognitive mechanism and then gradually differentiated itself to a separate module, if e.g. the first version of language used general-purpose cognitive circuits but those then began becoming more and more specialized, until they’d become an entirely different system.

    Something like that might happen with general skill learning. When we first learn a new skill, that obviously has to build on existing machinery, but a high level in a skill seems to require cognitive machinery that becomes very specialized for exploiting the tiniest regularities that can be found for boosting performance in that skill. E.g. chess masters vastly outperform novices in remembering chess board setups, but only for “natural” setups that one could end up in by playing a real game of chess, and perform at novice levels for artificial setups they don’t have experience with. Or to mention a more language-related example, we quickly lose much of our general-purpose capability for distinguishing between sounds which our native language doesn’t distinguish between.

    A trend towards modularity in neural networks apparently also helps evolution evolve better designs.

    • Kaj Sotala says:

      A trend towards modularity in neural networks apparently also helps evolution evolve better designs.

      Eh, that was unnecessarily vaguely expressed, so in a few more words: studies have (apparently) shown that the brain’s architecture seems to minimize the summed length of the wiring diagram. Possibly this is because of the physical costs involved in maintaining large numbers of long-distance neuronal connections. But whatever causes the selection pressure that drives down the summed length also leads brain networks to become more modular, favoring intra-network connections over inter-network ones.

      As a side effect, this also seems to increase the brain’s evolvability, since the effects of any changes become more localized and less likely to disrupt the overall system. ANN simulations suggest that preferentially evolving networks with low wiring costs also leads to networks with better overall performance. More information and references in the provided links.

      This would also seem to make it much more plausible that a language faculty could become very modularized and not draw on general cognitive machinery as much.
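      To make the wiring-length point concrete, here is a toy sketch of my own (an illustration only, not code from the studies mentioned): with nodes laid out on a line, a clustered (“modular”) wiring diagram has a much smaller summed connection length than a diagram with the same number of edges but only long-range, between-cluster connections.

```python
import itertools
import random

def wiring_cost(positions, edges):
    """Summed length of all connections (nodes placed on a 1-D line)."""
    return sum(abs(positions[a] - positions[b]) for a, b in edges)

# Eight "neurons" at integer positions, viewed as two clusters of four.
positions = list(range(8))

# Modular wiring: each cluster fully connected internally, plus a single
# "bridge" edge between the clusters (13 edges total).
modular = (
    [(a, b) for a in range(4) for b in range(a + 1, 4)]
    + [(a, b) for a in range(4, 8) for b in range(a + 1, 8)]
    + [(3, 4)]
)

# Non-modular wiring: the same number of edges (13), but every edge
# crosses between the two clusters.
random.seed(0)
cross_edges = list(itertools.product(range(4), range(4, 8)))
nonmodular = random.sample(cross_edges, 13)

print("modular cost:", wiring_cost(positions, modular))        # 21
print("non-modular cost:", wiring_cost(positions, nonmodular)) # always larger
```

      The layout and edge counts here are invented for the example; the point is just that intra-cluster edges are short, so any selection pressure that shrinks total wiring length will favor modular diagrams.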

      • Darcey Riley says:

        I meant to respond to these comments ages ago, but then I didn’t have a response besides “Wow, that’s interesting!” I still don’t, but a “Wow, that’s interesting! Thanks for the comment!” definitely seems worth posting. So, thanks! I will have to take this stuff into consideration when thinking about evolution. And now I’m very curious about the evolution of modules and whether anyone has come up with mathematical models of it.

  4. nadith says:

    So, what exactly is a ‘general cognitive mechanism’? Perhaps he defined it and I am simply ignorant as to his schema, but it sounds quite vague. If so, how would you argue out or in? It’s like arguing about autism 20 years ago. Or, more fun, ‘death by natural causes’.

    I can see why people may presume that language is somewhat distinct though, what with its critical period [Neurobiology] and Broca and Wernicke getting their names inscribed on so many images of brains.

    However, to that I always wonder about this example, the exceptional case: http://en.wikipedia.org/wiki/Hydrocephalus

    That said though, I don’t think we know enough about the brain to make such assertions, let alone axons (since we can’t even model one which works based on our current theories, as the propagation times are WAY too long).
    As for being related, what is it you are speaking to in language? I mean, we used to think people speaking other languages were just grunting barbarians a step away from simians. Is language just communication? Because if so, then where is this plant brain that lets them communicate? Is it just abstraction? Is it drawing conclusions from abstraction to be abstracted again? It seems like a very vague idea to begin to approach without pinning it down.

    Arguing evolution though, I would say we have a pretty poor grasp of evolution as well. The basic idea of building things on top of other things, I get along with, but like I said it is generalized. Drosophila and epigenetics and transgenetics and abiogenesis and all that considered, it feels to me like we are getting our toes wet in a rather big and deep pool.
