Stereotypes and Classification Systems

Posted: June 2, 2005 at 11:45 pm in cognition, philosophy

I was having an email discussion earlier today with a friend in which the topic of stereotypes came up. As is common, my friend said that stereotypes are bad, and that we should judge people based on their own individual characteristics. This gave me an excuse to launch into a longstanding rant of mine, wherein I defend the existence of stereotypes while noting their common problems. While writing it up in email, I realized that I don’t think I’ve ever written this rant up for the blog, so I figured I should do that. We’ll see if it’s any more coherent this time than it was in the email I sent today.

Here’s the basic outline of the rant. Our brains have evolved over eons to be extremely good at recognizing patterns and classifying things, because those actions dramatically reduce cognitive effort, freeing our big brains to work on innovation rather than memorization, and thereby giving us a survival advantage. Fighting against the existence of stereotypes is therefore pointless, because our brains will form them, and form them well, regardless. The real battle that needs to be fought is understanding how our brains develop stereotypes, and what the drawbacks of those stereotypes are.

Let’s start with a non-politically-charged example: doorknobs. A doorknob is a common object. Whenever we come across one, we turn it, the door opens, and we go on our way. But how do we know that the doorknob in front of us works the way we expect? Our brain has encountered hundreds of doorknobs before and come up with a universal theory of doorknobs – when we see a doorknob, we expect it to work like the other doorknobs we’ve used. In fact, a doorknob only stands out from the canonical doorknob stereotype when it fails to work as expected – when it needs to be pushed or pulled rather than rotated.

What’s the point of this example? It demonstrates the power of stereotyping – rather than having to remember how each of the hundreds of individual doorknobs we encounter works, we create a stereotype of a doorknob, remember how it works, and then all of those hundreds of individuals collapse into one category, possibly with a few outliers that work differently. It’s an incredible saving of cognitive effort.

Rather than delve into the dangers of stereotyping people, who are far more varied than doorknobs, I’m going to take this rant in a different direction and explore the topic of classification systems. Stereotypes are just one example of the more general case of classification systems. What is a classification system? It is, in essence, a simplification of the world, throwing out “unnecessary” details to make the resulting description of the world more manageable. It is this throwing away of details that makes classification systems paradoxically both more and less powerful. Such systems are more powerful because they condense a lot of information about the world into a few bits of data. They are less powerful because, in the process of condensing that information, they often ignore crucial details.

So what is one to do? The answer is to remember that classification systems are a cognitive tool. And just like a hand tool, they can be used both appropriately and inappropriately. As the saying goes, “When all you have is a hammer, everything looks like a nail.” We might stretch that to say, “When all you have is a single classification system, everything gets squeezed to fit.” So it is important to be aware of the limitations of any classification system that we use, and of the information that gets thrown away when we classify objects with it. As a side note, this line of thinking is heavily influenced by (if not stolen outright from) the book Sorting Things Out by Bowker and Star.

We can often see the limitations of our classification systems by noticing what doesn’t fit. For instance, the taxonomic classification system is relatively straightforward: if something has fur, it’s a mammal; if it lays eggs, it’s a bird or a reptile; and so on. Then they found the platypus, a furry creature that laid eggs. Scientists dubbed the platypus a freak of nature, as if it were the platypus’s fault for breaking their classification system (as another aside, this placing of fault with the object under consideration is a direct consequence of the “object-oriented” viewpoint that I rail against in this post). All the platypus did was point out that there were things not handled by the classification system. And that’s okay. In fact, a classification system that tried to handle every exception would be so unwieldy as to be useless. It would be like a map of a territory that is as large as the territory itself. The power of a classification system lies in its condensation of information, even at the cost of not fitting the data exactly.
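To make the tradeoff concrete, here’s a toy sketch in Python (my own illustration, not anything from real taxonomy software – the rules and field names are made up) of the crude fur/eggs classifier described above, and what it silently does with a platypus:

```python
def classify(animal):
    """Classify an animal using a deliberately crude rule set."""
    if animal.get("has_fur"):
        return "mammal"
    if animal.get("lays_eggs"):
        return "bird or reptile"
    return "unclassified"

# The platypus satisfies both rules at once.
platypus = {"has_fur": True, "lays_eggs": True}
print(classify(platypus))  # -> mammal
```

The first matching rule wins, so the platypus gets filed under “mammal” and its egg-laying is simply discarded – exactly the kind of detail that a classification system throws away in exchange for its compactness.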

Circling back to the original topic of stereotypes, what practical advice can we extract from this theoretical discussion? First of all, stereotypes can be useful. If we see a guy at the bar wearing a sports jersey and drinking Budweiser, we will probably guess that he’s a jock fratboy. If we see someone wearing all black with stylish glasses, we’re thinking hipster. Given that first impression, we can often check its accuracy with only a few questions, whereas if we had to start from scratch with each new person, it would take much longer to get a read on the type of person they are.

However, we must always be aware that all of our stereotypes are necessarily incomplete descriptions of people, extrapolated from a couple of bits of information. When new information comes in that contradicts the stereotype, we need to recognize that the individual does not fit our classification system, and that the fault lies with the system, not with the individual. The stereotype is just a tool. It should not control our perceptions to the point where we cannot distinguish the individual from the stereotype. If we start talking to the jock fratboy and it turns out that he’s an avid reader of Foucault and Derrida, we should not reject him for failing to live up to the expectations our stereotype created.

In fact, I can use this as an opportunity to tie this topic into the Latour series of posts. One thought I had while mulling this over was that classification systems are a subset of Latour-ian collectives. In his conception of the collective as a political entity, Latour talks about external entities demanding entrance to the collective, which, if admitted after due consideration, are placed within a hierarchy. In the case of a classification system, when an exception to our system is found (e.g. the platypus, or the Foucault-reading jock fratboy), it is up to us to decide whether to ignore the exception or to modify the system to accommodate it. There are tradeoffs either way: if we ignore the exception, the classification system may be overly simplistic and unrealistic; if we accommodate it, the system may become too complex to be useful. So there should be an internal debate about how to modify our classification systems in the presence of an exception. It’s a matter of being aware of our biases and being able to explicitly take them into account, rather than letting them unconsciously rule us.

Okay, one last point because this has already gone on way too long. One idea that came up while I was talking about this with my friend was the idea of using people’s classification systems to classify them. We were talking about the different criteria that people use to classify people. Some people will judge people by the kind of car they drive, others by the college they went to, others by the books they read, others by their appearance. Each of these classification systems is a reflection of the individual doing the classifying, a reflection of their internal Latour-ian collective, a record of their history of interactions with other people. So it seems like we should be able to use their classification systems to extract information about them, and thereby be able to classify them. As an example, we might dismiss somebody who judged others by their car or by their attractiveness as being shallow, whereas somebody who judged people by the quality of their mind and by their inquisitiveness might be judged as being more thoughtful. I’m not quite sure where the idea goes from here, but I like it a lot, partially because of the reflexive and recursive nature of it, so I’m hoping somebody else can help me flesh it out.

Okay, I’m done for now. Sorry for the long post, but this is a topic I’ve had saved up inside of me for a long time. Plus I’m not sure when I’ll get the chance to blog again – things have been busy. Speaking of which, a little journal-type update for those of you strong-willed enough to struggle to the end of this post: things have calmed down at work from that first week, but are still busy. Spending Memorial Day weekend in Boston visiting friends and going to Crusher’s wedding was awesome. The car has been okay for three days in a row – is it a trend? I’m tired, but glad I wrote this – I need to just make the time for blogging, because I like doing it. That’s it. More another time.

7 Responses to “Stereotypes and Classification Systems”

  1. dave Says:

    I think your doorknob example blurs over an important distinction.
    Doorknobs are manufactured items, and the “stereotype” we use in interpreting them is nonaccidentally related to the template used in building them. That’s why not bothering to examine them further works out so well — there’s an “invisible hand” that suppresses much in the way of doorknob eccentricity.

    The same principle doesn’t apply so well to natural phenomena, which are not conveniently designed to match the categories we are wired to put them in.

    I don’t disagree with your point that stereotyping saves effort… in fact, I can’t see how anyone could. But when someone objects to stereotyping, I usually understand them to be saying that the effort saved by not analyzing individuals is not worth the negative consequences.

  2. Eric Says:

    My friend made the same point about the artificiality of doorknobs, so I guess it’s a bad example, because it’s distracting from the larger point I’m trying to get at, which is that we need to recognize the artificiality of the “categories we are wired to put [things] in”, as you put it. Maybe the point is too trite to consider. But I think if more people were aware that the classifications and stereotypes they use are merely a reflection of what they’ve been told or have discovered in the past, rather than classifications that are embedded in nature or in the world, then we’d all be better off.

    Now that I think about it, I should have dropped the doorknob example entirely, because it’s irrelevant to my other point. Or split the post into two parts. Shows what happens when I blog when tired.

    If I were to do it over now, I guess I’d make two posts. Post one would make the point that the human brain is wired to classify and stereotype things, because of the huge savings in cognitive effort. We can’t avoid doing it. The doorknob example sucks because of the artificiality, but we can use examples of ancient cultures using myths to try to make sense of nature, or the taxonomic classification system. Our brains will classify and stereotype whether we consciously want to or not.

    Post two would ask the question of what can we do to avoid the negative consequences of such stereotyping. This is where most of this post belongs, talking about the artificiality of classification systems. As is usual with my philosophy, I believe that the best defense is being self-aware, aware that our brain is madly unconsciously stereotyping, and learning to compensate for that when appropriate. Something like that, at least.

    Maybe I’ll take another stab at this when I’m more coherent.

  3. Beemer Says:

    Dave’s comments notwithstanding, I totally agree with you. I think the basic tool that you use to overcome the flaws of stereotyping is to try and always have the idea “my mental model of the universe may be inaccurate” (aka, “I might be wrong”) floating around in the back of your head. What happens then is that you can get that idea to activate whenever a stereotype activates (well, in a suitable context — like first impressions of a person, but not doorknobs), and a little mental bell rings reminding you to check your stereotype against the observed data. As you say, it’s all about being self-aware.

  4. robin Says:

    pretty much makes sense to me, and i agree with the previous comments.
    - the awareness of stereotypes/classification systems/scientific theory as a simplified model of a much more complex system keeps said classifications useful. if you can’t see outside them, you’ve lost a whole lot of interesting information.
    - and yeah, i think i do generally tend to meta-stereotype people based on their own judgement systems. so far it seems to work reasonably well. the same caveats hold, though…it’s still a simple model of a complex entity within a complex system.

  5. Eric Says:

    Right, Beemer – so long as everybody is aware that their mental model of the universe is not the same as the universe, that their classifications are not ordained by God or Nature, then we can get along much easier, as it becomes a matter of translation and Latour-ian diplomacy, rather than a clash of fundamentally incompatible worlds.

    And, Robin, while I agree that “it’s still a simple model of a complex entity within a complex system”, I think we’ve all had the experience where we found a better classification system or data representation that makes things just work better. It’s like a new paradigmatic model in science – it just does a better job of explaining the data we’re confronted with. One reason why I think that the idea of meta-classifying based on classification systems is better is that it splits people up along lines that I think matter. The stereotypes and classifications that people use are a reflection of their reality coefficients, and therefore grouping people together based on that will place people with others who share their reality. Again, this ties into the Latour stuff, with his idea of merging collectives. Hrm. More thought needed.

