After the disjointedness of my last post, it’s probably worth going to check out the comments, because I think I clarified some of what I was thinking with the help of my commenters. What I want to talk about today is how we use classification systems, and more broadly, mental models.
Let’s start with the fundamental assumptions I’m making: that there are a myriad of classification systems available to us, that none of them is the One True Way, and that each of them has advantages and disadvantages. Given those assumptions, how do we decide which one to use at any given time? It’s a matter of finding the right fit between the task we are trying to accomplish and a classification system that will streamline that task. To use a non-classification-system analogy, neither the Cartesian nor the polar coordinate system is “right”, but each has its uses; when you are dealing with a linear problem, Cartesian coordinates work better, but for problems involving rotation or circular symmetry, polar coordinates are far easier.
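To make the analogy concrete, here is a minimal sketch of my own (not from the original post); the function names and the radius-5 circle are purely illustrative:

```python
import math

def on_circle_cartesian(x, y, radius=5.0, tol=1e-9):
    # Cartesian: the test is x^2 + y^2 == r^2, which needs squaring and a tolerance.
    return abs(x * x + y * y - radius * radius) < tol

def on_circle_polar(r, theta, radius=5.0, tol=1e-9):
    # Polar: the angle is irrelevant; the test is simply r == radius.
    return abs(r - radius) < tol

def to_polar(x, y):
    # Converting between the two representations is cheap, so the real choice
    # is which one makes the task at hand read most naturally.
    return math.hypot(x, y), math.atan2(y, x)

print(on_circle_cartesian(3.0, 4.0))          # True
print(on_circle_polar(*to_polar(3.0, 4.0)))   # True
```

Neither representation is wrong; the circle question just collapses to a one-line check in polar form, which is the whole point of picking the system that fits the task.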
I made this point in the last post, but it’s worth restating: Classification systems are a cognitive tool. Like any tool, there are appropriate and inappropriate uses. Being aware of the limitations of our tools is an important aspect of mastering them. In particular, the main limitation is that any classification system is going to leave us with blind spots (as a side note, while re-reading this, I just realized this also ties in with the filtered world views post). When a form asks us to classify our ethnicity, we are reduced to one of a handful of options, and any sort of complexity is going to be glossed over and lost to the system. And, of course, any information about ourselves that is not related to ethnicity will also be lost. Looking at a population based on the results of that form will give us a very distorted view.
And the same is true of any other classification system. Each one introduces simplifications for the sake of making the data more manageable. So what are the advantages of these simplifications? By making the data more manageable, they can make trends more evident, allowing us to develop generalizable theories about the data. Most of my readers are scientists of one form or another, so you are probably familiar with the feeling of finally hitting upon the right data representation and having everything fall into place in your model. It happens in programming as well; with the right data structure, programming becomes much easier, whereas with the wrong one, every task becomes a kludge.
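As a small, hypothetical illustration of that last point (the example and names are mine, not from the post), consider tallying survey responses by category:

```python
from collections import Counter

responses = ["a", "b", "a", "c", "a", "b"]

# Awkward fit: with a flat list, every question ("how many 'a's?") becomes
# another scan over the raw data.
count_a = sum(1 for r in responses if r == "a")

# Better fit: a Counter makes the trend (which categories dominate) immediately
# visible, at the cost of discarding the order in which responses arrived.
tallies = Counter(responses)

print(count_a)                  # 3
print(tallies.most_common(1))   # [('a', 3)]
```

The Counter is exactly the kind of simplification described above: it throws away some information (ordering) in exchange for making the pattern easy to see.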
Given the ways in which data models and classification systems can streamline our solutions, the importance of choosing the right one when beginning a task becomes evident. I’d like to imagine that we can train ourselves to be like a master craftsperson in their workshop, surveying the tools available to them and choosing just the right tool for the job. And I think most of us do this at a subconscious level, as described in Sources of Power, where Klein lays out a “Recognition-Primed Decision Model”, wherein people develop subconscious models of situations and react accordingly. In time-critical situations, we may have to depend on our subconscious to do the right thing. But that can also lead to critical errors, as our subconscious models blind us to other inputs that would lead us to choose different actions; Klein describes several such breakdowns in the book, including the case of the USS Vincennes shooting down an Iran Air commercial passenger plane.
Such breakdowns indicate to me that being able to consciously examine our mental models and assumptions should lead to better decisions (it can also lead to paralysis-by-analysis, but I’m going to ignore that for now). It’s important to be able to step back for a second, and ask whether there are alternative ways of looking at the task before us. This requires a certain flexibility of thinking, of being willing to admit that there might be other options available, and that one might have chosen less-than-optimally before. But far too many decisions are made once, possibly based on an incorrect data model, and never re-examined.
I wonder how we can train ourselves to become better at this skill of looking at problems with fresh eyes. I think it all comes back to my continued campaign for people to be self-aware: aware of the choices they make, the blinders they put on, the ways in which their mental models may torque their perceptions, etc. Since I continually struggle with my own self-awareness, I’m not sure why I think I have any authority to advocate it to others, but that’s another story.
Thinking about why I believe I have made strides in self-awareness, here are some pointers. I have surrounded myself with good friends who are intelligent and observant. I have learned to trust their opinions and to listen to their advice, for the most part. I am still working on learning to open up and ask for help, and to be secure enough to admit when I am wrong, but that’s part of it as well. I need to do a better job of cultivating a diverse set of friends so that I am exposed to a wider variety of viewpoints. Hrm. There’s a lot more thinking to be done on this topic. I’ll pick it up another time. I’d love to hear any thoughts any readers have.
I think that the mental model of the universe you describe here is usefully accurate. =)
Some of the talks I saw at last year’s Serious Games Summit suggest that games (and probably ‘play’) are useful tools for reshaping people’s mental models and for developing the skill of questioning your mindmap.
Yeah, I think that playing games is definitely part of it. As is speculative fiction. I actually took a stab at this a while ago in this post, where I posited that one of the characteristics I share with my friends is a willingness to play with the rules, and pondered where we might have learned that trait.