




Wed, 23 Feb 2005

Cognitive subroutines and context
Some more thoughts on yesterday's cognitive subroutines post, partly in response to Jofish's comment.

Jofish brings up the importance of leveraging the real world. We don't have to store a hypothetical model for everything in the real world, because we can use the real world to store information about itself, and use that to jog our memory. This is partially why people can find things more easily in a physical spatial environment than in a file system; the physical cues and landmarks of the real world help guide them to their destination. To some extent, the brain uses inputs from the real world to decide which of the cognitive subroutines to run.

This gets back to a running theme of mine that I never fully developed, which is the importance of context. I wrote a footnote post about it at one point, but never returned to the subject. One of the things that fascinates me about our brains is how incredibly contextual they are. For instance, my memory is totally associative. When I get to the grocery store, I often can't remember what I'm supposed to get, until I walk down the aisle, see something, and my memory is jogged. I've mentioned this phenomenon in social contexts as well.

When I put the importance of context together with the idea of cognitive subroutines, a neat idea pops out. Perhaps these cognitive subroutines are like computer functions in yet another way. They have a certain set of inputs which defines their behavior, much like a function prototype defines the inputs for a computer function. When our brain is presented with a situation containing certain stimuli, it searches its set of cognitive subroutines, finds the one with the closest matching set of inputs, and uses it, even if it's not a perfect fit. In other words, these cognitive subroutines are called in an event-driven fashion based on incoming stimuli.
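To make the analogy concrete, here's a minimal sketch of what "find the subroutine with the closest matching set of inputs" might look like. All the names and cue sets here are hypothetical, and matching is reduced to simple set overlap:

```python
# Hypothetical sketch: pick the "cognitive subroutine" whose expected
# inputs best overlap the incoming stimuli, even if the fit is imperfect.

SUBROUTINES = {
    # name: (set of input cues it was "trained" on, response)
    "greet_friend": ({"face", "smile", "familiar"}, "say hello"),
    "flee_danger": ({"loud_noise", "movement", "dark"}, "run"),
    "order_coffee": ({"counter", "menu", "barista"}, "ask for coffee"),
}

def dispatch(stimuli):
    """Event-driven dispatch: score each subroutine by input overlap."""
    def score(item):
        inputs, _ = item[1]
        return len(inputs & stimuli)
    name, (inputs, response) = max(SUBROUTINES.items(), key=score)
    return name, response

# A partial match still fires the closest subroutine -- one possible
# mechanism for running a routine "even if it's not a perfect fit".
print(dispatch({"face", "smile", "dark"}))  # -> ('greet_friend', 'say hello')
```

Note that the dispatcher never refuses to answer: some subroutine always wins, however poor the match, which is exactly the failure mode discussed below.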

An interesting idea, but is there any evidence to support it? I think there may be in the existence of logically inconsistent positions. We all have positions on various issues that may conflict with each other. The canonical one is the person who is pro-life in opposing abortion, but pro-death in supporting the death penalty. How can the person reconcile these opposing viewpoints? Within a single hierarchical logical structure, it's difficult. However, if the brain and its beliefs are treated as a set of separately created cognitive subroutines, each of which is activated by its own set of inputs, then the contradiction goes away. Each belief isn't part of a large scale integrated thought structure; it's contained within its own idea space, its own scope to use the programming term. Within that scope, it's self-consistent, and it doesn't care about what happens outside of that scope.
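The scoping point can be sketched in code as well. Here the two beliefs from the example above are modeled as separate functions (hypothetical names and answers, purely for illustration); each is self-consistent inside its own scope, and the contradiction only surfaces if you deliberately query both:

```python
# Hypothetical sketch: two "beliefs" as separately-scoped subroutines.
# Each is triggered by its own context and is self-consistent inside
# its own scope; neither knows the other exists.

def abortion_context(question):
    # Scope: activated by abortion-related stimuli.
    if question == "may the state permit ending a life?":
        return "no"

def death_penalty_context(question):
    # Scope: activated by crime-and-punishment stimuli.
    if question == "may the state permit ending a life?":
        return "yes"

# Each call, on its own, returns a consistent answer. Only a deliberate
# cross-scope comparison exposes the conflict:
q = "may the state permit ending a life?"
print(abortion_context(q), death_penalty_context(q))  # -> no yes
```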

Only if you make the effort to try to reconcile all of your individual beliefs do contradictions start to pop up. But it's a difficult task to break the beliefs out of their individual scopes, so most people don't bother unless they are philosophers.

And to tie this all back to my favorite unifying topic, of stories, the effectiveness of stories lies precisely in their ability to activate certain contexts within our brains. This is why Lakoff emphasizes framing; by framing issues in a certain way, the conservatives set the context that the audience uses and actually choose which cognitive subroutines are activated in considering that issue. Advertisers seek to take advantage of this as well; commercials showing beautiful women drinking beer are trying to activate certain cognitive subroutines to connect the concepts.

Wow. When I started this post, I didn't know I was going to be able to tie all of my hobby horses together into one overarching model, but there ya go. I know I'm ignoring a lot of details, and making a bunch of simplifying assumptions, and using an overly reductive model of the mind, and being unclear on language, but, hey, that's what you get when you read a blog. Eit.

P.S. The Firefly critique is written. I'll get to it tomorrow. Unless I end up expounding more on this subject.

posted at: 23:48 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Cognitive subroutines
This is going to be a relatively long post, mostly inspired by reading Blink, by Malcolm Gladwell, and Sources of Power, by Gary Klein, both books that explain how and why our unconscious decision-making capabilities are often better than our conscious ones, and also explain when such capabilities fail and need to be overridden.

I was thinking about these issues last week while sitting on stage during our concert of Schumann's Das Paradies und die Peri. We have a section in the middle of the concert where we sit through about 45 pages with only a couple pages of singing to keep us awake. So for four nights in a row, I had plenty of time to sit and think. And on Friday night, I had one of those moments where I connected a bunch of ideas, and synapses lit up, and I found a story that really works for me explaining some of this stuff. I was actually sitting there in the concert trying to figure out if I could get out my Sidekick and send myself a reminder so that I wouldn't forget the synthesis, but I couldn't. Fortunately, the idea was strong enough that I jotted down the basic outlines when I got home. This is all probably pretty obvious stuff, but it put things together in a way that made a lot of sense to me, bringing together a bunch of different ideas. So I'm going to try to lay things out here.

The basic idea builds off of Klein's idea of expertise getting built into our unconscious. Our brain finds ways of connecting synapses that leverage our previous experience. Why does it do that? I'm going to assume that it's a result of the constraint stated in The User Illusion, that consciousness operates at only 20 bits per second. The information processing power of our conscious mind is very low, so our unconscious mind has to find ways of compensating for it.

Here's the basic analogy/story that I came up with, being the programmer that I am. When I'm writing code, I often notice when I need to do the same task over and over again. As any programmer knows, when you're doing something over and over again, you should encapsulate that repeated code into a subroutine so that it doesn't need to be copy-and-pasted all over the place. I would imagine that a self-learning neural network like our brain does a similar task. So far, so obvious.
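The refactoring habit the analogy rests on, as a tiny before-and-after (the function names and cleanup logic are made up for illustration):

```python
# Before: the same normalization logic copy-and-pasted in two places.
def process_name(raw):
    return raw.strip().lower().replace("  ", " ")

def process_city(raw):
    return raw.strip().lower().replace("  ", " ")

# After: the repeated code encapsulated into one subroutine, so a fix
# or improvement happens in exactly one place.
def normalize(raw):
    return raw.strip().lower().replace("  ", " ")

def process_name_v2(raw):
    return normalize(raw)

def process_city_v2(raw):
    return normalize(raw)

print(normalize("  New  York "))  # -> new york
```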

This relates pretty well to my own experience as a learning machine. When I'm learning a new game, for instance, my brain is often working overtime, trying to figure out how the rules apply in any given situation, going through the rules consciously one by one to figure out what the right move should be. As I play the game more and more, I learn to recognize certain patterns of play so that I don't have to think any more, I just react to the situation as it's presented. This is what Klein describes as Recognition-Primed Decision Making. To take a concrete example, when I was first learning bridge, the number of bidding conventions seemed overwhelming. I had this whole cheat sheet written out to which I continually referred, and every bid took me a while to figure out. As I played more and more, I learned how each hand fit into the system, so that I could glance at my hand and know the various ways in which the bidding tree could play out. As Klein describes it, my expertise allowed me to focus on the truly relevant information, discarding the rest, allowing me to radically speed up my decision making time.

Back to my story. Thinking about wiring our unconscious information processing architecture as a bunch of subroutines leads to a couple obvious conclusions. For one, it's easy to imagine how we build subroutines on top of subroutines. A great example is how we learn a new complicated athletic action. It also applies on the input side.

Another obvious result is that because subroutines are easy to use cognitive shortcuts, they may occasionally be used inappropriately. What happens when a subroutine doesn't quite fit what it's being used for? Well, in my life as a programmer, I often try to use that subroutine anyway. It doesn't end up giving me quite what I want, so I find a way to kludge it. I'll use the same subroutine, because I don't want to change it and mess up the other places that it's called, but I'll tack on some ugly stuff before it and after it to compensate for the ways in which it doesn't quite do what I want.
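A sketch of that kludge pattern, with hypothetical names (the currency example is invented, not from any real codebase):

```python
# Rather than modifying a subroutine that other call sites depend on,
# wrap it with pre- and post-processing to bend it to the new use.

def format_price(cents):
    """Existing subroutine, called all over the codebase: cents -> '$x.yz'."""
    return "${:.2f}".format(cents / 100)

def format_price_euros(cents):
    # Kludge: we really want euros, but we don't dare touch format_price
    # and risk breaking its other callers. So we call it anyway and
    # tack on some ugly stuff afterwards to compensate.
    dollars_string = format_price(cents)
    return dollars_string.replace("$", "\u20ac")

print(format_price_euros(1250))  # -> €12.50
```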

How does this relate to our brains? I think a prejudice is essentially the same as a cognitive subroutine. It does a bunch of processing, simplifies the world down to a few bits, and spits out a simple answer. And, in most cases, the subroutine does its job, spitting out the right answer; it wouldn't have been codified into a subroutine if it didn't. Much as we may not want to admit it, prejudices exist for a reason. However, when we start to blindly apply our prejudices, using these canned subroutines without thinking about whether it's being applied under the appropriate conditions, then we get into trouble. Gladwell calls this the Warren Harding error.

What is the right thing to do? Well, in programming, the answer is to think about how the subroutine is used, pull out the truly general bits and encapsulate them into a general subroutine, and then create specific child subroutines off of that, assuming we're in an object-oriented environment. In general when using a subroutine, certain assumptions are made about what information is fed into the subroutine, and what the results of the subroutine will be used for. If those assumptions are violated, the results are unpredictable. A more experienced programmer will put in all sorts of error checking at the beginning of each subroutine to ensure that all the assumptions being made by the subroutine are met.
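The refactoring described above, sketched with hypothetical classes: the general bits (including the error checking of assumptions) live in a parent, and specific children hang off of it:

```python
# Hypothetical sketch: pull the general bits into a parent class, derive
# specific children, and check assumptions at the subroutine boundary.

class Classifier:
    """General subroutine: shared input checking lives here."""
    def classify(self, features):
        # Error checking: fail loudly if a caller violates our
        # assumptions, instead of producing unpredictable results.
        if not isinstance(features, dict):
            raise TypeError("features must be a dict")
        if "size" not in features:
            raise ValueError("missing required input: size")
        return self._decide(features)

    def _decide(self, features):
        raise NotImplementedError

class ThreatClassifier(Classifier):
    """Specific child subroutine: only the specialized logic differs."""
    def _decide(self, features):
        return "threat" if features["size"] > 10 else "harmless"

print(ThreatClassifier().classify({"size": 3}))  # -> harmless
```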

How does this apply to the cognitive case? I think this is a case where it gets back to my old post about questioning the assumptions. If we try to understand our brain, and understand our kneejerk reactions, we will be in a much better position to leverage those unconscious subroutines rather than letting ourselves be ruled by them; our intelligence should guide our use of the cognitive shortcuts and not vice versa.

This idea of cognitive subroutines also gives me some insight into how to design better software. I picture this cognitive subroutine meta-engine that tracks what subroutines are called, and strengthens the connections between those that are often called in conjunction or in sequence, to make it easier to string those routines together, eventually constructing a superroutine that encompasses those subroutines. It seems like complex problem-solving or pattern recognition software should be designed to have a similar form of operation, where the user is provided with some basic tools, and then the software observes how those tools are used together, and constructs super-tools based on the user's sequence of using the primitive tools (alert readers will note that this is the same tactic I propose for social software). I'm somewhat basing this on a book I'm reading at work called Interaction Design for Complex Problem Solving, by Barbara Mirel, where she discusses the importance of getting the workflow right, which can only be done by studying how the users are actually navigating through their complex problem space.
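One way such a meta-engine could work, as a rough sketch (the class, threshold, and promotion rule are all my own invented assumptions, not anything from Mirel's book):

```python
# Hypothetical sketch of the "meta-engine": observe which tools are used
# in sequence, and once a pair recurs often enough, construct a
# super-tool that runs the pair as one step.

from collections import Counter

class MetaEngine:
    def __init__(self, threshold=3):
        self.pair_counts = Counter()  # how often each (prev, next) pair occurs
        self.last_tool = None
        self.super_tools = {}         # promoted pairs -> composed functions
        self.threshold = threshold

    def record(self, tool_name, tool_fn):
        """Track consecutive tool uses; promote frequent pairs."""
        if self.last_tool is not None:
            pair = (self.last_tool[0], tool_name)
            self.pair_counts[pair] += 1
            if self.pair_counts[pair] >= self.threshold:
                prev_fn = self.last_tool[1]
                # Strengthen the connection: compose the two tools.
                self.super_tools[pair] = lambda x, f=prev_fn, g=tool_fn: g(f(x))
        self.last_tool = (tool_name, tool_fn)

engine = MetaEngine()
for _ in range(3):
    engine.record("strip", str.strip)
    engine.record("lower", str.lower)

combo = engine.super_tools[("strip", "lower")]
print(combo("  HELLO "))  # -> hello
```

The design choice worth noting is that the user never asks for the super-tool; it emerges from observed usage, which matches the idea of the software studying how users actually navigate their problem space.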

So there you go. Treating the brain as a self-organizing set of inheritable subroutines. I'm sure this is obvious stuff. Minsky's Society of Mind probably says this, but I've never read it. Jeff Hawkins's book On Intelligence probably says something similar as well (I should probably read it). And I suspect that Mind Hacks is in the same space. So it may be obvious. But, hey, it's new to me. And it just makes a lot of sense to me right now, in terms of how I learn to do new complex activities, and how it relates to my work as a programmer. I'll have to think some more about if this can actually be applied in any useful manner to what I do. And about the shortcomings of the theory.

P.S. Tomorrow we'll get back to more light hearted subjects, like why I think the TV series Firefly failed, with a compare and contrast to what Joss did right in Buffy.

new complicated athletic action: When learning a new action, we break the action down into individual components and practice them separately. When I was learning how to spike a volleyball, the teacher had us first work on the footwork of the approach. Left, right, left. Left, right, left. We did that a bunch of times, until it became ingrained into muscle memory. Then we practiced the arm motion: pulling both arms behind our back, bringing them forward again, left arm up and pointed forward, right arm back behind the head, then snapping the right arm forward. Then we coordinated the arms with the footwork. Once the entire motion was solid and could be performed unconsciously, then we threw a ball into the mix. That had to come last because the conscious mind is needed to track the ball and coordinate everything else to make the hand hit the ball in the air for the spike. Only if everything else is automatic do we have the processing power to make it happen. If we had to think about what steps we needed to take, or how to move our arms, we would never be able to react in time to get off the ground to hit the ball. It's only because it's been shoved into our unconscious that we can manage it.

Another recent sports example for me is ultimate frisbee. I've been working on my forehand throw for the last year or so. After several months, I finally got it to the point where I could throw it relatively reliably while warming up. However, as soon as I got in the game, I would immediately throw the disc into the ground or miss the receiver entirely. It was incredibly frustrating because it demonstrated that I could only throw the disc when I was concentrating on the mechanics of how to throw the disc. As soon as I was thinking about where I wanted to throw the disc, or how to lead the receiver, the mechanics went away, and the throw failed. This last tournament I played, though, the muscle memory of the throw had apparently finally settled in, so when I saw a receiver open, I thought "Throw it there", and the disc went there. The relevant neural infrastructure had finally been installed, so that I could concentrate on the game, and not on the throw, and it was incredibly satisfying. I threw three or four scores, which was more than I ever had before, and only threw it away once the entire day, ironically on a play where I had too much time to think, so that the conscious machinery kicked back into play rather than letting the unconscious muscle memory do its thing.

input side: I guess the analogue on the input side would again be game play recognition. A beginning chess player will have to laboriously trace out where each piece can move and can maybe see the results of a single move. An intermediate chess player will recognize how to handle certain subsections of the board, and be able to project out a few moves. The expert chess player will instantly take in the position of the whole board, and understand how the game will develop as a whole. And this is definitely a cognitive shortcut born of repeated experience. This study demonstrates that chess masters perform vastly better than novices at being able to recognize and remember valid board configurations, but were no better than novices at recognizing invalid boards. In other words, because the novice perceives the board as a collection of individual pieces, they cannot tell the difference between a valid and invalid board. Meanwhile, the expert, because they perceive the board as meaningful chunks of board positions, can rapidly grasp the game situation of a valid board, but the invalid board looks like nonsense, demonstrating that their brain is using its expertise as a cognitive shortcut.

More generally, when confronted with a complex situation, an expert can pay attention to the key experiential data and ignore the rest. Gary Klein describes how an expert always has a mental model to which he is comparing the situation, a story if you will, that describes what should be happening. When what actually happens differs from what he expects to happen, the expert knows something is wrong, and re-evaluates the situation, as Klein illustrated with several anecdotes from firefighters. And part of that model is being aware of when things _don't_ happen as expected. And it may not be a conscious model; in fact, Klein describes many instances where the firefighters attributed their decisions to a sixth sense or ESP. But it is a model born of experience; the unconscious brain has experienced the situation over and over again until it knows how certain factors will affect the outcome (Klein calls these factors "leverage points").

posted at: 00:57 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal