You can look at my home page for more information, but the short answer is that I'm a dilettante who likes thinking about a variety of subjects. I like to think of myself as a systems-level thinker, more concerned with the big picture than with the details. Current interests include politics, community formation, and social interface design. Plus books, of course.
Cognitive subroutines and learning
I was reading Emotional Design by Don Norman the other day, and he was contemplating ways in which we could leverage emotional machines to improve the learning process. This got me kick-started again on thinking about applications of the cognitive subroutines theory that I've been playing with. As a side note, I think I'm finally emerging from the dearth of ideas I'd been suffering from for a week or so. Apologies for the banality of posts during that time.
So the question of the day is: How do we leverage cognitive subroutines for the sake of learning? What does this theory tell us about how to teach people something new?
I covered this a little bit in the footnotes of that first post. Teaching somebody a new physical action requires breaking it down into easily digestible chunks. Each chunk is practiced individually until it's ingrained in the subconscious and can be performed autonomously. In other words, we build and train a cognitive subroutine that can then be activated with a single conscious command like "hit the ball" instead of having to call each of the individual steps like "take three steps, bring the arms back, jump, bring the right arm back cocked, snap the arm forward while rotating the body, and follow through". Watching toddlers figure out how to walk is also in this category. At first, they have to use all of their concentration to figure out how to take a step, but within a short period of time, they just think "I wanna go that way" and run off.
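Since I keep leaning on the programming metaphor, here's a toy sketch of what I mean (the step names are just made up for illustration): each practiced chunk becomes its own routine, and once those are ingrained, the conscious mind only has to issue one high-level command.

```python
# Each practiced chunk of the physical action becomes its own routine.
def approach():
    return "take three steps"

def load_arms():
    return "bring the arms back"

def jump():
    return "jump"

def cock_arm():
    return "bring the right arm back, cocked"

def swing():
    return "snap the arm forward while rotating the body"

def follow_through():
    return "follow through"

def hit_the_ball():
    # One conscious command; the ingrained subroutine runs the steps itself.
    return [approach(), load_arms(), jump(), cock_arm(), swing(), follow_through()]

print(hit_the_ball())
```

The point of the sketch is just the layering: after enough practice, nobody "calls" the six steps consciously any more; they call `hit_the_ball()`.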
For physical activities the analogy to cognitive subroutines is pretty straightforward, and was what I was thinking of when I first came up with this idea. How does it map to other, less concrete activities? Let's take the example of math. We start out in math by learning very simple building blocks, like addition and subtraction. We move from there to algebra, where we introduce an abstraction barrier. As we learn more advanced techniques from calculus to differential equations, we add more and more tools to our toolbox, each of which builds on the one before. Trying to teach somebody differential equations before they understand calculus cold would be a waste of time. So in a relatively linear example like math, the analogy to cognitive subroutines is also straightforward.
What about a field like history? Here it becomes more difficult. It's unclear what the building blocks are, how the different subfields of history interrelate, and what techniques are necessary at each level. Here we get a better picture of where the cognitive subroutines analogy may start to fail. It applies when there are techniques to be learned, preferably in a layered way where each technique depends on learning the one below it, much in the way that subroutines are built up and layered. Trying to fit broader-based disciplines such as history into that framework is going to be a stretch.
Perhaps history might be a better example of the context-dependent cognitive subroutines, where we have a few standard techniques/theories that get activated by the right set of inputs. So we have our pet theory of socioeconomic development and see ways to apply it to a variety of different situations (I'm totally making this up, of course, since I'm realizing that I don't actually know what a historian does). Actually, this makes a lot of sense. In fact, I'm doing it right now; I came up with a theory (cognitive subroutines), and am now trying to apply this theory everywhere to see how it fits. By trying it in a bunch of places, I'm getting a better sense of what the proper input conditions for the theory are, and can see how to refine it further.
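In code terms, the context-activated version of the idea looks less like a layered call stack and more like a dispatch table: a few standard techniques, each one triggered when its input conditions match. Here's a toy sketch (the "lenses" are invented for illustration, in keeping with my admission above that I don't know what historians actually do):

```python
# Two pet "theories", each an analytical subroutine we've trained.
def socioeconomic_lens(event):
    return f"analyze {event} in terms of class and economics"

def great_person_lens(event):
    return f"analyze {event} in terms of influential individuals"

# Each entry pairs an input condition with the subroutine it activates.
SUBROUTINES = [
    (lambda e: "revolution" in e, socioeconomic_lens),
    (lambda e: "reign" in e, great_person_lens),
]

def activate(event):
    # The first subroutine whose input conditions match gets triggered.
    for condition, subroutine in SUBROUTINES:
        if condition(event):
            return subroutine(event)
    # No ingrained subroutine fires, so we fall back to slow conscious thought.
    return f"no ingrained subroutine matches {event!r}; think it through consciously"

print(activate("the French revolution"))
```

Refining the theory, in this picture, means tuning those input conditions: trying the lens in lots of places and noticing where it fires usefully and where it misfires.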
So for history, the important thing to teach may not be individual theories, but the meta-theory of coming up with good theories in the first place. In other words, critical thinking skills. As mentioned in my new directions post, I think such skills are broadly applicable, from politics to history to evaluating advertising. With such meta-skills, there would be an infrastructure in place for building up appropriate cognitive subroutines, and for understanding the limitations of the cognitive subroutines we already have.
One last thought on the subject of cognitive subroutines and how they apply to learning. What does the theory have to say about memorization-based subjects? From medical school to history taught poorly, there are many subjects which are memorization-based. I don't think there's really anything to be gained here. Memorization, like cognitive subroutines, is all about repetition, but I don't think that the cognitive subroutine theory gives us any new insight into how we can improve somebody's memorization skills.
I also tend to think that memorization will become less and less useful moving forward, as I noted in my information carnivore post. Why memorize when you can Google? However, developing the cognitive filtering subroutines necessary to handle the flood of information available is going to be tricky. That was the point of that information carnivore metaphor, but it's interesting that it comes back up again in this context.
Anyway. There's some fertile ground here for thought, again trying to think of ways in which this theory can be less descriptive, and more prescriptive. I'll have to spend some time trying to flesh things out.
posted at: 23:26 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal
Emotional Design, by Donald Norman
I go back and forth on my feelings about Donald Norman. I think that his central observation in The Design of Everyday Things was a really important insight into how omnipresent the role of design is. I liked his idea of information appliances in The Invisible Computer. But I've always been left a little bit annoyed at how simplistic his analysis tends to be. Alas, Emotional Design continues in that vein.
Interestingly, Emotional Design ties into Blink and Sources of Power. In the prologue, Norman is trying to establish that "emotion is a necessary part of life" and then states "The affective system makes judgments and quickly helps you determine which things in the environment are dangerous or safe, good or bad." Sounds an awful lot like thin-slicing to me. His ideas also share characteristics of Bloom's inner-judges in Global Brain: Norman refers to research showing that "positive emotions are critical to learning, curiosity, and creative thought" and that anxiety tends to narrow thought processes, much as Bloom's inner-judges reward creative behavior with positive emotion and vice versa.
So there's not much that's new to me in Emotional Design. I did like his partition of thought and design into the visceral (pre-conscious initial reactions), behavioral (learned structures corresponding to our experiences, which I think is essentially the same idea as cognitive subroutines), and reflective (conscious thought, generalizations and recursion). He spends some time delving into how the three levels interact in design; a good chef's knife is satisfying on the visceral level ("Ooh, shiny!"), behavioral level (it performs consistently and precisely), and reflective level (appreciating how its form follows its function). More importantly, he addresses situations where the three levels are in conflict, where something is viscerally attractive, but reflectively repugnant, like junk food, or viscerally repugnant and reflectively attractive, like most of modern art.
The rest of the book kind of meanders around discussing various aspects of this three-level approach to design, and then takes a bizarre turn into making the case for machine emotions. I think he's trying to make the point that machines need to have the ability to learn autonomously and be able to express their inner state more effectively. In other words, we know that we get cranky when we get hungry. He suggests that machines should become cranky when they're low on power, so that those interacting with them, whether machine or human, could know what's going on internally. I think this is stupid - a power gauge is a much easier thing to read. I also think that a machine's ability to learn reflectively, in the manner of the cognitive subroutines that I am suggesting as a model for our brains, is a far more difficult problem than Norman suspects.
There's not a lot here. I finished the book yesterday, and it's already almost completely faded from my consciousness. I'm glad I got it from the library, because I would have felt cheated if I'd bought it. It is encouraging in one sense - I think I have enough ideas from my blog in various forms to write a far more interesting and thought-provoking book. Now I just need to buckle down.
posted at: 23:24 by Eric Nehrlich | path: /books/nonfiction/general | permanent link to this entry | Comment on livejournal
Clay Shirky on cognitive maps
Clay Shirky had an interesting idea in an article over at Many-to-Many, where he divides the world between radial and Cartesian thinkers. Here's how he makes the distinction:
Radial people assume that any technological change starts from where we are now - reality is at the center of the map, and every possible change is viewed as a vector, a change from reality with both a direction and a distance. Radial people want to know, of any change, how big a change is it from current practice, in what direction, and at what cost.
Cartesian people assume that any technological change lands you somewhere - reality is just one point of many on the map, and is not especially privileged over other states you could be in. Cartesian people want to know, for any change, where you end up, and what the characteristics of the new landscape are. They are less interested in the cost of getting there.
It's a handy distinction. The radial thinker says "Okay, this is where we are, let's see where we can go from here." The Cartesian thinker says "Over there is where we need to be. I don't care where we are, but let's go that way." It's the pragmatist vs. the idealist, the engineer vs. the scientist. Incremental improvement vs. paradigm shifts. Shirky applies the distinction to help untangle some of the differing perspectives on Wikipedia, and clarifies why he thinks the two sides are talking past each other.
The interesting thing was what happened when I tried to figure out which kind of thinker I was. My first reaction was, "Oh, yeah, I'm totally a radial thinker", thinking about my tendencies at work where I figure out the minimum change I can make to get something working right now. That's partially out of efficiency (aka laziness), and partially a result of having seen far too many Cartesian thinkers get bogged down trying to do a total redesign in a world of changing requirements. So when presented with a feature request, I tend to take stock of what I have already implemented, and think about the easiest way to kludge it to add the feature, rather than spend (waste) time thinking about what future features might be added, thinking about how I should design to handle the most general case, etc. From this viewpoint, it seemed obvious that I was a radial thinker.
Then I thought about it some more, and realized that in my personal life, I'm far more of a Cartesian thinker. I have a vision of an ideal, but it's far from what I currently have, and making a few minor changes will make very little headway in terms of moving me towards that ideal, so I don't bother doing anything at all. We can see this in my lack of progress towards finding a new host for this blog, or towards becoming a social software programmer, or even in little things like how long it took me to buy a bed.
So now I'm both a radial and a Cartesian thinker. That doesn't make sense. Except that I think it does, in light of my theory of context-activated cognitive subroutines. In one context, I think one way. In another, I think the other. When I poke and prod further, I can think of reasons why I have different opinions in different contexts; I'm a radial thinker at work because I've seen too many efforts fail at trying to achieve the ideal general case, whereas my approach of rapid prototyping and incremental improvement has done well for me so far. I'm a Cartesian thinker in my personal life because I tend not to compare myself to others, and instead compare myself to my potential, to a putative ideal version of myself. Different contexts, different identities.
And I can break it down even further. In my life at work as a programmer, I'm a radial thinker, as previously noted. In my dealings with management, though, I'm still an unrepentant idealist. I know there are reasons for timesheet software or process and micro-management, but I can see where I think we should be, and get really frustrated that we seem stuck in an entirely different part of the phase space. Such frustration is a Cartesian reaction, because Cartesian thinking (in Shirky's definition) doesn't accept reality as the starting point, but only as a possible destination. So even my work identity is fractured along these lines. Lots of grist for the cognitive subroutine theory in this seemingly simple observation of different thinking patterns.
I'll close with some thoughts on the radial vs. Cartesian dichotomy that Shirky suggests. In the long run, I think the radial thinkers will have the advantage, for all the reasons that Shirky has mentioned previously with regard to Wikipedia. Cartesian thinkers spend a lot of time discussing how things should be, and complaining that the world doesn't match the ideal they have in their head -
danah's response illustrates this attitude where she says essentially that the radial thinkers' improvements are horizontal moves that don't address the underlying problems she has with Wikipedia (or Britannica, for that matter). Radial thinkers don't spend their time exploring the entire possible phase space of what might be possible; they start with the way things are, and get to work changing it. It's using one's effort efficiently. In my work life, some of my most frustrating coworkers have been incredibly intelligent PhDs who want to spend several months perfecting a mathematical model or nailing down every possible contributing factor to an analysis, instead of saying "Okay, it's good enough, let's see what we can do." Again, it's the engineer vs. scientist viewpoint. There's a place for the academics, and for the dreamers, to help imagine new ideals, and guide the incremental changes of the radial thinkers. But in the end, the radial thinkers are going to be the ones building tools and getting stuff done.
posted at: 01:28 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal