Who am I?

You can look at my home page for more information, but the short answer is that I'm a dilettante who likes thinking about a variety of subjects. I like to think of myself as a systems-level thinker, more concerned with the big picture than with the details. Current interests include politics, community formation, and social interface design. Plus books, of course.



Mon, 04 Apr 2005

conversational alignment
This is a post I've been thinking about for a while, partially wrote, but never got around to finishing. And I'm only finishing it today because I want to write another post that refers to it. Welcome to the wacky world that is my mind.

Here's the question of the day: why is it that we have better and longer conversations with people that we know well? It seems like it should be the other way around - with people that we don't know, there's endless amounts to talk about, since no history is shared. With our good friends, we know all of their stories, we know all of the inside jokes, things that would otherwise take thirty minutes to explain can be referenced in a single word. And yet I can often find myself talking for hours with my best friends, whereas with people I don't know, the conversation dies out in minutes, if not seconds. So understanding what the difference is matters to me, because I like good conversations.

After thinking about it for a while, I decided that it is not despite, but because of, those hours and hours that I have invested learning all of my friends' histories and inside jokes that we have good conversations. We have invested that time in developing an understanding of each others' mindsets. We can move past surface issues like definitional considerations and on to the really interesting idea cracking that lies underneath. We can use those inside jokes and references to skip over the boring parts and get to the heart of philosophical issues.

Essentially, all those hours we've spent learning about each other have let us align our reality coefficients, so that we are living in the same reality when we speak. As that footnote suggests, there has to be an initial similarity of reality coefficients to make conversation possible at all, but I think that reality coefficients can be jostled into closer alignment by steady application of conversation. The more we talk with somebody, the more we learn to view reality through their eyes, understanding why they place the values on things that they do. And by doing so, we can get to the core value differences and start exploring why those differ, which is often really interesting.

Meanwhile, with people we don't know, we can start talking, but the conversation will often get hung up on very shallow things like a sharing of history ("Where'd you go to school? Oh, MIT? Wow, you must be smart!"). And there's nothing wrong with that - you have to go through that stage to get to the more interesting stuff. But often, when faced with the effort of trying to get to know new people and put in the work necessary to get them aligned with my internal cognitive structure, I throw up my metaphorical hands in despair, and either go find some of my good friends or come back home and spew on my blog.

I guess this whole post is a restatement of the idea of exformation from The User Illusion, where exformation is the context that we use to interpret incoming communication. Since all incoming communication, whether speech or text, is relatively low bandwidth, it is up to our brains to unpack the coded information, using the "exformation" context, to make sense of it. I think the bit that is new here (although I haven't read that book in years so it's possible he talks about this) is the idea that a greater familiarity with somebody leads to a context that is more shared, and therefore communication that is less likely to be misinterpreted.

Huh. Just pulled out the book, and Norretranders doesn't quite make the point, but has an apropos quote:

The least interesting aspect of good conversation is what is actually said. What is more interesting is all the deliberations and emotions that take place simultaneously during conversation in the heads and bodies of the conversers.

With people we don't know, "what is actually said" is pretty much the same as "all the deliberations and emotions". Because there is no shared context, we are forced to communicate through the narrow bandwidth of speech. With good friends, a shared context of "exformation" has been developed so that we can transmit much higher volumes of information through speech because a few words will evoke whole sets of memories. As I said earlier, "things that would otherwise take thirty minutes to explain can be referenced in a single word". So our greater familiarity with each other allows us to have much broader exchanges of ideas because we are leveraging that familiarity to exchange vast swathes of information. Or to tie it into my recent line of thought, greater familiarity means building up similar cognitive subroutines, such that the same stimuli evoke the same reactions.
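That compression framing can be sketched literally. In the toy below, shared context plays the role of a preset compression dictionary (zlib's zdict); the "memory" text and the message are invented examples of mine, not anything from Norretranders.

```python
import zlib

# Toy illustration: the same short message costs fewer bytes when sender
# and receiver share prior context (here, a preset compression dictionary).
# The shared memory and the message are invented examples.

shared_memory = b"remember that camping trip when the tent collapsed in the rain"
message = b"the tent collapsed again"

def compressed_size(msg: bytes, context: bytes = b"") -> int:
    comp = zlib.compressobj(zdict=context) if context else zlib.compressobj()
    return len(comp.compress(msg) + comp.flush())

stranger = compressed_size(message)               # no shared history
friend = compressed_size(message, shared_memory)  # shared history

# With the shared dictionary, a few bytes reference the whole memory.
print(stranger, friend)
```

The receiver needs the same dictionary to decompress (via `zlib.decompressobj(zdict=...)`), which is rather the point: the shortened message only works between parties who already share the context.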

Anyway. More thought required. I think there's some interesting stuff here, especially in the idea that becoming better friends is a re-alignment of reality coefficients. And that leveraging those reality coefficients is why we have better conversations with our friends than with strangers. But I'm getting tired, and I have one more quick post to write, so I'll stop here for now.

posted at: 23:11 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sat, 02 Apr 2005

Context sensitivity
I've talked about the importance of context to cognitive subroutines before, but I wanted to pick up on it again this morning. I've just spent most of the last three weeks in New York City, living a very different kind of life in a different place. I walked almost everywhere I went, I was going to shows almost every evening, etc. And I was curious whether, when I got back, it would feel weird to be back after having spent that much time living a different life. And the answer is no.

This is fascinating to me. If we only had one set of responses, I think three weeks would be long enough to start shifting those responses to a new paradigm. But that didn't happen. What I think happened was that the old routines didn't really apply in New York (things like the impulse to hop in a car to get anywhere, or the idea that I should buy things in bulk), so I developed a new set of New York responses. As soon as I got back to my old life in Oakland, with its set of environmental inputs, the old routines were re-activated. But the thing that's really interesting to me is how seamless it felt. It didn't even occur to me that one set of behavior patterns should feel out of place until I asked myself the question whether it felt weird to be back.

I think this demonstrates the real power of context, that our environment controls how we respond so smoothly that we don't even notice when our behavior patterns are wildly different. We think of ourselves as having a central core of behavior (and to some extent we do), but it's amazing how easy it is to alter that behavior by changing the environment. The obvious examples are things like the Milgram experiment, where just by having a white-coated person in authority tell them to, people were willing to shock an unseen learner into unconsciousness. But it shows up in all aspects of our lives. We behave differently at work than at home. We behave differently on vacation.

This is why I think Lakoff's work on framing is so important. By changing the frame, we change the context, and people respond differently without even realizing it. Our behavior changes utterly seamlessly. Our consciousness papers over the gaps and makes it all seem consistent, even when it manifestly isn't.

It starts to get pretty disturbing when you think about framing as a form of brainwashing. Framing's goal is to change what people think, by changing their view on an issue. This is what I don't like about Lakoff's work, that he suggests that we must fight frames with frames. I think he might be right, that such a battle may be our only option, but I'd love it if we could teach people the self-awareness necessary to understand frames, understand context, and be dispassionate enough in their observation of themselves to see how their behavior changes in response to such frames. I know it's a pipe dream, though. Most people have a strong sense of themselves, believing in their continuity from moment to moment. Our consciousness is wired to preserve that illusion (which is an interesting question in itself - why should our consciousness do that? What benefit do we get from not seeing ourselves as a set of context-activated cognitive subroutines? And how did our consciousness get so good at explaining away all the little inconsistencies of our unconscious? I'm thinking specifically of the way that people who have been hypnotized will rationalize their behavior even when it is ludicrous. Wow, this is a long parenthetical. Um, anyway).

While I think this line of thought is a little depressing, that we are nothing more than automatons responding thoughtlessly to our environment, there is one upside: it answers the question of how we change ourselves - we change the environment. I mentioned this before in the setting of social identity, but it is perhaps more widely applicable. I've been trying to figure out ways to modify my behavior for a long time, so maybe this will help. Perhaps to write more, I need to join some sort of writer's club. Certainly joining an ultimate frisbee league did wonders for my physical fitness. Would going back to grad school help put me in the frame of mind necessary to pursue work on social software?

On the other hand, I don't want to take this line of reasoning too far. I do believe there is a central core of tendencies that shapes how our unconscious cognitive subroutines develop. No matter how often I get plopped into a loud bar or party environment, I don't think I will ever suddenly morph into that cool dude who is utterly smooth with that situation. The cognitive subroutines are already in place to respond negatively to that set of inputs. To change that behavior would require a lot more than more exposure to that environment.

I suppose it's possible to do a slow morph, though. I mentioned this in the case of physical activity, but perhaps it's all about taking small steps, and changing one's response a little bit at a time. I've already taken a bunch of steps along this path, I think. I'm far more comfortable in dinner party conversations and the like than I was a few years ago. I can even survive in a bar- or club-like environment for a couple hours now, when I would have fled instantly several years ago. Continued exposure is starting to change my reactions. This is a case where the strategy of tossing somebody in the deep end to teach them how to swim is ineffective, I think. There's too much to process for that to work effectively. But by slowly changing the environment from one of comfort towards one of challenge, the cognitive subroutines will also be modified slowly, such that by the end, they will seamlessly handle the challenging environment and the person will be stunned at how easy it all seems. I think we've all had that feeling when we've learned something new, when we finally get it right - we say "Wow, that's easy!" with a tone of pleasant surprise. We never would have imagined that we could learn it, but by building up the behavior step by step, when it all comes together, it does seem easy.

There's some interesting stuff here. Who knew all this would come out of my initial question of "Does it feel weird to be home?" One thought is that I need to spend more time blogging. Just sitting down and starting to write, even with only a vague thought to start with. The exercise of developing an idea is one of those things that I need practice on, and the only way to get better at it is to practice and to continue to develop my comfort with it. Heck, one of these days, I may be able to turn myself into a real writer. Tomorrow maybe I'll get into some of my initial thoughts on Latour's book. Or maybe I'll talk about something entirely different.

posted at: 08:26 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Tue, 22 Mar 2005

What is powerful, part two
[Apologies for the barrage of posts - I'm trying to be more disciplined about spending a couple hours writing in the morning, and, well, I generate a lot of verbiage. The editing part still needs work obviously. But you'll have to suck it up. Or just skip it.]

In the previous post, I suggested a definition of powerful, as it relates to art and ideas, as being that which connects people. But being the contrary person I am, I'm immediately going to offer another viewpoint. Last night while thinking about what the value of a network of ideas was versus an individual idea, I wondered if I could tie this whole discussion into the science of networks, as described in Six Degrees. Perhaps in the tipping point phenomenon. I mentioned in my first cognitive subroutines post how I occasionally have flashes of insight, where ideas realign into a new pattern. Is that a tipping point in my neural net? Do different people have different threshold levels of evidence, such that some generalize quickly, and others need a preponderance of evidence?

Then another thought struck me. The thing that makes the small world phenomenon work is the unanticipated links between disparate parts of the network. The small world phenomenon doesn't work if people only know their local friends. It's only when a few people (not many at all, according to Watts) link their local set of friends to a set of friends far away that the whole thing shrinks. The far links are the powerful ones that make the entire network "small".

Once I thought of it that way, the extension to ideas was obvious - ideas that connect wildly disparate modes of thought are powerful, because they link up different areas of the idea network. The most powerful ideas are the ones that cross disciplines, connecting things that nobody thought were even related. Maxwell unifying electricity and magnetism. The electron shell theory providing a basis for the chemical periodic table. I like this perspective because it makes the connection to the science of networks explicit. We can think about how the different idea networks interrelate, and how to construct links between them that will make the idea network as a whole more compact.
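That far-link effect can be sketched with a toy network (my own parameters and construction, not the actual models from Six Degrees): a ring where everyone knows only their nearby neighbors, plus a handful of random long-range links.

```python
from collections import deque
import random

# Toy illustration of the "far links make the network small" point.
# Graph sizes and link counts are arbitrary choices for demonstration.

def ring_lattice(n, k=2):
    """Each node knows only its k nearest neighbors on each side of a ring."""
    graph = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            graph[i].add((i + j) % n)
            graph[(i + j) % n].add(i)
    return graph

def avg_path_length(graph):
    """Average shortest-path distance over all ordered node pairs (via BFS)."""
    n = len(graph)
    total = count = 0
    for start in graph:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nbr in graph[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        total += sum(dist.values())
        count += n - 1
    return total / count

random.seed(0)
local_only = ring_lattice(100)
before = avg_path_length(local_only)

# Add just five random far links between disparate parts of the ring.
with_shortcuts = {node: set(nbrs) for node, nbrs in local_only.items()}
for _ in range(5):
    a, b = random.sample(range(100), 2)
    with_shortcuts[a].add(b)
    with_shortcuts[b].add(a)
after = avg_path_length(with_shortcuts)

# A few far links collapse the average distance across the whole network.
print(round(before, 1), round(after, 1))
```

The design point matches the post: the local links barely matter for global distance; the handful of far links do almost all the work of making the network "small".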

So this is a different definition of powerful than the one in the previous post. That previous post started with art and moved to ideas; can I do the reverse and apply this new definition to art? It's unclear. What does it mean to connect different areas of art? To take one example, music that breaks barriers is often seen as revolutionary. Rock and roll built off of the blues, but brought it into the mainstream. I suspect the same is true in art, but I'm not sure I know my art history well enough to come up with any examples. Perhaps Gauguin's incorporation of Pacific Island art into his work.

Now we have two definitions of powerful. One is about the effect something has on us personally, and our connections with each other. The other is about the effect something has on the network, growing the capabilities of the network by providing more links, where the advancement of the field is perceived as being a good in its own right. Is one definition "better" than the other? It's hard to say. But I find it interesting that my speculation on art as a web has opened up into this whole separate discussion on value and power. Down the rabbit hole we go.

posted at: 09:08 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

What is powerful?
In yesterday's post, I quipped "art is in the network, not in the nodes." While walking around yesterday, I started trying to figure out what I meant by that. It's a cute quip, but what does it mean? I also wanted to tie it into the ideas I presented towards the end of this post, where I say "It's about the network of ideas. An individual idea isn't very useful or exciting to me. It's about how it hooks into a big picture." Again, the network, not the nodes.

Where to start? Let's start with the idea of value. Or to put it more bluntly, power. What does it mean to be powerful? In art, we think of a piece as being powerful when it has an effect on us. Generally an emotional effect, but it may have an intellectual impact on us. Picking up from yesterday's discussion, though, the power is not in the piece itself; it is in the connection between the piece and the viewer. We can all think of pieces of art that have a powerful effect on us, that are disdained by the world at large. The TV show Buffy is a good example - many would not even call it art, but it resonated strongly with me. It may not be powerful to the general audience, but it is to this audience of one. I think this demonstrates that the locus of power is not in the work itself, but in my connection to it.

What do we mean when we say a piece of art is powerful, when we imbue the object itself with that quality? We generally mean that it has a powerful effect on most people that view it. There are always going to be curmudgeons or naysayers who dislike any given work. But the greatest of works are the ones that speak to everyone. They bring people together, by evoking similar reactions in a whole group, demonstrating that no matter what their surface differences, they have the same reaction to this piece. They create an instant community. I think the Brahms Requiem is a good example of this. When we performed it soon after 9/11, it brought the whole symphony hall together into a powerful statement of mourning and hope.

How does this definition of power extend to the world of ideas? Are ideas powerful insofar as they help create connections between people? This is an attractive definition. What is the single most powerful idea in the world? "For God so loved the world that he gave his one and only Son, that whoever believes in him shall not perish but have eternal life." This idea has bound together hundreds of millions of people into a single faith. It has provided the basis for innumerable communities, both local and global.

What are some other powerful ideas in this bridging sense? The idea of the scientific method is one. The world of science extends across nations and continents. Perhaps sports, as I mentioned in that instant community essay. It also explains why it's so important to me to do my thought development in a blog, in public, garnering feedback. The ideas in and of themselves are interesting, but what I really want is to think of ideas that provide a new viewpoint on the world to myself and others. And I can't do that in isolation, only in connection with others.

I think the interesting thing here is that we have a definition of powerful as the quality that allows people to connect to each other. Art or ideas do not have an inherent value; they have value in their ability to connect people. Being the social creatures that we are, we place the highest value on things that let us create social bonds among us. I like this idea a lot. It re-orients us to the value of human connection, and indicates that our connections with our friends and family are our most valuable possession. And that is a message that I totally support.

posted at: 08:43 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Mon, 21 Mar 2005

Art as a web
DocBug put up an interesting post, wondering why we put all the fame and glory on a particular artist, when their work is often the result of a dense web of collaboration, influences and support. I started responding to that post in a comment, and then realized I had a lot more to say than I thought I did, so I'm responding in my own blog.

Here's the basic concept. Our culture has a tendency to try to objectify things, not necessarily in a pejorative sense, but in the objectivity sense most commonly associated with journalism. That there is a thing, and it has these properties that are part of the thing's ineffable nature. That things are one thing or another, in a Platonic ideal sort of sense. Associating qualities specifically with an object, rather than describing the object as possessing a quality that it could later give up, tends to confuse things. This is one of the reasons that people like Robert Anton Wilson suggest we use a version of English called E-Prime, which abolishes "to be" and all of its variants.

How does this apply to the situation in question? We want to be able to easily assign credit or blame to people, to have a simple relationship between cause and effect. To take an unrelated example, when somebody does something hurtful to us, it's easier to say "They are evil" than it is to understand why they might have chosen to take that action. It's simplistic thinking, but it has pervaded our society, and holds true in art as well. If we like or dislike an art piece, we give credit/blame to the artist. We tend to project all of our personal feelings and perceptions of the art onto the artist, and, in our own minds, give the artist all of those qualities.

This is why it is so easy to get in an argument about art; two people may have very different reactions to a piece of art, which they both associate with the piece of art itself, rather than with their own relation to art. So they can't understand what the other person is talking about, because they are seeing two completely different pieces of art, even though they're looking at the same physical object. The meaning is not in the art itself, but in each person's individual connection to the art.

And this is where I think I can tie it back into the original point that Bug was making. Art has no value in and of itself. If an artist makes a beautiful piece, and nobody ever sees it, or if a composer writes a beautiful song, and nobody ever hears it, is it art? I would contend that it is not. Art is about creating that connection between the artist and the audience via the piece of art. In geekspeak, art is in the network, not in the nodes.

That's also true for the creation of art, as Bug points out. Art does not get created in a vacuum. Artists need tools to do their work. They influence each other. They are influenced by what's going on in society. Looking at a piece of art divorced from all of its sociopolitical context is almost nonsensical. It's making the mistake of assuming that the piece of art carries all of its context with it, that any qualities associated with the art are contained within the object, not in the network. I'm pretty sure I'm restating the basic postmodernist position at this point, from my meager understanding of it, so I'll leave it at that, and move onto another question.

How did we end up here? Why is our American society so inclined to try to stuff all of the properties of an object into the object itself rather than the network of relationships surrounding the object? How did we get to a position that our president could declare entire nations evil, and be taken seriously? (okay, that's not directly relevant to this essay, but I think it's a manifestation of the same phenomenon).

Here's what I think. A hundred years ago, Americans would have had a very different perspective. At that point, we were all deeply embedded in our communities. There was a tight web of relationships in any given town, as none of us could be self-sufficient, so we had to know the butcher, or the farmer, or whatever. (I'm idealizing here - go with it). This let us appreciate the power of the network, of realizing how we depended on each other in a long-term sense.

In the modern age, we've moved to a far more self-sufficient model, where our relationships with many people happen in a purely transactional mode. I go to the supermarket, I pick out some stuff, I hand them money, and I leave. All of the networks and relationships necessary to make that happen, from the shipping and distribution networks, to the bar code scanner, to the credit card reader, are hidden. It's implicit, not explicit. So I treat the supermarket, and all of its employees, as mere objects, rather than as people. I feed in money, I get out groceries. No human interaction. To use Fight Club's description, we are a single-serving society.

I'm going to posit that Asian and European societies do not have this same object-oriented perspective. (Wow. I just realized that object-oriented is the perfect nerd description of it, because a software object in OO design carries all of its properties and methods with itself. Damn.) Asian societies because of the pervasive influence of Zen and Buddhism and Hinduism, which explicitly state the way that we are all interconnected. And European societies, because they have done a better job of clinging to the human side of interaction, of having the denser communities.
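Since the post reaches for the OO analogy itself, here is a literal toy version of the contrast. The classes, names, and numbers below are illustrative inventions of mine: one model stores qualities inside the object, the other stores them in the viewer-object relationships.

```python
# Toy contrast between the two worldviews discussed above.
# All names and values here are made up for illustration.

class ArtworkAsObject:
    """Qualities live inside the object, Platonic-ideal style."""
    def __init__(self, title, powerful):
        self.title = title
        self.powerful = powerful  # a fixed property of the thing itself

class ArtworkInNetwork:
    """Qualities live in the relationships around the object."""
    def __init__(self, title):
        self.title = title
        self.reactions = {}  # viewer -> how strongly the piece resonated

    def is_powerful_for(self, viewer):
        return self.reactions.get(viewer, 0) > 0.5

mona_lisa = ArtworkAsObject("Mona Lisa", powerful=True)  # power "in" the object

buffy = ArtworkInNetwork("Buffy")
buffy.reactions["Eric"] = 0.9      # resonated strongly with one viewer
buffy.reactions["a critic"] = 0.1  # and not at all with another

# The same object is "powerful" in one relationship and not in the other.
print(buffy.is_powerful_for("Eric"), buffy.is_powerful_for("a critic"))
```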

The connection between the American single-serving society and the American tendency to view art (and everything else) in an object-oriented fashion is still a bit fuzzy, but I think it makes sense. When we treat everything in our lives as objects from which we are trying to get stuff, and which we evaluate based on whether it has the qualities that we need at any given point in time, it's not surprising that we start to associate the qualities directly with the object itself, rather than with the network of relationships associated with the object.

I think there's some really fertile ideas here, especially in trying to think about what it means for the value to be in the network, how that could be measured, and how that could be applied if we recognized it explicitly. But I'm going to pick up on those another time. Or not.

posted at: 06:44 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Thu, 17 Mar 2005

Cognitive subroutines extensions
In my last post about cognitive subroutines, I extended the idea to allow for us to use other people as part of our internal routines. I was using this in the idea of team building, but this idea of leveraging elements outside of ourselves can be extended even further. While I was at the Whitney yesterday, I was poking around their bookstore and saw a book called Me++: The Cyborg Self and the Networked City, by William J. Mitchell. I picked it up, flipped through it, and every page I flipped to seemed to have an interesting observation. So I bought it on the spot. The other book I'd brought on this trip (Politics of Nature by Bruno Latour) was just proving too dense for me to deal with, so I figured I would read this instead. It's excellent. He describes how our individual selves are slowly melting into the environment, to the point where it's hard to say where our "self" ends. A great non-cyber example he gives is of a blind man walking down the street using a stick to navigate. Is the stick part of his sensing system? Absolutely. Is it part of "him"?

Tying this back into the cognitive subroutines theory, in the same way that cognitive subroutines can rely on other people to perform part of their processing, it's obvious that they can rely on other external mechanisms as well. I don't bother remembering where anything online is any more, because I can just use Google. On the output side, I don't have to think about the individual physical actions necessary to drive a car; I just think "I want to go there", and it pretty much happens automatically. So we can use elements of our environment to increase our processing power, and to increase our ability to influence that environment.

In fact, this is really interesting, because it gets back to a question I asked at the end of this post, which was how to reconcile this theory with the ideas in Global Brain. By expanding the scope of the cognitive subroutines to include external influences and external controls, we then build in the power of the collective learning machine, because each of us will choose which elements of the external environment to leverage. Things that are useful, whether as mental constructs for easing cognitive processing or as physical artifacts for increasing our control, will get resources shifted towards them.

This is essentially the idea of the meme at work. A good idea, a good viewpoint of looking at the world, is viral in nature. I come across a way of looking at things. I start using it, and it explains a lot to me, and I find it valuable. I start telling other people about it, whether at cocktail parties or via this blog. If they find it useful, they pick it up. And so on and so forth. It gets incorporated into their internal cognitive subroutines, and soon it is embedded so deeply that they can't distinguish it from "reality".

I was thinking about this recently in the context of books. I like reading, obviously. I like books with ideas, books that express a certain viewpoint on the world. I was trying to answer the question of why I read, what makes a book like Me++ so compelling to me? I think it is this opportunity for picking up new ideas, new cognitive subroutines that I can then apply elsewhere. I described in that original cognitive subroutines post that moment when a bunch of synapses light up, and a whole new set of connections are made in my brain. There's almost an audible click as ideas lock into a new formation. And books are a way of finding those formations. They are an opportunity to hook the ideas I have in my head into the unfathomably large set of ideas that is already out there in the space of human knowledge. Books help me to find ways to hook my ideas into those of thinkers past, as well as giving me the ability to leverage the insights of those thinkers, by not having to recreate their work.

It's about the network of ideas. An individual idea isn't very useful or exciting to me. It's about how it hooks into a big picture. This is probably because I'm a highly deductive thinker. When I was a physics student, I would struggle woefully for the first half of the term, as they introduced individual concepts in an isolated context. At some point, though, the light would go on, and I'd see the whole structure, and then it all made sense; I could see how the individual concepts fit together, and how to use them. I need those kinds of structures to sort through ideas. That may be an individual thing, though.


This isn't the clearest post I've done. But I like the direction this is heading. I think I have a provisional way of hooking the cognitive subroutines theory into the global brain network emergence theory. I like Me++'s idea of extending ourselves out into infinity, and how that applies. I like how I can tie it into my own tendencies, from liking to read, to deductive thinking. This is actually getting to the point where it's almost coherent and consistent. Now I just have to put together an outline. Yeah. Any day now.

posted at: 09:00 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sun, 13 Mar 2005

Cognitive trust
[Bonus post that I wrote at the airport last night]

I liked this quote from Emotional Design:

"Cooperation relies on trust. For a team to work effectively each individual needs to be able to count on team members to behave as expected. Establishing trust is complex, but it involves, among other things, implicit and explicit promises, then clear attempts to deliver, and, moreover, evidence. When someone fails to deliver as expected, whether or not trust is violated depends upon the situation and upon where the blame falls." (p.140)

This would seem to be the team equivalent of cognitive subroutines. I can imagine that analogous negotiation and trust-building is happening within the swirl of our subconscious as we navigate through the world. Stereotypes that seem to work well get reinforced, and encoded into cognitive subroutines. Assumptions that prove to be wrong are trusted less the next time, with more restrictions placed on their activation conditions.

It's interesting to me because it provides an obvious extension of the cognitive subroutines theory to interpersonal interactions, at least in a team sense. I've talked about team building before (and actually say something very similar to Norman's quote), and part of what I think makes a good team is that we can offload tasks onto other people; as I put it in that post, "my teammates trust me to deal with fixing the bugs; once it's reported to me, they forget about it and move on." A team can achieve more than the sum of its parts because each can farm out processing to others who are in a better position to handle a given situation.

It's the cognitive equivalent of labor specialization. If I'm good at software, and my coworker isn't, then it makes sense for them to ask me to perform a software task that they need to do, because I'll do it in far less time than them. In return, my coworker who is better in lab may run an experiment for me. Both of us stick to what we're good at, and we can leverage our expertise to make everybody more productive and efficient.

The other analogy that I like is that if we treat the brain as a set of cognitive subroutines that can call each other, then there's no reason not to think of other people as subroutines that we can also call upon. When we first start working with another person, we don't quite know what their API is or what their capabilities are, but as we learn to trust and respect them, we can learn to call upon them with little more overhead than we do a subroutine in our own head. It's kind of a bizarre concept, but if it works, it's a first step towards a Global Brain.
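Being the programmer that I am, here's a toy sketch of how that might look. To be clear, the class, the numbers, and the threshold are all my own invention, just a way of making the analogy concrete: a teammate is a callable with an "API" of tasks they handle, and trust built up over time gates whether we delegate to them.

```python
# A toy model (pure invention, not from Norman's book): teammates as
# callable subroutines whose "API" is the set of tasks they can handle,
# with trust built or eroded as they deliver (or fail to).

class Teammate:
    """A collaborator modeled as a callable subroutine."""

    def __init__(self, name, skills):
        self.name = name
        self.skills = set(skills)   # the "API": tasks this person handles
        self.trust = 0.0            # grows as they deliver as promised

    def can_handle(self, task):
        return task in self.skills

    def deliver(self, task, succeeded):
        # Trust-building: successes raise trust, failures lower it more,
        # mirroring how assumptions get reinforced or restricted.
        self.trust += 0.1 if succeeded else -0.2
        self.trust = max(0.0, min(1.0, self.trust))

def delegate(task, teammates, trust_threshold=0.5):
    """Farm a task out to the most trusted teammate whose API covers it."""
    candidates = [t for t in teammates if t.can_handle(task)]
    trusted = [t for t in candidates if t.trust >= trust_threshold]
    # With enough trust, we delegate and forget about it; otherwise
    # the task stays with us, overhead and all.
    return max(trusted, key=lambda t: t.trust) if trusted else None
```

The point of the sketch is the `trust_threshold`: below it, calling on someone costs us checking and worrying; above it, the call is nearly as cheap as a subroutine in our own head.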

posted at: 15:46 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Thu, 10 Mar 2005

Cognitive subroutines and learning
I was reading Emotional Design by Don Norman the other day, and he was contemplating ways in which we could leverage emotional machines to improve the learning process. This got me kick-started again on thinking about applications of the cognitive subroutines theory that I've been playing with. As a side note, I think I'm finally emerging from the dearth of ideas I was suffering for a week or so. Apologies for the banality of posts during that time.

So the question of the day is: How do we leverage cognitive subroutines for the sake of learning? What does this theory tell us about how to teach people something new?

I covered this a little bit in the footnotes of that first post. Teaching somebody a new physical action requires breaking it down into easily digestible chunks. Each chunk is practiced individually until it's ingrained in the subconscious and can be performed autonomously. In other words, we build and train a cognitive subroutine that can then be activated with a single conscious command like "hit the ball" instead of having to call each of the individual steps like "take three steps, bring the arms back, jump, bring the right arm back cocked, snap the arm forward while rotating the body, and follow through". Watching toddlers figure out how to walk is also in this category. At first, they have to use all of their concentration to figure out how to take a step, but within a short period of time, they just think "I wanna go that way" and run off.

For physical activities the analogy to cognitive subroutines is pretty straightforward, and was what I was thinking of when I first came up with this idea. How does it map to other, less concrete activities? Let's take the example of math. We start out in math learning very simple building blocks, like addition and subtraction. We move from there to algebra where we build in an abstraction barrier. As we learn more advanced techniques from calculus to differential equations, we add more and more tools to our toolbox, each of which builds on the one before. Trying to teach somebody differential equations without them understanding calculus cold would be a waste of time. So in a relatively linear example like math, the analogy to cognitive subroutines is also straightforward.

What about a field like history? Here it becomes more difficult. It's unclear what the building blocks are, how the different subfields of history interrelate, and what techniques are necessary at each level. Here we start to get a better picture of where the cognitive subroutines analogy may start to fail. It applies when there are techniques to be learned, preferably in a layered way where each technique depends on learning the one below it, much in the way that subroutines are built up and layered. Trying to fit more broad-based disciplines such as history into that framework is going to be a stretch.

Perhaps history might be a better example of the context-dependent cognitive subroutines, where we have a few standard techniques/theories that get activated by the right set of inputs. So we have our pet theory of socioeconomic development and see ways to apply it to a variety of different situations (I'm totally making this up, of course, since I'm realizing that I don't actually know what a historian does). Actually, this makes a lot of sense. In fact, I'm doing it right now; I came up with a theory (cognitive subroutines), and am now trying to apply this theory everywhere to see how it fits. By trying it in a bunch of places, I'm getting a better sense of what the proper input conditions for the theory are, and can see how to refine it further.

So for history, the important thing to teach may not be individual theories, but the meta-theory of coming up with good theories in the first place. In other words, critical thinking skills. As mentioned in my new directions post, I think such skills are broadly applicable, from politics to history to evaluating advertising. With such meta-skills, there would be an infrastructure in place for building up appropriate cognitive subroutines, and for understanding the limitations of the cognitive subroutines we already have.

One last thought on the subject of cognitive subroutines and how they apply to learning. What does the theory have to say about memorization-based subjects? From medical school to history taught poorly, there are many subjects which are memorization-based. I don't think there's really anything to be gained here. Memorization, like cognitive subroutines, is all about repetition, but I don't think that the cognitive subroutine theory gives us any new insight into how we can improve somebody's memorization skills.

I also tend to think that memorization will become less and less useful moving forward, as I noted in my information carnivore post. Why memorize when you can Google? However, developing the cognitive filtering subroutines necessary to handle the flood of information available is going to be tricky. That was the point of that information carnivore metaphor, but it's interesting that it comes back up again in this context.

Anyway. There's some fertile ground here for thought, again trying to think of ways in which this theory can be less descriptive, and more prescriptive. I'll have to spend some time trying to flesh things out.

posted at: 20:26 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Wed, 09 Mar 2005

Clay Shirky on cognitive maps
Clay Shirky had an interesting idea in an article over at Many-to-Many, where he divides the world between radial and Cartesian thinkers. Here's how he makes the distinction:

Radial people assume that any technological change starts from where we are now - reality is at the center of the map, and every possible change is viewed as a vector, a change from reality with both a direction and a distance. Radial people want to know, of any change, how big a change is it from current practice, in what direction, and at what cost.

Cartesian people assume that any technological change lands you somewhere - reality is just one point of many on the map, and is not especially privileged over other states you could be in. Cartesian people want to know, for any change, where you end up, and what the characteristics of the new landscape are. They are less interested in the cost of getting there.

It's a handy distinction. The radial thinker says "Okay, this is where we are, let's see where we can go from here." The Cartesian thinker says "Over there is where we need to be. I don't care where we are, but let's go that way." It's the pragmatist vs. the idealist, the engineer vs. the scientist. Incremental improvement vs. paradigm shifts. Shirky applies the distinction to help dissolve some of the differing perspectives on Wikipedia, and clarifies why he thinks the two sides are talking past each other.

The interesting thing was what happened when I tried to figure out which kind of thinker I was. My first reaction was, "Oh, yeah, I'm totally a radial thinker", thinking about my tendencies at work where I figure out the minimum change I can make to get something working right now. That's partially out of efficiency (aka laziness), and partially a result of having seen far too many Cartesian thinkers get bogged down trying to do a total redesign in a world of changing requirements. So when presented with a feature request, I tend to take stock of what I have already implemented, and think about the easiest way to kludge it to add the feature, rather than spend (waste) time thinking about what future features might be added, thinking about how I should design to handle the most general case, etc. From this viewpoint, it seemed obvious that I was a radial thinker.

Then I thought about it some more, and realized that in my personal life, I'm far more of a Cartesian thinker. I have a vision of an ideal, but it's far from what I currently have, and making a few minor changes will make very little headway in terms of moving me towards that ideal, so I don't bother doing anything at all. We can see this in my lack of progress towards finding a new host for this blog, or towards becoming a social software programmer, or even in little things like how long it took me to buy a bed.

So now I'm both a radial and a Cartesian thinker. That doesn't make sense. Except that I think it does, in light of my theory of context-activated cognitive subroutines. In one context, I think one way. In another, I think the other. When I poke and prod further, I can think of reasons why I have different opinions in different contexts; I'm a radial thinker at work because I've seen too many efforts fail at trying to achieve the ideal general case, whereas my approach of rapid prototyping and incremental improvement has done well for me so far. I'm a Cartesian thinker in my personal life because I tend not to compare myself to others, and instead compare myself to my potential, to a putative ideal version of myself. Different contexts, different identities.

And I can break it down even further. In my life at work as a programmer, I'm a radial thinker, as previously noted. In my dealings with management, though, I'm still an unrepentant idealist. I know there are reasons for timesheet software or process and micro-management, but I can see where I think we should be, and get really frustrated that we seem stuck in an entirely different part of the phase space. Such frustration is a Cartesian reaction, because Cartesian thinking (in Shirky's definition) doesn't accept reality as the starting point, but only as a possible destination. So even my work identity is fractured along these lines. Lots of grist for the cognitive subroutine theory in this seemingly simple observation of different thinking patterns.

I'll close with some thoughts on the radial vs. Cartesian dichotomy that Shirky suggests. In the long run, I think the radial thinkers will have the advantage, for all the reasons that Shirky has mentioned previously with regard to Wikipedia. Cartesian thinkers spend a lot of time discussing how things should be, and complaining that the world doesn't match the ideal they have in their head - danah's response illustrates this attitude where she says essentially that the radial thinkers' improvements are horizontal moves that don't address the underlying problems she has with Wikipedia (or Britannica for that matter). Radial thinkers don't spend their time exploring the entire possible phase space of what might be possible; they start with the way things are, and get to work changing it. It's using one's effort efficiently. In my work life, some of my most frustrating coworkers have been incredibly intelligent PhDs who want to spend several months perfecting a mathematical model or nailing down every possible contributing factor to an analysis, instead of saying "Okay, it's good enough, let's see what we can do." Again, it's the engineer vs. scientist viewpoint. There's a place for the academics, and for the dreamers, to help imagine new ideals, and guide the incremental changes of the radial thinkers. But in the end, the radial thinkers are going to be the ones building tools and getting stuff done.

posted at: 22:28 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Tue, 01 Mar 2005

Prescriptive context
Picking up on the identity as context post (as an aside, I need to figure out a way to thread posts, like on a bulletin board, except with comments - I've got to start doing research on my blogging software options - yes, I know I've said that before), it's time to think about how such ideas can be used. This is part of my new attempt to move away from my typical passive descriptive stance and towards an active prescriptive role, because all the cool pundits offer solutions as well as new ways of looking at the world. And I want to be a cool pundit, after all.

One obvious consequence of the idea that we are choosing our identity by choosing our social groups is that we can modify our identity by putting ourselves in situations where the environment reinforces behaviors we want to encourage. I'm thinking specifically of Alcoholics Anonymous here, where part of the power of AA is the social structure that it provides to help alcoholics quit. It is always easier to do something when other people are doing the same thing around you. Our herd instinct takes over and helps to reinforce the behavior.

We can leverage our social tendencies even more explicitly. For instance, it is drilled into us that it is important to keep promises to others, that trust is the framework around which our society is built. It's entirely possible that such behavior is wired into us evolutionarily via social feedback mechanisms. So when we really want to change our behavior, we make an announcement publicly that we are planning to do so. Then all of the social feedback mechanisms are called into play, and we are more likely to stick to our resolution. This is the basic idea of the wedding, for instance.

As a specific example, I started this blog in part as a public resolution of this type. I had all of these interesting thoughts, but I would never get around to writing them down. Putting them in a blog, thereby getting encouragement and feedback from readers, made it easier to motivate myself to write down the next set of observations, which engendered more feedback, and so on, creating a virtuous circle of behavior modification. At this point, I think it's self-sustaining, where I am enough in the habit of writing that I don't necessarily need the public feedback, but it took over a year for that to happen. And I don't think I would have had the self-discipline to write consistently for a year if it were just for myself; as a counterexample, I have tried many times to start keeping a personal journal, and have always failed. So by leveraging my social instincts in terms of not wanting to disappoint my (few) readers, I was able to change my behavior.

Another example is the importance of teamwork to a project. On a good team, everybody is doing their best, not wanting to disappoint their teammates. The team jells, and synergistically achieves much more than each person would have achieved working independently. From a personal point of view, I tend to be more productive when working with a partner. I am willing to accept failure for myself, but I don't want to fail somebody else. Again, leveraging our social instincts changes the way we behave.

A further consequence of the "identity as context" theory is its negative side. I mentioned how it applies to cults in the original post, but it can be applied more widely than that. For example, expectations play a huge role in determining how we behave. I've alluded to this before in the context of education; kids that are told they're smart will often act smarter. Kids that are told they're stupid will act stupid. It's a self-fulfilling prophecy. Part of the advantage that gifted kids have is that they are placed in gifted programs, surrounded by other smart kids, and say to themselves "Hey, I can do that!" They are placed in social contexts where they will succeed. Meanwhile, kids placed in a remedial program will think of themselves as stupid, blaming every failure on themselves, leading to a vicious circle of self-doubt.

So what's the upshot of this post? If we believe the idea that social context helps to determine how we behave and thereby who we are, then we can take advantage of the idea that, as I quipped last time, "I choose to be the self that is activated by this group." By choosing the right group, we can modify our own behavior and create a new self. It's never easy; changing one's tendencies is hard work. That's why it's so important to use all of the tools at our command to help reinforce such changes.

Man. This post was much harder to write than I thought it'd be. It just never quite came together. But I've poked and prodded at it for well over an hour now, so I'm going to give up. I'll write a clarifying post if necessary. I might take a break for a couple days to let some ideas simmer and see if I can come up with a clearer line of attack.

posted at: 22:43 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sun, 27 Feb 2005

Identity as context
Picking up on the cognitive subroutine thread, I had another thought yesterday. What is our self, our identity? To some extent, it is the holistic sum of all of our cognitive subroutines. After all, we judge somebody by how they react to different situations. At work, we like to see how people handle pressure. In social situations, we like people that are comfortable and easy to talk to. Since we don't have a way to read minds, all we have to judge other people by is the way they interact with us and with the world around them. There may be those that claim that we have some essential "character" that determines how we will react in a general sense, but I'm pretty skeptical of that idea (Aaron Swartz had a good post about "dispositionism" today). And I feel similarly skeptical about the idea of an eternal ineffable soul. Just so you know what my assumptions are.

What are the implications of the idea that we may be no more than the emergent interaction of our cognitive subroutines? If my speculation is right that the subroutines are activated by our environment and the context that we are currently in, it means that we are different people in different situations in a very real sense. If I'm hanging out with my college friends, I'm a different person than when I hang out with my family or when I'm at work. They each activate different aspects of my personality, changing how I react to things and the way I view the world. I know it isn't an earthshaking observation that we act differently in different social circles, but it's nice that it falls out of the cognitive subroutine theory so cleanly.

This puts our social interactions in a different light. In some sense, we look for groups of people that help us be the person that we want to be. Since each social group activates different aspects of ourselves, by choosing who we socialize with, we are choosing our identity. This is most obvious in high school with the forming of cliques, from cheerleaders to band members ("This one time? At band camp?") to nerds to burnouts. But it continues throughout our lives. We find people with whom we feel most comfortable, where we feel we can say "I can be myself." My current thoughts make me wonder whether saying that is equivalent to saying "I choose to be the self that is activated by this group."

Another aspect of the whole identity as context corollary to the cognitive subroutines theory is that it provides insight on why cults work. Everybody always asks how people get sucked so deeply into cults. Well-designed cults all share a few common tactics. The most important of these is to remove new cult members to an isolated compound where the cult members see nothing but other cult members. In the language of this post, it's removing any alternative contexts from their lives. No visits are allowed from family members, because that would elicit a different person than the one the cult is creating. In their isolated compound, they reward behavior beneficial to the cult, and punish unwanted behavior like questioning authority. Again, training of new cognitive subroutines.

What's another common cult tactic? Giving their members new names. The old name has too many cognitive subroutines associated with it, too many aspects of personality that the cult is trying to suppress. By giving the member a new name, the cult is essentially starting a whole new set of cognitive subroutines that have no connection with the old life. They are creating a new person, essentially. Names are powerful things. For a long time, I really think I behaved differently when I was around people that called me Perlick versus people that called me Eric. I think I've now harmonized the different aspects of my personality for the most part, but it's interesting to see how powerful a name can be.

Then again, I've always been particularly impressionable and susceptible to outside influences. When I was a kid, my mom could tell who I'd been hanging out with on any given day, because my speech patterns would actually change. I don't think it's anywhere near as extreme any more (there's a whole post buried somewhere in the idea of how we are all the sum of our influences, but over time, the influences become commingled so that it becomes harder to tease out individual influences), but I'm sure there's still an effect. For instance, I know my writing became more florid for a while after I read David Foster Wallace, with lines like "In our world of postmodern ironic world-weariness, something about the buzz as Barry Bonds steps into the batting box, as 40,000 people hold their breath together, breaks through our ennui and evokes images of a more primitive time, of gladiators and arenas. It's an exciting feeling. The mob mentality rises to the surface and we lose ourselves in it." So maybe it's just my perspective that sees identity as being context-contingent. But I don't think so.

One last caveat: I should emphasize that I'm not postulating that our minds are in any way actual computers that consist of self-programming subroutines. I do think that it's a useful metaphor for analyzing several aspects of human behavior, in a variety of contexts. I think that this post illustrates that it may even have applications in questions of identity. For me personally, it's a good reminder that choosing how I spend my time socially is choosing what kind of person I want to be. I could choose to be a soulless corporate drone. I could choose to be an alcoholic partier. I could choose to be an outdoors type. Right now, I seem to be choosing electronic ranting loner. Unabomber, here I come! Hrm. Maybe I should rethink that choice...

posted at: 19:57 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Fri, 25 Feb 2005

More thoughts on thin-slicing
I sent off a note to Malcolm Gladwell through his website with the nitpicks I mentioned in my review of Blink, in particular the height study and the Ted Williams story. Much to my surprise, Gladwell wrote me back thanking me for the observations and loving the Ted Williams story. Cool!

While thinking about it some more, I realized that the prejudice favoring tall people may actually be a form of thin-slicing in action. As the New Yorker article suggests, "In our height lies the tale of our birth and upbringing, of our social class, daily diet, and health-care coverage. In our height lies our history." If that's the case, then favoring tall people makes perfect sense. Tall people would tend to be healthier and stronger than short people in a world of scarcity. These days, when all of our needs are satisfied, at least in most of the industrialized world, the remaining variation is primarily due to genetics, but it would be understandable if some vestige of a bias towards height remains. So I took that idea and sent it off to Gladwell. We'll see what he thinks of it.

I also wanted to pick up on one of Beemer's comments where he points out that cognitive subroutines and thin-slicing are both ways to "optimize away mental processing". He lists a few examples such as peer pressure and deference to authority, where the answer you get will be right most of the time and is extremely energy efficient. Given that the situations where such strategies arise are not often situations where the wrong answer means immediate death, it's not surprising that our brains are optimized for efficiency rather than 100% accuracy. Man. I think I had another observation, but I've totally blanked.

One last thought on the subject for the night. At some point, I'm going to have to reconcile my thoughts on cognitive subroutines with the ideas of The Global Brain, which I quite liked. I don't see any obvious correlations between them, but since I currently find value in both of them, I feel like there should be a way to bring them together. More food for thought. But it's Friday night and I'm tired, so I'm going to drop it for now.

posted at: 22:10 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Wed, 23 Feb 2005

Cognitive subroutines and context
More thoughts on yesterday's cognitive subroutines post after thinking about it some more, partially in response to Jofish's comment.

Jofish brings up the importance of leveraging the real world. We don't have to store a hypothetical model for everything in the real world, because we can use the real world to store information about itself, and use that to jog our memory. This is partially why people can find things more easily in a physical spatial environment than in a file system; the physical cues and landmarks of the real world help guide them to their destination. To some extent, the brain uses inputs from the real world to decide which of the cognitive subroutines to run.

This gets back to a running theme of mine that I never fully developed, which is the importance of context. I wrote a footnote post about it at one point, but never returned to the subject. One of the things that fascinates me about our brains is how incredibly contextual they are. For instance, my memory is totally associative. When I get to the grocery store, I often can't remember what I'm supposed to get, until I walk down the aisle, see something, and my memory is jogged. I've mentioned this phenomenon in social contexts as well.

When I put the importance of context together with the idea of cognitive subroutines, a neat idea pops out. Perhaps these cognitive subroutines are like computer functions in yet another way. They have a certain set of inputs which defines their behavior, much like a function prototype defines the inputs for a computer function. When our brain is presented with a situation with certain stimuli, it grabs among its set of cognitive subroutines, finds the one with the closest matching set of inputs, and uses it, even if it's not a perfect fit. In other words, these cognitive subroutines are called in an event-driven fashion based on incoming stimuli.
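To make the function-prototype analogy concrete, here's a trivial sketch in code. The routines and stimuli are made up entirely by me; the point is just the dispatch rule: the brain doesn't demand a perfect match, it grabs whichever subroutine's expected inputs overlap the incoming stimuli the most.

```python
# A hedged illustration (my own toy model, not from any of the books cited):
# each "cognitive subroutine" declares the stimuli it expects, and incoming
# stimuli trigger whichever routine matches best, even if the fit is imperfect.

def best_match(stimuli, subroutines):
    """Pick the subroutine whose expected inputs overlap the stimuli most."""
    def overlap(routine):
        return len(stimuli & routine["inputs"])
    chosen = max(subroutines, key=overlap)
    # Something has to fire as long as any input matches at all --
    # there's no requirement that the fit be exact.
    return chosen if overlap(chosen) > 0 else None

subroutines = [
    {"name": "hit_the_ball", "inputs": {"ball", "net", "teammates"}},
    {"name": "grocery_recall", "inputs": {"aisle", "shelf", "packaging"}},
]

# A partial match still activates the routine:
print(best_match({"ball", "net"}, subroutines)["name"])   # hit_the_ball
```

The interesting failure mode falls out for free: stimuli that only loosely resemble a routine's prototype still trigger it, which is exactly the "not a perfect fit" behavior described above.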

An interesting idea, but is there any evidence to support it? I think there may be in the existence of logically inconsistent positions. We all have positions on various issues that may conflict with each other. The canonical one is the person who is pro-life in opposing abortion, but pro-death in supporting the death penalty. How can the person reconcile these opposing viewpoints? Within a single hierarchical logical structure, it's difficult. However, if the brain and its beliefs are treated as a set of separately created cognitive subroutines, each of which is activated by its own set of inputs, then the contradiction goes away. Each belief isn't part of a large scale integrated thought structure; it's contained within its own idea space, its own scope to use the programming term. Within that scope, it's self-consistent, and it doesn't care about what happens outside of that scope.

Only if you make the effort to try to reconcile all of your individual beliefs do contradictions start to pop up. But it's a difficult task to break the beliefs out of their individual scopes, so most people don't bother unless they are philosophers.
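The scope metaphor can be sketched in code too. Again, this is my own toy example (and a deliberately cartoonish rendering of the abortion/death-penalty case above): each belief lives in its own scope, answers queries self-consistently there, and the contradiction only appears when something forces the scopes side by side.

```python
# My own toy sketch of beliefs-as-scopes: each branch is internally
# self-consistent and knows nothing about the others.

def belief_for(context):
    if context == "abortion":
        return "the state must protect life"
    if context == "death_penalty":
        return "the state may take life"
    return None

# Queried one context at a time, each answer looks perfectly coherent.
print(belief_for("abortion"))
print(belief_for("death_penalty"))

# Only an explicit reconciliation pass -- the philosopher's job -- lines
# the scopes up next to each other and notices the conflict.
answers = {c: belief_for(c) for c in ["abortion", "death_penalty"]}
contradiction = len(set(answers.values())) > 1
```

The reconciliation step at the end is the expensive, optional part, which matches the observation that most people never bother to run it.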

And to tie this all back to my favorite unifying topic, of stories, the effectiveness of stories lies precisely in their ability to activate certain contexts within our brains. This is why Lakoff emphasizes framing; by framing issues in a certain way, the conservatives set the context that the audience uses and actually choose which cognitive subroutines are activated in considering that issue. Advertisers seek to take advantage of this as well; commercials showing beautiful women drinking beer are trying to activate certain cognitive subroutines to connect the concepts.

Wow. When I started this post, I didn't know I was going to be able to tie all of my hobby horses together into one overarching model, but there ya go. I know I'm ignoring a lot of details, and making a bunch of simplifying assumptions, and using an overly reductive model of the mind, and being unclear on language, but, hey, that's what you get when you read a blog. Eit.

P.S. The Firefly critique is written. I'll get to it tomorrow. Unless I end up expounding more on this subject.

posted at: 20:48 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Tue, 22 Feb 2005

Cognitive subroutines
This is going to be a relatively long post, mostly inspired by reading Blink, by Malcolm Gladwell, and Sources of Power, by Gary Klein, both books that explain how and why our unconscious decision-making capabilities are often better than our conscious ones, and also explain when such capabilities fail and need to be overridden.

I was sitting there thinking about these issues last week while sitting on stage during our concert of Schumann's Das Paradies und die Peri. We have a section in the middle of the concert where we sit for about 45 pages with only a couple pages of singing in the middle to keep us awake. So for four nights in a row, I had plenty of time to sit and think. And on Friday night, I had one of those moments where I connected a bunch of ideas, and synapses lit up, and I found a story that really works for me explaining some of this stuff. I was actually sitting there in the concert trying to figure out if I could get out my Sidekick and send myself a reminder so that I wouldn't forget the synthesis, but I couldn't. Fortunately, the idea was strong enough that I jotted down the basic outlines when I got home. This is all probably pretty obvious stuff, but it put things together in a way that made a lot of sense to me, bringing together a bunch of different ideas. So I'm going to try to lay things out here.

The basic idea builds off of Klein's idea of expertise getting built into our unconscious. Our brain finds ways of connecting synapses that leverage our previous experience. Why does it do that? I'm going to assume that it's a result of the constraint stated in The User Illusion, that consciousness operates at only 20 bits per second. The information processing power of our conscious mind is very low, so our unconscious mind has to find ways of compensating for it.

Here's the basic analogy/story that I came up with, being the programmer that I am. When I'm writing code, I often notice when I need to do the same task over and over again. As any programmer knows, when you're doing something over and over again, you should encapsulate that repeated code into a subroutine so that it doesn't need to be copy-and-pasted all over the place. I would imagine that a self-learning neural network like our brain does a similar task. So far, so obvious.
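Since the analogy is doing a lot of work here, a toy illustration in Python (the function and numbers are made up for the example): the same averaging logic pasted in several places gets pulled into one subroutine, so it's written, debugged, and improved exactly once.

```python
# Before: sum(xs) / len(xs) copy-and-pasted wherever an average was needed.
# After: the repeated code is encapsulated in a single subroutine.
def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

print(average([2, 4, 6]))  # 4.0
```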

This relates pretty well to my own experience as a learning machine. When I'm learning a new game, for instance, my brain is often working overtime, trying to figure out how the rules apply in any given situation, going through the rules consciously one by one to figure out what the right move should be. As I play the game more and more, I learn to recognize certain patterns of play so that I don't have to think any more, I just react to the situation as it's presented. This is what Klein describes as Recognition-Primed Decision Making. To take a concrete example, when I was first learning bridge, the number of bidding conventions seemed overwhelming. I had this whole cheat sheet written out to which I continually referred, and every bid took me a while to figure out. As I played more and more, I learned how each hand fit into the system, so that I could glance at my hand and know the various ways in which the bidding tree could play out. As Klein describes it, my expertise allowed me to focus on the truly relevant information, discarding the rest, allowing me to radically speed up my decision making time.

Back to my story. Thinking about wiring our unconscious information processing architecture as a bunch of subroutines leads to a couple obvious conclusions. For one, it's easy to imagine how we build subroutines on top of subroutines. A great example is how we learn a new complicated athletic action. It also applies on the input side.

Another obvious result is that because subroutines are easy to use cognitive shortcuts, they may occasionally be used inappropriately. What happens when a subroutine doesn't quite fit what it's being used for? Well, in my life as a programmer, I often try to use that subroutine anyway. It doesn't end up giving me quite what I want, so I find a way to kludge it. I'll use the same subroutine, because I don't want to change it and mess up the other places that it's called, but I'll tack on some ugly stuff before it and after it to compensate for the ways in which it doesn't quite do what I want.
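In code, that kludge might look like this (a hypothetical example, reusing the idea of an averaging subroutine): instead of changing the shared routine and risking its other callers, you tack pre- and post-processing around it.

```python
def average(values):
    # Shared subroutine; changing it might break its other callers.
    return sum(values) / len(values)

def average_without_outliers(values, cutoff=100):
    # The kludge: massage the input before the call, and massage
    # the output after it, to compensate for what the subroutine
    # doesn't quite do for this one use.
    trimmed = [v for v in values if v <= cutoff]  # pre-call hack
    return round(average(trimmed), 1)             # post-call hack
```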

How does this relate to our brains? I think a prejudice is essentially the same as a cognitive subroutine. It does a bunch of processing, simplifies the world down to a few bits, and spits out a simple answer. And, in most cases, the subroutine does its job, spitting out the right answer; it wouldn't have been codified into a subroutine if it didn't. Much as we may not want to admit it, prejudices exist for a reason. However, when we start to blindly apply our prejudices, using these canned subroutines without thinking about whether it's being applied under the appropriate conditions, then we get into trouble. Gladwell calls this the Warren Harding error.

What is the right thing to do? Well, in programming, the answer is to think about how the subroutine is used, pull out the truly general bits and encapsulate them into a general subroutine, and then create specific child subroutines off of that, assuming we're in an object-oriented environment. In general when using a subroutine, certain assumptions are made about what information is fed into the subroutine, and what the results of the subroutine will be used for. If those assumptions are violated, the results are unpredictable. A more experienced programmer will put in all sorts of error checking at the beginning of each subroutine to ensure that all the assumptions being made by the subroutine are met.
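As a sketch (hypothetical names, with Python standing in for whatever object-oriented environment): the truly general bits go in a parent class, the specific behavior in child classes, and assertions up front play the role of the error checking that verifies each subroutine's assumptions.

```python
class Classifier:
    """General parent: encapsulates the truly general bits."""
    def classify(self, measurement):
        # Error checking up front: verify the assumptions this
        # subroutine makes, so violations fail loudly and early
        # instead of producing unpredictable results.
        assert measurement is not None, "no measurement supplied"
        assert measurement >= 0, "measurement must be non-negative"
        return self.label(measurement)

    def label(self, measurement):
        raise NotImplementedError  # children supply the specifics

class FeverCheck(Classifier):
    """Specific child subroutine built off the general parent."""
    def label(self, measurement):
        return "fever" if measurement > 38.0 else "normal"
```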

How does this apply to the cognitive case? I think this is a case where it gets back to my old post about questioning the assumptions. If we try to understand our brain, and understand our kneejerk reactions, we will be in a much better position to leverage those unconscious subroutines rather than letting ourselves be ruled by them; our intelligence should guide our use of these cognitive shortcuts, and not vice versa.

This idea of cognitive subroutines also gives me some insight into how to design better software. I picture this cognitive subroutine meta-engine that tracks what subroutines are called, and strengthens the connections between those that are often called in conjunction or in sequence, to make it easier to string those routines together, eventually constructing a superroutine that encompasses those subroutines. It seems like complex problem-solving or pattern recognition software should be designed to have a similar form of operation, where the user is provided with some basic tools, and then the software observes how those tools are used together, and constructs super-tools based on the user's sequence of using the primitive tools (alert readers will note that this is the same tactic I propose for social software). I'm somewhat basing this on a book I'm reading at work called Interaction Design for Complex Problem Solving, by Barbara Mirel, where she discusses the importance of getting the workflow right, which can only be done by studying how the users are actually navigating through their complex problem space.
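A minimal sketch of that meta-engine, under two simplifying assumptions not in the original: "tools" are just named operations, and "strengthening a connection" is just counting how often one operation follows another.

```python
from collections import Counter

class ToolTracker:
    """Observes the user's sequence of primitive tools and proposes
    a super-tool once a pair of tools co-occurs often enough."""

    def __init__(self, threshold=3):
        self.pair_counts = Counter()  # (tool_a, tool_b) -> times b followed a
        self.last_tool = None
        self.threshold = threshold

    def record(self, tool):
        # Strengthen the connection between consecutively used tools.
        if self.last_tool is not None:
            self.pair_counts[(self.last_tool, tool)] += 1
        self.last_tool = tool

    def suggested_supertools(self):
        # Sequences used together often enough to merge into one routine.
        return [pair for pair, count in self.pair_counts.items()
                if count >= self.threshold]
```

So if a user crops and then sharpens an image three times in a row, the crop-then-sharpen pair crosses the threshold and the software could offer a combined tool for it.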

So there you go. Treating the brain as a self-organizing set of inheritable subroutines. I'm sure this is obvious stuff. Minsky's Society of Mind probably says this, but I've never read it. Jeff Hawkins's book On Intelligence probably says something similar as well (I should probably read it). And I suspect that Mind Hacks is in the same space. So it may be obvious. But, hey, it's new to me. And it just makes a lot of sense to me right now, in terms of how I learn to do new complex activities, and how it relates to my work as a programmer. I'll have to think some more about if this can actually be applied in any useful manner to what I do. And about the shortcomings of the theory.

P.S. Tomorrow we'll get back to more lighthearted subjects, like why I think the TV series Firefly failed, with a compare-and-contrast to what Joss did right in Buffy.

new complicated athletic action: When learning a new action, we break the action down into individual components and practice them separately. When I was learning how to spike a volleyball, the teacher had us first work on the footwork of the approach. Left, right, left. Left, right, left. We did that a bunch of times, until it became ingrained into muscle memory. Then we practiced the arm motion: pulling both arms behind our back, bringing them forward again, left arm up and pointed forward, right arm back behind the head, then snapping the right arm forward. Then we coordinated the arms with the footwork. Once the entire motion was solid and could be performed unconsciously, then we threw a ball into the mix. That had to come last because the conscious mind is needed to track the ball and coordinate everything else to make the hand hit the ball in the air for the spike. Only if everything else is automatic do we have the processing power to make it happen. If we had to think about what steps we needed to take, or how to move our arms, we would never be able to react in time to get off the ground to hit the ball. It's only because it's been shoved into our unconscious that we can manage it.

Another recent sports example for me is ultimate frisbee. I've been working on my forehand throw for the last year or so. After several months, I finally got it to the point where I could throw it relatively reliably while warming up. However, as soon as I got in the game, I would immediately throw the disc into the ground or miss the receiver entirely. It was incredibly frustrating because it demonstrated that I could only throw the disc when I was concentrating on the mechanics of how to throw the disc. As soon as I was thinking about where I wanted to throw the disc, or how to lead the receiver, the mechanics went away, and the throw failed. This last tournament I played, though, the muscle memory of the throw had apparently finally settled in, so when I saw a receiver open, I thought "Throw it there", and the disc went there. The relevant neural infrastructure had finally been installed, so that I could concentrate on the game, and not on the throw, and it was incredibly satisfying. I threw three or four scores, which was more than I ever had before, and only threw it away once the entire day, ironically on a play where I had too much time to think, so that the conscious machinery kicked back into play rather than letting the unconscious muscle memory do its thing.

input side: I guess the analogue on the input side would again be game play recognition. A beginning chess player will have to laboriously trace out where each piece can move and can maybe see the results of a single move. An intermediate chess player will recognize how to handle certain subsections of the board, and be able to project out a few moves. The expert chess player will instantly take in the position of the whole board, and understand how the game will develop as a whole. And this is definitely a cognitive shortcut born of repeated experience. This study demonstrates that chess masters performed vastly better than novices at recognizing and remembering valid board configurations, but no better than novices at recognizing invalid boards. In other words, because the novice perceives the board as a collection of individual pieces, they cannot tell the difference between a valid and invalid board. Meanwhile, the expert, because they perceive the board as meaningful chunks of board positions, can rapidly grasp the game situation of a valid board, but the invalid board looks like nonsense, demonstrating that their brain is using its expertise as a cognitive shortcut.

More generally, when confronted with a complex situation, an expert can pay attention to the key experiential data and ignore the rest. Gary Klein describes how an expert always has a mental model to which he is comparing the situation, a story if you will, that describes what should be happening. When what actually happens differs from what he expects to happen, the expert knows something is wrong, and re-evaluates the situation, as Klein illustrated with several anecdotes from firefighters. And part of that model is being aware of when things _don't_ happen as expected. And it may not be a conscious model; in fact, Klein describes many instances where the firefighters attributed their decisions to a sixth sense or ESP. But it is a model born of experience; the unconscious brain has experienced the situation over and over again until it knows how certain factors will affect the outcome (Klein calls these factors "leverage points").

posted at: 21:57 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Tue, 15 Feb 2005

The Passion of the Geek
I was IM-ing a friend of mine a few days ago, and was telling her that I wasn't sure I wanted to remain a programmer, commenting that I wasn't really a geek at heart. She replied "you'll always be a geek though. you can be a pundit geek", which got us into a brief discussion as to what defined a geek. My attempt: "a geek is somebody whose passion for something overrides their fear of social ostracism", building off of Paul Graham's essay on nerds and popularity. Because that's pretty much what a geek is - somebody whose love of Star Trek or sci-fi or Buffy or computers or physics matters to them more than what other people think. It's why people often fear and envy geeks simultaneously; people fear geeks because geeks' blatant disregard for the social norms that they spend so much time trying to observe implies that perhaps those social norms are not the laws of nature they seem to be, but are, in fact, just arbitrary rules. They envy the geeks because it would be so freeing to not worry about what other people think and just pursue one's own passions, let the rest of the world be damned. So, in at least some parts of my life, I'm a total geek. In others, I am still all too prey to the insecurities of social ostracism. Part of what I'm trying to do with my life is find new passions to pursue.

I guess I don't have as much to say on the subject as I'd thought. But this will get its own post anyway, because I really wanted to title the post "The Passion of the Geek".

posted at: 22:18 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Thu, 20 Jan 2005

Cognitive effort
I bought a bed last weekend, and it was delivered two days ago. Yes, I finally decided that I should stop sleeping on the futon that I had bought used in grad school nine years ago. And two nights of sleeping on the nice new bed has made me go "Wow! Why did it take me so long to decide to do this?" A good question. One I actually thought about for a bit, and here's my answer.

It's a matter of energy and attention. We all have certain things that we don't question in our lives, whether it's our religion, our devotion to a given sports team (Go Cubs!), our affiliation with certain groups, etc. We can't question everything. While I love the idea of always being able to pry open the black box to see why something is the way it is, I can't always do that because it takes time and energy. Most of the time, I have to just accept the black box as is, and use it.

So I make a decision, and I move on, and I don't question the decision any more. Whether it's buying a car or a new laptop or what software to run my blog on, I find something that works well enough for the moment and forget about it, leaving more of my time and attention for things I find interesting, like reading or thinking about what I'm going to write on here. It's a matter of conserving cognitive effort for things I care about.

To give credit where it's due, this idea is mostly stolen from Paul Graham's essay on nerds, where he points out that most nerds are unpopular in school because being popular is a full time job (between choosing clothes, going to the right parties, etc.), and nerds don't care enough to bother.

So, in this specific case, every year or so I'd think about getting a new bed, and decide against it because I was sleeping fine on the futon, and a new bed is expensive. Each year the futon was getting worse and worse and my disposable income was rising, and this year the lines finally crossed, I got the new bed, and it was so easy that it prompted this post of wondering why it took so long. And that's often the way it is. My post about productivity laments this aspect of myself, but I think it's understandable in light of a theory of cognitive effort. Or maybe I'm just making elaborate justifications.

Oh well. Given that this is the fourth post of the evening, I think I'm going to shut up now, turn off my brain, and watch my tape of the O.C. recorded earlier.

posted at: 22:40 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Infinite games in childhood
A thought struck me this morning on my BART ride into work, in response to Carse's talk. He describes infinite games as those where the point of playing is to continue to play. Doesn't this describe childhood? Over Christmas break, I was visiting some friends with kids, and I was playing Uno with their four-year-old. And he was just so happy to be playing that he didn't care who won or lost, or how he was doing; he was just excited about playing. We adults get so worked up about winning and losing that we define ourselves by our results, but a bunch of kids playing baseball will often play for hours without keeping score because the point of the game is the game itself, not the result.

In fact, it's the adults that ruin kids by injecting finite games into their play. We all knew a Little League dad who was just miserable to be around because he'd be screaming at everybody because he wanted his kid's team to win. But, as Carse put it, "Evil is where an infinite game is absorbed completely into a finite game." To destroy that sense of play, that sense of joy, for the sake of something as prosaic as winning and losing is wrong.

It's interesting to think what a society based on a childlike state of mind would be like. I think I'd quite like it. Then again, it would essentially be the state of anarchy, which is a concept that appeals to me in theory. But in the "real" (aka adult) world, rules are necessary. People won't play nice with each other, alas.

It also makes me wonder when we lose that sense of childlike joy. Not everybody does, obviously, and the ones that don't are often among our most innovative thinkers (e.g. Feynman and Einstein). But most of us do. I certainly have. I never get that zap of "Wow, this is really cool!" any more, where I'm doing something for the sheer pleasure of doing it. I need to learn to be more immature again :)

Anyway, I thought that the observation that only adults play finite games was interesting. Thought I'd share.

posted at: 21:56 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Mon, 10 Jan 2005

More thoughts on gifted education
I've rambled about education before, particularly with regards to gifted education. But I've never been bashful about repeating myself. So here we go again.

Here's a thought experiment that a friend posited a couple weeks ago. From a purely academic point of view, how long would it take a smart kid, working at their own pace with appropriate guidance, to learn the material up through 8th grade or so? Let's say through basic algebra in math, reading and writing in complex sentences, some basic understanding of science, a first pass at American and world history, that sort of level (although, given the horrendous state of public education, that might qualify as a high school education at this point. Yikes!). My guess is four years. Or less. I did K-8 in 7 years, skipping two grades, and I could probably have skipped at least two or three more if it weren't for socialization issues (sixth grade, for instance, was a total waste as the teacher refused to let me work ahead because she felt there were certain things sixth graders did and that was that).

Those pesky socialization issues. What, really, are we teaching our children for those other four years? I can tell you what I learned. I learned that I don't have to work hard to succeed (at least in that environment). I learned that being out of the box often means being crammed back into the box. I learned that I can get away with mediocre work because nobody cares. And I went to an extremely good public school. I can't even imagine what it's like for students in a bad one.

It's really frustrating. I can see some of these acceptance-of-mediocrity tendencies in myself even now, which is how the topic came up when I was talking with my friend. It makes me wonder why we accept such an awful system if we really believe that children are our future. Or are we aspiring to the dystopia alluded to in The Incredibles, where because everybody is special, nobody is?

If I were a cynical Rand-ian, I'd claim that the school system, as presently constructed, is designed to habituate us from birth to not make waves, especially those of us that are smart, because ambitious smart people are disruptive innovators that change power structures. School teaches us to sit still, keep our mouths shut, and conform to the majority. We're taught to obey authority blindly (because teachers hate being challenged), which I think contributes to our acceptance of pseudo-science. If you squint the way I currently am, you can see many of the problems of our society reflected in our education system.

So what would I do differently? I have nothing that could be construed as realistic. To really teach kids right, you need to spend a lot of quality personal time with them, allowing them to pursue their interests in a guided fashion. There are some things that everybody should know, like the basics I outlined above, but beyond that, leveraging the natural enthusiasm of children would seem to be an obvious thing to do. And given that children are natural scientists, it seems like we could take much better advantage of that than we currently do with our memorization of orthodox science dogma. Not that I'm saying we should doubt the current scientific paradigm, but that we should give students the opportunity to ask why and, when possible, figure out where the paradigm came from, as Postman suggested.

I don't know what I'd do if I had kids. My friend pointed me at the Montessori method, which looks promising. I'd almost be tempted to home school them. But there is a genuine need for socialization. The smartest person in the world is completely ineffectual if they can't persuade other people to their way of thinking, a skill I continue to hope to learn. I don't know how one teaches that to kids though. Cooperative learning environments? Play groups? I don't really know.

Lots of hard questions, as there always are when I address education. And it's getting late, and it's time for this to be out of my hands, so out it goes.

posted at: 23:49 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Tue, 28 Dec 2004

Information Carnivore followup
As usual, Beemer had an interesting response to my last post. I was going to respond on LiveJournal, but decided to use my privileged position on the blog itself. Bwa ha ha ha. More people read it this way. Yeah. Not that readership matters. Because I'm more interested in the discussion. But if more people see it, then there's more likely to be discussion. Yeah! Um. Anyway.

Three things to follow up on.

  1. Text, as Beemer points out, is a great medium because it is random access and low-bandwidth. However, I wonder if this is known to be the advantage that we think it is. I think Beemer and I have both read so much, in so many forms, that we have the trick of using text as a random access, low-bandwidth medium. It's unclear to me that others know that trick. Many people, when confronted with a lot of text, just give up, rather than quickly scan through it to determine if there is anything of interest. Including me. I just downloaded a 15-page paper off the net on the theory that I'll read it later. Which won't happen. But I think that this lack of text-parsing ability may relate to the complaint I opened my last post with, which wondered why many people just give up when confronted with my long posts. So this text-parsing may be a skill worth thinking about, and eventually teaching, in addition to the critical thinking skill of parsing multiple sources of input.
  2. Speaking of which, in that post I was thinking of input in terms of text and alternate news streams, but I think it applies more broadly. While I was home at my parents' house, I was watching football, bouncing back and forth between two games, a skill which I've pretty much mastered at home using my picture-in-picture TV, but which was a bit trickier with only the "Last" button. My mom got annoyed and told me to pick one. I realized that the skill to handle multiple streams of input may be just as applicable in video or audio. And I'm not particularly good at it. I know people who are more habituated to TV who can have the TV on in the background while reading and listening to the radio, and still notice when something interesting happens. I think the generation of kids today is one step beyond with their ability to juggle video games on top of all that. It's a multi-modal environment, and developing the skills to handle that is just a matter of growing up in that environment, I think.
  3. Lastly, I wanted to follow up on Beemer's information carnivore observations. I had actually intended some of those analogies, but hadn't made the connection explicitly in the post. Part of the analogy is the greater efficiency in being higher up the food chain. Carnivores need to eat less often than herbivores. And he also observes the downside - a carnivore is exposed to greater concentrations of toxins. As far as the information carnivore goes, the greater efficiency of using secondary sources is necessary because otherwise the vast amount of information out there would overwhelm us. However, we are susceptible to greater concentrations of toxins, by which I mean biases and inaccuracies. At each level of the information food chain, there is a selection process. By the time it gets to, say, Rush Limbaugh, the "news" has been consistently slanted to the right so many times that it may hardly resemble the original story. I think the term information carnivore sums up these advantages and dangers concisely, and reminds us that we are dependent on others for processing information, thus reminding us of the biases and inaccuracies that may be built up by the time a story reaches us.

    It also reminds us that we stand at the top of an information pyramid. With the advantage that, if we choose to move lower down the chain, we can. We can open up the black boxes, find the original sources, do our own data compression, and determine whether it matches the summary that we were given. Obviously, we won't choose to do this often because it requires a lot of time and effort, but it's probably worth doing a couple times to find "information herbivores" that process data and stories into the form that we want. A simple example is finding a movie critic that we like. When confronted with a new critic, we read their reviews of movies that we've seen. We evaluate their opinion, compare it to our own, and when we find a critic that often matches our tastes, we begin to use their reviews to guide us in deciding which movies to see. Do the same thing for books, for products, for groceries, for news, etc., and you begin to see the information carnivore ecology at work.

posted at: 23:32 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sat, 25 Dec 2004

Information carnivore
Sometimes posts start with no more than a good post title. Like this one. Actually, okay, this post started with some thoughts I've been having about different ways of perceiving and handling information. It's something that's been in the back of my mind for a while. In fact, one of my first rants on this blog concerned the subject.

One of the things that concerns me with this blog is that a lot of people I know don't read it because my posts are too long. Part of the length is due to my lack of editing, to be sure. Part of it, though, is because I feel that these are relatively complex issues that I'm tangling with, and that it takes time to take them out, look at them, see them from a couple different perspectives, and decide what I think. It worries me that America seems to be moving towards a soundbite society, where we want simple answers that can be conveyed in a few seconds, where we don't have to pay too much attention. To some extent, I think that this last presidential election was a signal that the American public has chosen simplicity, even if wrong, over acknowledging the complexity of the world.

But I don't want to get distracted by politics here. Let's get back to information modes. One of the other realizations I had recently is that I am a product of a narrow slice in time. My preferred mode of information consumption is the printed word (or the word on screen). I am pretty adept at scanning web pages or books to extract the information I'm looking for. I can handle multiple sources of information, evaluate them and make my own decision as to which source I believe. Part of where I was going with the critical thinking section of my new directions post was trying to figure out how to teach this skill to others.

But what if the need for this skill is just because of this temporary phase that we're in, where information is stuck in a text format? I was surprised to learn of the rise of podcasting, because I dislike audio information transfer so much. But for others, it makes sense. And, as video and other multimedia editing tools become more powerful and common, that will start to dominate text, I'm sure. And people like me that grew up with text and are comfortable with it as a primary medium will slowly get passed by as outmoded. It's already starting to happen; on sports sites that I visit like ESPN, a good portion of content is being delivered in video rather than text, which drives me nuts.

So I wonder if the need for my skill of sifting through large amounts of text is one that is soon-to-be (or already?) outmoded. There's been this eight-year run or so where the World Wide Web made everything available in text, and really made having such a skill valuable. But soon it's going to be relevant only to those of us that read books. Is there really value here, or do I just think it's valuable because, well, I do it? Am I already doomed to the long slow decline of technical obsolescence that Douglas Adams describes for those over 30?

However, there's a related skill that I think will continue to be useful. The term I came up with this morning (and the one that inspired me writing a post at all because it made a good post title) was being an "information carnivore". It's taking information that others have already processed and finding ways to use it. I'm not much of one for primary sources. The amount of effort it takes to learn the specialized language of Derrida or Foucault is just not worth it to me to find out what they say. So I read secondary summaries, whether in books or online, synthesize them, and extract information for myself, consulting the primary sources as necessary to elaborate upon a point.

My new directions post included a section on thinking about helping teach others the critical thinking skills necessary to be an "information carnivore". I haven't really thought through the details yet. Beemer made an interesting comment, suggesting that the way to teach people the way to do something is to put them in a situation where they need it. Offhand, one way of doing that would be to get away from the teacher in a classroom being a voice of authority, and more towards a discussion leader. Provide alternative viewpoints, including mutually exclusive ones, and require students to determine which viewpoint to believe by taking into account other information sources. Grade them on their ability to make a good case for their viewpoint, not necessarily on having the "right" answer.

This wasn't as coherent as I thought it would be when I started, partially because I'm distracted by writing this as I'm watching football on Christmas afternoon (yes, I've finally found a way to combine my hobbies). I'll post what I have, and think about it some more. Comments welcome, as always.

posted at: 17:44 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Wed, 15 Dec 2004

Social context in the Monkeysphere
I'm going to cheat here, and respond to one of Beemer's comments in the blog itself rather than with another comment. Mostly because he brought up some points I wanted to address but hadn't gotten around to. This is what I meant when I mentioned that I had a whole big ball of ideas that I was going to tug on a loose end of and see what came tumbling out.

Beemer points out that his Monkeysphere appears to be a lot larger than 150 people. And that perhaps it didn't matter how big a person's Monkeysphere was, but what the shape of the monkeyfunction was. This makes sense. No, really. Well, maybe it doesn't, but try to keep up anyway. I knew what he meant. And I'm going to tackle both of those assertions separately.

I first came across Dunbar's number (the limit of roughly 150 people on human organizations) in the book The Tipping Point, and it's fascinated me ever since. I think that it's not necessarily an absolute limit on how many people an individual person can know, but it is a fairly strict limit on how large an organization can get before social feedback mechanisms no longer work. In other words, beyond 150 people, you need to have a structure or hierarchy or some sort of management organization to make things function, because otherwise stuff will fall through the cracks, and people won't care because it is affecting people outside of their monkeyverse. I glancingly addressed this in a post about different management structures a while ago, so I won't get into it here.

So how many people can one know? Know in the sense that if you ran into them at a bar, you'd acknowledge them, say hi, and be able to talk for a bit about friends and/or family. It's probably more than 150. One of the keys here is, wait for it... context. You knew I was still on that hobbyhorse, right? I think one of the keys to the expansion of our monkeyspheres is taking advantage of different contexts. I know a lot of people that I consider friends, but only within a certain context. I have folks I know from the chorus, whom I often go out to dinner with after a concert, but never interact with outside of chorus. I have a similar relationship with folks from my ultimate frisbee team. Or from work. Then there are friends who have jumped the threshold and have become part of all aspects of my life (there's a whole separate post I've thought about writing about why it's difficult for me to achieve that sort of crossover, and what I can do to deepen and strengthen friendships so they jump the threshold of the context in which they started, but I haven't figured it out yet).

Within each of those contexts, I may know only 100 or 150 people, but overall, I can know more because I use the contexts to keep them straight. Or something like that. There's always that weird moment when you meet somebody in a different context, and sometimes you don't even recognize them. I've definitely had that experience a couple of times when I'm wandering around San Francisco, and somebody from my ultimate team says hi, and I do a double-take and need a reminder of who they are - they look familiar, but my brain can't place them because they're outside of the context within which I normally interact with them.

To address Beemer's second point, I don't think the shape of the monkeyfunction matters so much as how we handle people outside of our monkeysphere. Even if the limit is closer to 1000 than 150, it's still well short of the millions of people in a nation. Or the billions of people in the world. How do we handle that case? I had an interesting speculation about that today (I was sitting in meetings all day, so I had plenty of time to think about responses to Beemer's comment).

The way in which we handle the case of America seems to be that we have created a "friend" called "America" which we include in our monkeysphere. And anybody else who's "friends" with "America" is automatically included in our monkeysphere. This takes place at lots of levels; for instance, I definitely have a soft spot for fellow MIT alumni, even if I don't know them at all, just because I feel we have a shared experience. We share the same "friend", "MIT".

It actually reminds me of the Fakester phenomenon on Friendster, where people were creating fake personas such as New York City, or the Giant Squid, and connecting to each other via these Fakesters. I wonder if this was just a concrete manifestation of an everyday phenomenon, where we use institutions such as America or MIT as friend placeholders to expand our monkeysphere to handle the social institutions that we have that are much larger than Dunbar's Number.

This leads to the question of how we design better Fakesters, i.e. how we create institutions that do a better job of binding us together. In politics, how do we use such things to bridge political divides? Or how do we use them to help create world communities as opposed to resolutely nation-state-oriented institutions? And, if you've been reading my blog for a while, you won't be surprised to hear that my guess is that stories are the answer. Stories are what bind communities together. Stories give us the protagonists that we can use as Fakesters to expand our monkeyspheres. The country of America is nothing more than a shared story, starting from the Founding Fathers, through the Civil War and Abraham Lincoln, through WWI and WWII and the Greatest Generation, JFK and Camelot, and Vietnam: a story that is collaboratively being created anew every day by its citizens. It's a shared dream.

So that's my response to Beemer's comments. And, just as a note, I know a lot of my posts recently have been less than polished. There's been enough stuff backed up in my brain that I decided it was better to just start getting some of it down rather than try to find the one angle by which it would all fall out neatly. So apologies for some of the incoherence.

And also understand that this is a work in progress. To some extent, this blog is an excuse for me to publicly map out my brainspace, and I'm very interested in getting feedback. If you don't want to comment on Livejournal, please feel free to drop me an email at the address at the bottom of each post. Thanks to all who read. I appreciate the fact that you're interested in what I have to say. Okay, I'll stop babbling now.

posted at: 22:50 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Tue, 14 Dec 2004

Truth vs. Context
Beemer had an interesting response to my last post, which he called Truth vs. Context, and I'm going to steal that for the title of this post. As a warning, I have a ton of ground to cover, and this entry is probably going to span at least four posts, if not more. Just the notes I scribbled out and emailed to myself were over a page. God help us all.

I agree with Beemer that there is an objective physical reality. I mean, I was trained in physics: how could I disagree with that statement? After all, somebody once quipped "Reality is that which, when you push against it, it pushes back." (if anybody knows who said that, please let me know - I am clearly paraphrasing, because I couldn't find it on Google) However, I don't think that helps us in this case. In fact, modern physics actually demonstrates the importance of context. Not only that, but philosophers of science such as Kuhn and Latour have demonstrated that even "objective" science has a large degree of subjectivity, in the form of paradigms and black boxes that nobody questions.

But objective physical reality only goes so far. When I drop an object, it falls to the ground. I can repeat that experiment over and over again and be assured of getting the same result. However, the same is decidedly not true of social interactions. For instance, if I asked somebody "Do you want $2 or $0?", you would think the answer would always be "$2" (I just heard an echo of "I want my $2" in my head). But it's not. It depends on the context.

In fact, history demonstrates that there are virtually no impossibilities when it comes to social interactions. We've tried it all. Dictatorships, democracies, anarchies. Cannibalism. Matriarchies, patriarchies, hierarchies. For any rule that we think we can put our finger on and claim is universal, there has probably been a society somewhere in history that did the opposite. Anywhere I go on this planet, I'm pretty well assured that if I hold a book three feet off the floor and let go, it's going to drop to the floor. I have no such assurance about language. Or customs (is it polite to belch?). Or hand gestures (do you know the equivalent of the middle finger in Korea?). In all of these things, context matters.

Given the fundamental relativity of such things, there looms a larger question: given two competing social contexts, how does one decide which one is "better"? In an Enlightenment universe, reason would determine everything, but I think that reason is fundamentally limited here because reason is a tool; it cannot determine overall goals. To give credit where it's due, many of these thoughts were instigated by the discussion over at Dave Policar's journal, particularly his comment trying to reconcile opposing concepts of how things should work. This whole post is essentially an attempt to examine some different ways of reconciling such opposing concepts, in part by evaluating the contexts in which they make sense. The separation between goals and execution, which is the specific point I'm making here, was also articulated in that discussion. I believe that reason is a good tool with which to evaluate alternative execution strategies. However, it's unclear to me that it can be similarly used to evaluate social goals.

This also explains the schism I mentioned in my last post between the Postmodernist Left and the Enlightenment Left. They are covering two separate areas. The Enlightenment Left covers the physical world. The Postmodernist Left covers the social world. And the tools appropriate for one world do not transfer easily to the other. It's "Truth vs. Context", the title of this post.

So, given two competing social systems, two opposing contexts, how do we choose one? For instance, how can we decide between the Strict Father and Nurturant Parent models of Lakoff's Moral Politics? Lakoff takes a stab at resolving that at the end of the book, but he essentially just asserts his own opinions and decides in favor of his progressive politics.

I'm not sure there is a way. Any metric we choose to decide between them can be dismissed as biased, because everything in the social universe is biased by definition by the chosen context. I've struggled with this question before. The conclusion I came to that time (also with Beemer's help) was that perhaps it could be demonstrated that "Good" systems reinforce themselves, whereas "Evil" systems eventually annihilate themselves. In other words, "Good" systems are indefinitely sustainable and create a virtuous circle. How one goes about showing that is a really good question.

I was going to go on and start trying to apply some of these ideas to ethical systems, but I've been writing here for about two hours, so I think I'm done for the evening. I'll try to get back to it tomorrow. The number of branches of investigation available along these lines is dizzying; hence the four or five emails I sent to myself today with different paths to explore. Or I may get bored with it and go explore something else entirely.

posted at: 22:52 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Context in modern physics
I mentioned that modern physics actually demonstrates the importance of context. This is a total aside, which is why I'm putting it in a separate post (think of this as a long DFW-esque footnote), but while I was contrasting the "objective physical reality" with the contextual social world, I realized that the objective physical reality is a thing of the past. In an earlier post, I mentioned the "Enlightenment Left", which believes in the preeminence of reason and the ability of logic to conquer all. They envisioned a clockwork universe, set in motion by a Prime Mover and following Newton's Laws throughout time, where if you knew the positions and velocities of every particle in the universe at a given time, and had enough computing power, you could then predict the position of every particle until the end of time.

However, we now know that is not possible. Chaos/complexity theory has demonstrated the extreme sensitivity of systems to initial conditions, as is most famously illustrated by the butterfly effect, where a butterfly flapping its wings could cause a storm halfway around the world. In other words, to stretch a metaphor too far, chaos theory demonstrates the importance of context (initial conditions) in even something as prosaic as simple classical mechanics.
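You can see this sensitivity for yourself with a few lines of code. Here's a minimal sketch using the logistic map, a standard toy model of chaos (the starting point, perturbation size, and parameter values here are all just illustrative choices of mine, not anything from the physics literature on weather):

```python
def diverge_step(x0, eps=1e-7, r=4.0, tol=0.1, max_steps=100):
    """Iterate the logistic map x -> r*x*(1-x) from two starting points
    that differ by eps, and return the step at which the trajectories
    first differ by more than tol (None if they never do)."""
    x, y = x0, x0 + eps
    for n in range(1, max_steps + 1):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if abs(x - y) > tol:
            return n
    return None

# A perturbation in the seventh decimal place blows up to order one
# within a few dozen iterations - the "butterfly effect" in miniature.
print(diverge_step(0.2))
```

Roughly speaking, the gap between the two trajectories doubles each iteration at r = 4, so an initial error of one part in ten million reaches order one in a couple dozen steps; shrinking the perturbation only delays the divergence, it never prevents it. That's why knowing "enough" initial conditions is never enough.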

Quantum mechanics is another area of modern physics that can be construed as demonstrating the importance of context. The two slit experiment is a good example. The photons somehow "know" whether the other slit is open or not, and decide whether or not to create the interference pattern based on that information. You can come up with all of the "probability wave" explanations you want, it's just spooky and counter-intuitive. And I won't even get into the EPR paradox and entanglement, mostly because I don't really understand either. But it all points to the futility of trying to analyze a system in isolation, without knowing everything else it is interacting with, its context.

posted at: 22:44 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

The Ultimatum Game
I mentioned that there would be cases when people would answer the question "Do you want $2 or $0?" with "$0". This is what actually happens in the Ultimatum Game, described here, with references. The basic idea is that there are two players, asked to split up a pot of money, say $10. The first player decides on a split between the two players. The second player is given the option of accepting the split as offered, or rejecting it, in which case neither player gets anything. In the isolated case of "Do you want $2 or $0?", a responder would almost always take the $2. But if the responder is playing the Ultimatum Game, and knows that their $2 means the first player is getting $8, half of all responders reject the $2 and take nothing.

This is a fascinating result to me. It demonstrates the all-encompassing importance of context. In an isolated context, people answer one way. In a social context, they answer differently, where feelings of fairness are brought into play. In fact, one study used MRI scanning to demonstrate that unfair offers activated a part of the brain that is associated with negative emotions, including, one would assume, spite. The paper goes on to point out that the MRI results demonstrated a conflict between "the emotional goal of resisting unfairness and the cognitive goal of accumulating money."

One might wonder where humans have learned what "fairness" is, and why it is built into our brain chemistry. This paper gives some insight into how such an instinct evolved. In it, they run computer simulations and demonstrate that the fairness instinct can evolve in the Ultimatum Game if participants are given a history. If it were a one-off game, the first player would always make the split uneven, and the second player would decide that something is better than nothing. However, if there are repeated iterations, the second player can spite the first player by holding out for a "fair" split, and enhance the likelihood of getting a better deal in the future. In other words, fairness only matters when you are likely to interact with the same people repeatedly - "When reputation is included in the Ultimatum Game, adaptation favors fairness over reason. In this most elementary game, information on the co-player fosters the emergence of strategies that are nonrational, but promote economic exchange." And the MRI studies demonstrate that such strategies, such feelings of fairness, are actually built into our brain chemistry.
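The repeated-game logic can be sketched as a toy simulation. To be clear, this is my own illustrative model, not the one from the paper: the responder rejects any offer below a fixed threshold, and the proposer raises its offer by a dollar each time it gets rejected, which stands in for "reputation":

```python
def play_round(offer, threshold, pot=10):
    """One round of the Ultimatum Game: responder accepts iff
    offer >= threshold. Returns (proposer payoff, responder payoff)."""
    if offer >= threshold:
        return pot - offer, offer
    return 0, 0

def responder_total(threshold, rounds=20):
    """Proposer starts with a lowball offer of $1 and bumps it up
    by $1 every time the responder rejects (the reputation effect)."""
    offer, total = 1, 0
    for _ in range(rounds):
        _, r = play_round(offer, threshold)
        total += r
        if r == 0:
            offer += 1
    return total

# The "rational" responder takes anything; the "spiteful" one holds
# out for a fair-ish $5 and ends up richer over repeated play.
print(responder_total(threshold=1), responder_total(threshold=5))
```

Over twenty rounds, the accept-anything responder earns $1 per round while the hold-out responder eats a few early rejections and then collects $5 per round. With a single round, the ordering flips: the tough responder earns nothing and the accept-anything responder earns $1, which is exactly the one-off vs. repeated distinction above.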

This leads to an important result in my mind. Because we are primates, and prisoners of the monkeymind, things like fairness and social justice only matter when we are dealing with those within our monkeyverse: people within our social universe, people we are likely to run into again, even if they are only Familiar Strangers. We don't rip off the guy at the corner convenience store because we stop in there regularly. We pay our fair share at dinners with our friends because we know we will be going out to dinner with them again.

However, if we are dealing with strangers, with people we don't feel are part of our world and whom we will not have to interact with again, then all the rules of fairness go out the window. We are returned to pure self-interest. It's like a one-off game of the Ultimatum Game. We feel fine cheating the people we don't know, because in an emotional sense, they aren't people to us. They don't evoke our rules of fairness. They are objects in the world, to be used and disposed of.

How do we expand our monkeyverses, keeping us from doing stupid things like stealing from strangers, committing hate crimes, and invading foreign countries? My answer is probably not surprising: we use stories as a way of giving us the details about other people that change them from cardboard cutouts into people. By turning them into real three-dimensional people, stories can activate our monkeybrain and all of the accompanying emotions of fairness and guilt. Such emotions leverage the way our social brains have evolved, hopefully getting us to treat each other better. It's a theory. And one I'll probably return to at some point.

posted at: 22:17 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Thu, 25 Nov 2004

The Incredibles as a commentary on gifted education
I liked this New York Times article, deconstructing The Incredibles as a commentary on the debate within education on how to handle gifted children. Read it soon, before it becomes pay-only (or email me, because I downloaded a copy). The article also refers to the short story Harrison Bergeron, by Kurt Vonnegut, which danah boyd posted last month, in case you're interested.

Both the article and the story address the eternal question of gifted education - do we separate out the gifted kids at the cost of telling the other kids they are not special, or do we keep everybody together with the risk of holding the gifted kids back from learning at their own pace? I've covered some of this territory before, but that's why it's an eternal question. I'm not sure I have anything new to say, except to tie it into my post about social rejection and reiterate that things that are available to everyone are not valued.

Education's a tough thing. I was thinking about this recently (partially in response to reading this article recommending against going to Ivy League schools), and pondering the benefits from the top-notch education that I was fortunate enough to receive. It certainly wasn't the classwork; I doubt I could remember ten percent of what I learned at MIT at this point. Part of it is confidence; even though I don't remember the specifics of equations any more, I remember that I was able to figure it out, and since I could handle the firehose, everything else feels easy in comparison.

I think most of the benefit from going to MIT was the exposure to other really smart, talented people. Despite the feelings of inadequacy, it's good for me to be challenged, to realize I'm not all that, to continue to have grandiose dreams. I'm talented enough that I could skate through this world pretty easily if I chose to; I have already achieved all the material trappings of success that the world demands, from the fancy car to the nice condo to the international travel, which is part of why that post struggles with the question of whether that's enough.

If I were judging myself by the standards of "society", it would be. I have everything that people say they want (well, except a wife). But I hold myself to higher standards, partially because I have friends that achieve extraordinary things, partially because I've been told all my life that I should be doing those extraordinary things as well. Maybe I'm delusional. Maybe I'll find that place to stand and I'll just break the lever. But I'm going to keep trying. Wait til I write up my latest pretentious ideas, developed during a long conversation with Brad and Jill.

Anyway. Gifted kids and education. Hard question. I'll stop now. Well, this post at least. I have a backlog of stuff to talk about. A big backlog.

posted at: 09:58 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sun, 21 Nov 2004

Whither social software
There's a new group blog out there, started by Stowe Boyd, devoted to figuring out the Operating Manual for Social Tools. Since I'm always interested in this stuff, I'm going to be following along closely to see what ideas they have.

danah boyd asks a really good set of questions in one of her first posts:

The goals of sociological networks are very clear, but what are the goals of people-generated networks for public consumption? What are the goals of the designers vs. the goals of the people producing these representations? Is one motivation to empower people to find new ways to relate? Is the goal to have a more efficient way of spreading memes? Is the goal to make people reflect on their relationships? What are the goals?

danah's point, which she elaborates on in this post, is that social software, as presently constructed, often takes the wrong approach. Developers come up with a set of neat activities and ways to use their software, release it to the public, and then get annoyed when people don't use it in the way they expect (the whole Fakester phenomenon on Friendster is an example). danah points out:

Focusing on sociability means understanding how people repurpose your technology and iterating with them in mind. The goal should not be to stop them but to truly understand why what they are doing is a desired behavior to them and why the tool seemed like a good solution.

This ties back to my commentary on Clay Shirky's discussion of situated software. The most usable software is that which is written to solve a specific problem, to achieve a specific goal of specific users. Of course, this brings up the question of which user to target. Shirky suggests in his latest essay that for social software, the answer is non-intuitive - the user is actually the social group. He uses mailing lists as an example of different group strategies (netiquette, FAQ lists) that have evolved to counteract the desires of specific users (those who wish to flame).

For the rest of this post, I'm going to take a stab at answering danah's question: What are the goals for social software? From my perspective, there are a couple different ways that social software can go. One is to treat the individual as the center of their social universe, as Esther Dyson suggests. From this angle, the goal of social software is to accurately map out a person's social network, and act essentially as an uber-rolodex. One example would be that I think it'd be really useful if my calendar could include all of the cool concerts/clubs/presentations that my friends are going to, using RSS or something to put it all together. Unfortunately, given the dynamic nature of events, and the effort associated with entering events into a calendar, it seems unlikely that something like this would work in practice.

However, I do think that there are some possibilities here. Mailing list software that somehow rates the relevance of posts, and moderates them according to your previous preferences. Meeting software that has some idea about who should actually attend meetings, based on previous meeting attendance and some sort of feedback mechanism for each user that rates the usefulness of the meeting to them. It's all in the intelligent agents area, but if you squint, it could be seen as social software, based on smoothing the interfaces between people.

Another approach to social software is to pick up on Shirky's idea that groups, not users, are the atoms of social networking. I like this approach a lot. It's hard to visualize, though, because there's nothing out there that's really built with this approach in mind. I think it's also difficult to imagine because there is a wide variety of groups out there, each of which has its own agenda. And each member of those groups has their own agenda. Some groups might be happy with an email list - the community associated with TEP has been using a mailing list for a couple of decades to keep in contact with each other. Other groups that are much larger may need more sophisticated mechanisms of interaction, such as those developed by Slashdot. Others that are doing collaborative work may use a Wiki.

If I were going to start a social software company, I would take the advice of the contributors to the Operating Manual blog, and admit that "late binding of goals is better than thinking you know why people are going to be using your stuff." Rather than assume that people or groups have a specific task in mind when they get this software, make the software easily customizable and modular so that groups can find a version that fits their needs. In other words, build the tools, not the end product. Then, by seeing which tools are most commonly used by various groups to construct social software, the company could tweak the tools to serve those groups better.

I think there are several companies onto this idea already. Socialtext is one example, except that they seem to be focusing on a wiki-centric approach, which may not always be applicable. CivicSpace is taking a similar approach, except that they are starting with a very specific toolset, which may again be of limited applicability. From what little I've seen, these types of companies have developed a hammer and are out looking for nails, instead of looking to develop an entire toolbox.

To take another angle, Nielsen points out: "There is only one valid way to gather usability data: observe real users as they use your site to accomplish real tasks. This is actually the simplest of all the methods: just see what happens!" Given our minimal understanding of how groups interact, I think that his observation applies with even greater force in social software. Put some tools out there and see what happens. As Shirky puts it:

Once you regard the group mind as part of the environment in which the software runs, though, a universe of un-tried experimentation opens up. A social inventory of even relatively ancient tools like mailing lists reveals a wealth of untested models. There is no guarantee that any given experiment will prove effective, of course. The feedback loops of social life always produce unpredictable effects. Anyone seduced by the idea of social perfectibility or total control will be sorely disappointed, because users regularly reject attempts to affect or alter their behavior, whether by gaming the system or abandoning it.

I think the next decade or so is going to be really exciting because of this opportunity for experimentation. As more and more of our life becomes mediated by the computer, it means that hard data on our interactions with each other will become readily available to help sociologists discover how groups really interact. It's going to be difficult. We all have our beliefs about how the world works and how people interact, and will often cling to those beliefs even in the face of contrary evidence (e.g. the "blue" states' disbelief that Bush could be re-elected). But I think that these tools will help us learn to design more and more productive ways of collaborating with each other and exchanging ideas. And given the increasing complexity of the world, and the inability of any of us to know everything, such collaborative tools will be necessary to support the virtuous circle of innovation. So, in some sense, social software is the key to our future as an innovative society. Damn. I have seen the future and it is social software. So who's going to start a company with me? :)

posted at: 21:40 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Thu, 11 Nov 2004

I've been meaning to write this post for several months (I referred to it two months ago over on Livejournal, so it's older than that). It's basically an exploration of how to deal with the situation where one's personal values do not correspond to the values of an organization to which one belongs. I started thinking along those lines when I was going through a rough spot at work, but I think it's more broadly relevant, especially in light of what we liberals are going through after the election last week, where we love America but don't really like what it seems to stand for these days.

Here are my thoughts on several strategies one can use to cope with the situation (whether at work or in politics or in life):

  1. Suck it up. Subjugate one's personal values to those of the organization. These are the people who take the game as it's currently constructed and aim to win it under those rules, no matter how stupid the rules are. These are the people who climb the corporate ladder, kowtowing to those above them, and accumulating people below them to expand their empire. It's also the way the conservatives have decided to play electoral politics. Since the system is flawed and voters are swayed by media, they take advantage of it to cement their political standing. Obviously, I don't agree with this approach. I especially don't like it because of my tendency to question the assumptions in any system.
  2. Leave. Join a new organization. Quit your job, and find another company. Or, in the case of the election, move to Canada. There are some cases where this is necessary, where the organization is too far gone and there's nothing you can do to pull it back. But it's cowardly. It's walking away.
    1. A variant of this one is mentally checking out. This is where you're still there physically, but you don't care any more. You'll do what is asked of you grudgingly, but you'll do the minimum necessary to get by. It's not quite walking out the door, but it's close.
    2. Another variant is withdrawal. I see this as being a particularly tempting one in light of the election. Let's just withdraw to our enclaves in San Francisco and Boston and New York and leave the rest of the country to rot. I can't really argue with this one because it's basically what I've done. But I feel like I want to make an attempt at an outreach effort to expand the enclaves. It may be pointless. And this energy may only last a couple weeks. But for now, that's the direction I'm going.
  3. Cause trouble. Make a ruckus. When asked to do something that you feel is wrong, kick and scream wildly and make sure everybody knows that you hold yourself morally superior to the person that gave you the order. This is pretty satisfying in the short term, but clearly unproductive in the long term. It doesn't convince anybody to change their behavior, and, in fact, entrenches them in their ways to avoid giving you any satisfaction whatsoever. I'm speaking from personal experience, of course. I see this as what the anti-war protesters are doing. All it does is convince the other side that we have no basis for our position, because our only method of defending it is inarticulate ranting and raving.
  4. Defend your position. This can go a couple ways.
    1. The less productive way is the one where you try to pick apart your opponent's stated reasons. The appropriate analogy would be to the endless discussions on usenet where people would make line-by-line rebuttals to other posters. Tiring and annoying for anybody but the pedants. Unfortunately, this seems to be the way chosen by many Democrats after the election - an example is trying to get religious conservatives to reconcile their "pro-life" position on abortion with their support of the death penalty. Other good examples are in the comments on this post by a Bush voter. It's missing the forest for the trees.
    2. The more productive way is the one where you try to understand your opponent's worldview. What are their overarching concerns? What motivates them? Only when you can put yourself in their mindset can you figure out what arguments might convince them. Lakoff's work takes this approach and I think it's the way to go. As a side note, this is also the way UI and application design should be done - start with the overall goal and work backwards rather than start with the technical details and work forward. Unfortunately, most project managers would disagree. But I digress.
  5. Set an example. Live your life the way you think it should be lived. This is somewhat inspired by reading about Martin Luther King and the tactics of non-violence. His vision and his unwillingness to knuckle under to how other people thought he should behave set an example that all could follow. A less glorious example is one described by Joel Spolsky, on how to make your working environment as a programmer better, by properly running things for yourself and letting others see you as being more productive. The idea is to set such a good example that others will strive to emulate you. It's hard. It's like being a saint. But it's probably the most effective method of conversion. When I was a kid, the Christians who impressed me most were not the ones who were loud and active in their faith, spreading the gospel and trying to convert everybody. They were the ones who quietly lived their lives in Christ. If you asked them, they would share their faith, so it wasn't as if they were hiding it. It was just part of who they were. And that sort of quiet dignity was far more persuasive than any rhetoric could ever be. This is sort of a variant of #2b, without the bitterness.

None of these strategies are original or anything, of course. But I've found that it's been helpful for me to mentally lay them out and think about which one I am using in a given situation. And just the process of enumerating them has helped me to recognize some of the more unproductive ways in which I deal with exasperating situations where I am feeling excluded and unrepresented. Admitting you have a problem is the first step and all that (speaking of which, one of the other backlogged posts is an examination of why truth is often expressed in banal aphorisms). At work, I was originally using response #3, and have since lapsed into #2a. Part of the goal of writing this up is to help inspire myself to aim for #5 in the office.

As far as the political situation in America goes, I feel like I should aim for #5 and #4b. #4b is part of what is inspiring this goal of writing more. We'll see how long that lasts. I don't have the patience of a saint unfortunately. Or the work ethic. :)

posted at: 23:30 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sun, 24 Oct 2004

The Animal Who Tells Stories
One of the issues brought up in response to my last post was that we, as humans, are really poor at statistically evaluating risk. We're really good at remembering spectacular stories, or relevant anecdotes, but we're really bad at taking numbers in the abstract and turning them into guides to behavior. And this isn't just true in the evaluation of risk. In some sense, we use stories to define who we are. I was thinking about this and realized that the use of stories ties together a whole bunch of scattered interests that I have (and was the subject of one of the first rants I wrote).

Anybody who has a stake in persuading us to do something understands the power of stories. Advertising is the obvious example. What are commercials other than miniature stories, designed to elicit the appropriate consumer behavior from the audience? Or, as Neal Postman described it, the "Parable of the Ring around the Collar". Jesus used parables as well. He understood that moral laws such as the Golden Rule would be too abstract for most people, so he used parables to bring them down to the specifics of how he expected people to behave in difficult situations.

Politics is another obvious application of stories as persuasion tactics. In 1988, the story of Willie Horton became a millstone around the neck of Dukakis, and eventually helped drag him to defeat. I don't know if Horton was an exception to the rule, or a commonplace occurrence, but it was irrelevant. All it took was one case, because it created the story. Similarly, the cognitive framing that Lakoff refers to is an attempt to use language to evoke a story in our brain. "Tax relief" evokes the story of the cruel government oppressor, stealing our money away.

Stories are how we structure our memories. If you ask me about what I was doing on June 25, 1994, I'd say, "Um, what?" But, when you prompt me that that was the day that my friends Brian and Jen got married, I'd be able to tell you all sorts of details about that day. Our memories are not filed like a computer's, with dates and times. They are filed associatively, with tags on various memories linked together in a spaghetti-like fashion. (No, I'm not a cognitive scientist, and I don't even play one on TV, but it makes sense to me). In fact, I might even argue that communication in all of its forms is motivated by the desire for us to tell stories to each other, to say something.

I suspect that our brains are still fairly unevolved from the hunter-gatherer state in a lot of ways - Dunbar's number is a good example. And there's really no need for a hunter-gatherer to process data numerically or in an abstract sense. Simple stories in the form of myths are sufficient. And I can't blame the people who are paid for it for taking advantage of that by framing things as simple stories. But I wish that we could use that power for higher purposes.

Stories are one of the most powerful ways for us to get outside of our immediate experience and feel what it's like to be somebody in another place or another culture. It can break down the barriers between us - the story of "America" and what it means to be in the land of opportunity and freedom is one of the few things that ties together this sprawling country of ours, from the liberals of the West Coast to the conservative Midwest, from the newest immigrant to the WASP who can trace their lineage back to the Mayflower.

Wouldn't it be great if we started developing similar myths that made us aware that we are members of a global community rather than separated by our outdated nation-states? It would seem to be obvious in light of the many global problems facing us, from terrorism to global warming to ever-more-interconnected economies. But because the last several centuries have been devoted to developing the story of the nation-state (if my feeble historical knowledge is any guide, the first nation-state in the modern sense was Bismarck's Germany in the 1800s), we are trapped in that story line, that frame, and haven't figured out how to move on from there.

The other thing that is really necessary is the awareness of strangers as more than an abstract concept. Because most of us do not deal with people unlike ourselves in our daily life, it is easy to demonize them as The Other, rather than realizing that they are probably people who are mostly like ourselves, just struggling to get by in this world. One of the things I liked about the Extreme Democracy salon was Tom Atlee's presentation on deliberative communal decision making, how even people who thought they believed very different things were able to find commonalities when they talked about the issues in their lives. It kind of relates to an old post of mine about examining our assumptions. And it leads me to think that if we could use stories to help increase our connections, rather than using them to foment separatism (e.g. against Muslims, or "welfare mothers", or proponents of gay marriage, or Indian companies outsourcing jobs), wouldn't the world be a better place? I know I'm being idealistic again, since it's in people's self interest to try to make their slice of the pie bigger at the expense of others (Thanks Mancur Olson). But stories would be one way to try to make people aware that our self interests are now so intertwined that what is good for one is often good for all. Anyway.

There's a lot of fertile ground for thought here. Actually, while I was thinking about this entry, and realizing that it ties several broad areas of my interest together, I started randomly speculating that if I were ever to write a book, this would be a good topic. Use the examples I gave above as a kick start to explore several chapters' worth of case studies of how we use stories to remember our past, to persuade each other, and to organize our lives. Then go interview some cognitive scientists to get the basis for that. Plus some historians for their perspective (oral history, bards and storytellers, etc.). It could be pretty cool. And the title would be "The Animal Who Tells Stories"; hence the title of this post. Anyway. I'm sure there's a bunch of similar books out there already (if you know of any, please let me know). And I doubt I'll ever get motivated enough to pull it together.

posted at: 17:46 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Tue, 12 Oct 2004

Beemer put up a thoughtful comment in response to my last post. To quote one part:

Smart kids, especially the ones who go places like MIT, often get this idea that they need to be Einstein or Newton, which is frankly silly. Because that's not how the world works -- it's the total contribution of everyone, in a whole bunch of different dimensions, not just superstars in narrow but visible fields.

I'm certainly guilty of this. It's easy to believe when you're a smart kid in a small pond that you have what it takes to be a superstar. And it's even still conceivable when you're at MIT, because you're surrounded by superstars. And it's not like they blow you out of the water with their intelligence - you hang out with them, have interesting conversations, things like that. Part of what distinguishes the superstars is just staying productive, as I mentioned in that last post. But I think there's another part as well, which is the ability to focus on a narrow field.

One of the things that eventually got me to drop out of grad school was that I didn't care enough about physics to make it my life. My fellow students spent morning, noon, and night studying physics (true story: our quantum field theory prof told us that we'd have to spend three or four hours a night studying, plus weekends, if we wanted to keep up. I didn't. Most did). We'd be out to dinner on a Friday evening, and they'd be talking physics. And I just didn't have that level of commitment. There were so many other things to think about or talk about.

But in this age of increasing specialization, it really is a full-time job to become an expert in a field, to keep up with all the latest journals, to practice your chosen profession, etc. To become a world-renowned expert, you have to pretty much sacrifice everything else in your life to your field. David Foster Wallace touches on that in his descriptions of the tennis prodigies in both Infinite Jest and the essay titled "Tennis Player Michael Joyce's Professional Artistry as a Paradigm of Certain Stuff about Choice, Freedom, Discipline, Joy, Grotesquerie, and Human Completeness" in his book of essays. And I'm not willing to make that commitment. To anything.

I think part of my reluctance is that the learning curve levels out so fast. Beemer once pointed out to me that it's always more exciting taking intro classes, because you're getting exposed to a whole new way of looking at the world and a whole new vocabulary. Once you get past that, then it's a whole lot of slogging while you pick up all the nuances and details that only matter to other practitioners in the field. And then it's the focused rat race I mentioned earlier.

I get excited about learning new things and being exposed to new ideas. And staying in one field doesn't really offer that opportunity, because of the levelling of the learning curve. Physics is a good example - after you take mechanics, electromagnetism and quantum mechanics in your first two years of college, what happens? You take them again as upperclassmen, exploring more nuances. And then again as grad students. Over and over again. Sure, you're learning more details, and more refined mathematical techniques, but you're not learning anything fundamentally new.

I've mentioned it before, but my ideal job is one where I scan results in a whole bunch of fields and try to figure out how to synthesize them. Constant stimulation. I'm not sure I have the self discipline to sit down and learn stuff like that, unfortunately - there's so much stuff that I've thought about learning at least the intro-level stuff for, like sociology or biochemistry, or even economics, that I've never followed up on. But it's in the right ballpark. This blog and my reading list, which spans a bunch of areas, are my feeble attempts to move in that direction.

Once I get to the point where I'm modestly competent at something, I'm bored. This has been true throughout my life - I have an attention span of about 2-3 years for anything. I burned out on college badly by my senior year. Grad school? I lasted three years. My first job? Two and a half years. I've been in my current job four years, but my role has changed a few times along the way. This is my sixth year in the chorus (it took longer because chorus only meets once a week) and I'll probably quit in the next couple years, because I've gotten the routine down and want to move on to something else. I'm still in the learning stage for frisbee, because I've only been doing that for a year or so, so it's still a challenge and a lot of fun. I need to find new challenges for myself on a pretty regular basis. Although I'm still considering what the next one should be (yes, folks, I know your answer is "dating!").

So that's my personal choice. I'd rather be modestly competent in a bunch of different things, with a broad but shallow knowledge base, than become the total expert in one thing. It fits me better personality-wise. Or so I tell myself. When I'm feeling more cynical, I say that it's just because I'm lazy and unwilling to put in the effort to be really good at something, instead relying on my native talents to get me up to the mediocre level. But every now and then, it twinges. I want to be a superstar. I do. But I'm not willing to sacrifice the rest of my interests to do it.

I'm not really sure what the point of this post is. Just to warn y'all, and as you've probably already observed, this round of posts will probably be fairly introspective. Part of that whole midlife crisis is thinking a lot about who I am, what I do, and my role in the world. So I'll be writing it up and posting it here, but I don't really expect anybody other than me to be interested.

P.S. I was sharing some of these theories with a coworker today, especially with regard to the shape of the learning curve, and he pointed out that the shape of the curve differs from field to field. Some fields tend to be knowledge-based - his example was biology or medicine. Medical school is basically a whole ton of memorization. It's a linear learning curve - the longer you spend learning stuff, the more you know, and the more competent you are. Fields like physics or mathematics tend to be more concept-based; once you grasp the theoretical system, you can understand the rest and contribute. In support of his distinction, he noted that Nobel Prizes in Medicine often reward work done at a relatively advanced age, whereas Nobel Prizes in Physics often reward work done when the scientist was young, or even still a grad student. It's an interesting thought. I wasn't quite sure how to tie it into the rest of my post, so I left it for a postscript.

posted at: 22:20 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Thu, 07 Oct 2004

Productivity and existentialism
I've been tossing this post around in my head for close to a month now, and it's not coming together, so I'm just going to get down what I have and invite feedback to see what others think. Be warned, it's a long one, with lots of whining.

It starts with my tendency to procrastinate. A lot. It's one of the tendencies that I like least about myself. I'll put off something, and keep on putting it off, until it absolutely has to be done, and then I do a half-assed job on it. For some reason, it's just really hard for me to get started. This was made even more evident when Christy and UBoat were staying with me, because they're good at starting on things.

Christy said at one point, "Are there any projects that you've been meaning to do around the house that you haven't gotten around to?"
I said, "Well, I've been thinking about tearing down the ugly gold wallpaper in my bathroom, and painting the walls instead."
Christy said, "Great! Let's get started!"
I said, "What do you mean?"
Christy pried up an edge of wallpaper and ripped down a strip.
"There, now you have to do it!"

And a couple weeks (and several trips to Home Depot) later, I had a refinished bathroom. It's not like it was hard. It just took somebody to get me started. I'd been thinking about doing this project for three years. And now, it was done.

Just get started. I think that's the key. Joel on Software would say Fire and Motion. I've been trying to do this recently; this is why my blog page finally got the redesign with a cool sidebar and stuff, why I finally upgraded my computer to Win2k and installed the 80GB hard drive I bought last year, why I finally moved the living room light fixture last weekend. And that's a huge improvement. If I can just get one such project done, or even started, in a weekend, I'm doing well.

I think that part of my problem personally is from a tendency to set my goals too big. I get spun up into worrying about this huge big problem and stress about how I'm ever going to get it all done, instead of breaking things down into manageable chunks, and just launching into them one by one (like Christy ripping down that strip of wallpaper). So I'm trying to be better about that as well; just making lists of small chunks to do, and start on one of them when I'm not doing anything. We'll see if I can keep this up; this weekend, I have to finish fixing up the ceiling after cutting holes in it last weekend to install the new living room light fixture, as well as replace the guest bathroom faucet if I have time - we'll leave the list at that in the interest of keeping things achievable.

And I could end this post right here, except that it leads to a whole different set of questions. Which is, what's the point? Am I a better person for having done these projects?

This is something I'm struggling with. I often feel like I'm not accomplishing anything. I mean, by many people's standards, I do a lot of things. I have a full-time job, I sing in a chorus, I play ultimate frisbee, I hang out with my friends, I read, I post in this blog, I occasionally cook, etc. But none of these things are really lasting - they're purely experiential. I don't really feel like I'm building anything of importance.

To put the question another way, is it "enough" to just take care of myself and my stuff? Many people consider it a victory just to make it through each day. They go to work, maybe hang out after work with their buddies at the bar or sit at home and watch TV, and go to sleep, to repeat it all the next day. It's all kinds of hubris for me to think that I should be accomplishing more than that kind of mere survival, but there it is.

I should probably start doing volunteer work. After all, if I don't think it's "enough" just to take care of myself, then clearly I should be helping others. (or, yes Mom, starting a family, but we won't go there) (except to say that having children is one way in which people achieve the goal of building something that is more than themselves). But how should I contribute? Where can I be best used? And that opens up a whole 'nother can of worms, because I feel like I should hold out for something where I can contribute something more than just a warm body, because there are plenty of warm bodies out there. Which is, again, selfish and egotistical. I should just get over myself, and go do something. It's just the procrastination problem in another form, I suppose.

Here's another question: Is it better to do something poorly than to not do anything at all? I hate doing a poor job on something - it's so demoralizing that it makes me just want to give up. On the other hand, if I avoid doing anything I'm bad at, I'll never get better, never improve. And, by being so scared of failing, I'm procrastinating myself into paralysis. It all comes back to just getting out there and trying stuff.

In other words, I'm having an existential mid-life crisis. Yay turning 30, and realizing that I'm pretty much who I'm going to be for the rest of my life. I'm not magically going to turn into a creative genius at this point. I'm not the type of person who's filled with an overabundance of ideas. I'm not a glass half full type, who sees the world as a blank slate, ready to be filled with my creations. I like to think I'm a moderately competent analyst (in the dictionary sense of analysis - "The separation of a whole into its constituent parts for individual study"), somebody who can break things down, find the cracks, and generally play devil's advocate. This makes me an adequate programmer and debugger, thankfully, but I'll never be a great hacker because I don't have that spark of creativity, that sense that something is wrong with the universe that must be fixed right now.

So I'm trying to find my niche. Where can I make a difference? What can I do? Is just getting through life enough? It doesn't feel like it to me. It's odd - I've probably been more productive in my personal projects over the past few months than I have been in years, and yet it feels even more pointless than ever. So what's missing? The obvious answer is making a difference in other people's lives, to feel like I'm having some sort of an effect. Does that mean doing volunteer work? Or spending more time with my friends? I don't know. Maybe I shouldn't even be aspiring to anything more; just accept my life for what it is. Anybody have thoughts?

posted at: 21:52 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Wed, 29 Sep 2004

Why do we write?
I've spent some time over the past couple days thinking about why Infinite Jest annoyed me so much. I went and read several gushing reviews of the book, as well as interviews with Wallace where he explains what he was trying to do. Part of what Wallace was apparently trying to convey was that life is messy and complicated and it doesn't come to a neat conclusion. He wasn't trying to write a typical narrative novel; in fact, he was explicitly rejecting the conventions associated with the form. I think that's a copout, though. If you're writing a novel, you're making an implicit agreement with the reader that you will follow the conventions, or at least have a good reason not to follow them. To have the reader work for 900 pages and then say "Ha! Just kidding!" is an elementary school amusement, on the order of Lucy pulling the football away from Charlie Brown.

While I was talking to my friend Wilfred about it, he pointed out that there are no fixed points in postmodern relativism. Like the game of Eschaton, the map is not the territory. But, again, if that's the point of Infinite Jest, I cry copout. I don't need a novel to tell me that. I can look at the world and know that it's infinite and can't be described completely in writing. But does that mean we should despair and not even make an attempt to write? Of course not!

Why do we write? We know that we cannot capture all of life, so what's the point? Here's my answer. We may not be able to create a complete map, but we can create a useful one. All of writing is an attempt to create a useful abstraction of the world. It is distilling the world down to interesting or useful tidbits that can be captured. It's making a map of life that others can hopefully use to assist them in finding their way, by benefiting from our experience.

Why do I write this blog? It's because I like trying to create such abstractions. To try to distill my experience and thoughts into little nuggets that I can refer back to later. I know that most of my writing contains gross simplifications and generalizations. But that does not necessarily invalidate its viewpoint, so long as it is understood that my views are on a specific subject at a specific time, and not a general description of life. Yes, my map is not the territory. But it can still be a useful guide in navigating this complex world.

Why do people write nonfiction works? They are often sifting through their experience and sharing the portions that they think are relevant. Just because Kunstler is tremendously biased against cars doesn't mean we should ignore everything he says if we like cars. He may not capture all the subtleties of the debate in developing a community, but he provides a viewpoint, one that we can weigh and judge in light of our own experience. And if it makes sense to us, we integrate it into our own guides to the world, our own maps.

I love that feeling when I read something, and a little light goes on, and my view of the world is shifted in response to this new viewpoint. Reading that first interview with Lakoff was like that - it just opened up a new way of looking at politics and the world that made so much sense. There are often a few nuggets in any book I read that I want to hold on to. I started writing book reviews for myself just for that reason - to grab the bits that jumped out at me, that made me open my eyes and look at something in a new way.

I think the same motivation holds true for fiction works as well. Authors are trying to communicate something, get an idea across to their readers. Wallace had several points that he was trying to make with Infinite Jest. I don't think he was entirely successful, but that may be just because I'm still annoyed. A romance novelist is reinforcing a fantasy view of the world where love at first sight exists, and everybody lives happily ever after. A science fiction writer may be speculating on what the effects of technology will be in the future. There is a reason they're writing, and it's to get some idea out of their brain and into the reader's.

And it's hard. Communication is one of the trickiest things we do as human beings. Given the incredibly low bandwidth we have to communicate with in speech and writing, it's amazing that we can convey what we are feeling and thinking to each other. I'm influenced here by the book The User Illusion and Norretranders's description of exformation, which is the enormous amount of context that we each use to interpret the words on a screen in front of us, or a conversation with a friend. It's similar to the idea of reality coefficients, where it's really hard to communicate with somebody who's using a different context or a different set of assumptions.

Which brings us back to postmodernism, oddly enough. One of the great insights of postmodernism was that the meaning of a work was not solely in the work itself. It was also in the context that a reader brought to the work. Applying that to Infinite Jest, Wallace intentionally evoked the context of a narrative novel, but then intentionally rejected the conventions associated with it. He broke the implicit contract he had with his readers. Given his affection for postmodernism, perhaps he was trying to point out the importance of context explicitly by showing how much we depend on it. Perhaps.

P.S. This post was written while wearing a bandanna, one of David Foster Wallace's trademarks. Thought you'd like to know.
P.P.S. Any postmodern theory referenced in this post is unlikely to be accurate, since I am a total poser when it comes to postmodern theory.

posted at: 21:32 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Thu, 09 Sep 2004

Self promotion
In the spirit of Paul Graham, I'm going to start this entry with a question that recurred to me a couple days ago: What is the right level of self promotion?

I was sitting in a project update meeting and noted the difference in how much detail people put into their various updates. Some people wanted to walk the whole group through all the details to show all the work they've done. Others said "Everything's on track" and left it at that. And it got me to thinking about where the right balance lies. This is something I constantly struggle with. I don't want to be an arrogant jerk and brag about my accomplishments. But instead I end up deprecating myself all the time and never talking about anything I've done. And that doesn't seem right either.

In the latter case, I'm relying on other people to be interested in what I'm doing and asking me for details. I guess some might call this a passive-aggressive approach to life. On the other hand, I don't want to take an aggressive-aggressive approach to life where I'm shoving my life into the face of everybody I meet; we all know and hate the type A personalities who flaunt their wealth and their BMW, tell everybody about the major deals they closed and the top university they went to. But sitting back and waiting to be asked doesn't work either, because you need to reveal enough about yourself to entice people into asking more.

I think that part of the problem is that I have pretty high standards for myself. I have friends that do all of these amazing things so my life feels pedestrian in comparison. I didn't help run Howard Dean's Internet campaign. I haven't gone backpacking around the world. I didn't try out for the U.S. Olympic fencing team. I haven't had my artwork discussed in the New York Times. I haven't discovered a minor planet. In fact, I've accomplished basically nothing with my life. Or at least that's how it sometimes feels, relative to the people I know.

So I don't really understand the impulse to announce my mediocre deeds to the world the way some people seem to. That's part of why I've resisted the temptation to turn this blog into a diary type journal. I did that for a while on a different site, but I got bored with it. My life isn't interesting to me, so I can't imagine it being interesting to others. And maybe that's the key. If I were truly invested in what I was doing, I would enjoy talking about it, and that investment would be apparent to others and would spark their interest. Hrm...

Another reason I think that people exhibit self-promoting behavior is as a form of validation. To stereotype it harshly, it's "Look at what I did! Isn't it neat?" Since I'm relatively secure in the work that I do, I don't feel that need. I do my work, I know I do it well, and that's good enough for me. Well, most of the time. Sometimes, it would be nice if some of the management at my company had some clue as to what I did. And that's where being more aggressive about presenting my accomplishments would be more useful.

It's the all-too-normal phenomenon where only the jerks get ahead, because only the jerks press their case. It's also why the jerk is always the one who gets the girl, because while us self-effacing types are sitting in the corner thinking "She'd never be interested in me", he's going up to the girl, and telling her how wonderful he is. And since she doesn't have any way of knowing what the geek in the corner is like, she goes off with the guy that's promoting himself.

The thing is, I don't want to be that jerk. I don't want to assume that everybody I meet is enthralled with me. I'm actually often embarrassed by my life, because the things that impress people are the things of little significance to me (e.g. "Oh, you went to MIT?! Wow, you must be smart!" *sigh*), so I downplay them ("I went to school in Boston"). But on the other hand, I'm still ambitious. So I need to figure out how to be more aggressive in self-promotion without crossing that line, wherever it is.

One last thought on the subject: I think I make the mistake that Lakoff warns about, believing that "the truth shall set you free". In other words, I believe that my accomplishments speak for themselves. But Lakoff's First Law states that "Frames trump facts." I need to learn to frame my accomplishments in the most complimentary way possible. This is what people mean by crafting one's resume. In the past, I tended to view such framing as cheating. I'd think something like "People aren't really that dumb, are they?" In fact, I used it as an intelligence test - I had more respect for those who could figure out my contributions without having them shoved in their faces.

But Lakoff has observed that it's not a matter of intelligence. It's a matter of winning the battle at the preconscious level - setting up the context so that other people can even see the facts. I can't just assume that people will take the time to figure out what I'm doing, even if I'd like them to. I have to make sure that my accomplishments fit into their worldview, fit their frame. Or accept that I'm not going to get ahead without selling out in that fashion (which is a whole separate article that I've been mulling, on how to reconcile life when one's personal values don't mesh well with an organization's values).

Back to the original question: what's the right level of self-promotion? As always, it depends - on what you want to get out of it, on what level of it you feel comfortable with, on whether it's part of your natural personality. There's a balance to be found between the overly aggressive jerk and the self-effacing, passive geek.

From my personal perspective, I think I need to be more aggressive. I need to get away from my dogmatic pursuit of an idealistic world, and try to work more within the parameters of the world we live in. I also need to spend more time doing the things I find interesting, like writing this blog, and less on the boring things, so I can actually answer enthusiastically when people ask me what I've been up to. And I need to figure out if there's a way in which I can actually make a difference. I am ambitious. I want to do cool things that other people talk about in hushed tones of awe. I just need to figure out where to try to make my mark, a mark that I think is worthy of self-promotion. If I could do that, I think the rest of this stuff would sort itself out. Alas. If only it were that easy.

posted at: 21:56 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sat, 14 Aug 2004

Changing my mind
So picking up the threads of my post about harshness, I realized that one of the possible sources of people not wanting negative feedback is that people never want to admit they're wrong. In fact, they don't even want to admit they don't know (thanks to my father for pointing that out). They want to always have an answer and they'll stick to the answer they chose, because changing their mind would mean admitting their fallibility. And this is kind of scary in a lot of ways.

I'll be the first to admit that I always, always have an answer at the ready. Generally I can take a plausible stab, but even if I can't, if somebody asks me a question, I'll come up with something. But the place where I think I differ from a lot of people is that I'll freely admit I'm wrong when new evidence is presented. "Oh, yeah, I didn't think of that." And, actually, there are plenty of areas where I happily admit I know nothing, from home and car repair to the intricacies of most academic subjects. But when I'm shown to be wrong, I'll say "Oops, you're right" and learn and move on.

To get epistemological for a second, I tend to believe in Karl Popper's theory of falsifiability. There are no hard truths in science. None. We can never "prove" a scientific theory the way we can prove a mathematical theorem. The test of whether a theory is scientific is whether it's falsifiable - whether there exists an experiment we can do whose results could prove the theory incorrect. This is why I consider Scientific Creationism to be an oxymoron: any evidence which might falsify Creationism (such as evidence that the earth is billions of years old) is waved aside with "God made it that way".

So believing in falsifiability as grounds for a scientific theory, and applying it to my own life, I'm pretty sure that not all of my beliefs are true. I've probably made some mistakes (shocking, I know). So when I find evidence that I'm wrong, I'll admit I'm wrong, change my theories to align with the new evidence, and move on. In other words, I always want the right to change my mind. Changing one's opinions in light of new evidence isn't a sign of weakness or flip-flopping; it's a sign of an open mind.

But many people don't appear to believe that. Even if they're proven to be wrong, they'll cling to their beliefs in an attempt to hold on to their self-image, tied up with their need to always be right. We can see it in whom we choose as our leaders. I was thinking along these lines already when a friend forwarded me this New York Times article, showing the extraordinary lengths to which the Bush camp is going to paint Kerry as a flip-flopper, a wishy-washy person who (*gasp*) may have changed his mind sometime in the last three years about Iraq. Vice President Cheney puts forth the Bush campaign's case in a nutshell: "We need a commander in chief who is steady and steadfast."

Why is this important? Why is the Bush camp hammering so hard on this point? Because a large segment (probably the majority) of people believe that changing your mind is a sign of weakness. And I just don't get that. I think being able to admit you're wrong is a sign of strength. I couldn't believe Bush's press conference where he said he couldn't think of a single mistake his administration had made. He truly believes that admitting any mistake makes him less of a man, less of a leader. And his campaign for re-election reflects that, as the article shows.

If Bush gets re-elected (and I think he probably will), it will indicate to me that this country is full of people who would rather bury their heads in the sand than admit they were wrong. Something along the lines of "We chose Bush, and we're sticking with him". They would prefer the mythical cowboy figure that is always right in a world of moral turmoil, to the candidate who acknowledges the confusion, the turmoil, and reacts to it. It's easier, certainly. It's comforting to believe in the one true path where there's good, there's evil, and nothing in between. But I don't think it's realistic. Situations change. New evidence arises. And to ignore the new evidence because you don't want to be seen changing your mind is just dumb.

And it's not just Bush, obviously. People everywhere do the same thing. The managers at my old startup insisted everything was going fine up until the moment everybody got laid off. Heck, I know I have my blind spots where I hold onto my beliefs. But when I'm shown to be wrong, I'd like to think I'd admit my mistake. That's the kind of person I aspire to be, at least.

Now that I think about it, this all relates back to Lakoff's theory of Moral Politics. In the Strict Father view, good and evil are constant factors in the world, where you are born bad, and must exercise discipline to stay good. Changing one's mind is a sign of weak discipline, a sign that you are susceptible to the enticements of evil (aka Satan). Only the strong and steadfast can overcome evil and lead the forces of good. Man. This makes perfect sense. Bush and Cheney are running a textbook "Strict Father" campaign. And enough people in the country believe in that morality system that it will probably work. Okay, that's depressing, so I'm going to stop here.

posted at: 02:24 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Wed, 11 Aug 2004

Instant Community and Values
In a slight departure from my rants about organizations and responsibility and harshness, I'm going to digress for this post, brought on by a thought I had while writing up my experience at Greg Maddux's 300th win. I was wondering why I cared. I mean, this multimillionaire did his thing playing a kid's game. So what?

That leads, of course, to the larger question: Why do we care about sports? Well, okay, some people don't, but those of us that do? It's ridiculous to care about the exploits of guys playing a game that has no effect on my life. My theory is that following sports is a way of instantly becoming part of a community. When you follow a certain team, you go through ups and downs with the rest of that team's fans. You commiserate when your team loses, you celebrate when it wins. You can strike up conversations with people you don't know at all, using merely the shared commonality of rooting for that team. My sister and I did this at the Giants-Cubs game - the folks behind us were Cubs fans, so we were sharing tidbits of Cubs trivia, and experiences of rooting for the Cubs in years past. I don't think I ever even learned their names. But we had a shared history via the medium of sports.

And I think that's really the power of sports. It's not about the athletes on the field; they're merely an excuse. The real power is in the communal experience of the fans. We, as human beings, are evolved to be social creatures. Sports fanhood provides a way of binding us together into a social community. We don't have to have anything in common besides our fanhood. It's similar to the power that network television had until the advent of cable; when you sat down and watched the evening news, you knew that most of the nation was watching it with you. It was a shared experience that provided one of the binding threads of the nation.

I think that sports serves a similar function - when I come in on Monday morning, I can say to the other football fans, "Can you believe that catch that Brandon Lloyd made yesterday?! Holy cow!!" and we'll be off talking about the game and other events of the weekend. It's even better because being a fan of sports doesn't involve the risk of rejection that joining most communities does. You can be a sports fan by declaring yourself one.

Speaking of instant communities, the other obvious one is joining a church. You immediately have people to do things with, because churches generally organize lots of social activities to help embed you into the community. It's a marvelous piece of social engineering, because tying you into the social fabric of the church means that you can't leave the church without breaking all of your social ties as well. And the thought of that is too much for most people to bear (again, we are social animals). I'm not saying this is done as a Machiavellian plot by individual pastors; it's a consequence of the church having been the center of existence for two millennia and having had lots of opportunities to refine its approach to getting and keeping members. But that's another story.

One of the things that I don't like about either of these communities (sports or churches) is that they seem too superficial to me. This comes back to a theory of "reality coefficients" (1) that a friend of mine proposed (yes, I couldn't resist using a David Foster Wallace-style footnote). Our reality coefficients align in a small way via sports or a church, but that alignment restricts interactions to a really shallow level. And for some people, that might be what they're looking for. But I'm looking for more, because I've had it in the past, with a community of friends that I felt like I could talk to about almost everything. And I'm no longer willing to settle for anything less. Which may make me a snob. Anyway.

What was I talking about? Right, sports. Yeah.

Despite all that snootiness, I still love sports. I love playing them, I love watching them, I love the buzz in the crowd when you're all rooting together for something. It's a primitive impulse, being part of a mob of people like that, and one that probably appeals to us all precisely because it's primitive. In our world of postmodern ironic world-weariness, something about the buzz as Barry Bonds steps into the batter's box, as 40,000 people hold their breath together, breaks through our ennui and evokes images of a more primitive time, of gladiators and arenas. It's an exciting feeling. The mob mentality rises to the surface and we lose ourselves in it.

Okay, now I'm heading off in a totally different direction. I'll stop there and figure out which way I want to go later.

(1) Reality coefficients are essentially the value we place on various things in life. Some people rate following sports as being important, others don't. Same for church. Or school. Or quantum hydrodynamics. Pretty obvious, so far. My friend's insight was that when you don't share the same set of reality coefficients as another person, the two of you essentially live in different worlds/realities. In my world, the Cubs losing in the playoffs last year was brutal. Others weren't even aware that the game was taking place. I can't have a conversation with them about that game because it didn't even show up in their world.

My friend extended the idea further, saying that we can only have conversations with others where our values overlap. This is where the instant communities come in. Because of the commonality of experience in being a fan of a certain team, I can commiserate with another fan of that team, regardless of whether our values line up in other areas, so long as the conversation remains restricted to discussing the team. If it wanders off into other areas, we may end up in violent disagreement.

Another consequence of the theory is that it explains "small world syndrome", where it seems like everybody cool that you meet is a friend of a friend or knows somebody that you know. Why? Because we live in different realities depending on the values we place on certain things. Our closest friends and confidantes will naturally share the most values with us; we'll be able to converse on the greatest variety of topics. Their friends will naturally share similar values as well. So it's not surprising that we all know each other. There are other networks of friends out there with completely different values, networks that barely intersect ours at all. So the number of people who share our values, who are in our network of friends, is very small compared to the total population. Hence small world syndrome.

posted at: 15:58 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sat, 31 Jul 2004

Taking responsibility
This post is going to be a little bit disjointed. I apologize from the start. I have several themes running through my head, and I know they all tie together, but I may not be able to express the connection coherently. But I've been mulling it over for a few days now, and it's not getting any clearer, so I'm just going to take a stab at writing it all down and see what happens.

I was watching the PBS show NOW with Bill Moyers the other day, primarily because I found out that he'd be interviewing George Lakoff, whose work I adore. Before Moyers interviewed Lakoff, he talked with another political analyst about the 9/11 Commission Report. The other guy (whose name I can't remember) made an interesting comment about the report, noting that it did a good job of identifying institutional failures, but refused to identify specific people who could have helped prevent 9/11. The system was blamed, not individual people.

I perked up when I heard that. It ties in perfectly with Clay Shirky's observation that "Process is an embedded reaction to prior stupidity". In fact, I'll quote more from Shirky's article to make the point:

"We need a process to ensure that the client does not get half-finished design sketches" is code for "Greg fucked up." The problem, of course, is that much of this process nevertheless gets put in place, meaning that an organization slowly forms around avoiding the dumbest behaviors of its mediocre employees, resulting in layers of gunk that keep its best employees from doing interesting work, because they too have to sign The Form Designed to Keep You From Doing The Stupid Thing That One Guy Did Three Years Ago.
Why are we so afraid to blame people? Why is it that it's okay to criticize the process, the institutional failures, but it's not okay to say (to use Shirky's mythical employee) "Greg, you fucked up"?

It's interesting to me because it ties back into my previous post about harshness. People are so afraid of being seen as mean or of being negative that we have to delicately talk around the problem instead of confronting the issues directly. Sometimes people screw up. And it has to be okay to say that, rather than talk about how the proper process wasn't in place to prevent mistakes from happening.

Where am I going with this? I guess I want to try to outline an alternate universe that runs according to the rules I think I'd prefer to live by. In our current world, it's not okay to blame people directly, to tell them they screwed up, and in fact, to say anything negative about them whatsoever, because you'll get tarred with the epithets of "negative" or "harsh" or "bitter" or "cynical" (who, me?). This extends to management within companies and even governments. We can't say "No, that person is a useless loser who will drag the rest of us down", we have to say things like "We don't feel that he can contribute to the team at this time." Instead of saying, "Y'know what? Eric totally forgot to check his code and that's why it broke", we say "We need to have a process in place for checking code to ensure that this does not happen in the future." All of these things drive me nuts.

I think I'd prefer a much more human-centered world. We, as people, need to run our own lives. We, as people, need to take responsibility for our own lives. My primary asset in this world is my intelligence, my ability to judge situations and respond accordingly. Why should I give up my ability to exercise that asset to the vagaries of some process? I guess that less self-confident people might want to make that trade, in a futile attempt at numbing security, where they could never be criticized, they could never be wrong, because they didn't make any choices - it was all the process. But I don't want to live that way. I believe in my judgment. I don't know how I could live life if I didn't. So I want the freedom to use that judgment. I don't want to be hemmed in by "gunk" formed "around avoiding the dumbest behaviors".

Yes, I'll screw up. People that are making decisions often do. But that's okay. I'll admit that I screwed up, and I'll fix it and deal with the consequences, and we'll move on. One of the reasons that I think my coworkers like me is that I'll admit my mistakes. I'm honest and upfront and say "Whoops, I didn't think of that". But I will happily take the brunt of accepting responsibility for my mistakes if it means having the freedom to make decisions. Freedom and responsibility. Always paired.

Too many people in this world run scared from responsibility, from ever admitting they made a mistake. Ben Franklin had it right: those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety. The security of protecting one's ego from the fear of screwing up isn't worth it. It's too much of a sacrifice. I'm not willing to make it. I don't understand why so many other people are. We all admire the people that are secure enough in themselves to say "Yes, I made a mistake." Well, okay, I at least admire them. Nothing was more ludicrous to me than watching our president insist in a press conference that his administration had made no mistakes over the past three years. I guess there are those who gleefully pounce on other people's admissions of error and try to use such admissions to drive them into the ground as a way of distracting attention from their own shortcomings and insecurities. That could be a factor, I suppose. I wish we didn't live in a society that encouraged such gadflies. Anyway.

Having the freedom to make mistakes is a large portion of why I want to work at a small company in the future. Startups haven't had time to accrete all the gunk associated with process. Startups have to be more free form, because you don't have a margin for error. You can't afford dead weight in the form of process. It's less secure - each decision you make could affect the success of the project and therefore the company and your job. But it's more freedom. And that's a trade I will gladly take. I want to work in an environment where my manager trusts my judgment, and gives me the freedom to use it. I'll take my lumps when I make a mistake. It's worth it.

I feel like I have some grand vision lurking in the back of my brain of an idealized utopia. I catch glimpses of it in discussions like this one, where several disparate elements come together. A vision of a world which is ultimately human-centered. It's almost objectivist. A world where the freedom to exercise judgment is widespread, and those who make poor choices are called on it and suffer the consequences of their actions. Do I think it's realistic? Unfortunately, no. Too many people want to run away from responsibility, want to live a life of childhood where they are given what they want without having to work for it. They take praise and other gifts indiscriminately because they feel their very existence earns them such plaudits. I have a very different value system. I don't know how to reconcile the two. It may not be possible. The best I can do is try to run my life the way I feel it should be run.

posted at: 01:15 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Tue, 27 Jul 2004

Harshness
One of the things that always surprises me is how gentle other people are around each other, and how fragile some people's self-image is. There have been a couple of occasions over the past few months where I asked for somebody's opinion, and they prefaced their comments with "I know this is going to sound really harsh, but..." After hearing what they had to say, I didn't think their comments were harsh at all. I felt that they were an accurate description of what I was doing. They were a negative assessment, sure, but I'm self-aware enough to know that I'm not perfect, and in those cases in particular, I was aware of my suboptimal behavior. So that got me wondering what makes a comment harsh. Is every negative comment considered harsh? Do we live in a world where only positive feedback is desired?

I don't get it. I really don't. I want to be criticized. I want to find out what I'm doing wrong. How am I going to get better otherwise? I'm secure enough in myself to know that I do some things well, and some things poorly. I don't need continuous affirmation from others to make me feel good. I also tend to be pretty hard on myself, so I am generally not surprised when other people share their "harsh" assessments of me. If they criticize me unfairly, I can generally make a case for why I think they're wrong. If I can't, then it's time for me to do some deep thinking, and some self-analysis, to figure out why I can't convince them.

But most other people don't seem to be that way. They have some image of themselves that is unflinchingly positive. I guess. So any criticism is an attack on their whole image of themselves and must be fought with every fiber of their being. And they're so insecure that they want unabashed praise for everything they do to make themselves feel better. I think. Again, I don't really get it.

I'm going to continue along this line of reasoning to get myself really in trouble. Because you know what the next step is. If people are looking for uncritical praise and love, that's not something they are going to get from other people. Because, well, we all get annoyed with each other occasionally. So where do you get such a thing? Jesus. Religion, and Christianity particularly, is designed to fill this need for uncritical love. What are we told as children? Jesus loves you, no matter what you do. Jesus will always forgive you. No matter how much of a screwup you are, no matter what you do wrong, Jesus loves you. And that's a comforting, warm feeling. It's nice to think that there's somebody who's always on our side, who will always praise us.

But I feel that it's an empty sort of praise. As I noted in my review of Moral Politics, I hate being praised for things I don't do well. I don't want affirmation. I want praise when I do something well, and I want to know when I do something poorly. Some of this is my "Strict Father" upbringing (using Lakoff's terms), I suspect. Earned praise is really satisfying. Unearned praise feels like an insult. Self-satisfaction is something I don't understand. I'm always striving to improve, to get better, to learn more. I'm not always successful, of course. Sometimes I'm just lazy. But I don't ever get the feeling, "Wow, I'm great where I am. Nothing left to work on." I don't know if other people feel that way, or if that's my unkind projection. But it's certainly consistent with a naive reading of the New Testament.

Anyway. Rather than dig that hole any deeper, I'll move on. I've been struggling with this question of harshness because it's come up several times in my life recently, both at work and socially. I feel like I can't address the real issues, because people would react negatively to the criticism, and we'd never get to discuss the issues. But avoiding the issues and trying to apply band-aids to avoid giving out that criticism doesn't seem to help either. So I don't know what to do. I guess I need to learn to couch my criticism in a way that the recipient is going to be open to. That sort of finesse is definitely a skill I do not have. I can generally see the problems, but don't know how to handle the discussion of the problems in a useful manner. I don't know. I remain extremely thankful that I have found a group of friends with thick skins who are secure and self-confident enough that I can tell them the "harsh" assessments and have them help me figure out more diplomatic ways of handling the situation.

I'll end here with a great quote from Interface, by Stephen Bury (aka Neal Stephenson). Interface has some incredibly insightful ideas about how politics and media interact in this country. Or maybe they're not insightful - I just happen to agree with them. But it's fascinating to me how I can go back and read Interface and find the seeds of many of my political rants. So, without further ado, another quote expressing this idea about harshness far more succinctly than I managed:

This that I am saying to you is not abuse. It's the truth. It's just that sometimes the truth is so harsh that when people hear it spoken, it sounds like abuse. And one of the problems we got in this country is that everyone is so easy to offend nowadays that no one is willing to say the things that are true.
Heck, yeah. Let's address the real issues. Let's call things the way we see them. And if people are too insecure to deal with it, then they'll learn to deal. In the meantime, I'll be the one spouting harshness. Probably in this very forum...

posted at: 16:07 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sun, 18 Jul 2004

Social networks and rejection
I wanted to follow up on my recent post with more thoughts about social networks. I was thinking about the networks I'm part of, and how they interact, and realized that a component that is often missing from social software is the requirement for rejection. Nobody ever turns down a friend request in Orkut. But it's more than that. In real life, you can't just join a group by saying "I'm part of this group." It doesn't work that way, and we all know it. The group has to accept you, and start to include you, and think of you as a member of the group. Otherwise, you become that person that we have all met, the one yapping "Hey, guys, what are we doing?" as everybody else plots ways to ditch you. Sometimes it happens more subtly (folks "forget" to call you when they get together), sometimes it happens more explicitly (fraternities flushing pledges), but it's part of life as a social animal.

And despite a societal move towards equality for everybody, I don't think it's a bad thing that social groups are inherently discriminatory. We don't let everybody in. We choose who we want to associate with. If we didn't, then there would be no value to our groups. We feel that there is a certain character that is associated with our group, and those that don't fit in don't belong. I'm not advocating racism or sexism or any other form of bigotry here, but rather a recognition of the idea that we are all different and that we all have the desire to associate with those similar to us. Taken to extremes, it can be a bad thing, sure - cliques in high school can be painful, and discrimination can be taken to the point of the Ku Klux Klan. But a group that knows its own character and looks for similar folks, but not to the point of automatically rejecting all others? That seems to be a reasonable compromise.

This idea isn't as well formed as I thought it was when I started. I started thinking along these lines last weekend, when we had a reunion of many of my college friends. It was amazing to me that I was more comfortable talking to people who I've seen once or twice in the past ten years than I am talking to my coworkers who I see for hours each day. What does that say? It's also interesting that the vast majority of these friends were from my college fraternity, many of whom I never actually lived with - they had lived at TEP before I arrived, or after I graduated. I think this is probably an extension of the friends of friends idea. People that we like and are similar to tend to like similar people. So people that were chosen to live at TEP after me by people who were chosen by me tend to be people I like. But it's only weakly transitive - now that it's been almost three (four-year) generations since I lived there, our values have diverged.

To return to the point I started from, though, one of the reasons I think that TEP has a strong contingent of people that I feel close to is precisely because there is a formalized acceptance process for joining that group called Rush. The current brethren have to make a decision as to who to accept and who not to. And that process reinforces a certain character of the group, the qualities that they value, however elusive those qualities may be to define.

I'll have to come back to this at some point. I think that there's some value to be had here, in associating value of social networks with exclusivity, but I can't quite articulate it right now. Alas. I think it's related to some of the ideas being expressed in this post about friendship over at Many-to-Many. I'll keep on thinking about it...

posted at: 06:42 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Wed, 25 Feb 2004

Constructing the self story
I was talking to a friend last night who passed along an interesting observation to me: that people actively seek out evidence to support their worldviews. I've always believed that our perceptions color our view of the world in a passive way; that we see what we expect to see. Where one person sees the wonders of science and evolution, another sees evidence of the Grand Design of the Creator. But I hadn't ever really considered it something that people did actively. It's interesting because it raises the question of how one can adjust one's worldview to change one's life in a desired fashion. What does it even mean to try to support one's worldview?

In a totally separate conversation over AIM today, a friend and I were talking about the new Mel Gibson Passion movie, and he commented: "i'm really perplexed as to why people adamantly believe this is historically accurate". My off-the-cuff response was: "these are people who've never taken a real liberal arts class in their life, so they don't understand how history is constructed. history isn't a recitation of facts, it's a viewpoint - a construction of a narrative." I thought it was a pretty clever thing to say at the time, but that's it.

I later realized that these two separate quotes are conceptually linked. And the link is the idea of a self story, a narrative that we tell about ourselves. This idea of the self story is a large part of Orson Scott Card's work, and I have been attracted to it for many years. Card's view is that all of us have a vision of ourselves, one that we strive to support. We pick and choose pieces of our life to support that vision. An inescapable corollary of this idea is that nobody is evil in their own minds; they have constructed a self story where their actions make sense, no matter how inexplicable they are to the rest of the world (I allude to this in my rant about extremism). I've played with this idea in other forms before, but I want to return to it again in this forum and explore it a bit more.

Let's start with the history quote. There's a common saying that "History is written by the winners." This acknowledges that there is no such thing as an objective history. A recitation of facts is not history, despite the lesson plans of our middle school teachers. A historian generally has a theory in mind, a narrative that they are trying to support, and they go looking for evidence for that theory. This was something I didn't quite understand when I mused about history before, although I did apply the idea to literary criticism. But another quote about history also illustrates the point I'm trying to make: "Those who cannot remember the past are condemned to repeat it." Santayana was not saying that we would literally relive the past, of course. He was pointing out that by learning from the stories that came before, we can learn how to live better in the future.

It all comes back to stories. This is Card's view of the world - he believes that what separates us as humans from animals is our ability to tell stories, and our ability to incorporate stories and myths into ourselves and make them part of our self story. Put that way, it sounds a bit detached, but let's use the example of myths of America. One such myth is that promulgated by the NRA, that being an American is about being able to bear arms against our oppressors. Another is the nineteenth century doctrine of Manifest Destiny. People believe in these myths, take them to heart, and use them to define what it means to be an American. This is why one of the most powerful epithets that can be used in a debate is to be un-American, with the added confusion that the term means many different things, depending on which set of myths about America one subscribes to.

Getting back to the original discussion, what does it mean to actively seek out evidence to support one's worldview? It means living our life in such a way as to support our self story, our ongoing narrative of who we are. If we think of ourselves as socially awkward, we will throw ourselves at difficult social situations, fail and then justify the failure by saying that it's just who we are. If we think of ourselves as having bad luck, we will find a way to interpret events in such a way as to support that. My friend even posited cases like having a belief that all cars fall apart, and then driving one's car into the ground to prove it. The point I'm trying to make is that we live our lives in accordance with our self-constructed narrative.

How do you change that narrative? If it's self-constructed, why is it so hard to change one's outlook? Why shouldn't I be able to say "Poof! I'm more sociable!" I think that this can be attributed to lack of knowledge, habit and fear. Lack of knowledge, in that it's hard to realize that one can take better control of one's life. Card's work also expresses this idea; in Speaker for the Dead, a character says "We [humans] question all our beliefs, except for the ones we really believe, and those we never think to question." If you have always assumed that you are a certain way, you never think to question it. Habit, because once you get used to doing things a certain way, it's hard to change. You settle into a routine. And fear. Fear is the toughest one. What we're talking about here is altering the self story. This strikes at the very core of who we are. We are our self story. So changing that means changing who we are at a fundamental level. This is justifiably scary - who are we if we're not ourselves?

So it's hard. I think there's hope of doing it. But reconstructing one's narrative in such a way as to make the change one wants without affecting how it integrates into the rest of one's worldview is tricky to say the least. But taking control of one's life can be empowering. Many works of fiction that I like explore this idea, with a notable one being V for Vendetta. Lois McMaster Bujold has a great quote along these lines from Countess Vorkosigan - something like (terribly paraphrased because I can't find the quote right now) "If one accepts the consequences of one's actions, then the corollary is if one desires some consequences, one better start taking action in such a way as to make those consequences happen."

Anyway. This has degenerated into even less coherence than usual. I'll pick up another time with a narrative-centric viewpoint of the world, applying the idea of narrative construction to everything from marketing to ourselves to government.

posted at: 16:22 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Wed, 18 Feb 2004

The perception of safety
I really liked Malcolm Gladwell's book, The Tipping Point. When a friend of mine lent me an article that Gladwell had written in the New Yorker, I went looking for it on the web and found that Gladwell publishes all of his articles on his website (that article, entitled "The Talent Myth: Are smart people overrated?" is one I want to discuss at some point as well). But the article I want to talk about today is his article about SUVs, which I saw a link to this morning.

I think the most interesting thing about the article is the thesis Gladwell develops that people buy SUVs for the feeling of safety rather than for actual safety. He goes to the testing grounds of Consumer Reports to compare a Chevy TrailBlazer with a Porsche Boxster, and finds that the TrailBlazer handles terribly (duh), meaning that a driver would be far less able to avoid collisions.

Most of us think that S.U.V.s are much safer than sports cars. If you asked the young parents of America whether they would rather strap their infant child in the back seat of the TrailBlazer or the passenger seat of the Boxster, they would choose the TrailBlazer. We feel that way because in the TrailBlazer our chances of surviving a collision with a hypothetical tractor-trailer in the other lane are greater than they are in the Porsche. What we forget, though, is that in the TrailBlazer you're also much more likely to hit the tractor-trailer because you can't get out of the way in time. In the parlance of the automobile world, the TrailBlazer is better at "passive safety." The Boxster is better when it comes to "active safety," which is every bit as important.
It reminds me of an example of language usage that I wanted to include in yesterday's rant. When I was in driver's education in high school, our instructor was adamant about using the term collision instead of accident. Accident implies something that is out of one's control, something that just happens. Collision implies an actor, one who causes the collision. His point was that accidents/collisions don't just happen; they can be prevented, and an alert and safe driver will be able to avoid them, or at least drastically reduce their incidence. The use of the term "accident" in some ways contributes to a culture of learned helplessness, where things just happen.

Gladwell points out that this culture contributes to the success of SUVs. If you will inevitably get in a collision, if you have no control, then the only thing you can do is armor yourself up as best you can, which is why people buy SUVs, even though their reputation for safety has no basis in reality. In fact, they are generally less safe, because SUVs are regulated as trucks and don't have to meet normal auto safety standards:

In a thirty-five-m.p.h. crash test, for instance, the driver of a Cadillac Escalade--the G.M. counterpart to the Lincoln Navigator--has a sixteen-per-cent chance of a life-threatening head injury, a twenty-per-cent chance of a life-threatening chest injury, and a thirty-five-per-cent chance of a leg injury. The same numbers in a Ford Windstar minivan--a vehicle engineered from the ground up, as opposed to simply being bolted onto a pickup-truck frame--are, respectively, two per cent, four per cent, and one per cent.
I find the whole culture of helplessness a peculiar one. America has such a culture of fear that we are terrified of anything that might be out of our control. Gladwell mentions the Firestone tire recall. The actual number of fatalities associated with that defect was minuscule compared to the number of traffic collisions in a given year. But because it disturbed people's sense of control over their own safety, it cost Firestone millions of dollars. 9/11 is a similar story. Approximately 3000 people died on that day. A terrible tragedy indeed. However, according to the FBI, 16,037 people were murdered in the U.S. in 2001. Even if you count only the ones in September, that's about 1300. I think a murder in any form is just as senseless as a terrorist attack. But Americans were so terrified by the uncontrolled aspect of the attacks that we're still living in a security state two years later. It doesn't make sense.

The part that's really weird is that we live in terror of uncontrolled intrusions into our safety, but don't want to take control of the aspects of our lives that we can control. We want to be given an utterly safe world, which is an impossibility (side note: Gladwell has a great line about the perceived safety of SUVs: "And what was the key element of safety when you were a child? It was that your mother fed you, and there was warm liquid. That's why cupholders are absolutely crucial for safety. If there is a car that has no cupholder, it is not safe."). Americans seem to have given up the idea of individual responsibility in this age of litigation. And yet, the phrase "individual responsibility", when applied to welfare moms, is a big winner on the conservative campaign trail. The whole hypocrisy of America drives me nuts.

Anyway, I'm beating this topic into the ground since most of my readers probably agree with me. Neat article. Read some of Gladwell's other articles if you have time.

posted at: 00:33 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal

Sun, 11 Jan 2004

I was reading an article by Phil Agre today, and was struck by a thought that I wanted to develop a little bit more. So I'm writing about it. One of the things I like about Agre's work is that he is not a technological determinist; he believes that the use of technology is dependent on the community in which it is embedded. If the culture does not have a place for the technology, it will not get used, no matter how many whiz-bang features it has. He made a comment in this particular paper that, referring to organizations such as the Christian Coalition, "Their hopes for community are resolutely geographical, and they are indifferent to attempts to recover a nostalgic sense of community online."

The idea of community being resolutely geographical is understandable from a conservative (in the sense of those who want things to stay the same) viewpoint. As little as fifty years ago, if you grew up in a town, chances were that you would return there, work there, and raise your own family there. The community had a continuity which extended across generations. And that's a comforting feeling of belonging. However, such a community serves its misfits poorly. If somebody didn't mesh with the community for whatever reason, they were doomed to a lifetime of ostracism, except for the few who were able to run away or find other options.

The issue has personal relevance to me, since I grew up in a town where I never quite fit in. I always wondered why I was so weird (a question, mind you, that still hasn't been answered). Fortunately, I went off to Boston for college and met a group of wacky folks with whom I fit in immediately, and realized I had just been in the wrong place before. And the internet has let me stay in touch with those folks over the past 14 years, even though we're now spread all across the country. They are my hometown community in a way that the place where I grew up could never have been. Heck, it has the advantage that I can drive all the way across the country without having to stay in a hotel.

This opens up the discussion of what truly makes a community a community. Is it merely the people I happen to live near? Is it those with whom I share something in common? Of course, before the rise of cheap communication over the last twenty years or so, this distinction was irrelevant. The idea of being able to form a community with people who were not geographically co-located with you was laughable. But with email and cheap long distance and even cheap airfares, the idea is no longer ridiculous. In fact, we see it every day with chat rooms, email lists, journal clubs and the like. People meet each other over the internet, and only later meet in person.

But even while these technologies make it easier to support a geographically distributed community, people are redistributing themselves to co-locate their ideological community with their geographic community. These technologies make it easier than ever to find where "people like you" are, as well as providing connections that can be drawn upon to make it easier to move there. So people move to where they feel comfortable.

A few years ago, I went to my ten year high school reunion. I hadn't seen anybody from my high school for about nine of those years, since I spent all of my summers at MIT, and later Stanford. It was great to see everybody again, and find out how they'd changed over that time. I was surprised to find out that almost everybody had stayed in or returned to Illinois. Most of them still lived in the Chicagoland area. Some had made it as far as Iowa or Pittsburgh. They felt more comfortable in such places with "Midwestern" values. It was almost alien to me, as somebody who had left there as soon as I could, and never even considered moving back. Different cultures, different places.

We can choose which communities we want to be part of with greater freedom than ever before. With the Internet and cable and all sorts of radio stations, we can find people who share our opinions. It's becoming more acceptable to first find a place that suits you, and then find a job, as discussed in The Rise of the Creative Class, so we are clumping into physical locations of like-minded people. It all ties together. In my rant about political extremism, I commented on the dangers of seeing only your side of the story. But it is now possible to surround yourself with like-minded people to an unprecedented extent with your choice of home and your choice of media to consume. In fact, it's preferable, because nobody likes being different or an outcast. So us left coast liberals shake our heads and wonder how anybody could possibly vote for a moron like Bush, while folks in the Midwest wonder why those liberals don't see the threat to our national security.

Hrm. How'd I end up ranting about politics again? Anyway. I just thought it was interesting that new technologies and cultural changes have created a nation which is slowly splitting itself apart into communities of choice. The communities you choose to be part of define who you are. And with us each having a choice of a greater number of communities than ever before, the possibilities of who you can be are endless. Now if we could only figure out how to keep communicating with those outside of our circle. But that's a topic for another rant...

posted at: 14:20 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal