Truth vs. Context
Beemer had an interesting response to my last post, which he called Truth vs. Context, and I'm going to steal that for the title of this post. As a warning, I have a ton of ground to cover, and this entry is probably going to span at least four posts, if not more. Just the notes I scribbled out and emailed to myself were over a page. God help us all.
I agree with Beemer that there is an objective physical reality. I mean, I was trained in physics: how could I disagree with that statement? After all, somebody once quipped "Reality is that which, when you push against it, it pushes back." (If anybody knows who said that, please let me know; I'm clearly paraphrasing, because I couldn't find it on Google.) However, I don't think that helps us in this case. In fact, modern physics actually demonstrates the importance of context. Not only that, but philosophers of science such as Kuhn and Latour have demonstrated that even "objective" science has a large degree of subjectivity in it, in the form of paradigms and black boxes that nobody questions.
But objective physical reality only goes so far. When I drop an object, it falls to the ground. I can repeat that experiment over and over again and be assured of getting the same result. However, the same is decidedly not true of social interactions. For instance, if I asked somebody "Do you want $2 or $0?", you would think the answer would always be "$2" (I just heard an echo of "I want my $2" in my head). But it's not. It depends on the context.
In fact, history demonstrates that there are virtually no impossibilities when it comes to social interactions. We've tried it all. Dictatorships, democracies, anarchies. Cannibalism. Matriarchies, patriarchies, hierarchies. For any rule that we think we can put our finger on and claim is universal, there has probably been a society somewhere in history that did the opposite. Anywhere I go on this planet, I'm pretty well assured that if I hold a book three feet off the floor and let go, it's going to drop to the floor. I have no such assurance about language. Or customs (is it polite to belch?). Or hand gestures (do you know the equivalent of the middle finger in Korea?). In all of these things, context matters.
Given the fundamental relativity of such things, there looms a larger question: given two competing social contexts, how does one decide which one is "better"? In an Enlightenment universe, reason would determine everything, but I think that reason is fundamentally limited here: reason is a tool, and it cannot determine overall goals. To give credit where it's due, many of these thoughts were instigated by the discussion over at Dave Policar's journal, particularly his comment trying to reconcile opposing concepts of how things should work. This whole post is essentially an attempt to examine some different ways of reconciling such opposing concepts, in part by evaluating the contexts in which they make sense. The separation between goals and execution, which bears directly on the point I'm making, was also articulated in that discussion. I believe that reason is a good tool with which to evaluate alternative execution strategies. However, it's unclear to me that it can similarly be used to evaluate social goals.
This also explains the schism I mentioned in my last post between the Postmodernist Left and the Enlightenment Left. They are covering two separate areas. The Enlightenment Left covers the physical world. The Postmodernist Left covers the social world. And the tools appropriate for one world do not transfer easily to the other. It's "Truth vs. Context", the title of this post.
So, given two competing social systems, two opposing contexts, how do we choose one? For instance, how can we decide between the Strict Father and Nurturant Parent models of Lakoff's Moral Politics? Lakoff takes a stab at resolving that at the end of the book, but he essentially just puts down his opinions to decide in favor of his progressive politics.
I'm not sure there is a way. Any metric we choose to decide between them can be dismissed as biased because everything in the social universe is biased by definition by the chosen context. I've struggled with this question before. The conclusion I came to that time (also with Beemer's help) was that perhaps it could be demonstrated that "Good" systems reinforce themselves, whereas "Evil" systems eventually annihilate themselves. In other words, "Good" systems are infinitely sustainable and create a virtuous circle. How one goes about showing that is a really good question.
I was going to go on and start trying to apply some of these ideas to ethical systems, but I've been writing here for about two hours, so I think I'm done for the evening. I'll try to get back to it tomorrow. The number of branches of investigation available along these lines is dizzying; hence the four or five emails I sent to myself today with different paths to explore. Or I may get bored with it and go explore something else entirely.
posted at: 22:52 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal
Context in modern physics
I mentioned that modern physics actually demonstrates the importance of context. This is a total aside, which is why I'm putting it in a separate post (think of this as a long DFW-esque footnote), but while I was contrasting the "objective physical reality" with the contextual social world, I realized that the objective physical reality is a thing of the past. In an earlier post, I mentioned the "Enlightenment Left", which believes in the preeminence of reason and the ability of logic to conquer all. They envisioned a clockwork universe, set in motion by a Prime Mover, and following Newton's Laws throughout time, where if one knew the positions of every particle in the universe at a given time, and had enough computing power, you could then predict the position of every particle until the end of time.
However, we now know that is not possible. Chaos/complexity theory has demonstrated the extreme sensitivity of systems to initial conditions, most famously illustrated by the butterfly effect, where a butterfly flapping its wings could cause a storm halfway around the world. In other words, to stretch a metaphor too far, chaos theory demonstrates the importance of context (initial conditions) in even something as prosaic as simple classical mechanics.
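To make that sensitivity concrete, here's a tiny sketch using the logistic map, a standard toy example from chaos theory (my own illustration, not something from the original discussion): two starting points that differ by a billionth end up in completely different places after a few dozen iterations.

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x),
# which is chaotic at r = 4. Two trajectories that start a billionth
# apart diverge until they bear no resemblance to each other.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)
print(f"step 0 difference:  {abs(a[0] - b[0]):.1e}")   # one part in a billion
print(f"step 50 difference: {abs(a[50] - b[50]):.3f}")  # order-one difference
```

The "context" (initial condition) that seemed negligible dominates the long-run behavior, which is exactly why the clockwork-universe prediction scheme fails in practice.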
Quantum mechanics is another area of modern physics that can be construed as demonstrating the importance of context. The two slit experiment is a good example. The photons somehow "know" whether the other slit is open or not, and decide whether or not to create the interference pattern based on that information. You can come up with all of the "probability wave" explanations you want; it's still spooky and counter-intuitive. And I won't even get into the EPR paradox and entanglement, mostly because I don't really understand either. But it all points to the futility of trying to analyze a system in isolation, without knowing everything else it is interacting with: its context.
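The spooky part can actually be stated in one line of arithmetic. Here's a bare-bones amplitude calculation (a deliberately oversimplified sketch, ignoring all the real optics; the phases are chosen arbitrarily for illustration): with both slits open you add amplitudes and then square, rather than adding probabilities, and the difference is the interference.

```python
import cmath

# Complex amplitudes for a photon reaching one detector position via
# slit 1 and via slit 2. The second path is half a wavelength longer,
# so its amplitude picks up a phase of pi.
a1 = cmath.exp(1j * 0.0)
a2 = cmath.exp(1j * cmath.pi)

classical = abs(a1)**2 + abs(a2)**2  # add probabilities: 2.0
quantum = abs(a1 + a2)**2            # add amplitudes first: ~0.0

print(classical, quantum)
```

If each slit contributed independently, this spot on the screen would be bright; because the amplitudes interfere, it's dark. Close one slit and the dark spot lights up, which is the sense in which the photon "knows" about the other slit.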
posted at: 22:44 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal
The Ultimatum Game
I mentioned that there would be cases when people would answer the question "Do you want $2 or $0?" with "$0". This is what actually happens in the Ultimatum Game, described here, with references. The basic idea is that there are two players, asked to split up a pot of money, say $10. The first player proposes a split between the two players. The second player can either accept the split as offered, or reject it, in which case neither player gets anything. In the isolated case of "Do you want $2 or $0?", a responder would almost always take the $2. But if the responder is playing the Ultimatum Game, and knows that taking the $2 means the first player gets $8, half of all responders reject the $2 and take nothing.
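The mechanics of a single round are simple enough to sketch in a few lines (the function and names here are my own, not from the paper):

```python
def ultimatum_round(pot, offer, accepts):
    """One round of the Ultimatum Game.

    The proposer offers `offer` out of `pot` to the responder; `accepts`
    is the responder's decision. Returns (proposer_payoff, responder_payoff).
    """
    if accepts:
        return pot - offer, offer
    return 0, 0  # rejection: neither player gets anything

# The $2-or-$0 scenario from the post: an $8/$2 split of a $10 pot.
print(ultimatum_round(10, 2, accepts=True))   # (8, 2): responder takes the $2
print(ultimatum_round(10, 2, accepts=False))  # (0, 0): spite, both get nothing
```

Notice that in isolation the responder's choice is strictly between $2 and $0; what half of responders are rejecting is not the $2 itself but the $8/$2 context it comes in.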
This is a fascinating result to me. It demonstrates the all-encompassing importance of context. In an isolated context, people answer one way. In a social context, they answer differently, because feelings of fairness are brought into play. In fact, one study used fMRI scanning to demonstrate that unfair offers activated a part of the brain associated with negative emotions, including, one would assume, spite. The paper goes on to point out that the fMRI results demonstrated a conflict between "the emotional goal of resisting unfairness and the cognitive goal of accumulating money."
One might wonder where humans have learned what "fairness" is, and why it is built into our brain chemistry. This paper gives some insight into how such an instinct evolved. In it, they run computer simulations and demonstrate that the fairness instinct can evolve in the Ultimatum Game if participants are given a history. If it were a one-off game, the first player would always make the split uneven, and the second player would decide that something is better than nothing. However, if there are repeated iterations, the second player can spite the first player by holding out for a "fair" split, and enhance the likelihood of getting a better deal in the future. In other words, fairness only matters when you are likely to interact with the same people repeatedly - "When reputation is included in the Ultimatum Game, adaptation favors fairness over reason. In this most elementary game, information on the co-player fosters the emergence of strategies that are nonrational, but promote economic exchange." And the MRI studies demonstrate that such strategies, such feelings of fairness, are actually built into our brain chemistry.
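The core intuition can be shown with a drastically simplified toy model (my own illustration, much cruder than the paper's actual evolutionary simulation). Assume, purely for the sketch, that rejecting one lowball offer convinces the proposer to offer fair splits for the rest of the game. Then accepting $2 every round dominates in a one-off game, but spite pays for itself as soon as there's a second round.

```python
# Toy model of why spite can pay in a repeated Ultimatum Game.
# Assumption (mine, not the paper's): rejecting one lowball offer
# convinces the proposer to offer fair splits in all later rounds.

def always_accept(rounds, low=2, fair=5):
    # Responder swallows the unfair $2 offer every round.
    return low * rounds

def reject_once(rounds, low=2, fair=5):
    # Responder rejects round 1 (earning $0), then receives fair offers.
    return 0 + fair * (rounds - 1)

for rounds in (1, 2, 5):
    print(rounds, always_accept(rounds), reject_once(rounds))
```

With one round, rejecting earns $0 against $2; with two rounds it's $5 against $4; with five, $20 against $10. Reputation flips the rational strategy, which is the sense in which "adaptation favors fairness over reason."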
This leads to an important result in my mind. Because we are primates, and prisoners of the monkeymind, things like fairness and social justice only matter when we are dealing with those within our monkeyverse: people within our social universe whom we are likely to run into again, even if they are only Familiar Strangers. We don't rip off the guy at the corner convenience store because we stop in there regularly. We pay our fair share at dinners with our friends because we know we will be going out to dinner with them again.
However, if we are dealing with strangers, with people we don't feel are part of our world and with whom we will not have to interact again, then all the rules of fairness go out the window. We return to pure self-interest. It's like a one-off round of the Ultimatum Game. We feel fine cheating people we don't know because, in an emotional sense, they aren't people to us. They don't evoke our rules of fairness. They are objects in the world, to be used and disposed of.
How do we expand our monkeyverses, keeping us from doing stupid things like stealing from strangers, committing hate crimes, and invading foreign countries? My answer is probably not surprising: we use stories as a way of giving us the details about other people that change them from cardboard cutouts into people. By turning them into real three-dimensional people, stories can activate our monkeybrain and all of the accompanying emotions of fairness and guilt. Such emotions leverage the way our social brains have evolved, hopefully getting us to treat each other better. It's a theory. And one I'll probably return to at some point.
posted at: 22:17 by Eric Nehrlich | path: /rants/people | permanent link to this entry | Comment on livejournal