Who am I?

You can look at my home page for more information, but the short answer is that I'm a dilettante who likes thinking about a variety of subjects. I like to think of myself as a systems-level thinker, more concerned with the big picture than with the details. Current interests include politics, community formation, and social interface design. Plus books, of course.

Recent posts

Sat, 01 May 2004

Jaron Lanier
I've been a big fan of Jaron Lanier since I first heard him talk several years ago at Stanford. So when I read that he was going to be speaking at the Bay Area Future Salon, I made sure to be there. Really interesting stuff. I'll try to preserve the flavor of the talk by recounting it in chronological order, at least according to my notes, but he skipped around a lot, so apologies for any confusion. Plus, as usual, I won't be able to resist editorializing.

One of the things I like a lot about Lanier is that he's a great public speaker, willing and able to adapt his talk on the fly to his audience. He did that again for this talk, starting out by "exploring the talk space": describing several different sorts of talks he could give and ideas he'd been exploring. The talk was nominally about the Singularity, an idea that Vernor Vinge has been promoting for about ten years: the rate of change, especially technological change, keeps accelerating; therefore, at some point, the rate of change will essentially go vertical, and we will no longer be able to predict past that point. Vinge associates this with the moment when we create self-programming, self-aware computers (think the world of Terminator 2 for a pop culture reference). Others associate it with the moment nanotechnology produces the first molecular self-assemblers. In any case, it's been a powerful meme, one taken up by Ray Kurzweil, among others. Jaron Lanier composed his response to this "cybernetic totalism" in his "One Half of a Manifesto". This talk was aimed at conveying ideas from the other, uncompleted half of that manifesto.

However, when he surveyed the room and realized that most people there didn't really believe in the Singularity (my comment was that since technology has to be embedded in social contexts to be adopted, and social institutions adapt more slowly than technology does, I wasn't particularly worried), he skipped the anti-Singularity portion of his talk, except to note that software has become the anti-Moore's Law. He showed a bunch of interesting graphs, allegedly from NIST, showing that software is taking longer to develop, is much slower to run, and is becoming harder to manage (one great graph illustrated that the gap between estimated and actual completion times for software projects was growing exponentially). So he's not worried about software taking over the world no matter how far Moore's Law takes us, because, well, we suck at software.

He then went on to throw some other ideas out there. One question he raised was "How do people use theories of the future to inform the present?", a question touched on by Peter Schwartz when I saw him talk, and one that informs the whole concept of scenario planning. So he used the rest of the talk to lay out his own theory of the future, one that was, as he put it, somewhere "between truth and bullshit", hopefully hitting "the sweet spot of utility".

He started by discussing the idea of different ramps. A lot of engineers and scientists like what he called the ramp of technological progress, the idea that we are continuously increasing our technological power and our understanding of the world. We are on a one-way ride to a future powered by technology. And things like Moore's Law and the idea of the Singularity seem to reinforce the sense that there's no getting off this ramp and that the ride is inevitable. But Lanier has issues with developing technology for technology's sake. I'll get to those in a second, in an attempt to follow the way the talk actually went.

So if we don't believe in the technology ramp, what should we believe in? Well, Lanier pointed out that some people believe in a ramp of moral improvement, where we are all becoming better people and will eventually all be paragons like Martin Luther King or Gandhi. As he noted, there _may_ be continuous progress along those lines, but it's hard to see, because there's a lot of noise in that system, and people today are still doing awful things to each other.

After rejecting that ramp, Lanier proposed a McLuhan-esque ramp, one based around media and ever-increasing interpersonal connection. This ramp roughly runs from grunting to language to the arts to writing to printing to today's computer-mediated communication, and beyond to things like virtual reality, of which Lanier is a big proponent. But Lanier immediately asked the question: what are the job requirements of a ramp? If a ramp comes up to us and applies for a job as our dominant paradigm, what should we be looking for? This is a meaningful question, since, as he said at the beginning, theories of the future determine how we act in the present, so choosing the right ramp can be of great importance.

The first criterion he proposed is that the ramp must allow us to talk about the future in a way that is not immediately self-destructive. He started with the moral ramp, because he had the most snide comments about it. One of the problems with the moral ramp is that every moral system developed so far that gave a strong enough sense of identity to provide guidance also created a sense of exclusion, which immediately leads to problems. As soon as some people are on the outside, they want to fight to get in, and thus the system is empirically self-destructive. He does concede that there is definite progress on the ramp of morality; if we look at the past, we see that things have been much worse. But there's definitely still a lot of work to be done, and going around trumpeting one's own superiority is probably not the way to do it.

We took a detour here (which is a good thing - Lanier's digressions are more interesting than most people's entire talks). I'm not sure whether these are my comments or his, but improving technology has actually enabled the moral ramp; because we have been brought closer together, by both improved transportation and improved communication, we are more likely to treat each other as people rather than as disembodied enemies. My notes also have a side comment of his about the "Marin approach", where people check out and do their own thing, which he derides because it doesn't accomplish anything and is narcissistic.

Back to the ramp discussion, and on to the technology ramp. He pointed out that our situation on this planet is very much like that of a group of extremely clever, bored teenage hackers trapped in a guest house equipped with a chemistry lab, all sorts of electronics, etc. What is the inevitable result? The house blows up. "Boredom is the most powerful force in the universe to smart people." and "Armageddon is a disease of young men." Because of that, there is an inherent academic desire for stasis, for completeness, where everything just stays the same (I'd say this is closely related to the conservative movement's desire to turn back the clock to the happy days of the 50s). But it's impossible. As we all grow more connected in an attempt to create wealth, we are also creating hazards. Most technology is value-neutral; it can be used for constructive or destructive ends. So the technology ramp is also the ramp of being able to do ever more damage to each other. He detoured again into the history of arms races, starting with the Greeks, who realized they could overcome being outnumbered by fighting as a synchronized unit in a phalanx. They kicked off an organizational-theory arms race that basically hit its own singularity when the atomic bomb was created, whereby everybody dies if a war happens. But because the technology ramp enables and encourages increasing technology without bounds, he feels that it will inevitably lead towards self-destruction as well, since if the technology exists, he thinks people will be unable to resist using it.

So he dismissed both the moral and the technology ramp because they were inherently self-destructive. But he proposed some other criteria for a good ramp. One quality is that it needs to be challenging for smart people; again, smart people get bored easily - they need to be engaged, or they'll go off and blow things up. It needs to be fascinating in a non-destructive fashion. It needs to sustain interest in technology without the accompanying tendency to kill ourselves. He believes that we are at the stage where we need to turn away from the technology ramp and toward the McLuhan-esque ramp. He feels that we are not at the edge of a technological transition, as the proponents of the Singularity would have you believe, but at the edge of a cultural one.

Then he got really distracted for a bit. He discussed the cephalopods that he brought up in the talk I saw several years ago, except that this time he had a video of the mimic octopus, which is just mind-blowing. Nobody in the room believed it was real - they all thought it was computer animation. The octopus can both control its skin color on a pixel-by-pixel basis and distort its shape using muscles under the skin, mimicking its background incredibly well. The video clip in question had a scuba diver sneak up on one that was hiding in a bush, and we were all shocked when it suddenly appeared and swam away. Even viewed frame by frame. Very neat stuff. Roger Hanlon from Woods Hole is the marine biologist he referenced, but I haven't had a chance to follow up.

He then started talking about early childhood and the confusion of reality and fantasy that is part of it, where anything you imagine pops into being, making the child basically a god. If a kid imagines a mile-high tarantula made of amethyst crawling around Palo Alto, they can see it there in their mind's eye immediately. If somebody actually wanted to create such a thing, it could never happen. So the moment when kids realize that the world is not theirs to control, that they are not gods, is a moment of leaving childhood. The real world has its benefits, like companionship and other people, but it's still quite a loss. But it immediately raises the question for the child: what parts of the real world _can_ be controlled?

Then he got distracted again by his continuing interest in post-symbolic communication, which he also referenced in that earlier talk. The idea is that "Symbolism is a speed hack". If I actually wanted to show you a mile-high amethyst tarantula over Palo Alto, it would take forever to construct it. But if I use symbols to represent it, you can imagine it without me having to do that work. It's easier and faster. The natural follow-up is: what would our lives be like if symbols weren't necessary, if creating things were as easy as thinking them? How would that be possible? Given his history with virtual reality, Lanier's answer isn't surprising. If children grew up in a world of virtual reality, where imagining made things real that they could share and show to others, they would develop a completely different language from the one we have based on symbols.

He tied that idea back into the cephalopods. Given their ability to mimic things, they could use their morphing abilities to communicate. Rather than use symbols to represent an object, they could just turn themselves into it. They don't appear to do that, but that's probably a good thing. If they developed language and the ability to transfer information between generations, Lanier feels that they would rival humans in their mastery of the world. In fact, he made the analogy that "humans plus virtual reality equals cephalopods plus culture". An interesting thought to be sure. And the idea that our childhood matters, since that is when we are brought up to speed on a whole host of accumulated learning in the form of culture, is pretty compelling.

Lanier started talking a little bit about his previous work in virtual reality, from way back in the early days of the 80s, when everything was incredibly clunky. He said there was some really interesting work done with avatars, where people given distorted bodies adapted incredibly fast, whether they were given really long legs, huge hands, or whatever. In fact, if you gave them a completely different body, such as a lobster body with too many limbs (the middle limbs of the avatar were controlled by a combination of trunk and hand movements), people were still able to figure it out quickly. Somebody in the audience described a similar experience of his own in real life, when he learned to drive a forklift; even though it had five levers and all sorts of pedals, by the end of the day, he was just lifting things without thinking about the controls.

I'm going to go off on my own here, because I didn't get a chance to mention this at the talk, but I'm pretty sure that the key here is feedback. If you provide consistent, useful feedback to a human, they can figure out how to do pretty much anything, whether it's driving a forklift, controlling an avatar, or doing a handstand. This is why current software is so terrible; it provides inconsistent, useless feedback, so it's impossible to figure out what's going on. The editor vi is a great example; it's apparently incredibly powerful once you learn to use it, but if you're thrown into it cold, it's bewildering. The behavior of keys changes depending on what mode you're in, or what order you hit them in. Terrible feedback, making it impossible to learn naturally. In a virtual world, providing useful feedback is critical to engaging the learning sections of our brain. In the real world, there are all sorts of constraints and ongoing feedback from our bodies that our brains integrate in order to learn. Driving a car is a perfect example - you see how the car changes position in the lane depending on how you move the steering wheel, and that's it - people learn how to drive pretty much instantaneously. And yet, put the same people in front of a driving video game and they do much worse, because the connection between their controls and the feedback they get is minimal, making it much harder for them to adjust. Anyway. Back to Lanier's talk.

Lanier went on a rant about how believing in strong artificial intelligence correlates with designing terrible software. Designing for AI is designing for the computer, not for the person, and therefore such software has horrible usability. He said that the Turing Test was a terrible test for AI because you can't tell the difference between the computer getting smarter and the human getting dumber to compensate. It's not empirical. His example was credit ratings (the same one he used in his previous talk), where credit companies have these really dumb algorithms that determine your rating and allegedly predict your creditworthiness. In response, sensible people do dumb things, like borrowing money they don't need, to increase their credit rating. He also referred to Ask Jeeves vs. Google in the search engine wars. Ask Jeeves tried to put the intelligence in the server. Google made the server side simple and relied on people to do its work for it.

He related a cute anecdote about a meeting back in the 50s between Marvin Minsky and Doug Engelbart. Minsky was so excited, talking about all the ways he was going to make computers better and smarter. Engelbart said "You'll do all that for a machine, but you won't do anything for people?" Engelbart, of course, went on to do the famous 1968 demo, which basically demonstrated every user interface technology that has been used since, including the mouse, windows, file systems, etc.

Lanier then implored the audience, a majority of whom were software engineers, to design for the user. Design for people. Join the McLuhan ramp, oriented towards people, rather than the technology ramp, oriented towards the machine. He pointed out the benefits: that the results would be empirical, because you can measure the happiness of the user; that it's just darn cool and intense, because helping other people is evolutionarily bred into us; and that it's simply better. And I actually piped up here to argue with him. While I totally agree that software should be user-oriented, I thought it was unrealistic for Lanier to try to sweep the culture away like that. For one thing, his definition of empirical is pretty weak. Most software engineers prefer their metrics of algorithmic efficiency, cycle time, and the like to the messiness of dealing with real, breathing humans who don't think the way you want them to. Lanier responded that those were "fake metrics" that don't measure anything useful - you could develop software that met those metrics and still not produce something usable.

Again, I totally agree with him, as my previous post illustrates, but I think it's hard to change the culture. As I pointed out, "That's nice, but the people doing the hiring of software engineers like those fake metrics and will hire people that fill them", which was the basis of my complaint in that post. I kind of suspect that Lanier has the opinion that I once heard from Richard Stallman, when he advocated that all software engineers should only work for companies that had open source code. When somebody asked him what they should do if they were at a company that didn't, Stallman told them to quit, because if they were any good, they should be able to dictate the terms on which they work. And while that may work for the geniuses like Stallman and Lanier, for the rest of us trying to pay our rent and the like, it's not so easy.

But I do agree with Lanier on the intensity and pleasure of a user-centered engineering effort. The first time I built something as a consultant and the customer turned to me and said "Wow. This is so cool! I could never do this before!", I was hooked. It was so much more satisfying to me than the ineffable rewards of academia that my graduate school career tried to promote. And I have run my career that way ever since. But I'm not sure I'm hireable. I don't have the right skill sets. I don't care about the right things. I don't write code in the most efficient way, so when I get the standard interview question to "write a function that reverses a string", I do it the non-optimal way and lose points. Anyway. Not that I'm bitter or anything.
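
For what it's worth, here's a minimal sketch of the contrast I mean, in Python (the interview didn't specify a language or what the "optimal" answer was supposed to be, so this is purely illustrative): the obvious, readable version versus the in-place character-swapping version that interview rubrics tend to reward.

    # Hypothetical illustration; neither the language nor the "right" answer comes from the talk.

    def reverse_simple(s: str) -> str:
        # The obvious, readable answer: slice the string backwards.
        return s[::-1]

    def reverse_swapping(s: str) -> str:
        # The "optimal-looking" interview answer: swap characters inward from both ends.
        # Python strings are immutable, so we operate on a list of characters.
        chars = list(s)
        i, j = 0, len(chars) - 1
        while i < j:
            chars[i], chars[j] = chars[j], chars[i]
            i += 1
            j -= 1
        return "".join(chars)

    # Both produce the same result; the difference is which kind of cleverness gets rewarded.
    assert reverse_simple("lanier") == reverse_swapping("lanier") == "reinal"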

Lanier, as usual, had a bunch of interesting ideas. I'm not sure I agree with all of them, but they're thought-provoking, and they tie into a lot of my own thoughts. If you get the chance to hear him speak, I'd highly recommend it.

Most of the rest of my notes are one-liners:



posted at: 05:46 by Eric Nehrlich | path: /journal/events | permanent link to this entry | Comment on livejournal