Archive for the ‘cognition’ Category

Convergence08

Monday, November 17th, 2008

Over the weekend, I attended the Convergence08 unconference, which focused on future technologies like biotech, nanotech, artificial intelligence, etc. I had to miss the Saturday morning sessions, as I had a chorus rehearsal for this week’s Mahler concerts, but I was there on Saturday afternoon and most of the day Sunday.

The first session I attended was on “Building a better search engine”, which I chose because I work at Google (although on nothing related to search). The attendees speculated about the next big jump in information finding technology, including:

  • Personalized agents that know you and just find the right information – I brought up privacy, and the general response was that it was overrated and should be set aside for the sake of this discussion, as better results would trump privacy concerns.
  • Semantic technologies with natural language understanding – somebody from Powerset was there pushing this idea, and somebody else recommended Semantifind. I’m extremely skeptical of such technologies, as I’ve spent most of the past ten years figuring out how to translate between different disciplines as a generalist, and I already understand language. I think it’s going to be a long time before computers can figure out the implicit frames that influence comprehension.
  • Social search – leverage our social networks to find more relevant results. If a trusted associate noted something, it’s probably more relevant than a random stranger noting the same thing. The issue I raised was the modelling of the social network – I would trust certain friends to make recommendations about stereo equipment but definitely not about clothes, and vice versa. And unless the software can gather enough data to model those subject- and pairwise-specific interactions, it’s not going to get the desired results.
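
The modelling problem in that last bullet can be made concrete with a small sketch. Everything here is hypothetical – the names, the trust scores, and the scoring rule are invented for illustration, not drawn from any real system:

```python
# Hypothetical sketch: subject-specific trust for social search ranking.
# A single scalar "trust" per friend is too coarse; trust has to be keyed
# by (friend, topic) to capture that a friend's stereo advice may matter
# while their clothing advice does not.

trust = {
    ("alice", "stereo"): 0.9,   # trust Alice on audio gear
    ("alice", "clothes"): 0.1,  # but not on fashion
    ("bob", "clothes"): 0.8,
}

def score(result_noted_by, topic, base_score):
    """Boost a result by the topic-specific trust of whoever noted it."""
    boost = max((trust.get((friend, topic), 0.0) for friend in result_noted_by),
                default=0.0)
    return base_score * (1.0 + boost)

# Alice noting a stereo result boosts it far more than Alice noting clothes.
print(round(score(["alice"], "stereo", 1.0), 6))   # 1.9
print(round(score(["alice"], "clothes", 1.0), 6))  # 1.1
```

The structure is the whole point: trust lives on (friend, topic) pairs, so the software needs enough data to estimate every such pair – which is exactly the data-gathering problem raised above.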

As an aside, it was interesting to me that I’ve gone from being a technological positivist where technology will solve our problems, to being skeptical of most technical solutions, partially because I now think the hardest and most interesting problems are not solvable by technology per se, but instead require the design of new social technologies to coordinate people in new ways.

The next panel I attended was called something like AI and Sense making. I’m fascinated by the question of how we make sense of the world, as my continuing obsession with stories makes clear. This was a session where people discussed the idea of sense making (Gary Klein’s work with firefighters was a big influence), how it could be embedded into technology, and possible business ideas built on such technology. The discussion was interesting, but because sense making is a fuzzy cognitive concept, one attendee commented afterwards that it was difficult to separate sense making from general AI. Two recommendations for further reading I want to record for myself: Perspectives on Sensemaking, an article by Gary Klein, and Sensemaking in Organizations, a book by Karl Weick.

One useful construct from the session was the idea that we create a frame, view everything coming in through that frame, but keep track of how well incoming observations corroborate it. Once the discrepancy with reality grows too large, we have to consider junking the existing frame and finding a new one that fits the data better, which I see as yet another form of Bruno Latour’s process.

Then it was time for the keynote speech by Paul Saffo, which I had been eagerly anticipating after having seen him speak several years ago. I was not disappointed – even though it covered many of the same topics as that previous talk, it was entertaining and informative. Tidbits that I wrote down:

  • “If you don’t change direction, you’ll end up where you are heading.” (in other words, inaction is a choice with consequences)
  • The future will still have a lot of dull parts (riffing on Hitchcock’s claim that “Drama is life with the dull parts cut out of it”). We look forward to all the excitement of the future but forget that amid all that excitement will still be dull parts.
  • “Change is never linear” (s-curve, s-curve)
  • “Cherish failure, especially someone else’s” – this was a theme from the other talk I attended, where he pointed out that one consequence of the s-curve is that the moment when everybody decides a technology is a failure and will never work is precisely when it might be about to take off. Which actually made me wonder about my dismissal of semantic technologies in the session earlier, as part of the reason I dismissed them is that they’ve been “just around the corner” for 20 years now, which, in Saffo’s world, means they may be just about to finally succeed.
  • “Look for indicators” – form a quick opinion, but then look for proof that you’re wrong, which he elaborates in his strong opinions, weakly held blog post.
  • “Use forecast techniques until reality gets too complex” – this was an interesting riff where he said that even our forecasting techniques continually get outmoded and need to be updated. He believes that we’re in such a phase transition now, where the old qualitative models are breaking down, but new quantitative models haven’t arrived yet. The four factors that he thinks will drive the next generation of forecasting models are Moore’s law, better forecasting algorithms, more and better data, and more of our lives being stored in digital form thanks to Facebook. My eyes lit up, as that’s a perfect explanation of why I joined a forecasting group at Google.
  • Three book recommendations: the novel Daemon, by Daniel Suarez, “A general theory of bureaucracy” by Elliot Jaques, and the “creative destruction” work of Joseph Schumpeter.

Sunday morning started with a panel on synthetic biology. There were a variety of panelists, with backgrounds in physics, software, and biology, but my favorite was Denise Caruso of the Hybrid Vigor institute, as she questioned the assumptions that the optimistic scientists were making. Her focus area has been on risk analysis, especially in new fields where the risks are difficult to quantify, but her point is that the benefits are equally difficult to quantify, so we shouldn’t be going in with the assumption that innovation is automatically good. Her belief is that we need to come up with better processes and methods for assessing risk with interdisciplinary input. You can see why a generalist like me would be a fan (I actually asked a question during Q&A supporting her viewpoint). I chatted with her a bit afterwards, and also attended the breakout session after lunch with her on innovation and risk, which brought together interesting conversations and different perspectives (the work that Etan Ayalon is doing at GlobalTech Research looks particularly interesting to me). I also liked Caruso’s concept of Bayesian regulation, where it’s not black and white, but involves conditional probabilities.

I missed the next session, as I ended up chatting with folks from the innovation-and-risk discussion for about half of it, and then had to prepare for my own session, “How do organizations think?” I threw it open as a discussion forum expanding on the ideas in my post on organizational cognition, and had a good discussion with the eight people who attended. We talked about different people’s experiences with various organizational structures and what might improve them. One key concept that emerged was that designing an organizational culture and structure has to start with the purpose for which the organization is built. Different structures serve different purposes, and incongruities between structure and purpose cause friction. People expressed interest in possibly having a follow-up session after the conference was over, but I didn’t get everybody’s contact information, so I hope they get in touch with me.

I ended up bailing out on the end of the conference during the longevity panel, as I had other plans for the evening, but all in all, it was a good experience – I met a couple new interesting people, had some good discussions, and found new food for thought, which were pretty much my goals for the weekend. But now it’s time to get back to my normal life.

Time Perspectives of Philip Zimbardo

Wednesday, November 12th, 2008

One of the great advantages of working at Google is that famous people want to come visit. That’s how I got to see Ferran Adria a few weeks ago. Yesterday, it was John Hodgman and Jonathan Coulton. Later this week is Chip Kidd. You get the point. But what’s even nicer is that they record all the talks and post them to their own channel on YouTube, so even if you don’t work at Google, you can see the talks. Or, in my case, if you are too lazy to get over to the room at the right time, you can catch up later.

Last weekend, I was catching up on a couple such talks that I had meant to see but missed for one reason or another (including Nancy Pelosi’s visit), and the one by Philip Zimbardo caught my eye. Zimbardo is infamous for the Stanford Prison Experiment, where he simulated a prison with Stanford students and was aghast at how quickly the prison became real to all involved. Within a day, the guards were finding ways to humiliate the prisoners, and previously healthy “prisoners” were having mental breakdowns (the images from the experiment were eerily echoed by Abu Ghraib). I’m currently reading Zimbardo’s book The Lucifer Effect, where he discusses the idea that evil is not dispositional but situational; in other words, people who do bad things are not inherently evil, but instead all of us have the potential to do evil in the right situation (as his Stanford students demonstrated in their “prison”).

His latest book is called The Time Paradox, about the psychology of time, and he came by Google to give a talk on the subject. He started the talk with a discussion of the marshmallow experiment. The subjects were four-year-olds who were given a choice: they could eat the marshmallow in front of them immediately, or wait a few minutes and receive a second marshmallow for their patience. Some of them ate the marshmallow; some managed to hold out for the greater reward. Here’s the astounding result: the experimenters returned to their subjects fourteen years later and found that the ones who had the self-control to wait had better grades and better SAT scores by a statistically significant margin. In other words, performance in one brief experiment at age four was highly correlated with future performance in life. So what’s going on?

Zimbardo identifies the two types of children as having different time perspectives: the ones that ate the marshmallow have a present time perspective, focusing on the immediate gratification of the marshmallow. The ones who waited have a future time perspective, able to trade off their present gratification for future results. And our culture rewards those who can make those future tradeoffs (e.g. study now rather than play to enhance one’s college test scores).

Zimbardo later extended the classifications of time perspectives to include past-positive (focusing on the good things that have happened in the past), past-negative (focusing on the bad things), and present-fatalistic (feeling unable to influence the events impacting one’s life). You can find out your time perspective tendencies by taking the online test. Zimbardo has done survey work to show that the results of the marshmallow experiment are not isolated – the time perspective of a person is strongly correlated with the results that person achieves in many areas of life.

What I like about this is that it feels right to me as an explanatory mechanism. There are certain areas where I am heavily future-weighted, such as financial planning where I will forgo something I want today for the sake of something I am saving to buy next year. There are other areas where I am past-weighted, such as in my social identity where some part of me still clings to my self-identity from when I was a teenager. There are still other areas where I am present-weighted, with a focus on whatever feels right at the moment (which contributes to my inability to maintain a regular exercise program). I was also able to immediately start classifying other people around me into the various categories.

The sign to me of a powerful classification system is when it breaks open a problem where everything seemed ambiguous before. Back when I was a programmer, I used to love that feeling when I finally hit upon the right way to represent the data and everything suddenly became easy. And I’d been feeling muddled about certain people recently where I just didn’t understand why they behaved the way they did. But when Zimbardo gave me this new template for thinking about how people think, their behaviors immediately fell into place.

The other nice part about Zimbardo’s use of time perspectives is that they are not fixed. A future perspective can be taught. So we could test four-year-olds, identify the ones that have a present perspective and give them the tools to develop a future perspective, thus improving their ability to adapt to adult life where future tradeoffs are always necessary. Zimbardo trashes the idea that success is genetically determined, instead focusing on the idea that the tools for success can be taught with the right system – not surprising from the man who identified that evil is present in all of us in the wrong system.

I recommend watching Zimbardo’s talk if this sounds interesting, as he explains it better and more rigorously than I do. But I wanted to share this idea that had immediate impact on me. I’m curious if others have the same feeling – maybe this is just obvious to others, or maybe it’s a helpful template for thinking about the world. What do you think?

Organizational Cognition

Friday, November 7th, 2008

Over the past seven weeks (good golly, where does the time go?) at Google, I’ve noticed a funny habit of mine. Whenever I overhear a conversation involving something that is related to my team’s work, I drop whatever I’m working on and wander over to listen in. Now, one might guess this is due to my slacker ways and desire to gossip as much as possible, and one would not be entirely incorrect. However, I was listening to one such conversation earlier this week as two of my groupmates were trying to suss out exactly how an analysis should work, and realized that there’s more going on here.

Such conversations are where an organization’s thought processes are made visible (audible?). In other words, if an organization could be perceived as a group mind, then those conversations are the equivalent of watching the synapses between neurons firing. It’s the only way to get insight into how the group mind operates. This may sound a little wacky, but I’m inspired by Edwin Hutchins’s book Cognition in the Wild, where he extracts observations about cognition by watching how a group mind in the form of a navigation team operated. In the best case, meetings can be a reflection of this organizational cognition, an aspect which Peter mentioned in the comments of my meetings post.

So by listening in on such conversations, one can map out how the organization operates. Which assumptions are taken for granted? Who are the stakeholders mentioned regularly as needing to be consulted? How are decisions made in such conversations – is it by consensus, persuasion, or hierarchy? These sorts of observations may not be directly relevant to one’s job, but it’s invaluable to an amateur anthropologist like myself in understanding the different forces that are at work within the organization.

Even better, conversations often arise when new organizational territory is being mapped out, either in the form of current assumptions being questioned, or in the development of a response to a new stimulus. When everything is running smoothly and there’s a defined process for how to do things, there is no need for conversation as everybody knows how to do their job. But when one reaches the limits of one’s understanding, then one has to consult another person, and such consultations are where somebody like me can see how the organization, in the form of its constituents, learns. To use the framework of Latour, conversations are where we can see the organizational Collective perform the Consultation process, where it grapples with an outside influence (what Latour calls “Perplexity”). Seeing the Collective go through a round of growing and learning is exactly what I mean by saying that conversations are a window into the cognitive process of the organization.

This is also exactly the sort of fuzzy stuff that I once would have scorned as a hard scientist and logic-driven engineer. These sorts of ephemeral observations about an organization are difficult to quantify and would not have even been on my radar ten years ago. And now they are the sorts of things that capture the value I bring to an organization, as my ability to attend meetings and listen to passing conversations and extract this sort of organizational knowledge is a testament to my ever-improving observational skills. I liked Rands’s description of it as the culture chart, as opposed to the formal organizational chart. The culture chart isn’t written down anywhere and is supremely fuzzy – it can only be intuited by reading between the lines of conversations.

So that’s my rationalization for why listening in on conversations when starting a new job is important. It’s the best way to understand how an organization operates: which assumptions are stated (and more importantly, not stated), which stakeholders matter and which can be ignored, etc. And, in case you’re wondering, all this listening to conversations is why I have to stay at work until 9pm a couple nights a week to catch up on my individual contributions. Once I am more efficient at my assigned tasks and up to speed on how Google works as an organization, I hope to get my hours down to a more reasonable number.

Switching Costs

Wednesday, October 22nd, 2008

Earlier this week I switched my RSS reader from Bloglines to Google Reader.

I’d been meaning to check out Google Reader for months, if not years, but had never gotten around to it, as Bloglines was serving me well enough for what I needed, and I’d gotten used to its quirks.

But over the past couple weeks, Bloglines started failing at its primary purpose, delivering RSS feeds on demand, as it stopped properly updating feeds. It didn’t bother me too much at first as I was busy enough that reading blogs was a luxury, but it was starting to get annoying. And then somebody twittered about a TechCrunch article describing how Bloglines users were fleeing to Google Reader, which provided instructions on making the move. Ten seconds later, I was moved to Google Reader, and now I probably won’t go back.

Let’s parse out what happened here, as I think it’s instructive.

  1. A few years ago, I started reading enough blogs that updated infrequently that checking them one by one was becoming ridiculous. So I started looking for an RSS reader, and chose Bloglines as it met my requirements well enough at the time (Barry Schwartz, of The Paradox of Choice, would call this “satisficing” – speaking of which, I need to review that book at some point). In particular, it was web-based so that I could read blogs from work or home without duplication, which was the key differentiator from Thunderbird, the other major contender.
  2. I stuck with the choice for several years, even as bits of it started to annoy me, as the perceived switching costs were too high. Given that there are no lock-in effects in this software (no data that I couldn’t export), the switching costs were purely cognitive. In other words, the cognitive effort of switching was the major lock-in for this product. Also, the benefits of switching were minimal – Bloglines was meeting my needs, so it was unclear how other software would be better in that core functionality.
  3. Once Bloglines started to fail in its primary purpose (making it easy for me to see the latest in my desired feeds), the benefit of switching became relatively greater (other RSS readers were succeeding where Bloglines was failing).
  4. Once I read the TechCrunch article, I had “social proof”, the term Cialdini uses to label our tendency to want to see others doing something before doing it ourselves. Knowing that there were dozens of other people making the same switch helped convince me to make the jump. That was the critical tipping point.
  5. The actual switch took about ten seconds (export from Bloglines, import into Google Reader). To reiterate, the effort of switching had nothing to do with the actual work it would take to switch – it was the cognitive effort of having to re-open a decision that I had already made.
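
That ten-second mechanical switch works because both readers speak OPML, the standard XML format for feed subscription lists. Here’s a sketch in Python of what a migration tool does under the hood (the OPML snippet is a made-up example, not real Bloglines output):

```python
# Sketch: why the mechanical part of switching takes seconds.
# An RSS reader's export is an OPML file; migrating is just parsing
# one XML document of feed URLs and handing them to the new reader.
import xml.etree.ElementTree as ET

opml = """<opml version="1.0">
  <body>
    <outline title="Tech" text="Tech">
      <outline type="rss" title="Example Blog" xmlUrl="http://example.com/feed"/>
    </outline>
  </body>
</opml>"""

def feed_urls(opml_text):
    """Extract every subscribed feed URL from an OPML export."""
    root = ET.fromstring(opml_text)
    return [node.get("xmlUrl") for node in root.iter("outline")
            if node.get("xmlUrl")]

print(feed_urls(opml))  # ['http://example.com/feed']
```

The data transfer is trivial; as the list above argues, the real switching cost was never in the data.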

What’s my point here? In the Web world, switching is often fairly painless, as most vendors provide a way to easily get one’s data out of their system (and if they don’t, that’s a bad sign). Companies are generally relying on us to pick a system and get comfortable with it, so that habit and the perceived cognitive effort of making a change are a far greater impediment to switching than other possible lock-in effects. In such a situation, the company must never give users a reason to re-open the decision; in other words, if the company continues to fulfill its value proposition to its users, they will stick around, but as soon as it lapses, users may leave in droves (as appears to be happening to Bloglines).

Another way of thinking about it is that the game between companies and users is all played in people’s minds. While economists may believe that people are rationally maximizing their potential economic gain, most of us are far less rational in our decision-making. We use brand names over equivalent generics because of advertising or because we “trust” the brand name more. We stick with products or services that are clearly inferior to newer ones because it’s too much effort to re-open the decision we originally made. Companies that understand this game will be telling stories to convince people to use their products or services, rather than trying to convince them with data. For instance, the book Positioning is all about creating new primary needs in the minds of consumers to give them the necessary impetus to switch.

So focus on the value proposition your company offers to its customers. If you can make sure that the value of your product keeps on increasing, you can benefit from the perceived effort of switching and keep customers even in situations where they might rationally choose another product or service. Ideally, of course, your product is the best in class, but every little edge counts, right?

Now I just have to get over the cognitive effort of switching from Windows to Mac…

P.S. I have been at Google for exactly one month as of today. Crazy how the time flies!

Self Haxx0ring

Thursday, September 25th, 2008

As noted in my last post, this is my first week as an employee of Google. I’m trying to get up to speed on the types of things that I will be doing, which meant spending most of today learning about the ad system, revenue forecasting models, how the Google backend works, etc. Unsurprisingly, there’s a tremendous amount of Google-specific knowledge developed over the past ten years on these topics. Somewhat surprisingly for an engineering organization, the internal documentation is fairly good, and is well-linked to other relevant documentation.

This led to an interesting situation, where my brain started feeling very odd with a sort of buzzing sensation, and I started having trouble thinking clearly. I eventually figured out where I’d felt that way before – it had been during Mike Murray’s Hacking the Mind talk, when he had pulled a buffer overflow attack on the audience by opening loops (by starting stories) and never closing them (by finishing the stories). What had happened is that I had started exploring one topic, seen a link to another topic, opened a new tab to look into that topic, seen a link to a third topic, opened a tab for that, etc. By the time I noticed my brain feeling odd, I was up to ten open tabs on various topics. Yup, I had committed a buffer overflow attack on my own mind to the point where I’d impaired my own functioning.

Once I figured that out, though, it was easy to figure out what to do – I had to close the loops in my mind and offload the storage from my brain. So I started a to-do list with the different topics of interest to be explored later, and the links that I had open in each tab for that topic. I also started a list with links to the tabs that had particularly useful information for future reference. Once I created those lists, I started closing the tabs and thus convinced my brain that it could free up those memory pointers. With the swap space that had been devoted to those open loops, I then had the brainpower to actually think again.

It was an interesting process to watch, but it also highlights a danger of my typical way of learning. I like learning in a top-down fashion, where I learn the big picture of a system first, which gives me a framework on which to attach my understanding of pieces of the system. At the previous startups I had worked at, I could read all of the documentation written by the startup in less than a day, and get a big-picture understanding of how everything fit together pretty quickly. Trying to do that at Google was like trying to drink from the firehose and demonstrated that there are some systems that are just too big for me to hold completely in my brain at once.

So I’ll have to be more disciplined about organizing how I’m learning things this week. As noted, I’ve opened up a couple files to help me organize what I’ve learned so far and what I still need to learn. I’ll also need a few more files to offload information once I’ve learned it, at least until I see it enough times that it can be transferred to long-term embedded storage, which has more capacity.

Anyway, I was entertained by this incident of accidentally malicious self-hacking, so I figured I would share.

A Whole New Mind, by Daniel H. Pink

Monday, July 7th, 2008

Amazon link
Official book site

My friend Wes recommended this book to me after my social capitalist post where I claimed that we were moving from a world defined by technology to one defined by social connections. Daniel Pink’s book describes a similar transition from an emphasis on left-brain thinking towards right-brain thinking.

Pink starts the book by describing the characteristics of L-directed thinking and R-directed thinking (his descriptions of left-brain and right-brain thinking). These are probably familiar to most readers – the left brain controls language, is detail-oriented, and thinks sequentially and deductively, while the right brain is better at reading emotions and context (the right brain does facial recognition), thinking inductively by synthesizing “isolated elements together to perceive things as a whole” and seeing the big picture. Pink emphasizes that neither brain half should be dominant – that they are designed to complement each other and create one functioning whole (hence “whole new mind” as the title).

Pink then explores the forces changing the world from an emphasis on L-directed thinking to R-directed thinking. The 20th century was dominated by technology, from the assembly line and machine guns at the beginning of the century to the atom bomb and space flight in the middle, to electronics and the Internet at the end. Pink observes that three forces are combining to de-emphasize technology in the developed world:

  • Abundance – when material wants are satisfied, then the differentiating factor is no longer functionality, but design e.g. the market dominance of the iPod over other MP3 players with more functionality.
  • Asia – like Friedman’s World is Flat, Pink believes that anything that can be outsourced will be. Things that can’t be outsourced include high-touch jobs that involve direct human interaction e.g. nursing is one of the fastest-growing professions in America.
  • Automation – any job that can be reduced to a set of instructions will be automated, not only on the factory floor, but also in offices. Jobs that require human intuition or empathy are the only ones likely to be safe from automation.

The rest of the book is a description of the R-directed skills that Pink thinks will be important in the decades moving forward: Design, Story, Symphony, Empathy, Play, Meaning. The ones that stood out to me were Story and Symphony, which isn’t surprising given my interest in stories and in the value of synthesis. I especially liked the description of Symphony as seeing relationships: “People who hope to thrive in the Conceptual Age must understand the connections between diverse, and seemingly separate, disciplines” – a nice one-sentence summary of what I value in being a generalist.

At the end of each chapter, Pink also provides a set of activities to exercise that particular skill, which include seeing that skill performed well (design museums, story-telling festivals, etc.), trying it oneself (drawing, finding ways to learn more and empathize with coworkers), or reading books on the subject. I’ll have to try some of them myself.

Overall, it’s a decent book that’s a quick read. Nothing particularly new to me, but it was nice in providing a reference work to explain some of the ideas that I believe. I’d recommend it as a library book if you’re interested in moving your life or career in the direction of the right brain.

Intelligent organizations

Monday, June 30th, 2008

Tobias Lehtipalo asked a really interesting question on the pmclinic list, which essentially was: Can we apply the principles described by Jeff Hawkins’s model of the brain in On Intelligence to organization design?

To review, Hawkins suggests that the brain is composed of a set of pattern-recognition layers. Each layer is trained to look for certain patterns of inputs, and react accordingly. If the layer sees inputs that don’t match what it thinks it should be seeing, it passes the inputs upwards to the next layer to see if the next layer knows how to handle it, eventually filtering up to consciousness. So when we first learn an activity like biking or driving, we have to pay conscious attention, but as time goes on, our brains learn to handle most of the mundane details of such activities subconsciously without having to access higher layers.

Lehtipalo suggests an analogy for the organization:

An intelligent organization is one where employees act independently but in concert guided by management input and their own senses. As long as employee experiences matches the high level guidance no approval is needed for action. However when there is a misfit between management prediction and reality the staff experiences are communicated upward and management adjusts their model of the business and their guidance accordingly.

Employees handle most decisions at their “layer” of the organization, and only pass upwards the ones where the decision is ambiguous within the context of the high-level goals of the organization. Once those decisions are made by higher management, the team knows how to handle those situations in the future.
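
As a sketch (with invented layer names, events, and responses – this is purely illustrative, not a real system), the escalate-and-latch pattern might look like this:

```python
# Illustrative sketch of the Hawkins-style hierarchy Lehtipalo maps onto
# an organization: each layer handles inputs matching its learned patterns
# and escalates anything unexpected to the layer above. Once the higher
# layer decides, the lower layer "latches" the response and handles that
# situation itself in the future. All names and events are invented.

class Layer:
    def __init__(self, name, responses, above=None):
        self.name = name
        self.responses = dict(responses)  # event -> learned response
        self.above = above

    def handle(self, event):
        if event in self.responses:
            return (self.name, self.responses[event])
        if self.above is not None:            # escalate the surprise upward
            who, response = self.above.handle(event)
            self.responses[event] = response  # latch it for next time
            return (who, response)
        # top layer (management / "consciousness") makes a fresh decision
        response = f"new policy for {event}"
        self.responses[event] = response
        return (self.name, response)

management = Layer("management", {})
team = Layer("team", {"routine order": "ship it"}, above=management)

print(team.handle("routine order"))   # ('team', 'ship it')
print(team.handle("angry customer"))  # ('management', 'new policy for angry customer')
print(team.handle("angry customer"))  # ('team', 'new policy for angry customer')
```

Note the third call: the surprise only travels upward once; afterwards the team handles it locally, which is the “the team knows how to handle those situations in the future” step above.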

This could be perceived as a typical hierarchy, managed with ISO 9000 and three-ring-binders of process, but Lehtipalo is hinting at something more. Rather than treating employees as neurons capable of only routinized decisions, we need to start taking advantage of the intelligence already in place at each level of the organization.

This conception of the organization as collective intelligence fits in well with where my own thoughts on organizational design have been going. As I suggested in the social technologies post, I think “that future organizations will consist of consensus-driven vision creation, and then each person determining how they can contribute to that vision.” In other words, the organization will figure out an overall purpose that matches its strengths with opportunities, and minimizes its weaknesses, and then members of the organization figure out how they can use their own strengths to pursue that organization vision (yes, it’s a fractal organization design, where each level is self-similar, and thus could be broken into several levels of hierarchy without changing the process). The role of managers in such an organization would be to guide the vision-building process to achieve consensus, and then to guide organization members in finding the place best suited for them within that vision. If you’re a science fiction fan, it would look like Ender’s army in the book Ender’s Game.

In an interesting coincidence, this week’s book in the email business book club is Do the Right Thing, by James F. Parker, former CEO of Southwest Airlines. Today’s excerpt contained the following quote:

The ultimate success of any organization requires consistently excellent performance at every level. Vibrant and successful organizations … are built on a culture of engagement, in which employees believe in the mission they are trying to accomplish and know that they are contributing to its success. People who are given the room to succeed usually will.

As usual, I see the same patterns everywhere once I start looking, but I think this is congruent with the other thoughts above. Successful organizations have a compelling vision that employees value and to which they want to contribute. They find ways to contribute to that vision appropriate to their level (at Gore and Associates, people are not given specific responsibilities when hired, and just wander around the factory until they can figure out what they should be doing to help). This means that managers don’t have to tell their employees what to do – they just have to make sure that employees are put in situations where their decisions will match up with the organization vision. On the flip side, if there isn’t a good match between a given employee and the vision, then perhaps the employee and organization may not be a good fit with each other and should part ways (Zappos believes this so strongly that it offers new employees $1000 to quit).

I should include a caveat here that this is meant to describe organizations of talented “knowledge workers”, full of employees who are comfortable in a free agent world where they move freely between organizations as opportunities arise. I don’t know if it would work as well in a more prosaic environment such as manufacturing, although Ricardo Semler seems to be making it work.

To really make your head spin, we can take it a step further and think about how this all ties into Latour’s theory of the collective. In Latour’s world, a collective is confronted with a new situation or new inputs that are not part of the current collective (he calls this “Perplexity”). The collective undertakes a period of Consultation to decide how to respond to the new inputs. Once a decision has been made with appropriate inputs from all relevant stakeholders, a new Hierarchy of the collective is formed, and then the Hierarchy is Institutionalized to preserve the new order. Or, to put it in the terms of a recent post, learn a new behavior, then latch it.

Intelligent organizations need to streamline their learning and latching process so as to be able to respond faster when confronted with new situations. At the same time, to be successful, such organizations need to properly handle the “Consultation” period and make sure all relevant stakeholders are properly represented, which is more difficult in an ever-more-interlinked world (who would have guessed twenty years ago that a multinational corporation like Nike could be threatened by grassroots organizations?). To accelerate the learning process while ensuring the right things are learned, organizations need to figure out better ways to take advantage of the collective intelligence of their employees, such that all available intelligence is concentrated on the problem at hand, with each employee thinking about the problem at their level.

I still haven’t quite figured out how this all ties together yet, and whether the type of organization I’m describing is realistic, but I do think that the vision is starting to coalesce. There are many practical questions still ahead – as Lehtipalo asked in a followup email:

Is it possible through the structure of the organization, through processes, reward systems and with the help of various communication tools to make intelligent behavior the *natural* thing for the organization – i.e. make it the path of least resistance?

In other words, how do we set up the incentives of the organization such that employees do the right thing? It’s a hard question, so I’m going to leave it for another time.

P.S. 10 posts in June! Unsurprisingly, this is the most posts in a month since last summer. Amazing how not having classes gives me more time to blog.

Intelligence in Google world

Tuesday, June 10th, 2008

In a comment on my strategic intuition post, Seppo asked the interesting question, "How will Google change the way we *think*?" In particular, he notes that the sheer accumulation of facts was once a metric of intelligence, but in a world where Google is accessible from the phones in our pockets, mere facts don't have the value they once had. So Seppo asks what it means to be intelligent in an always-on Google world where we are practically omniscient, and how our intelligence will evolve to include Google.

One question is what is the value of facts? If I can look anything up in Google, then why bother remembering it myself? I think this one is pretty easy to address, as it’s essentially the question of why anybody should bother learning arithmetic when calculators are available. The reason to learn basic facts or techniques is that those basics need to be embedded into our subconscious as cognitive subroutines before we can build more advanced ideas upon those foundations. We can’t learn algebra unless we know arithmetic cold, and we can’t learn calculus until algebra is unconscious. I think the same holds true for networks of facts – if I just have a set of facts that I have looked up, I can’t see the interconnections between the facts and see the big picture patterns that relate those facts (this is my impression of what historians do). The only way to start seeing larger patterns is to have spent the time memorizing the basics until they are unconscious.

Seppo notes that critical thinking is another critical component of intelligence in a “sea of facts” world: “you need to be able to know where to get information, how valid that information is, the reliability of the source… and that comes from having a certain amount of that experience stored in your head where it can immediately be synthesized.” If you don’t have a core set of facts stored in your brain, you have no basis on which to evaluate new facts coming in. One of the consequences of the “sea of facts” is that people can pick and choose which facts they want to believe, and the facts that we place in our brain will be the filter by which we accept or reject new facts (Farhad Manjoo’s book True Enough points out that since we no longer share a common set of basic facts, reality itself is fracturing). A basic curriculum of facts is necessary to make sure we are all evaluating new facts with the same set of criteria.

In another example of the Internet providing fodder appropriate to my blog posts, the Atlantic just published an article by Nick Carr called "Is Google Making Us Stupid?". Carr's concern is that the river of information presented by Google and blogs and always-on connectivity has created such an overload that our minds are adapting by becoming browsers, grazing at the edge of the river. He worries that we are losing the ability to concentrate deeply, to read books, to handle anything that requires more than the typical 1.5 minutes necessary to read a web page. I'm less concerned than Carr because I think we will continue to adapt and evolve. One might argue that the same shortening of attention span happened with television (Clay Shirky makes a powerful argument that we took all the extra time and cognitive surplus created by productivity gains in the twentieth century and wasted them on television because we didn't know what else to do) and we're learning to adapt to that with longer form television like The Wire.

I think that our initial reaction to any new technology is to over-use it in the default mode presented by the technology. But over time, instead of adapting ourselves to the technology, we learn how to adapt the technology to suit us. Part of that is sheer familiarity – kids that grow up in Google world will adapt to it in ways that we can not even foresee yet. We who are coming to it as adults are always going to be non-natives that don’t speak the language. It’s not surprising that it feels clunky and awkward and threatening to us, as our brains were formed in a different time and in a different environment.

My personal take is that the way intelligence will evolve is by outsourcing. In a globalized world, companies succeed by focusing on using their limited resources to do what they do better than anybody else in the world and outsourcing anything not directly connected with that focus. Similarly, in Google world, an intelligent individual will load their brain and memory with the foundational facts and skills necessary for them to build the advanced skills they want, and outsource the rest either to the Internet or to other people (I’ve played around with these ideas before, in posts on how we extend our cognitive subroutines to use external objects or even other people). Adding in the recognition that we have a limited capacity in our consciousness and in our memory means that we need to outsource wisely and carefully; we need to recognize which concepts are non-core and can be looked up when necessary, and embed the remaining foundational concepts and skills into our subconscious.

Another interesting concept to me is how meta-information, the information about the information, will increase in value. What's valuable about Google isn't the information that it links to directly – it's the sorting of that information by PageRank. That meta-information is what has driven Google to dominance. This interests me because I'm moving in that direction myself. I had a conversation with a coworker last week where I knew the answer to exactly zero of the questions that he asked me. But I knew precisely who did know the answer to each question, and was able to explain how each person's work was relevant to the question. One of my roles at each of the companies where I have worked is to be a mini-Google, the place to start when you don't know where to start, as I understood enough of each person's work at the company to place it into the broader context. Like Google, I cache just enough information about a resource (information or person) to determine its utility, and link people to the actual resource if appropriate. Huh. Never put it in quite that way before, but that might be a good way to pitch my generalist skills.
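To make the "sorting is the value" point concrete, here's a minimal sketch of the PageRank idea – the standard power-iteration formulation run on a toy link graph. The pages, links, and damping factor below are made-up illustrative assumptions, not anything from Google:

```python
# Toy web: each page maps to the pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Power iteration: each page repeatedly shares its rank among its outlinks."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page gets a small baseline, plus shares from pages linking to it.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

ranks = pagerank(links)
print(ranks)  # "c" ends up ranked highest: most pages link to it
```

The meta-information (who links to whom) determines the ordering; the pages' contents never enter into it.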

This post is a bit more disjointed than I'd like, but I'm going to put it up anyway. I think there's a lot of interesting thought to be done in this area, especially in developing tactics for maximizing brainpower in an always-on Google world. How do you use Google to augment your intelligence?

Strategic Intuition and Expertise

Wednesday, June 4th, 2008

On Monday night, I went to a talk by William Duggan, a Columbia business school professor who studies strategy, on a concept that he calls strategic intuition. Duggan has written a book on the subject, and has set up a blog to discuss the concept.

Duggan started by discussing the differences between expert intuition and strategic intuition. Expert intuition is built up by practice and familiarity with situations, of the sort described by Gary Klein in Sources of Power or Malcolm Gladwell in Blink. Expert intuition is using one’s built-up experience to instantly and unconsciously recognize the right thing to do in a familiar situation or its variants.

Duggan then differentiated strategic intuition by explaining that strategic intuition is the ability to recombine previous ideas into a wholly new pattern to address new situations. He uses von Clausewitz’s strategic primer, On War, to describe the process:

Clausewitz gives us four steps. First, you take in “examples from history” throughout your life and put them on the shelves of your brain. Study can help, by putting more there. Second comes “presence of mind,” where you free your brain of all preconceptions about what problem you’re solving and what solution might work. Third comes the flash of insight itself. Clausewitz called it coup d’oeil, which is French for “glance.” In a flash, a new combination of examples from history fly off the shelves of your brain and connect. Fourth comes “resolution,” or determination, where you not only say to yourself, “I see!”, but also, “I’ll do it!”

The rest of Duggan’s talk was describing different examples of strategic intuition, such as Napoleon’s strategy in a critical battle. He pointed out that none of these people invented something new – they just recombined previous elements in new ways. For instance, he described the Google guys as combining data mining techniques from their academic research, AltaVista’s search crawling, the idea of academic citations used as a ranking method, and Overture’s ad placement. Duggan gleefully used T.S. Eliot’s quote “Immature poets imitate, mature poets steal” to illustrate the value of looking out into the world to find the missing piece that might make all the difference.

I like the strategic intuition concept in general. I’ve experienced that flash of insight a few times; as I describe in my cognitive subroutines post, “I had one of those moments where I connected a bunch of ideas, and synapses lit up”. Strategic intuition also appeals to me in that it provides a useful role for a generalist; specialists excel at expert intuition, but only generalists can bring the wide-ranging set of ideas and freedom from preconceptions that are necessary for strategic intuition in Duggan’s model.

I am a bit skeptical of how well supported this model is. He claims it's based on the intelligent memory hypothesis of how the brain works, which I assume is what is described by Hawkins in On Intelligence. I see how that would apply to expert intuition, which builds in common responses at lower layers of the neocortex, but it would seem to fall short in strategic intuition. This may be answered in his book, so I may have to pick that up at some point (after I've finished the ten books lying on my floor in various stages of completion).

I’m also skeptical of Duggan’s contention that this primarily happens in the mind of one person. He started the talk by asking people where they got their good ideas, and got answers like “in the shower”, “while running”, and “late at night” and used those answers to scoff at the value of typical group brainstorming sessions. I find this interesting, because I think by talking, and often get great ideas in conversation with others. If gathering a bunch of ideas into one’s brain is advantageous for strategic intuition, it would seem to be even better to combine the ideas across two or more brains. Thinking by myself often gets me stuck in ruts that I can’t escape (which makes me unable to achieve the “presence of mind” Duggan cites as being key), and talking to somebody else breaks me out of those ruts. It seemed like Duggan undervalues the role that conversation with others can play in strategic intuition (again, perhaps something he covers more in the book). I think this is one of the roles that a generalist plays – being able to combine ideas from multiple people to create flashes of insight that could not be conceived from within any one person.

Duggan’s concept of strategic intuition does help to answer a question I’ve been struggling with since watching a Malcolm Gladwell talk about what constitutes genius. In that talk, Gladwell differentiates between genius and expertise. Genius is just being flat-out smarter and seeing things others can’t. Gladwell uses the example of Michael Ventris, the man who was able to decipher the Linear B language in a couple years in his spare time, after others had spent decades trying to figure it out. Other examples would be people like Einstein or Tesla.

Gladwell contrasts genius with expertise by citing the "10,000 hour rule", where he claims that it takes 10,000 hours (approximately 3 hours a day for 10 years) of deliberate practice to become a world-class expert at something. Gladwell finds it interesting that talent or genius has almost nothing to do with it – if you have the persistence to put in that 10,000 hours of deliberate practice, you will be an expert. He uses the interesting example of Andrew Wiles proving Fermat's Last Theorem – Wiles wasn't a genius, and was not particularly gifted among mathematicians, but Gladwell observes that he was probably the first mathematician to just work at Fermat's Last Theorem for 10,000 hours, and he eventually cracked it. Another example would be somebody like Edison with his 99% perspiration quote.

The 10,000 hour rule really dismayed me when I first heard Gladwell speak about it partially because it makes so much sense. It takes that sort of dedicated repetition and practice to build up the unconscious machinery and cognitive subroutines to see beyond the basics. This applies in games like chess and tennis, where dedicated prodigies can become world-class competitors as teenagers (ten years after they start), as well as most careers. And the question that faced me was where I was spending my 10,000 hours.

Duggan’s talk gives me some hope in providing a new framework for the value a generalist might have. Strategic intuition is the ability to bring disparate elements together by seeing the world with a fresh perspective (what von Clausewitz called “presence of mind”), which is precisely the value I hope to achieve as a generalist. Rather than extend the limits of an existing field as an expert might do, it’s the ability to remix fields and combine them in new ways. I wonder if it’s possible to spend my 10,000 hours as a generalist, and, as Seth Godin put it, specialize in being a generalist. I guess we’ll find out.

Nassim Nicholas Taleb and Nonlinearity

Monday, April 14th, 2008

Over the weekend, I went for a walk and listened to Nassim Nicholas Taleb’s talk at the Long Now (viewable at the Whole Earth site, and summarized here). I’ve been doing this for a few weekends now – I can never pay enough attention to listen to a talk like that if I’m at home because I get distracted, but going for a nice long hour-and-a-half walk is a good way to burn off some energy and get educated at the same time. I recommend the Long Now podcast if you’re looking for good talks from interesting intellectuals.

Taleb recently published The Black Swan, a follow-up to his original book, Fooled by Randomness, and uses the talk to discuss some of the ideas from that book. I won't try to summarize the whole talk, but he made two key points that I want to record for future reference. Both points derive from how our intuitions and our mental tools are not equipped to handle nonlinear models. This may seem like an abstruse topic, but it had very real consequences in the subprime meltdown, when investors' theories did not take into account nonlinear failures of their models.

Taleb posits two worlds: Mediocristan and Extremistan. He describes Mediocristan by having the audience imagine a group of 100 people and their distribution of weights. He then asks how the average weight of the group would change if we added the heaviest person in the world to that group. It turns out not to affect the average much – even adding a 1,000-pound person shifts the average by only a few percent. This is the world of the normal Gaussian distribution that we understand very well with standard deviations and the like.

Now do the same thought experiment, but use people’s wealth instead. Imagine a group of 100 typical people, and their average wealth. Now add Bill Gates to the group. At this point, 100 of the 101 people in the group are below average in wealth, and Bill Gates has approximately 100% of the wealth of the group. This is the world of Extremistan, where outliers can blow up the normal distribution. This is the world of the Black Swan.
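The contrast is easy to check with back-of-the-envelope numbers. The specific figures below – a 180-pound average, a $100 billion fortune – are my own illustrative assumptions, not Taleb's:

```python
# Mediocristan: add the heaviest person in the world (~1,000 lb)
# to a group of 100 people averaging 180 lb.
weights = [180.0] * 100
avg_before = sum(weights) / len(weights)
avg_after = (sum(weights) + 1000.0) / 101
print(f"average weight shifts by {100 * (avg_after / avg_before - 1):.1f}%")

# Extremistan: add a ~$100B fortune to 100 people averaging $100k net worth.
wealth = [100_000.0] * 100
outlier = 100e9
share = 100 * outlier / (sum(wealth) + outlier)
print(f"outlier's share of total wealth: {share:.3f}%")
```

The weight outlier nudges the average by a few percent; the wealth outlier simply *is* the distribution. That asymmetry is the whole point of the two worlds.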

And what’s interesting is that we are so bad at dealing with Extremistan. We just don’t intuitively get it, even though we are surrounded by examples of it. Finance and wealth. Book publishing (a significant portion of all book sales are Harry Potter books). The music industry. eBay. We live in an Extremistan world, but our intuition (evolved in a simpler time without network effects) is still stuck in Mediocrestan. So we have to beware of our instincts, because they will get the wrong answers. And we have to beware of charlatans using Mediocrestan theories because they are calculable – it’s like physicists treating everything as a simple harmonic oscillator because that’s the only equation they can solve.

Another example of Extremistan comes from a completely different source. I’m currently reading Poor Charlie’s Almanack, a book of the wisdom of Charlie Munger, Warren Buffett’s investment partner. Munger notes: “If you look at Berkshire Hathaway and all of its accumulated billions, the top ten insights account for most of it.” He also quotes Buffett as saying:

“I could improve your ultimate financial welfare by giving you a ticket with only twenty slots in it so that you had twenty punches – representing all the investments that you got to make in a lifetime. And once you’d punched through the card, you couldn’t make any more investments at all. Under those rules, you’d really think carefully about what you did, and you’d be forced to load up on what you’d really thought about.”

The typical investment strategy is diversification – invest in lots of things and trust in the average, which would work in Mediocristan. Buffett and Munger have internalized the idea of Extremistan in investing and exploited it to their advantage by realizing that there will be successes wildly out of proportion to the norm and targeting only those investments.

The other illustration of nonlinearity that Taleb used was to imagine an ice cube melting into a small puddle. Now imagine starting with the puddle and trying to reconstruct what the ice cube looked like. You can get the volume of the ice cube, but you can not derive the shape of the ice cube because there are an infinite number of shapes that could have melted and left that puddle. In other words, there is not sufficient information in the final state to determine the initial state; information is lost in this process. He uses this observation to illustrate why he doesn’t trust theories; because the observable world does not constrain theories enough, many theories can fit existing data without providing predictive power.

This multiplicity could also be illustrated by taking the sequence: 1, 2, 3. What’s the next number? Most of us would answer 4. But the answer could be anything from 0.1 to 100,000. I can construct an equation that would give any answer you chose as the fourth entry in that sequence. There are an infinite number of possibilities that fit the available data. Taleb reminds us of this multiplicity and displays extreme skepticism when decisions are made based on believing just one possible theory.
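Taleb's sequence claim is easy to demonstrate: Lagrange interpolation will fit a cubic polynomial through 1, 2, 3 and any fourth value you like. A quick sketch (the target values are chosen arbitrarily, echoing the 0.1-to-100,000 range above):

```python
def lagrange_poly(points):
    """Return a function evaluating the Lagrange polynomial through the given (x, y) points."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Force the sequence 1, 2, 3 to continue with any value we like.
for target in (4, 0.1, 100_000):
    p = lagrange_poly([(1, 1), (2, 2), (3, 3), (4, target)])
    # The first three values are always 1, 2, 3; the fourth is whatever we asked for.
    print([round(p(x), 6) for x in (1, 2, 3, 4)])
```

Each polynomial fits the visible data perfectly, which is exactly why "fits the data" alone tells you so little about what comes next.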

Taleb’s ice cube reminds me of a discussion from Zen and the Art of Motorcycle Maintenance. Pirsig quotes Poincare as saying “If a phenomenon admits of a complete mechanical explanation it will admit of an infinity of others which will account equally well for all the peculiarities disclosed by experiment.” This is the dirty secret of science – theories are worth nothing, because an infinite number of theories can explain any experimental result, including such outlandish ones as the Flying Spaghetti Monster. Popper’s claim that theories must be falsifiable to be scientific is a consequence of this – every theory is always one experimental observation away from being disproven. Scientists live in a world where they are sifting through an infinity of possible theories, trying to choose one that best fits their observations, but knowing that their theories can never be proven true, only proven false.

I don’t really have any deep analysis here. I liked the visual imagery Taleb used to illustrate his points, and wanted to record that in this post. After listening to his talk, I may have to get The Black Swan from the library this summer to see if the rest of the book is of similar quality.
