Eric Nehrlich, Unrepentant Generalist » cognition

Archive for the ‘cognition’ Category

Mapping out Organizational Space

Saturday, December 27th, 2008

I really liked Tim O’Reilly’s post today about how companies like Google and WalMart are incorporating IT into their organizational DNA. O’Reilly’s post describes how those example companies are mapping out a new way of organizing people built around integrating IT into how the organization functions:

Sensing, processing, and responding (based on pre-built models of what matters, “the database of expectations,” so to speak) is arguably the hallmark of living things. We’re now starting to build computers that work the same way. And we’re building enterprises around this new kind of sense-and-respond computing infrastructure. …It’s essential to recognize that each of these systems is a hybrid human-machine system, in which human actions are part of the computational loop.

I particularly like O’Reilly’s description of the organization as a group mind that incorporates both people and machines, as it fits in with my thoughts on organizational cognition. The organization also incorporates culture, processes and many other feedback loops that structure how the organization accomplishes its tasks.

Let’s start by taking a quick look at two existing organizational models:

  • Small teams – the pre-industrial-age organizational model. In a small team, no organizational structure is needed because everybody knows what everybody else does, and decisions can be made organically or by consensus. New team members are indoctrinated into the way things work by social pressures. Whether discussing hunter-gatherer bands or artisan guilds, it was rare for organizations to grow beyond 30 people without splitting into smaller groups. There’s a reason that even modern managers understand the power of small targeted teams. Communications limited the size to which a team could grow, as the number of communication pathways grows quadratically with the size of the team.
  • Hierarchy – the industrial-age organizational model. Information and decisions are funneled up to the appropriate decision-maker, and the resulting decision is distributed out to the employees who carry out those decisions. This was ideal in a world of limited communications, as each employee knew that information flowed up to their manager, and decisions flowed down from their manager, so they only had one primary communication link to maintain. Hierarchies also simplified assimilation of new people because the hierarchy defined each employee’s responsibilities, generally in an organizational handbook.
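To make the communication limitation concrete: a fully connected team of n people has n(n-1)/2 pairwise links, so relationships to maintain grow much faster than headcount. A quick illustrative sketch (the team sizes are arbitrary):

```python
# Pairwise communication links in a fully connected team of n people.
def pairwise_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 15, 30):
    print(f"{n:2d} people -> {pairwise_links(n):3d} links")
# A 30-person team already implies 435 distinct relationships to maintain.
```

This is also why the hierarchy was such a powerful adaptation to expensive communication: it caps each employee at a single primary link.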

There have been various hybrid organizational models where there are hierarchies of teams and other configurations, but teams and hierarchies have been the basic building blocks for most organizations.

We are in a fascinating time where the number of possible organizational solutions has gotten much larger, as technology has removed the communication limitations that previously eliminated many potential configurations. We are just now figuring out what the new possibilities are, evaluating their strengths and weaknesses, so that we can find the appropriate option for a given venture. To put it in geekier terms, we are starting to map out the vastly expanded search space for organizational structures.

I think O’Reilly’s post identifies one direction, where organizations integrate computers so that certain decisions (like Google ranking web pages) don’t need to be handled by people and instead information deluges are handled by software. One of my interests is in trying to map out other possibilities, what they would look like and how they would fit various organizational purposes. My previous post about the future of organizations discussed how the new limitations may be social rather than technical, which implies that we need to start designing new social structures that can take advantage of the newly available technology.

One possibility that I’m playing with is that of overlapping teams with clearly defined roles. The good teams I’ve been on involved people who trusted and respected each other’s contributions to the team’s overall goals. I’d like to think that a fractal organization could be built from such teams, each with its own goal, where each team trusts the others to accomplish their goals in service of the organization’s goals. There would be a ton of communication necessary to distribute information within the organization to where it needs to go, but I think that is becoming more realistic by the day.

Another possibility is the free agent world, where there are no continuing organizations. Instead, coalitions of individuals form for specific projects, accomplish those projects by bringing in other people as needed, and then disband to pursue other projects with different people. This would be the endpoint of the world where everybody becomes a consultant in their specialty.

I’m sure there are lots of other possibilities that I haven’t considered. For instance, I’m definitely interested in what we can learn from how World of Warcraft guilds are organized to accomplish their goals when every player is free to leave guilds that don’t work for them. Or how organizations mobilize volunteers to work for them – I’m sure there’s much to learn from Obama’s campaign this year. I’d love to hear of other ideas that people have on how to organize people.

P.S. I finally created the Google group/email list to discuss organizations that I mentioned in that future of organizations post, so go ahead and join up if you’re interested.

The Future of Organizations

Thursday, December 11th, 2008

Paul Graham’s latest essay claims that small organizations are the future:

“But in the late twentieth century something changed. It turned out that economies of scale were not the only force at work. Particularly in technology, the increase in speed one could get from smaller groups started to trump the advantages of size. …For the future, the trend to bet on seems to be networks of small, autonomous groups whose performance is measured individually. And the societies that win will be the ones with the least impedance.”

This is interesting to me because I’ve been thinking about organizational cognition recently, which is the question of how an organization creates a group mind that knows more than its individual constituents. If the trend is towards smaller organizations, then perhaps the problem isn’t how to get large organizations to operate more effectively, but instead how to facilitate cooperation between organizations. These are similar problems, but existing organizational solutions like hierarchies don’t work for inter-organization collaboration, which creates urgency to find more flexible solutions.

This move towards less formal organizations to accomplish tasks is also covered well in Clay Shirky’s Here Comes Everybody. Shirky cites Ronald Coase’s theory that companies exist because the transaction costs associated with organizing people were more expensive than the associated inefficiencies of not necessarily finding the best person for each individual task. According to Shirky, new technologies lower the Coasean floor and create the possibility of impromptu evanescent gatherings of people accomplishing things together that could simply not have been organized previously.

So how small can organizations get? Are we approaching a full free market world where we recruit different people for each individual project (the analogy I use in that post is movie making)? I don’t think so. And here’s why.

My theory is that the new Coasean floor is going to be set by social trust. While we are in a world where I could hire a programmer to do a task from Elance or oDesk, I have to admit that I would be very nervous about doing so for any critical task. Why? Because I wouldn’t know the person and wouldn’t trust them.

It takes time and experience together to build the trust necessary for a team to function effectively and efficiently. Teams do not begin jelling as high performance units until each member of the team trusts the others to the point where he or she feels comfortable outsourcing parts of their intelligence to them. In other words, even though we have the physical technology now to collaborate informally and spontaneously, we do not have the social technology yet to fully exploit those capabilities (which, now that I re-read that post, reminds me that I need to get back to that topic at some point).

So where does the social trust Coasean floor lie? Katzenbach and Smith suggest that the highest performing teams have between 6 and 15 people – the lower bound is set by not having enough variety of skills within the group to really create a group mind, the upper bound by communication inefficiencies. That range sounds right to me as well, based on my own experience with various teams at various companies. To really get an answer, we’d have to map out the performance curve of groups as they grow; in other words, 2 people working on a project together might get less done than those 2 people working independently because of the communication overhead, but they might be more effective because they can bounce ideas off of each other. How that scales up to 3, 4, 5, or 10 people depends on the people, and the organization, and the communication technologies in place. But I would guess that the sweet spot is in the 6-10 person range.

If that is the team size which is most efficient from a social trust perspective, we return to the original question I posed above: how do we facilitate communication and collaboration between such small teams? What are the social and physical technologies we can use to transfer knowledge and expertise so that teams can build off of each other’s work? I don’t know the answers to these questions yet. Some people would suggest semantic knowledge management technologies to parse knowledge and distribute it automatically to the right people. Others would suggest quantitative approaches where measuring for the desired results will spur appropriate action. I tend towards humanistic approaches where trained generalists build the bridges between such teams, but I’m slightly biased, as that’s one of the roles towards which I strive.

I think these social technology design questions have the potential to create fantastic productivity benefits over the coming decades. We’re hitting the limits of what physical technology can do for us. We have more and more powerful computers that sit idle most of the time, as users stare at them trying to figure out the interface. No amount of technology seems to remove the need for meetings to synchronize the organization. And we’re at a fascinating time when the physical technology Coasean floor has been removed, opening up new experimental possibilities for social technologies to help solve these organizational questions. I plan to continue exploring this topic, and hope that you will join me.

P.S. To be specific, a few of us from Convergence08 are starting a regular get-together where we exchange ideas on the topic of how organizations think and work, and share articles and resources via email; in fact, this post was inspired by discussion from that list. Let me know if you’d be interested in joining us.

Situational vs. Dispositional Management

Saturday, December 6th, 2008

In my post about Philip Zimbardo’s work, I mentioned the concepts of situational vs. dispositional tendencies. One might see these as being obscure cognitive constructs. However, a recent situation made me realize that beliefs about these tendencies have direct consequences on management styles. So let’s dig into this some more by starting with a description of the two tendencies before getting into consequences for management.

Dispositionists believe that our tendencies and behaviors are fixed because of the kind of person that we fundamentally are. They search for explanations in a person’s character to explain their behavior. So when a criminal steals, they try to find the “flaw” in that person’s character that would make them perform such an act. This explanatory search can concentrate on nature (looking for genetic correlations) or nurture (looking at the childhood surroundings), but the search is for a character trait within the person that explains their actions.

Situationists like Zimbardo think that people’s behaviors, while influenced by their parents and upbringing, are dominated by the situation in which they act. The Stanford Prison Experiment is the most glaring example of this, where intelligent, well-adjusted college students turned into abusive guards and mentally unbalanced prisoners within three days of being placed in a prison environment. Because of these overwhelmingly powerful situational effects, if somebody performs “evil” actions, it is not necessarily an indication of a fundamental character flaw on their part; instead the situation must be examined to see how it contributed to the actions.

When comparing the two, the dispositional viewpoint is easier to understand, with a simple narrative to explain somebody’s actions (“Lucifer is a bad person, which is why he did bad things”). The tricky thing here is that saying something is something raises warning flags for me (see my review of Wilson’s Quantum Psychology for a longer take on the difficulties of “is”-ness). Attributing a characteristic as a fundamental component of something, as “is” implies, simplifies the narrative, but at the cost of making us more vulnerable to the true complexities of life (I’m reading Taleb’s The Black Swan right now, which expands upon this idea). The dispositional viewpoint also has dangerous consequences in how we raise kids: treating intelligence and talent as fundamental characteristics of children actually retards their development, as they don’t even try to improve themselves. I think that while we have dispositional tendencies, we need to recognize that the situation defines how we behave. But I’m going to stop with the discussion of the tendencies themselves (since others have done it better), and focus on the managerial consequences of these two ways of thinking about people.

In a dispositional workplace, life is relatively simple – you interview candidates, find the ones that have the right fundamental attributes (e.g. “Smart and Gets Things Done”), and then focus on removing obstacles to progress so that these people get things done, in accordance with their nature. It’s a nice, tidy view of the world. Unfortunately, I think it’s too simple, as it ignores the influence of the system on the attributes that people display – people that are tremendously successful and effective under one system might be completely ineffectual and unmotivated in another system where their strengths go unused, as sports teams find out each year in free agency.

Situational management is about designing the system to match people with the appropriate environment to get the desired organizational results. This system design can take a couple forms:

  • Designing a financial incentive system that rewards appropriate behavior, although Robert Austin cautions us as to the difficulties with this.
  • Designing a culture and vision that reinforce the desired employee characteristics towards a common goal, such that employees “believe in the mission they are trying to accomplish and know that they are contributing to its success”, as a former CEO of Southwest Airlines puts it.
  • Understanding employees’ strengths and weaknesses and giving them jobs that leverage their strengths and minimize their weaknesses. The canonical example of how not to do it is promoting a great software engineer into management, since the manager mindset is completely different than the engineer mindset. Good management in this scenario is about placing people in situations that maximize their chances for success while contributing to organizational goals.

What I like about this conception of management is that it makes management a design position. Management in a dispositional world is about hiring the right people and then getting out of their way – it’s passive and uninvolved. Management in a situational world is an iterative systems design problem with constraints – managers have to pick a vision, align employees with that vision, work towards the vision, re-evaluate progress, possibly pick a different vision that aligns better with the strengths of the employees, etc. It’s an ongoing, active process that requires engagement with all aspects of the business: understanding how employees work best and what the organization’s capabilities are, monitoring the environment outside the organization to understand how to align potential outputs with environmental demand, and using that understanding to better design how the company works. This is the type of manager I aspire to be someday.

Spreading Ideas and Framing

Friday, November 28th, 2008

Noah Brier wrote an interesting post yesterday about how certain ideas spread virally even when people disagree with them. His examples include Sarah Palin or Wired’s “Blogging is dead” article, where the blogosphere is buzzing about how bad an idea something is, but are still spreading the original idea far beyond its original audience because they can’t resist the urge to respond critically to it. I left a comment on the post, as it relates to some thoughts I’ve had over the years, and then realized that I would need a full blog post to unpack the one paragraph I wrote. So I’m writing one.

I had this vague intuition for years that arguing against an idea still supported the idea. I never was able to fully articulate this intuition until I read George Lakoff’s work on framing, which explained how arguing against a proposition still reinforces the proposition as stated. Lakoff’s classic example is “tax relief” – even arguing against “tax relief” reinforces certain connotations, including the idea that relief implies an affliction. So referencing a worldview, even if one is arguing against it, still reinforces that worldview.

So why do people do it? I have a feeling that we are wired to play finite games, where we are trying to win the game with the rules as stated, rather than infinite games, where part of the challenge is to step back and re-define the rules (James Carse’s book Finite and Infinite Games is obviously a big influence on me). So people get caught up in trying to win the argument within the context that they are given, rather than thinking about their overall picture and whether they are contributing to that. In other words, I agree with Noah’s point that “the best way to fight this kind of behavior is to not talk about it. But most people can’t help themselves.” We want to win the finite game, even when the game as framed will contribute to the other side’s success. To avoid that, we need to be thinking about playing the infinite game instead.

Getting to Yes is another framework for thinking about these sorts of issues, as it emphasizes figuring out your principal interests and focusing on those, rather than getting sucked into zero-sum positional bargaining about specific issues. If we go into a negotiation focused on winning every individual point, we may often fail to actually achieve our interests (much like Internet pundits arguing against certain issues, but only providing them more visibility and respectability in the process).

So when faced with a screed which makes us want to argue and tear down an opposing perspective point by point, we need to step back and figure out if we’re just contributing to their worldview by doing so. We need to concentrate on our overall vision and figure out whether what we’re doing is contributing to that end goal. We need to find opportunities to reframe the discussion to find points of commonality (e.g. both sides of the abortion issue agreeing that they’d like to see fewer unwanted pregnancies) so that everybody can feel like they are moving towards their goals.

Or, sometimes, we just need to accept that the other person is too locked into their viewpoint for us to be able to convince them. If their frames are so strong that all incoming information will be mapped to their frame, such that no facts or arguing will convince them, we need to recognize that and move on rather than continue to futilely waste our time. This is definitely one of the hardest skills to learn on the Internet.

Man. I really need to get back into writing regularly. There’s a whole trove of interesting territory around zero-sum vs. non-zero-sum thinking that I need to explore at some point. It’s fascinating stuff to me, and while it’s a frame that I’m probably over-applying right now, I think it has some explanatory power.


Convergence08

Monday, November 17th, 2008

Over the weekend, I attended the Convergence08 unconference, which focused on future technologies like biotech, nanotech, artificial intelligence, etc. I had to miss the Saturday morning sessions, as I had a chorus rehearsal for this week’s Mahler concerts, but I was there on Saturday afternoon and most of the day Sunday.

The first session I attended was on “Building a better search engine”, which I chose because I work at Google (although on nothing related to search). The attendees speculated about the next big jump in information finding technology, including:

  • Personalized agents that know you and just find the right information – I brought up privacy, and the general response was that privacy was overrated and should be ignored for the sake of this discussion as better results would trump privacy.
  • Semantic technologies with natural language understanding – somebody from Powerset was there pushing this idea, and somebody else recommended Semantifind. I’m extremely skeptical of such technologies, as I’ve spent most of the past ten years figuring out how to translate between different disciplines as a generalist, and I already understand language. I think it’s going to be a long time before computers can figure out the implicit frames that influence comprehension.
  • Social search – leverage our social networks to find more relevant results. If a trusted associate noted something, it’s probably more relevant than a random stranger noting the same thing. The issue I raised was the modelling of the social network – I would trust certain friends to make recommendations about stereo equipment but definitely not about clothes, and vice versa. And unless the software can gather enough data to model those subject- and pairwise-specific interactions, it’s not going to get the desired results.

As an aside, it was interesting to me that I’ve gone from being a technological positivist where technology will solve our problems, to being skeptical of most technical solutions, partially because I now think the hardest and most interesting problems are not solvable by technology per se, but instead require the design of new social technologies to coordinate people in new ways.

The next panel I attended was called something like AI and Sense Making. I’m fascinated by the question of how we make sense of the world, as my continuing obsession with stories makes clear. This was a session where people discussed the idea of sense making (Gary Klein’s work with firefighters was a big influence), how it could be embedded into technology, and possible business ideas built on such technology. The discussion was interesting, but because sense making is such a fuzzy cognitive concept, one attendee afterwards commented that it was difficult to separate sense making from general AI. Two recommendations for further reading I want to record for myself: Perspectives on Sensemaking, an article by Gary Klein, and Sensemaking in Organizations, a book by Karl Weick.

One useful construct from the session was the idea that we create a frame, view everything coming in through that frame, but keep track of how well incoming information corroborates the frame. Once the discrepancy with reality grows too large, we have to consider junking the existing frame and finding a new one that fits the data better, which I see as yet another form of Bruno Latour’s process.

Then it was time for the keynote speech by Paul Saffo, which I had been eagerly anticipating after having seen him speak several years ago. I was not disappointed – even though it covered many of the same topics as that previous talk, it was entertaining and informative. Tidbits that I wrote down:

  • “If you don’t change direction, you’ll end up where you are heading.” (in other words, inaction is a choice with consequences)
  • The future will still have a lot of dull parts (riffing on Hitchcock’s claim that “Drama is life with the dull parts cut out of it”). We look forward to all the excitement of the future but forget that amid all that excitement will still be dull parts.
  • “Change is never linear” (s-curve, s-curve)
  • “Cherish failure, especially someone else’s” – this was a theme from the other talk I attended, where he pointed out that a consequence of the s-curve is that the moment when everybody decides a technology is a failure that will never work is exactly when it might be about to take off. Which actually made me wonder about my dismissal of semantic technologies in the session earlier, as part of the reason I dismissed them is that they’ve been “just around the corner” for 20 years now, which, in Saffo’s world, means they may be just about to finally succeed.
  • “Look for indicators” – form a quick opinion, but then look for proof that you’re wrong, which he elaborates in his strong opinions, weakly held blog post.
  • “Use forecast techniques until reality gets too complex” – this was an interesting riff where he said that even our forecasting techniques continually get outmoded and need to be updated. He believes that we’re in such a phase transition now, where the old qualitative models are breaking down, but new quantitative models haven’t arrived yet. The four factors that he thinks will drive the next generation of forecasting models are Moore’s law, better forecasting algorithms, more and better data, and more of our lives being stored in digital form thanks to Facebook. My eyes lit up, as that’s a perfect explanation of why I joined a forecasting group at Google.
  • Three book recommendations: the novel Daemon, by Daniel Suarez, “A general theory of bureaucracy” by Elliot Jaques, and the “creative destruction” work of Joseph Schumpeter.

Sunday morning started with a panel on synthetic biology. There were a variety of panelists, with backgrounds in physics, software, and biology, but my favorite was Denise Caruso of the Hybrid Vigor institute, as she questioned the assumptions that the optimistic scientists were making. Her focus area has been on risk analysis, especially in new fields where the risks are difficult to quantify, but her point is that the benefits are equally difficult to quantify, so we shouldn’t be going in with the assumption that innovation is automatically good. Her belief is that we need to come up with better processes and methods for assessing risk with interdisciplinary input. You can see why a generalist like me would be a fan (I actually asked a question during Q&A supporting her viewpoint). I chatted with her a bit afterwards, and also attended the breakout session after lunch with her on innovation and risk, which brought together interesting conversations and different perspectives (the work that Etan Ayalon is doing at GlobalTech Research looks particularly interesting to me). I also liked Caruso’s concept of Bayesian regulation, where it’s not black and white, but involves conditional probabilities.
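Caruso didn’t spell out what “Bayesian regulation” would look like mechanically, but the flavor is conditional updating rather than a binary approve/ban decision. A toy sketch, with all probabilities invented purely for illustration:

```python
# Toy Bayesian update: belief that a new technology is harmful, given a
# warning signal from an early study. These numbers are made up, not real
# risk estimates.
prior_harmful = 0.05            # prior: P(harmful)
p_signal_given_harmful = 0.8    # P(signal | harmful)
p_signal_given_safe = 0.1       # P(signal | safe), the false-positive rate

# Total probability of seeing the signal, then Bayes' rule.
p_signal = (p_signal_given_harmful * prior_harmful
            + p_signal_given_safe * (1 - prior_harmful))
posterior = p_signal_given_harmful * prior_harmful / p_signal
print(f"P(harmful | signal) = {posterior:.3f}")
```

A regulator in this mode would tighten or loosen oversight as the posterior moves with each new piece of evidence, rather than treating innovation as presumptively good or presumptively dangerous.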

I missed the next session, as I ended up chatting with folks from the innovation-and-risk session for about half of it, and then had to prepare for my own session, “How do organizations think?” I threw it open as a discussion forum expanding on the ideas in my post on organizational cognition, and had a good discussion with the eight people who attended. We talked about different people’s experiences with different organizational structures and what might work to improve them. One key concept that emerged was that designing an organizational culture and structure has to start with the purpose for which the organization is built. Different structures will serve different purposes, and incongruities between structure and purpose will cause friction. People expressed interest in possibly having a follow-up session after the conference was over, but I didn’t get everybody’s contact information, so I hope they get in contact with me.

I ended up bailing out on the end of the conference during the longevity panel, as I had other plans for the evening, but all in all, it was a good experience – I met a couple new interesting people, had some good discussions, and found new food for thought, which were pretty much my goals for the weekend. But now it’s time to get back to my normal life.


Time Perspectives of Philip Zimbardo

Wednesday, November 12th, 2008

One of the great advantages of working at Google is that famous people want to come visit. That’s how I got to see Ferran Adria a few weeks ago. Yesterday, it was John Hodgman and Jonathan Coulton. Later this week is Chip Kidd. You get the point. But what’s even nicer is that they record all the talks and post them to their own channel on YouTube, so even if you don’t work at Google, you can see the talks. Or, in my case, if you are too lazy to get over to the room at the right time, you can catch up later.

Last weekend, I was catching up on a couple such talks that I had meant to see but missed for one reason or another (including Nancy Pelosi’s visit), and the one by Philip Zimbardo caught my eye. Zimbardo is infamous for the Stanford Prison Experiment, where he simulated a prison with Stanford students and was aghast at how quickly the prison became real to all involved. Within a day, the guards were finding ways to humiliate the prisoners, and previously healthy “prisoners” were having mental breakdowns (the images from the experiment were eerily echoed by Abu Ghraib). I’m currently reading Zimbardo’s book The Lucifer Effect, where he discusses the idea that evil is not dispositional but situational; in other words, people who do bad things are not inherently evil, but instead all of us have the potential to do evil in the right situation (as his Stanford students demonstrated in their “prison”).

His latest book is called The Time Paradox, about the psychology of time, and he came by Google to give a talk on the subject. He started the talk with a discussion of the marshmallow experiment. The subjects were four-year-olds who were given a choice – they were given a marshmallow, but if they could wait a few minutes before eating it, they would be given two marshmallows for their patience. Some of them ate the marshmallow, some of them managed to hold out and wait for the greater reward. Here’s the astounding result: the experimenters returned to their subjects fourteen years later and found that the ones that had the self-control to wait had better grades and better SAT scores by a statistically significant margin. In other words, one brief experiment on a four-year-old was highly correlated with their future performance in life. So what’s going on?

Zimbardo identifies the two types of children as having different time perspectives: the ones that ate the marshmallow have a present time perspective, focusing on the immediate gratification of the marshmallow. The ones who waited have a future time perspective, able to trade off their present gratification for future results. And our culture rewards those who can make those future tradeoffs (e.g. study now rather than play to enhance one’s college test scores).

Zimbardo later extended the classifications of time perspectives to include past-positive (focusing on the good things that have happened in the past), past-negative (focusing on the bad things), and present-fatalistic (feeling unable to influence the events impacting one’s life). You can find out your time perspective tendencies by taking the online test. Zimbardo has done survey work to show that the results of the marshmallow experiment are not isolated – the time perspective of a person is strongly correlated with the results that person achieves in many areas of life.

What I like about this is that it feels right to me as an explanatory mechanism. There are certain areas where I am heavily future-weighted, such as financial planning where I will forgo something I want today for the sake of something I am saving to buy next year. There are other areas where I am past-weighted, such as in my social identity where some part of me still clings to my self-identity from when I was a teenager. There are still other areas where I am present-weighted, with a focus on whatever feels right at the moment (which contributes to my inability to maintain a regular exercise program). I was also able to immediately start classifying other people around me into the various categories.

The sign to me of a powerful classification system is when it breaks open a problem where everything seemed ambiguous before. Back when I was a programmer, I used to love that feeling when I finally hit upon the right way to represent the data and everything suddenly became easy. And I’d been feeling muddled about certain people recently where I just didn’t understand why they behaved the way they did. But when Zimbardo gave me this new template for thinking about how people think, their behaviors immediately fell into place.

The other nice part about Zimbardo’s use of time perspectives is that they are not fixed. A future perspective can be taught. So we could test four-year-olds, identify the ones that have a present perspective and give them the tools to develop a future perspective, thus improving their ability to adapt to adult life where future tradeoffs are always necessary. Zimbardo trashes the idea that success is genetically determined, instead focusing on the idea that the tools for success can be taught with the right system – not surprising from the man who identified that evil is present in all of us in the wrong system.

I recommend watching Zimbardo’s talk if this sounds interesting, as he explains it better and more rigorously than I do. But I wanted to share this idea that had immediate impact on me. I’m curious if others have the same feeling – maybe this is just obvious to others, or maybe it’s a helpful template for thinking about the world. What do you think?

Organizational Cognition

Friday, November 7th, 2008

Over the past seven weeks (good golly, where does the time go?) at Google, I’ve noticed a funny habit of mine. Whenever I overhear a conversation involving something that is related to my team’s work, I drop whatever I’m working on and wander over to listen in. Now, one might guess this is due to my slacker ways and desire to gossip as much as possible, and one would not be entirely incorrect. However, I was listening to one such conversation earlier this week as two of my groupmates were trying to suss out exactly how an analysis should work, and realized that there’s more going on here.

Such conversations are where an organization’s thought processes are made visible (audible?). In other words, if an organization could be perceived as a group mind, then those conversations are the equivalent of watching neurons fire across their synapses. It’s the only way to get insight into how the group mind operates. This may sound a little wacky, but I’m inspired by Edwin Hutchins’s book Cognition in the Wild, where he extracts observations about cognition by watching how a group mind in the form of a navigation team operated. In the best case, meetings can be a reflection of this organizational cognition, an aspect which Peter mentioned in the comments of my meetings post.

So by listening in on such conversations, one can map out how the organization operates. Which assumptions are taken for granted? Who are the stakeholders mentioned regularly as needing to be consulted? How are decisions made in such conversations – is it by consensus, persuasion, or hierarchy? These sorts of observations may not be directly relevant to one’s job, but it’s invaluable to an amateur anthropologist like myself in understanding the different forces that are at work within the organization.

Even better, conversations often arise when new organizational territory is being mapped out, either in the form of current assumptions being questioned, or in the development of a response to a new stimulus. When everything is running smoothly and there’s a defined process for how to do things, there is no need for conversation as everybody knows how to do their job. But when one reaches the limits of one’s understanding, then one has to consult another person, and such consultations are where somebody like me can see how the organization, in the form of its constituents, learns. To use the framework of Latour, conversations are where we can see the organizational Collective perform the Consultation process, where it grapples with an outside influence (what Latour calls “Perplexity”). Seeing the Collective go through a round of growing and learning is exactly what I mean by saying that conversations are a window into the cognitive process of the organization.

This is also exactly the sort of fuzzy stuff that I once would have scorned as a hard scientist and logic-driven engineer. These sorts of ephemeral observations about an organization are difficult to quantify and would not have even been on my radar ten years ago. And now they are the sorts of things that capture the value I bring to an organization, as my ability to attend meetings and listen to passing conversations and extract this sort of organizational knowledge is a testament to my ever-improving observational skills. I liked Rands’s description of it as the culture chart, as opposed to the formal organizational chart. The culture chart isn’t written down anywhere and is supremely fuzzy – it can only be intuited by reading between the lines of conversations.

So that’s my rationalization for why listening in on conversations when starting a new job is important. It’s the best way to understand how an organization operates: which assumptions are stated (and more importantly, not stated), which stakeholders matter and which can be ignored, etc. And, in case you’re wondering, all this listening to conversations is why I have to stay at work until 9pm a couple nights a week to catch up on my individual contributions. Once I am more efficient at my assigned tasks and up to speed on how Google works as an organization, I hope to get my hours down to a more reasonable number.

Switching Costs

Wednesday, October 22nd, 2008

Earlier this week I switched my RSS reader from Bloglines to Google Reader.

I’d been meaning to check out Google Reader for months, if not years, but had never gotten around to it, as Bloglines was serving me well enough for what I needed, and I’d gotten used to its quirks.

But over the past couple weeks, Bloglines started failing at its primary purpose, delivering RSS feeds on demand, as it stopped properly updating feeds. It didn’t bother me too much at first as I was busy enough that reading blogs was a luxury, but it was starting to get annoying. And then somebody twittered about a TechCrunch article describing how Bloglines users were fleeing to Google Reader, which provided instructions on making the move. Ten seconds later, I was moved to Google Reader, and now I probably won’t go back.

Let’s parse out what happened here, as I think it’s instructive.

  1. A few years ago, I started reading enough blogs that updated infrequently that checking them one by one was becoming ridiculous. So I started looking for an RSS reader, and chose Bloglines as it met my requirements well enough at the time (Barry Schwartz, of The Paradox of Choice, would call this “satisficing” – speaking of which, I need to review that book at some point). In particular, it was web-based so that I could read blogs from work or home without duplication, which was the key differentiator from Thunderbird, the other major contender.
  2. I stuck with the choice for several years, even as bits of it started to annoy me, as the perceived switching costs were too high. Given that there are no lock-in effects in this software (no data that I couldn’t export), the switching costs were purely cognitive. In other words, the cognitive effort of switching was the major lock-in for this product. Also, the benefits of switching were minimal – Bloglines was meeting my needs, so it was unclear how other software would be better in that core functionality.
  3. Once Bloglines started to fail in its primary purpose (making it easy for me to see the latest in my desired feeds), the benefit of switching became relatively greater (other RSS readers were succeeding where Bloglines was failing).
  4. Once I read the TechCrunch article, I had “social proof”, the term Cialdini uses to label our tendency to want to see others doing something before doing it ourselves. Knowing that there were dozens of other people making the same switch helped convince me to make the jump. That was the critical tipping point.
  5. The actual switch took about ten seconds (export from Bloglines, import into Google Reader). To reiterate, the effort of switching had nothing to do with the actual work it would take to switch – it was the cognitive effort of having to re-open a decision that I had already made.
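The ten-second switch in step 5 works because both readers speak a common subscription format, OPML, so the exported file carries every feed URL intact. As a minimal sketch (the sample file and feed URL below are invented for illustration), here is how a subscription list like that can be read with Python’s standard library:

```python
import xml.etree.ElementTree as ET

# A tiny sample of the OPML subscription format that readers like
# Bloglines export and Google Reader import (URLs are made up).
sample_opml = """<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head><title>My subscriptions</title></head>
  <body>
    <outline title="Tech">
      <outline title="Example Blog" type="rss"
               xmlUrl="http://example.com/feed.xml"
               htmlUrl="http://example.com/"/>
    </outline>
  </body>
</opml>"""

def feed_urls(opml_text):
    """Return every subscribed feed URL in an OPML document."""
    root = ET.fromstring(opml_text)
    # Feeds are <outline> elements carrying an xmlUrl attribute;
    # folders are <outline> elements without one, so filter them out.
    return [node.attrib["xmlUrl"]
            for node in root.iter("outline")
            if "xmlUrl" in node.attrib]

print(feed_urls(sample_opml))  # → ['http://example.com/feed.xml']
```

Because the data lives in a plain, portable file like this rather than inside the vendor’s system, the only real switching cost left is the cognitive one described above.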

What’s my point here? In the Web world, switching is often fairly painless, as most vendors provide a way to easily get one’s data out of their system (and if they don’t, that’s a bad sign). Companies are generally relying on us to pick a system and get comfortable with it, so that habit and the perceived cognitive effort of making a change is a far greater impediment to switching than other possible lock-in effects. In such a situation, the company’s job is to never give users a reason to contemplate the switch; in other words, if the company continues to fulfill its value proposition to the user, users will stick around, but as soon as it lapses, users may leave in droves (as appears to be happening to Bloglines).

Another way of thinking about it is that the game between companies and users is all played in people’s minds. While economists may believe that people are rationally maximizing their potential economic gain, most of us are far less rational in our decision-making. We use brand names over equivalent generics because of advertising or because we “trust” the brand name more. We stick with products or services that are clearly inferior to newer ones because it’s too much effort to re-open the decision we originally made. Companies that understand this game will be telling stories to convince people to use their products or services, rather than trying to convince them with data. For instance, the book Positioning is all about creating new primary needs in the minds of consumers to give them the necessary impetus to switch.

So focus on the value proposition your company offers to its customers. If you can make sure that the value of your product keeps on increasing, you can benefit from the perceived effort of switching and keep customers even in situations where they might rationally choose another product or service. Ideally, of course, your product is the best in class, but every little edge counts, right?

Now I just have to get over the cognitive effort of switching from Windows to Mac…

P.S. I have been at Google for exactly one month as of today. Crazy how the time flies!

Self Haxx0ring

Thursday, September 25th, 2008

As noted in my last post, this is my first week as an employee of Google. I’m trying to get up to speed on the types of things that I will be doing, which meant spending most of today learning about the ad system, revenue forecasting models, how the Google backend works, etc. Unsurprisingly, there’s a tremendous amount of Google-specific knowledge developed over the past ten years on these topics. Somewhat surprisingly for an engineering organization, the internal documentation is fairly good, and is well-linked to other relevant documentation.

This led to an interesting situation, where my brain started feeling very odd with a sort of buzzing sensation, and I started having trouble thinking clearly. I eventually figured out where I’d felt that way before – it had been during Mike Murray’s Hacking the Mind talk, when he had pulled a buffer overflow attack on the audience by opening loops (by starting stories) and never closing them (by finishing the stories). What had happened is that I had started exploring one topic, seen a link to another topic, opened a new tab to look into that topic, seen a link to a third topic, opened a tab for that, etc. By the time I noticed my brain feeling odd, I was up to ten open tabs on various topics. Yup, I had committed a buffer overflow attack on my own mind to the point where I’d impaired my own functioning.

Once I figured that out, though, it was easy to figure out what to do – I had to close the loops in my mind and offload the storage from my brain. So I started a to-do list with the different topics of interest to be explored later, and the links that I had open in each tab for that topic. I also started a list with links to the tabs that had particularly useful information for future reference. Once I created those lists, I started closing the tabs and thus convinced my brain that it could free up those memory pointers. With the swap space that had been devoted to those open loops now freed, I then had the brainpower to actually think again.
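In programming terms, the fix amounts to replacing unbounded depth-first exploration (every interesting link opens a new “frame” on the stack) with an explicit work queue, so only one topic is open at a time. A minimal sketch of that pattern, with hypothetical topic names and doc links standing in for the real ones:

```python
from collections import deque

# Hypothetical topics captured on the to-do list while reading docs;
# each entry pairs a topic with the links that were open for it.
todo = deque([
    ("ad system", ["doc/ads-overview"]),
    ("revenue forecasting", ["doc/forecast-model"]),
    ("backend", ["doc/serving-stack"]),
])

def study(topic, links):
    # Stand-in for actually reading the docs; here we just record it.
    return f"read {len(links)} doc(s) on {topic}"

# Pop one topic at a time and finish it before starting the next,
# instead of holding every half-read topic "open" in working memory.
log = []
while todo:
    topic, links = todo.popleft()
    log.append(study(topic, links))

print(log)
```

The queue is the to-do list: new links get appended for later rather than opened immediately, which is exactly what closing the tabs accomplished.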

It was an interesting process to watch, but it also highlights a danger of my typical way of learning. I like learning in a top-down fashion, where I learn the big picture of a system first, which gives me a framework on which to attach my understanding of pieces of the system. At the previous startups I had worked at, I could read all of the documentation written by the startup in less than a day, and get a big-picture understanding of how everything fit together pretty quickly. Trying to do that at Google was like trying to drink from the firehose and demonstrated that there are some systems that are just too big for me to hold completely in my brain at once.

So I’ll have to be more disciplined about organizing how I’m learning things this week. As noted, I’ve opened up a couple files to help me organize what I’ve learned so far and what I still need to learn. I’ll also need a few more files to offload information once I’ve learned it, at least until I see it enough times that it can be transferred to long-term embedded storage, which has more capacity.

Anyway, I was entertained by this incident of accidentally malicious self-hacking, so I figured I would share.

A Whole New Mind, by Daniel H. Pink

Monday, July 7th, 2008

Amazon link
Official book site

My friend Wes recommended this book to me after my social capitalist post where I claimed that we were moving from a world defined by technology to one defined by social connections. Daniel Pink’s book describes a similar transition from an emphasis on left-brain thinking towards right-brain thinking.

Pink starts the book by describing the characteristics of L-directed thinking and R-directed thinking (his descriptions of left-brain and right-brain thinking). These are probably familiar to most readers – the left brain controls language, is detail-oriented, and thinks sequentially and deductively, while the right brain is better at reading emotions and context (the right brain does facial recognition), thinking inductively by synthesizing “isolated elements together to perceive things as a whole” and seeing the big picture. Pink emphasizes that neither brain half should be dominant – that they are designed to complement each other and create one functioning whole (hence “whole new mind” as the title).

Pink then explores the forces changing the world from an emphasis on L-directed thinking to R-directed thinking. The 20th century was dominated by technology, from the assembly line and machine guns at the beginning of the century to the atom bomb and space flight in the middle, to electronics and the Internet at the end. Pink observes that three forces are combining to de-emphasize technology in the developed world:

  • Abundance – when material wants are satisfied, then the differentiating factor is no longer functionality, but design e.g. the market dominance of the iPod over other MP3 players with more functionality.
  • Asia – echoing Friedman’s The World is Flat, Pink believes that anything that can be outsourced will be. Things that can’t be outsourced include high-touch jobs that involve direct human interaction; e.g. nursing is one of the fastest-growing professions in America.
  • Automation – any job that can be reduced to a set of instructions will be automated, not only on the factory floor, but also in offices. Jobs that require human intuition or empathy are the only ones likely to be safe from automation.

The rest of the book is a description of the R-directed skills that Pink thinks will be important in the decades ahead: Design, Story, Symphony, Empathy, Play, Meaning. The ones that stood out to me were Story and Symphony, which isn’t surprising given my interest in stories and in the value of synthesis. I especially liked the description of Symphony as seeing relationships: “People who hope to thrive in the Conceptual Age must understand the connections between diverse, and seemingly separate, disciplines.” That’s a nice one-sentence summary of what I value in being a generalist.

At the end of each chapter, Pink also provides a set of activities to exercise that particular skill, which include seeing that skill performed well (design museums, story-telling festivals, etc.), trying it oneself (drawing, finding ways to learn more and empathize with coworkers), or reading books on the subject. I’ll have to try some of them myself.

Overall, it’s a decent book that’s a quick read. Nothing particularly new to me, but it was nice in providing a reference work to explain some of the ideas that I believe. I’d recommend it as a library book if you’re interested in moving your life or career in the direction of the right brain.
