You can look at my home page for more information, but the short answer is that I'm a dilettante who likes thinking about a variety of subjects. I like to think of myself as a systems-level thinker, more concerned with the big picture than with the details. Current interests include politics, community formation, and social interface design. Plus books, of course.
Balancing control and autonomy
I previously linked to this New Yorker article on how the army is self-organizing to handle the challenges of Iraq. After putting up the link, I had a conversation with a coworker that evoked some more thoughts. One was the observation that the army is composed of units, each of which can be run autonomously, from squad to company to battalion. The commanders of each unit are given an overall mission definition, and are left to figure out how to use their unit to accomplish their goals. I wonder if a company could be structured that way, such that any unit would be functionally capable of operating independently. I think this is part of what "matrix management" is attempting to do, but it never seems to work.
Part of the issue is the unwillingness of management to give up control to their subordinates. Even when they do give up control, they often restrict behavior with processes and SOPs to such an extent that the subordinates have no freedom of action. There are some good reasons for that - the processes are often put in place to prevent bad things from happening to the company. However, by not giving the employee any freedom of action, the company is also preventing its employees from contributing in new and unforeseen ways. In other words, it's a balance between "doing no harm" to the company, and the risk/reward of giving employees control.
The right balance is hard to find. I think in an organization composed mostly of inexperienced people, the first choice might be better; McDonald's and the franchise mentality of having a three-ring binder of regulations exemplify this. However, in an organization composed of talented, independent people, such restrictions are insulting (not that I have an opinion). Of course, the pendulum can swing too far and give the employees too much independence; Malcolm Gladwell's essay on Enron describes the consequences of that. As usual, it's a matter of context; each company will have a different blend of competencies, and that blend should shape management's approach to finding this balance. There's no such thing as the One True Management Style. It's always contingent. Managers, not MBAs.
posted at: 00:42 by Eric Nehrlich | path: /rants/management | permanent link to this entry | Comment on livejournal
Followup to Trust, but Verify
I wanted to pursue a couple of things I mentioned in my last post. In my P.S. to that post, I speculated that customer enthusiasm might be a sufficient factor in making decisions. But I was thinking about it this morning and realized that there are some great counterexamples. Apple has a nearly cult-like following in terms of customer satisfaction and yet has never broken through to the mass market. They've done okay, of course, but never as more than a fringe industry player. BMW is another good example of a company that elicits great customer satisfaction while serving a niche market. I'm not sure what it means, but it does poke holes in my theory that a great story and customer satisfaction are enough.
For many things, quantitatively and analytically maximizing customer value and throughput is the way to go. Very few of us have brand preferences for things like toothpaste. The different brands are fungible. So the companies can't rely on building a brand and eliciting customer satisfaction. It's a numbers game of minimizing product cost and maximizing customer selection. And that _can_ be handled analytically by the tools that Bonabeau describes.
Another great example is Amazon. Every now and then, when you go to the Amazon web page, you'll get an alternative user interface, where they've moved some things around. You go back fifteen minutes later and it's back to normal. What's up with that? Apparently Amazon occasionally has new UI ideas that it wants to try. It changes its front page for a while, 10,000 people try it, and once they've collected enough data, they switch back. That's a large enough sample to observe statistically significant effects. I read an article at one point that described how Amazon tested the position of the "one-click ordering" button in various places, and determined that the place where it eventually ended up increased the likelihood of ordering by 1 or 2%. That seems like a minor change, but at their volume of sales, it translates into an enormous amount of money.
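To give a feel for why 10,000 visitors per bucket is enough, here's a minimal sketch of the standard two-proportion z-test behind this kind of A/B comparison. The conversion numbers below are invented for illustration; they are not Amazon's actual data, and Amazon's real analysis pipeline is not described in the article.

```python
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    # standard error of the difference under the pooled rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented numbers: a 10% baseline conversion rate vs. a 2-point lift,
# with 10,000 visitors in each bucket.
z, p = two_proportion_z(1000, 10000, 1200, 10000)
print(z, p)  # with these made-up numbers, the lift is easily significant
```

The point of the sketch is just that at this sample size, even a lift of a couple of percentage points stands out clearly from noise, which is why the quick switch-back trick works.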
That's sort of what I mean by "Trust, but Verify". Their UI designers had some thoughts on how to improve the conversion rate. They mocked them up, tested them, got real data, and were able to make an informed decision. Bring down the iteration time for getting results, and you improve performance and reduce the penalty of making poor decisions. I'll talk about this more when I finish reading Experimentation Matters.
posted at: 00:07 by Eric Nehrlich | path: /rants/management | permanent link to this entry | Comment on livejournal
Trust, but Verify
After hearing me talk about how much I enjoyed Gary Klein's Sources of Power, a friend of mine forwarded me this Harvard Business Review article, titled Don't Trust Your Gut, by Eric Bonabeau. Bonabeau takes on the recent books promoting the use of intuition in business, calling out Gary Klein specifically, and attempts to make the case that trusting intuition is dangerous in situations involving complex interdependencies, because "the required computations outstrip the mind's processing capabilities". He recommends using "a computational decision-support tool".
If I were more cynical, I would speculate that Bonabeau works for a company making such tools, but I'll leave the ad hominem attacks aside for now, instead attacking his ideas. For one thing, the very idea that complex interdependencies are more likely to be correctly tracked in software than in people's brains is laughable to me. Social software, the best attempt of many smart people to capture the complex interplay of social relationships among people, is so inadequate that it can be described as autistic, despite trying to do a job that each of us can handle unconsciously. To expect that decision-support tools can capture even more complex situations in a realistic and useful manner is idealistic at best.
I think that the author significantly misunderstands the point of research such as Klein's. Of course you should use algorithms in cases where the inputs and outputs of a process can be well described by numbers. But in the real world, the numbers are often so distorted as to be unrecognizable, if not completely made up (classification schemes are a good example). A good executive will know how to drill down and get to what's really going on, where a program will take garbage in, and produce garbage out.
I would also cynically observe that most of the time, the quantitative decision making that happens is there for one, and only one, reason: to justify the decision that was already made by the gut reaction of the person in charge. The person in charge doesn't want to be left responsible, so they force all of their subordinates to generate numbers to support the decision that's already been made, and the subordinates are told when the numbers are "wrong" and need to be "tweaked".
Instead of the model that Bonabeau proposes (spend lots of time up front with decision-support tools before making a decision), I would propose an alternative vision for management, courtesy of Ronald Reagan: Trust, but Verify. In other words, trust your gut and spend your time figuring out how you can verify or invalidate your gut decision as quickly as possible. This is somewhat influenced by the fact that I'm currently reading the book Experimentation Matters, which recommends front-loading any project with experiments so that you can change course sooner if you're going down the wrong path.
I believe that this approach makes more sense for a variety of reasons. From my own personal experience, when I was Rush Chair at TEP, working in a complex, time-crunched environment running the lives of 22 brothers and a multitude of freshmen, I quickly found out that it was far better to quickly make the wrong decision than to dither and eventually make the right one. If I made the wrong decision, people would go off, try it, figure out it didn't work, and try the other choice. That would often happen in less time than I would have spent dithering over the original decision. And even after all that dithering, I would often have made the wrong choice anyway, so I had just delayed the inevitable. Making decisions quickly, even the wrong ones, often leads to getting on the right path sooner.
Admittedly, this only works if you can get immediate feedback on a decision. But the tools to do that are growing more powerful every day, as the book describes. I think simulations are at least as likely to be accurate as most of the software that Bonabeau describes in his article, and simulations let you see the results of your decisions in a swift and controlled fashion. Go with your gut, see what happens, and revise. Create a tight feedback loop, and run through several cycles, evaluating the results each time (as a side note, this is a process that Gary Klein describes expert decision-makers going through in the field). I'm a big fan of rapid prototyping for engineering development (see the book Serious Play for a description) and really don't see why the same principles couldn't and shouldn't be applied to project management. Just as the classic "waterfall methodology" has been outmoded by strategies such as "extreme programming", I expect that the typical "Gate-Phase Process" will eventually be outmoded by an "Extreme Management" strategy. "Extreme Management", like extreme programming, would be a test-based methodology; you could make decisions from the gut quickly, but would immediately be looking for ways to verify those decisions as quickly and cheaply as possible.
I think that if as much time and effort were spent on getting the iteration and feedback time for simulations down as is spent on the "decision-support tools" that Bonabeau recommends, the world of "Extreme Management" would not be far off. Maybe I should write a book!
P.S. Speaking of books, I wanted to get one more angle on this subject that's related to my other book idea, stories. Imagine that there are two managers pitching a new product to an executive committee. One has a great story, explaining exactly what niche her new product will fill, and lots of specific details of how it will change the lives of people who buy it. She has some numbers to support her idea, but her emphasis is on the testimonials she's gotten from customers who adore the idea of her product. The other manager puts up chart after chart of numbers demonstrating that there is an underserved market niche of some sort, but has talked to no customers and generates no excitement. Which one would be more convincing? I would say the one with the good story. A good story can be verified, and the numbers can be run. A product has to excite people; it can't just be a soulless attempt to describe how Rational Evaluative Maximizing Models will benefit. But I'm totally biased, of course.
posted at: 23:37 by Eric Nehrlich | path: /rants/management | permanent link to this entry | Comment on livejournal
A friend of mine at Signature shared the theory of Launch Chicken with me. Say you're on a project with a tight schedule and several different areas contributing to its success, say a product launch. And say you know that the area you are responsible for is not going to make the launch. You're supposed to hit the abort button and let the project manager know immediately. But you know that another area is even further behind than you. So you hold out, hoping that they'll abort first, taking the blame for delaying the launch and giving you the time you need to finish your area. Now it becomes a matter of will, like the original game of Chicken, where two kids drive cars at each other. Who will chicken out first? Of course, what happens if nobody chickens out? Bad things, like the collision that ends the original game.
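The incentive structure of Launch Chicken can be sketched as the standard game of Chicken. The payoff numbers below are invented to illustrate the shape of the game, not derived from any real project:

```python
# Each team chooses to "abort" (confess the slip) or "hold" (stay quiet).
# Invented payoffs (row = you, col = the other team): aborting first means
# taking the blame (-2), holding while they abort is the best outcome (+1),
# and if nobody aborts, the launch collides with reality (-10 for everyone).
payoffs = {
    ("abort", "abort"): (-1, -1),
    ("abort", "hold"):  (-2,  1),
    ("hold",  "abort"): ( 1, -2),
    ("hold",  "hold"):  (-10, -10),
}

def best_response(their_move):
    """Your highest-payoff move, given what the other team does."""
    return max(["abort", "hold"],
               key=lambda mine: payoffs[(mine, their_move)][0])

# If you believe the other team will abort, holding out pays better,
# which is exactly why everyone waits -- and why collisions happen.
print(best_response("abort"))  # "hold"
print(best_response("hold"))   # "abort"
```

The sketch shows the nasty property of Chicken: your best move is always the opposite of the other team's, so each side has an incentive to wait the other out, and the mutually worst outcome is reached precisely when both sides reason the same way.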
How can such catastrophic distortions of information be avoided? My coworker and I were kicking the question around last week, wondering how a project manager would be able to make the right decision based on the carefully massaged data that they are fed at project review meetings. He asked the question, "In a great organization, do you think that the compression of information being fed to the decision makers is less biased/contrived, or are the decision-makers just superior at sifting out the truth from the pre-digested information they get?"
I think it's probably a combination of both (I'm always distrustful of bi-valued questions). I would suspect that good leaders are able to detect soft spots in people's presentations, where the numbers don't reflect reality, and go check out the raw data to find out what's "really" going on. By doing so, not only will they get a more accurate picture, but they'll also encourage people to present a more "honest" picture at the next presentation. It's a virtuous circle of trust and accuracy.
It also ties into my ideas of what an effective information carnivore looks like: somebody who understands that they are higher up the information chain and are getting only pre-digested summaries of information, but who knows how to open up those summaries to get a more complete picture. They can't do that all the time, because they are very busy and need to leverage the efficiency of the summarized form, but when problems arise, they understand that the summaries are inherently incomplete. Good information carnivores make good managers.
posted at: 20:11 by Eric Nehrlich | path: /rants/management | permanent link to this entry | Comment on livejournal
Management by conversation
I've been going in circles on my current assignment at work for close to a week. Somebody else was assigned to the project today, and we sat down and I started talking through what I thought needed to get done. And it all just flowed right out. It always kills me when that happens; I sit and "think" and get nothing done, but when I talk to somebody, it all comes together. Which reminds me of two stories.
One is of an IT support organization at a college someplace (I don't remember any details). The room where all of the techs sat had a teddy bear at the front counter. The rule was that before you could talk to any of the techs, you first had to explain the problem you were having with your computer to the teddy bear. About half the time, the person would start talking through the problem, and say "Oh, I forgot to do..." and walk out. It's a pretty clever system.
The other is of my days at TEP. My junior year at TEP, I was Rush Chair. Rush was a big deal at MIT at that time, because freshmen chose where they wanted to live in their first four days at MIT. And once they moved into a place with similar-minded people, they tended to get comfortable and never move again. So if you didn't make an impression in those first four days, you didn't get freshmen. No freshmen, no pledges. No pledges, fewer brothers, bigger housebills, eventual financial devastation. Anyway.
So Rush was a big deal. At TEP, we'd worked out a system where the Rush Chair was generally a junior, so that the Rush Chair Emeritus was available as a senior to help them survive the experience. One year after I graduated, I decided to go back and hang out for Rush, and participate in the Crock Opera. Except my leisurely vacation was not to be, because the Rush Chair Emeritus had decided to transfer, leaving the current Rush Chair without a sage to appeal to. So I filled that role.
And I learned something about "leadership". My role basically consisted of hanging out at TEP. Underclassmen would run up to me and say "Perlick, what should I be doing?!" I'd say, "Well, what do you think you should be doing?" "I think I should be doing this!" "Okay, then, go do that." "Great! Thanks!" I probably offered occasional refinements, but mostly my role was to be an external authority to validate their perceptions of what needed to be done.
And that's what happened today when talking to my co-worker. I didn't really need him to straighten things out. I would have done almost as well talking to the teddy bear. It was just the process of laying things out in words that helped me to clarify priorities.
This is part of why I started writing this blog - to take ideas that are running in circles in my head, and see if the very process of trying to write them up makes them clearer. Sometimes it does. Other times, it's at least a good forum for venting.
I'd relate this observation to some larger point about managers needing to learn that their job isn't to control, but to facilitate and empower their employees, but it's late, and it's obvious, anyway.
posted at: 22:17 by Eric Nehrlich | path: /rants/management | permanent link to this entry | Comment on livejournal
Autistic management software
I was talking to a friend today who's turning into a manager, and using Microsoft Project to lay out schedules and the like. And I was horrified, given my aversion to such things. But he pointed out that it was handy for him to be able to point at his schedule and say he couldn't do something because something else was a higher priority. Because the priorities had been placed into a concrete form that he could point to, his superiors accepted it without question. I commented that I couldn't get past the poor quality of the abstraction that Microsoft Project imposes on project management, but that when dealing with people who can't distinguish the abstraction from reality, it made sense.
Then a neuron lit up in my brain. The poor sense of social interface design that leads programmers to write oversimplified project management software is the same sense that leads to autistic social software. It's trying to find the easiest instantiation of complex social cues into software. Simplification is inevitable - you have to simplify in order to function under the onslaught of information present in the world. But choosing the right abstraction models, ones that emphasize the relevant quantities, is essential. And maybe the ones chosen by Microsoft Project are right for some people. They don't make sense to me, though. I like Google's model better.
In a Good Morning Silicon Valley item from last week that a friend forwarded to me, there was an interesting excerpt from Playboy's interview with Larry Page, one of the founders of Google. He described Google's project tracking system:
We also have systems that automate and track the management of all our projects. This allows an enormous amount of freedom. One time an engineer told me, "I'm not working on what you think I'm working on." He explained that his work had evolved into something extremely relevant and important, but there was no place to track it in our system. I said, "Why don't you enter it into the system?" "I can do that?" he said. I'm like, "Yeah, who else is going to do it?" We have a system that engineers can update to put themselves on another project. Someone else might say, "Whoa, wait a second. I don't want people to be able to do that." Well, it turns out you have two choices: You can try to control people, or you can try to have a system that represents reality. I find that knowing what's really happening is more important than trying to control people.

You can see why my friend thought I'd like that quote in light of my rant about timesheets last week. He and I spent a few minutes chatting about whether Google can possibly be as non-evil as it sounds. Hard to say, really.
Management by numbers
This week at work we were asked to start using timesheet software to track the hours that we work on various projects. I hate timesheet software. Hate with a fiery passion. But when a coworker asked me why, I had to confess I really didn't know. He pointed out that it only takes a couple minutes each week to fill out. That it's useful for budget allocation and making sure that projects are being appropriately resourced. That it can provide an indication to management of when people are spending time on things they shouldn't be. So where's the downside? Why does it upset me so?
After some reflection, I think I have an answer. And it gets back to a common theme I've been on with regard to being treated as a person. The thing that bothers me about the timesheet method of management is that it treats me as a resource, not a person. The timesheet reduces me to a number to be crunched into budget allocations and project management. And I find that fundamentally degrading. One of the commonalities among the management structures I find interesting is that they are people-based. In fact, the Gore management structure was set up specifically because Gore the founder realized that he didn't know everybody at his plant any more.
I understand that larger organizations need some sort of process and bureaucracy to be able to function. But I believe, somewhere in my little idealistic brain, that the process can be used to serve the people, rather than having people be crunched down to fit the process. I hate going through the exercise of trying to reduce the work that I'm doing down to the over-broad categories that other people have made up. Eventually, of course, I do what everybody else does, and just make up numbers. Which would be fine if the numbers were then treated as the fictions that they are. But as soon as such numbers are entered into a timesheet, they take on a life of their own: they are treated as the reality that defines the budget process, and I am but a flimsy shadow of the numbers. And I think that is what I don't like.
If you want to know what I'm doing, take the two minutes and ask me. Or I can write up a paragraph summarizing what I'm doing that will take you 30 seconds to read. But don't ask me to fit my work into arbitrary categories for the sake of a budget process that fundamentally ignores the realities of employees as people. It mischaracterizes and trivializes the work I'm doing by reducing it to a category that you have defined. "Oh, you're doing research. Okay. Well, that's not important. You should be spending your time doing product development instead." What does that even mean? If you recategorize my work into a different budget bin, does that change what I'm doing? No. Because I feel that I have the best perspective on the question of how I can use my skills to advance the project. No amount of budget-fiddling will change that. The way to get me to change what I'm doing is to talk to me and convince me that my skills could be better used elsewhere, not to change a budget number.
There's a better management model hidden here someplace. I'm sure of it. I feel like I'm skirting around the outside of it. But, as usual, it may not be realistic for the world we live in. There are too many people who treat rules as immutable laws of nature, and budgets and numbers as realities rather than representations, who don't understand the concept of abstraction. The map is not the territory. Confusing the two only leads to anger and confusion, because any representation will necessarily be incomplete. And bad representations, the category I feel things like project codes and timesheets fall into, are especially inadequate at describing what they are supposed to represent. So when managers get upset because their representations are inadequate, they take it out on the representees (a.k.a. their employees) for not being like the reductive, confining, simplified representations.
And I think that is why I hate timesheets.
The map is not the territory - Speaking of which, I read the Eschaton section of Infinite Jest this morning. Unbelievably hilarious. I picked up Infinite Jest from the library before my trip last week, after really liking the book of David Foster Wallace essays. In this particular case, the way Wallace constructs the game of Eschaton to illustrate the philosophical point that the map is not the territory, which one of the characters screams at the top of his lungs, is absolutely brilliant. The book overall is kind of uneven so far, though. And really large. I'm about 350 pages in, a third of the way through, and I've had to start an index page to keep track of what scenes happened where so that when a scene references something that happened or some other background material, I can flip back and figure out what's going on. Through about 300 pages, I could do it by keeping it all in my head, but it's too much now. But among this morass of plot and characters, there are these asides that are incisive observations of people and how they think. I really liked the one about why videophones will never achieve mass market appeal, for instance. Or this one about Eschaton. I think I'll eventually get a copy of it, and dogear the digressions I really like, and pretty much skip all the rest of this stuff. Or maybe it all makes more sense when it gets tied together. Certainly a possibility.
posted at: 15:09 by Eric Nehrlich | path: /rants/management | permanent link to this entry | Comment on livejournal
As usual, I love the pointers over at Many-to-Many. In particular this week, Clay Shirky pointed to this great article over at Slate by Duncan Watts on the shortcomings of centralized intelligence. Duncan Watts runs the Small World Project, which seeks to test the "six degrees of separation" hypothesis. He's written a book about it, which I've been meaning to read forever.
I really liked this article because it points up the ways in which people will self-organize to accomplish really difficult tasks if given the opportunity. Informal relationships are drawn upon, and the collective intelligence of the organization is put to work in a way that could not happen if the leadership demanded that all the decisions be made at the top of the company. In particular, he points out that centralized leadership works fine if it's dealing with planned-for situations where the bureaucracy has been laid out and the processes have been put in place. But it deals very poorly with unexpected or catastrophically new situations, because it can't take advantage of the information and social networks available in each of its individual workers' heads. It's the difference between the planned central economy of communism and the market orientation of capitalism. Given my recent post about management structures and my belief that the business world is moving towards a time of constant innovation and new situations, it's unsurprising that I liked this article, so I figured I'd link to it.
posted at: 02:26 by Eric Nehrlich | path: /rants/management | permanent link to this entry | Comment on livejournal
Different Management Structures
So I've been thinking a bit since my last post about how to tie those visions of taking responsibility for oneself to management structures. What would a management structure that followed those principles look like? I don't really have any answers yet (nobody does, I suspect), but here are some ideas.
Let's start with what's out there now. I'll start with two of my favorite case studies of unusual management structures. The first is Gore & Associates. This page has a good summary of the management structure there, or you can read Gore's own description. But the basic idea is that Gore implemented a flat management structure, where everybody except him was an "associate" and at the same level. At one point, though, Gore realized that his factory had grown so big that he didn't know everybody (a consequence of Dunbar's number; see previous references in this post or this book review). And when a group of people grows that big, normal social mechanisms break down, which is why larger organizations have bureaucracies and organization charts. Gore disliked that idea, so rather than put in a management hierarchy, he split the plant. And that policy has apparently continued to the present day; any time a plant reaches a size of more than 150 people or so, it splits, and within each plant a flat management structure is maintained. It sounds like a very cool organization to work for. Autonomy of each employee is encouraged, even required. I read an article at some point where a new associate talked about being lost for a few months, because they were waiting for somebody to tell them what to do. When they finally started exploring the plant on their own, they saw some ways they could help and jumped in.
So that's one case study. The other well known one is the North Carolina General Electric jet engine plant, which is described in this Fast Company article. Same sort of idea. The plant has about 170 workers. No management structure other than a plant manager. Self-organized teams. Management by consensus. It's not easy - they get training in consensus building, and in compromise. But if a worker sees something that needs doing, he either does it himself, or brings it up with the team to get the resources necessary.
And the team members always have the power to change things that don't work out. Says Williams: "All the things you normally fuss and moan about to yourself and your buddies -- well, we have a chance to do something about them. I can't say, 'They' don't know what's going on, or, 'They' made a bad decision. I am 'they.'"

Why aren't situations like these more commonplace? They sound incredibly appealing to me. But I can think of several reasons why they aren't the norm. The obvious one is that people like management hierarchies. They like knowing where they stand. Hierarchy appeals to our inner primate, which kowtows to the alpha monkeys above us and beats up on the subservient monkeys beneath us. It appeals to our sense of competition, our desire to "get ahead". After all, if there's no way to keep score, then how do we know if we're winning?
Another reason is the limitations imposed by Dunbar's number. Past a certain size, organizations need some sort of structure to hold them together. It's similar to the need for a skeleton in organisms; you don't see invertebrates get too large in general (okay, "largest invertebrate" on Google reveals the 60-foot giant squid, but you get my point). Bureaucracy provides the frame on which organizations can grow larger. But it comes at a decided cost in efficiency, as workers' initiative is squelched and every decision requires a tremendous overhead in communication.
While thinking about this, I also came to the conclusion that today's management structures are well designed. For the industrial age. They reflect the problems faced by the assembly line and the factory, where the goal is reproducibility and task efficiency. It's all about the lessons taught by Frederick Taylor and his theories of Scientific Management. Taylor studied early factories and noticed that many workers were doing their routine repeated tasks in a decidedly inefficient manner. By applying his principles, he was able to optimize the amount of time each task took, thus improving productivity. In other words, Taylor was the first to say that management knows best how the worker should do his job. And from that assertion comes management hierarchies and micromanaging.
How do Taylor's theories hold up in today's world? Not very well. His central initiative was optimizing routine repeated tasks to increase efficiency. There are no more routine repeated tasks in a typical workplace today. Anything that is routine has been automated to be performed by machines. If it hasn't been automated completely, it's been outsourced to someplace where people will do it more cheaply than the US, which is why all low-skill manufacturing has been moved to China or Indonesia or places like that. So what's left here? The non-routine tasks. The ones that require initiative and creativity and innovation to handle.
In this relatively new situation, Taylor's theories are among the worst to apply. When handling problems of innovation, the more fresh perspectives that can be brought to bear on a problem, the better. The worst thing you can do is disenfranchise workers from the decision process. In most cases, the workers are in the best position to help innovation, since they are closest to the work and can see the best ways to improve it. Places like Gore & Associates and the GE jet engine plant that are doing high-end technology innovation have figured this out. I wonder how long it will take for the rest of corporate America to catch up.
Not only is it more efficient, but it makes employees happier. Everybody likes to feel like they're contributing. That they have control over what they're working on. That they have a sense of ownership and responsibility for their work. And I would submit that a person who doesn't want that sort of responsibility is a person who should not be hired. They're probably someone who likes to freeload off other people's work, leeching along, trying to avoid blame and vulturing credit where they can. I guess I can only speak for myself, but I'm happy to take responsibility for a project. The only thing I ask is that I be given the concomitant degree of control over the project. Giving responsibility without giving up control is a manager's way of trying to take credit without wanting to be blamed. And it's demoralizing. I've become incredibly allergic to such situations at work. Some might say that I should just step up and make the best of it. And maybe I should. It's something I've been struggling with. Anyway.
One last thought on management structures. While I may dislike Orson Scott Card's column, Ender's Game has some great insights into how to get the most out of gifted people. In particular, check out this description of Ender's command structure at the end of the book, fighting against the buggers, which have a hive mind (that I would compare to a typical top-down management hierarchy).
"...as the battle progressed, he would skip from one leader's point of view to another's, making suggestions and, occasionally, giving orders as the need arose. Since the others could only see their own battle perspective, he would sometimes give them orders that made no sense to them, but they, too, learned to trust Ender. If he told them to withdraw, they withdrew, knowing that either they were in an exposed position, or their withdrawal might entice the enemy into a weakened posture. They also knew that Ender trusted them to do as they judged best when he gave them no orders. If their style of fighting were not right for the situation they were placed in, Ender would not have chosen them for that assignment."

This is pretty close to an ideal management philosophy in my eyes. Ender, the manager, is responsible for the big picture, assigns his employees to various tasks, keeps them aware of the overall vision, but trusts them to accomplish their tasks within the context of that overall vision. He doesn't get in their way, he doesn't second-guess them or try to micromanage them (in fact, there's a telling sequence earlier in the book when he's learning to use the simulator - he first tries to win by piloting individual ships in his fleet, but quickly learns that that level of micromanagement leads to failure).
What are the advantages of such an approach? Ender's teacher, analyzing his group:
"The bugger hive-mind is very good, but it can only concentrate on a few things at once. All your squadrons can concentrate a keen intelligence on what they're doing, and what they've been assigned to do is also guided by a clever mind. ... Every single one of our ships contains an intelligent human being who's thinking on his own. Every one of us is capable of coming up with a brilliant solution to a problem. They can only come up with one brilliant solution at a time. The buggers think fast, but they aren't smart all over."

There you go. Greater available intelligence concentrated on the problems at hand. Where's the downside?
I could spend a lot more time railing about why I think such management practices haven't made it into the mainstream yet. I mentioned a few possible reasons already. I think the biggest one may be managers' fear of losing control. It's scary to give up control when you hold ultimate responsibility for a task. But failure to give up control demonstrates a lack of trust in your employees. And that mistrust is incredibly demoralizing. To me, at least. If you don't think I can do the job, then why are you employing me? If you think I can do it, get out of my way.
Okay, it's late, and I've devolved into plain ranting. I'd like to speculate on this some more at some point. Heck, I've always thought about these things - check out this post from nine years ago (scroll down). Actually, skimming through that post reminds me that I should re-read Kevin Kelly's book Out of Control: The New Biology of Machines, Social Systems and the Economic World. Flipping through it briefly makes it look like he concentrates more on the problems of design rather than social organization. But I think the principles apply in both. Anyway. I stop now.
posted at: 16:55 by Eric Nehrlich | path: /rants/management | permanent link to this entry | Comment on livejournal
Big versus small companies
Just a quick observation - something I said at work today and thought was interesting. I was commenting on how some people use process as a way of covering themselves in case things don't go well (a reflection of my earlier sentiment). I understand that process, used appropriately, can answer important questions. But I get frustrated when the process becomes the point, rather than the success of the project. I was talking with a co-worker and wondered aloud why the folks in our office in San Francisco had a very different viewpoint than the folks in our home office in Toronto.
I think the difference is that they are part of a large, stable organization. If this project fails, there will be another one. So there is less incentive to take risks, and more of a tendency to follow the letter rather than the spirit of the process, which they think will minimize the chance of them losing their job. Most of us in the Bay Area have come from a startup environment where, if the project fails, that's it. The company's out of business. No more job. So there's no incentive to try to play it safe to keep your job. The most important thing becomes making sure the thing works and gets out to market. It's a very different attitude than trying to not lose your job. And it's a lot more satisfying to me. But it definitely causes cultural communication difficulties.
It came up in the context of a review process. We were asked to rate ourselves on the project status. All of us took the view of rating ourselves in relation to what needed to happen to launch the product. This caused a lot of confusion with them, because they were asking us to rate ourselves in terms of the current phase of the process we were in. But we just don't think in terms of phases. We're thinking in terms of the final product that's going to go out the door, and what we need to do to get there. It's a completely different mindset - one is end-goal driven, the other is process-driven.
And I think it all comes back to the size and stability of the organization. You want to have some stability, otherwise you have everybody jumping ship. But you don't want them too comfortable, because they lose that edge of competition. It's an interesting thought process to consider what the optimum balance should be. Food for another post later, perhaps.
posted at: 15:43 by Eric Nehrlich | path: /rants/management | permanent link to this entry | Comment on livejournal
Shirky on software development
I'm a big fan of Clay Shirky's writings, and am subscribed to his mailing list. His most recent post discussed situated software, and I wanted to discuss it some more. So I am.
Shirky teaches classes on social software at NYU, and observed an interesting pattern in the software that his students were submitting for their projects. Rather than software that conformed to the programmer-approved qualities of scalability and maintainability, he saw software "designed for use by a specific social group, rather than for a generic set of 'users'". Go read the whole article for his perspective on it, coming from an enterprise software background.
The post was interesting to me, because I come from the opposite perspective. The idea that software should be designed for specific users in a specific time and place is a concept so obvious to me that I forget that others don't see things that way. It's a function of my background, I suppose. I was never interested in programming for its own sake. I didn't like computer science as an academic subject. As a physics student, I programmed computers to solve a particular problem that I was dealing with in the lab, whether it was controlling an instrument or analyzing a chunk of data. I was never making something for other people to use, and certainly not writing software designed to be used by the general public.
My experience as a professional programmer has only reinforced that attitude. When I was working as a consultant, I did a lot of work writing code to run instrument prototypes. It needed to run the thing, but since the hardware was hacked together, it was okay if the software was as well. And I wasn't writing the software for some mythical "end-user", I was writing software for John or Larry or Darren. The interesting thing was that when others came in and saw the software, they loved it. It turned out that the software I'd written to solve John and Larry's problems also solved the problems of other scientists in their positions. By dealing with the specific case, I had inadvertently dealt with the general case as well.
This experience was reinforced at Signature. Again, I was writing code to run our instrument prototypes, and to analyze our preliminary data. After a couple years at Signature, I was in the interesting position of being a junior software engineer who had been at the company longer than any of the senior software engineers. And the difference in perspectives between me and the other software engineers was enormous, since I had spent a great deal of time with our scientists before there were any other software engineers. At one point, we had a meeting where our software team said that they had to design the database access the way they did because they had to meet the requirements of our customers four years in the future. They had been given a set of specifications describing these mythical end-users, and they wrote the software to satisfy those requirements. Unfortunately, in satisfying the mythical future end-user, they made the software completely unusable by the scientists who were trying to get the machine running today. I ended up having to write a lot of hacked-together software just to make the instruments run, so that our scientists could actually take the data they needed without dealing with the pain of the database. I was writing specific software to solve specific problems. And it turned out that these hacks were of general use to the scientists - soon after the database incident, it was estimated that I had written 90% of the software that was actually getting used at Signature. My focus on the real people using the software ensured that I survived three rounds of layoffs ahead of the ten senior software engineers. But that was the difference in our perspectives; the software team was writing code for a clean and tidy end-user as specified in their documents. I was writing software for Andy and Vivian and Roger. My software solved real problems for them, because I only wrote software when they came to me asking for help.
It's a completely foreign view to most software engineers. To a typical software engineer, the system is king. Making the system scalable, maintainable or more efficient takes precedence over dealing with those messy end-users. The system's requirements are easy to specify, and they don't change. The user changes their mind all the time. But, in my opinion, helping the user is so much more satisfying than making a great system. Hearing "Wow! I could never do this before!" makes my day in a way that the most elegantly programmed subroutine never could. And that is why I'll never be a "real" programmer. The abstractions aren't real to me. Helping people is.
Part of the problem is that most software engineers (and especially most programming methodologies) were trained in an era of scarcity. It was important to conserve memory and storage space and processor cycles when those were expensive. But now that's a ludicrous idea. It was important to carefully specify every single subroutine far in advance when programming was done on mainframes and computer time cost huge amounts of money. Now every programmer can write, test and debug in less time than it would take to write the specification (at least for simple systems - for massive enterprise software systems, these statements would be less true. But treating every project as a massive enterprise software system is just as wrong). Extreme Programming has the right idea in my opinion. Accept that the requirements will change. Always keep a version of the software in a functional state. Test early and often. Work in tight feedback cycles of two to three weeks. I'm a bit iffy on the pair programming aspect, but I've never tried it, so I'll reserve judgment. But there's a lot of good stuff there. There's also a lot of resistance to it, because that's Not The Way It's Done (tm).
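To make the XP rhythm concrete, here's a minimal toy sketch in Python of the test-first loop. This is my own illustration, not an example from any XP text, and the function names are invented:

```python
# A miniature version of the XP cycle: write the test before the code,
# write the simplest code that passes, and run the tests on every change
# so the software is always in a functional state.

# Step 1: write the tests first, capturing what the user actually asked for.
def test_order_total():
    assert order_total([3.50, 2.25]) == 5.75
    assert order_total([]) == 0

# Step 2: write the simplest code that makes the tests pass.
def order_total(items):
    return sum(items)

# Step 3: run the tests on every change; green tests mean you can ship,
# then move on to the next small feature and repeat.
test_order_total()
print("all tests pass")
```

The point isn't the trivial function; it's the cadence: each tiny cycle leaves you with working, tested software, which is what "always keep a version in a functional state" means in practice.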
The problem for me is that the establishment still values those who play by the old rules, so it will be hard for me to find a job as a programmer. Never mind that every user I've worked with praises my work (one of our biologists likes to joke that I can read her mind because I regularly produce software that deals with problems she's having). My code doesn't fit proper standards. It's not scalable. Or written in the most efficient way possible. I think those are ridiculous metrics. I'm writing software to be used on an instrument prototype; my software will never be seen outside our lab. Yes, in an ideal world, I should write it in as scalable and maintainable manner as possible. But given the choice between doing the right thing by the code, or solving the problem of the end-user, I'm going to favor the end-user every time. And I'm not going to spend a lot of time stressing about making the code the best it could be. It's serving a purpose: to help our scientists collect the data they need. Any time I spend on futzing with the system to make it "better" in the eyes of the software engineering establishment is time I'm not spending on making it more useful to the people who are actually using it.
Anyway. It's a rant of mine. Mostly because I'm bitter that I'm unhireable as a programmer despite having an excellent record of user satisfaction.
Another interesting thing about Shirky's essay, which is where this all started, is his realization that his students were leveraging their social context. In one case, a communal ordering system, the system dealt with deadbeats by publishing their names. That's it. No pre-payment accounts, no escrow requirements, no late payment penalties. They assumed that the social group would have its own ways of ensuring compliance. And it did. I love it. It ties into ideas of Phil Agre that software is always situated socially. The idea of software existing with its own set of rules apart from society is one that Agre derides, and rightly so in my opinion. It also ties in well with The Social Life of Information. By taking advantage of the social group and its rules, Shirky's students saved themselves an immense amount of work, and made some projects possible that would have been absolutely undoable in the generic case.
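As a thought experiment, the entire "enforcement" layer of that communal ordering system could be as small as the sketch below. This is my own hypothetical reconstruction, not the students' actual code; the data shapes and names are invented:

```python
# Hypothetical sketch of social enforcement in situated software:
# no pre-payment, no escrow, no penalties. The system just publishes
# who hasn't paid and trusts the group's own norms to do the rest.

def unpaid_names(orders):
    """Return the names of everyone who hasn't paid for their order."""
    return [order["name"] for order in orders if not order["paid"]]

def deadbeat_notice(orders):
    """Build the one piece of 'enforcement' the system performs: a public list."""
    names = unpaid_names(orders)
    if not names:
        return "Everyone has paid up."
    return "Still owing: " + ", ".join(names)

orders = [
    {"name": "Alice", "paid": True},
    {"name": "Bob", "paid": False},
]
print(deadbeat_notice(orders))  # Still owing: Bob
```

A generic e-commerce system would need accounts, payment processing, and dispute handling here; leaning on the social group replaces all of that with a dozen lines.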
The last point I'd like to address is Shirky's surprise at the wide uptake of his students' projects. It should not have been surprising at all. A key thing I learned from reading usability books is that designing for a generic end-user is never as good as designing for a specific person. This is the whole idea behind Alan Cooper's personas. And it's been supported in my own experience. As I noted above in my consulting experience, the software I designed for John and Larry turned out to be of general usefulness in their field, because their problems and their desires were typical of other members of their profession. It's also a well-known principle of marketing; Crossing the Chasm emphasizes the importance of targeting a niche market to start; by solving the problems of a specific set of users, you gain the credibility necessary for your technology to be adopted by the majority. So the idea of designing for specific people or groups is well-known outside the realm of software engineering. It's only surprising that such ideas have not percolated into the mainstream of software engineering yet.
Team building
Several of the engineers based at our parent company in Toronto came down to our San Francisco site this week to sync up with us on the project that we are collaborating on. Our project manager thought this might be a good excuse for a team building exercise. Suggestions included going bowling or ice skating, or perhaps doing one of those participatory murder mystery type games. The whole discussion seemed so ridiculous to me that I started trying to pin down why I thought the concept of team building was ludicrous. I don't really have any answers, but I figured it might be worth jotting down my thoughts.
Part of the problem is that I've been heavily influenced by Peopleware by Tom DeMarco and Timothy Lister, which has a whole chapter devoted to team building. Actually, it doesn't. The authors tried to write a chapter on "making teams jell" and realized that they had no prescriptions: "What was wrong was the whole idea of making teams jell. You can't make teams jell. You can hope they will jell; you can cross your fingers; you can act to improve the odds of jelling; but you can't make it happen." Instead they realized that what they _could_ do was figure out ways to ensure that teams _don't_ jell. They called these techniques "teamicide" with examples including defensive management, bureaucracy, physical separation, fragmentation of people's time, quality reduction of the product, and phony deadlines.
So DeMarco and Lister list what not to do. But that leaves the question of what does it take to build a team? So I started looking at our team of folks at the San Francisco site, because it's the team I know best right now, and one that I feel works pretty well together. After thinking about it this week, I felt that the key ingredients to a team were respect and trust and roles. Each member of our team has a relatively well defined role. Everybody else defers to them when it comes to that specialty out of respect for their expertise. Moreover, we trust them to get their job done. This isn't to say that there isn't crossover among roles; but that crossover is built off of the mutual respect that has been developed over the past couple years. And for any given problem, there's generally an obvious answer to the question of who's responsible for dealing with it. Since I maintain the software, all software bugs are thrown to me. And my teammates trust me to deal with fixing the bugs; once it's reported to me, they forget about it and move on. Nobody sticks around to nag me about the bugs or to micromanage how I fix them. It's kind of scary; there's nobody else to point to if things on my plate don't get done. But it's also an empowering feeling, because I'm in charge of my own destiny.
What makes a team a team? Generally, we think of a team as a unit where the whole is greater than the sum of its parts. For that to happen, it can't all funnel through one person, because then the team is limited by that one person. Good teams are about letting go of control and trusting your teammates to get their job done, instead of skulking around after them checking up on them. I think that's the part that managers have the toughest time with - they feel that because they have the responsibility, they need to have the control as well, so they start to micro-manage, disrupting any team vibe which may exist.
It's interesting how these principles apply just as well in any collaborative environment, from the environs of a company to an athletic team to a chorus. Good sports teams are all about having well-defined roles and each person taking responsibility for their own role. Even Michael Jordan didn't win his championships alone; he needed Scottie Pippen shutting down the opponent's star on defense, Horace Grant and Dennis Rodman picking up the difficult rebounds, and John Paxson or Steve Kerr spotting up for the three. Two of the greatest moments in the Bulls run of NBA Championships were Jordan driving down the lane and dishing out to one of his teammates (Pax in 1993 and Kerr in 1998) for an open jumper. They knew their job was to bury open shots, Jordan gave up the control and trusted them to do their job, and they won the game. Heck, every May when they start running NBA finals ads on TV, I still get chills when they show the pass to Paxson and the announcer screams "Pax for three....GOOD!!" Anyway...
Respect and trust and well-defined roles. In light of those requirements, the superficial nature of typical team-building exercises becomes apparent. Respect is not something you get in an afternoon. It's earned over a period of time, as you continue to execute your responsibilities at a high level. Trust is also earned over time; only after somebody has come through again and again will they earn trust. Such respect and trust can only be earned in the workplace performing your role, because it's the trust in your ability to do your job that will get other members of your team to trust you. You may be a great guy outside of work, they may have great respect for your gardening or rock-climbing skills, but if you don't consistently get your job done at work, your teammates will never feel comfortable depending on you. So all of the offsite team-building exercises are bunk. They may be fun, they may be a nice distraction and provide the opportunity to get to know co-workers in a non-work setting, but they do not contribute to better teamwork in the office.
Picking up the other aspect of team requirements I mentioned, well-defined roles can be assigned quickly, but it takes time for people to settle into those roles and figure out how the team fits together. This gets back to my belief that good management is putting people in a position where they can succeed; in this context, that means assigning them a role on the team where they can earn the respect and trust of their teammates. But even if roles have been assigned in an ideal fashion, it still takes time for a team dynamic to form, because the team members need to learn to trust each other in those roles. Like anything else in life, you only gain confidence when you've accomplished a task. All of the theoretical exercises in the world aren't as effective as the actual experience. The same thing applies to a team. The best team-building exercise is accomplishing a task together. And I don't mean some stupid artificial task like a scavenger hunt or a ropes course. I mean finishing a project at work together, where everybody did their job. Once you do that once, a team becomes much more powerful; they gain the confidence of knowing they've done it before and can do it again.
So that's why I think those team building exercises are pointless. They're a band-aid at best. They're the sort of things that managers do so they can say they did something. To refer to DeMarco's list of teamicide techniques, it's defensive management - doing something solely to be able to cover one's ass later.
What would I do differently? I'd treat a new team like I'd treat a new worker. When I was learning a new programming language at work, I started out by doing small toy programs. These programs would have minimal functionality, but getting them to work gave me the confidence to try harder and harder programs. If I were managing a team, I would start them out with small short-term projects, so they get a sense of how roles are distributed among people on the team. If a small project isn't available, a big project can be segmented into smaller chunks to achieve the same effect. By succeeding at these projects, the team develops confidence, and the team members develop their respect and trust in each other. Move on to harder and harder projects. Each one reinforces the team members' trust in each other and their confidence that they can accomplish things as a team. Pretty soon you have a really powerful weapon at your disposal.
The problem with throwing a new team, no matter how talented, at a very hard problem is that there is no sense of shared confidence to fall back on, so when things go wrong, people start to panic and try to take over each other's roles and trust breaks down completely. Things start getting done multiple times because nobody trusts other people's results. People start concentrating on their own narrow little responsibilities and lose sight of the big picture so that they can say "Well, I did my part" when everything goes down in flames. And, yes, I've seen it happen - it wasn't pretty.
Anyway. I've babbled on long enough. Next time, we'll talk about something completely different.
posted at: 10:01 by Eric Nehrlich | path: /rants/management | permanent link to this entry | Comment on livejournal
I hate meetings.
(written 9/28/03) Not exactly a rare sentiment, I know. But I've been trying to consider why I dislike certain meetings so much. I think it has a lot to do with how I like to take in information. I'm not a linear thinker. Or, perhaps, I'm too accelerated a linear thinker. When somebody is presenting an argument to me, I can generally see where they're going, and want to skip ahead, because it's a straight line. So I get really impatient when they go through every step of the argument, and tune out and think about how much I hate meetings.
I recently read Edward Tufte's rant about PowerPoint, which is where some of these ideas are coming from. Tufte laments the growing prevalence of PowerPoint and slideware in our organizations, feeling that it weakens verbal and spatial reasoning by forcing all arguments into an abbreviated, bullet-pointed, linear form. The human brain is much better at coordinating things spatially than temporally, so expecting people to remember the bullet point from four slides ago and coordinate it with the graphs on the next three is foolhardy.
But, back to meetings. I don't think I'd be going out on a cognitive science limb by saying that different people have different preferred methods of absorbing information. In my case, I prefer a random-access approach, being able to flip back and forth, rather than being held to somebody else's idea of how I should view the information. Other people prefer graphical representations. Some people learn best through hearing, some by reading. It varies wildly. Meetings impose a linear, auditory information transfer on everybody, which makes them inefficient for almost everyone. I can't tell you the number of hour-long meetings I've missed and/or skipped and been able to extract all the information useful to me by asking somebody three minutes' worth of questions. That's an inefficiency rate of 95%!
Back when I was a grad student TA, I hated teaching sections. Even with a lesson plan, I felt that it was hard to convey useful information to people without customizing it for them. I much preferred one-on-one problem-solving sessions, where there was an immediate feedback that allowed me to figure out how to map the problem-solving methods into terms that each individual student would understand. For some of them, the Socratic method of asking questions worked well, for others working examples with them helped, for others a discussion of the general principles was what they needed. Trying to incorporate all of those into teaching a section was impossible. But when they came to the TA's lounge and asked for help during office hours, I felt I could really get through to them.
One of the ideas that has floated around my mind for years is something I read in a science fiction novel, Beggars in Spain, by Nancy Kress. The details of the book are unimportant because it's not really that good, but she postulated the existence of a group of super-geniuses who developed methods for optimizing information transfer among themselves. By mapping out each other's preferred brain tendencies, they were able to take the ideas and arguments from one person and transform them into the preferred method of information transfer for another person, so that they essentially became telepathic.
Obviously that's unrealistic, but we can start considering ways to customize the flow of information to take advantage of our brains' preferred methods of data entry. In a rudimentary sense, that's what I was doing in those one-on-one sessions as a TA. But there's a lot of work to be done in this area. I'm sure a field of study exists that examines how people absorb information, but I don't know what it's called or who's studying it. If somebody reads this who knows, please drop me a line. I think it'd be fascinating and immensely useful.
The world has grown too complicated for any one person to understand it all even at a basic level. But many breakthroughs in science and engineering come from crossover of knowledge, when techniques and ideas from one field are applied to another. With the exploding amount of information in each field, it's almost impossible to keep up even within one's own specific sub-discipline, let alone across fields. So methods of improving information transfer should be a higher priority than ever. It may be our only hope of catalyzing new breakthroughs in the future.
Clay Shirky on Process
(written 9/25/03) I've been starting to read more blogs recently, including VentureBlog, Corante and that of my friend DocBug, and I figure it's time for me to start posting thoughts on the web again. We'll see how long this lasts.
The post in particular that inspired me to post was over at Corante, by Clay Shirky (who wrote a really great article on the perils of grouphood that introduced me to Corante in the first place). In this article, Shirky makes the claim "Process is an embedded reaction to prior stupidity".
I've been thinking about process recently, as I've gone from working at a freewheeling startup to working at a larger, established company with a process for everything. Every decision is accompanied by reams and reams of paper. We even had to be trained on the processes so that we could understand what was going on. It's crazy.
In light of that, I like Shirky's statement a lot. It's clear that many of the processes that have been put in place are to correct mistakes that were made in the past. It's a way of institutionalizing knowledge gained. And that's a good thing. But when the processes ossify and get in the way of the main objective, which is to build great products, then it seems like more reflection is necessary. In other words, when the process becomes the end, rather than the means, it's time to re-evaluate the process.
I'd even extend Shirky's statement further. Process is a way of covering your ass as a manager. If you go "by the book", then you can't be criticized, even if the book tells you to do something patently stupid. As people used to say, "You'll never get fired for buying Microsoft" (or IBM before that).
As in all things, there has to be a balance. Process is a good guide to the past, to what has come before. But it should not limit what can be done in the future.
posted at: 08:17 by Eric Nehrlich | path: /rants/management | permanent link to this entry | Comment on livejournal