I’m a big fan of Clay Shirky’s writings, and am subscribed to his mailing list. His most recent post discussed situated software, and I wanted to discuss it some more. So I am.
Shirky teaches classes on social software at NYU, and observed an interesting pattern in the software his students were submitting for their projects. Rather than designing software that conformed to the programmer-approved qualities of scalability and maintainability, he saw software that was “designed for use by a specific social group, rather than for a generic set of ‘users’.” Go read the whole article for his full perspective, which comes from an enterprise software background.
The post was interesting to me, because I come from the opposite perspective. The idea that software should be designed for specific users in a specific time and place is a concept so obvious to me that I forget that others don’t see things that way. It’s a function of my background, I suppose. I was never interested in programming for its own sake. I didn’t like computer science as an academic subject. As a physics student, I programmed computers to solve a particular problem that I was dealing with in the lab, whether it was controlling an instrument or analyzing a chunk of data. I was never making something for other people to use, and certainly not writing software designed to be used by the general public.
My experience as a professional programmer has only reinforced that attitude. When I was working as a consultant, I did a lot of work writing code to run instrument prototypes. It needed to run the thing, but since the hardware was hacked together, it was okay if the software was as well. And I wasn’t writing the software for some mythical “end-user”, I was writing software for John or Larry or Darren. The interesting thing was that when others came in and saw the software, they loved it. It turned out that the software I’d written to solve John and Larry’s problems also solved the problems of other scientists in their positions. By dealing with the specific case, I had inadvertently dealt with the general case as well.
This experience was reinforced at Signature. Again, I was writing code to run our instrument prototypes, and to analyze our preliminary data. After a couple of years at Signature, I was in the interesting position of being a junior software engineer who had been at the company longer than any of the senior software engineers. And the difference in perspective between me and the other software engineers was enormous, since I had spent a great deal of time with our scientists before there were any other software engineers.

At one point, we had a meeting where our software team said that they had to design the database access the way they did because they had to meet the requirements of our customers four years in the future. They had been given a set of specifications describing these mythical end-users, and they wrote the software to satisfy those requirements. Unfortunately, in satisfying the mythical future end-user, they made the software completely unusable by the scientists who were trying to get the machine running today. I ended up having to write a lot of hacked-together software just to make the instruments run, so that our scientists could actually take the data they needed without dealing with the pain of the database. I was writing specific software to solve specific problems. And it turned out that these hacks were of general use to the scientists – soon after the database incident, it was estimated that I had written 90% of the software that was actually getting used at Signature. My focus on the real people using the software ensured that I survived three rounds of layoffs ahead of the ten senior software engineers. But that was the difference in our perspectives: the software team was writing code for a clean and tidy end-user as specified in their documents. I was writing software for Andy and Vivian and Roger. My software solved real problems for them, because I only wrote software when they came to me asking for help.
It’s a completely foreign view to most software engineers. To a typical software engineer, the system is king. Making the system scalable, maintainable or more efficient takes precedence over dealing with those messy end-users. The system’s requirements are easy to specify, and they don’t change. The user changes their mind all the time. But, in my opinion, helping the user is so much more satisfying than making a great system. Hearing “Wow! I could never do this before!” makes my day in a way that the most elegantly programmed subroutine never could. And that is why I’ll never be a “real” programmer. The abstractions aren’t real to me. Helping people is.
Part of the problem is that most software engineers (and especially most programming methodologies) were trained in an era of scarcity. It was important to conserve memory and storage space and processor cycles when those were expensive. Now that idea is ludicrous. It was important to carefully specify every single subroutine far in advance when programming was done on mainframes and computer time cost huge amounts of money. Now every programmer can write, test and debug in less time than it would take to write the specification (at least for simple systems – when dealing with massive enterprise software systems, these statements would be less true. But treating every project as a massive enterprise software system is just as wrong). Extreme Programming has the right idea, in my opinion. Accept that the requirements will change. Always keep a version of the software in a functional state. Test early and often. Work in tight feedback cycles of two to three weeks. I’m a bit iffy on the pair programming aspect, but I’ve never tried it, so I’ll reserve judgment. But there’s a lot of good stuff there. There’s also a lot of resistance to it, because that’s Not The Way It’s Done™.
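As a tiny illustration of what “test early and often” looks like at lab scale (my own toy example – the function and data are invented, not from any real project): even a throwaway script can carry its checks alongside the code, so it stays in a working state as requirements shift.

```python
# A small, specific routine of the kind a scientist might ask for:
# bin raw instrument readings by a fixed bin width.
def counts_per_bin(samples, bin_width):
    """Return a dict mapping bin index -> number of readings in that bin."""
    bins = {}
    for s in samples:
        b = int(s // bin_width)
        bins[b] = bins.get(b, 0) + 1
    return bins

# The checks live right next to the code and run on every change,
# rather than at the end of a four-year specification cycle.
assert counts_per_bin([0.1, 0.2, 1.5], 1.0) == {0: 2, 1: 1}
assert counts_per_bin([], 1.0) == {}
print("all checks pass")
```

The point isn’t the binning routine; it’s the cadence – write the check, run it, keep the thing functional.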
The problem for me is that the establishment still values those who play by the old rules, so it will be hard for me to find a job as a programmer. Never mind that every user I’ve worked with praises my work (one of our biologists likes to joke that I can read her mind because I regularly produce software that deals with problems she’s having). My code doesn’t fit proper standards. It’s not scalable. Or written in the most efficient way possible. I think those are ridiculous metrics. I’m writing software to be used on an instrument prototype; my software will never be seen outside our lab. Yes, in an ideal world, I should write it in as scalable and maintainable a manner as possible. But given the choice between doing right by the code or solving the problem of the end-user, I’m going to favor the end-user every time. And I’m not going to spend a lot of time stressing about making the code the best it could be. It’s serving a purpose: to help our scientists collect the data they need. Any time I spend futzing with the system to make it “better” in the eyes of the software engineering establishment is time I’m not spending on making it more useful to the people who are actually using it.
Anyway. It’s a rant of mine. Mostly because I’m bitter that I’m unhireable as a programmer despite having an excellent record of user satisfaction.
Another interesting thing about Shirky’s essay, which is where this all started, is his realization that his students were leveraging their social context. In one case, a communal ordering system, the system dealt with deadbeats by publishing their names. That’s it. No pre-payment accounts, no escrow requirements, no late payment penalties. They assumed that the social group would have its own ways of ensuring compliance. And it did. I love it. It ties into ideas of Phil Agre that software is always situated socially. The idea of software existing with its own set of rules apart from society is one that Agre derides, and rightly so in my opinion. It also ties in well with The Social Life of Information. By taking advantage of the social group and its rules, Shirky’s students saved themselves an immense amount of work, and made some projects possible that would have been absolutely undoable in the generic case.
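To make concrete how little machinery that “publish their names” approach needs (a hypothetical sketch – the class and method names are my invention, not the students’ actual code): the whole enforcement mechanism is one public list, with the social group doing the rest.

```python
# Hypothetical sketch of a communal ordering system that handles
# deadbeats purely by publishing their names -- no escrow, no
# pre-payment accounts, no penalties. Social pressure is the enforcer.
class GroupOrder:
    def __init__(self):
        self.owed = {}    # name -> amount owed
        self.paid = set() # names that have settled up

    def place_order(self, name, amount):
        self.owed[name] = self.owed.get(name, 0.0) + amount

    def record_payment(self, name):
        self.paid.add(name)

    def deadbeats(self):
        # The entire "compliance" feature: a publicly visible list.
        return sorted(n for n in self.owed if n not in self.paid)

g = GroupOrder()
g.place_order("alice", 8.50)
g.place_order("bob", 12.00)
g.record_payment("alice")
print(g.deadbeats())  # -> ['bob']
```

Everything a generic system would spend on payment plumbing is simply delegated to the group’s own norms.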
The last point I’d like to address is Shirky’s surprise at the wide uptake of his students’ projects. It should not have been surprising at all. A key thing I learned from reading usability books is that designing for a generic end-user is never as good as designing for a specific person. This is the whole idea behind Alan Cooper’s personas. And it’s been borne out by my own experience. As I noted above about my consulting work, the software I designed for John and Larry turned out to be generally useful in their field, because their problems and their desires were typical of other members of their profession. It’s also a well-known principle of marketing: Crossing the Chasm emphasizes the importance of targeting a niche market to start, because by solving the problems of a specific set of users, you gain the credibility necessary for your technology to be adopted by the majority. So the idea of designing for specific people or groups is well known outside the realm of software engineering. The only surprise is that such ideas have not yet percolated into the mainstream of software engineering.
Interesting essay. Lots of thoughts and rants. I’m done now.