Evaluating quality of construction

This post was triggered by a comment from Jofish claiming that “Popperian science just *doesn’t* exist”. I understand where he’s coming from (especially in light of that post, where I was describing how our lives are often re-interpreted after the fact), but I believe that Karl Popper’s principle of falsifiability is relevant to explaining what makes good science, and so I wanted to spend some time exploring the topic.

Oddly, an offhand comment in the Bruno Latour book I’m reading helped to shape this post. This is surprising because Latour is best known for his work in science studies, where he espoused the “social construction of scientific facts”, so one might think that he would also dismiss the idea of Popperian falsifiable science. But Latour comments that once you accept the idea of science being constructed, then you can “answer the more interesting question: Is a given fact of science well or badly constructed?” (p. 90) And I think that is how I can resolve my sympathy for the traditional scientific method with my new obsession with the social construction of everything. Is an experiment well or badly constructed?

One of the things that drove me nuts at my previous companies was the tendency of certain “scientists” to perform terrible experiments. They were terrible not because of the quality of the lab work, but because the experiments were poorly designed, such that no question could be answered regardless of the result. I believe that designing (constructing?) a good experiment means that you have a well-defined hypothesis that is falsifiable, and then you run an experiment that will determine whether that hypothesis holds under the experimental conditions. You should be able to get a yes or no answer to that question. Instead, my co-workers would run experiments without any sort of controls because they wanted to see what would happen, without really thinking about whether they would learn anything from the experiment.
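To make concrete what I mean by a well-constructed experiment, here is a minimal sketch of the pattern I was pushing for: state a falsifiable hypothesis, include a control, and end with a yes-or-no answer. The scenario, names, and numbers are entirely hypothetical, not drawn from any real experiment at those companies.

```python
# A hypothetical sketch of a "well-constructed" experiment:
# a falsifiable hypothesis, a control group, and a yes/no answer.
import random
from statistics import mean, stdev

random.seed(0)

# Hypothesis (falsifiable): "the treatment raises the measured yield
# relative to an untreated control." If the data show no difference,
# the hypothesis is rejected -- which is what makes the experiment worth running.
control   = [random.gauss(10.0, 1.0) for _ in range(30)]  # no treatment
treatment = [random.gauss(10.5, 1.0) for _ in range(30)]  # with treatment

def welch_t(a, b):
    """Welch's t statistic, computed by hand to keep the sketch dependency-free."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(b) - mean(a)) / (va + vb) ** 0.5

t = welch_t(control, treatment)
# Crude decision rule standing in for a proper significance test:
# treat t > 2 as "the treatment effect held under these conditions."
print("Hypothesis supported?", "Yes" if t > 2 else "No")
```

The point is not the statistics; it is that the control group and the stated hypothesis are what let the experiment return “no” as well as “yes”.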

I guess I can now re-frame that debate in other terms. I was thinking from the perspective of falsifiability, such that the experiments were poorly designed because they did not include the necessary controls to be able to prove anything one way or another. From my co-workers’ perspective, they were still exploring the problem space, so they did not have any hypotheses well-developed enough to test (I would disagree, but I can now see how they might have justified this). Shifting contexts one more time, even if we had designed perfectly Popperian experiments that provided a yes or no answer about our hypothesis, the results could always have been re-interpreted later in light of new information, which was Jofish’s point. The context can always be changed, and even “solid facts” may look very different from another perspective (e.g. the paradigm shifts to quantum mechanics and relativity).

Interestingly, this question of evaluating the construction of artifacts is not limited to scientific experiments. When I was reading Scott Berkun’s The Art of Project Management, I was impressed by how he emphasized the importance of evaluating everything from vision documents to specifications. He asks why the documents exist and what purpose they serve, and then asks the putative project manager whether the documents, as written, satisfy those purposes. If the specifications are intended to answer the engineers’ questions about customer needs, are the engineers still asking those questions despite having the document? Does the vision document keep the project team all moving in the same direction, or is it so vague that nobody knows what it means? By evaluating the documents in the context in which they were written, we can evaluate whether they are “well or badly constructed”.

And, as usual, Latour makes the same point. In one of his footnotes (note 180, p. 127), he comments that “The same epistemologists who have fallen in love with Popper’s falsifiability principle would be well advised to prolong his insight all the way to the text itself and to render explicit the conditions under which their writing can fail as well.” In other words, one must determine why one is writing something, and then evaluate it by whether it meets those criteria. Latour’s point is that just because one is writing a paper rather than an experimental proposal does not mean there should be less rigor.

Thinking this way can lead one to start questioning everything. When writing an email, I sometimes start asking myself why I’m writing it, and what I hope the results will be. Heck, I even take it down to the level of paragraphs or sentences, sometimes asking myself what reason I have to include a sentence ((un)fortunately, the same criteria don’t seem to apply in my blog posts). When starting a conversation at work, it is often worth asking myself why; if I am just blowing off steam and can’t frame any concrete initiatives, it is important for me to recognize that and decide whether it’s worth using up my social capital to do so. If Latour is right, and everything is socially constructed, then I must be able to ask of everything whether it is well or badly constructed.

It may also be relevant that I spent some time last week talking to Jofish and reading his thesis proposal, which involves studying the process of evaluation, so the concept of evaluation is on my mind. As usual, I am seeing evidence for whatever is on my mind everywhere I look. Hurrah for the power of context and framing. But that’s another story.

P.S. The Northeast amuses me sometimes. I spent time in seven different states last week, visiting my friend in New Hampshire on Sunday, back to Boston on Monday, taking the train home on Tuesday (which went via Rhode Island and Connecticut) to New York, then visiting my friend in Pennsylvania on Saturday (taking the bus, which went through New Jersey). In San Francisco, I’m not sure any of these trips would even have gotten me out of the state (Google Maps sez Oakland, CA to Reno, NV is just about the same length as Boston, MA to New York, NY). Love these little East Coast states.
