I’d been meaning to follow up on last week’s post on conversational alignment but hadn’t gotten around to it. As I admitted in the livejournal comments on that post, I may have overreached in saying that reality coefficients had to be aligned to have a good conversation, because, as Dan pointed out, we can have good conversations with people with whom we disagree. I think reality coefficient alignment is still relevant, because if the members of a conversation don’t agree that something is worth discussing, there won’t be a conversation, but it distracted from my point about language.
Aligning language is a painfully slow process. It’s a negotiation as we figure out that what we mean when we say something may be very different from what somebody else hears. As I mentioned in one of my comments, after our research group of 12 got re-hired by Sciex, we spent a full week in a conference room trying to get our language and expectations aligned between the biologists and the engineers. We were in that room 9 to 5 every day arguing, because when the biologists said something like “platform” they meant something completely different from what the engineers meant. It was an excruciating, and yet utterly necessary, process. And even now, such differences in language continue to arise. We’ll spend an hour arguing and then realize that, after stripping all of the language connotations away, we are in “violent agreement”.
But while it’s painful and slow, aligning language is also essential to communication. As I put it in that comment thread, “Without investing the time to align language, people will often talk right past each other, and it’s not a conversation, it’s just noise – no information is getting transferred.” The excruciating nature of language alignment shows up all around us. The pages of legalese necessary to express a simple concept, because everything has to be spelled out with no chance of misinterpretation. The thousand-page contracts for government work, as every detail has to be annotated such that there can be no disputes later over the scope. We sweep all of that context alignment under the table in our everyday conversation, but it’s always lurking unless we have invested the time with our friends to be fairly confident that they will extract what we mean from what we say.
As an aside, I realized this also means that my dream piece of social software is basically impossible: software that essentially enables mind-reading, translating between one person’s mental structure and context and another’s such that each always gets what the other means. It would be the ultimate achievement of collaboration software. It’s alluded to in this post, where I talk about the idea as expressed in a sci-fi novel, and in this comment thread. But what I’ve now realized is that such software would have to have achieved artificial intelligence in order to negotiate the complex context alignment necessary to do such a translation. It would have to fully understand the set of previous experiences and history that a participant brings to the conversation, and be able to place the conversation into that context. And that seems pretty unlikely.
As an aside to the aside, it does give me hope that this is a really hard problem. Hope not in the sense that I could solve it, but hope that comes from realizing that I may possess a rare skill: a flexible enough mindset to translate between different groups of people. One of my main roles right now is to translate the requirements of our scientists and biologists into requirements that our production software group can understand. And it’s not something many people can do, because one has to understand the needs and desires of both sides very well. I haven’t quite figured out what such a role should be called (technical interpreter?), but I think it’s a valuable skill, and one I should be able to leverage in future career opportunities.
Back to my original thread about language alignment. One of the interesting things I realized after thinking about the work example was that language alignment happens in every arena of life where groups of people interact. Within fields as varied as computers and poststructuralist criticism, you have to learn the jargon to participate, because the jargon is a powerful language structure, used to compress conversations into a manageable size. On a sports team, you have to know what all of the different cries mean (I had a friend once stop by to watch me in an ultimate game, and they were utterly baffled by people shouting things like “Up!”, “Force away!”, “Strike!” and “Switch!”). Such language is essential for groups to function together, because describing what is meant at length every time a topic comes up is unsustainable. The group has to develop its own language such that its members can work together without spending all of their time arguing about what things mean.
This leads to another idea – can any group form without developing its own language? I’m beginning to think the answer is no. The hip kids, the sports fans, the computer hackers, the lawyers – each group has their own language that you have to learn to be accepted as a member of the group, whether it’s the visual language of fashion, or the highly technical language of the hacker. Even within an organization, the different groups will have their own specialized sub-branches of language. What implications does this have for team building? How do we encourage new teams to develop their own language that will help them bind together even closer? Does this get back into some of the issues of trust that I mention in that team building post?
Despite all the difficulties in achieving conversational alignment, I think it is an essential process, partially because groups cannot function without such alignment, and partially because it’s just cool. Getting to know somebody well enough that you can essentially read their mind is a great experience. I recently re-read Asimov’s Foundation Trilogy, and I think his description of how the Second Foundation folks interact is the apotheosis of conversational alignment as I’ve described it, where a quirked eyebrow can convey a whole world of meaning. To some extent, we all aspire to know another person that well – our closest relationships strive towards such a mind-meld.
Okay, this post got completely out of control. I’ll leave it here for now, and mull things over a while longer.