Sunday, June 26, 2011

Garbage-in, Quality-out

The common-think expression “garbage-in, garbage-out” is often thrown about without rigorous thought. Proponents believe it just has to be true.

First, I have to agree with Charles Babbage who famously said: "On two occasions, I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able to rightly apprehend the kind of confusion of ideas that could provoke such a question."

Second, if we do try to make some sense of the concept, the expression can hold true only if all the source materials are complete rubbish. I believe there is much to be learned from all sources of information. If there are any gems in the collection, then there is the possibility that “good” material can be identified. Indeed, we are all exposed to an enormous amount of data. How well we process this information depends on the sophistication of our filters. And machines can also develop filters. IBM’s Watson, the Jeopardy-playing computer, deduced mostly correct answers from unwashed Internet resources, and it trounced its human competitors.

Third, while the expression has been applied to legal agreements, such as those filed on EDGAR, for my part, I do not think of any of them as garbage. They are precedents. They are expressions of real transactions.

Fourth, common thinkers tend to fixate on language, sometimes debating the difference between ‘will’, ‘shall’, and ‘must’. While this is important, there are many other dimensions to legal agreements, including, most importantly, the application of legal terms to business transactions. In fact, it is not uncommon for those focusing on the words to miss a key clause or accidentally duplicate clause language. Without a broader view, a checklist, we cannot see the forest for the trees.

Fifth, and most importantly, we can now see the power of aggregation and filters in practice. Contract analysis reviews a set of agreements and determines how the documents are organized, what clauses they contain, and the range of standard and non-standard language. The first task of contract analysis is to aggregate all the source documents into one common outline, creating the agreement checklist. Next, the analysis finds all the matching clause elements and, for each branch of the outline, constructs a clause library. Finally, algorithms examine all the clauses and identify the core (non-negotiated or deal-neutral) language for each provision, together with the full range of deal-specific or alternative terms.
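To make the aggregation step concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not KIIAC's actual algorithm: the input format, the similarity measure, and the 0.6 matching threshold are all invented for the example.

```python
from collections import defaultdict
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude textual similarity between two strings, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def build_clause_library(documents: dict) -> dict:
    """Aggregate a set of agreements into one common outline.

    documents maps a document name to {heading: clause text}. Each
    clause is filed under the existing outline branch whose heading
    it best matches; otherwise its heading starts a new branch.
    """
    library = defaultdict(list)  # heading -> all clause variants seen
    for clauses in documents.values():
        for heading, text in clauses.items():
            # Merge near-identical headings (e.g. "Governing Law" vs.
            # "Governing Law; Venue") into a single outline branch.
            best = max(library, key=lambda h: similarity(h, heading),
                       default=None)
            if best is not None and similarity(best, heading) >= 0.6:
                library[best].append(text)
            else:
                library[heading].append(text)
    return dict(library)
```

The resulting heading-to-variants map is the clause library; the outline branches, taken together, form the agreement checklist.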

When creating a new model form, the process will sometimes start by identifying the most conforming document in the set: the one containing the most common clause elements and the most standard language. This is done for reasons of expediency. If we start with a single conforming document, then we can spend less time normalizing the clause language, and particularly the definitions. (See The Fastest Way to Create a Form).
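As a sketch of how that starting document might be chosen (again, the scoring formula is an assumption for illustration, not the product's actual logic), rank each document by how much of the common outline it covers and how close its clauses sit to the other drafts:

```python
from difflib import SequenceMatcher

def most_conforming_document(documents: dict, library: dict) -> str:
    """documents: name -> {heading: clause}; library: the clause
    library built above (heading -> list of clause variants)."""
    def closeness(text, variants):
        # Average similarity to the other drafts' versions of the clause.
        peers = [v for v in variants if v != text] or variants
        return sum(SequenceMatcher(None, text, v).ratio()
                   for v in peers) / len(peers)

    def score(clauses):
        covered = [h for h in clauses if h in library]
        if not covered:
            return 0.0
        coverage = len(covered) / len(library)   # breadth of the checklist
        standard = sum(closeness(clauses[h], library[h])
                       for h in covered) / len(covered)
        return coverage * standard

    return max(documents, key=lambda name: score(documents[name]))
```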

However, situations arise where we do not find a good (or conforming) document containing all the standard elements of a transaction. These situations are quite apparent from the software analysis: the source agreements are highly divergent in structure and content. If there are no good examples, then we can assume that we face a situation of sub-optimal source documents. In this case, we can use contract analysis to identify the most common deal elements and, for each provision, find the most conforming clause. The remarkable result is that while no one document represents best practice, the aggregate of all documents does. Or, to end with another bumper-sticker expression: “we are smarter than me.”
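The same machinery can sketch the composite just described: for each branch of the outline, keep the clause variant with the highest average similarity to its peers (a medoid, in clustering terms), so the model form reflects the consensus of the whole set even when no single source does. As before, the function names and the similarity measure are assumptions for illustration.

```python
from difflib import SequenceMatcher

def most_conforming_clause(variants: list) -> str:
    """Return the variant closest, on average, to all the others."""
    def avg_similarity(candidate):
        peers = [v for v in variants if v is not candidate]
        if not peers:
            return 1.0
        return sum(SequenceMatcher(None, candidate, v).ratio()
                   for v in peers) / len(peers)
    return max(variants, key=avg_similarity)

def composite_form(library: dict) -> dict:
    """Assemble a model form: for each outline branch, the consensus clause."""
    return {heading: most_conforming_clause(variants)
            for heading, variants in library.items()}
```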

In conclusion, part of the reason for a shift from wordsmithing to transactional analysis is an expanding world view. When our world is limited to a handful of precedents that we personally drafted or gathered from trusted colleagues, this tiny universe can be carefully dissected word by word. In the last few years, our universe has massively expanded to include all documents filed on EDGAR, enormous volumes of financial transactions, and vast collections of agreements generated from an interconnected global marketplace.

We cannot be masters of the digital universe based solely on our personal reading experiences.

1 comment:

  1. Ken Adams responded to my post on his Koncision site. Here's the URL to Ken’s post (http://www.koncision.com/more-about-garbage-in-garbage-out/).

    Continuing the conversation…

    kiiac focuses on what is. Ken addresses what should be. They are indeed complementary. I hope one day to map (or cross-reference) all content to editorial standards.

    Ken thoughtfully presents the case for what should be. Let me address some of the benefits of analyzing what is.

“What is” is the product of negotiated transactions. It will always be messier than a single, optimal standard. It is the result of give-and-take (the addition and subtraction of language) that shifts the benefits and burdens among the parties.

    “What is” also captures the degree of conformity (or divergence) in a set of documents. This degree of conformity is commensurate with the degree of consensus among the drafters. My experience has shown that we can move to standards (and hopefully to optimal standards) slowly and incrementally. I have yet to find a situation where a law firm or corporate legal department will adopt vastly different drafting conventions compared to their current practice.

    Let's continue the debate and hopefully we will uncover observations useful to practitioners attempting to maintain a high degree of professionalism at a time when there is increasing pressure on time (and rates).
