September 27, 2009

FINDING OUT SOME OF THE THINGS WE DON’T KNOW

Hubbard, Douglas W., The Failure of Risk Management: Why It’s Broken and How to Fix It (Hoboken, N.J.: John Wiley & Sons, 2009) (“The answer to the second question (whether anyone would know that the risk management system has failed) is also no; most managers would not know what they need to look for to evaluate a risk management method and, more likely than not, can be fooled by a kind of “placebo effect” and groupthink about the method. Even under the best circumstances, where the effectiveness of the risk management method itself was tracked closely and measured objectively, adequate evidence may not be available for some time. A more typical circumstance, however, is that the risk management method itself has no performance measures at all, even in the most diligent, metrics-oriented organizations. This widespread inability to make the sometimes-subtle differentiation between methods that work and methods that don’t work means that ineffectual methods are likely to spread. Ineffectual methods may even be touted as “best practices” and, like a dangerous virus with a long incubation period, are passed from company to company with no early indicators of ill effects until it’s too late.” Id. at 4. I often mention to students taking my commercial and corporate law courses that lawyers are engaged in a form of risk management. The contracts drafted, the compliance procedures recommended, the checklists used, etc., are all about doing, ex ante, the best they can to manage certain legal risks for their clients. As transactional lawyers, they are not only seeking to increase the likelihood that good things happen for their clients and to decrease the likelihood that bad things happen; they are also trying to make sure that, if the bad things do happen, they will not be a complete catastrophe for the client. That even the bad is not worse, let alone the worst.
For instance, how much, and what sort of, due diligence is due, not to avoid lawyer malpractice liability but to adequately manage the client’s risks? In short, how do lawyers identify risk, manage risk, and, as is the subject of this book, assess their methods for identifying, assessing, and managing risks? They too may be taken in by a placebo effect.)

Hubbard, Douglas W., How To Measure Anything: Finding the Value of Intangibles in Business (Hoboken, N.J.: John Wiley & Sons, 2007) (My coming to read this book (as well as the book above) had three unrelated sources. First, while sitting in a meeting, one of the participants stated that he is ‘a good judge of character.’ Setting aside the self-congratulatory aspect of the statement, I wondered how the person knew, or could substantiate, what he claimed to know about himself. (Or, for that matter, if someone had challenged him on that point, how they would have backed up their challenge.) Perhaps his self-assessment is correct, perhaps it is wrong, and perhaps it is simply meaningless. There are studies reporting that 65% of us think we are of above-average intelligence, and the same percentage think they are of better-than-average looks. I would not be surprised if that same overoptimism is found in our assessments of our ability to judge character. The larger issue, however, concerned the fact that some people might defer to this person’s judgment about another person’s character without first establishing (measuring) independently whether he is (or is not) a good judge of character. That is, other people would be making decisions based on a factor, one person’s judgment of another person’s character, without having measured whether that judgment is reliable. While it may be reasonable to defer to someone’s judgment when we think that person is a better-informed decision maker than we are, we still have a responsibility to ascertain whether that person is worthy of such deference in the first place. We cannot just take the person’s word for it that he is a good judge of character. Second, I had the opportunity to listen to a woman who earns her living measuring opinions and ideas (e.g., doing market research).
For some reason I came away with the fear that she probably knows, or suspects, that much of what legal academics (and lawyers) assert as being factually true about the world is not true, or at least not grounded in strong empirical evidence. I was humbled. And, third, I had to prepare to teach my course in Analytical Methods (a course developed by Professor Howell Jackson and others at Harvard Law School), which includes components on decision theory, game theory, and statistics. More important, it is a course that underscores the need for lawyers to know more than the law, and to have analytical skills beyond those of mere legal analysis. Reading this book and applying its lessons is useful in better articulating why law students need to be better informed about matters outside the law (e.g., how would they actually ascertain whether law A is more efficient, or less costly, than an alternative law B or no law at all?), and in helping me engage in more informed decision-making. Anyway, this is a worthwhile, and nontechnical, read. “Anything you need to quantify can be measured in some way that is superior to not measuring it at all.” Gilb’s Law).
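In the spirit of Hubbard’s thesis, even the self-proclaimed “good judge of character” could be measured rather than taken at his word: treat each of his judgments as a probabilistic forecast and score it against what actually happened. A minimal sketch, using the Brier score (a standard forecast-accuracy measure, not something drawn from the meeting itself; all the data below are invented for illustration):

```python
# Hypothetical illustration: score a "judge of character" as a forecaster.
# Each prediction is his probability (0 to 1) that a person will, say, act
# honestly; each outcome records 1 if the person did, 0 if not.

def brier_score(predictions, outcomes):
    """Mean squared error between probabilistic forecasts and outcomes.
    0.0 is a perfect record; always guessing 50/50 earns exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# The judge's confident calls about ten acquaintances (invented data)...
judge = [0.9, 0.8, 0.95, 0.7, 0.9, 0.85, 0.6, 0.9, 0.8, 0.75]
# ...versus what actually happened.
actual = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]

# A baseline that claims no insight at all: always 50/50.
baseline = [0.5] * len(actual)

print(round(brier_score(judge, actual), 3))
print(round(brier_score(baseline, actual), 3))  # 0.25 by construction
```

On this invented record the judge scores about 0.255, marginally worse than the know-nothing coin flip at 0.25, which is precisely the kind of surprise that measuring, rather than deferring, can reveal.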