After I voiced skepticism in my recent post about the Business Roundtable’s newfound interest in stakeholders (in addition to shareholders), readers have asked how we could judge –– with hard, meaning statistical, evidence –– whether the decision-makers in these firms really do consider all stakeholders. The short answer is that it would be all but impossible to get statistics that would definitively settle the matter. But this answer raises a broader issue: the difficulty of using statistics to analyze many of the questions that concern investors, business decision-makers, and public policy-makers.
I admit this question can quickly move the discussion away from the focus of this website –– investing –– but it seems worthwhile nonetheless. Statistics are employed endlessly to praise or condemn business practices and public policies. An intelligent investor need not be a statistician, but, facing this barrage of numbers, he or she should at least have a sense of how to think about them.
Let’s look at some of these issues –– stakeholders versus shareholders, diversity, and a few others –– to show how statistics embed bias and how, on any particular matter, they will always fall short of providing a reliable verdict of success or failure.
Because the Business Roundtable started all this, it might help to revisit its recent “change of heart.” The Roundtable had long insisted that business managers consider the shareholder –– and only the shareholder –– in their decision-making. Recently, it said that a better way to proceed would be to include the interests of what it calls stakeholders: workers, customers, and the communities in which businesses operate. What hard evidence –– that is, statistics –– could help us judge whether these men and women were living up to their newfound commitment?
Would, for instance, a comparison of returns to shareholders and to workers meet this need? If so, what statistic could provide the insight? A historical comparison of the relative shares of gains might offer a guide, but which historical period should we choose as appropriate? Even an answer to these questions would still leave out customers and communities. Would a markup over cost adequately assess whether the customer was being treated fairly? If so, what would the appropriate markup be, and for which sorts of products? Would dollars spent on community projects be an adequate basis for judgment? Even if the members of the Roundtable could settle on such a figure, how would they trade it off against the “needs” of other stakeholders? Might shareholders and workers, for instance, feel cheated by what they saw as an excess of business involvement in community affairs, and what measure could properly respond to their complaints?
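To see how much the choice of historical period drives the answer, consider a toy calculation. All figures below are invented for illustration; the point is only that the same data can make either shareholders or workers look like the favored party, depending on the base period chosen.

```python
# Hypothetical annual gains (percent) for one firm: shareholder total
# returns versus worker wage growth. Figures are invented for illustration.
shareholder_gains = {2015: 4.0, 2016: 12.0, 2017: 9.0, 2018: -5.0, 2019: 20.0}
wage_gains        = {2015: 3.0, 2016: 3.2,  2017: 3.1, 2018: 3.0,  2019: 3.3}

def cumulative(gains, start, end):
    """Compound the annual percentage gains from start to end year, inclusive."""
    total = 1.0
    for year in range(start, end + 1):
        total *= 1 + gains[year] / 100
    return (total - 1) * 100

# Measured from 2015, shareholders far outpace workers...
print(round(cumulative(shareholder_gains, 2015, 2019), 1))  # 44.7
print(round(cumulative(wage_gains, 2015, 2019), 1))         # 16.6
# ...but measured over 2018 alone, workers come out ahead.
print(round(cumulative(shareholder_gains, 2018, 2018), 1))  # -5.0
print(round(cumulative(wage_gains, 2018, 2018), 1))         # 3.0
```

Neither window is “the” right one, which is exactly the problem: whoever picks the base period picks the conclusion.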
These questions should shed light on my earlier post, in which I speculated that the Roundtable embraced this muddled standard of assessment precisely to avoid responsibility to any particular stakeholder. Noteworthy in this regard is that, for all the Roundtable’s high-minded language, not one of its members has even begun to offer a way to judge the matter, much less a way to trade off the interests of one stakeholder against another. Perhaps more noteworthy still is that Sen. Elizabeth Warren, the presidential candidate who has most vocally touted this approach, has also failed to suggest ways to carry it out.
There is also the question of diversity in employment. Percentages of various groups in a company’s workforce would seem an obvious way to proceed, but the look of that analysis would depend heavily on who was included in which group. Will the diversity-seekers parse all gender, ethnic, national, religious, and racial groups? Do the statistics count Muslim and Hindu Indians as one group of Indians or as two separate religious groups? Does someone who was labeled male at birth but now identifies as a woman count as a woman for statistical purposes? If the statistics depend on self-identification, what guidance, if any, should the firm offer people on answering that question? Does someone of mixed race count as both races or only one, and if one, which one? Should the effort also look at the jobs involved? If all women (however defined) cluster in HR and all people of color (however defined) cluster in Compliance, is that really a diverse workforce? Questions might also arise over which groups have which opportunities. Do women, for instance, have the same opportunity to travel for work (typically a steppingstone to management) as men do? Does a strenuous effort at such diversity “cheat” other stakeholders?
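The grouping problem can be made concrete with a small sketch. The records below are entirely hypothetical; the same six employees produce different headline percentages depending on whether they are tallied by national origin or by religion –– exactly the Muslim-and-Hindu-Indians question above.

```python
# Hypothetical workforce records, invented for illustration.
employees = [
    {"religion": "Muslim", "origin": "India"},
    {"religion": "Hindu",  "origin": "India"},
    {"religion": "Hindu",  "origin": "India"},
    {"religion": "None",   "origin": "US"},
    {"religion": "None",   "origin": "US"},
    {"religion": "None",   "origin": "US"},
]

def share_by(key, employees):
    """Percentage of the workforce in each group under one grouping rule."""
    counts = {}
    for e in employees:
        counts[e[key]] = counts.get(e[key], 0) + 1
    return {group: 100 * n / len(employees) for group, n in counts.items()}

# Counted by national origin, "Indian" employees form one 50% bloc;
# counted by religion, the same three people split into roughly
# 17% (Muslim) and 33% (Hindu) groups. Same workforce, different picture.
print(share_by("origin", employees))
print(share_by("religion", employees))
```

Neither tally is wrong; the classification rule, chosen before any counting begins, determines what the statistics will show.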
This line of questioning could extend without end, because the answer to each question creates yet another question. Measuring customer complaints, for instance, raises the question of whether the firm should count the number of complaints, their severity, or both. What about accidents on the shop floor? Environmental damage is a critical issue these days. Should statistics measure the number of environmental incidents or their severity? Should they measure particulates in the air or carbon dioxide emissions? Should the measures cover the entire firm or each division separately? Does environmental repair by one division balance environmental damage by another? Whatever method is chosen, the figures bias the results, sometimes intentionally, sometimes inadvertently.
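The complaint example can be put in numbers. The two complaint logs below are invented; the point is that two hypothetical firms trade places depending on whether one counts complaints or weights them by severity.

```python
# Hypothetical complaint logs: each entry is a severity score,
# 1 (trivial) to 5 (serious). All figures invented for illustration.
firm_a = [1, 1, 1, 1, 1, 1, 1, 1]   # many trivial complaints
firm_b = [5, 5, 4]                  # few, but serious, complaints

count_a, count_b = len(firm_a), len(firm_b)
severity_a, severity_b = sum(firm_a), sum(firm_b)

# By raw count, Firm A looks worse; by total severity, Firm B does.
print(count_a, count_b)        # 8 3
print(severity_a, severity_b)  # 8 14
```

Pick the count and Firm A is the villain; pick the severity-weighted total and it is Firm B. The metric, not the underlying conduct, decides the verdict.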
I offer no answers because there are none. My point here is not to confuse, but to make readers aware that statistics are inherently biased, and so are always dangerous, especially when used to “prove” virtue or vice in a firm, an individual, or a public policy. A good investor –– a good citizen –– needs to understand this and treat all statistical interpretations with appropriate skepticism.