21 December 2024

Fiduciary Dilemma: The Risk of Defining Risk by Volatility


“‘Bomb patterns?’ General Peckem repeated, twinkling with self-satisfied good humor. ‘A bomb pattern is a term I dreamed up just several years ago. It means nothing, but you’d be surprised at how rapidly it’s caught on. Why, I’ve got all sorts of people convinced I think it’s important for the bombs to explode close together and make a neat aerial photograph. There’s one colonel in Pianosa who’s hardly concerned any more with whether he hits the target or not.’”
– Catch-22, Joseph Heller

Like the top officers in Catch-22, the leading thinkers in the financial industry have long used the elegance of statistics – specifically, standard deviation as a measure of volatility – to frame the definition of risk. It worked so cleanly on the blackboards of academia that it just had to be used in the boardrooms of America.

There was only one problem.

The theory was wrong.

To paraphrase our friendly physicist Richard Feynman, “a guess that is wrong is wrong, no matter how elegant, no matter who made it.”

And that’s a problem for 401k plan sponsors who rely on the investment industry to design menu options and advise participants. More importantly, if the old paradigm is dead, what new paradigm has replaced it? To begin answering that question, we need a better idea of how this problem started in the first place. It begins with a familiar phrase, one that launched Modern Portfolio Theory (MPT) in the 1950s: “Risk and return are related.” This is the relationship Harry Markowitz famously wrote of in his groundbreaking paper “Portfolio Selection” (Journal of Finance, 1952), even before Bill Haley and the Comets took their turn rocking around the clock.

Oh, those were happier days back then, the greatest generation having just won World War Two, the same “good war” Heller would reference a generation later. The big theme of the Allies’ success in the Second World War revolved around words like “logistics” and “operations.” Everything was reduced to numbers, and, with the world about to be thrust into the Sputnik Era, it didn’t take much to transform portfolio management from a dull accounting exercise into a formula-laden scientific enterprise.

As we all know, formulas (or “formulae,” for those who have been blessed with an education in the Latin language) contain, in addition to various mathematical operators, loads of numbers and variables (variables being merely numbers disguised as letters). Everyone could easily pick out the number to use for “return,” but folks had a devil of a time identifying the right number to use for “risk.” Finally, Markowitz discovered the solution.

But computers back then weren’t powerful enough for him to use it, so he had to settle for standard deviation. William Sharpe, who along with Markowitz would later win a Nobel Prize for their work on MPT, admits as much in his own seminal paper, albeit in an obscure footnote hidden deep in the article: “Under certain circumstances, the mean-variance approach can be shown to lead to unsatisfactory predictions of behavior. Markowitz suggests that a model based on the semi-variance would be preferable; in light of the formidable computational problems, however, he bases his analysis on the variance and standard deviation.” (Sharpe, W.F., “Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk,” The Journal of Finance, Volume 19, Issue 3, September 1964, pp. 425-442.)

There you have it. From the beginning we understood that the lower partial moment (i.e., the portion of the deviation that falls below the mean) was a better measure of risk than standard deviation. Standard deviation treats missing the goal as exactly as risky as surpassing it. Using the full standard deviation didn’t make sense to Markowitz at the time (and presumably still doesn’t), but, computers being what they were back then, it was all he could settle for.
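To see the difference in practice, here’s a minimal sketch comparing the two measures. The return series is purely hypothetical, and conventions differ on the exact divisor for the semi-deviation; this version simply mirrors the sample standard deviation:

```python
import numpy as np

# Monthly returns for a hypothetical portfolio (illustrative numbers only)
returns = np.array([0.021, -0.034, 0.015, 0.042, -0.058, 0.010,
                    0.027, -0.012, 0.033, -0.045, 0.019, 0.008])

target = returns.mean()  # Markowitz's semi-variance benchmarks against the mean

# Standard deviation: penalizes upside and downside surprises equally
std_dev = returns.std(ddof=1)

# Downside semi-deviation: the lower partial moment of order 2,
# counting only the observations that fall below the target
shortfalls = returns[returns < target] - target
semi_dev = np.sqrt((shortfalls ** 2).sum() / (len(returns) - 1))

print(f"Standard deviation: {std_dev:.4f}")
print(f"Downside deviation: {semi_dev:.4f}")
```

A portfolio that beats its target often and misses it rarely will show a downside deviation well below its standard deviation; the full standard deviation lumps those happy surprises in with the losses.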

Soon thereafter, standard deviation became the universal measure of “risk.” Eventually, it found its way into portfolio optimization programs. You know these, don’t you? These are the programs they use to determine a recommended asset allocation. Are you starting to connect the dots here? Do you begin to see why so much of what we see in the financial industry is defective? It’s because academia and industry have constructed the main apparati (happy, Latin lovers?) on a flawed theory.
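For the curious, here’s a bare-bones sketch of what such an optimizer does under the hood. Every input below (the asset classes, expected returns, and covariances) is hypothetical, and real products are far more elaborate, but notice that the only definition of “risk” the program ever sees is variance:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical annual expected returns and covariance matrix for three
# asset classes (stocks, bonds, cash): illustrative inputs, not market data
mu = np.array([0.08, 0.04, 0.02])
cov = np.array([[0.0400, 0.0060, 0.0002],
                [0.0060, 0.0100, 0.0001],
                [0.0002, 0.0001, 0.0004]])

def portfolio_variance(w):
    # The "risk" slot in the classic optimizer: plain variance, which
    # treats upside and downside dispersion identically
    return w @ cov @ w

target_return = 0.06
constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},           # fully invested
    {"type": "eq", "fun": lambda w: w @ mu - target_return},  # hit return target
]
bounds = [(0.0, 1.0)] * 3  # long-only

result = minimize(portfolio_variance, x0=np.ones(3) / 3,
                  bounds=bounds, constraints=constraints, method="SLSQP")
print("Recommended allocation:", result.x.round(3))
```

Swap the variance objective for a semi-variance one and the recommended allocation can change, and everything downstream of that single design choice, from menu construction to participant advice, can change with it.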

Read the full article at Fiduciary News.
