Bubbles and the US Market In A Time Of Easy Money

From The Daily Reckoning.

The key to bubble analysis is to look at what’s causing the bubble. Based on data going back to the 1929 crash, this current bubble looks like a particular kind that can produce large, sudden losses for investors.

This chart shows the Shiller Cyclically Adjusted PE Ratio (CAPE) from 1880 to 2017. Over this 137-year period, the mean ratio is 16.75, the median ratio is 16.12, the low is 4.78 (Dec 1920) and the high is 44.19 (Dec 1999). Right now the 29.45 ratio is above the level of the Panic of 2008, and about equal to the level of the market crash that started the Great Depression.

My preferred metric is the Shiller Cyclically Adjusted PE Ratio or CAPE. This particular PE ratio was invented by Nobel Prize-winning economist Robert Shiller of Yale University.

CAPE has several design features that set it apart from the PE ratios touted on Wall Street. The first is that it uses a rolling ten-year earnings period. This smooths out fluctuations based on temporary psychological, geopolitical, and commodity-linked factors that should not bear on fundamental valuation. The second feature is that it is backward-looking only. This eliminates the rosy-scenario, forward-looking earnings projections favored by Wall Street.

The third feature is that the relevant data are available back to 1870, which allows for robust historical comparisons.
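Mechanically, the ratio is simple: the current real price divided by the ten-year average of inflation-adjusted earnings. A minimal sketch with hypothetical numbers (not Shiller's actual series; the figures are chosen only so the result lands near the 29.45 cited below):

```python
# CAPE = real (inflation-adjusted) price / 10-year average of real earnings.
def cape(real_price, real_earnings_10yr):
    """real_earnings_10yr: the last ten years of inflation-adjusted annual EPS."""
    assert len(real_earnings_10yr) == 10
    return real_price / (sum(real_earnings_10yr) / 10.0)

# Hypothetical index level and real earnings, for illustration only.
earnings = [70, 74, 78, 80, 82, 84, 85, 86, 88, 88]
ratio = cape(2400, earnings)   # ~29.45
```

Because earnings are averaged over a decade, any single good or bad year moves the denominator by only a tenth of its weight, which is what smooths the cyclical noise out of the ratio.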

The chart below shows the CAPE from 1870 to 2017. Two conclusions emerge immediately. The CAPE today is at the same level as in 1929 just before the crash that started the Great Depression. The second is that the CAPE is higher today than it was just before the Panic of 2008.

Neither data point is definitive proof of a bubble. CAPE was much higher in 2000 when the dot.com bubble burst. Neither data point means that the market will crash tomorrow.

But today’s CAPE ratio is 182% of the median ratio of the past 137 years.

Given the mean-reverting nature of stock prices, the ratio is sending up storm warnings even if we cannot be sure exactly where and when the hurricane will come ashore.

With the likelihood of a bubble clear, we can now turn to bubble dynamics. The analysis begins with the fact that there are two distinct types of bubbles.

Some bubbles are driven by narrative, and others by cheap credit. Narrative bubbles and credit bubbles burst for different reasons at different times. The difference is critical in knowing what to look for when you time bubbles, and for understanding who gets hurt when they burst.

A narrative-driven bubble is based on a story, or new paradigm, that justifies abandoning traditional valuation metrics. The most famous case of a narrative bubble is the late 1960s, early 1970s “Nifty Fifty” list of fifty stocks that were considered high growth with nowhere to go but up.

The Nifty Fifty were often referred to as “one decision” stocks because you would just buy them and never sell. No further thought was required. Of course, the Nifty Fifty crashed with the overall market in 1974 and remained in an eight-year bear market until a new bull market began in 1982.

The dot.com bubble of the late 1990s is another famous example of a narrative bubble. Investors bid up stock prices without regard to earnings, PE ratios, profits, discounted cash flow or healthy balance sheets.

All that mattered were “eyeballs,” “clicks,” and other superficial internet metrics. The dot.com bubble crashed and burned in 2000. The NASDAQ fell from over 5,000 to around 2,000, then took sixteen years to regain that lost ground before recently making new highs. Of course, many dot.com companies did not recover their bubble valuations because they went bankrupt, never to be heard from again.

The credit-driven bubble has a different dynamic than a narrative bubble. If professional investors and brokers can borrow money at 3%, invest in stocks earning 5%, and leverage 3-to-1, they can earn a 6% spread on equity plus healthy capital gains that can boost the total return to 10% or higher. Even greater returns are possible using off-balance-sheet derivatives.
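The 6% figure is the carry from the borrowing spread times the leverage; the full return on equity, which also includes the unleveraged yield on the investor's own capital, is higher still. The arithmetic, using the figures from the paragraph above:

```python
# Leveraged carry: borrow at 3%, earn 5% on assets, run 3 dollars of assets
# per dollar of equity.
borrow_rate = 0.03
asset_yield = 0.05
leverage = 3          # dollars of assets per dollar of equity

# Spread-times-leverage shorthand behind the "6%" in the text:
carry = (asset_yield - borrow_rate) * leverage                      # 0.06

# Full return on equity: yield on all assets less interest on the borrowed 2/3.
full_roe = asset_yield * leverage - borrow_rate * (leverage - 1)    # 0.09
```

Either way the point stands: cheap credit multiplies returns, so the trade keeps working only as long as the borrow rate stays low.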

Credit bubbles don’t need a narrative or a good story. They just need easy money.

A narrative bubble bursts when the story changes. It’s exactly like The Emperor’s New Clothes where loyal subjects go along with the pretense that the emperor is finely dressed until a little boy shouts out that the emperor is actually naked.

Psychology and behavior change in an instant.

When investors realized in 2000 that Pets.com was not the next Amazon but just a sock-puppet mascot with negative cash flow, the stock crashed 98% in 9 months from IPO to bankruptcy. The sock-puppet had no clothes.

A credit bubble bursts when the credit dries up. The Fed won’t raise interest rates just to pop a bubble — they would rather clean up the mess afterwards than try to guess when a bubble exists in the first place.

But the Fed will raise rates for other reasons, including belief in the illusory Phillips Curve (which assumes a tradeoff between unemployment and inflation), currency wars, inflation, or a desire to move away from the zero bound before the next recession. It doesn’t matter which.

Higher rates are a case of “taking away the punch bowl” and can cause a credit bubble to burst.

The other leading cause of bursting credit bubbles is rising credit losses. Higher credit losses can emerge in junk bonds (1989), emerging markets (1998), or commercial real estate (2008).

Credit crack-ups in one sector lead to tightening credit conditions in all sectors, which in turn leads to recessions and stock market corrections.

What type of bubble are we in now? What signs should investors look for to gauge when this bubble will burst?

My starting hypothesis is that we are in a credit bubble, not a narrative bubble. There is no dominant story similar to the Nifty Fifty or dot.com days. Investors do look at traditional valuation metrics rather than invented substitutes contained in corporate press releases and Wall Street research. But even traditional valuation metrics can turn on a dime when the credit spigot is turned off.

Milton Friedman famously said that monetary policy acts with a lag. The Fed has force-fed the economy easy money with zero rates from 2008 to 2015 and abnormally low rates ever since. Now the effects have emerged.

On top of zero or low rates, the Fed printed almost $4 trillion of new money under its QE programs. Inflation has not appeared in consumer prices, but it has appeared in asset prices. Stocks, bonds, commodities and real estate are all levitating above an ocean of margin loans, student loans, auto loans, credit cards, mortgages, and their derivatives.

Now the Fed is throwing the gears in reverse. They are taking away the punchbowl.

The Fed has raised rates three times in the past sixteen months and is on track to raise them three more times in the next seven months. In addition, the Fed is preparing to do QE in reverse by reducing its balance sheet and contracting the base money supply. This is called quantitative tightening or QT, which I’ve discussed recently.

Credit conditions are already starting to affect the real economy. Student loan losses are skyrocketing, which stands in the way of household formation and geographic mobility for recent graduates. Losses are also soaring on subprime auto loans, which has put a lid on new car sales. As these losses ripple through the economy, mortgages and credit cards will be the next to feel the pinch.

U.S. Bank Deregulation Advances, But Hurdles Remain

The momentum for U.S. bank deregulation continues to grow, but it is becoming more likely that it will take the form of multiple smaller bills targeting relief for specific segments of the financial sector as opposed to a single, comprehensive bill, says Fitch Ratings.

The Financial Choice Act (FCA) remains the benchmark for the full deregulation agenda given the upcoming House vote on a revised version that was passed by the House Financial Services committee earlier this month. The updated version (FCA 2.0) is mostly in line with the original bill from 2016 and still calls for the full repeal of the Volcker Rule, the Orderly Liquidation Authority (OLA) and the Department of Labor (DOL) Fiduciary Rule.

Broad and deep deregulation is generally viewed by Fitch as likely to have a negative impact from a bank credit risk perspective; however, the ultimate form of regulatory change and its application by individual banks will determine the ratings implication.

A repeal of Volcker is unlikely to result in banks’ returning to full-scale proprietary trading, but it could carry negative rating implications depending on banks’ response. The elimination of OLA could expose the banking sector to significant systemic risk in the event of a crisis, though resolution planning could be a mitigating factor to large bank failures. While eliminating the DOL Fiduciary Rule would likely benefit banks’ wealth management businesses and asset managers’ profitability, reputational and litigation risks would remain.

Key differences between FCA 2.0 and the original bill include simplifying the threshold for banks to opt out of most regulations, changing operational risk weights for global systemically important banks (G-SIBs), replacing the Consumer Financial Protection Bureau (CFPB) and relaxing some components of stress-testing.

Fitch does not believe proposed changes to the CFPB would directly affect most banks’ and non-bank financial institutions’ credit profiles, though they could reduce the regulatory burden and associated costs. Further revision to bank stress testing as proposed under FCA 2.0 is likely to be ratings neutral.

Why Didn’t Bank Regulators Prevent the Financial Crisis?

The St. Louis Fed On The Economy Blog had an interesting article today, looking at why the GFC happened, and how the Dodd-Frank Act has addressed the issues in the US (yes, the one which may be repealed if Trump gets his way).

But look at the issues in the Australian context and score for yourself the extent to which we currently have the same issues here as the US did then. It’s scary!

A number of observers have questioned whether bank regulators could have prevented the financial crisis of 2008. While many market participants recognized the exuberance of the housing market, other factors contributing to the crisis led to a “perfect storm” that made it difficult for many stakeholders, including regulators, to foresee the impending meltdown.

Excessive Mortgage Debt

Poor assessment of ability to repay and inadequate down payments doomed many mortgages. Insufficient consumer protections resulted in many consumers not understanding the risks of the mortgage products offered.

Dominance of Variable Rate and Hybrid Subprime Mortgages

The spread of variable rate and hybrid subprime mortgages in a low-rate environment created excessive risks when interest rates rose.

Overheated Housing Market

Rapidly increasing house prices encouraged speculation, which further drove up prices. The availability of easy credit caused many borrowers to take on levels of debt they could not afford.

Lack of Market Discipline in Mortgage-Backed Securities Market

Growth in the private mortgage-backed securities market was fueled by lax standards in assigning credit ratings, which hid building systemic risk.

Safety and Soundness Problems at Large Banks

Many large banking firms had insufficient levels of high-quality capital, excessive amounts of short-term wholesale funding and too few high-quality liquid assets. These problems were frequently compounded by inadequate internal risk measurement and management systems.

Risky Behavior by Nonregulated Financial Firms

Sometimes referred to as the “shadow banking system,” this collection of financial firms included insurance companies and captive finance companies, among others. These firms engaged in activities that increased risks inherent in the financial system as a whole without any meaningful regulatory oversight.

Lack of Broad Oversight

Finally, while various regulators oversaw parts of the financial system, there was no one regulator responsible for the consolidated supervision of systemically important financial firms. Moreover, no authority was assigned the responsibility of overseeing systemic risk.

Hard Data, Soft Data and Forecasting

From The St. Louis Fed on The Economy Blog.

People frequently scour economic data for clues about the direction of the economy. But could the many types of data cause confusion about the exact state of the economy? A recent Economic Synopses essay examined some of this potential confusion.

Business Economist and Research Officer Kevin Kliesen noted that data essentially fall into two camps:

  • Hard data, such as that from government statistical agencies used in constructing real gross domestic product (GDP)
  • Soft data, such as business, consumer confidence and sentiment surveys, financial market variables, and labor statistics

Kliesen crafted two index measures of these types of data, which can be seen in the figure below. He noted that these indexes could be useful for quantitatively showing how different types of data can influence forecasts of real GDP and, in turn, the expectations of policymakers.

hard soft data

“The indexes exhibit the normal cyclical behavior one would expect in the data: They increase in expansions and decrease in recessions,” Kliesen wrote.

He also noted that the hard data index showed stronger economic conditions from around the beginning of the recent recovery until late 2013. More recently, however, the soft data index is showing stronger economic conditions.
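The essay does not spell out Kliesen's construction, but a common way to build such a composite index is to standardize each indicator and average the z-scores. The sketch below uses that assumption, with made-up survey readings:

```python
# A simple composite activity index (an assumed method, not necessarily
# Kliesen's): standardize each indicator, then average the z-scores.
def composite_index(series_list):
    """series_list: list of equal-length lists of indicator readings."""
    def zscores(xs):
        n = len(xs)
        mean = sum(xs) / n
        sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
        return [(x - mean) / sd for x in xs]
    standardized = [zscores(s) for s in series_list]
    n_obs = len(series_list[0])
    return [sum(s[t] for s in standardized) / len(standardized)
            for t in range(n_obs)]

# Hypothetical "soft" indicators: two sentiment surveys over five months
surveys = [[95, 98, 101, 104, 107], [50.1, 50.8, 51.5, 52.0, 53.0]]
index = composite_index(surveys)   # rises as both surveys improve
```

Standardizing first means a survey quoted in index points and one quoted as a diffusion reading contribute on the same scale, so neither dominates the composite.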

Effect on Forecasts

To show the possible effects of favoring one type of data when forecasting, Kliesen ran forecasts of monthly real GDP growth using only hard data and using only soft data:

  • Forecasting growth using the hard data resulted in projected growth of a little more than 2 percent per quarter through the end of 2019.
  • Using soft data, however, resulted in a peak of a little more than 4 percent in early 2018.

He then compared the results with the consensus forecasts found in the Federal Reserve Bank of Philadelphia’s Survey of Professional Forecasters (SPF) and the Federal Open Market Committee’s (FOMC’s) Summary of Economic Projections (SEP). The results are in the table below.

Forecasts for Real GDP (Percent Changes, Q4/Q4)

            Hard Data   Soft Data   SPF   SEP
2017:Q4        1.9         3.9      2.3   2.1
2018:Q4        2.2         2.9      2.4   2.1
2019:Q4        2.4         2.5      2.6   1.9

NOTES: The SEP values are taken from the March 15 release and are Q4/Q4. The SPF values use average annual data and are taken from the Feb. 10 issue.
SOURCE: Federal Reserve Bank of St. Louis

Kliesen concluded by saying that most forecasters and FOMC policymakers have relied more on hard data when forecasting. He wrote: “This is probably prudent, since the hard data flows are used in most macroeconomic forecasting models.”

He also noted: “However, as the upbeat forecasts from the soft data show, there appears to be some upside risk to the near-term forecast for real GDP growth. This upside risk likely reflects the widespread expectation of expansionary fiscal policy and a strengthening in global growth.”

Economists are trying to figure out why incomes aren’t rising — but workers have a good hunch

From Business Insider.

Economists are often wringing their hands over why, despite a continuous eight-year economic recovery, US workers’ wages remain largely stagnant, extending a trend that began some three decades ago.

Yet anyone who has applied for a job in the last couple of years knows that, while the US unemployment rate is historically low at 4.4%, the labour market isn’t exactly bustling.

Companies have become a lot more reluctant to make new investments in the wake of the Great Recession and during the weak economic recovery that has followed it. That includes investing in people, and the hiring process has become slower and more onerous.

It also means wage increases have become even harder to come by.

The recession caused lasting damage to the job market which still resonates to this day. Steven Partridge, vice president for workforce development at Northern Virginia Community College (NOVA), says the crisis created what he calls “degree inflation” in job requirements — a trend correlated with stagnant and sometimes falling incomes as workers lost their jobs and considered themselves lucky to take lower-paying ones.

In other words, because applicants were so desperate and the pool was so wide, the bar for hiring became unrealistically, and often unnecessarily, high. The trend has abated, but not fully receded.

“The downturn made everyone push up their education requirements,” Partridge told Business Insider.

Several job market indicators point to underlying weakness — high levels of long-term joblessness, low labour force participation and, yes, a distinct lack of wage growth.

Albert Edwards, market strategist at Societe Generale, deserves credit for doing something that’s rather rare on Wall Street — admitting he was wrong, specifically about the prospect of imminent wage increases.

“Talking about wrong, I have to put my hands up. I have been expecting US wage inflation to roar ahead over the past three months to well above 3%, yet every data release has surprised on the downside,” he wrote in a note to clients.

“Wage inflation, as measured by average hourly earnings, has actually leveled off at close to 2½% while wage inflation for ‘the workers’ is actually slowing (see chart below)! Strictly speaking, ‘the workers’ are defined (by the BLS) as “those who are not primarily employed to direct, supervise, or plan the work of others.” Hey, that’s me!”

Chart: Albert Edwards, Societe Generale

Fed officials have also struggled to understand the absence of wage increases. In a recent research brief from the San Francisco Fed, staff economist Mary Daly and co-authors reflect on what they see as a surprising trend.

“Standard economic theory tells us that wage growth and unemployment are intimately linked. Wage growth slows when the unemployment rate rises and increases when the unemployment rate falls,” they write. “The experience since the Great Recession has been very different.”

Chart: Federal Reserve Bank of San Francisco

“This slow wage growth likely reflects recent cyclical and secular shifts in the composition rather than a weak labour market. In particular, while higher-wage baby boomers have been retiring, lower-wage workers sidelined during the recession have been taking new full-time jobs,” they said. “Together these two changes have held down measures of wage growth.”

Their explanation provides little comfort in the face of the depressed labour market many Americans still face, especially lower-income and minority families.

The Fed authors also suggest a factor in low income growth that might ring true to those families: “As long as employers can keep their wage bills low by replacing or expanding staff with lower-paid workers, labour cost pressures for higher price inflation could remain muted for some time.”

As suggested in that last excerpt, labour’s bargaining power vis-a-vis employers is probably at least as important as unfavorable demographics in explaining slow wage growth. It will take a substantially stronger economy to tilt that balance back in workers’ favour.

Mortgage Crisis 2.0: BofA CEO Wants To Slash Down Payments To Help Poor Millennials

From Zero Hedge.

Among a host of other issues, one of the critical things that contributed to the housing crisis of 2008 was the fact that speculative borrowers had nearly no “skin in the game.”  Anyone who decided they wanted a piece of the rapidly inflating housing bubble could go out and buy multiple houses with no money down or, in some cases, even do “cash out” purchases whereby banks would finance more than 100% of the purchase price leaving ‘buyers’ to pocket the excess.

Shockingly, such terrible underwriting standards were a really bad idea.  Turns out that offering investors infinite returns on capital, given that they could purchase millions of dollars’ worth of assets without ponying up a single penny, causes wild speculation resulting in devastating asset bubbles.

But, in the wake of one of the worst asset bubbles in history, new legislation came along requiring traditional mortgage borrowers to put 20% down when purchasing a new home.

Ironically, the new owner of one of the worst mortgage lenders of the 2008 era is now arguing that down payment requirements should be slashed in half.  Speaking to CNBC, Bank of America CEO Brian Moynihan, the proud owner of Countrywide Financial, said that his mission is to reduce mortgage down payment requirements to 10% for traditional loans.  Per CNBC:

 “But, you know, I think at the end of the day is people forget that, at different points in your life and different points on what you’re doing in life requires you to think about housing differently as a place for you and your friends, as a place for you and maybe your significant other, and then ultimately, a place for family. That drives change. And so yes, it’s taken more time. And we talked a lot about this, you know, four or five years ago, that if you require a 20% down payment, it takes just a little more time to accumulate 20% than it would 3% or none, which is what the rules were for a short period of time.”

“So our goal, going back to regulatory reform, is should you move the down payment requirement from 20% to 10%? Wouldn’t introduce that much risk.”

“But would actually help a lot of mortgage to get done. And if you look at the statistics, the difference between 80 and 90 LTV –loan-to-value – isn’t much different as it is between 95 and 90. That’s when you start to see real differences in performance statistics. And so we don’t want to wish people into borrowing money that then they have trouble repaying.”
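Moynihan's point about accumulation time is straightforward arithmetic. A sketch with hypothetical prices and savings rates (ignoring interest, price appreciation, and closing costs):

```python
from math import ceil

# Months needed to save a down payment (hypothetical figures, no interest).
def months_to_save(price, down_pct, monthly_savings):
    return ceil(price * down_pct / monthly_savings)

price = 250_000      # hypothetical home price
savings = 1_000      # hypothetical monthly savings

months_20 = months_to_save(price, 0.20, savings)   # 50 months
months_10 = months_to_save(price, 0.10, savings)   # 25 months
months_3  = months_to_save(price, 0.03, savings)   # 8 months
```

Halving the requirement halves the wait, which is exactly why a lender who earns fees on originations would like the 20% threshold lowered.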

Of course, we’re certain that Moynihan’s sole purpose for wanting to lower down payments is to help those poor millennials living in mom’s basement and has nothing to do with the fact that he’s lost a ton of fee revenue to government-backed loans that only require a 3% down payment.

But, why not?  Gradually destroying lending standards worked out really well last time around.

US Real hourly earnings up 0.4 percent over the year ending April 2017

The US Bureau of Labor Statistics says real average hourly earnings increased 0.4 percent, seasonally adjusted, for all private sector employees from April 2016 to April 2017. The increase in real average hourly earnings combined with no change in the average workweek resulted in a 0.3-percent increase in real average weekly earnings over the year.

Real average hourly earnings for production and nonsupervisory employees increased 0.1 percent, seasonally adjusted, from April 2016 to April 2017. The increase in real average hourly earnings combined with a 0.3-percent increase in the average workweek for these workers resulted in an over-the-year increase in real average weekly earnings of 0.5 percent.

These data are from the Current Employment Statistics program. Earnings data for the most recent 2 months are preliminary.
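To a first approximation, real weekly earnings growth compounds the hourly-earnings change with the change in the average workweek. BLS rounds each published figure independently from unrounded underlying data, so the official weekly numbers (0.3 and 0.5 percent) can differ slightly from this product:

```python
# Real weekly earnings growth from hourly-earnings and workweek changes.
def weekly_growth(hourly_growth, workweek_growth):
    return (1 + hourly_growth) * (1 + workweek_growth) - 1

# All private-sector employees: hourly +0.4%, workweek unchanged
all_private = weekly_growth(0.004, 0.0)      # ~0.4%

# Production and nonsupervisory employees: hourly +0.1%, workweek +0.3%
prod_nonsup = weekly_growth(0.001, 0.003)    # ~0.4%
```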

The Growing Skill Divide in the U.S. Labor Market

The St. Louis Fed On The Economy Blog has highlighted a polarization in the labor market between skilled employees capable of performing the challenging tasks in cognitive nonroutine occupations and entry-level employees who are physically strong enough to perform manual nonroutine tasks.

Over the past several decades, the skill composition of the U.S. labor market has shifted. Employers are hiring more workers to perform nonroutine types of tasks (such as managerial work, professional services and personal care) and fewer workers for routine operations (such as construction and manufacturing). This shift in the type of skills in demand is referred to as job polarization.

One way to see evidence of job polarization is to look at employment growth in certain occupational classifications. The figure below divides occupational employment growth into four groups:

  • Cognitive Nonroutine: managers, computer scientists, architects, artists, etc.
  • Manual Nonroutine: food preparation, personal care, retail, etc.
  • Cognitive Routine: office and administrative, sales, etc.
  • Manual Routine: construction, manufacturing, production, etc.

average annual employment growth

The black lines represent the average growth rate across all occupations within a group, while the bars represent the growth rate in that particular occupation.

The fastest growing occupational groups are cognitive nonroutine and manual nonroutine, both growing about 2 percent every year on average since the 1980s. Cognitive routine and manual routine occupations are growing significantly slower, less than 1 percent on average (or shrinking, in the case of production occupations).

Physical Demands of Work

One of the implications of job polarization is a shift in the type of work required to be performed by the average employee. A new survey from the Bureau of Labor Statistics—the Occupational Requirements Survey—gathers data on the type of work performed in each occupation.

The figure below looks at two physical task requirements:

  • The percentage of hours in an eight-hour day spent standing or walking
  • The percentage of workers required to perform pushing or pulling tasks with one or both hands

physical task requirements

The most physically demanding occupational groups are manual nonroutine and manual routine. In both of these groups, on average more than half of the employees are required to push or pull with their hands, and over half of their day is spent standing or walking.

In contrast, less than 35 percent of workers in the cognitive nonroutine and cognitive routine groups push or pull with their hands. These groups also spend much less time standing/walking.

Decision-Making at Work

The last figure looks at two cognitive task requirements:

  • The percentage of workers for whom decision-making in uncertain situations or conflicts is required
  • The percentage of workers whose supervision is based on broad objectives and review of results

These requirements are easier to think about in terms of a spectrum. For example, occupations that require less cognitive activity either lack decision-making entirely or involve straightforward decisions from a predetermined set of choices.

Similarly, occupations with a greater cognitive requirement usually involve broad objectives with end-result review only, while more manual occupations require detailed instructions and frequent interactions with supervision (for example, a consultant’s quarterly performance review versus daily quality control checks in a factory).

cognitive task requirements

By a large margin, the cognitive nonroutine occupations involve more challenging decision-making and less frequent interactions with supervision. The other occupational groups all have fewer than 10 percent of employees engaging in these types of cognitive tasks.

Job Growth According to Skill Requirements

The figures above show a stark contrast between the skill requirements in the two occupational groups growing the fastest. The cognitive nonroutine group requires complex decision-making, independent working conditions and less physical effort, while the manual nonroutine group still requires quite a bit of physical effort and does not involve a high level of cognitive tasks.

US Jobs Up

The US Bureau of Labor Statistics says the March 2017 job openings rate for total nonfarm was 3.8 percent, and the hires rate was 3.6 percent. Job openings rates in finance and insurance; professional and business services; health care and social assistance; and accommodation and food services were higher than the overall rate.  Rates were lower in mining and logging; construction; durable and nondurable goods manufacturing; wholesale and retail trade; transportation, warehousing, and utilities; information; real estate and rental and leasing; educational services; arts, entertainment, and recreation; and federal and state and local government.

Hires rates were higher than the total nonfarm rate in mining and logging; construction; retail trade; professional and business services; arts, entertainment, and recreation; and accommodation and food services. Hires rates were lower in durable and nondurable goods manufacturing; wholesale and retail trade; transportation, warehousing, and utilities; information; finance and insurance; real estate and rental leasing; educational services; other services; and federal and state and local government.

Industries with high job openings rates and high hires rates need more workers, and hiring is strong. In March 2017, industries in this category included professional and business services and accommodation and food services.

Industries with low job openings rates and low hires rates have few job openings and are hiring few workers. In March 2017, industries in this group included federal and state and local government, educational services, and information.

The data are preliminary, come from the Job Openings and Labor Turnover Survey, and are seasonally adjusted.

Much Doubt Surrounds VIX Index’s Optimism

From Moody’s

Financial markets were recently visited by a rarity. During the past week, the VIX index closed under 10 points on May 8 and 9. Since its start in 1990, the VIX index has closed under 10 points on only 11, or 0.1%, of the span’s nearly 7,000 trading days.

Today’s very low VIX index reflects a great deal of confidence that there won’t be a deep sell-off by equities. Not only is there effectively little demand for insuring against a harsh correction, but sellers of such insurance are willing to accept a low price for protection against a market plunge.

This insouciance seems odd given how richly priced the US equity market is relative to corporate earnings and the prospective returns from other assets such as corporate bonds. The current market value of US common stock — according to a model based on pretax profits from current production and Moody’s long-term Baa industrial company bond yield — exceeds its midpoint valuation by a considerable 24%. During 1999-2000’s memorable equity rally, the market value of US stocks first climbed 24% above its projected midpoint in 1999’s first quarter and would remain at least that high through 2000’s second quarter. During January 1999 through June 2000, the actual market value of US common stock exceeded its projected midpoint by 51%, on average.

Another comparison of the two periods shows a similarly striking difference between them. The earlier period averages of a 15.4:1 ratio for the market value of common stock to pretax operating profits and 8.05% for the long-term industrial company bond yield were far above the recent ratio of 11.7:1 and the latest Baa industrial yield of 4.68%.

In stark contrast to the current situation, during January 1999 through June 2000 the VIX index averaged a substantially higher 24.3 points when the market value of US common stock was at least 24% above its projected midpoint. Back then, the market had a greater appreciation of the considerable downside risk implicit in an overvalued equity market.

Two prior cases of a below-10 VIX index preceded vastly different outcomes

January 2007 and December 1993 were the two prior moments when the VIX index spent some time under the 10-point threshold. What followed them differed drastically.

January 2007 was merely 11 months before the December 2007 start of the worst recession since the Great Depression. In contrast, December 1993 was followed by 1994’s 4.0% annual advance in real GDP, the first of a seven-year span in which real GDP grew by a now unheard-of 4.0% annually, on average. Far different was 2007’s 1.8% annual rise in real GDP, which came at the start of what would be a 0.9% average annual rise over the seven years ended 2013.

In the year following December 1993’s ultra-low VIX score, the market value of US common stock fell by 3.2% despite 1994’s 18.6% surge in pretax operating profits. A lift-off by the average 10-year Treasury yield from Q4-1993’s 6.13% to Q4-1994’s 7.96% was to blame for 1994’s short-lived drop in share prices. Nevertheless, partly because of 1994’s very strong showing by business activity, the earnings-sensitive high-yield bond spread narrowed from Q4-1993’s 438 bp to Q4-1994’s 350 bp.

For the year following January 2007’s brief stay by a less-than-10-point VIX index, a drop in the 10-year Treasury yield from January 2007’s 4.64% to January 2008’s 4.00% failed to stave off a 3.4% drop in the market value of US common stock, largely because of 2007’s 7.5% contraction of pretax operating profits. A swelling of the high-yield bond spread from January 2007’s 287 bp to January 2008’s 674 bp stemmed from the worsened outlook for business activity.

VIX Index and high-yield EDF differ drastically on yield spreads

May-to-date’s average VIX index of 10.4 points favors a 312 bp midpoint for the high-yield bond spread, which is much thinner than the recent actual spread of 377 bp. Throughout much of 2016, the VIX index proved to be a reliable leading indicator of where the high-yield spread was headed. Nevertheless, if only because the VIX index now resides in the bottom percentile of its historical sample, a higher VIX index is practically inevitable. Once the VIX index approaches its mean, the high-yield spread will be much wider than the recent 377 bp. (Figure 1.)
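The relationship Moody's describes can be sketched as a simple linear mapping from the VIX level to an implied high-yield spread. The coefficients below are invented purely to reproduce the 10.4-point / 312 bp pairing quoted above; they are not Moody's actual model:

```python
# Illustrative linear map from VIX level to implied high-yield spread (bp).
# Slope and intercept are assumptions, fitted by hand to the quoted pairing.
def implied_spread(vix, intercept=208.0, slope=10.0):
    return intercept + slope * vix

midpoint = implied_spread(10.4)     # 312.0 bp implied by May's average VIX
recent_actual = 377.0               # recent actual spread from the text
gap = recent_actual - midpoint      # actual spread ~65 bp wider than implied
```

Under any such upward-sloping fit, a mean-reverting VIX implies a wider spread: if the VIX climbs back toward its historical average, the implied spread rises with it, which is the essay's closing point.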