Technically Speaking, February 2016

LETTER FROM THE EDITOR

Many of us have been planning to attend the MTA’s Annual Symposium and February might be the time to start acting on those plans. According to a study done by CheapAir.com:

In 2014, [we] amassed [a database of] 1.5 billion airfares as we watched 4,986,522 trips, recording the lowest fare for each trip every day from 320 days in advance up until one day before flight time. It’s a treasure trove of cool info if you’re an airfare geek; or, a curious use of terabytes, if you’re anybody else.

Since the majority of questions we get asked every year start with “When is the best time to book my flight to…”, the first thing we always do with this data is determine, on average, how far in advance should you book your flight to get the lowest fare.

This year, for domestic flights, the answer is 47 days.

This is actionable market data, and the study even includes a chart suggesting that now is the time to start watching airfares for those flying to New York for the Symposium, which will kick off on April 6, 2016 and run through April 8. You can learn more about the Symposium here, and I hope to see many of you there.

In addition to Technically Speaking, the MTA provides actionable ideas at chapter meetings and at the Annual Symposium.

In the rest of the magazine, we provide you with more traditional market analysis. As always, please let us know what you’d like to see in future issues of Technically Speaking by emailing us at editor@mta.org.

Sincerely,

Michael Carr

What's Inside...

TALKING NUMBERS: TECHNICAL Vs. FUNDAMENTAL INVESTMENT RECOMMENDATIONS

Editor’s note: this paper compares the opinions of technical analysts with those of fundamental analysts using forecasts made on a...


CMT STUDY SUPPLEMENT P&F CHARTS FOR BEGINNERS: THE ULTIMATE TOOLS FOR MAKING PROFITABLE INVESTMENT DECISIONS

Editor’s note: this was originally published by Intalus as a “How to” guide for Tradesignal users. It is reprinted...


THE PERPETUITIES THAT ARE NO LONGER PERPETUAL

Editor’s note: This article was originally published at the Global Financial Data blog and is reprinted here with permission.


WHO ATE JOE'S RETIREMENT MONEY? SEQUENCE RISK AND ITS INSIDIOUS DRAG ON RETIREMENT WEALTH

Editor’s note: this was originally published at GMO’s web site in August 2015. The full paper can be found


RETHINKING INVESTMENT PERFORMANCE ATTRIBUTION

Editor’s note: this was one of the most widely read papers at SavvyInvestor.net in 2015. It was originally published...


BLOOMBERG BRIEFS: CHARTS THAT SHOW GLOBAL STOCK MARKETS ARE TEETERING AT KEY SUPPORT

Editor’s note: this originally appeared in Bloomberg Briefs on January 21, 2016 and is extracted below.

The MSCI World...


ETHICS CORNER: WHAT CAN I SAY ABOUT THE CMT EXAM ON MY RESUME?

This might be the most frequently asked question related to ethics – what can I say about my participation in...


AMBA: THE FULL CYCLE OF TRADE

Ambarella, Inc. (AMBA) by far was 2015’s ‘everything’ trade. Never can I remember a single stock doing so much in...


CHART OF THE MONTH

Editor’s note: Crestmont Research maintains charts on a number of fundamental data series. Much of this data can be applied...


TALKING NUMBERS: TECHNICAL Vs. FUNDAMENTAL INVESTMENT RECOMMENDATIONS

Editor’s note: this paper compares the opinions of technical analysts with those of fundamental analysts using forecasts made on a CNBC show. The complete paper can be found at SSRN and was the subject of a recent MarketWatch article.

Abstract: This paper studies the real-time value of technical and fundamental investment recommendations broadcasted simultaneously on the TV show “Talking Numbers.” Considering individual stocks, technicians outperform leading fundamental analysts in predicting upward and downward price movements over investment horizons of one to twelve months. Technicians also deliver a significant alpha with respect to the Fama–French and momentum benchmarks. Regarding market indexes, other equity indexes, Treasuries, and commodities, both technicians and fundamental analysts deliver poor forecasts. The evidence supports the notion that technicians can detect insider buying and selling of individual stocks, whereas fundamental analysis is virtually worthless.

 1.  Introduction

This paper employs a novel database from “Talking Numbers” to assess the value of technical and fundamental analyses. Hosted by CNBC and Yahoo, “Talking Numbers” is a TV show that confronts the investment recommendations of leading technicians and fundamental analysts. The show provides a unique setup for understanding the value of financial analysis. In the first place, the head-to-head simultaneous recommendations of technicians and fundamental analysts on the same assets over similar investment horizons establish an ideal laboratory in which to compare the relative worth of the two investment approaches. As both schools of thought are exposed to the same public information during the broadcast, we are able to examine the extent to which technicians and fundamental analysts possess private information and are able to detect insider trading and process the flow of public information effectively.

Moreover, the analysis of dual recommendations is robust to several biases characterizing the studies of analysts’ forecasts. First, the show participants are well positioned and thus less prone to experience and reputation concerns (Graham, 1999; Sorescu and Subrahmanyam, 2006) as well as career concerns (Hong et al. 2000; Clement and Tse, 2005). Second, the simultaneous recommendations eliminate potential cross-herding between analysts. Third, our experiments are fairly robust to data-mining. To our knowledge, we are the first to examine the recommendations broadcasted in “Talking Numbers” as well as the first to assess the performance of technical recommendations, as the vast literature on technical analysis has studied mechanical trading rules rather than analysts’ calls.

Overall, we study 1620 dual recommendations (1000 recommendations for 262 individual stocks and 620 for other assets). The recommendations cover the largest stocks (e.g., Apple, Google, Exxon Mobil), the most liquid commodities (e.g., gold, oil), the main exchange rates (e.g., the U.S. dollar), the major bonds (e.g., the U.S. ten-year notes), the major equity indexes (e.g., the various Dow Jones indexes) and prominent sectors (e.g., technology, real estate, pharmaceutical). Thus, our experiments are fairly robust to liquidity concerns or the presence of extreme observations.

Figure 1 highlights the major findings for technical and fundamental individual stock recommendations during the sample period from November 2011 through December 2014. It plots the cumulative abnormal returns (CARs) starting from the recommendation broadcast (Panels A and B) and the cumulative payoffs generated by four spread portfolios (Panel C). In particular, we consider buy-minus-sell and strong-buy-minus-strong-sell technical and fundamental portfolios.

Figure 1. Cumulative abnormal returns (CARs) and portfolio payoffs

The top two panels depict the CARs for the technical and fundamental recommendations, starting with the recommendation broadcast (day zero) and ending twelve months afterwards. The p-values in the first test in each panel on the left-hand side correspond to Mann–Whitney–Wilcoxon statistics for the null hypothesis asserting that the CARs at six, nine or twelve months corresponding to buy and strong buy are not significantly different from those corresponding to sell and strong sell. The other tests are Wilcoxon signed-rank tests for the null hypothesis asserting that the CARs at six, nine or twelve months are indistinguishable from zero. The bottom panel presents the cumulative returns of four zero-cost trading strategies: (i) buy minus sell for fundamental recommendations; (ii) strong buy minus strong sell for fundamental recommendations; (iii) buy minus sell for technical recommendations; and (iv) strong buy minus strong sell for technical recommendations. Alpha is the annual Jensen’s alpha obtained from regressing the portfolio’s excess return on the market excess return.

Editor’s note: MarketWatch also included a chart highlighting the conclusion of Figure 1.
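
Editor’s note: for readers who wish to replicate the alpha figure described in the caption, the sketch below shows the standard estimation: regress portfolio excess returns on market excess returns and annualize the intercept. The function name and the assumption of daily data are ours, not the authors’.

import numpy as np

def jensens_alpha(portfolio_excess, market_excess, periods_per_year=252):
    """Annualized Jensen's alpha: intercept of an OLS regression of
    portfolio excess returns on market excess returns."""
    X = np.column_stack([np.ones(len(market_excess)), market_excess])
    (alpha, beta), *_ = np.linalg.lstsq(X, portfolio_excess, rcond=None)
    return alpha * periods_per_year, beta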

The evidence shows that technicians display rather impressive stock-picking skills, recommending the purchase of undervalued stocks along with the sale of overvalued stocks, while fundamental analysts provide no value whatsoever. To illustrate, observe from Panel A that the twelve-month CARs of the strong-sell, sell, hold, buy, and strong-buy technical recommendations are –8.13%, –0.59%, –0.10%, 1.56%, and 8.97%, respectively. In contrast, Panel B shows that the CARs attributable to fundamental analysis do not align with the type of recommendation. If anything, the sell recommendations generate higher CARs than the buy recommendations.

Similarly, observe from Panel C that the value of the fundamental buy-minus-sell portfolio is non-positive throughout the entire sample period and the value of the fundamental strong-buy-minus-strong-sell portfolio rotates around zero. In contrast, the value of the two corresponding technical portfolios is positive and tends to increase with the investment horizon. Specifically, over the sample period, the buy-minus-sell portfolio value is $0.42 for the $1 initial long and $1 initial short positions, recording an annual CAPM alpha of 14.8% (t = 2.35). More prominently, the value of the strong-buy-minus-strong-sell portfolio is $2.30, recording a strikingly large annual alpha of 45.3% (t = 3.58). Accounting for the trading costs upon entering and exiting a position, the threshold cost that would set the alpha of the buy-minus-sell (strong-buy-minus-strong-sell) portfolio to zero is 0.62% (2.91%) per transaction. Of course, such large alphas emerge from a short sample period. Attempting to extrapolate those alphas to considerably longer periods would be excessive. Nevertheless, technical recommendations seem to add value in predicting both upward and downward stock price movements. The latter is perhaps more convincing given the positive trend characterizing financial markets during the sample period.

Further analysis shows that technicians outperform in two dimensions. First, they generate a higher proportion of correct recommendations, whereby a correct recommendation amounts to a buy (sell) recommendation followed by an advancing (diminishing) stock price. Second, they produce higher gains following correct recommendations and lower losses following incorrect recommendations. Consistently, for time horizons ranging from one month to twelve months, positive buy and strong-buy technical recommendations are followed by higher returns than buy and strong-buy fundamental recommendations. Likewise, negative sell and strong-sell technical recommendations are followed by lower returns than the corresponding fundamental recommendations. The success of technicians in picking individual stocks is robust to controlling for common risk factors as well as for firm-level size, book-to-market, volatility, trading volume, and past trends in stock prices. It is also unaffected by the analyst’s gender, by the immediate impact of the broadcast on the stock price (which is found to be highly significant), and by reasonable trading costs. As an aside, only 10% of both the technicians and the fundamental analysts are women. Within this subsample of female analysts, the evidence also shows significant outperformance of technical recommendations.

We demonstrate that the inability of fundamental analysts to predict future returns of individual stocks is uniform across various industries (excluding services) and across all the equity styles considered, namely size, book-to-market, past return, and volatility. In contrast, technical stock recommendations produce robust predictions for all styles and industries, excluding mining. The failure to predict returns on mining stocks mirrors the inability of all the participants in “Talking Numbers” to predict future commodity prices. In fact, both schools of thought have uniformly failed to predict returns on all the broader asset classes, for example Treasuries, market indexes, and equity sector indexes.

The difference in performance among individual stocks versus broad indexes could be attributable to arbitrage capital in that investable patterns in broad market indexes immediately attract capital and thus are traded away. Moreover, common wisdom suggests that the abilities to process public information effectively or extract private signals from prices and volume mostly characterize individual stocks. Indeed, in his bestseller on technical trading The New Trading for a Living, Elder (2014) points out (page 36) that “Charts reflect all trades by all market participants – including insiders. … Technical analysis can help you detect insider buying and selling.” Of course, such (illegal) insider trading pertains to individual stocks only.

Three strands of studies are related to our work. The first investigates the value of fundamental recommendations. Jegadeesh et al. (2004) find that the level of analysts’ consensus recommendation provides little value over other investment signals. Stickel (1992) and Womack (1996) document value in revisions of consensus recommendations, while Barber et al. (2001) report the disappearance of that value in the presence of transaction costs. Likewise, Jaffe and Mahoney (1999) and Metrick (1999) document a lack of forecasting value in comprehensive samples of investment newsletters. Here, we show that even an elite group of analysts with broad popular appeal provides no investment value through fundamental recommendations.

The second strand deals with technical rules. Theoretically, Brown and Jennings (1989) and Blume et al. (1994) show that past prices and trading volume could reveal the presence of private information, and Zhu and Zhou (2009) show that combining the moving average with other technical signals improves asset allocations. Empirically, the evidence on the strength of technical analysis is mixed. Brown et al. (1998) show that the Dow rules exhibit some predictive ability. Brock et al. (1992) find that technical rules predict returns on stock indexes. However, such predictability vanishes in the presence of transaction costs, according to Bessembinder and Chan (1998). Allen and Karjalainen (1999) and Sullivan et al. (1999) do not find substantial value in technical rules, while Lo et al. (2000) show that technical patterns predict stock returns. Han et al. (2013) report profitability based on moving averages, and Neely et al. (2014) show that technical indicators exhibit predictive power for the equity premium. Notably, our paper assesses the value of technical recommendations rather than publicized technical rules. Thus, we consider the possibility that the technicians participating in the show use proprietary technical rules.

The third strand examines the immediate impact of media-publicized recommendations. Liu et al. (1990), Barber and Loeffler (1993), and Mathur and Waheed (1995) document abnormal returns shortly after the publication of recommendations in the newspaper, and Hirschey et al. (2000) report abnormal returns on the day after the recommendations are posted on the Internet. However, Dewally (2003) detects no market reaction to recommendations posted by a newsgroup on the Internet. Neumann and Kenny (2007) find that the recommendations made by Jim Cramer, the host of the CNBC “Mad Money” program, are followed by abnormal payoffs during the following day, and Busse and Green (2002) find that recommendations broadcasted on the CNBC “Morning Call” and “Midday Call” programs produce abnormal immediate profits within 15 seconds. Relative to these studies, we examine the longer-term value, rather than the immediate impact, of recommendations. Ultimately, technical stock recommendations provide value not only for immediate trading, but also for a few months, up to a year, following the broadcast.

Indeed, to our knowledge, we are the first to confront fundamental analysts and technicians. Our setup is unique in that both schools of thought are exposed to the same public information, simultaneous recommendations are made by well-positioned analysts and for similar investment horizons, and the collection of assets covered is comprehensive. A remaining task is to shed more light on the outperformance delivered by technical stock recommendations. In active asset management, performance reflects stock-picking and benchmark-timing skills, where stock-picking skills could further be attributable to industry and/or style rotation. Economic theory (e.g., Admati et al. 1986) typically formulates skills through the managerial ability to process private signals. Empirically, however, one cannot conclude whether a positive alpha fund manager possesses private information or whether that manager perhaps has the ability to process public information more effectively. Of course, there has always been the bad model concern. In particular, performance specifications may improperly control for those factors characterizing the risk–return trade-off; further, they are likely to mis-specify the nature of time variation in both benchmark loadings and risk premiums, if there is such time variation.

In the context of technical recommendations, we rule out the possibility of market timing, industry rotation, or style rotation. Indeed, technicians fail to predict returns on market, sector, and other broad indexes. Putting aside bad-model concerns, technicians could indeed process public information more effectively through their investment toolkits. As noted earlier, using various charts, technicians may be able to detect (illegal) insider buying and selling. Detecting insider buying (selling) helps to predict upward (downward) moves in individual stock prices.

The remainder of the paper is organized as follows. Section 2 explains the nature of the “Talking Numbers” broadcast and the participating analysts as well as the methodology used to convert the content of the show into ultimate investment recommendations. Section 3 describes the data. Section 4 reports the empirical findings corresponding to individual stocks. Section 5 extends the analysis to the other asset classes noted earlier. Section 6 concludes. A list of the assets covered in the show and the recommendation classification system are detailed in the appendices.

Editor’s note: sections 2 through 5 are omitted but can be found in the original paper at SSRN. All references can also be found in the original paper.

6.  Conclusion

This study employs a novel database from a TV broadcast in a head-to-head confrontation of the performance of fundamental analysts versus technicians to assess their economic value. The data are composed of fundamental and technical simultaneous recommendations for the same underlying assets with the same investment horizon. The unique setup of the broadcast, featuring synchronized dual recommendations, a great variety of asset classes and the presence of leading professionals, offers an ideal laboratory in which to assess the value of financial analysis. Ultimately, both technicians and fundamental analysts are exposed to the same public information and their recommendations could differ due to the distinct toolkits applied.

The simultaneous broadcast equalizes the analysts’ exposure to herding, eliminates time-gap biases such as cross-herding among analysts, and allows one to control for the immediate short-term effect of the broadcast itself. The high profile of the participating analysts levels the playing field, thereby mitigating the biases related to analysts’ quality, experience, and career concerns. In addition, the broad focus of the program and the comprehensive list of assets covered make our findings general and mitigate the concerns about illiquidity biases and exceptional observations.

Consistent with the semi-strong market efficiency hypothesis, fundamental analysis reveals no ability whatsoever to predict future returns on any of the assets examined, excluding the U.S. dollar. Surprisingly, the technicians exhibit a significant ability to predict individual stock returns, which could point to market inefficiency even within the universe of the largest and most-traded stocks. For a start, trading individual stocks based on technical recommendations yields large payoffs even after accounting for reasonable transaction costs. Moreover, such stock-picking ability is unaffected by controlling for common risk factors, the firm’s characteristics, including past returns, industry effects, the analyst’s gender, the potential immediate impact of the broadcast, and outliers.

However, the predictive ability of technicians characterizes individual stocks only (and the U.S. dollar). In contrast, returns on more general asset classes, including the market indexes, equity sectors and non-U.S. equity indexes, bonds, and commodities, are unpredictable. Such differential results support the notion that the predictive ability of technicians relies on the possession of proprietary investment toolkits. Considering the nature of technical analysis, one appealing explanation is that such toolkits enable their users to extract private information from informed buying and selling activities, which are more applicable to individual stocks and less so to broader asset classes. Of course, arbitrage capital is invested more in general assets and indexes, thereby eliminating abnormal profits, if there are any, from trading those assets.

Contributor(s)

Doron Avramov

Doron Avramov is at The Hebrew University of Jerusalem, email: davramov@huji.ac.il. 

Guy Kaplanski

Guy Kaplanski is with Bar-Ilan University, Israel, email: guykap@biu.ac.il. 

Haim Levy

Haim Levy is also at The Hebrew University of Jerusalem, email: mshlevy@mscc.huji.ac.il.

CMT STUDY SUPPLEMENT P&F CHARTS FOR BEGINNERS: THE ULTIMATE TOOLS FOR MAKING PROFITABLE INVESTMENT DECISIONS

Editor’s note: this was originally published by Intalus as a “How to” guide for Tradesignal users. It is reprinted here as a review of basics for CMT candidates and other members wanting a refresher on P&F basics. A five-minute video is also available.

Point & Figure charts are one of the oldest chart forms and offer many advantages for analysis and the derivation of profitable trading signals. Because of their unique design and presentation, P&F charts are viewed in many circles as complicated or even confusing – wrongly so. The origins of Point & Figure charts date back to the late 19th century, when price movements were still recorded by hand as columns of figures. A.W. Cohen popularized the technique and introduced the now common representation: P&F charts are composed of individual boxes, each representing a certain amount of price movement. In short, P&F charts are based on price action, not time. If there are no significant price moves, nothing changes.

What Are the Advantages of Point & Figure Charts?

The special design of this chart form does an excellent job of eliminating insignificant noise. Support and resistance zones and price patterns can therefore be detected much more clearly and easily. Figure 1 shows the price trend of Adidas shares since 2011 as a P&F chart. For comparison, Figure 1b shows the candlestick chart for the same underlying. A brief glance is enough to see that the P&F chart looks much tidier: it filters out insignificant price movements, whereas the traditional candlestick chart draws a candle for every trading day – regardless of how significant the price movement was.

Box Size and Reversal Threshold – Parameters in Point & Figure Charts

The most striking feature of a P&F chart is its structure. X-columns represent rising prices, while O-columns stand for falling prices. Each of these columns in turn consists of individual boxes. Their size can be adjusted individually and determines the intensity of the filtering.

An example: in a rising X-column with a box size of 1, the box at 100 represents the price range from 100 to 100.99. The price therefore has to reach at least 101 in order for a new X-box to appear on the chart. Using a box size of 5, a new X-box would only be drawn if the price rises to at least 105, and so on. There are two common methods for defining the box size:

  • constant: for example, 1 box = X or Y points or currency units
  • percentage: for example, 1 box = 1%

Basically, the box size determines the sensitivity of a Point & Figure chart, so the trader’s time horizon is decisive in choosing an appropriate box size. A very short-term-oriented market participant might, for example, choose a box size of 5 points in the DAX, while another trader opts for 50 points because he prefers stronger filtering. For long-term charts, and for securities whose prices fluctuate widely over time, the percentage-box setting is recommended; values in the range of 0.5 percent to 2 percent are usual. Point & Figure charts can be used both intraday – the origin of this chart form – and on daily data. Higher time frames (weekly, monthly), however, are not recommended.

The second parameter of a P&F chart is the so-called reversal threshold. This number indicates how many boxes of counter-movement are required to initiate a trend change in the chart – from X to O or vice versa. Three-box reversal charts are the most common; that is, a new column is only started when there is a counter-movement of at least 3 boxes. The following table illustrates the charting process with a simple example.
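
Editor’s note: the referenced table appears in the original guide. For readers who prefer code, here is a minimal, close-only sketch of the three-box-reversal charting process. It is our own illustration (function names included), not Tradesignal code, and real implementations typically work from daily highs and lows rather than closes.

import math

def pf_chart(closes, box=1.0, reversal=3):
    """Minimal close-only Point & Figure engine. Returns a list of
    columns; each column is ['X' or 'O', [box levels in fill order]]."""
    cols = []
    level = math.floor(closes[0] / box) * box   # current filled box level
    trend = None                                # unknown until a box fills

    def fill(direction, n):
        nonlocal level, trend
        if not cols or cols[-1][0] != direction:
            cols.append([direction, []])        # start a new column
        trend = direction
        step = box if direction == 'X' else -box
        for _ in range(n):
            level += step
            cols[-1][1].append(level)

    for price in closes[1:]:
        up = math.floor((price - level) / box)    # whole boxes above level
        down = math.floor((level - price) / box)  # whole boxes below level
        if trend != 'O' and up >= 1:
            fill('X', up)                         # extend (or start) an X-column
        elif trend != 'X' and down >= 1:
            fill('O', down)                       # extend (or start) an O-column
        elif trend == 'X' and down >= reversal:
            fill('O', down)                       # 3-box reversal: X to O
        elif trend == 'O' and up >= reversal:
            fill('X', up)                         # 3-box reversal: O to X
    return cols

For example, pf_chart([100, 101, 105, 102, 99], box=1) yields an X-column filling the boxes 101 through 105, followed – once the three-box counter-movement to 102 occurs – by an O-column filling 104 down to 99.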

Simple Means of Determining Support and Resistance Zones

P&F charts are ideal for analyzing the securities markets. Because insignificant price moves are filtered out, support and resistance zones can be determined quickly and clearly. The more often the price bounces off a level (multiple columns at the same level), the more significant the resistance or support. The following figure shows the course of the currency pair EURJPY, including support and resistance lines.

Objective Trend Lines – A Specialty of Point & Figure Charts

Trend lines play an important role in technical analysis, but the technique is usually tinged with subjectivity – not so for P&F charts. In the traditional three-box reversal chart, trend lines are always drawn at a 45-degree angle and are thus objectively defined. If we look again at the price action of the currency pair EURJPY, this time all the trend lines are visible.

Identifying Point & Figure Patterns Automatically

Apart from support and resistance zones and trend lines, Point & Figure charts offer clear patterns that can be used as trading signals. The simplest pattern, which also forms the basis for all other, more complex price patterns, is called the double top breakout. Contrary to what the name suggests, it is not a bearish but a bullish pattern: a double top breakout is present when an X-column breaks through the high of the previous X-column.

The bearish variant, called the double bottom breakdown, occurs when the current O-column falls below the low of the previous O-column. Triple tops and bottoms work in a similar manner – the only difference is that there are now two prior columns at the same level that need to be broken.

Important here: in 3-box reversal charts, double or triple tops and bottoms can be either reversal or continuation patterns.
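
Editor’s note: using the columns produced by the pf_chart sketch above, the two basic signals reduce to a pair of comparisons. The helper names below are ours.

def double_top_breakout(cols):
    """Bullish signal: the chart is currently in an X-column whose high
    exceeds the high of the previous X-column."""
    xs = [boxes for d, boxes in cols if d == 'X']
    return bool(cols) and cols[-1][0] == 'X' and len(xs) >= 2 and max(xs[-1]) > max(xs[-2])

def double_bottom_breakdown(cols):
    """Bearish counterpart: the current O-column falls below the low of
    the previous O-column."""
    os_ = [boxes for d, boxes in cols if d == 'O']
    return bool(cols) and cols[-1][0] == 'O' and len(os_) >= 2 and min(os_[-1]) < min(os_[-2])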

A bull trap occurs whenever a breakout consisting of only one box is immediately followed by a 3-box reversal. The opposite variant is called a bear trap. Another pattern that is detected automatically by the software is called the catapult. A graphical overview of these price patterns is shown below.

Determining Price Objectives Using Point & Figure Charts

There are two counting methods for determining price targets – the horizontal and the vertical. While the width of a consolidation zone defines the future price potential in the horizontal method, the vertical method is based on the length of a significant column, which determines the upside or downside potential. For practical reasons, we restrict ourselves here to the vertical method.

Let us look at how to determine an upside price target. In the following example, the first X-column after the bottom represents such a significant column (green label). Before this column can be used to calculate a price target, it must be completed, which is the case when a 3-box reversal occurs. The number of boxes in the significant column is then multiplied by the box size and the reversal threshold. In the following graph, the significant column includes 4 boxes with a size of 1 point.

The target price can therefore be calculated as follows:

number of boxes * reversal threshold * box size, i.e., 4 * 3 * 1 = 12.

These 12 points are then added to the low of the lowest O-column to give the price target. The target is activated, however, only when the high of the significant column is broken (see yellow highlight).
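
Editor’s note: in code, the vertical count is a one-line formula. The low of 96 used below is hypothetical, since the example gives only the column length.

def vertical_count_target(n_boxes, low, box=1.0, reversal=3):
    """Upside objective by the vertical method: boxes in the significant
    column * reversal threshold * box size, added to the low of the
    lowest O-column."""
    return low + n_boxes * reversal * box

print(vertical_count_target(4, 96))   # 96 + 4 * 3 * 1 = 108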

Important Tips for Dealing With Point & Figure Price Targets

The determination of price targets using P&F charts proves to be a useful tool for analysis and for generating trading signals. Despite all the advantages, the following aspects should be considered:

  • The target itself should never be the exclusive reason to open a position.
  • Before overarching price objectives become relevant, previous smaller price targets have to be met.
  • Negation of price targets: if the price moves below (above) the low (high) of the pattern, which was considered as the basis for a target price, the vertical count is negated.
  • When horizontal targets match with vertical price targets, the likelihood that the target projection will be achieved increases.
  • An upside target (downside target) should be achieved with a higher probability when it is generated above (below) a 45-degree trend line. Counts against the trend, however, are to be treated with caution.

Point & Figure charts provide tangible benefits in the analysis of financial markets – both for short-term traders and for strategically oriented investors. The focus is on filtering out insignificant price information and on the objective generation of trading signals.

For more information on Tradesignal and for additional summaries of indicators and strategies, please visit Intalus.

Contributor(s)

Intalus

THE PERPETUITIES THAT ARE NO LONGER PERPETUAL

Editor’s note: This article was originally published at the Global Financial Data blog and is reprinted here with permission.

At the beginning of 2015, the British government had £2.59 billion in undated securities outstanding, representing about 0.23% of the British government’s gilt portfolio. These bonds had no set redemption date, but could be redeemed with three months’ notice. In theory, the gilts could have existed forever.

These securities had originally been issued between 1853 and 1946 and replaced securities that originated back in the 1700s. Unfortunately, they are no more.  The last undated gilt, also referred to as a perpetuity because it had no redemption date, was called in by the British government on July 5, 2015.  Three hundred years of financial history has come to an end.

Perpetuities Begin

To understand why perpetuities existed, you have to go back to the beginning of Britain’s financial history.  Originally, loans were made directly to the sovereign, rather than to the government.  This put the lender at risk because the king could default on a loan, and the lender had little recourse to collect his money.

After the Glorious Revolution of 1688, loans were made to the government, not the king.  The government tried various ways of raising money, such as running lotteries or issuing annuities, which were paid for the life of the annuitant.  Naturally, lenders tried to game the debt by having children buy annuities in order to maximize the flow of interest payments until the purchaser died.  Eighty-year-old men did not buy annuities.

Rather than trying to keep track of every annuitant and selling annuities to children, the British government introduced perpetuities which paid interest forever.  To further simplify things, the British government consolidated all the outstanding annuities and other bonds into a single security paying 3% interest. The 3% annuity had been issued in July 1729 and was converted into the 3% Consolidated Loan in July 1753. The 3% Consolidated Loan was refunded into a 2.75% Loan on April 5, 1888 and was converted into a 2.5% Loan in April 1902.  The Consolidated Loan provided an unbroken source for data on British bond yields from 1729 to 2015.

Until World War I, almost all of the government’s outstanding debt was in the form of undated gilts.  In 1910, of the £762 million in outstanding government debt, £567 million was in 2.5% Consolidated Debt.  By the time the 2.5% Consolidated Loan was redeemed in 2015, only £162.1 million was outstanding.

The idea of issuing bonds that matured one, five, ten or thirty years after issuance and refunding them when they matured was the exception. The British government mainly issued debt when there was a war and redeemed debt during peacetime.  The government had yet to figure out how to run a deficit every single year, even in peacetime.  Britain was not alone in issuing perpetuities.  Most European governments had perpetuities outstanding, and these represented a large portion of their debt prior to World War I.

The Impact of World War I

During World War I, the British government had to issue large amounts of debt to fund the war. Because interest rates rose as a result of war-time inflation, lenders were unwilling to provide funds at 2.5% anymore. The British government was forced to issue large amounts of debt at higher interest rates.  Nevertheless, during the 1920s and 1930s, after inflation had subsided and interest rates returned to pre-war levels, the British government once again consolidated its outstanding war loans into perpetuities. 

The government issued the 3.5% Conversion Loan on April 1, 1921 in exchange for the 5% National War Bonds.  This issue could not be redeemed before April 1, 1961.  The 4% Consolidated Loan was issued on January 19, 1927 in exchange for various War Bonds and Treasury Bonds paying between 4% and 5% interest and could not be redeemed before 1957.  The largest of the War Loan issues was the 3.5% War Loan, issued on December 1, 1932 in exchange for the 5% War Loan due between 1929 and 1947; it could not be redeemed before 1952 and represented £1,938.6 million.  All of these issues were outstanding in 2015.

In addition to these three conversions of War Loans, the British government had issued 2.5% Annuities on June 13, 1853 in exchange for South Sea Stock, Old South Sea 3% Annuities, New South Sea 3% Annuities, Bank of England 3% Annuities from 1726 and 3% Annuities from 1751.  Thus, the direct descendants of the remnants of the South Sea Bubble of 1720 were still around until they were finally redeemed on July 5, 2015.

When the British government nationalized the Bank of England in 1946, it issued 3% Treasury Stock in exchange for shares in the Bank of England.  There was £54.6 million in these securities outstanding when they were redeemed on May 8, 2015.  Two other securities, the 2.75% Annuities, originally issued on October 17, 1884 and the 2.5% Treasury stock issued on October 28, 1946 were also redeemed.  No undated gilts were issued by the British government after 1946.

The End of Perpetuities

Between February 1, 2015 and July 5, 2015, all eight outstanding undated gilt issues were called in by the British government.  Great Britain was the last country to have perpetuities outstanding.  Every other country had redeemed or retired its perpetuities by the 1950s, and no country has issued perpetuities since World War II.  Some governments, such as France, have issued 50-year bonds, and some companies, such as Disney, have issued 100-year “century” bonds (known as Sleeping Beauties), but no government or corporation has issued undated bonds.

The only perpetual financial instruments that still exist are common stock issued by corporations.  Common stock has no redemption date and exists as long as the corporation does, but when a corporation is taken over, the common stock ceases to exist. 

Of the 5,000 listed securities traded in the United States, only 13 date from the nineteenth century.  The longest-listed is JPMorgan Chase & Co., which started trading as The New York Chemical Manufacturing Co. (later the Chemical Bank) on June 26, 1824.

JPMorgan Chase & Co. 1824-2016

The other securities that were originally issued in the 1800s and still exist are the Providence and Worcester Railroad Co. (1853), American Express (1856), ADM Diversified Equity Fund (originally Adams Express) (1866), Consolidated Edison Co. (1885), ExxonMobil (originally Standard Oil) (1886), Texas Pacific Land Trust (1888), Laclede Group Inc. (1889), NL Industries Inc. (originally National Lead) (1891), General Electric Co. (1892), UGI Corp. (originally United Gas Improvement Co.) (1895), Kansas City Southern Industries (1897), and Exelon Corp. (originally the Philadelphia Electric Co.) (1899).  Three of them – Laclede Group Inc., NL Industries Inc., and General Electric Co. – were part of the Dow Jones Industrial Average in 1896.

One thing that was nice about the undated gilts was that you could easily calculate their yield (assuming the loans weren’t called in three months) by dividing the coupon by the price of the bond.  So the 2.5% Consolidated Loan yielded 5% when the loan was at 50 (2.5/50) or 3.33% when the loan was at 75 (2.5/75).  The chart below shows the yield on the Consolidated Loan from 1729 until 2015.  Inverting the chart provides the price of the security.  Enjoy this record of financial markets over the past 285 years because we will be unable to update it anymore.
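
Editor’s note: the arithmetic is simple enough to verify in a few lines of code.

def consol_yield(coupon, price):
    """Current yield on an undated gilt: annual coupon per 100 of par,
    divided by the price."""
    return coupon / price

print(consol_yield(2.5, 50))   # 0.05   -> 5.00%
print(consol_yield(2.5, 75))   # 0.0333 -> 3.33%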

Contributor(s)

Dr. Bryan Taylor

Dr. Bryan Taylor serves as President and Chief Economist for Global Financial Data. He received his B.A. from Rhodes College, his M.A. from the University of South Carolina in International Relations, and...

WHO ATE JOE'S RETIREMENT MONEY? SEQUENCE RISK AND ITS INSIDIOUS DRAG ON RETIREMENT WEALTH

Editor’s note: this was originally published at GMO’s web site in August 2015. The full paper can be found there.

Summary

Defined Contribution (DC) plan participants are haunted by an invisible risk called sequence risk (sometimes called sequence-of-returns or path dependency risk), that is, getting the “right” returns but in the “wrong” order. Sequence risk in the retirement phase has been studied extensively. Sadly, not as much attention has been paid to sequence risk during the accumulation phase, but it is equally important. Sequence risk rears its head in this way: Even if an individual employee does everything “right” – participates in the plan, defers income religiously, takes full advantage of the company match, and even gets his exact expected return from his investments – he can still fall victim to disappointing final wealth outcomes if the order of those returns works against him. Current models of asset allocation – the most popular being static, or predetermined, target date glidepaths – “know” that sequence risk exists, but behave as if there is nothing that can be done to mitigate it. Valuation-based dynamic allocation, on the other hand, can help soften the bite.

Here’s a riddle for you: Who ate almost $300,000 of Joe’s retirement money? Meet some pretend employees, Joe and Jane. They worked for Acme, Inc., and were identical in essentially all aspects of their job, their salary, and their participation in their company’s defined contribution retirement plan (a “typical” 401(k)). Here are some key dimensions to consider:

  • Identical length of time working: Both started working at 25, and both worked for 40 years.
  • Identical starting salary and salary growth.[1]
  • Identical deferral rates and identical company match[2] for a combined 9% annual contribution.
  • Identical investments: Both invested in a typical target date fund (TDF), which started out with a 90% equity allocation, and glided down to below 50% by retirement age.
  • Identical returns: This is a key point. During their 40 years of investing, they both earned exactly 5% annualized real (above inflation, or roughly 8% nominal) after fees.

Identical in virtually every way. At the end of their 40-year careers, however, Jane had $880,000 in her account, while Joe had $590,000. How is this possible? Who ate $290,000 of Joe’s retirement money? How do you explain a 50% gap between these two employees?

The answer? Sequence risk.

Here’s what we didn’t tell you. Joe started his career in 1954, while Jane started in 1967, 13 years later. So, even though Joe earned the same annualized return as Jane, he earned it in a slightly different sequence, and that made all the difference. Sequence risk – an insidious risk in all DC plans – took a sizeable bite out of his potential retirement nest egg.
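
Editor’s note: the mechanics behind the Jane-and-Joe gap can be seen in a few lines of code. The returns and contribution figure below are invented for illustration; the point is that the same set of annual returns, applied in a different order to an account receiving contributions, produces very different terminal wealth.

def terminal_wealth(returns, annual_contribution=10_000):
    """Contribute at the start of each year, then earn that year's return."""
    wealth = 0.0
    for r in returns:
        wealth = (wealth + annual_contribution) * (1.0 + r)
    return wealth

returns = [0.25, 0.10, 0.05, -0.05, -0.15]               # hypothetical real returns
lucky = terminal_wealth(sorted(returns))                  # bad years first
unlucky = terminal_wealth(sorted(returns, reverse=True))  # good years first
print(round(lucky), round(unlucky))                       # 66061 vs 46039

Both paths compound the identical set of returns, yet the saver who hits the bad years early, when the account is small, retires with roughly 40% more.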

Sequence risk has been in the shadows for some time. One reason is that sequence risk is typically not a major concern for traditional Defined Benefit (DB) plans. (See sidebar discussion regarding DB plans on page 11.) Second, when it has been discussed in the academic or investment community, the focus has traditionally been on the withdrawal phase of retirement,[3] when cash flows are large. The main thrust of those studies demonstrated that the returns a retiree experiences in the first few years of his/her retirement are extremely important. A significant loss early on, even if it is recouped later, dramatically increases the risk that a retiree will run out of money. Sequence risk is undeniably important in the retirement phase, but most analyses simply start with an assumption that the retiree begins with some large lump sum. But this glosses over the fact that in a DC environment, it takes about 40 years of contributions, matches, and market returns to get to that final lump sum. Sequence risk rears its ugly head wherever cash flows matter – and we know cash flows matter both in the retirement and accumulation phases.

This paper tries to demonstrate the importance of sequence risk during the accumulation phase. The basic message is this: Even employees, like Joe, who apparently do everything “right” by traditional playbooks – stay in the plan, defer their income religiously, take full advantage of their company match, and even get the exact expected return from their investments – can still suffer from the effects of sequence-of-returns risk. That is, they get good returns, but they get them in a bad order (or, more specifically, they get good returns early in their career, and bad returns later when their account balance is higher).

Analysis: quantifying the significance of sequence risk

The Jane and Joe example above is interesting, but it only represents one “run” of history. Another method for measuring the impact is to simulate multiple runs, even thousands, through a stochastic process (akin to Monte Carlo simulations). We can artificially create 20,000 simulations of 40-year runs of history. Before we begin, however, let’s remind ourselves that as it relates to returns, employees confront both investment risk and sequence risk. Investment risk, the variability and distribution of possible returns around the mean, or expected return, is easily observable. Sequence risk, on the other hand, is much more insidious and harder to observe. We need to isolate it in order to see its significance.

We’ll start with a simple case, as above, by constructing a very typical target date allocation structure.[4] We ran 20,000 simulations,[5] and the output (see Exhibit 2) is a distribution of both returns (investment risk) and wealth outcomes (a function of both investment risk and sequence risk). The fact that there is a wide distribution of returns and possible wealth outcomes should not be surprising to anybody. Investment risk is well understood.
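
Editor’s note: a sketch of this kind of simulation, using the glidepath from footnote 4 and the capital market assumptions from footnote 5, is shown below. It is our own illustration of the method, not GMO’s code, and normally distributed annual real returns are a simplification.

import numpy as np

rng = np.random.default_rng(0)

# Glidepath from footnote 4: equity weight by age, interpolated between knots.
knot_ages = [25, 35, 40, 45, 50, 55, 60, 65]
knot_wts = [0.90, 0.90, 0.85, 0.78, 0.70, 0.63, 0.53, 0.40]
ages = np.arange(25, 65)                       # 40 working years
eq_w = np.interp(ages, knot_ages, knot_wts)

def simulate(n_sims=20_000, salary0=43_000, growth=0.011, contrib=0.09):
    """Footnote 5's assumptions: stocks 6% real / 18% vol, bonds 2% real /
    5% vol, zero correlation, annual rebalancing, 9% contributions."""
    stocks = rng.normal(0.06, 0.18, (n_sims, ages.size))
    bonds = rng.normal(0.02, 0.05, (n_sims, ages.size))
    port = eq_w * stocks + (1.0 - eq_w) * bonds
    salary = salary0 * (1.0 + growth) ** np.arange(ages.size)
    wealth = np.zeros(n_sims)
    for t in range(ages.size):
        wealth = (wealth + contrib * salary[t]) * (1.0 + port[:, t])
    return wealth, port

wealth, port = simulate()
ann = (1.0 + port).prod(axis=1) ** (1.0 / ages.size) - 1.0
# Footnote 6's cohort: realized 40-year returns within 10 bp of 5% real.
cohort = wealth[(ann >= 0.049) & (ann <= 0.051)]
print(len(cohort), round(cohort.min()), round(cohort.max()))

The spread of wealth outcomes within cohort – identical realized returns, different orderings – is the sequence-risk effect isolated in the exhibits that follow.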

Many, however, might be surprised by the magnitude of sequence risk, which we can isolate by focusing on those simulated histories where the realized returns are identical. In our 20,000 simulations, for example, there were close to 1,400 instances where the realized returns were 5% real.[6] Fourteen hundred Joes and Janes, if you will. Yet even if we control for a given realized return, wealth outcomes are still widely dispersed (see the dotted orange box in Exhibit 2). This is the effect of sequence risk. Each of the 1,400 runs occurred in a unique sequence, some advantageous, some less so. The “luckiest” employee’s 5% return netted her over $1,000,000, while her unluckiest colleague, with the identical 5% return, netted $314,000, a massive, almost unimaginable discrepancy.

So now let’s isolate the orange box and see not just the extremes, but what the entire distribution looks like (see Exhibit 3). It is surprisingly vast. Even the “meat” of the distribution could easily result in a $200,000 to $250,000 gap between experiences. For many, this could be the difference between financial security and financial ruin (i.e., running out of money).[7] The real focus, however, should not be on the middle or the outcomes on the right “tail,” what we have called the “Lucky” Cluster. No, the problem is really with the left tail of the distribution, what we are calling the “Unlucky” Cluster of outcomes. Here, the risk of financial ruin is palpable.

Now, the general intuition behind sequence risk – getting the right returns but in a bad order – is that those unlucky outcomes are likely driven by poor returns or nasty drawdowns in the final phases of someone’s career. Let’s call this period the “Final 10 Years.” And the logic is pretty sound. Early in a career, market returns do not really matter too much for the simple fact that there is not much money in the DC account. Cash flows (employer and employee contributions), on the other hand, are the key driver of growing wealth. As an employee approaches mid-career and beyond, the account base is large enough that cash flows (as a percent of that base) are becoming less important while market returns rise in importance. Exhibit 4 confirms the intuition. The Lucky Cluster (those who experienced the full 40-year 5% annualized returns and ended up with very high wealth) tended to get solid, or even fantastic returns in their Final 10 Years. The Unlucky Cluster, the “Joes” of the world, also earned their 40-year 5% annualized returns overall, but got sub-par and, in many instances, negative returns for their Final 10 Years. Returns in the Final 10 Years have a high correlation with terminal wealth, so it’s important to avoid or mitigate drawdowns during any 10-year time frame (remember, ANY 10-year time frame is somebody’s Final 10 Years).

Static portfolios shrug their shoulders.

Okay. Sequence risk – drawdowns and nasty returns late in a career – is a problem. And sequence risk is really unfair to some unlucky employees. So now what? Unfortunately, there is an entire school of thought that believes sequence risk is just an unfortunate problem. It exists, the argument goes, but there is really nothing one can do about it. In fact, this is an assumption built into every single predetermined glidepath in a static TDF.[8] The assumption underlying their design is that there is no way to forecast future returns. These glidepaths typically make no attempt to adjust their asset mix based upon new information. It is one of the cornerstones of Efficient Market Theory that future returns are largely a “random walk.” Today’s price tells you nothing about future returns. It makes no sense, the argument continues, to adjust a portfolio’s asset allocation because future returns are unknowable. In essence, these TDF portfolios shrug their shoulders in the face of sequence risk and say, “Oh, well.” It’s just a hope and a prayer that your nasty returns don’t occur in those Final 10 Years.

Value…might help

We at GMO have a very different belief. Namely, that future returns are the furthest thing from a “random walk”; we can use well-established valuation metrics to forecast returns and help mitigate potential nasty returns in those Final 10 Years. The Cyclically Adjusted P/E ratio (or CAPE) is one such simple metric for determining whether stock markets are cheap, fairly valued, or expensive (see Exhibit 5). A strategy of owning equities when P/Es were low (i.e., cheap) tended to outperform, while owning equities when P/Es were elevated (i.e., expensive) tended to do quite poorly. There are no useful or reliable tools, we believe, for forecasting returns in the short term, say, one or two years. But we believe the mean-reverting nature of P/Es over longer time horizons means that starting valuations correlate quite strongly with future returns. Exhibit 6 shows that correlations rise to close to 0.6 in the 7- to 10-year range. Therefore, “value” can be quite useful for adjusting a portfolio during anybody’s Final 10 Years.
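
Editor’s note: the Exhibit 6 relationship is straightforward to compute. In the sketch below, cape and prices are hypothetical monthly series supplied by the caller, and the function name is ours.

import numpy as np

def cape_forward_corr(cape, prices, horizon_years=10, periods_per_year=12):
    """Correlation between starting CAPE and the subsequent annualized
    price return over the chosen horizon."""
    cape, prices = np.asarray(cape, float), np.asarray(prices, float)
    h = horizon_years * periods_per_year
    # Annualized forward return starting at each month with h months of data ahead
    fwd = (prices[h:] / prices[:-h]) ** (1.0 / horizon_years) - 1.0
    return np.corrcoef(cape[:-h], fwd)[0, 1]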

More importantly, however, and perhaps more relevant, valuation-based sensitivity to the attractiveness of stocks can also help mitigate drawdowns. Cheap assets, as it turns out, tend to hold up a bit better during market tumult. Stocks typically become cheap for a reason, usually because something bad is happening or is feared to be coming. As a result, valuation multiples drop (i.e., stocks get cheaper). If something bad then actually does occur, much of the bad news has already been discounted. History bears this out, as shown in Exhibit 7, which looks at starting CAPE ratios over the last 70 or so years and tracks how different starting valuation levels performed during drawdown periods. Cheap stock markets suffered drawdowns, of course, but they tended to suffer quite a bit less than expensive markets.

Editor’s note: A discussion on implementation with glidepaths can be found in the original paper.

Conclusion

The story about Jane and Joe is, of course, made up. But assigning real names to imaginary individuals was done on purpose. We are morphing from an era of pooled risks in a DB plan, to a DC era of individually-owned risks. Sequence risk is just one of these risks, but an important one. With this transition comes a need for heightened sensitivity to the potential for financial ruin; that “left tail” is not some nameless and faceless actuarial cohort: It is a collection of specific individuals with very specific DC account balances. Risk is no longer just the classic definition of “standard deviation of returns” (if it ever was); no, it is much more personal. It is the risk that individuals may not “get” the right returns they need for a healthy retirement, or, more cruelly, they get them, but in the wrong order. This risk of financial ruin is the right prism through which to examine the issue and look for DC solutions. A valuation-sensitive approach to dynamic asset allocation can help mitigate these “new” risks that have been foisted upon the unlucky individual Joes of the world.

[1] National Association of Colleges and Employers, Average Starting Salary Survey, 2012.

[2]  Hewitt Associates, Trends and Experience in 401(k) Plans, 2009. Throughout most of recent history, a 50 cent match per dollar on the first 6% deferral was the most common match formula. Recent surveys have indicated plans are moving toward more generous matches, meaning cash flows will be even more important going forward, as they relate to sequence risk.

[3] Dr. Wade Pfau, “Sequence Risk vs. Retirement Risk,” RetirementResearcher.com, March, 2015; Larry R. Frank, John B. Mitchell, and David M. Blanchett, “Probability-of-Failure-Based Decision Rules to Manage Sequence Risk in Retirement,” Journal of Financial Planning, Vol 24, Issue 11, p 44, November 2011; R.G. Stout, “Stochastic Optimization of Retirement Portfolio Asset Allocations and Withdrawals,” Financial Services Review, 2008; and Dr. John B. Mitchell, “Withdrawal Rate Strategies for Retirement Portfolios: Preventive Reductions and Risk,” presented at the Academy of Financial Services, October, 2009.

[4] Our “model” target date glidepath begins with an allocation of 90% equities at ages 25 to 35, glides to 85% equity at age 40, 78% at age 45, 70% at age 50, 63% at age 55, 53% at age 60, and 40% at age 65. This is in line with what major glidepath providers do in the industry.

[5] Key assumptions in the exercise are the following: a) a simple two-asset-class portfolio; b) expected stock return is 6% real, with 18% volatility; c) expected bond return is 2% real, with 5% volatility; d) the correlation between stocks and bonds is zero; e) the portfolio is rebalanced annually; f) the demographic data assumes a starting salary of $43,000 and a 1.1% real rate of growth for 40 years; and g) the deferral rate and company match equate to 9% contributions.

[6] We looked at realized returns between 4.9% and 5.1%, or plus/minus 10 basis points around 5% real.

[7] Defining risk as “financial ruin” (i.e., running out of money) is addressed at length in a series of papers written by our colleagues. See Ben Inker and Martin Tarlie, “Investing for Retirement: The Defined Contribution Challenge.” Also see Jim Sia, “The Road Less Traveled: Minimizing Shortfall and Dynamically Allocating in a DC Plan.” Each of these papers can be found at www.gmo.com.

[8] Yes, static TDFs do change their allocations through time, but they do so in a predetermined manner. There is no judgment involved. In other words, we know exactly what the equity allocation is going to be 10 years from now or 20 years from now, and the mix will give no consideration whatsoever to the market environment or the valuations of stocks and bonds at that time. All the TDF “knows” is that a participant is of a certain age, and the participant is therefore forced into an x% allocation to equities and a (100 – x)% allocation to bonds.

Contributor(s)

Peter Chiappinelli

Peter Chiappinelli is a member of GMO’s Asset Allocation team. Prior to joining GMO in 2010, he was an institutional portfolio manager on the asset allocation team at Pyramis Global Advisors, a subsidiary of Fidelity Investments. Previously, he was the director of...

Ram Thirukkonda, CFA

Ram Thirukkonda is a member of GMO’s Asset Allocation team. Prior to joining GMO in 2014, he was a quantitative analyst at Batterymarch Financial Management. Previously, he worked at Fidelity Investments, most recently as the Director of Architecture. Mr. Thirukkonda earned his...

RETHINKING INVESTMENT PERFORMANCE ATTRIBUTION

Editor’s note: this was one of the most widely read papers at SavvyInvestor.net in 2015. It was originally published in the Rotman International Journal of Pension Management (Volume 7, Issue 2, Fall 2014).  An electronic copy is available at http://ssrn.com/abstract=2497513.

Abstract: Proprietary information and data-processing systems have become key competitive differentiators for investors; better systems provide better data, which in turn drive better investment decisions and performance. But while the best systems are multifaceted and touch all aspects of the investment organization, one component of such systems is increasingly important: the measurement and attribution of investment performance. Performance attribution should do more than just explain the past; it should also be a tool to make better future investment decisions. This article describes the Alberta Investment Management Corporation’s journey to develop a performance attribution system as an investment management tool, in the hope of contributing to the institutional investor debate on how to best address this important topic.

Rebuilding AIMCo’s Information Architecture

The Alberta Investment Management Corporation (AIMCo) was established in 2008 as the arm’s-length investment manager of public-sector financial assets in the province of Alberta, Canada. Today AIMCo manages approximately CAD$70 billion on behalf of 26 pension, endowment, and government reserve fund clients. The fund’s overarching objective is to earn incremental return on risk above what its clients could achieve by passively implementing their policy asset mixes with equity and fixed-income index funds.

At its inception, AIMCo had business, risk, and information systems that were either obsolete or lacking critical components; information was held together with spreadsheets. There was also little consistency in how data flowed into and out of the organization. The various best-of-breed business system components adopted did not speak the same data language, and thus needed translators. AIMCo began addressing these problems in 2009; rebuilding the fund’s information architecture took four years.

One of the main components of the new architecture is a centralized data warehouse that stores information and shares it across the firm via an information bus, a tool that transfers data between software programs. This reduces the number of redundant systems that previously captured what was ultimately the same information, and it minimizes the reconciliation effort required to keep all information in sync. The data structure was designed to store more detailed, granular data that supports in-depth queries in real time, allowing investment professionals to select various views of portfolios and gain unique insights. While maintaining alignment with industry standards for data preservation and integrity has been the primary concern, AIMCo has also focused on building mechanisms that allow our data to be enriched in ways specific to portfolio managers’ own views and investment beliefs.

Thus, the new data infrastructure supports strong internal audit and compliance processes for data while also allowing AIMCo to be competitive through enriched analytics that cannot be purchased from outside vendors. With these more reliable systems in place, AIMCo has been able to take better data and make them flow smoothly through new and powerful systems, and thus to develop a performance attribution capability that is robust and grounded in objective data.

Benchmarks

AIMCo has a client-centric benchmark philosophy that largely follows the CFA Institute’s guideline that benchmarks must be “investable” (CFA Institute 2013). Benchmarks must also motivate managers to meet the organization’s central objective: to earn a higher long-term risk-adjusted return, net of costs, than AIMCo’s clients could achieve by passively investing in equity and fixed-income market indices. Market returns are a logical starting point for measuring the effectiveness of a manager’s attempts to do better: incremental returns can come from asset allocation; from security selection within market bond and stock universes; and from investment in illiquid asset classes such as real estate, infrastructure, timberland, and private equity.

Most disagreements about benchmarks are related to private illiquid assets, whose various unique and idiosyncratic characteristics often make benchmarking difficult. Indeed, the theoretical rationale for investing in private assets is the existence of an illiquidity premium over the nearest listed proxy from a return-on-risk perspective. For example, private equity should have a higher return than listed equity, because with good management and hard work, the transformational activity (as opposed to pure financial engineering) of private equity management should be rewarded in the long run. Similarly, the return and risk profile of investments in infrastructure and timberland lies somewhere between those of index-linked bonds and equities, so active management should earn an illiquidity premium over the nearest liquid proxy. Managers should compare listed and unlisted opportunities and should invest in private assets only if the expected return warrants it.

There are two problems with this rationale, however. First, because what should be true in the long run is not necessarily true in the short run, judging long-term strategies by short-term outcomes is problematic, and no choice of market-based benchmark can get around this issue. Second, because clients often assume that the illiquidity premium is a given, they argue it should be part of the benchmark instead of being viewed as part of the manager’s value-added return.

In our experience, aspirational return expectations are of little use in motivating responsible manager behavior. Simply stating that “we need at least liquid returns plus x%” does not make it possible to produce that result. A variant of this approach is a benchmark tied to the aggregate return expectation of “CPI+Y over an n-year horizon,” which appears in many investment policies.

Bearing in mind the importance of being thoughtful and focused in selecting benchmarks, AIMCo started a broad review in 2008. This review found that AIMCo’s predecessor organization had been operating with 94 different benchmarks.2 Many client benchmarks had fixed add-ons to CPI or other indices, in some cases as high as +8% (Table 1). Today, AIMCo judges its managers by market-based benchmarks,3 and many clients are now doing the same.

Illiquid Banking

AIMCo’s large and rising target allocation to illiquid assets cannot be achieved immediately, which makes calculating an “allocation effect”4 for these assets meaningless. For example, a client’s policy allocation to real estate may be $5 billion, while the portfolio may contain only $3 billion in these assets. While investment organizations can grow their real estate investments over time toward the clients’ targets, they cannot “close the underweight” quickly. Suppose the real-estate benchmark outperforms the fund’s aggregate benchmark by 2% in a quarter. In that case, the Brinson–Fachler decomposition formula, as provided below (Brinson and Fachler 1985), would indicate that $40 million in value (the $2 billion underweight times the 2% relative outperformance) was lost to allocation effect – allocating less than the target to a benchmark with above-average performance. But the “underweight” was not anyone’s decision, and the manager cannot fix it.

Our solution to this problem is “illiquid banking.” We set the benchmark weights for illiquid assets relative to actual weights, then invest the deviation from policy for a given asset class in stock and bond markets that represent its closest proxy. Except for frictional noise, this eliminates the allocation effect, while still keeping all assets within the total fund attribution analysis. The challenge is in the initial review and approval of such a policy, as well as in setting it up and maintaining it.
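A toy sketch of those mechanics follows; the function, weights, and proxy split below are our illustration, and AIMCo’s actual policy setup is certainly more involved.

```python
def illiquid_banked_benchmark(policy_w, actual_w, proxy_split):
    """Set the illiquid asset's benchmark weight to its actual weight and
    'bank' the unfilled policy allocation in liquid proxy benchmarks.
    Weights are fractions of the total fund; proxy_split maps each proxy
    to its share of the deviation from policy."""
    deviation = policy_w - actual_w
    bank = {proxy: share * deviation for proxy, share in proxy_split.items()}
    return actual_w, bank

# A 5% policy weight vs. a 3% actual real estate book: the benchmark holds
# 3% real estate, with the remaining 2% split across the liquid proxies.
bench_w, bank = illiquid_banked_benchmark(
    0.05, 0.03, {"listed equity index": 0.5, "bond index": 0.5})
print(bench_w, bank)  # 0.03 {'listed equity index': 0.01, 'bond index': 0.01}
```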

Basic Attribution

How can we most usefully attribute a fund’s investment return? Where active management is successful, it creates value-added performance for an investment organization beyond what passive allocations to public markets can achieve (see de Bever et al. 2013). Outperformance can be achieved through asset allocation, security selection, or some combination of the two. An investment organization can typically determine relatively easily whether active management is, in fact, adding value to the portfolio; the challenge lies in identifying the sources of those excess returns and in assessing precisely how they were created. We refer to this process as performance attribution.

To better determine whether and how managers are adding value, many organizations turn to the well-known Brinson–Fachler daily decomposition of asset class return (Brinson and Fachler 1985), a version of which can be written as follows:
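For asset class $i$, with portfolio weight $w_i$, benchmark weight $W_i$, portfolio return $R_i$, benchmark return $B_i$, and overall benchmark return $B$, a standard version reads:

$$\text{Allocation}_i = (w_i - W_i)(B_i - B), \qquad \text{Selection}_i = W_i (R_i - B_i), \qquad \text{Interaction}_i = (w_i - W_i)(R_i - B_i).$$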

While this decomposition is useful, we find it somewhat flawed. It decomposes growth rates, which are not additive (two successive 10% returns compound to 21%, not 20%), leading to residual terms that can sometimes be significant. It also assumes fixed asset-class weights, which is problematic for illiquid asset classes. Finally, it does not fit investment strategies that require managers to use their skill to evaluate opportunities across components of asset classes. In other words, the Brinson–Fachler protocol is based on a set of investment beliefs that are underpinned by silos – a mindset that investment organizations should try to overcome. For all these reasons, AIMCo believed there had to be a better way.

Decision-Based Attribution: A Better Way

We decided to step back and determine what it would take to structure attribution so that it mirrors the way AIMCo actually makes investment decisions. In the process of doing this, we realized that four factors are important in attribution: system and data quality; selecting proper benchmarks; properly treating allocation effects in illiquid asset classes; and integrating the actual way we make investment decisions into the process of attribution. The result of this work is a process we call “decision-based attribution.”

Decision-based attribution reflects the way in which the organization actually makes investment decisions. Active management consists of a series of decisions to allocate funds to asset categories (i.e., asset allocation decisions), with security selection as the lowest-level decision. Investment performance can depend on decisions made at many levels and by many groups within the organization. For example, the CIO and an investment strategy group typically determine the allocation between equities and fixed income; the heads of the equity and fixed-income asset classes and their strategy teams make decisions about allocation among the various markets within each asset class; and, finally, portfolio managers (supported by their analysts) make decisions to buy specific stocks and bonds.

Identifying which agents in this ecosystem are truly adding value can be quite challenging. But if an organization, on behalf of its asset owners, can understand how much value each decision (and the respective team) contributes to overall active performance, great opportunities open up to capitalize on its competitive advantages and improve investment performance. And, in our view, this is where decision-based attribution can play an important role.

If the attribution is to be meaningful, however, the structure and order of decisions in the attribution (the “decision tree”) must also accurately reflect how the organization actually makes investment decisions. Developing the decision tree requires collaboration between various investment groups within the organization; it is an iterative process that takes some time. Figure 1 offers an example of what such a tree might look like. For example, AIMCo makes decisions to allocate between asset classes (e.g., between public equities and fixed income) relative to the aggregate client benchmark before making decisions to allocate among regions. After all decisions have been enumerated, the organization can explain its achieved return and risk relative to chosen benchmarks without invoking economically meaningless “interaction effects” or “temporal smoothing.”
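To make the idea concrete, here is a toy, single-period sketch of attribution over such a decision tree. It is our illustration of the concept, not AIMCo’s production methodology, and the tree structure and numbers are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    name: str
    port_w: float     # portfolio weight within the parent node
    bench_w: float    # benchmark weight within the parent node
    bench_ret: float  # benchmark return for this node
    port_ret: float = 0.0                 # achieved return, used at leaves
    children: list = field(default_factory=list)

def attribute(node, parent_bench_ret, scale=1.0, path=""):
    """Credit each decision with an allocation effect against its parent's
    benchmark and each leaf with a selection effect; `scale` carries the
    product of portfolio weights down the tree so effects are in fund terms."""
    label = f"{path}/{node.name}"
    effects = {label + " allocation":
               scale * (node.port_w - node.bench_w)
                     * (node.bench_ret - parent_bench_ret)}
    for child in node.children:
        effects.update(attribute(child, node.bench_ret,
                                 scale * node.port_w, label))
    if not node.children:
        effects[label + " selection"] = (
            scale * node.port_w * (node.port_ret - node.bench_ret))
    return effects

# Hypothetical tree: total fund -> asset classes -> regions (leaves)
tree = Decision("fund", 1.0, 1.0, 0.050, children=[
    Decision("equities", 0.65, 0.60, 0.070, children=[
        Decision("Canada", 0.40, 0.50, 0.060, port_ret=0.065),
        Decision("global", 0.60, 0.50, 0.080, port_ret=0.075)]),
    Decision("fixed income", 0.35, 0.40, 0.020, port_ret=0.025)])

for decision, effect in attribute(tree, 0.050).items():
    print(f"{decision}: {effect:+.4%}")
```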

Performance Attribution System Implementation

AIMCo uses global tactical allocation across asset classes as one way to add value (de Bever et al. 2013). However, teams can also make opportunistic decisions that do not fit neatly within asset classifications. For example, the decision rule might be: if you find an attractive asset of type X, fund it by taking the allocation out of asset class Y, and you will be evaluated on whether that decision added return. The simple allocation/selection decomposition described above could not accurately reflect several such AIMCo decision rules. The organization’s previous mechanical computation of allocation effects did not always reflect the underlying decision process, which created frustration for managers.
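A minimal sketch of evaluating such an “X funded out of Y” rule (the function name and numbers below are hypothetical):

```python
def opportunistic_value_added(position_size, x_return, y_bench_return):
    """The decision adds value only if the opportunistic asset X beats the
    benchmark of the asset class Y it was funded from."""
    return position_size * (x_return - y_bench_return)

# e.g., a $100M position returning 9%, funded from a class benchmarked at 6%
print(opportunistic_value_added(100e6, 0.09, 0.06))  # 3,000,000.0
```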

After scanning the vendor marketplace, AIMCo quickly learned that few vendors of performance attribution systems were able to build the advanced calculation engine and data-integration capabilities required for decision-based attribution. After examining vendor capabilities and evaluating the alternative solutions available, the organization implemented the chosen system in September 2011. The system’s data-integration capabilities allowed for a speedy, minimally disruptive (“low-footprint”) implementation and even helped to identify weaknesses in the underlying data. To be sure of having the right starting point, AIMCo first asked that this flexible second performance attribution system mimic the old methodology, then added the investment data details (e.g., fully described index constituents and a proper sector and industry classification system) needed to move to decision-based attribution. This transition was completed by August 2012; Table 2 illustrates the result.

Better Information Systems Lead to Better Investment Performance

Before AIMCo implemented the decision-based attribution model, performance “attribution” was simply a decomposition of the total value added into the prescribed “allocation” and “selection” buckets, which took no account of how managers made their investment decisions. The introduction of the new decision-based attribution system has materially improved AIMCo’s ability to understand the relationship between investment decisions and investment results.

A prerequisite for strong investment performance is providing good people with regular performance evaluation feedback from effective information systems. In turn, this feedback should lead to more informed investment decision making and better performance. While there is always room for improvement, it is already clear that better data, systems, benchmarks, and decision-based attribution are having a measurable impact on AIMCo’s ability to meet its clients’ expectations.

Endnotes

1 We thank Albert Yong and Andre Mirabelli for their contributions of material and editorial assistance to this article.

2 In AIMCo’s multi-client framework, clients set their own asset class (product) benchmarks. For example, for global equities, some clients may use the MSCI World index while others use MSCI ACWI and yet others use certain percentage allocations to regional components: the S&P 500, S&P Europe 350 (or MSCI Europe), and MSCI EAFE. In addition, some clients prescribe certain percentage allocations to large-caps and small- or mid-caps within Canadian and global equities. Taking into account all the different asset class / benchmark combinations, AIMCo found that it was managing to a set of 94 benchmarks – a clearly inefficient situation that significantly increased the operational burden on managers instead of giving them clear performance targets.

3 The exception here is benchmarks for real estate, which – because real estate as an asset class has a long and well-documented performance history – do not require proxies to the nearest listed asset class.

4 Recall that an allocation effect measures the effect of the manager’s decision to allocate funds to an asset category relative to the benchmark allocation (or weight) to that category; it is not affected by portfolio performance.

References

CFA Institute. 2013. “Benchmarks and Indices: Issue Brief.” http://www.cfainstitute.org/ethics/Documents/benchmarks_and_indices_issue_brief.pdf

de Bever, Leo, Jagdeep Singh Bachher, Roman Chuyan, and Ashby Monk. 2013. “Case Study: Global Tactical Asset Allocation for Institutional Investment Management.” Investments & Wealth Monitor (March/April): 49. http://www.imca.org/publication-issues/MarchApril-2013%E2%80%94Manager-Search-SelectionAsset-Allocation

Brinson, Gary P., and Nimrod Fachler. 1985. “Measuring Non-US Equity Portfolio Performance.” Journal of Portfolio Management 11 (3): 73–76. http://dx.doi.org/10.3905/jpm.1985.409005

Contributor(s)

Jagdeep Singh Bachher

Jagdeep Singh Bachher was Executive Vice-President at the Alberta Investment Management Corporation (AIMCo) when this article was written; he is now CIO of the University of California (USA). Leo de Bever is CEO of AIMCo (Canada). Roman Chuyan is President and CIO...

Leo De Bever
Roman Chuyan
Ashby Monk

BLOOMBERG BRIEFS: CHARTS THAT SHOW GLOBAL STOCK MARKETS ARE TEETERING AT KEY SUPPORT

Editor’s note: this originally appeared in Bloomberg Briefs on January 21, 2016 and is extracted below.

The MSCI World Index is one of six major stock indexes that have broken, or are testing, key support levels amid the market selloff that has intensified since the start of 2016. The MSCI World Index’s former support line may now act as a resistance level, creating a bleak short- to medium-term outlook.

MSCI World Index Shows Bearish Head and Shoulders

The six charts below show the MSCI World and five other developed market indexes. In each of these markets, support levels based on the lows of the last six months have been met or broken in recent trading.
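One simple way to mechanize that support definition is sketched below, assuming a date-indexed series of daily closes and roughly 126 trading days in six months; the window choice is ours, not the authors’.

```python
import pandas as pd

def support_breaks(closes: pd.Series, window: int = 126) -> pd.DataFrame:
    """Flag closes that breach the trailing ~six-month low."""
    support = closes.rolling(window).min().shift(1)  # prior lows only
    return pd.DataFrame({"close": closes,
                         "support": support,
                         "broke_support": closes < support})
```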

MSCI World Dips Below Support

EURO STOXX Broke Support, Then Rallied

Nikkei 225 Index Trades Through Support

S&P 500 Continues to Flirt With Support

FTSE 100’s Support Has Held

Shanghai Composite Bounces Off Support

Eoghan Leahy and Oliver Woolf are technical analysis specialists at Bloomberg LP. They can be reached at eleahy6@bloomberg.net and owoolf@bloomberg.net. This story was written by Bloomberg LP employees who may be involved in the selling of the Bloomberg Professional Service and then edited by the News Department.

ETHICS CORNER: WHAT CAN I SAY ABOUT THE CMT EXAM ON MY RESUME?

This might be the most frequently asked question related to ethics – what can I say about my participation in the CMT program on my resume?

Candidates in the program want to know how to provide details on their accomplishments without violating any ethical guidelines. The answer is that you can describe your status in the CMT program, but the description must be factual. Below are specific guidelines for each step in the process:

You’ll notice that after registering for an exam you are a candidate for that level. After passing an exam you can note that you passed that level as well as previous levels.

The table above presents guidelines; variations are allowed. Instead of saying you passed an exam, it is acceptable to say you completed the exam, or to use any other wording that conveys the same meaning.

Once you earn the CMT, it is also important to use the term correctly. You should never use CMT as a noun; it should always be used as an adjective. To test whether you are using the designation correctly, omit “CMT” from the sentence – the sentence should still make sense.

For example, you could say “Joe Jones is a CMT charterholder” and that would be correct. Saying “Joe Jones is a CMT” would not be technically correct, although that is a common usage. As a CMT charterholder you should always strive to use the designation correctly.

AMBA: THE FULL CYCLE OF TRADE

Ambarella, Inc. (AMBA) was by far 2015’s ‘everything’ trade. I cannot remember a single stock doing so much in one year from both a bull and a bear perspective. There was literally something for everyone here, technically and fundamentally. The chart below shows the initial breakout, which really was just a short squeeze on a market-leading name. The stock broke out right around May, as relative-strength stocks were becoming the consensus trade and money had fewer places to go.

For seasoned participants, this is where things got interesting. Below is one of the quiet techniques I’ve picked up over the years. As a fusion trader, I focus on both fundamentals and technicals. Technicals include sentiment, and reading sentiment is, to me, one of the most important skills to build in order to really outperform.

Study the chart below. As AMBA moved into the 120s, trading at 20x sales, I stalked the message boards looking for just the right psychology. Often we look at sentiment simplistically – bull vs. bear – but there is much more to it. I have identified four key components of bullish sentiment that are synonymous with tops: arrogance, greed, complacency, and the ‘only game in town’ mindset. AMBA had them all in full bloom, coming off nothing more than a technical short squeeze. For those who really want to learn more about sentiment, zoom in on the chart to read the message-board posts; it takes some effort, but it is all there and worth seeing.

Having done this long enough, I know the right combination when I see it. At the time, the market cap was over $2.6 billion on approximately $200 million in sales. With that in mind, imagine walking into an annual investor conference and gauging the atmosphere.

The atmosphere was very aggressive, with all the right concepts and the requisite name-calling directed at the doubters. And then came the best part of the story: AMBA put in one of the most textbook double tops I have ever seen. Ever.

For true technicians, the most important components above are the declining upside volume on the retest, the six-week distance between the two peaks (making it perfect for a failure), and the candles at the top. From there it was lights out.
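For readers who want to experiment, a crude mechanical screen for this kind of failed retest might look like the sketch below; the tolerance is an illustrative choice of ours, the six-week spacing check is left as a comment, and none of this is a tested trading rule.

```python
import pandas as pd

def double_top_retest(close: pd.Series, volume: pd.Series,
                      tol: float = 0.02) -> bool:
    """True if the back half of the window retests the front half's high
    (within tol) on lighter volume. A fuller screen would also require
    roughly six weeks between the two peaks."""
    half = len(close) // 2
    p1 = close.iloc[:half].idxmax()  # first peak
    p2 = close.iloc[half:].idxmax()  # retest
    similar_highs = abs(close[p2] / close[p1] - 1.0) < tol
    lighter_volume = volume[p2] < volume[p1]
    return bool(similar_highs and lighter_volume)
```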

Another example of a textbook trade based on technicals and fundamentals is seen in the next chart, which shows crude oil.

For now, there is no way to know when the next uptrend in oil or AMBA will begin.

CHART OF THE MONTH

Editor’s note: Crestmont Research maintains charts on a number of fundamental data series. Much of this data can be applied by technical analysts. P/E ratios and dividend yields can be interpreted as sentiment indicators while inflation and Treasury yields are inputs for intermarket analysis. These charts were updated in January by Crestmont Research.
