
"To check overconfidence, rely on pre-mortems and contrarian teams"

Michael Mauboussin, Director of Research, BlueMountain Capital Management

Published Jun 05, 2019 | 11-minute read

Michael Mauboussin wears multiple hats – analyst and fund manager, writer and teacher. Now director of research at BlueMountain Capital Management, he was earlier head of global financial strategies at Credit Suisse and chief investment strategist at Legg Mason Capital Management. Among the many books to his credit, three stand out — The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing; Think Twice: Harnessing the Power of Counterintuition; and More Than You Know: Finding Financial Wisdom in Unconventional Places. He has been an adjunct professor of finance at Columbia Business School since 1993 and continues to be on the faculty of The Heilbrunn Center for Graham and Dodd Investing. His distinction lies not just in the Dean's Award for Teaching Excellence he received in 2009 and 2016, but in the progress of his students — a great example being Todd Combs, one of the two portfolio managers handpicked by Warren Buffett. He is an authoritative voice in the world of value investing, with influential work on behavioural finance and moats to his credit.

You spoke about common behavioural mistakes that investors make — the base-rate problem, overconfidence, regression to the mean and the oversimplification of multiples. Can you elaborate on these?

The classic way to look at problems is to gather lots of information, combine it with one's own experience and inputs, and project them into the future. That's how analysts typically look at a stock — they gather information, build a model and then forecast. The outside view, that is, the application of base rates, asks a much simpler question: what happened when other companies were in similar situations before? It allows you to understand a company's performance in a historical context. As is often the case, especially for growth companies, investors ignore the base rate and focus too much on their own information and experience. In many cases, they end up being too optimistic in projecting the growth rate while valuing these companies.

It ties back to one of the well-documented findings in finance — that growth stocks typically underperform value stocks. One way to think about that is that growth stocks carry high expectations and value stocks carry low expectations because, at both extremes, not enough weight is placed on base rates of performance. Hence, base rates are a really important way to ground an investor's thinking about corporate performance.

You have to understand base rates in terms of a number of important financial metrics — sales growth rates, operating profit margins, return on invested capital patterns, earnings growth rates and so forth — to see how various companies fit into some sort of historical context.
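A minimal sketch of this outside view in Python: place an analyst's growth forecast within a distribution of historical outcomes for comparable companies. Every number here is invented for illustration, not drawn from Mauboussin's research.

```python
# Base-rate check: how does the "inside view" forecast compare with
# what similar companies actually achieved? All figures are illustrative.

import statistics

def base_rate_percentile(forecast: float, history: list[float]) -> float:
    """Fraction of historical outcomes that fell below the forecast."""
    below = sum(1 for outcome in history if outcome < forecast)
    return below / len(history)

# Hypothetical base rate: 3-year sales CAGRs observed for past
# companies of comparable size.
history = [-0.05, 0.00, 0.02, 0.04, 0.05, 0.07, 0.09, 0.12, 0.18, 0.30]

forecast = 0.25  # the inside view: our own 25% growth projection
pct = base_rate_percentile(forecast, history)
print(f"Median historical growth: {statistics.median(history):.1%}")
print(f"A {forecast:.0%} forecast exceeds {pct:.0%} of historical outcomes")
```

If the forecast sits far out in the tail of the base-rate distribution, that is a signal to revisit the assumptions rather than proof the forecast is wrong.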

Does that apply to valuations also?

Valuation is tied to the performance of a company. So, from first principles, a discounted cash flow model is meant to represent the growth in free cash flows of a business over time. Over time, excess returns — that is, returns above the cost of capital — are competed away and, ultimately, a company's valuation settles at the so-called steady-state or commodity multiple.

Companies have high growth rates and high returns on capital when they are young and, hence, the price-earnings (PE) multiple is very high. But over time the PE multiple tends to migrate towards a commodity multiple. Historical multiples need to be considered in the context of the past and expected future performance of companies. Eventually, all multiples veer towards commodity multiples.
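A rough arithmetic sketch of the steady-state idea: once growth adds no value, the business is worth a perpetuity of current earnings, so the justified P/E collapses to one over the cost of capital. The 8% cost of capital below is an assumption for illustration.

```python
# Commodity multiple: when returns equal the cost of capital, growth
# is value-neutral and value = earnings / cost of capital.

cost_of_capital = 0.08  # assumed for illustration

commodity_pe = 1 / cost_of_capital
print(f"Commodity P/E at an {cost_of_capital:.0%} cost of capital: "
      f"{commodity_pe:.1f}x")
# -> 12.5x: the multiple a business justifies once its excess returns
#    have been fully competed away
```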

Going back to behavioural mistakes, how do you create a framework to know where regression to mean would apply, and where this could be defied?

Regression to the mean is an idea that all investors recognise as very important, but it's a tricky concept and many don't get it right. Regression to the mean says that an outcome that is far from average will be followed by an outcome with an expected value closer to the average. So, extremely high and extremely low outcomes will move towards some sort of average in the future. Regression to the mean needs to be measured using a consistent metric over time; sales growth rates, earnings or return on capital would be common metrics in this type of analysis. You are correlating the metric with itself across periods. Any time the correlation between those two measures is less than one, you are going to have regression towards the mean. The question is: how rapidly does that tend to occur?

We like to break this down into what we call the luck-skill continuum. Imagine a continuum where, on one side, are activities that are all luck. So, you have lotteries or lucky dips, where the outcome is completely luck and there is no skill whatsoever. At the opposite end is all skill and no luck — that might be chess or running races, where there is no luck. Everything in life is between those two extremes. Here's the rule of thumb: if your activity is close to the luck side of the continuum, regression towards the mean tends to be very rapid. If you won the lottery yesterday, that is great, but there is no reason to expect to win either today or tomorrow.

In contrast, if your activity is on the skill side, regression is very slow. So, one of the ways we try to quantify the rate of regression towards the mean is to look at the correlations. High correlations over time mean slow regression to the mean; low correlations mean rapid regression. The regression for sales growth rates tends to be fairly meaningful — high growth rates tend to come down pretty fast.
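A small sketch of that measurement in Python: correlate a metric with itself across two periods. Correlations near one imply slow regression (skill-like activities); near zero, rapid regression (luck-like). The growth figures are made up for illustration.

```python
# Rate of regression to the mean: correlate a metric (here, sales
# growth) with itself across two periods. The closer the correlation
# is to zero, the faster results revert to average.

import statistics

# Hypothetical growth rates for the same six companies in two periods
period1 = [0.25, 0.18, 0.12, 0.08, 0.03, -0.02]
period2 = [0.12, 0.10, 0.09, 0.07, 0.05, 0.04]

r = statistics.correlation(period1, period2)  # Python 3.10+

# With correlation r, the expected standardised deviation from the
# mean in period 2 is r times the deviation observed in period 1.
print(f"correlation = {r:.2f} (1 => no regression, 0 => full regression)")
```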

Earnings growth rates are fascinating because the correlations are negative. That says companies that grow very rapidly in one period tend, on average, to grow more slowly in the next than companies that were growing below average. Hence, earnings are very, very difficult to predict. If you can predict earnings, obviously you can make a lot of money. But they are one of the most difficult series to predict in finance.

Mistakes investors and even leaders make often stem from overconfidence. How do you fight overconfidence? One way is to have a trusted person to bounce ideas off, similar to how Warren Buffett has Charlie Munger. Are there any other ways to steadfastly avoid overconfidence?

The main manifestation of overconfidence is thinking that you understand the potential outcomes more accurately than you do. You tend to forecast a range of outcomes that is too narrow. The way to offset that, to some degree, is to encourage people to think about a fuller distribution of outcomes, and there are a number of techniques to do that. One is the base rate that we spoke of, as it can expand your thinking. The second would be a technique such as the pre-mortem.

A post-mortem is performed after a patient has died: we ask what could have been done differently and learn from the mistakes. A pre-mortem is pretending we have already made the decision, launching ourselves into the future and asking what went wrong. By doing that, we can offset overconfidence. And the last thing is to have a red team. That is, essentially, where some people in the organisation advocate a point of view, and the red team is the group with the opposite view, arguing against the prevailing case. These can all be mechanisms to avoid overconfidence.

Are there organisations or funds that have institutionalised this practice — one set of analysts coming up with ideas and another set shredding them?

The genesis of red team/blue team is actually military strategy; the military has been doing it for many years. One of the more contemporary examples, outside investing, is cyber security. You build a defence for your own organisation to protect your digital data, and then you hire a hacker to attack your system and find out where the vulnerabilities are. That is an example of being alert both about protecting yourself and about challenging yourself to see if there are vulnerabilities in your case.

You say that multiples are like shorthand for valuation. Now, what are the common mistakes in interpreting multiples?

Multiples are not valuation but a way to represent the process of valuation, and it is important to keep those separate. What's good about shorthand is that it saves you time; what's bad is that you may not understand the economic implications of the multiple assigned. The argument I was making in my piece about price and EV/Ebitda multiples is to constantly remind people to go back to basics and understand what economic implications various multiples suggest. Probably the biggest problem with multiples is that they can do a sub-optimal job of reflecting growth rates and, most importantly, capital intensity. At the end of the day, a multiple is a function of return on incremental capital and growth. And there are some core principles that are worth revisiting.

The first is, if you are earning the cost of capital — that is, if your return on invested capital (ROIC) is equal to the cost of capital — then growth makes no difference whatsoever, and the justified multiple is the commodity multiple.

The second principle is that if you are earning above your cost of capital, that is great, as the faster you grow, the more wealth you create. High-return, high-growth companies become extremely sensitive to growth rates. So, you often see that if a high-flying company misses earnings by just a little bit, its valuation goes down enormously. People sometimes don't make those connections mathematically.

The third and final observation is that if you are not earning your cost of capital, the more you grow, the more wealth you will destroy. You see that in mergers and acquisitions. Company A will buy company B and it will be accretive to earnings, but it will be destructive to value. 

So, it is important to recognise that multiples are a proxy for expectations. It can be the case that low multiples are assigned to companies with low returns on capital or to a declining business or industry, and that may be justified. Sometimes low expectations are justified by fundamentals, and sometimes high expectations are justified by fundamentals too — so a high multiple may not only be appropriate but may in fact understate the ultimate value of a company. It is very important to be disciplined, to deconstruct what a multiple means and link it back to the economic drivers of the business, so you understand precisely the bet you're making.
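All three principles fall out of the standard value-driver formula, value per unit of NOPAT = (1 − g/ROIC) / (WACC − g). A hedged Python sketch with illustrative inputs — the 8% cost of capital and the ROIC/growth combinations are assumptions, not figures from the interview:

```python
# Illustrating the three principles with the standard value-driver
# formula: multiple = (1 - g/ROIC) / (WACC - g).
# The perpetuity only converges for g < WACC.

def justified_multiple(roic: float, wacc: float, growth: float) -> float:
    """Steady-growth multiple implied by ROIC, cost of capital and growth."""
    assert growth < wacc, "perpetuity formula requires g < WACC"
    return (1 - growth / roic) / (wacc - growth)

wacc = 0.08
for roic in (0.08, 0.16, 0.04):      # at, above and below the cost of capital
    for g in (0.02, 0.05):           # slow vs fast growth
        m = justified_multiple(roic, wacc, g)
        print(f"ROIC={roic:.0%}  g={g:.0%}  multiple={m:6.1f}x")

# ROIC = WACC: 12.5x at either growth rate -- growth is value-neutral.
# ROIC > WACC: the multiple rises with growth.
# ROIC < WACC: the multiple falls with growth (and can even turn negative
# in this stylised formula, as growth consumes more capital than it returns).
```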

You have spoken on the dangers of simplistic comparisons of companies and, instead, suggested looking at a broader range of companies with similar characteristics. Can you give us examples of such comparisons?

So, the overarching point is that analysts often create comparable-company tables. Not surprisingly, when academics investigated how analysts select comparables, they found analysts often select companies that tend to support the case they are trying to make. They are selective to some degree. It's often the case that companies in the same industry have similar economics, so comparable-company tables are not unreasonable. But the argument we make is that, to really understand comparables, one should map companies in terms of similar economic characteristics rather than the industry per se.

One old but very vivid example is Amazon in its early days. People constantly compared it to Walmart because both were in the retail business — they sold things — but, not surprisingly, Amazon had much lower operating margins and much lower capital intensity than Walmart did. So the model that was probably more appropriate at the time, going back 20 years, was actually Dell, which also had low margins and low capital intensity and, hence, high capital velocity. When you try to think about it analogously, the two dimensions to think about are operating profit margin and capital velocity.

Can you elaborate on capital velocity?

Capital velocity is sales divided by invested capital. Let's think about it more formally. ROIC is net operating profit after tax (NOPAT) divided by invested capital — that's the definition. You can disaggregate that into two pieces: NOPAT/sales, which is margin, and sales/invested capital, which is capital velocity. Multiplied together, they give you NOPAT/invested capital.

So, what you want to do is consider companies that are similar to one another in terms of NOPAT/sales margins and turnover. You need companies that get to a high ROIC in the same way. Within an industry, companies might have the same ROIC but with differing capital velocity. Just to take an extreme example in retail: a jewellery store will have very high margins and very low capital velocity, while a grocery store will have very low margins and very high capital velocity. So the jewellery store and the supermarket may get to the same ROIC with very different patterns.
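A small numerical sketch of that decomposition; the jeweller and grocer figures are hypothetical, chosen so two very different business models land on the same ROIC:

```python
# ROIC decomposition: ROIC = (NOPAT / sales) * (sales / invested capital)
#                          = margin * capital velocity.
# Figures below are invented for illustration.

def roic(margin: float, velocity: float) -> float:
    """ROIC as the product of NOPAT margin and capital velocity."""
    return margin * velocity

jeweller = roic(margin=0.20, velocity=0.75)  # high margin, slow turns
grocer = roic(margin=0.03, velocity=5.00)    # thin margin, fast turns

print(f"Jeweller ROIC: {jeweller:.1%}")  # 15.0%
print(f"Grocer ROIC:   {grocer:.1%}")    # 15.0% -- same ROIC, different path
```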

Wherever technology is redefining the mode of delivery or the business itself, how do you estimate the value embedded in an incumbent? For example, we know what happened to JCPenney. But Walt Disney, or any number of media companies with traditional moats, still commands humongous customer loyalty — and yet the whole platform itself is changing.

It is obviously a challenging analytical task. But the framework I usually refer to when I try to figure out such issues is Clayton Christensen's work at Harvard Business School on disruptive innovation. He makes a very important distinction between sustaining innovation and disruptive innovation. Sustaining innovation allows companies to do more of the same. For example, when Dell was selling its computers in 1990, it did so by telephone because there was no internet; its business model was very different from those of Compaq, IBM and others. When the internet came along, it was a case of sustaining innovation: Dell could do what it was already doing much more efficiently, and that strengthened its business. The first question is whether sustaining innovation is allowing stronger players to extend what they are doing.

Now the question that you are getting at is disruptive innovation, which is a whole different way of doing business. And that is where it becomes really challenging. What Christensen argued in the past is that incumbents have to create separate operations, essentially, skunkworks that are completely separate businesses in order to start from the ground up. 

Auto is a good example. The large automobile companies understand that the technology is migrating, and many of them are investing very aggressively. But the question is: how do you straddle the more traditional business of building combustion vehicles and the move to building new electric vehicles? The history of companies transitioning smoothly from one to the other is not very encouraging. Not many companies have been able to do it successfully.

Will that be a good hunting ground for value investors? 

Sure, on both the long and the short side. Whenever there is change, no one knows what is going on. By the way, there are times when companies die sooner than people think. There are also times when stocks can offer value because people have left them for dead and they continue to persist for some time. So, there is opportunity for a value investor.