The prediction problem

Letter # 25, 15 January 2021

“Well mate, I think I might enjoy lunch at home tomorrow,” gloated Glenn McGrath, the noted Australian bowler, from the commentary box on day 4 of the third cricket test match between India and Australia, suggesting that India would be bowled out before lunch on day 5. “Australia are 310 ahead at the moment, but I honestly think India won’t make even 200 in the second innings,” surmised Ricky Ponting, widely considered the most successful test captain in international cricket history.

What followed on day 5, however, demonstrated the monumental resolve of the Indian cricket team, as two Indian batsmen (Vihari and Ashwin) batted a staggering 48 overs on the last day to force a draw. From day 4 onwards, Australia was the only team playing for a win; the Indian team treated the draw as a successful outcome (Ashwin even carried one of the stumps with him as a souvenir).

While this rekindles fond memories of another Indian cricketer, Gautam Gambhir, who batted for 72 overs (close to 11 hours) in 2009 (Ind vs. NZ, 2nd test at Napier, NZ) and close to 31 overs in 2011 (Ind vs. SA, 3rd test at Cape Town, SA) to force draws on foreign soil, the latest Ind vs. Aus test brings an even more interesting aspect to the fore – one that deals with the failure of prediction.

Nate Silver, the founder of FiveThirtyEight.com, deals with this subject beautifully in his book The Signal and the Noise. Nate shot into the limelight after he accurately predicted the outcome in 49 of the 50 states in the 2008 US presidential election and the winner of all 35 senate races. Originally an economics graduate from the University of Chicago, Nate was working at KPMG as a transfer-pricing consultant when he created a predictive system for baseball (PECOTA), which later informed his electoral predictions (his website is named after the 538 electoral college votes).

In the era of big data, one would presume that we are getting better at making predictions. But according to Nate, the sheer quantity of data is as much a problem as a solution. More data does not always translate into better predictions, since much of the information may be mere noise that distracts us from the truth. Worse, more data can lead us to form spurious correlations and falsely raise our confidence when we should be cautious about overfitting and uncertainty. Blindly trusting models and building huge leverage on top of them was one of the causes of the 2008 global financial crisis; “some predictions may not fail as often as simpler ones, but when they fail, they fail badly,” writes Nate.
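The spurious-correlation point is easy to demonstrate numerically. The sketch below (our own illustration, not from Nate's book; the series sizes are arbitrary assumptions) generates 10,000 streams of pure random noise and checks how well the "best" of them correlates with an unrelated target series. With enough candidate predictors, chance alone produces an impressive-looking in-sample fit:

```python
import numpy as np

# Illustration only: many random "predictors" vs. one unrelated target.
rng = np.random.default_rng(42)

target = rng.normal(size=50)                 # e.g. 50 periods of returns
predictors = rng.normal(size=(10_000, 50))   # 10,000 pure-noise series

# Correlation of each noise series with the target
corrs = np.array([np.corrcoef(p, target)[0, 1] for p in predictors])

best = np.abs(corrs).max()
print(f"Best in-sample correlation among pure noise: {best:.2f}")
```

None of these series has any genuine relationship with the target, yet the best of them typically shows a correlation above 0.5 – exactly the kind of pattern that "distracts us from the truth" when we mine ever-larger datasets without guarding against overfitting.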

Nate also differentiates between fields where more data leads to better predictions and fields where it does not. Weather forecasting has improved dramatically with the introduction of supercomputers, but no similar impact is visible in economic or political forecasting. And while ‘risk’ is something you can put a price on (you know the odds and can count them ahead of time), ‘uncertainty’ is risk that is hard to measure; you might be acutely concerned about such risks, yet have no idea how many there are or when they might strike.

In December 2007, economists on the Wall Street Journal’s forecasting panel put only a 38% likelihood on a recession over the next year. This was surprising because data would later reveal that the economy was already in recession at the time. Nate sums it up well: “we need to stop and admit it: we have a prediction problem. We love to predict things – and we aren’t very good at it.”

And there is no better time to talk about predictions than the present, as many brokerage houses put out their annual forecasts for 2021 with markets at historic highs. Flush with liquidity, Indian markets are trading at record valuation multiples, and investors often ask: is the rally done?

We do not profess to know the answer, but going by historical precedent, the rally should have ended long ago. Then again, what forms part of ‘historical precedent’ today also happened for the first time at some point in the past. As a corollary, something ‘new’ that happens this time around will form part of the historical precedent when we look back a few years down the line.

Analyst forecasts assume that FY22 will be a normal year. As the chart below indicates, we had already reached the peak (bubble) valuations of January 2008 on 11th November 2020 (when the Nifty was at 12,700). As of this writing, we are up another 15% from that level. Is it different this time around? We don’t know! The 35% drawdown in the index had no precedent before March 2020, and neither does the V-shaped recovery that followed. But then, history is shaped by surprising events with wide-scale implications.

Having said that, in our view, not being able to predict whether the rally is done is not the same as being unable to position the portfolio effectively. As we have argued in the past (link), for a portfolio strategy to be effective, it needs to evolve with time; there cannot be ‘one strategy to beat the markets all the time.’

Disclaimers:
Information in this letter is not intended to be, nor should it be construed as, investment, tax or legal advice, or an offer to sell, or a solicitation of any offer to make investments with Buoyant Capital. Prospective investors should rely solely on the Disclosure Document filed with SEBI. Any description involving investment examples, statistical analysis or investment strategies is provided for illustration purposes only; it will not apply in all situations and may be changed at the discretion of the principal officer. Certain information has been provided by and/or is based on third-party sources and, although believed to be reliable, has not been independently verified; the investment managers make no express warranty as to its completeness or accuracy, nor can they accept responsibility for errors appearing herein.
