Evolution usually progresses gradually over many years, but sometimes it makes a giant leap. On May 11, 1997, the reigning world chess champion, Garry Kasparov, lost to Deep Blue, a computer developed by IBM. The symbolism of machine besting man was not lost on the world. The era of AI (artificial intelligence) had officially begun.
AI’s achievements continue to stack up. In 2011, IBM’s Watson beat the top Jeopardy! players in the world, and it is now being adapted for a number of big-data applications. Less than a decade ago, commentators claimed that AI wouldn’t be able to beat the best human players at Go, the quintessential abstract strategy game with many more possible moves per turn than chess; and yet, between March 9 and March 15, 2016, AlphaGo, a program built by Google’s DeepMind, prevailed over world champion Lee Sedol in four out of five games.
Whether you believe that the growth of AI will bring great benefits to humankind or a Skynet-inspired struggle for survival, one thing we can agree upon is that machines will eventually outclass human intelligence. Right?
Not so fast.
Following that epic 1997 Deep Blue match, Kasparov, in the true spirit of if you can’t beat ‘em, join ‘em, proposed a new form of chess called “Freestyle Chess,” wherein human players can (but are not obliged to) make use of computers when selecting their moves. Kasparov dubbed the players who make the final decision on each move with the support of computers “centaurs.”
The first major freestyle competition saw centaurs playing superior chess to entrants relying solely on a computer program (no human input) or a human (no computer input), as evidenced by the fact that the final four teams were all centaurs. Three of those four included grandmasters using military-grade supercomputers. The fourth, choosing to remain anonymous under the moniker ZackS, won the tournament. ZackS was not a grandmaster and did not use state-of-the-art hardware. To everyone’s amazement, ZackS was made up of two amateur chess players—a soccer coach and a database administrator—who used three different AI systems running on consumer-grade computers (one of which had been borrowed from one of their dads). When asked how they did it, they said:
“We knew that this AI system performed better in this environment. We knew that this one was better over here. When the system and the game moved itself into those places, we’d switch between our machines. Sometimes we’d ignore them all, and sometimes we’d play what we thought was best.”1
Today, the best AIs available lose consistently to centaurs. What’s more, the best centaurs are not made up of grandmasters supported by supercomputers. Instead, like ZackS, the winners have generally been amateur chess players who use multiple consumer-grade computers but who understand the technology and how best to utilize various AIs. Raw intelligence and computer speed are losing to better processes and human-machine pairings.
The lesson of freestyle chess is not only that human/machine teams can outperform machines alone but, more importantly, that understanding the strengths of your team members (human or otherwise), and implementing processes that maximize each member’s ability to contribute, is paramount for teams that want to excel.
As in chess, success in investing boils down to optimal decision-making; however, investing is more complex, with far more variables to consider. Since our team believes that having the best processes in place while optimally utilizing available resources will lead to the best outcomes, we created “The Lab,” a designated workspace where we experiment with and test different ways of incorporating computer capabilities into our processes. Two successful Lab developments from the past year are an autoDCF (discounted cash flow) engine and an Analyst Performance Analysis study.
The autoDCF starts with historical public information (e.g., financial statements), pre-populates a DCF model with that historical information, and then provides an initial cash-flow projection based on algorithms that account for acquisitions/divestments (which may otherwise distort growth projections), reversion to the mean, and other effects. The framework by no means provides the right “answer” to a company’s valuation (DCF models rarely do); instead, it allows the analyst to (1) delve more quickly into and tweak the DCF model, and (2) perform apples-to-apples comparisons across companies (e.g., to benchmark against variable analyst assumptions). It also provides the analyst with another screening tool that, while similar to using P/E ratios, accounts for growth and isn’t as sensitive to variable accounting rules.
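The mean-reverting projection step described above can be sketched as follows. This is a minimal illustration, not the engine’s actual algorithm: the fade rate, discount rate (WACC), and Gordon-growth terminal value are all assumed, illustrative parameters.

```python
# Hedged sketch of a mean-reverting DCF projection.
# All rates and parameter names below are illustrative assumptions.

def project_growth(hist_growth, long_term=0.03, fade=0.5, years=5):
    """Start from trailing average growth and fade it toward a long-term rate."""
    g = sum(hist_growth) / len(hist_growth)  # trailing average growth
    path = []
    for _ in range(years):
        g = long_term + (g - long_term) * fade  # revert toward the mean
        path.append(g)
    return path

def dcf_value(last_fcf, growth_path, wacc=0.09, terminal_growth=0.03):
    """Discount projected free cash flows plus a Gordon terminal value."""
    value, fcf = 0.0, last_fcf
    for t, g in enumerate(growth_path, start=1):
        fcf *= 1 + g
        value += fcf / (1 + wacc) ** t  # present value of year-t cash flow
    terminal = fcf * (1 + terminal_growth) / (wacc - terminal_growth)
    value += terminal / (1 + wacc) ** len(growth_path)
    return value
```

Pre-populating the model then reduces to feeding in each company’s historical growth series, leaving the analyst free to override any assumption rather than build the scaffolding by hand.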
Analyst Performance Analysis is a tool that statistically tests the stock recommendations of individual sell-side analysts. The process includes all ~16,000 publishing sell-side analysts worldwide, documents each stock recommendation they’ve made since the year 2000, and then tests their success. The test amounts to running their recommendations through a mock portfolio (essentially testing how well their top-pick and buy recommendations do relative to their hold and sell recommendations), followed by an additional test to see whether any pattern of outperformance can be explained by randomness alone or whether there is enough statistical significance to detect stock-picking skill. The results from the Analyst Performance Analysis suggest that ~3-5% of analysts do exhibit moderate stock-picking skill (not explainable by randomness). With that database, we can add another indicator of talent (stock-picking skill is by no means the only contribution we look for in sell-side analysts) to our assessment criteria for sell-side analysts. This translates into higher-quality interactions with sell-side analysts.
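The skill-versus-randomness step can be sketched with a simple permutation test: compare the observed buy-minus-sell return spread against a null distribution built by shuffling the recommendation labels. The two-label grouping and all data below are illustrative assumptions, not the study’s actual methodology.

```python
import random

# Hedged sketch of a permutation test for stock-picking skill.
# "buy"/"sell" labels and return figures are illustrative assumptions.

def spread(returns, labels):
    """Average return of buy-rated picks minus average return of sell-rated picks."""
    buys = [r for r, l in zip(returns, labels) if l == "buy"]
    sells = [r for r, l in zip(returns, labels) if l == "sell"]
    return sum(buys) / len(buys) - sum(sells) / len(sells)

def permutation_p_value(returns, labels, trials=2000, seed=0):
    """Share of label shuffles whose spread matches or beats the observed spread."""
    rng = random.Random(seed)
    observed = spread(returns, labels)
    shuffled = list(labels)
    hits = 0
    for _ in range(trials):
        rng.shuffle(shuffled)  # break any link between label and return
        if spread(returns, shuffled) >= observed:
            hits += 1
    return hits / trials
```

A small p-value says the analyst’s buy picks beat their sell picks by more than random labeling would produce; a large one says the apparent edge is consistent with chance.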
Ultimately, we believe that in this new AI era, those who learn how to adapt to and work in concert with computers and data will be best positioned to thrive. But as Freestyle Chess has shown us, we can’t rely solely on technology to do our thinking for us. Instead, we embrace a combined intelligence, where decision-making is enhanced by man/machine teams, each side capitalizing on the other’s strengths while compensating for its weaknesses.