Artificial Intelligence: From Checkmate to Teammate

05.26.2016

Evolution usually progresses gradually over many years, but sometimes it makes a giant leap. On May 11, 1997, the reigning world chess champion, Garry Kasparov, lost to Deep Blue, a chess computer developed by IBM. The symbolism of a machine besting man was not lost on the world. The era of AI (artificial intelligence) had officially begun.

AI’s achievements continue to stack up. In 2011, IBM’s Watson beat the top Jeopardy players in the world, and it is now being adapted for a number of big-data applications. Less than a decade ago, commentators claimed that AI wouldn’t be able to beat the best human players at Go, the quintessential abstract strategy game with many more possible moves per turn than chess. And yet, between March 9 and March 15, 2016, Google DeepMind’s AlphaGo prevailed over world champion Lee Sedol in four out of five games.

Whether you believe that the growth of AI will bring great benefits to humankind or a Skynet-inspired struggle for survival, one thing we can agree upon is that machines will eventually outclass human intelligence. Right?

Not so fast.

Following that epic 1997 Deep Blue match, Kasparov, in the true spirit of “if you can’t beat ‘em, join ‘em,” proposed a new form of chess called “freestyle chess,” in which human players can (but are not obliged to) make use of computers when selecting their moves. Kasparov dubbed the players who make the final decision on each move with the support of computers “centaurs.”

The first major freestyle competition saw centaurs playing superior chess to entrants relying solely on a computer program (no human input) or solely on a human (no computer input): the final four teams were all centaurs. Three of those four paired grandmasters with military-grade supercomputers. The fourth, which chose to remain anonymous under the moniker ZackS, won the tournament. ZackS included no grandmaster and used no state-of-the-art hardware. To everyone’s amazement, ZackS turned out to be two amateur chess players, a soccer coach and a database administrator, who used three different AI systems running on consumer-grade computers (one of which had been borrowed from one of their dads). When asked how they did it, they said:

“We knew that this AI system performed better in this environment. We knew that this one was better over here. When the system and the game moved itself into those places, we’d switch between our machines. Sometimes we’d ignore them all, and sometimes we’d play what we thought was best.”1

Today, the best available AIs lose consistently to centaurs. What’s more, the best centaurs are not grandmasters supported by supercomputers. Instead, like ZackS, the winners have generally been amateur chess players using multiple consumer-grade computers who understand the technology and how best to deploy the various AIs. Raw intelligence and computer speed are losing to better processes and human-machine pairings.

The lesson of freestyle chess is not only that human/machine teams can outperform machine-only matchups but, more importantly, that for teams who want to excel, understanding the strengths of each team member (human or otherwise) and implementing processes that maximize each member’s ability to contribute is paramount.

As in chess, success in investing boils down to optimal decision-making; investing, however, is more complex, with far more variables to consider. Since our team believes that having the best processes in place, while optimally utilizing available resources, leads to the best outcomes, we created “The Lab”: a designated workspace where we experiment with and test different ways of incorporating computer capabilities into our processes. Two successful Lab developments from the past year are an autoDCF (automated discounted cash flow) engine and an Analyst Performance Analysis study.

The autoDCF starts with historical public information (e.g., financial statements), pre-populates a DCF model with that historical information, and then provides an initial cash-flow projection based on algorithms that account for acquisitions/divestitures (which can otherwise distort growth projections), reversion to the mean, and other effects. The framework by no means provides the right “answer” to a company’s valuation (DCF models rarely do); instead, it allows the analyst to (1) delve more quickly into, and tweak, the DCF model, and (2) perform apples-to-apples comparisons across companies (e.g., to benchmark against variable analyst assumptions). It also gives the analyst another screening tool that, while similar to using P/E ratios, accounts for growth and is less sensitive to variable accounting rules.
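To make the mechanics concrete, below is a minimal Python sketch of the kind of pre-population logic such an engine might apply; the function names, mean-reversion speed, discount rate, and terminal growth rate are illustrative assumptions for this sketch, not the actual model.

```python
# Minimal, illustrative autoDCF-style sketch. All parameter defaults
# (mean-reversion speed, long-run growth, discount rate) are
# hypothetical placeholders, not the firm's actual settings.

def project_growth(hist_growth, long_run=0.03, reversion=0.5, years=10):
    """Project growth rates that revert from the recent historical
    average toward a long-run rate (a simple mean-reversion assumption)."""
    g = sum(hist_growth) / len(hist_growth)  # recent average growth
    path = []
    for _ in range(years):
        g += reversion * (long_run - g)      # step toward the long-run rate
        path.append(g)
    return path

def auto_dcf(last_fcf, hist_growth, discount=0.09, terminal_growth=0.02):
    """Pre-populate a DCF: project free cash flows, discount them,
    and add a Gordon-growth terminal value."""
    value, fcf = 0.0, last_fcf
    growth_path = project_growth(hist_growth)
    for t, g in enumerate(growth_path, start=1):
        fcf *= 1 + g
        value += fcf / (1 + discount) ** t
    # Terminal value, discounted back from the final forecast year
    terminal = fcf * (1 + terminal_growth) / (discount - terminal_growth)
    return value + terminal / (1 + discount) ** len(growth_path)

# Example: $100M of trailing free cash flow, recent growth of 8-12%
print(round(auto_dcf(100.0, [0.12, 0.10, 0.08])))
```

The point of such a starting value is comparability: every company gets the same baseline assumptions, which the analyst can then override.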

Analyst Performance Analysis is a tool that statistically tests the stock recommendations of individual sell-side analysts. The process covers all ~16,000 publishing sell-side analysts worldwide, documents each stock recommendation they have made since 2000, and then tests their success. The test runs each analyst’s recommendations through a mock portfolio (essentially measuring how well their top-pick and buy recommendations do relative to their hold and sell recommendations), followed by a second test of whether any pattern of outperformance can be explained by randomness alone or is statistically significant enough to indicate stock-picking skill. The results suggest that roughly 3-5% of analysts exhibit moderate stock-picking skill not explainable by randomness. With that database, we can add another indicator of talent (stock-picking skill is by no means the only contribution we look for in sell-side analysts) to our assessment criteria, which translates into higher-quality interactions with sell-side analysts.
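To give a flavour of the randomness test, here is a simplified Python sketch that compares one analyst’s buy-minus-sell return spread against spreads generated by randomly shuffling the same ratings (a permutation test); the data layout and function names are hypothetical stand-ins, not the study’s actual methodology.

```python
import random

def rating_spread(returns, ratings):
    """Average return of buy-rated stocks minus sell-rated stocks."""
    buys = [r for r, x in zip(returns, ratings) if x == "buy"]
    sells = [r for r, x in zip(returns, ratings) if x == "sell"]
    return sum(buys) / len(buys) - sum(sells) / len(sells)

def skill_p_value(returns, ratings, trials=10_000):
    """Fraction of random rating shuffles that match or beat the
    analyst's actual spread; a small value suggests the pattern is
    unlikely to be explained by randomness alone."""
    observed = rating_spread(returns, ratings)
    shuffled = list(ratings)
    hits = 0
    for _ in range(trials):
        random.shuffle(shuffled)  # keeps the buy/sell counts fixed
        if rating_spread(returns, shuffled) >= observed:
            hits += 1
    return hits / trials

# Toy example: five recommendations and the subsequent stock returns
returns = [0.15, 0.08, -0.02, 0.11, -0.06]
ratings = ["buy", "buy", "sell", "buy", "sell"]
print(skill_p_value(returns, ratings))
```

At the scale described above, this kind of test would run over many recommendations per analyst, and only a small minority (the ~3-5% cited) would clear the significance bar.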

Ultimately, we believe that in this new AI era, those who learn how to adapt to and work in concert with computers and data will be best positioned to thrive. But as freestyle chess has shown us, we can’t rely solely on technology to do our thinking for us. Instead, we embrace a combined intelligence, in which decision-making is enhanced by human/machine teams that capitalize on each other’s strengths while compensating for each other’s weaknesses.

 

1 http://www.mike-walsh.com/blog/sean-gourley-quid


Comments

  • Peter H
    05.26.2016 at 7:28 am

    Thanks for the interesting article, Justin. Sticking with the chess analogy, top players today use computers/AIs fairly extensively to analyze their previous matches and identify areas for improvement. Has Mawer considered (or perhaps already done) turning the focal point of the data analysis inwards? It could be fruitful to analyze historic positions and see whether there are correlations between certain characteristics/traits of stocks and their contribution to Mawer’s performance.

    • Justin
      05.26.2016 at 9:35 am

      Funny you should ask, Peter – The Lab is currently examining and testing a number of historical decisions and projections. One quick example is our DCF models, which predict a probabilistic range of values for a stock. We are testing our calibration to see how frequently our predicted range captures the actual performance – much like a meteorologist who predicts a 20% chance of rain would go back and check how often those predictions panned out, ultimately answering the question, “when I predict a 20% chance of rain, it rains x% of the time.” Such measurement injects important feedback into the system and should improve forecasting accuracy over time.
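      For illustration, a minimal Python sketch of such a calibration check (the intervals and outcomes below are made up):

      ```python
      # Calibration check: how often did realized outcomes land inside
      # the predicted range? For an 80% interval, coverage near 80%
      # indicates good calibration. All data here is hypothetical.

      def coverage(predictions, actuals):
          """Fraction of actual outcomes inside the (low, high) range."""
          hits = sum(low <= a <= high
                     for (low, high), a in zip(predictions, actuals))
          return hits / len(actuals)

      # Each (low, high) interval was intended to capture the outcome
      # 80% of the time; actuals are the realized values.
      predictions = [(90, 130), (45, 70), (200, 260), (10, 25)]
      actuals = [118, 72, 231, 17]

      print(f"Empirical coverage: {coverage(predictions, actuals):.0%}")
      # If a stated 80% interval captures only, say, 60% of outcomes,
      # the ranges are too narrow (overconfidence) and should widen.
      ```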

  • Sheldon
    07.14.2016 at 10:11 am

    Hi Justin, this sounds very interesting. Can you get into how the Lab has become part of the investment process, and how it integrates with the bottom-up fundamental analysis? Does the Lab also contribute to portfolio construction, i.e., buy and sell decisions?

    Looking forward to your reply. Thanks!

    • Justin
      07.14.2016 at 10:56 am

      Thanks for the question, Sheldon – a full answer could take many pages. The Lab addresses a broad range of process improvements that fit, in disparate ways, into our core “Quality Businesses, strong Management Teams at reasonable valuations” investment philosophy. The part of the Lab that I believe is most fundamental in strengthening our process is the set of tools that shorten and strengthen feedback loops. Above, I used meteorology as an example of a “well-calibrated” science; this is due in large part to the fact that after you make a prediction/forecast (say, a 20% chance of rain), you know the next day whether it in fact rained! Such high-quality feedback builds a culture that is able to hone and improve its forecasting skills. While such short cycle times are impossible in investing, developing tools that better document and study investment decisions is one way the Lab tries to shorten and strengthen investment feedback loops. Please feel free to email or call me to discuss other ways the Lab interacts with our investment process.

