MIT Technology Review publishes silly article

The MIT Technology Review is an influential, world-class publication. But it just published a silly article. The title, “Machine learning predicts World Cup winner,” caught my eye. I am quite skeptical of prognostication of specific outcomes. The article overviews research done at the University of Dortmund in Germany.

The title makes it appear that the research predicts the winner. It is unwise to predict a single outcome with machine learning (ML) or other probabilistic methods, and to the researchers’ credit, they don’t. Instead they predict that Spain is the most likely winner, with a 17.8% chance of winning.

ML, like simple probability, does not predict single instances; it gives expectations. As an illustration, consider card counting in blackjack. With a fresh deck, the probability of winning favors the dealer. (Of course, otherwise casinos would be called charities.) But after some hands have been played, the distribution of the remaining cards in the deck is different from a fresh deck. For example, the probability of drawing a high card (Ace, King, Queen, Jack, or Ten) from a fresh deck is 20/52, or 5/13, about 38%. But if only 5 of the first 20 cards dealt are high, the probability rises to 15/32, about 47%. This changes the odds in favor of the player. However, it is still wrong to predict that the player will win the next hand; only the odds have improved. So the way card counting wins is to bet low when the deck is in the dealer’s favor and bet high when it is in the player’s favor. You will lose some of your big bets and win some of your small bets. But over time, if you play enough hands, this is a winning strategy.
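The arithmetic is easy to check. Here is a minimal sketch, assuming the five “high” ranks of the example (20 cards in a 52-card deck):

```python
from fractions import Fraction

DECK_SIZE = 52
HIGH_CARDS = 20  # Ace, King, Queen, Jack, Ten: 5 ranks x 4 suits

# Fresh deck: chance the next card is high.
fresh = Fraction(HIGH_CARDS, DECK_SIZE)
print(f"fresh deck:    {fresh} = {float(fresh):.1%}")        # 5/13 = 38.5%

# 20 cards dealt, only 5 of them high: 15 high cards left among 32.
dealt, dealt_high = 20, 5
depleted = Fraction(HIGH_CARDS - dealt_high, DECK_SIZE - dealt)
print(f"depleted deck: {depleted} = {float(depleted):.1%}")  # 15/32 = 46.9%
```

The odds shift by nearly nine percentage points, yet the next hand remains a coin flip weighted only slightly toward the player. That gap between “the odds moved” and “I know what happens next” is the whole point.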

As noted, the researchers do not say Spain will win. Rather, they predict Spain has a 17.8% chance. This differs from the 12.5% implied by the bookmakers referenced in the article. But should we think it is any better? The article talks about how it was done: it uses a random forest. A good choice, but better than the bookmakers? Nothing in the article suggests why it would be. Because this is only the 21st World Cup, it is unlikely that there is sufficient data to provide a robust model. Moreover, what happened in the 20 previous World Cups is not determinative and is largely irrelevant, because countries field a different team every tournament.
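To see how thin 21 tournaments is, consider a toy sketch. The features and labels below are random stand-ins, not the study’s data; the point is only that a random forest fit on 21 rows gives probability estimates that swing noticeably with nothing more than the random seed:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# 21 World Cups -> at most 21 labeled outcomes to learn from.
# Four hypothetical feature columns standing in for GDP, population, etc.
X = rng.normal(size=(21, 4))
y = rng.integers(0, 2, size=21)   # toy won / did-not-win labels
x_new = rng.normal(size=(1, 4))   # a "new tournament" to score

# Refitting on the same tiny sample with different seeds shows how much
# the estimated win probability moves at this sample size.
for seed in range(3):
    model = RandomForestClassifier(n_estimators=100, random_state=seed)
    model.fit(X, y)
    print(model.predict_proba(x_new)[0])
```

A probability like 17.8% carries three significant figures; a model trained on a sample this small cannot honestly support that precision.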

But the article itself gives several reasons to suggest the model is worse, though the authors didn’t intend it that way. The model includes the country’s GDP and population. It includes the nationality of the coach. The article later notes that population and nationality of the coach are “unimportant.” But this should be obvious. None of the 4 most populous countries has ever played in a World Cup final. Only 2 of the top 20 have (Brazil and Germany). Also, the experience of the coach can matter, but not the country on his passport.

Here is the real tell about the problem with this research: it “include[s] other ranking attempts, such as the rankings used by bookmakers.” This isn’t data; it is opinion. Moreover, I suspect the bookmakers’ odds are the only input that matters, and the reason this works at all.
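That suspicion is testable with a simple ablation: drop the bookmaker feature and see how much skill remains. Below is a hedged sketch on synthetic data, deliberately constructed so that only the bookmaker column carries signal; the column names are hypothetical, and this is not the researchers’ dataset or code. It only shows what the test would look like:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # toy sample; deliberately larger than 21 so the effect is visible

# Hypothetical columns: [gdp, population, coach_nationality, bookmaker_odds]
X = rng.normal(size=(n, 4))
# Labels built so that ONLY the bookmaker column (index 3) predicts the outcome.
y = (X[:, 3] + 0.3 * rng.normal(size=n) > 0).astype(int)

def skill(features):
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(model, features, y, cv=5).mean()

print(f"all features:       {skill(X):.2f}")                        # well above 0.5
print(f"without bookmakers: {skill(np.delete(X, 3, axis=1)):.2f}")  # near 0.5, chance
```

If the real model degraded the same way without the bookmaker rankings, the GDP, population, and coach features would be window dressing, and the random forest would merely be echoing the bookies.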

In summary: first, the model is not predicting a winner, as the title claims; it gives odds. Second, this is a poor use of machine learning. Last, it is the research equivalent of clickbait. It is unfortunate that MIT Tech Review swallowed it.
