Using machine learning to predict basketball scores

By: Scott Clark, PhD

Here at SigOpt we think a lot about model tuning and building optimization strategies; one of our goals is to help users get the most out of their Machine Learning (ML) models as quickly as possible. When our last hackathon rolled around I was inspired by some recent articles about using machine learning to make sports bets. For my hackathon project I teamed up with our amazing intern George Ke and set out to use a simple algorithm and open data to build a model that could predict the best basketball bets to make. We used SigOpt to tune the features and hyperparameters of this model to make it as profitable as possible, hoping to find a winning combination that could beat the house. Is it possible to use optimized machine learning models to beat Vegas? The short answer is yes; read on to find out how [0].

Broadly speaking, there are three main challenges to overcome before deploying a machine learning model. First, you must Extract the data from somewhere, Transform it into a usable state, and then Load it somewhere you can quickly access it (ETL). This stage often requires a lot of creativity and good old-fashioned hacking. Next, you must apply your domain expertise to build the features and pick the model that will best solve the problem. Finally, once you have your data and model, you must train and tune the model to get it into the best possible state. This last stage is what we will focus on in this post.

Traditional methods like grid and random search often make it completely intractable to tune a model with more than a handful of parameters, because of the curse of dimensionality and how resource-intensive the process is. Model tuning is also non-intuitive and largely orthogonal to the domain expertise required for the rest of the ML process, so doing it by hand is usually prohibitively inefficient. With optimization tools like SigOpt, however, experts in any field can now get the most out of their models quickly and easily. This final stage of model building is sometimes skipped in practice, yet it can mean the difference between making money and losing money with your model, as we see below.

We used one of the simplest sports bets you can make in Vegas for our experiment: the Over/Under line. This is a bet that the total number of points scored by both teams in a game will be higher, or lower, than a number that Vegas picks. For example, if Vegas sets the line at 200.5 and we bet “over,” then a combined score of 210 entitles us to \$100 of winnings for every \$110 we bet [1]; if instead the total comes in below the line (or we bet “under” and it comes in above), we lose our \$110 stake. On each game we simulated the same \$110 bet, winning \$100 only when we picked correctly. We picked NBA games for the experiment both for the wide availability of open statistics [2] and because over 1,000 games are played per year, giving us many data points with which to train our model.
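To make the payoff arithmetic concrete, here is a minimal sketch of how each simulated bet could be settled. The function name is illustrative, not taken from our code, and pushes are ignored since a half-point line like 200.5 cannot tie.

```python
# Minimal sketch of settling one simulated Over/Under bet (illustrative names):
# every bet risks $110 and wins $100 when it lands on the correct side of the
# Vegas line. Half-point lines make exact ties impossible, so pushes are omitted.
def bet_payoff(total_points, vegas_line, bet_over):
    """Profit (or loss) of a single simulated $110 bet."""
    won = (total_points > vegas_line) == bet_over
    return 100.0 if won else -110.0
```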

We picked a random forest regression model as our algorithm because it is easy to use and has interesting hyperparameters to tune [3]. We chose 23 different team-based statistics to build the features of the model [4], and we did not modify this feature set beyond our initial picks in order to show how model tuning, independent of feature selection, would fare against Vegas. For each of the 23 statistics we created a slow and a fast moving average for both the home and away team; these moving averages have tunable feature parameters which we use SigOpt to optimize [5]. The averages were calculated both over a team's recent games overall and over recent games of the same type (home games for the home team, away games for the away team). This gave us 184 total features for every game and 7 tunable parameters in all [3] [5], as sketched below.
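As a rough illustration, here is a minimal sketch (assumed names, not our production code) of how one of these weighted moving-average features could be computed for a single statistic. The `window` and `decay` arguments correspond to the tunable feature parameters described in footnote [5].

```python
import numpy as np

# Sketch of a tunable moving-average feature for one team statistic.
# `stat_history` is assumed to hold that statistic's values for a team's
# recent games, newest last; `window` and `decay` are tuned by SigOpt.
def moving_average(stat_history, window, decay):
    recent = np.asarray(stat_history[-window:], dtype=float)
    n = len(recent)
    if n == 0:
        return 0.0
    # Heavier weights on more recent games; decay = 0 falls back to linear weights.
    if decay > 0:
        weights = np.exp(decay * np.arange(n))
    else:
        weights = np.arange(1, n + 1, dtype=float)
    return float(np.average(recent, weights=weights))
```

Calling this with a small window gives the "fast" average and with a large window the "slow" average, for both the home and away team.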

The output of our model is a predicted total number of points scored, given the historical statistics of the two teams playing in a given game. If the model predicts a lower total than the Vegas Over/Under line we bet under; if it predicts a higher total we bet over. We also let SigOpt tune how “certain” the model needs to be before we place a bet, by only simulating a bet when the difference between our prediction and the Over/Under line is greater than a tunable threshold.
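In code, that betting rule might look like the following sketch, where `threshold` is the tunable certainty parameter (function and variable names are illustrative):

```python
# Sketch of the betting rule: only place a bet when the model's predicted total
# differs from the Vegas line by more than the tunable threshold.
def pick_bet(predicted_total, vegas_line, threshold):
    if predicted_total - vegas_line > threshold:
        return "over"
    if vegas_line - predicted_total > threshold:
        return "under"
    return None  # not confident enough to bet on this game
```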

We used the ‘00-’14 NBA seasons to train our model (training data) and random subsets of the ‘14-’15 season to evaluate it in the tuning phase (test data). For every set of tunable parameters we calculated the average winnings (and the variance of those winnings) we would have achieved over many random subsets of the test data. Each evaluation took 15 minutes on a high-CPU Linux machine. Grid search and random search, the traditional approaches to model tuning, would be impractical for this problem because the number of evaluations they require grows explosively with the number of parameters [6]. In practice SigOpt needs only a number of evaluations that grows roughly linearly with the number of parameters, and even with fewer evaluations it tends to find better results than grid and random search. Figure 1 shows how profitability increases as SigOpt tunes the model; a sketch of the tuning loop follows the figure.


Figure 1: Over the course of 100 different train and test evaluations, SigOpt was able to tune our model from losing more than \$500 to winning more than \$1,000, on average.  This value was computed on random subsets of the ‘14-’15 test season, which was not used for training.
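To give a feel for the tuning loop itself, here is a rough sketch using the suggestion/observation pattern of the SigOpt Python client. The parameter bounds and the `build_features` and `simulate_bets` helpers are hypothetical stand-ins for the pipeline described above, not our exact code.

```python
from sigopt import Connection
from sklearn.ensemble import RandomForestRegressor

# Hypothetical helpers: build_features() returns training data plus random test
# subsets for the given feature parameters; simulate_bets() returns the average
# winnings over those subsets. Bounds below are illustrative, not our exact setup.
from my_pipeline import build_features, simulate_bets

conn = Connection(client_token="YOUR_SIGOPT_API_TOKEN")
experiment = conn.experiments().create(
    name="NBA Over/Under model",
    parameters=[
        dict(name="n_estimators",      type="int",    bounds=dict(min=10, max=500)),
        dict(name="min_samples_leaf",  type="int",    bounds=dict(min=1,  max=20)),
        dict(name="min_samples_split", type="int",    bounds=dict(min=2,  max=20)),
        dict(name="fast_window",       type="int",    bounds=dict(min=1,  max=10)),
        dict(name="slow_window",       type="int",    bounds=dict(min=5,  max=40)),
        dict(name="decay",             type="double", bounds=dict(min=0.0, max=1.0)),
        dict(name="threshold",         type="double", bounds=dict(min=0.0, max=10.0)),
    ],
)

for _ in range(100):  # roughly the 100 evaluations shown in Figure 1
    suggestion = conn.experiments(experiment.id).suggestions().create()
    params = suggestion.assignments

    # Build features with the suggested moving-average parameters and train the model.
    X_train, y_train, test_subsets = build_features(
        params["fast_window"], params["slow_window"], params["decay"])
    model = RandomForestRegressor(
        n_estimators=params["n_estimators"],
        min_samples_leaf=params["min_samples_leaf"],
        min_samples_split=params["min_samples_split"],
    ).fit(X_train, y_train)

    # Report the average simulated winnings back to SigOpt as the metric to maximize.
    avg_winnings = simulate_bets(model, test_subsets, params["threshold"])
    conn.experiments(experiment.id).observations().create(
        suggestion=suggestion.id, value=avg_winnings,
    )
```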

Once we had used SigOpt to fine-tune the model, we wanted to see how it would perform on a holdout dataset it had never seen before, simulating the use of our model to place bets with only historical information available. Since the model was trained and tuned on the ‘00-’15 seasons, we used the first games of the ‘15-’16 season (being played now) to evaluate the tuned model. After simulating 131 total bets over a month, we found that the SigOpt-tuned model would have made \$1,550 in profit, while an untuned version of the same model racked up \$1,020 in losses over the same holdout dataset [7]. Not only does model tuning with SigOpt make a huge difference, but a simple, well-tuned model can beat the house.


Figure 2: The blue line shows the cumulative winnings of the SigOpt-tuned model after each day. The grey dashed line shows the cumulative winnings of the untuned model. The red dashed line marks break-even.

We are releasing all of the code used in this example in our GitHub repo. We were able to use the power of SigOpt optimization to take a relatively simple model and make it beat Vegas. Can you use a more complicated model to get better results? Can you think of more features to add? Does including individual player stats increase accuracy? All of these questions can be explored by forking the repository and using a free trial of SigOpt to tune your own model [0].

[0]: All bets in this blog post were simulated; no actual money was gambled. SigOpt does not advocate sports gambling. Check your local laws to learn whether gambling is legal in your area. Make sure you read the SigOpt terms of service before using SigOpt to tune your models.

[1]: Betting \$110 to win \$100 is part of the edge that Vegas keeps. This keeps a player from breaking even by picking “over” and “under” randomly.

[2]: NBA stats: http://stats.nba.com, Vegas lines: http://www.nowgoal.net/nba.htm

[3]: http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html We will tune the hyperparameters of n_estimators, min_samples_leaf and min_samples_split.

[4]: 23 different team level features were chosen: points per minute, offensive rebounds per minute, defensive rebounds per minute, steals per minute, blocks per minute, assists per minute, points in paint per minute, second chance points per minute, fast break points per minute, lead changes per minute, times tied per minute, largest lead per game, point differential per game, field goal attempts per minute, field goals made per minute, free throw attempts per minute, free throws made per minute, three point field goals attempted per minute, three point field goals made per minute, first quarter points per game, second quarter points per game, third quarter points per game, and fourth quarter points per game.

[5]: The feature parameters included the number of games to look back for the slow and fast moving averages, an exponential decay parameter controlling how much the most recent games count toward those averages (with a value of 0 indicating linear decay), and the threshold for the difference between our prediction and the Over/Under line required to make a bet.

[6]: Even a coarse grid of width 5 would require 5^7 = 78,125 evaluations, which would take over 800 days to run sequentially. Such a coarse grid would almost certainly also perform poorly compared to the Bayesian approach that SigOpt takes; for examples, see this blog post.

[7]: The untuned model uses the same random forest implementation (with default hyperparameters), the same features, a fast and slow moving linear average of 1 and 10 games respectively, and a “certainty” threshold of 0.0 points.
