Everyone hates losing a nail-biter. It hurts a lot more than a blowout loss where you were never close. **The question is: should every loss (and, for that matter, every win) be treated equally?**

**Margin of Victory**

If most of your wins are by a single point and your losses are blowouts, it might be a sign that your win/loss record is due for a regression. Alternatively, if your only losses are of the nail-biter variety, you might just be on the wrong side of variance. Either way, it can be helpful to measure the margin of victory on your wagers.

The margin of victory (“MOV”) is a simple but useful measure of how well your bets are performing. Since bets are generally binary outcomes (win or loss), a plain win/loss record carries quite a bit of variance. Measuring MOV gives you a more precise read on your bets that isn’t as influenced by the binary nature of wager outcomes.

*MOV Example:*

Say you placed the following 15 NBA ATS bets during the first week of March, winning 7 and losing 8:

A 46.7% winning percentage at -110 is certainly not a profitable record. We could simply assume these weren’t very good bets. What we’d rather do, however, is examine our margin of victory in these games. Take the first wager, Kings -7.5: the favorite won the game by 6 points, failing to cover by 1.5 when bettors had to lay 7.5. Your wager (Kings -7.5) would have an MOV of -1.5 since your bet lost by 1.5 points.

We can run the same calculation for each wager and find that your MOV averaged +17.5 points across your 7 wins and -3.1 points across your 8 losses. Thus, despite a losing record, your wagers had an overall average MOV of (7 × 17.5 − 8 × 3.1) / 15 ≈ +6.5 points.
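The bookkeeping above can be sketched in a few lines of Python. The function name and sign convention here are ours, not a standard: a positive MOV means the bet covered by that many points, a negative MOV means it lost by that many.

```python
def margin_of_victory(spread: float, actual_margin: float) -> float:
    """MOV of an ATS wager, in points.

    spread: the line on your side (e.g. -7.5 for Kings -7.5).
    actual_margin: your team's final scoring margin (e.g. +6 for a 6-point win).
    Positive MOV = the bet covered; negative = it lost by that much.
    """
    return actual_margin + spread

# Kings -7.5, Kings win by 6 -> the bet loses by 1.5 points
print(margin_of_victory(-7.5, 6))  # -1.5

# The week's overall average: 7 wins at +17.5 and 8 losses at -3.1
overall = (7 * 17.5 + 8 * -3.1) / 15
print(round(overall, 1))  # 6.5
```

Repeat `margin_of_victory` over every wager in your log and average the results to get the figure quoted above.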

This certainly indicates that variance was not on your side as you were on the losing side of several one-possession games and most of your wins occurred at pretty comfortable MOVs.

Now certainly there are limitations to an MOV analysis. First, since it is an average, it can be influenced by outliers. You might consider capping the MOV (say, a 10- or 15-point maximum) to reduce the impact of outliers. Second, different sports have different key numbers, and a simple MOV analysis does not account for key numbers or non-normal distributions. Lastly, this type of analysis doesn’t translate as easily to moneyline wagers. To make an apples-to-apples comparison, you would need to assess the average score differential at various moneylines. We computed the average run differential of MLB away teams based on the breakeven win probability of their moneyline odds in the graph below.
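Capping is a one-line fix. Here is a minimal sketch with hypothetical MOVs; note that in this made-up sample the capped average even flips sign, which shows just how much a couple of blowouts can drive the raw figure.

```python
def capped_mov(mov: float, cap: float = 10.0) -> float:
    """Clamp a wager's MOV to [-cap, +cap] so blowouts don't dominate the average."""
    return max(-cap, min(cap, mov))

# Hypothetical MOVs: one huge cover and one huge miss swing the raw average
movs = [28.0, 1.5, -2.0, -24.5]
raw_avg = sum(movs) / len(movs)                         # 0.75
cap_avg = sum(capped_mov(m) for m in movs) / len(movs)  # -0.125
print(raw_avg, cap_avg)
```

Choosing the cap is a judgment call; something near two key numbers (10 in football, roughly 10 to 15 in basketball) is a reasonable starting point.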

**More Granular Measurements**

For sports that have lumpy scoring (NFL, NHL, MLB) you might perform a similar analysis using even more granular data than scoring. For example, to remove cluster luck from baseball scoring, you might do an analysis of net base production or for football you might analyze yards per play or play success rates.

**Grading Your Own Predictions**

Now let’s say you’ve built a model to come up with your own predictions for games (we’ll cover several ways to do this in our model building section) and you want to assess your predictions versus the market (or anyone else’s). In statistics and machine learning, two common ways of assessing performance are the mean absolute error (“MAE”) and the root mean squared error (“RMSE”) of various models.

**Mean Absolute Error**

The great thing about these terms is that their names accurately describe their calculations. The mean absolute error is the average (**mean**) of the **absolute** value of your model’s prediction **error**. So if you forecast a game at -5 and it ended at -3, the absolute error of your prediction was 2 points. Do that for every prediction and take the average. Simple enough.
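That definition translates directly into code. A minimal sketch (function and argument names are ours):

```python
def mean_absolute_error(predictions, actuals):
    """Average absolute prediction error, in points."""
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(predictions)

# Forecast the game at -5, game lands at -3: absolute error is 2 points
print(mean_absolute_error([-5.0], [-3.0]))  # 2.0
```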

**Root Mean Squared Error**

The root mean squared error is conceptually very similar to MAE except that you first 1) **square** each error term, then 2) take the average (**mean**) of the squared **error** terms, and finally 3) take the **square root** of that average.
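The three steps map one-to-one onto code. A minimal sketch (names are ours):

```python
import math

def root_mean_squared_error(predictions, actuals):
    # 1) square each error, 2) take the mean, 3) take the square root
    mse = sum((p - a) ** 2 for p, a in zip(predictions, actuals)) / len(predictions)
    return math.sqrt(mse)

# With a single prediction, RMSE equals the absolute error
print(root_mean_squared_error([-5.0], [-3.0]))  # 2.0
```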

We’ve calculated the MAE and RMSE for the NBA ATS wagers that you made below. Naturally, since those wagers had a positive average MOV, we’re not surprised that the prediction error was less than the market’s.

The difference between MAE and RMSE is that by squaring the error values, you are more heavily penalizing predictions with large errors. If large errors are significantly worse than smaller errors, then RMSE might be a better calculation for you to use. Otherwise MAE will work just fine.
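A small made-up example makes the penalty concrete: two sets of errors with the same total, one spread evenly and one concentrated in a single big miss. MAE can’t tell them apart; RMSE can.

```python
import math

consistent = [2.0, 2.0, 2.0, 2.0]  # four steady 2-point misses
lumpy = [0.0, 0.0, 0.0, 8.0]       # same total error, all in one big miss

mae = lambda errors: sum(abs(e) for e in errors) / len(errors)
rmse = lambda errors: math.sqrt(sum(e * e for e in errors) / len(errors))

print(mae(consistent), mae(lumpy))    # 2.0 2.0  -> identical under MAE
print(rmse(consistent), rmse(lumpy))  # 2.0 4.0  -> RMSE punishes the big miss
```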

Ready to see what we're betting? Subscribe at cleat-street.com. Questions? Email us at team@cleat-street.com