Back in March we investigated some 2018 season predictions from sports books and statistical projections published at FanGraphs.com.
Here are the initial projections for home runs. The players shown are those offered for over-under betting by Westgate Sports Book in Las Vegas. Projections from ZiPS, Steamer and FanGraphs’ Depth Charts were added manually to the data.
Let’s look at how the players performed compared to the projections. The solid black dots indicate the final totals.
Right at the top, the Yankees’ Giancarlo Stanton didn’t come close to the hype projected at FanGraphs and finished just below the line set by Westgate (38 vs. 39.5). But Stanton wasn’t alone in falling short of expectations. Across-the-board under bets would have netted a lot of money. In fact, only seven of the 39 players listed exceeded all expectations (from the top): Khris Davis, J.D. Martinez, Nelson Cruz, Paul Goldschmidt, Trevor Story, Francisco Lindor and Mookie Betts. And only 14 players would have earned a winning over bet at Westgate.
Below is the list of the 12 players who missed their over-under line by the largest margin.
It’s mostly a list of players who missed great portions of the season with injuries — except for Joey Votto, who managed only 12 homers in 145 games this season. He had never failed to hit at least 24 homers in any season in which he played at least 130 games. Rougned Odor’s horrible start to the season, along with some injury time, also landed him on this list.
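A list like this can be built by ranking players on how far they finished below their over-under line. Here is a minimal sketch in Python (the original analysis used R); only Stanton’s numbers come from the text above, and the other lines and totals are illustrative placeholders, not the real 2018 figures.

```python
# Rank players by how far they fell short of their over-under line.
# Stanton's 39.5/38 is from the article; the other values are made up.
players = {
    "Giancarlo Stanton": {"line": 39.5, "actual": 38},
    "Player A": {"line": 24.5, "actual": 18},   # illustrative
    "Player B": {"line": 26.5, "actual": 12},   # illustrative
}

# Margin is actual minus line; the most negative missed by the most.
margins = {name: d["actual"] - d["line"] for name, d in players.items()}
biggest_misses = sorted(margins.items(), key=lambda kv: kv[1])

for name, margin in biggest_misses:
    print(f"{name}: {margin:+.1f}")
```

Taking the first 12 entries of that sorted list would reproduce the table shown here.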
We also plotted the range of predictions for teams’ win totals based on the over-under lines from Westgate and Bodog.ca and projections from FanGraphs and Baseball Prospectus. This was the original plot:
Of the two divisions I watch the most, all four sources overpredicted the entire AL Central and underprojected the Red Sox and Yankees in the AL East. Baseball Prospectus came closest to projecting Tampa Bay’s strong season. They all missed badly on Oakland, Milwaukee and Atlanta. They mostly got the orders of finish correct, apart from the teams mentioned above.
It would be interesting to calculate, if we can, which set of projections was closest to being correct most often. First let’s see how many times each projection was over, under or equal. (Note: Westgate and Bodog won’t have any equals because they set their lines at half-run intervals to avoid pushes.)
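The over/under/equal tally can be sketched as a simple count per source. This Python version (the original was R) uses a few illustrative win totals, not the real 2018 data; note that a half-win line can never produce an "equal."

```python
# Count how often each source's projected win total was over, under,
# or equal to the actual total. Numbers below are illustrative only.
def tally(projected, actual):
    counts = {"over": 0, "under": 0, "equal": 0}
    for p, a in zip(projected, actual):
        if p > a:
            counts["over"] += 1
        elif p < a:
            counts["under"] += 1
        else:
            counts["equal"] += 1
    return counts

actuals = [97, 83, 70]
projections = {
    "Westgate": [92.5, 81.5, 74.5],   # half-win lines: no pushes possible
    "FanGraphs": [90, 83, 76],        # whole-number projections can tie
}

for source, proj in projections.items():
    print(source, tally(proj, actuals))
```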
We can also calculate the average difference between the projected win total and the actual win total for all 30 teams among the four groups. We’ll use the absolute values, treating negatives and positives the same, otherwise they’ll cancel each other out in the final calculation.
##  "Westgate average = 8.3 wins"
##  "Bodog average = 8.247 wins"
##  "FanGraphs average = 8.133 wins"
##  "Baseball Prospectus average = 7.6 wins"
Baseball Prospectus came out with the lowest average difference at 7.6 wins. The others were close together, all just over 8.0.
Let’s check out the distribution of the differences for each group and figure out the median of the distribution, the mode (most common value), and the median absolute deviation. We’ll show those on a histogram of each distribution.
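The three summary statistics named above can be computed with Python’s standard `statistics` module (the original work was in R); the vector of projected-minus-actual differences here is made up for illustration.

```python
import statistics

# Illustrative (projected - actual) differences, not the real data.
diffs = [-7, -7, -3, 0, 2, 5, -7, 1, -2, 4]

median = statistics.median(diffs)
mode = statistics.mode(diffs)  # most common value
# Median absolute deviation: median of |x - median| over all differences.
mad = statistics.median(abs(x - median) for x in diffs)

print(median, mode, mad)  # → -1.0 -7 4.0
```

A distribution whose mode sits far from zero, like the −7 here, can drag a system down even when its average difference looks competitive.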
Baseball Prospectus’ edge in the average difference is undercut by its mode of -7, the farthest from zero. FanGraphs’ median was the closest to 0, and it also had the lowest mode and the lowest median absolute deviation.
None of these systems stands out from the rest; plotting them together produces fairly tight clusters for each team. It would be interesting to study other years to see whether betting the full slate of unders nets a profit each time, or whether particular teams consistently under- or over-perform against their projections (looking at you, Oakland and Tampa Bay).
R Markdown and data files for this project are here.
Photo: Pixabay via Pexels.com