For hardcore baseball fans, the “Pythagorean” record, so called for its superficial resemblance to the Pythagorean theorem, represents how many games a given team “should” have won over the course of a season. The formula uses nothing but a team’s runs scored and runs allowed and spits out a winning percentage that correlates very closely with the team’s actual win total. Even more surprisingly, the Pythagorean record correlates more closely with the following year’s win-loss record than the previous year’s record does. This strongly suggests, though hardly definitively, that runs scored and runs allowed are a better indicator of team quality than win-loss percentage itself.
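The formula itself can be sketched in a few lines. The exponent of 2 here is Bill James’s original version (later refinements such as Pythagenpat tune the exponent), and the run totals in the example are made up for illustration:

```python
def pythagorean_pct(runs_scored: float, runs_allowed: float, exponent: float = 2) -> float:
    """Pythagorean winning percentage: RS^k / (RS^k + RA^k)."""
    rs = runs_scored ** exponent
    ra = runs_allowed ** exponent
    return rs / (rs + ra)

def pythagorean_wins(runs_scored: float, runs_allowed: float, games: int = 162) -> float:
    """Expected wins over a full season."""
    return games * pythagorean_pct(runs_scored, runs_allowed)

# A hypothetical team that outscored its opponents 800 to 700
# "should" win about 92 of its 162 games.
print(round(pythagorean_wins(800, 700)))  # → 92
```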
Amateur analysts have taken this as a happy research opportunity. Some treated the number of games a team won above its Pythagorean record as evidence of strengths not measured by runs scored and runs allowed alone. The differential has been used to put numbers on statistical bugaboos like team chemistry, the effectiveness of the coaches, and the usefulness of the running game. These methods have fallen out of practice, however, as those variables have failed to exhibit any year-to-year statistical significance in explaining the differential. The only objective factor that has been shown to explain any of the differential is the strength of a team’s bullpen, and even that effect is weak.
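The differential those analysts studied is simply actual wins minus Pythagorean expected wins; a minimal sketch (exponent of 2, with made-up season totals):

```python
def pythagorean_differential(wins: int, runs_scored: float, runs_allowed: float,
                             games: int = 162) -> float:
    """Actual wins minus Pythagorean expected wins (classic exponent of 2)."""
    expected = games * runs_scored**2 / (runs_scored**2 + runs_allowed**2)
    return wins - expected

# A hypothetical 95-win team that scored 780 runs and allowed 700
# beat its Pythagorean record by roughly 5 wins.
diff = pythagorean_differential(95, 780, 700)
```

It was exactly this residual, the `diff` above, that researchers tried and failed to explain with chemistry, coaching, and the running game.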
We’re lucky that in baseball, unlike real life, we have an amazing degree of data capture and an embarrassment of historical riches, with nearly complete yearly records going back more than 125 years. Retrosheet has even made available full play-by-play data for every season of the last forty-plus years. Every play, every action, of all 162 games for (now) all 30 teams… it is a bewildering accomplishment. This treasure chest of data, an ostentatious auric ensemble of empirics, allows analysts to meet numerical questions with genuine answers.
Yet, that’s not all. In baseball, every action is discrete. A hitter smashes a home run. A second baseman fields a ball and throws out the runner. Each event is distinct and countable for all to see. Furthermore, the data encapsulates what is important, which is why Pythagorean records work as well as they do. If the subjective, uncountable variables really drove a team’s record instead of runs scored and runs allowed, the Pythagorean record would not have its predictive power.
The subjective, uncountable variables like team chemistry likely have some effect on a team’s final record, but we have no idea what that effect is. It is now widely accepted that trying to tease any more meaning from the differential is a fool’s errand. Yet, outside of baseball analysis, the problem of identifying the cause of residuals appears in far more important matters. Because real-life data rarely possesses the ideal characteristics of baseball statistics (a gratuitous amount of data, countable actions, the dominance of objective factors over subjective ones), we cannot take the same logical positivist approach to analysis; that is to say, we can never rely on data to confirm a hypothesis definitively. We may be able to identify correlations for objective causes with some certainty, but we would only be lying to ourselves to think we had captured everything important.
Thomas Sowell has pointed out such presumption in what he calls the Residual Fallacy. The fallacy assumes that if we control for every objective variable we can find, any persistent statistical significance of a “soft” variable proves that the “soft” variable is directly responsible. But, as stated above, in real life we never have a full picture of everything important. Imagine that in baseball we had only the number of home runs hit, the average height of the players, and average attendance, and were asked to estimate a team’s win-loss record from that information. Without the other things that matter, we might pick up a statistically significant relationship between win-loss record and attendance. Would it then be fair to conclude that we had taken everything important into account, and that large crowds cause teams to win more often by cheering?
Yet this is what we routinely do, academics and scholars included. Courts accept it as evidence. The example Sowell points to is “proving” racism empirically. It is a fact that, in most industries, if you adjust for age, years of education, marital status, and everything else for which data is easy to collect, Asians make more than Caucasians, and Caucasians make more than Hispanics and Blacks. Does this prove racism and put a dollar figure on it? Of course not. There are likely additional subjective differences between the groups. This does not mean the differences are INNATE or that any group is somehow better than another; it means there probably exist other explanatory variables that analysts cannot capture. At the same time, this does not disprove racism, either. It could well be true that if we could somehow control for literally everything important, race would still be a persistent factor. The point is that we do not know, and cannot know in any meaningful sense, by throwing everything objective into a regression and seeing what sticks. It just isn’t evidence.
Baseball is a weird case, a rare human endeavor where we can boil down most of what’s important to a few numbers. Still, this doesn’t turn the minds of baseball traditionalists, especially reporters, who spend years with a team and believe chemistry to be an essential part of a winning organization. They could well be right; if we could measure chemistry meaningfully, it might explain some of the differential between the actual and Pythagorean records, as well as part of why the team scored those runs in the first place. Yet it is strange that some hold baseball, where there is evidence that we have captured everything important, to a higher standard than sociological questions with enormous political ramifications, where believing we know everything is nonsense.