Discussions of statistics in sports have a tendency to take on the qualities of a religious debate. They are couched in a language of true believers and the faithless, with little ground left for the practical few who are successfully integrating quantitative methods into their organizations. A recent Jonah Lehrer article in Grantland does an excellent job of illustrating the purposefully controversial and exaggerated writing that surrounds the topic (and everything associated with Bill Simmons). The article has a clear preference for argument over information, which would be forgivable — it is entertainment after all — if it weren't for the core lack of understanding Lehrer shows about the use of quantitative analysis and its place in the NBA.

The article uses the purchase of a new car as an analogy for basketball decision making, saying that you can't just snap up a car based on its stats, specifically horsepower and miles per gallon. I agree (as anyone would), but it is a terrible analogy for the argument Lehrer is trying to make. The reason you should not rely on horsepower or MPG when choosing a car is that these numbers are not the best, or even very good, evaluative measures. They are decoy numbers pushed on those with a shallow understanding of cars, flashy figures that falsely claim significant descriptive power. To imply that using quantitative analysis is in any way similar to purchasing a car based on these numbers ignores not only current NBA practices, but the very purpose of statistics.

The rise of quantitative analysis in the NBA is teaching coaches, GMs, and owners not to rely on the flashy stats. The goal of quantitative analysts in the NBA is to move away from inefficient descriptors and toward measurements that actually help teams assess player quality and predict game outcomes. The quants are leveraging the available information methodically, trying to make incremental improvements to team evaluations.

There is nothing radical about the formalization of player evaluation or sports teams leveraging information using formal methods — it is only the introduction of advanced math that has sportswriters up in arms. Teams have long had formal systems in place that help scouts identify important player characteristics, along with standardized methods for evaluating those characteristics across players. But throw in something that looks like a regression equation full of Greek letters, and suddenly you're obscuring the heart of the game. Sportswriters' reactions to quantitative analysis have generally been a combination of fear of the unknown, macho posturing ("those nerds are missing the point!"), and premeditated controversy creation.

That's not to say there aren't terrible statistics and statisticians in the league; there are many. Quantitative analysis is a tool, and like other tools it's all in how you use it. There is no "guaranteed success equation," and bad GMs and coaches will make bad decisions with quantitative analysis. Thing is, they would make bad decisions without it. Several good GMs use quantitative analysis as one portion of their research process. They acknowledge the flaws in the methods, they consider the context, and they test their results against their own judgment and a multitude of qualitative evidence. Stats are not a shortcut; you still have to do the rest of your homework. But they are one more source of information to aid decision-making.

Getting back to the article’s substance, Lehrer rests his case on Mavericks coach Rick Carlisle starting J.J. Barea despite his low scoring, negative plus-minus, and poor shooting. “Although Barea’s statistics still look pretty ordinary — his scoring average fell in the Finals despite the fact that he started — the Mavs have declared that re-signing him is a priority. Because it doesn’t matter what the numbers say. Barea won games.”

In an article about the downsides of advanced quantitative analysis, Lehrer cites some of the shallowest and most inconsequential of statistics. The fact is that the Mavericks are one of the most quantitatively-aware teams in the league. They are able to look beyond Barea's points per game or his raw plus-minus *because* of their use of stats — not in spite of it. Carlisle received constant analyses from the only bench quant in the league that showed the effectiveness of different player combinations. He verified these reports by watching tape, thinking about his lineups in the context of his long coaching and playing experience, and talking with his assistant coaches. And then he made some great decisions. Mark Cuban recently confirmed as much to Deadspin. To call this a triumph of quantitative analysis would require heavy assumptions, but to call it a case where quantitative methods were unable to explain the "inherent mystery of athletic talent" is ridiculous.

The author hits a couple more points that drive home his lack of real familiarity with advanced statistics. For reasons I cannot understand, Lehrer implies that the quant community is of the opinion that Nenad Krstic is an acceptable replacement for Kendrick Perkins. Krstic looks horrible on advanced stat sheets, while Perkins is an effective scorer who boxes out, sets great screens, and is a force on defense (all things that have been quantified to some level). The fact that Perkins is a better player and a better fit for the Celtics is very easily shown through the stats if you look past points per game.

Lehrer then writes, “For reasons that remain mysterious, some teammates make each other much better and some backup point guards really piss off Ron Artest. These are the qualities that often determine wins and losses, and yet they can’t be found on the back of a trading card or translated into a short list of clever equations.”

Of course the most useful statistics are not on the back of trading cards. The fact is that quantitative analysis can and does measure the phenomena Lehrer cites. Quantitative analysis of lineup and player-combination efficiencies is hugely useful, and the effectiveness of particular defensive matchups is likewise readily quantifiable and is being used to help smart teams win games.
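To make the lineup-efficiency idea concrete, here is a minimal sketch of the kind of computation involved: given stints of play, compute each five-man unit's net rating (point differential per 100 possessions). The player names and numbers are invented for illustration; this is a toy version of the concept, not any team's actual method.

```python
from collections import defaultdict

# Each stint: (five players on the floor, points scored, points allowed,
# possessions played). All data below is made up for the example.
stints = [
    (frozenset({"A", "B", "C", "D", "E"}), 22, 18, 20),
    (frozenset({"A", "B", "C", "D", "F"}), 15, 19, 18),
    (frozenset({"A", "B", "C", "D", "E"}), 12, 8, 10),
]

# Accumulate totals per unique lineup: [scored, allowed, possessions].
totals = defaultdict(lambda: [0, 0, 0])
for lineup, scored, allowed, poss in stints:
    t = totals[lineup]
    t[0] += scored
    t[1] += allowed
    t[2] += poss

# Net rating = point differential per 100 possessions.
for lineup, (scored, allowed, poss) in totals.items():
    net_per_100 = 100.0 * (scored - allowed) / poss
    print(sorted(lineup), round(net_per_100, 1))
```

Even this crude version already looks past individual box-score lines to how a *combination* of players performs together, which is exactly the kind of evidence Carlisle had at his disposal; real analyses then adjust for opponent strength and sample size.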

In the end, the post’s subtitle says it all. “Sabermetrics can help teams identify hidden talent and turn regular sports fans into math nerds. But can the numbers lie?” No, numbers cannot lie. They can be misread, they can be taken out of context, they can be overvalued, but they will not lie. That’s not how math works. It isn’t a silver bullet or an algorithm that gives a single well-defined answer. It is not playing fantasy basketball. It is not comparing the backs of trading cards. It is thinking about basketball in the most formalized way possible in an effort to remove traditional biases and provide an alternative perspective on players and the game. It is nothing to hate on, it is never going to dominate sports, but it might force some journalists to take a math class.