On March 7th this year, this mail from the ASA found its way to the ASA members:
At first sight, it didn’t look like something that needed too much attention, but in the longer PDF version you can read these six principles:
- P-values can indicate how incompatible the data are with a specified statistical model.
- P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
- Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
- Proper inference requires full reporting and transparency.
- A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
- By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
taken from the full statement in The American Statistician.
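Principle five, that a p-value does not measure the size of an effect, is easy to demonstrate with a small simulation. The sketch below (my own illustration, not part of the ASA statement) draws two large samples whose true means differ by a practically negligible amount; the p-value is nonetheless tiny, simply because the sample size is huge.

```python
import math
import random

random.seed(0)

def two_sample_z_test(a, b):
    """Two-sided z-test for the difference of two sample means (large samples)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # unbiased sample variances
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    z = (ma - mb) / se
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return ma - mb, p

# A tiny true effect (0.02 standard deviations), but a huge sample:
n = 500_000
a = [random.gauss(0.02, 1.0) for _ in range(n)]
b = [random.gauss(0.00, 1.0) for _ in range(n)]

diff, p = two_sample_z_test(a, b)
print(f"observed effect: {diff:.4f}, p-value: {p:.2e}")
```

The observed difference stays around 0.02, an effect nobody would care about in practice, yet the p-value comfortably clears any conventional significance threshold. "Significant" and "important" are simply not the same thing.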
To me, this sounds like “the end” of classical statistics as a sub-discipline of mathematics. The cause seems obvious: in the light of Data Science as a widely promoted but hardly defined discipline, statistics seems to be losing ground more and more. Unfortunately, the ASA does not really deliver new directions that would make ordinary statisticians more future-proof.
Is this new? I would say no. Ever since John Tukey promoted EDA (Exploratory Data Analysis, for those who are too young to know), we have been getting new directions from someone who really knew the math behind statistics and, as a result, saw its limitations.
Digging through my old talks, I found this slide from 2002:
Nothing new, really; 15 years ago the buzzword was “Data Mining”, but the point is the same.
The only question is:
Has the statistics community reacted too late, and is it now doomed to fade into insignificance?