What is P-Hacking & How To Avoid It in 2024?
Updated on Aug 28, 2023 | 9 min read | 9.3k views
Statistical analysis is an essential part of data science, and two of its most important concepts are hypothesis testing and p-values. Interpreting a p-value can be tricky, and you might be doing it wrong. Beware of p-hacking!
By the end of this tutorial, you will understand what p-values are, what p-hacking is, and how to avoid it.
Let’s dive right in!
A p-value measures how compatible your sample data are with the null hypothesis. Formally, it is the probability of observing a result at least as extreme as the one in your sample, assuming the null hypothesis is true.
Before performing a statistical test, a threshold value, or alpha, needs to be set. A common choice is 0.05. Alpha is the false-positive rate you are willing to tolerate: the probability of rejecting the null hypothesis when it is actually true.
Therefore, if the p-value comes out below alpha, the observed result would be unlikely under the null hypothesis, and we call the result statistically significant. So if our p-value is, say, 0.04, we reject the null hypothesis.
A low p-value suggests that your sample provides enough evidence to reject the null hypothesis for the population. If you get a p-value below 0.05 in our case, you can reject the null hypothesis: the pattern in your sample is unlikely to have arisen by pure chance, so the experiment likely had a real effect.
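To make this concrete, here is a minimal sketch in plain Python (no external libraries) of computing a p-value exactly for a simple null hypothesis: that a coin is fair, after observing 60 heads in 100 flips. The coin scenario and the numbers are illustrative assumptions, not from the article.

```python
from math import comb

def binom_p_two_sided(k, n, p=0.5):
    """Two-sided exact binomial p-value for k successes in n trials."""
    # Probability of every possible outcome under the null hypothesis
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    # Sum the probabilities of all outcomes at least as unlikely as the observed one
    return sum(pr for pr in probs if pr <= observed + 1e-12)

# 60 heads in 100 flips of a supposedly fair coin
p_value = binom_p_two_sided(60, 100)
print(round(p_value, 4))  # about 0.0569
```

Here the p-value lands just above 0.05, so at alpha = 0.05 we fail to reject the null hypothesis that the coin is fair.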
So what can go wrong?
Although a p-value below alpha lets us reject the null hypothesis, we might still be making a mistake if the experiment itself is not showing the right picture. In other words, the result might be a false positive.
As we explore the intricacies of p-hacking techniques, a growing realization emerges about the ease with which one can inadvertently or deliberately stray into these practices. This highlights the crucial significance of receiving proper statistical training and maintaining an unyielding dedication to upholding scientific integrity. The primary goal should be to present the data, avoiding any inclination to shape it according to our preferences.
P-hacking possesses the potential to undermine the very core of scientific research silently. However, there is no need to worry. By adhering to certain best practices, one can ensure they stay on the correct path:
Before conducting any research, develop a comprehensive and well-structured plan encompassing your hypotheses, data collection strategies, and analysis procedures. This roadmap safeguards against the tempting path of p-hacking, where one resorts to trial-and-error techniques, manipulating variables and experimenting with different analyses until significant results are obtained. By adhering to a predetermined plan, you can uphold the integrity of your research and avoid any unintentional bias or manipulation that could compromise the validity of your findings.
Before initiating the study, make your research plan public. Doing so considerably reduces the temptation to deviate from your original goal in light of preliminary results. This open approach also signals to other researchers that your work can be taken more seriously, since it shows your dedication to impartial and unbiased study. Use a pre-registration platform to document and publish your research plan, ensuring more accountability and legitimacy in the scientific community.
Embrace honesty as your most helpful ally in research by keeping track of all your efforts, including the unsuccessful ones. This dedication to openness necessitates the establishment of comparison groups in advance and delivering a thorough report containing all relevant variables, circumstances, data exclusions, tests, and measurements. By doing this, you can ensure that your study is transparent and that your findings are trustworthy, helping you build confidence in the scientific community.
The prevalence of "p-hacked" research frequently results from ignorance of the dangers rather than deliberate bad intentions. It is essential to understand statistical concepts and be conscious of the risks associated with p-hacking to protect against such practices. Continuous learning belongs in every researcher's toolset, since it improves their capacity to conduct solid research, and understanding statistics is essential to that goal.
Understanding that any choice made during statistical analysis might impact the outcomes is critical. P-hacking may not necessarily be an intentional act of dishonesty, but it typically results from a lack of statistical knowledge.
We can ensure the reliability of our research and the validity of our conclusions by following these recommended practices. Avoiding p-hacking is essential for maintaining the integrity of the overall scientific method and obtaining reliable results. Adopting these principles strengthens research’s position as a reliable source of information and insight and helps keep research authentic.
You must be wondering what p-hacking is. P-hacking is the misuse of statistical analysis, intentional or not, to obtain a significant result and falsely conclude that the null hypothesis can be rejected. Let's understand this in detail.
Suppose we have 5 candidate coronavirus vaccines and need to check which one actually reduces patients' recovery time. So we run a hypothesis test for each of the 5 vaccines, one by one, with alpha set to 0.05. If the p-value for any vaccine comes out below that, we can reject the null hypothesis. Or can we?
Say, Vaccine A gives a P-Value of 0.2, Vaccine B gives 0.058, Vaccine C gives 0.4, Vaccine D gives 0.02, Vaccine E gives 0.07.
From the above results, a naive conclusion would be that Vaccine D is the one that significantly reduces recovery time and can be used as the coronavirus vaccine. But can we really say that just yet? No. If we do, we might be p-hacking, because this could be a false positive.
Okay, let's look at it another way. Suppose we have a Vaccine X that we know for certain is useless and has no effect on recovery time. We still carry out 10 hypothesis tests on different random samples, each with alpha set to 0.05. Say we get the following p-values in our 10 tests: 0.8, 0.7, 0.78, 0.65, 0.03, 0.1, 0.4, 0.09, 0.6, 0.75. The test with a surprisingly low p-value of 0.03 would have made us reject the null hypothesis, when in reality the vaccine has no effect.
So what do these examples show? In essence, setting alpha = 0.05 means accepting a 5% false-positive rate: even when the null hypothesis is true, about 5% of tests will wrongly reject it, as above.
One way to build confidence would be to repeat the tests: the more often a result replicates, the more safely you can reject the null hypothesis. But more tests also mean more false positives (5% of all tests in our case): 5 out of 100, 50 out of 1,000, or 500 out of 10,000! This is known as the Multiple Testing Problem.
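You can see this effect with a quick simulation. A standard statistical fact is that when the null hypothesis is true, p-values are uniformly distributed between 0 and 1, so drawing uniform random numbers stands in for running many tests of a truly useless treatment. The simulation below is an illustration, not from the article.

```python
import random

random.seed(42)  # for reproducibility

# Under a true null hypothesis, p-values are uniform on [0, 1],
# so we can simulate 10,000 tests of a useless treatment directly.
n_tests = 10_000
p_values = [random.random() for _ in range(n_tests)]

# Count how many tests falsely look "significant" at alpha = 0.05
false_positives = sum(p < 0.05 for p in p_values)
print(false_positives)  # roughly 5% of 10,000, i.e. about 500
```

Even though the treatment does nothing, around 500 of the 10,000 tests cross the 0.05 threshold purely by chance.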
One way to tackle this problem is to adjust the p-values to control the False Discovery Rate (FDR). An FDR procedure such as Benjamini-Hochberg mathematically inflates each p-value according to its rank among all the tests, so p-values that came out low purely by chance may get adjusted to values above 0.05.
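As a sketch of how such an adjustment works, here is the Benjamini-Hochberg step-up procedure in plain Python (one common FDR method; the choice of procedure here is an assumption), applied to the five vaccine p-values from the example:

```python
def bh_adjust(p_values):
    """Benjamini-Hochberg adjusted p-values for controlling the FDR."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, scaling each by m / rank and
    # enforcing that adjusted values never increase as the rank drops.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, p_values[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Vaccines A-E from the example above
raw = [0.2, 0.058, 0.4, 0.02, 0.07]
print([round(q, 3) for q in bh_adjust(raw)])  # [0.25, 0.117, 0.4, 0.1, 0.117]
```

After adjustment, even Vaccine D's p-value of 0.02 rises to 0.1, so none of the five results survives the 0.05 threshold.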
Now consider the case from the example where Vaccine B gave a p-value of 0.058. Wouldn't you be tempted to add some more data and retest, to see if the p-value decreases? Say you add a few more data points, and the p-value for Vaccine B comes out to 0.048. Is this legitimate? No, you would again be p-hacking. We cannot add or change data to suit our tests after the fact; the sample size must be decided before performing the tests, by doing a power analysis.
Power analysis tells us the sample size we need to have a high probability of correctly rejecting a false null hypothesis, so we don't get fooled.
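Here is a rough sketch of how a power analysis determines sample size, using only the standard library and a normal approximation to the two-sample t-test. The effect size of 0.5 and the 80% power target below are illustrative assumptions, not values from the article.

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample test
    (normal approximation to the t-test)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z(power)           # quantile corresponding to the desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

# Detecting a medium standardized effect (Cohen's d = 0.5)
# with alpha = 0.05 and 80% power
print(sample_size_per_group(0.5))  # 63 participants per group
```

Note how the required sample size grows quadratically as the effect you want to detect shrinks: halving the effect size roughly quadruples the n per group.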
One more mistake you should avoid is changing alpha after performing the experiment. Once you see a p-value of 0.058, you may wonder: what if my alpha were 0.06? But alpha must be fixed before the experiment starts and cannot be changed afterward.
P-hacking harms research studies, frequently without the researcher's knowledge, and data dredging has several well-known negative impacts in data science and on machine learning models.
Hypothesis testing and p-values are tricky subjects that need to be carefully understood before drawing any conclusions. Statistical power and power analysis are an important part of this and need to be settled before starting the tests.
If you are curious to learn about data science, check out IIIT-B & upGrad’s PG Diploma in Data Science which is created for working professionals and offers 10+ case studies & projects, practical hands-on workshops, mentorship with industry experts, 1-on-1 with industry mentors, 400+ hours of learning and job assistance with top firms.