With increased media exposure and ever-growing fan bases, businesses in the world of sport (e.g. sports teams) are striving to deliver world-class results and performance. The resulting pressure on professional athletes to perform well is high, creating a need for optimal development from all practitioners and a competitive edge over the opposition [2,3]. However, optimal performance can only be achieved when adequate knowledge is provided by supporting disciplines (e.g. sports science). Consequently, the demand for evidence-based research is increasing, with the ultimate aim of evaluating the efficacy of sport programmes.
This expanding evidence base is adding to the academic credibility of sport development by challenging existing knowledge and improving our understanding of the issues that determine the value and impact of interventions for developing sport. By focusing on evidence-based research, practitioners can appraise, with reasonable confidence, the success of a programme in relation to its objectives. Ultimately, this can strengthen the foundation for future developments and aid decision-making regarding the allocation of resources (e.g. time, budget and equipment).
What is Statistical Significance?
Evidence-based practice is supposed to enhance practical decision-making, but interpreting research is often difficult for practitioners. As such, clinical research is only of value if it is properly interpreted. Underpinning many scientific conclusions is the concept of ‘statistical significance’, which is essentially a measure of how likely the observed findings are to have arisen by chance alone. In other words, statistical significance reflects the probability that a difference as large as the one observed between two groups would occur by chance if there were no real effect, rather than being driven by the factor of interest [9, 10]. When a finding is significant, it means that chance alone is an unlikely explanation, so you can be reasonably confident that the effect is real and not simply a quirk of the particular sample you happened to choose.
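The logic above can be made concrete with a small simulation. The sketch below (illustrative only, using invented normally distributed scores and a simple two-sided z-test with a known standard deviation) shows what "getting lucky in choosing the sample" looks like: even when two groups are drawn from exactly the same population, about 5% of experiments still produce p &lt; 0.05 purely by chance.

```python
# Minimal sketch: when there is NO real effect, "significant" results
# (p < 0.05) still appear in roughly 5% of experiments by chance alone.
import math
import random

def z_test_p_value(group_a, group_b, sd=1.0):
    """Two-sided z-test p-value for the difference in means,
    assuming both groups share a known standard deviation `sd`."""
    n_a, n_b = len(group_a), len(group_b)
    diff = sum(group_a) / n_a - sum(group_b) / n_b
    se = sd * math.sqrt(1 / n_a + 1 / n_b)  # standard error of the difference
    z = diff / se
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
n_experiments = 2000
false_positives = 0
for _ in range(n_experiments):
    # Both groups come from the SAME distribution: no true difference exists.
    a = [random.gauss(0, 1) for _ in range(20)]
    b = [random.gauss(0, 1) for _ in range(20)]
    if z_test_p_value(a, b) < 0.05:
        false_positives += 1

print(f"False-positive rate: {false_positives / n_experiments:.3f}")
```

The false-positive rate lands close to the 5% significance level, which is precisely what α = 0.05 promises: a 5% risk of declaring significance when nothing is really there.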
The significance level that is widely used within academic research is 0.05 (Figure 1), which is often reported as ‘p = 0.05’ or ‘α = 0.05’. Put more simply, a result is declared significant when there is less than a 5% probability that a difference of the observed size would occur by chance alone. For example, if you were to analyse a set of data looking at reaction times following caffeine consumption and obtained a significance value of p = 0.03, this would mean that, if caffeine had no real effect, a difference this large would be expected in only 3% of samples. Because 0.03 falls below the 0.05 threshold, the result is statistically significant, and you can be fairly confident that caffeine consumption genuinely improves reaction time.
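To show where such a p-value could come from, the sketch below runs a permutation test on a hypothetical version of the caffeine example. The reaction times are invented purely for illustration, and the permutation test is just one standard way to estimate a p-value: it asks how often a random relabelling of the participants produces a group difference at least as large as the one actually observed.

```python
# Hedged illustration: invented reaction times (milliseconds, lower = faster)
# and a two-sided permutation test to estimate the p-value.
import random

caffeine = [248, 251, 243, 255, 246, 249, 244, 252]
placebo  = [259, 254, 262, 257, 265, 253, 260, 258]

def mean(xs):
    return sum(xs) / len(xs)

# Positive if the caffeine group reacted faster on average.
observed = mean(placebo) - mean(caffeine)

random.seed(1)
combined = caffeine + placebo
n = len(caffeine)
n_shuffles = 10_000
extreme = 0
for _ in range(n_shuffles):
    random.shuffle(combined)  # relabel participants at random
    diff = mean(combined[n:]) - mean(combined[:n])
    if abs(diff) >= abs(observed):  # two-sided: either direction counts
        extreme += 1

p_value = extreme / n_shuffles
print(f"observed difference: {observed:.1f} ms, p = {p_value:.4f}")
```

Here the estimated p-value falls well below 0.05, so the difference would be declared statistically significant; had it come out at, say, 0.20, chance alone could not be ruled out as the explanation.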