Science isn’t boring – it’s the pursuit of truth, and that is incredible


This article was inspired by, and written during, a period when it became crystal clear how hard research actually is, how long the process takes, and how much internal fortitude it takes to complete even one paper – because I was struggling. I’d like it to be a little inspirational, so that when I look back on it during times when I HATE science, I can remember why we do what we do in the pursuit of truth. There is a risk, though, that it comes across as a little indifferent to the challenges the scientific community faces and to the efforts science communicators make to translate research findings for the masses. Make of it what you will – maybe I’m being taught the right lessons if I’m falling into line with this somewhat old-school mentality. Enjoy – GW

Science is not boring. No part of science is boring. Every part of the process is deliberate and requires adequate care. It requires precision and concentration. Without this focused approach, the end product is diluted, and diluted conclusions cloud the already cloudy world of science. They blunt our ability to see the picture clearly. That picture is the truth. The observable. The patterns of our existence. In my case, it’s the patterns of our physiology. Of the physiology of people who exercise. Of the physiology of the people who exercise at the brink of the human limit – the trained humans who have learnt how to play a game and test the limits of their potential. I get to test them and determine their truth. There’s absolute magic in that.

Yet, as scientists, we often play down parts of the process. Statistical analysis is the easiest to identify, but robust study design is another that I believe is overlooked (when it really should not be). Even as I write that, I have to stop myself saying “I understand maths isn’t for everyone”. Often we hear this in presentations – “I won’t bore you with that part, I’ll skip to the interesting things”. Unfortunately, good science doesn’t work like that. It relies on understanding how the answer was calculated. A linear mixed model instead of a two-way repeated-measures analysis. A Bayesian approach rather than a frequentist approach. Sample size was n = what? Our results and interpretation rely on an understanding of the limitations of the methods we use. By skipping it and “not boring” those listening to us, we do them two disservices.

The first is that we trivialise the importance of understanding statistical analyses and of keeping the analysis in mind when determining study design (which I will elaborate on in a second). Underpowered studies do not allow for robust hypothesis testing and prediction; instead they lead to unsure conclusions and wasted resources. Poor data collection methods that produce data which cannot be analysed effectively by an established statistical method increase the time it takes to complete a project and ultimately reduce the quality of the results. Without understanding the limits of statistical analyses and why precise data collection is required, we diminish our ability to answer meaningful questions quickly and astutely.

Secondly, when we trivialise statistics we reduce the power of our results – using the best analysis method available allows the audience to understand that the question has been tested thoroughly, via both the study design and the mathematical modelling. Are these methods sometimes complex? Yes. Just because maths isn’t for everyone, must we dilute the explanation of our analysis and the analysis itself? Absolutely not. The scientific community should understand the maths. They should know what their results mean. Or, at the very least, spend the time to look it up and try to learn and understand. So that when the results are published, the public can be confident in their determination of the truth. When someone reads an article, it should be clear that someone tested a theory and it was busted/plausible/confirmed (MythBusters reference – check). If we don’t defend both the process and the complexity of scientific principles (I chose statistical analysis, but choose any of the research processes if you’d like), then we’re lowering the bar, and I don’t think that’s right, nor should it be encouraged.
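To make those two points concrete – statistical power, and choosing a model that matches the design – here is a minimal sketch in Python using statsmodels. It is not from any real study: the outcome measure, the twelve athletes, and the effect sizes are all made up, purely to show what “n = what?” and “mixed model vs repeated-measures ANOVA” look like in practice.

```python
# A hypothetical sketch, not from any real study: (1) an a priori power
# analysis, and (2) a linear mixed model on made-up repeated-measures data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.power import TTestPower

# (1) Power: how many paired observations are needed to detect a moderate
# within-subject effect (Cohen's d = 0.5) at alpha = 0.05 with 80% power?
n = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print("required paired n:", int(np.ceil(n)))  # ~34; fewer = underpowered

# (2) Made-up data: 12 athletes, each measured pre/post in two conditions.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "athlete": np.repeat(np.arange(12), 4),
    "condition": np.tile(np.repeat(["control", "intervention"], 2), 12),
    "time": np.tile(["pre", "post"], 24),
})
baseline = rng.normal(60, 5, 12)  # per-athlete baseline level (invented)
df["outcome"] = (
    baseline[df["athlete"]]                          # athlete's own level
    + 2.0 * ((df["condition"] == "intervention")
             & (df["time"] == "post"))               # simulated training effect
    + rng.normal(0, 1.5, len(df))                    # measurement noise
)

# A linear mixed model: a random intercept per athlete handles the repeated
# measures; the condition-by-time interaction is the effect of interest.
model = smf.mixedlm("outcome ~ condition * time", df, groups=df["athlete"])
print(model.fit().summary())
```

The specific numbers don’t matter; the point is that the sample size and the model choice are stated up front and can be defended, rather than skipped over as the boring part.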

I write this having only conducted extremely applied research which, by my own standards, may never get published. Regardless, doing so has taught me something important. To be published you have to do great work – it has to be worth publishing. It can’t be hodge-podge data thrown through an ANOVA to see what turns up. It has to be good. I’ll try to get published with the data I have – like all scientists, I want my work published, and for others to read my data and learn from it. Whilst the work is applied, the conclusions raise interesting theories about physiology and may open a can of worms in some areas. However, I’m realistic and want to do better on my next try – that feeling is what I’m afraid of losing if there’s any complacency in the standards. I am afraid that I (and, collectively, the generation of researchers being raised right now) will lose the resilience required to get back on the horse, get better and achieve the standard – not the first time, the second, the third or some magical number of times (10 000 hours, anyone?) but whenever it happens. I am afraid that by diluting the quality, science becomes difficult to sell as truth – because the reality is, how do you know it is truth if you haven’t tested outcomes robustly enough? That’s what science may become if we get rid of the boring, the mundane and the downright hard parts of it – a sloppy sludge of “maybes”, “in some cases” and “mights”. Some could argue that in the sport science space that’s where we are headed right now – I say not on my watch!
