Does the concept of science fit into a sport organisation? My answer as it stands in 2019

At the start of this year I was so excited to begin research – I’m still excited now, 10 months on and a week out from my first data collection, but now I’m more aware of the realities of my situation. I’m in a position where I know that the data I collect won’t answer a question completely, but it will provide some information that helps answer a problem to a certain extent. That’s the life of a scientist: provide one piece of information, one step at a time, until the puzzle is as close to solved as it can be. I’m also in a position where I’ve seen data being collected over a long period of time and have attempted to help it inform practice, in the form of both exercise prescription and coaching philosophies. It’s been an interesting year witnessing the realities of organising and managing various data collection processes, and it has challenged my ability to work as a scientist and as a manager, both of myself and of other people. Reconciling the scientific process with the reality of sport – teams want to win and data wants to be seen in real time – has been the challenge I’ve struggled with the most. You see, as I see it, science is a discipline whose sole responsibility is to collect valid and reliable data that helps answer a research question, which in turn increases our understanding of a phenomenon that can be observed. There are a couple of points I want to elaborate on here.

The first is that science relies on observation of a phenomenon/test/result, as well as observation of the factors that lead to that result, in order to draw a connection between the two. Sport is a place where uncertainty reigns supreme and variability is high – performance and physiology are dictated by so many factors that it seems silly to me to try to collect data on them all, yet some teams try. I’d imagine trying to find the differences through the noise would be draining – creatine kinase before training, functional movement screens, immunological assessment – I don’t actually think it’s worth the investment of time. Surely these assessments would also show collinearity – the body’s processes don’t switch off one by one when fatigued or adapting; it’s a holistic physiological shift. Then there’s the resources issue. I saw a paper recently suggesting heat shock proteins could be used as an exercise monitoring tool, but regular access to reliable ELISAs can drain a budget. I haven’t read enough papers or seen a large, regular (daily) testing battery in action to know for sure, but knowing what I know about the limited work we do at Bond University Rugby Club, and the effort it takes to draw reasonable conclusions about each player for coaches and athletes to understand, I don’t think that good science in sport is founded on collecting a large variety of data. Instead I feel that concentrated projects and tactical data collection with a clear purpose may be where science has its best place in a sporting organisation – but I’ll get to that soon.
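
To make the collinearity point a bit more concrete, here’s a tiny sketch in Python (the marker names and numbers are completely made up for illustration, not real club data): if one underlying fatigue state drives all the markers, their correlations end up high and most of the tests are telling you the same thing.

```python
# Toy illustration of collinearity between common monitoring markers.
# The markers and numbers are invented for the example; the point is that
# if one latent "fatigue" state drives them all, they carry largely the
# same information.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
days = 60
fatigue = np.cumsum(rng.normal(0, 1, days))  # latent fatigue state (arbitrary units)

markers = pd.DataFrame({
    "creatine_kinase": 200 + 80 * fatigue + rng.normal(0, 40, days),
    "wellness_score": 7 - 0.8 * fatigue + rng.normal(0, 0.7, days),
    "cmj_height_cm": 40 - 1.5 * fatigue + rng.normal(0, 1.5, days),
    "hrv_rmssd": 70 - 6 * fatigue + rng.normal(0, 8, days),
})

# Pairwise correlations: in this toy world they are all strongly related,
# which is the collinearity problem - more tests, not much more information.
print(markers.corr().round(2))
```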

The second point about science is that we rely on seeing patterns repeat to reach a reasonable conclusion – we live and die on the premise that what we observe must be reliable before we apply it generally to the people we work with. Yet the best performers are outliers who do ridiculous things when asked – that’s why people love sport: the players and athletes keep pushing the boundary of how good the best can be, and if we’re not encouraging that we lose out on our performance goals. We’re looking to push all athletes into the 5% that sits away from the herd – the significant few (one-tailed), if you will. And once they’re there, we’re pushing them further again – into the top 5% of their new peers. So a general solution in this case could be a bad solution – to be repeatable means to be repeatably average, and average doesn’t win matches. Training plans (worst-case scenarios, conditioning prescription) built on average data will either overtrain or undertrain athletes unless they happen to sit on the average. So switching to accept that, theoretically, everyone is a case study is probably a good step for a scientist – this is generally accepted in the profession, but I haven’t read about or seen a system where it’s executed particularly well. I’m sure they’re out there and I just need to find them to learn how to do it, but it’s difficult to source examples unless you work for those organisations. Additionally, if we’re trying to gather data to help understand what leads to peak performance, we need to accept that even individually the case study extends to adaptation and fatigue, right? So a linear, exponential or any other traditional model won’t predict much unless we know how similarly profiled athletes have responded to the same stimuli, or unless the athlete happens to sit close to the average. So how can we quantify that quickly, efficiently, without collecting too much data over a long period of time, and then manage it and provide feedback to those who need it? The answer, I feel, is that we may not be able to – and the answer definitely doesn’t lie in more information. It lies in targeted analysis, which takes time, brain power and skill, and happens at a slow pace. In a world where information needs to be readily available (or seems to need to be) and an industry where “the edge” is searched for continuously, science, being a patient, slow beast, may be a little out of place. Unless the organisation is patient and slow too, and understands that some questions may not be answered in 2 years, let alone 5.
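
As a rough, back-of-the-envelope illustration of the average problem (the speeds and the 90% drill intensity below are invented for the example, not a real prescription), here’s what happens when one conditioning speed is set off the squad mean:

```python
# Toy example: prescribing one conditioning intensity off the squad average.
# Speeds are hypothetical; the point is how far a single "average"
# prescription sits from each individual's capacity.
import numpy as np

# Hypothetical maximal aerobic speeds (km/h) for a small squad
players = ["A", "B", "C", "D", "E", "F"]
mas = np.array([15.5, 16.2, 17.8, 18.4, 19.6, 20.3])

squad_target = 0.9 * mas.mean()   # one drill speed for everyone: 90% of the squad mean
individual_target = 0.9 * mas     # what each player would get if prescribed individually

for player, own in zip(players, individual_target):
    diff_pct = 100 * (squad_target - own) / own
    tag = "over" if diff_pct > 0 else "under"
    print(f"Player {player}: squad target {squad_target:.1f} km/h vs own {own:.1f} km/h "
          f"({diff_pct:+.1f}%, {tag}-prescribed)")
```

Only the players who happen to sit near the mean get a sensible dose; everyone else is pushed too hard or not hard enough.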

I’m starting to ask myself whether I, as a scientist, am approaching the way I work all wrong. I’m the first to put my hand up and say I am so fresh to this world that I have only a limited idea of what I should be doing. Even so, I know the work I’ve done – hydration work on game days, session RPEs, countermovement jump data collection and analysis, week-to-week performance analysis – has been well received by coaching staff. I’m just struggling a little to work out how to communicate the challenges I’ve outlined above to the people I work with. Even if I do say this stuff, I don’t know whether it dilutes the “product” I’m selling – and if it does dilute my value, then I feel like the product that management wants is not the product that we can provide as scientists, no matter what level of sport you work in.

I write this as someone who’s just starting to realise how broad sport science is as a discipline, but also how young a discipline it is. As far as I can tell, since about 2010 the idea of science in sport has become so popular that it seems to be chased as a cool thing to do, with little or no understanding of how the gadgets and gizmos relate to training and game performance – am I wrong? Is there a system where science has been integrated into an organisation to allow for long-term answers to general questions, plus short-term servicing tests that feed data back to management and athletes in a responsible way? I’d love to know what it looks like if there is one.

I feel like there are a couple of things I’m going to do to get at these issues I have with science in sport – whether they’re the right call or not, we’ll have to wait and see. The first is to ensure that the program I’m working with now sets up a sport science arm that answers both small and big questions. What I mean by that is we endeavour to collect longitudinal data that we know relates to physiology, performance or biomechanics in the contexts we want athletes to excel in – for example anaerobic performance under fatigue, decision making under fatigue, or aerobic performance at a particular intensity. Whatever benchmarks the club sets, my job will be to have a test, record the results and report the areas where we excel or where we may require more work. We will attempt to maximise (NOT OPTIMISE) the capacity of our athletes across all those parameters. In the short term the goal is simple – give coaches the data they need to review each game, and give the S+C coach the data they need to program a week so that players are overloaded at a stimulus that increases their capacity over time. These outcomes will need to be discussed, but as the keeper of the data (that should be an official title, not sport scientist) I think that’s a reasonable goal to have considering the challenges I’ve outlined above. I’m not sure if it’s the right start or not, but it’s a start – as I come to understand more about data I’ll expand the analysis I can undertake, but for now this system will have to do.
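
For what it’s worth, the “keeper of the data” job I’m describing might end up looking something like the sketch below – a minimal Python example where every player name, test and benchmark value is a hypothetical placeholder – comparing each player’s latest result against a club benchmark and their own best, and flagging where more work is needed.

```python
# Minimal sketch of the reporting loop: compare each player's latest test
# result against a club-set benchmark and their own history, then flag gaps.
# All names, tests and numbers here are hypothetical placeholders.
import pandas as pd

results = pd.DataFrame({
    "player": ["A", "A", "B", "B", "C", "C"],
    "test": ["yo-yo_level"] * 6,
    "date": pd.to_datetime(["2019-08-01", "2019-10-01"] * 3),
    "score": [17.2, 18.1, 16.4, 16.2, 19.0, 19.4],
})
benchmark = 18.0  # assumed club benchmark for this test

# Latest result per player/test, plus each player's historical best
latest = (results.sort_values("date")
                 .groupby(["player", "test"], as_index=False)
                 .last())
history_best = (results.groupby(["player", "test"])["score"]
                       .max().rename("best").reset_index())

report = latest.merge(history_best, on=["player", "test"])
report["vs_benchmark"] = report["score"] - benchmark
report["flag"] = report["vs_benchmark"].apply(
    lambda d: "needs work" if d < 0 else "on track")

print(report[["player", "test", "score", "best", "vs_benchmark", "flag"]])
```

Nothing fancy – the value would be in running it consistently and handing the flags to the coaches and the S+C coach in language they can act on.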

The year so far

JUNE 2019 Where to begin. It’s pretty much June now, and while writing this I’ve realised for the first time that I’ve been involved in research for 6 months. I started postgraduate research in January and from the first week it has been manic. I think if I’m to do all my experiences justice, I need to dedicate a post to each of them. But overall, these 6 months have been incredible. Tough, yes. Rewarding, most certainly. Most of all, I feel like I’ve learnt more than a few lessons that will stand me in good stead for the rest of the year at the very least. In any case, looking back and thinking about how much I’ve learnt makes me feel like I’ve aged about 5 years, when in reality it’s been only 6 months.

One of the biggest learning curves was realising that there are always learning curves. Programs need to be established, built and then constantly refined. Skills need to be developed and then maintained. As do relationships. Most importantly though, knowledge will always grow faster than we can learn. When I was first introduced to scientific journals, my supervisor V.C. pointed me to a table-of-contents subscription service that gives free access to the titles and abstracts of papers across journals. So for the past 6 months emails have been flooding my inbox as all sorts of papers are published – they pile in especially at the start of each month. Through this mass of information I sift for what I’m looking for: anything related to environmental science and its relation to exercise performance. As I do so, I skip over published papers, each one a scientific experiment planned, conducted and written up by either a student like me or an academic who has dedicated their life to solving the unanswered questions in the world. And that’s with me only subscribing to sport physiology journals – I haven’t even scratched the surface of technical and tactical skills, sport biomechanics and skill acquisition. Knowing the number of papers I skip over to find the single paper I can use in my studies reminds me of the nature of information: it’s everywhere and all consuming. This means that being overwhelmed is a constant state of being in the research world. As I write this, I know I’ve just emerged from a period of information overload that came with the confirmation process. What I’ve realised, though, is that while the information may be confusing and challenging at the time, once you’re through to the other side you’re all good. You just need to work hard to get there. It’s stuff like that that keeps me going – knowing that eventually I’ll emerge knowing a lot more about the world around me, by observing how sportspeople work on a day-to-day basis and working with them to help them get to their goals.

The other component that stands out for me is the support of the people around me and the way they’ve encouraged me to pursue my own interests and learn concepts on my own. I’ve been extremely lucky to have two very good supervisors who have helped me get to where I am in the timeframe that was set – which was very quick relative to a lot of other people. In addition, the HDR students who work around me are always offering advice and pursuing excellence in their own fields, which is reassuring as I try to make my own way into the field I want to go into. They’ve helped me refine my ideas and worked with me to develop the outputs I’ve had to create in such a short amount of time. Overall, the support of these people has got me to where I am, and without it the road would have been a lot lonelier, a lot harder and a lot longer, I’m sure.

Over the next few months I’ll hopefully add to this with more posts about what I’m learning and observing. For now though, I think the year so far has taught me those two things – you’re always learning, and you need quality people around you to get quality output. We’ll see what other lessons the rest of the year teaches me.

JANUARY 2021 Those two lessons still stand as the most important – the only one I’d add is to stay calm under pressure and focus on the task at hand when you’re in the moment. Refining those skills helps keep the distractions at bay and prevents the whole overwhelmed feeling.