Injury rate and prevention in elite football: let’s first search within our own hearts

Buchheit M, Eirale C, Simpson BM and Lacome M. Injury rate and prevention in elite football: let’s first search within our own hearts. BJSM, In press.

 


Full text here

Research and discussions about injury rates and their prevention in elite football are among the hottest topics in the medical and sport science literature. Over the past years, there has been an explosion in the number of publications, including surveys,1 observational, retrospective or prospective2 studies, training interventions and various types of expert opinions and commentaries.3 This array of information is likely useful to improve our understanding of what the best practices may be, and in turn, to increase our ability to better prepare, manage and treat players. However, a recent survey has shown that 83% of UEFA clubs do not follow evidence-based prevention programs.1 It was also shown that hamstring injuries have kept increasing over the last 13 years.2 Taken together, those two papers may suggest that the majority of elite club practitioners disregard research findings1 and may therefore be the ones to blame for those increased injury rates.2

Making supporting staff and coaches responsible for those injuries is easy, especially when considering their perceived typical personality traits (i.e., so-called Type 2,4 with high egos and little open-mindedness or willingness to learn – “why would they bother applying new study findings?”). While this may sometimes be true, the reality is that elite club practitioners are rather often at the frontline of new treatment options and training programs. We therefore believe that there may be alternative, less naïve ways to look at those latter findings,1 2 which could suggest that club practitioners may not be as bad as they appear from those papers. In fact, Type 2 researchers4 often discard new research findings that contradict their own paradigm (confirmation bias); they may also be more prone to pursue their old research topics in the name of security and comfort,4 missing in turn potentially important advances in the field. Two examples highlighting how those attitudes have increased the disconnect and misunderstanding between research findings and real practice are discussed below.

  1. Are club injury prevention programs that bad? What if practitioners were just doing things differently (and maybe better) than what is restrictively ‘evidence-based’?5 Survey questionnaires and Delphi consensuses are better suited to well-identified, group-based approaches and generic answers, whereas highly individual and multifactorial approaches6 are more difficult to register and report. In real practice, people use multiple types of exercises and vary volume and intensity; they adapt their programming as a function of player needs, profile, context, game schedule, acute loading, availability of tools on site (e.g., away games, camps), beliefs, experience and many other considerations. Some players have their own external-to-the-club physios and fitness coaches, who obviously don’t complete the questionnaires. Clubs encouraging these individual-player practices end up being classified as “non-compliant”1 with evidence-based programs, but does this mean that what those supporting staff do has no value? What about “best practice”? In fact, the field context is often overlooked by research recommendations, and when it comes to training and working with elite athletes, the science doesn’t always apply.5 It is, however, worth mentioning that elite practitioners’ attitude toward innovation may be a double-edged sword; new practices (not yet researched) may also sometimes, in retrospect, turn out not to be that effective. Finally, since randomized controlled trials are impossible to implement in an elite population, the ‘evidence’ is often based on interventions conducted in sub-elite/amateur populations. The response of both muscle strength and architecture to training is likely training-status dependent; therefore, extrapolating research findings obtained in sub-elites to what could be expected in highly-trained players remains hazardous. This ultimately questions the relevance of much of the research, and in turn, limits its adoption by top athletes.
  2. Have injuries really increased over the past 13 years? The 2-4% increase in injury rate2 reported between the years 2000 and 2013 was established by reporting injury occurrence as a function of both training attendance and game participation, as measured by minutes/hours of training and play. First, as discussed recently,7 the fact that team doctors may now tend to adopt a more conservative approach and withdraw players from training when early muscle warnings appear (but before an injury is actually registered) may artificially increase the training injury rate.7 Second, when the slightly increased match injury occurrence2 is reported as a function of the corresponding high-intensity running demands over the same period (which have moderately-to-largely increased8) – and not simply relative to overall playing time2 – it appears that match injury incidence has in fact slightly decreased (~20%) over time (Table 1; a simple numerical sketch of this re-normalisation is provided below)! This small but substantial decrease in injury rate over the past years lends support to the individual prevention programs and load management strategies implemented in clubs, which may turn out to be more effective than previously thought. In other words, the individual, multifactorial, context-driven approaches implemented in those ‘evidence-based non-compliant’ clubs may ultimately work at decreasing injuries during matches. Using specific running demands rather than playing time to examine injury rate (Table 1) actually makes a lot of sense given the strong association between high-speed running and injuries,3 and should probably be extended to the majority of epidemiological investigations in team sports.
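
To make this re-normalisation argument concrete, here is a minimal numerical sketch with made-up figures (they are not the actual values behind Table 1 or the UEFA injury study): a ~4% rise in match injuries combined with a ~30% rise in high-intensity running volume turns into a ~20% drop in incidence once injuries are expressed per unit of high-intensity running rather than per hour of play.

```python
# Illustrative re-normalisation of match injury rates (hypothetical numbers,
# not the actual values of Table 1 or of the UEFA injury study).

def incidence_per_1000h(injuries, exposure_hours):
    """Classic incidence: injuries per 1000 h of match exposure."""
    return injuries / exposure_hours * 1000

def incidence_per_100km_hir(injuries, hir_km):
    """Incidence normalised to high-intensity running (HIR) volume."""
    return injuries / hir_km * 100

# Hypothetical figures for an 'early' and a 'recent' period
periods = {
    "early":  {"injuries": 100, "exposure_h": 10_000, "hir_km": 9_000},
    "recent": {"injuries": 104, "exposure_h": 10_000, "hir_km": 12_000},  # ~4% more injuries, ~33% more HIR
}

for label, d in periods.items():
    per_h = incidence_per_1000h(d["injuries"], d["exposure_h"])
    per_hir = incidence_per_100km_hir(d["injuries"], d["hir_km"])
    print(f"{label:<6} {per_h:5.1f} injuries/1000 h | {per_hir:5.2f} injuries/100 km HIR")

# early   10.0 injuries/1000 h |  1.11 injuries/100 km HIR
# recent  10.4 injuries/1000 h |  0.87 injuries/100 km HIR  -> ~22% lower when HIR-normalised
```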

To conclude, those two examples suggest that injuries may in fact be better prevented and managed in clubs than it may appear from some research papers. Since elite environments are more complex than meets the eye, before making any recommendations we, both as researchers and practitioners, should never overlook the importance of context. Pragmatism, common sense and the consideration of best practices4 are often to be prioritized over oversimplified research findings.

Table 1

References

  1. Bahr R, Thorborg K, Ekstrand J. Evidence-based hamstring injury prevention is not adopted by the majority of Champions League or Norwegian Premier League football teams: the Nordic Hamstring survey. Br J Sports Med 2015;49(22):1466-71. doi: 10.1136/bjsports-2015-094826 [published Online First: 2015/05/23]
  2. Ekstrand J, Walden M, Hagglund M. Hamstring injuries have increased by 4% annually in men’s professional football, since 2001: a 13-year longitudinal analysis of the UEFA Elite Club injury study. Br J Sports Med 2016;50(12):731-7. doi: 10.1136/bjsports-2015-095359 [published Online First: 2016/01/10]
  3. Gabbett TJ. The training-injury prevention paradox: should athletes be training smarter and harder? Br J Sports Med 2016;50(5):273-80. doi: 10.1136/bjsports-2015-095788 [published Online First: 2016/01/14]
  4. Buchheit M. Outside the Box. Int J Sports Physiol Perform 2017;12(8):1001-02. doi: 10.1123/ijspp.2017-0667 [published Online First: 2017/11/02]
  5. Buchheit M. Houston, We Still Have a Problem. Int J Sports Physiol Perform 2017;12(8):1111-14. doi: 10.1123/ijspp.2017-0422 [published Online First: 2017/07/18]
  6. Mendiguchia J, Martinez-Ruiz E, Edouard P, et al. A Multifactorial, Criteria-based Progressive Algorithm for Hamstring Injury Treatment. Med Sci Sports Exerc 2017;49(7):1482-92. doi: 10.1249/mss.0000000000001241 [published Online First: 2017/03/10]
  7. Eirale C. Hamstring injuries are increasing in men’s professional football: every cloud has a silver lining? Br J Sports Med 2018 doi: 10.1136/bjsports-2017-098778 [published Online First: 2018/01/25]
  8. Barnes C, Archer DT, Hogg B, et al. The evolution of physical and technical performance parameters in the English Premier League. Int J Sports Med 2014;35(13):1095-100. doi: 10.1055/s-0034-1375695 [published Online First: 2014/07/11]

Science and Application of High Intensity Interval Training – The book

A preview of: The Science and Application of High Intensity Interval Training

Release date: towards the end of 2018


Most of us have heard of high-intensity interval training (HIIT) – that so-called time-efficient training strategy we should use to improve our cardiorespiratory and metabolic health and performance. Today, in fact, HIIT is considered one of the highest-interest topics in our field. But as I’ve written, our ability to translate much of the sport science research output produced in our academic institutions, including that on HIIT, into today’s elite sport practice is often considered limited by those it matters most to: the coaches and athletes in the trenches of high-performance sport.

This past year, my colleague Paul Laursen and I have taken on the project of putting a book together on the topic of HIIT and its real-world application in high-performance sport. The initiative stems from the popularity of a two-part literature review we wrote on the topic back in 2013 (Part I & Part II; see also the 2013 UKSCA presentation below). As shown in the infographic, Science and Application of High-Intensity Interval Training takes the reader through the history of HIIT and its traditional methods, before diving into our scientific understanding of how it can be used to elicit a response in certain biological targets of importance for sport performance. From there, we break down the key components of an HIIT session (intensity, duration, recovery, etc.) to learn how we can manipulate these factors to form different HIIT formats, or what we term our ‘weapons’. Our HIIT weapons can be further refined to hit the biological targets of importance, in line with the sport type, the individual, and the all-important sport context. Other important considerations covered include concurrent strength programming and health aspects, as well as load monitoring and individual response surveillance.

Most importantly, we are privileged to have gained the generous contributions of practitioners embedded within 20 high-profile individual and team sports, who tell us precisely how they apply the science of HIIT in their practice to maximise athlete and sport performance. The detail they offer will leave the enthusiastic coach and sport scientist breathless. It is our hope that the work will inspire a future generation of sport scientists to think outside the box when it comes to high-performance sport science research and HIIT application, and, critically, narrow today’s void between science and practice.

Videos from the 2013 UKSCA Conference

(many thanks to them for making the link available to all)

Nb: White man before Yann Le Meur‘s Infographics 🙂

Want to see my report, coach?

In this new paper I merged, and developed a bit further, the two IJSPP papers on 1) the stats that changed my life and 2) some personal thoughts on chasing the 0.2 (i.e., making an impact) in an elite setting.


The value and importance of sport science varies greatly between elite clubs and federations. Among the different components of effective sport science support, the three most important elements are likely the following:

  1. Appropriate understanding and analysis of the data, i.e. using only the most important and useful metrics and using magnitude-based inferences as the statistical framework. In fact, traditional null hypothesis significance testing (P values) is appropriate neither to answer the types of questions that arise from the field (i.e. assessing the magnitude of effects, often with small sample sizes) nor to assess changes in individual performances (a minimal illustration is sketched after this list).
  2. Attractive and informative reports via improved data presentation/ visualisation (‘simple but powerful’).
  3. Appropriate communication skills and personality traits that help to deliver data and reports to coaches and athletes. Developing such an individual profile requires time, effort and, most importantly, humility.
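
As a minimal illustration of the point made in item 1 – all numbers below are made up and are not taken from any cited study – the chances that a true change is harmful, trivial or beneficial can be estimated from the observed change, its uncertainty and the smallest worthwhile change (SWC), which tells a coach far more than a lone P value.

```python
# Minimal magnitude-based-inference-style sketch (illustrative numbers only,
# not data from any cited study). Given an observed change, its standard error
# and the smallest worthwhile change (SWC), estimate the chances that the true
# change is harmful, trivial or beneficial.
from scipy import stats

def mbi_chances(observed_change, standard_error, swc):
    """Return % chances that the true change is harmful / trivial / beneficial."""
    harmful = stats.norm.cdf(-swc, loc=observed_change, scale=standard_error)
    beneficial = 1 - stats.norm.cdf(swc, loc=observed_change, scale=standard_error)
    trivial = 1 - harmful - beneficial
    return tuple(round(100 * p, 1) for p in (harmful, trivial, beneficial))

# Example: an observed +1.2% change in performance, standard error of 0.8%, SWC of 1.0%
print(mbi_chances(1.2, 0.8, 1.0))  # roughly (0.3, 39.8, 59.9) -> 'possibly beneficial'
```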

Does beetroot juice really improve sprint and high-intensity running performance? – probably not as much as it seems: how (poor) stats can help to tell a nice story


A few tweets, re-tweets and emails from colleagues have caught my attention within the last 24 hrs, all pointing toward a new study showing improvements in sprinting and high-intensity intermittent running performance after dietary nitrate supplementation (beetroot shots) (1). In the 36 team-sport players (training 5-10 h a week) who volunteered for the study, significant “improvements” in 5- (2.3%), 10- (1.6%) and 20-m (1.2%) sprint times and a 3.9% “increase” in high-intensity intermittent running performance were reported, after no more than 5 days of supplementation! (1)


To all practitioners who may read both the article (1) and the present blog post, the topic is obviously highly relevant; we are all looking for ways to improve our players’ running performance – even better if these improvements can be gained legally (no doping) and without (physical) effort. If you can convince yourself to commit to drinking an awful 70-ml beetroot shot daily for 5 days before an important competition, then you may have found a really cool and lazy way to get faster and fitter!!

However, before telling (again) every player at the club (who would systematically pass on beetroot because of its taste) to finally commit to drinking this stuff because it really works, I wanted to make sure it would be worth the effort, both for them and for me. After a deeper read of the paper and a closer look at the study design, the data analysis and the statistical approach, I realized that beetroot supplementation, within the context of the present study, may not be as promising as the title of the paper suggests. This is for at least two important reasons: 1) the somewhat limited magnitude of the “changes”, although significant, and 2) the questionable study design/data analysis, which doesn’t allow individual responses to be clearly accounted for and analyzed.

  1. The magnitude of the “improvement” may not be large enough to be meaningful. When considering the magnitude of the smallest worthwhile change (SWC) for different sprint distances (i.e., the minimum improvement likely to have an impact on the field, such as that required to be 20 cm ahead of an opponent to win a ball) (2), the changes reported in the present study are in fact either smaller than (5 m: study 2.3% vs SWC ~4%; 10 m: study 1.6% vs SWC ~2%) or just similar to (20 m: study 1.2% vs SWC ~1%) their respective SWC (2). Even for 20-m time, whose change equals the SWC, the chances for the “improvement” to be substantial may be no more than 50% at the individual level (when considering a typical error of measurement (TE) of the same magnitude as the SWC – while in fact the TE may actually be twice as large as the SWC for such a distance (2), decreasing further the likelihood of a substantial change). The same reasoning applies to the “increase” in Yo-Yo IR1 performance (+3.9%), whose SWC is generally twice as large (~+8% (3); +7% as 0.2 x SD in the present study). In conclusion, comparing the reported changes, although significant, with their specific SWC directly questions the practical impact, and in turn the usefulness, of beetroot supplementation in the context of the present study. These data illustrate once again that the use of null hypothesis significance testing (NHST) is clearly limited when it comes to assessing the actual performance benefit of a supplement or an intervention (4, see the blog on the topic) – in the present case the significant P value likely results from the large sample size (n=36) – different (and probably less misleading in the present case) conclusions would be drawn with smaller samples (i.e., n<15).
  2. The data analysis doesn’t allow individual responses to be clearly accounted for and analyzed. In fact, the authors simply chose to compare the sprint/Yo-Yo IR1 performances following beetroot supplementation with those following the placebo drink (post beetroot vs post placebo, via paired-samples t-tests)!? While it is not clear why such a limited approach was chosen, the proper way to analyze these data would be to look first at the within-condition changes and, more importantly, to compare those within-condition changes (i.e., the between-condition difference in the changes – a typical crossover design, with ‘post beetroot – pre beetroot’ compared with ‘post placebo – pre placebo’; a sketch of such an analysis is provided after this list). This latter approach is far more powerful and allows us to understand i) the effect of each treatment per se (within-condition effect, in relation to the SWC), ii) the variability of the response within each treatment (the SD of the change, which has important implications when using supplementation with athletes – some will respond, some will not!! – and how many, and by which magnitude?), iii) the relative efficacy of the treatments (difference in the magnitude of the changes) and, even more importantly, iv) the magnitude of the individual responses under each treatment (i.e., which treatment shows the greater variability in response). Unfortunately, all this information, highly relevant for practitioners, is missing in the manuscript.
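
To illustrate what such a crossover-type analysis could look like, the sketch below runs on simulated 20-m sprint times; all values are assumptions for illustration and are in no way the data of Thompson et al. (1). One would report the within-condition changes, the difference in those changes between conditions, and a (simplified) estimate of the SD of the individual responses.

```python
# Sketch of a crossover-type analysis on simulated 20-m sprint times (s).
# All values are assumptions for illustration; they are not the data of the
# beetroot study discussed above.
import numpy as np

rng = np.random.default_rng(1)
n = 36
pre_br = rng.normal(3.10, 0.12, n)              # pre-beetroot times
post_br = pre_br - rng.normal(0.03, 0.04, n)    # ~1% mean improvement, variable response
pre_pl = rng.normal(3.10, 0.12, n)              # pre-placebo times
post_pl = pre_pl - rng.normal(0.01, 0.025, n)   # small placebo change

delta_br = post_br - pre_br                     # within-condition changes (negative = faster)
delta_pl = post_pl - pre_pl

print("Mean change, beetroot (s):", round(delta_br.mean(), 3))
print("Mean change, placebo (s) :", round(delta_pl.mean(), 3))
print("Difference in changes (s):", round(delta_br.mean() - delta_pl.mean(), 3))

# Simplified estimate of the SD of individual responses: variability of the
# change under beetroot beyond the 'noise' seen under placebo.
sd_ir = np.sqrt(max(delta_br.var(ddof=1) - delta_pl.var(ddof=1), 0.0))
print("SD of individual responses (s):", round(sd_ir, 3))
```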

That being said, I am happy to keep beetroot shots on the supplement table for the moment (for players who can cope with the taste… at least it hasn’t been shown to be detrimental). I may, however, not use the present study to advertise the benefits of beetroot to the players – if we want to keep our legitimacy and maintain the trust that the players put in us, I believe it is important to come to them with the right message – and in that case, applying appropriate stats surely helps!

References

  1. Thompson C, Vanhatalo A, Jell H, Fulford J, Carter C, Nyman L, Bailey SJ and Jones AM. Dietary nitrate supplementation improves sprint and high-intensity intermittent running performance. Nitric Oxide 2016;61:55-61.
  2. Haugen T, Buchheit M. Sprint running performance monitoring: methodological and practical considerations. Sports Med. 2016;46(5):641.
  3. Bangsbo J, Iaia FM, Krustrup P. The Yo-Yo Intermittent Recovery Test: a useful tool for evaluation of physical performance in intermittent sports. Sports Med. 2008;38:37–51.
  4. Buchheit M. The numbers will love you back in return – I promise. Int J Sports Physiol Perform 2016;11:551-554.

The numbers will love you back in return – I promise

Buchheit M. The numbers will love you back in return – I promise. IJSPP 2016, 11, 551 – 554

Full text here

Abstract: The first sport science-oriented and comprehensive paper on magnitude-based inferences (MBI) was published 10 years ago in the first issue of this journal. While debate continues, MBI is today well-established in sports science and in other fields, particularly clinical medicine where practical/clinical significance often takes priority over statistical significance. In this commentary, some reasons why both academics and sport scientists should abandon null hypothesis significance testing (NHST) and embrace MBI are reviewed. Apparent limitations and future areas of research are also discussed. The following arguments are presented: P values and in turn, study conclusions, are sample-size dependent, irrespective of the size of the effect; significance doesn’t inform on magnitude of effects, yet magnitude is what matters the most; MBI allows authors to be honest with their sample size and better acknowledge trivial effects; the examination of magnitudes per se helps provide better research questions; MBI can be applied to assess changes in individuals; MBI improves data visualisation; and lastly, MBI is supported by spreadsheets freely available on the internet. Finally, recommendations to define the smallest important effect and improve the presentation of standardized effects are presented.

Keywords: magnitude-based inferences; null hypothesis significance testing; sample size; trivial effect; smallest important effect.

 


Figure 3. Differences in various anthropometric, physiological and performance measures between two groups of young soccer players differing by their maturity status (0.9 ± 0.3 vs. -0.2 ± 0.4 years from predicted peak height velocity)30 when expressed in percentages (A), using Cohen’s effect size principle (B) and as a factor of variable-specific smallest worthwhile differences (SWD) (C):28 0.2 x between-athletes SD for height, MAS and match tracking data; performance-related changes for HRR and MSS (723 and 222%, respectively). The number of * indicates the likelihood for the between-group differences to be substantial, with 1 symbol referring to a possible difference, 2 to a likely, 3 to a very likely and 4 to an almost certain difference. Note that the magnitude of the between-group differences and their likelihood vary between the panels. My suggestion is to use the method of panel C (with a variable-specific SWD). MSS: maximal sprinting speed, MAS: maximal aerobic speed, HRR: heart rate recovery after submaximal exercise, D>16 km/h: distance ran above 16 km/h during matches, #HIR: number of high intensity runs during matches.
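
For readers who want to reproduce this kind of presentation, here is a minimal sketch with made-up numbers for a single variable (maximal sprinting speed); the 0.2 x between-athletes SD threshold is the one mentioned in the caption, but the values themselves are not those of the figure.

```python
# Three ways of expressing a between-group difference, as in panels A-C of
# Figure 3. Numbers are made up for illustration (not the figure's data).
import numpy as np

group_a = np.array([8.6, 8.9, 9.1, 8.8, 9.0])   # MSS (m/s), more mature players
group_b = np.array([8.2, 8.4, 8.6, 8.3, 8.5])   # MSS (m/s), less mature players

diff = group_a.mean() - group_b.mean()
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
swd = 0.2 * pooled_sd                            # variable-specific smallest worthwhile difference

print("Difference in %         :", round(100 * diff / group_b.mean(), 1))   # panel A
print("Cohen's effect size     :", round(diff / pooled_sd, 2))              # panel B
print("Difference in SWD units :", round(diff / swd, 2))                    # panel C
```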