Athlete Performance Testing

RELIABILITY

Importance of Reliability

Test reliability is generally a more familiar concept to coaches than test validity, and it is more relevant when judging the significance of a change in a test score. A highly reliable test is one that gives similar results when repeated by the same athlete (in the absence of any training or fatigue effects between the two tests). For example, the 20m sprint test is generally considered highly reliable, with athletes typically achieving very similar results when the tests are performed in quick succession (allowing for rest). However, this only holds when timing gate systems are used; with a handheld stopwatch the same test has very poor reliability. This is an example of the test equipment/methodology influencing the reliability of the test. Other common sources of methodology-derived error include varying the time or environment of testing, using unreliable equipment, and using a different assessor (for tests requiring subjective ratings or assessor input). As a general rule of thumb, to optimise reliability we need our test methodologies to be as repeatable as possible to minimise test error.

The other source of reliability issues relates to the "reliability of the athlete", that is, their ability to repeat a performance on successive occasions. As discussed with test validity, if a test is complicated then the athlete's performance may depend on several unique variables, and they may struggle to replicate the specific combination/optimisation of all of those variables on successive occasions. Other athlete-dependent sources of error include arousal levels (refer to the arousal-performance curve), fatigue, and test motivation. Whilst these factors are less controllable than the methodological factors, using consistent motivational cuing for all athletes can help to overcome them.

The reliability of a test is very important when trying to infer meaning from a change in test result. For example, let's assume that the highly reliable 20m test (using timing gates) has a test-retest reliability of 5%, meaning that if you repeat the test the result will very likely fall within +/-5% of the original. Test-retest reliability figures are often easy to find in research papers; just make sure to consider how the test was completed and the population used. Suppose two athletes (A and B) each ran a 3.00 second 20m sprint at the start of the year, and at our mid-year testing athlete A runs 2.80 seconds (~7% improvement) whereas athlete B runs 2.90 seconds (~3.3% improvement). Given the test-retest reliability of 5%, it's possible that athlete B is no faster than before and that this result simply reflects the natural variation of the test; had we repeated the test enough times at the start of the year, athlete B may well have produced a 2.90 second result in one of them. Conversely, athlete A's improvement is greater than the test-retest reliability, hence it's very likely that this athlete is genuinely faster now than at the start of the year.
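If it helps to see the arithmetic laid out, here is a minimal sketch of the comparison above. The helper names (percent_change, is_real_change) and the 5% threshold are illustrative assumptions, not part of any standard tool:

```python
def percent_change(baseline: float, retest: float) -> float:
    """Percentage improvement from baseline to retest (positive = faster sprint)."""
    return (baseline - retest) / baseline * 100

def is_real_change(baseline: float, retest: float, reliability_pct: float = 5.0) -> bool:
    """True only if the change exceeds the assumed test-retest reliability of the test."""
    return abs(percent_change(baseline, retest)) > reliability_pct

# Athlete A: 3.00 s -> 2.80 s, roughly 6.7% faster, which exceeds the 5% threshold
print(round(percent_change(3.00, 2.80), 1), is_real_change(3.00, 2.80))  # 6.7 True

# Athlete B: 3.00 s -> 2.90 s, roughly 3.3% faster, within natural test variation
print(round(percent_change(3.00, 2.90), 1), is_real_change(3.00, 2.90))  # 3.3 False
```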

Understanding test-retest reliability matters because our goal should always be to maximise it, so that we can accurately and confidently tell when our athletes are improving. This is why we must have consistent protocols that are strictly adhered to; otherwise test-retest reliability suffers and errors creep into our testing. For example, say the timing gates are haphazardly thrown down around a tape measure so that the 20m sprint now measures 19.9m (very easy to do!). The 10cm less an athlete must run corresponds to 0.5% of the distance, so we would expect times to be roughly 0.5% better. This may push some times over the 5% improvement threshold and lead us to treat them as meaningful, yet when we retest these athletes with the sprint set up correctly we might find that some are apparently slower than before, which will affect how we approach our programming and athlete management! Hence it's essential we do everything possible to maximise the reliability of our tests so that we get useful information from them. For an example of how rigorous we are with test standardisation, refer to the video below demonstrating how a 20m sprint test is set up to maximise reliability (taken from a uni assignment done many years ago, don't plagiarise this!).
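To make that set-up error concrete, here is a quick sketch of the arithmetic. It uses the hypothetical numbers above and assumes the athlete's pace over the missing 10cm is roughly the same as their average pace over the sprint:

```python
true_distance = 20.0    # metres the test should cover
actual_distance = 19.9  # metres actually set up (gates placed 10 cm short)
true_time = 3.00        # seconds the athlete would take over a full 20 m

# A shorter course scales the recorded time by roughly the same proportion
biased_time = true_time * (actual_distance / true_distance)
bias_pct = (true_time - biased_time) / true_time * 100

print(f"Recorded time: {biased_time:.3f} s ({bias_pct:.1f}% flattering)")
# Recorded time: 2.985 s (0.5% flattering)
```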
