
Journal of Sports Sciences, in press

DOI: 10.1080/02640414.2012.679674


The need for ‘representative task design’ in evaluating efficacy of skills tests in sport: A comment on Russell, Benton and Kingsley (2010)


Luís Vilar, Duarte Araújo, Keith Davids & Ian Renshaw

Introduction


An important task in applied sport science is to design tests to assess skill performance and the conditioning requirements of specific sports such as association football (Ali et al., 2007; McGregor, Nicholas, Lakomy, & Williams, 1999; Rampinini et al., 2008; Reilly, Williams, Nevill, & Franks, 2000). Recently, Russell, Benton and Kingsley (2010) proposed a new test comprising three tasks to evaluate players’ soccer skills. The passing and shooting tasks required players to kick a moving ball, delivered at a constant speed, towards one of four randomly determined targets (identified by a custom lighting system). The authors described the passing distances as short (4.2 m) and long (7.9 m), while the dribbling task required players to negotiate seven marker cones placed 3 m apart over a 20 m course (cones 1 and 7 were 1 m from the ends of the course). Results showed that the protocol successfully differentiated between soccer players of different skill levels in their ability to kick a ball at a static target and to dribble around static cones (construct validity), and that performance values were reproducible over repeated trials (reliability). However, we argue that the authors’ intention to relate this evaluation protocol to the competitive performance environment of soccer (validity) lacked scientific support. A critical issue concerns the implicit justification that professional players attained better results on these skills tests than recreational players because of their level of experience. In this paper, we highlight the limitations of this assumption and aim to provide a theoretical rationale for the design and generality of sport performance evaluation protocols by clarifying the authors’ misuse of the terms ‘representative design’ and ‘ecological validity’ in passages such as the following:

 

“Test–retest reliability was examined in the 20 players, which is representative of the number of players in a soccer squad” (p. 1400).
“To enhance ecological validity, no prior touches were allowed to control the ball” (p. 1401).


We argue that the skills tests designed in the study of Russell et al. (2010) may not have been representative of competitive performance in football because they did not include critical perceptual variables that performers typically use to control their actions during performance. While the test data reported by Russell and colleagues (2010) may have differentiated between skilled and less skilled performers on test performance, no evidence was presented to show how the tests may relate to competitive performance in association football, which is the ultimate objective of performance evaluation tests in sport science. To support our arguments, we show how key ideas from ecological dynamics can be implemented in the representative design of performance evaluation tests as a theoretical framework that considers the performer–environment relationship as the relevant scale for understanding sport performance (Araújo, Davids, & Hristovski, 2006; Davids & Araújo, 2010; Pinder, Davids, Renshaw, & Araújo, 2011; Vilar, Araújo, Davids, & Button, in press).



 

Disambiguating the terms ecological validity and representative design


The term ‘ecological validity’ (first proposed by Brunswik, 1943) was used by Russell and colleagues (2010) to refer to the arrangement of conditions in the skills evaluation tests so that they represented the actual football performance environment to which the results were intended to apply. However, in their paper the term ‘ecological validity’ was misused and their proposed definition actually refers to Brunswik’s (1956) concept of ‘representative design’. Ecological validity, as Egon Brunswik (1943) conceived it, refers to the validity of a perceptual variable in predicting a criterion state of the environment. Ecological validity is defined by the statistical correlation between the perceptual variables available to a performer (e.g., the time for a defender to intercept a pass) and the distal criterion variables of interest in a performance environment (a desired state, e.g., knowing whether a defender will intercept the ball) (see Brunswik, 1956; Hammond & Stewart, 2001). On the other hand, in the context of this commentary, the core premise of the concept of representative design is that the informational properties of an evaluation test should represent the properties of the performance environment to which evaluators wish to generalize. Lack of representative design in a test may mean that the behaviours emerging from test performance are highly specific to the test. They may have been altered in such a way that the obtained results are not representative of actual functioning in the competitive performance environment (Araújo, Davids, & Passos, 2007). By using the term ‘ecological validity’, Russell et al. (2010) were actually alluding to aspects of ‘representative design’ and confusing concepts of environmental properties, performance achievement and data generalizability (Araújo et al., 2007).
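In Brunswik’s quantitative sense, then, ecological validity is simply a correlation coefficient between a proximal cue and a distal criterion. The sketch below illustrates the idea with invented data; the cue values, criterion outcomes, and the resulting coefficient are hypothetical and serve only to make the statistical definition concrete.

```python
# Illustrative sketch only: Brunswik's 'ecological validity' of a perceptual
# cue is the correlation between that cue and a distal criterion state of the
# environment. All data below are invented for illustration.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical proximal cue: defender's time to intercept the pass (s), per trial
cue = [0.4, 0.6, 0.9, 1.1, 1.4, 1.8]
# Hypothetical distal criterion: 1 if the pass was intercepted, 0 otherwise
criterion = [1, 1, 1, 0, 0, 0]

validity = pearson(cue, criterion)
print(f"ecological validity of the cue: r = {validity:.2f}")
```

In this toy data set the cue is strongly (negatively) correlated with the criterion, i.e., it has high ecological validity: the longer the defender needs to reach the ball, the less likely the interception.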


What does this theoretical clarification of the misconceptualisation of the test rationale provided by Russell et al. (2010) suggest for the design of skills assessments in sport? In order to generalize performance beyond skills tests to the competitive performance environment, the protocols must ensure that the constraints of the competitive performance environment have been adequately sampled. That is, the protocols should guarantee that the evaluation tests have representative design (Araújo et al., 2007; Pinder et al., 2011). A key issue here is that skill evaluation tests should be specific to an evaluated performance environment (Savelsbergh & Van der Kamp, 2000). That is, they should be predicated on the same perceptual variables that convey the information that players use to control their actions in specific sport performance contexts (Araújo et al., 2007; Dicks, Davids, & Araújo, 2008; Pinder et al., 2011). These ideas are grounded in the assumption that the information present in a skills test is representative of the information that supports performance in a competitive environment. This is important since representative information sources specify the actions that performers need to make in specific performance contexts by affording opportunities to act (Araújo et al., 2006). Consequently, performance in representatively designed evaluation tests should be measured not only by product variables (e.g., time to complete the task, number of points scored, number of trials to achieve criterion), but also by process variables (kinematics of behaviour, stability and variability of behaviours) (Araújo et al., 2007; Pinder et al., 2011). The absence of the relevant perceptual variables to specify actions in the skills test proposed by Russell et al. (2010) may have led the players to use information that did not specify properties of the competitive performance environment, supporting the emergence of different behaviours (Pinder, Renshaw, & Davids, 2009).
This argument is based on compelling evidence showing that, when informational constraints of a task are altered, different patterns of movement coordination may emerge (Dicks, Button, & Davids, 2010; Oudejans, Michaels, & Bakker, 1997; Pinder et al., 2009). Of course, the possibility exists that these patterns may be less functional than those actually required during competitive performance.



Ecological dynamics as a rationale for understanding skill performance


Our arguments here are sustained by concepts of ecological dynamics which advocate that functional performance is grounded in the ability to detect and use specific perceptual variables from the environment (i.e., specifying variables) that afford opportunities to act (Araújo et al., 2006). The functional behaviours of performers are based on the accurate and efficient coupling between perception and action systems during performance (Savelsbergh & Van der Kamp, 2000). For example, previous research in 1vs1 sub-phases of team ball sports has shown that players are highly attuned to information from the actions of an immediate opponent to regulate passing, shooting and dribbling behaviours. This key idea has been verified in studies of successful shooting in basketball (Araújo, Davids, Bennett, Button, & Chapman, 2004), try scoring in rugby union (Passos et al., 2008), and successful dribbling in association football (Duarte et al., 2010). These studies have shown how skill performance in shooting and dribbling is highly constrained by the interpersonal distance and relative velocity of an attacker and a marking defender during their dynamic interactions (Duarte et al., 2010; Passos et al., 2008). In rugby union, the time-to-contact between an attacker and defender and the distances between defenders have also been shown to yield information about future action possibilities (Correia, Araújo, Craig, & Passos, 2011).
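The interpersonal variables named in these studies are directly computable from players’ positions and velocities. The sketch below shows first-order estimates of interpersonal distance, closing speed, and time-to-contact (distance divided by closing speed); the 2-D positions and velocities are invented for illustration, not taken from any of the cited studies.

```python
# Illustrative sketch only: first-order estimates of interpersonal distance,
# closing speed and time-to-contact between an attacker and a defender.
# All positions and velocities below are invented 2-D data.
import math

def interpersonal_distance(p_attacker, p_defender):
    """Euclidean distance (m) between the two players."""
    return math.dist(p_attacker, p_defender)

def closing_speed(p_a, v_a, p_d, v_d):
    """Rate at which the gap shrinks (m/s): minus d(distance)/dt."""
    rel_p = (p_d[0] - p_a[0], p_d[1] - p_a[1])
    rel_v = (v_d[0] - v_a[0], v_d[1] - v_a[1])
    dist = math.hypot(*rel_p)
    # Projection of relative velocity onto the line joining the players
    return -(rel_p[0] * rel_v[0] + rel_p[1] * rel_v[1]) / dist

def time_to_contact(p_a, v_a, p_d, v_d):
    """First-order time-to-contact (s); infinite if the gap is not closing."""
    speed = closing_speed(p_a, v_a, p_d, v_d)
    return interpersonal_distance(p_a, p_d) / speed if speed > 0 else math.inf

# Hypothetical instant: attacker dribbles towards a defender closing him down
attacker_pos, attacker_vel = (0.0, 0.0), (3.0, 0.0)   # m and m/s
defender_pos, defender_vel = (6.0, 0.0), (-1.0, 0.0)

tau = time_to_contact(attacker_pos, attacker_vel, defender_pos, defender_vel)
print(f"time-to-contact: {tau:.2f} s")  # 6 m gap closing at 4 m/s -> 1.50 s
```

A static cone fixes the second half of each of these quantities at zero velocity, which is precisely why, as argued below, it leaves the performer in complete control of the variable.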


By neglecting the active role of opponents in the design of skill evaluation tests, Russell et al. (2010) failed to reproduce the dynamic nature of the football performance environment, which would impact on the functionality of the skills evaluation test. In contrast, the proposed skill tests contained too many static information sources which are not present in the competitive performance environment of association football. The use of cones and lighting schemes as information to guide actions was not representative of the information available in competitive football and may not have been specific to the skills evaluation environment. As long as those data are used solely to evaluate performance on the specific tests of Russell et al. (2010), the concerns are negligible. However, if one seeks to generalise results from the evaluation tests to the competitive performance environment of association football, then a raft of issues emerges over the information available for performers to regulate their actions. To exemplify, the static nature of cones might allow a performer to be completely in control of a specifying perceptual variable, such as the time-to-contact between the ball carrier and the obstacle to be avoided (e.g., a cone simulating a defender). This perceptual variable has been shown to be responsible for transitions in a player’s decision-making when running through gaps between defenders in team sports (Correia et al., 2011). Instead, we suggest that this specifying variable should be represented in skill evaluation tests in a dynamic manner between the ball carrier and an active (not static) ‘obstacle to avoid’ (such as a moving coach or defender).


The rather static test design proposed by Russell et al. (2010) in their evaluation protocol functionally decoupled processes of perception and action in their participants, which is why the data have limited generalizability to the competitive performance environment in football (Araújo et al., 2007). To design more functional and valid evaluation tests, sport scientists need to implement a detailed performance analysis of behaviours in team sports like football to identify the key perceptual variables that support skill performance. After the important task of verifying the specific information sources used to regulate actions in passing, shooting and dribbling during competitive football performance, functional and representative evaluation tests can be designed (for a similar argument on representative learning design see Pinder et al., 2011). Non-representative performance evaluation designs, such as the one proposed by Russell et al. (2010), might result in a performer converging on non-specifying variables to support different patterns of movement coordination in evaluation tasks from those that would emerge in competitive performance environments.


The important research task of empirically confirming the specifying information sources used by performers to regulate their actions has already begun in the sports and movement sciences. For example, in studies of team sports, it has been found that the control of action is distributed between an attacker and an immediate defender interacting as a dynamical system (see Correia et al., 2011; Passos et al., 2008; Passos et al., 2009; Pinder, Renshaw, Davids, & Kerhervé, in press; Vilar et al., in press). In order to design tasks to evaluate skills in sport, performance analysts and practitioners should integrate their knowledge (experiential and empirical, see Greenwood, Davids, & Renshaw, in press) to sample the key perceptual variables that players use to guide successful skill performance in the competitive environment.



 

Acknowledgments


The first author was supported by a financial grant from the “Portuguese Foundation for Science and Technology” (SFRH/BD/43251/2008). The authors wish to thank Ross Pinder for his suggestions on the current work.


References


Ali, A., Williams, C., Hulse, M., Strudwick, A., Reddin, J., Howarth, L., . . . Mcgregor, S. (2007). Reliability and validity of two tests of soccer skill. Journal of Sports Sciences, 25(13), 1461-1470.
Araújo, D., Davids, K., Bennett, S., Button, C., & Chapman, G. (2004). Emergence of Sport Skills under Constraints. In A. M. Williams & N. J. Hodges (Eds.), Skill Acquisition in Sport: Research, Theory and Practice (pp. 409-433). London: Routledge, Taylor & Francis.
Araújo, D., Davids, K., & Hristovski, R. (2006). The ecological dynamics of decision making in sport. Psychology of Sport and Exercise, 7, 653-676.
Araújo, D., Davids, K., & Passos, P. (2007). Ecological Validity, Representative Design, and Correspondence Between Experimental Task Constraints and Behavioral Setting: Comment on Rogers, Kadar, and Costall (2005). Ecological Psychology, 19(1), 69-78.
Brunswik, E. (1943). Organismic achievement and environmental probability. Psychological Review, 50, 255-272.
Brunswik, E. (1956). Perception and the representative design of psychological experiments. Berkeley and Los Angeles: The University of California Press.
Correia, V., Araújo, D., Craig, C., & Passos, P. (2011). Prospective information for pass decisional behavior in rugby union. Human Movement Science. doi: 10.1016/j.humov.2010.07.008
Davids, K., & Araújo, D. (2010). The concept of ‘Organismic Asymmetry’ in sport science. Journal of Science and Medicine in Sport, 13(6), 633-640.
Dicks, M., Button, C., & Davids, K. (2010). Examination of gaze behaviors under in situ and video simulation task constraints reveals differences in information pickup for perception and action. Attention Perception & Psychophysics, 72(3), 706-720.
Dicks, M., Davids, K., & Araújo, D. (2008). Ecological psychology and task representativeness: Implications for the design of perceptual-motor training programmes in sport. In Y. Hong & R. Bartlett (Eds.), Handbook of biomechanics and human movement science (pp. 129-139). New York: Routledge.
Duarte, R., Araújo, D., Gazimba, V., Fernandes, O., Folgado, H., Marmeleira, J., & Davids, K. (2010). The Ecological Dynamics of 1v1 Sub-Phases in Association Football. The Open Sports Sciences Journal, 3, 16-18.
Greenwood, D., Davids, K., & Renshaw, I. (in press). How elite coaches’ experiential knowledge might enhance empirical understanding of sport performance. International Journal of Sports Science and Coaching.
Hammond, K., & Stewart, T. (2001). The essential Brunswik: Beginnings, explications, applications. Oxford: Oxford University Press.
McGregor, S. J., Nicholas, C. W., Lakomy, H. K. A., & Williams, C. (1999). The influence of intermittent high-intensity shuttle running and fluid ingestion on the performance of a soccer skill. Journal of Sports Sciences, 17(11), 895-903.
Oudejans, R. R., Michaels, C. F., & Bakker, F. C. (1997). The effects of baseball experience on movement initiation in catching fly balls. J Sports Sci, 15(6), 587-595.
Passos, P., Araujo, D., Davids, K., Gouveia, L., Milho, J., & Serpa, S. (2008). Information-governing dynamics of attacker-defender interactions in youth rugby union. Journal of Sports Sciences, 26(13), 1421-1429.
Passos, P., Araújo, D., Davids, K., Gouveia, L., Serpa, S., Milho, J., & Fonseca, S. (2009). Interpersonal Pattern Dynamics and Adaptive Behavior in Multiagent Neurobiological Systems: Conceptual Model and Data. Journal of Motor Behavior, 41(5), 445-459.
Pinder, R., Davids, K., Renshaw, I., & Araújo, D. (2011). Representative learning design and functionality of research and practice in sport. Journal of Sport & Exercise Psychology, 33(1), 146-155.
Pinder, R., Renshaw, I., & Davids, K. (2009). Information-movement coupling in developing cricketers under changing ecological practice constraints. Human Movement Science, 28(4), 468-479. doi: 10.1016/j.humov.2009.02.003
Pinder, R., Renshaw, I., Davids, K., & Kerhervé, H. (in press). Principles for use of ball projection machines in elite and developmental sport programmes. Sports Medicine.
Rampinini, E., Impellizzeri, F. M., Castagna, C., Azzallin, A., Bravo, D. F., & Wisloff, U. (2008). Effect of match-related fatigue on short-passing ability in young soccer players. Medicine and Science in Sports and Exercise, 40(5), 934-942. doi: 10.1249/MSS.0b013e3181666eb8
Reilly, T., Williams, A. M., Nevill, A., & Franks, A. (2000). A multidisciplinary approach to talent identification in soccer. Journal of Sports Sciences, 18(9), 695-702.
Russell, M., Benton, D., & Kingsley, M. (2010). Reliability and construct validity of soccer skills tests that measure passing, shooting, and dribbling. Journal of Sports Sciences, 28(13), 1399-1408.
Savelsbergh, G., & Van der Kamp, J. (2000). Information in learning to coordinate and control movements: is there a need for specificity of practice? International Journal of Sport Psychology, 31, 476-484.
Vilar, L., Araújo, D., Davids, K., & Button, C. (in press). The role of ecological dynamics in analysing performance in team sports. Sports Medicine.
