Action Observation/Execution matching

Visually perceiving an action may activate corresponding motor programs in the observer. Such automatic motor activation can occur both for high-level aspects of an action (i.e., its goal) and for low-level aspects (i.e., the specific effector with which it is performed). From a functional point of view, motor activation during action observation has recently been ascribed to a mental simulation of our conspecifics’ actions. One possible purpose of such a simulation process appears to be the prediction of another person's next action steps (e.g., Graf, Reitzner, Corves, Casile, Giese, & Prinz, 2007), which in turn allows the observer to adapt their own actions to a continuously changing environment.

Our group addresses the functional relationships between action perception and action simulation. For instance, we investigate the time course of action simulation, using a paradigm in which observers watch well-known actions that are transiently occluded. We study the interaction between perceptual mechanisms (which represent the action before and after occlusion) and simulation mechanisms (which represent the action during occlusion). How should we understand the transition between action perception and action simulation (e.g., the prediction of the action after occlusion)? For instance, do predictive simulation processes simply continue earlier processes, or do they initiate new ones? Do they rely on existing action representations or create novel ones? More specifically, our current research strategy involves

  • Dual-task paradigms to investigate whether and to what extent the simulation pattern may be modulated by the observer's own motor activity and by semantic action knowledge (see projects by Springer et al. and Tausche et al.);
  • Functional MRI to disentangle perceptual and motor aspects in action prediction tasks (see project by Stadler et al.);
  • Tool-use action paradigms to dissociate priming effects for observing the target, the movement, or the target-to-movement mapping of a tool-use action (see project by Massen et al.).