Impact of Visual Self-Motion Cues on Spatial and Temporal Visual Integration, 2023

We present two psychophysics experiments investigating the impact of visual self-motion cues on visual spatiotemporal summation. The first experiment is a 4AFC target-detection task in which participants report the location of a target that appeared in one of four known locations, against a moving background. Each target is composed of two small probes separated diagonally by four pixels. Each probe appears for one frame (8.33 ms). The probes are either presented concurrently (at the same time) or separated by one of six interstimulus intervals (ISIs) ranging from 0 to 100 ms. The two probes are aligned radially, such that the second probe is either inward of (closer to the centre of the screen) or outward of the first probe. The background is an array of ~1000 dots that move in an expanding or contracting pattern to simulate forwards or backwards self-motion. The main manipulation is the relationship between the alignment of the probes (inward or outward) and the background motion (expanding or contracting): the relationship can be either congruent (both in the same direction) or incongruent. Staircases adjust the target luminance for each condition to identify the threshold at which participants reach 75% accuracy (see the staircase sketch below). A second study uses the same design, but with targets appearing in three of the four known locations and participants identifying the location that did NOT contain a target. Finally, a flow-parsing study was conducted to validate that our background stimulus was eliciting self-motion cues. In this study participants saw a single moving target in one of four locations and indicated whether it was moving inwards or outwards. A staircase procedure adjusted the target speed to identify the speed at which participants gave an 'inward' response 50% of the time. This version contains the full 12 iterations of the three main tasks; the figures and aggregate data files have been updated.

Every time we move, the image of the world at the back of the eye changes. Despite this, our perception is of an unchanging world. How does the brain translate a continually changing image into a percept of a stable, stationary, rigid world? Does the brain use a map of the external environment (an "allocentric map") and the position of the observer within it, built up over time, to underpin the perception of stability? Does the brain continually update a map of where scene objects are relative to the observer (an "egocentric map"; e.g. there is an object straight ahead of me; if I walk forward I should expect it to get closer to me)? Does the brain not create a map at all, but simply divide the image motion into that which is likely due to movement of the observer (and which can consequently be ignored) and that which is likely due to objects moving within the scene (which become a focus of attention)? The hypothesis that underpins this research project is that no single one of these mechanisms underpins perceptual stability; rather, all of them contribute, their contribution dependent on the task being performed by the observer. In some cases the task will require a fast estimate to support an ongoing action, which might favour one mechanism; in another task, where timing is not so critical, a slower but more accurate mechanism might be more appropriate.
This collaborative project, which combines complementary expertise in Psychology, Movement Sciences, and Computing from Germany, The Netherlands and the United Kingdom, and, importantly, researchers who start from different theoretical perspectives, will test this hypothesis. We will study a diverse series of tasks that present a range of challenges to the moving observer. We will make use of various innovative experimental paradigms that exploit recent technological advances such as virtual reality combined with simultaneous motion tracking. Understanding where and how different mechanisms of perceptual stability play a role advances not only our scientific understanding, but also has the potential to inform industry as well as medicine about the circumstances in which disorientation or nausea in real or virtual environments can be minimised.
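The abstract above describes staircases that adapt target luminance toward a 75%-correct threshold. The sketch below shows one way such a staircase could be run in PsychoPy (the software named in the methods); the starting value, step sizes, 3-down/1-up rule, trial count, and simulated run_trial observer are all illustrative assumptions rather than the authors' actual procedure. Note that a plain 3-down/1-up rule converges near 79% correct, so the exact rule used to target 75% may differ from this sketch.

    import numpy as np
    from psychopy import data

    def run_trial(luminance):
        # Hypothetical stand-in for a single 4AFC trial: instead of drawing
        # the two probes and collecting a keypad response, it simulates an
        # observer whose accuracy grows with luminance (chance level 0.25).
        p_correct = 0.25 + 0.75 * (1.0 - np.exp(-luminance / 0.3))
        return np.random.rand() < p_correct

    stair = data.StairHandler(
        startVal=0.5,                 # starting luminance (assumed, normalised units)
        stepType='log',               # adjust luminance in log steps
        stepSizes=[0.2, 0.1, 0.05],   # steps shrink after reversals (assumed values)
        nUp=1, nDown=3,               # 3-down/1-up rule (assumption; see note above)
        nTrials=50,                   # minimum number of trials (assumed)
        minVal=0.0, maxVal=1.0)

    for luminance in stair:
        stair.addResponse(int(run_trial(luminance)))  # 1 = correct, 0 = incorrect

    # A common threshold estimate: mean intensity over the last six reversals.
    print(np.mean(stair.reversalIntensities[-6:]))

One such staircase would be run per condition (probe alignment x background motion x ISI), yielding a per-condition luminance threshold.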

Computer-based task run with PsychoPy version 3. Participants sat in an unlit room, 43.7 cm from a computer monitor, with their head in a chin rest. All responses were collected on a keypad. Participants took breaks, with the lights turned on, every ~100 trials. Participants completed all conditions 12 times across several testing sessions on different days.
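Given the 43.7 cm viewing distance above, stimulus extents quoted in pixels (such as the four-pixel probe separation in the abstract) convert to degrees of visual angle by simple trigonometry. A minimal sketch, using a placeholder pixel pitch since the monitor's physical dimensions are not recorded in this entry:

    import math

    VIEW_DIST_CM = 43.7        # chin-rest viewing distance given in the methods
    PIXEL_PITCH_CM = 0.027     # cm per pixel -- placeholder value; the actual
                               # monitor's pixel pitch is not recorded here

    def pixels_to_degrees(n_pixels):
        # Standard conversion from on-screen extent to degrees of visual angle.
        size_cm = n_pixels * PIXEL_PITCH_CM
        return math.degrees(2.0 * math.atan(size_cm / (2.0 * VIEW_DIST_CM)))

    print(pixels_to_degrees(4))  # the four-pixel probe separation, in degrees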

Identifier
DOI https://doi.org/10.5255/UKDA-SN-856941
Metadata Access https://datacatalogue.cessda.eu/oai-pmh/v0/oai?verb=GetRecord&metadataPrefix=oai_ddi25&identifier=af9bf1b52bc78424085d381fc7fb13b99ed0f87e668b24838d17386ec7ca539f
Provenance
Creator Rushton, S, Cardiff University; Martin, N, University of Bristol
Publisher UK Data Service
Publication Year 2024
Funding Reference Economic and Social Research Council
Rights Simon Rushton; Nick Martin, University of Bristol. The Data Collection is available to any user without the requirement for registration for download/access.
OpenAccess true
Representation
Resource Type Numeric
Discipline Psychology; Social and Behavioural Sciences
Spatial Coverage Cardiff; United Kingdom