Abstract
This article addresses 2 questions that arise from the finding that visual scenes are first parsed into visual features: (a) the accumulation of location information about objects during their recognition and (b) the mechanism for the binding of the visual features. The first 2 experiments demonstrated that when 2 colored letters were presented outside the initial focus of attention, illusory conjunctions between the color of one letter and the shape of the other were formed only if the letters were less than 1 degree apart. Separation greater than 2 degrees resulted in fewer conjunction errors than expected by chance. Experiments 3 and 4 showed that inside the spread of attention, illusory conjunctions between the 2 letters can occur regardless of the distance between them. In addition, these experiments demonstrated that the span of attention can expand or shrink like a spotlight. The results suggest that features inside the focus of attention are integrated by an expandable focal attention mechanism that conjoins all features that appear inside its focus. Visual features outside the focus of attention may be registered with coarse location information prior to their integration. Alternatively, a quick and imprecise shift of attention to the periphery may lead to illusory conjunctions among adjacent stimuli.
Publication Info
- Year: 1989
- Type: article
- Volume: 15
- Issue: 4
- Pages: 650-663
- Citations: 197
- Access: Closed
Identifiers
- DOI: 10.1037//0096-1523.15.4.650