The importance of relational spatial information for scene classification

E. David¹ (Presenter), M. Võ¹

¹ Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt

When identifying scenes and grasping their content, we rely not only on global scene properties or the visual features of individual objects, but also on the objects' placement relative to each other. Indeed, scene grammar informs us about the probability of finding certain objects in scenes, their co-occurrence, their distance (close or far apart), and their relational placement (e.g., "X on top of Y", "X inside Y"). What information is sufficient for scene categorization? In a series of online experiments, we asked participants to classify 3D-modelled scenes (8 scene categories with 10 exemplars each). To obtain a performance baseline, Experiment 1 measured classification performance for the fully rendered scenes (original meshes, textures, and lighting). In Experiment 2, we calculated the minimum oriented bounding box (OBB) for every object in a scene and generated new 3D scenes in which the OBBs replaced the original objects: their textures and shapes were removed, but each object's general extent in three dimensions, its relative size, and its location in the scene were preserved. Finally, in Experiment 3, we replaced the OBBs with spheres of uniform size, thus eliminating all object-specific visual information and leaving only the objects' relative placements within the scenes. In both Experiments 2 and 3, participants viewed the reduced 3D scenes from a bird's-eye view for two full rotations over seven seconds. As expected, accuracy dropped as visual information was removed from the scenes. However, classification performance remained above chance in all three experiments, even when all object information was replaced by uniform spheres. Our results demonstrate that observers can deduce scene categories from very sparse visuo-spatial information.
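
As a minimal sketch of how the stimulus reductions in Experiments 2 and 3 could be implemented (this is not the authors' code), the open-source trimesh library can compute a minimum-volume OBB per object and substitute uniform spheres at object centroids. The scene file name and the sphere radius below are hypothetical placeholders.

```python
import trimesh

SPHERE_RADIUS = 0.1  # assumed uniform sphere size (Experiment 3); not given in the abstract


def reduce_scene(scene: trimesh.Scene, mode: str) -> trimesh.Scene:
    """Replace every object mesh with its minimum OBB ('obb')
    or with a uniform-size sphere at its centroid ('sphere')."""
    reduced = trimesh.Scene()
    # scene.dump() yields each mesh with its scene transform applied,
    # so the proxies keep the objects' locations in the scene.
    for geometry in scene.dump():
        if mode == "obb":
            # Minimum-volume oriented bounding box: preserves each
            # object's 3D extent and relative size, drops shape/texture.
            proxy = geometry.bounding_box_oriented
        elif mode == "sphere":
            # Uniform sphere at the object's centroid: preserves only
            # relative placement, drops size and extent as well.
            proxy = trimesh.primitives.Sphere(
                radius=SPHERE_RADIUS, center=geometry.centroid)
        else:
            raise ValueError(f"unknown mode: {mode}")
        reduced.add_geometry(proxy)
    return reduced


# Hypothetical usage: load one 3D scene and produce both reductions.
scene = trimesh.load("kitchen_01.glb", force="scene")
obb_scene = reduce_scene(scene, "obb")        # Experiment 2 stimuli
sphere_scene = reduce_scene(scene, "sphere")  # Experiment 3 stimuli
```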