Mike Lewicki receives $1.8M 4-year NSF grant for studying motion in natural scenes

Mike Lewicki (Associate Professor of EECS) has received a $1.8M ($815K for CWRU) 4-year NSF grant (together with colleagues at UC Berkeley and UT Austin) for studying motion in natural scenes.

This project seeks to advance our understanding of a fundamental problem in both biological and machine vision: how does a visual system recover 3D scene structure -- such as the layout of the environment, surface shape, or object motion -- from dynamic, 2D images? Computer vision has approached this problem by developing algorithms for recovering specific aspects of a scene, but obtaining general solutions that perform robustly for complex natural scenes and viewing conditions remains a challenge. Biological vision systems have evolved impressive information-processing strategies for extracting 3D structure from natural scenes, but the neural representations for doing this are poorly understood and provide little insight into the computational process. This project will pursue an interdisciplinary approach by attempting to understand the universal principles that lie at the heart of this problem in both biological and machine vision systems. Specifically, the project will 1) develop a novel class of computational models that recover and represent 3D scene information by factoring apart the underlying causal structure of images, 2) collect high-quality video and range data of dynamic natural scenes under a variety of controlled motion conditions, and 3) test the perceptual implications of these models in psychophysical experiments.

The project is a collaboration between three laboratories that have played a leading role in developing theoretical models of natural image statistics, visual neural representations, and perceptual processes. The PIs will combine their efforts to develop new models, data sets, and characterizations of 3D natural scene structure that go beyond previous studies of natural image statistics and can be tested in neurophysiological and psychophysical experiments. This project has the potential to bring about fundamental advances in neuroscience, visual perception, and computer vision by developing new classes of models that robustly infer representations of the 3D natural environment. It will create a set of high-quality databases that will be made available to help other investigators study these issues. It will also open up new possibilities for generating realistic stimuli that can guide novel investigations of neural representation and processing.