Space-time Modulated Active 3D Imager-on-Chip


Applications

  • Autonomous unmanned air vehicles and driverless/driver-assisted cars
  • 3D gaming, simulation, training in high-fidelity virtual replicas of real-world environments, augmented reality visualization, consumer mobile 3D photography
  • GPS-denied navigation and human navigation in obscured or unknown environments
  • Home marketing, appraisal, safety inspection, renovation modeling, or energy surveys
  • Movie production (capturing virtual representations of real-world sets)

Problem Addressed

Three-dimensional (3D) sensing has become important in the defense, commercial, and industrial sectors. However, existing 3D imaging technologies cannot meet the stringent size, weight, and power requirements of many day/night, medium-range applications. Many existing image detectors suffer from readout-circuitry bottlenecks, including limited data rates and high read noise. Furthermore, applications such as advanced unmanned systems demand performance beyond current imaging capabilities.


Technology Description

The Space-Time Modulated Active 3D Imager-on-Chip is a tightly integrated system comprising a laser light source, optical elements, a photosensitive focal plane array, and integrated circuit electronics. Functionally, the main system components are the source, the detector, and the processing system. Light produced by the source reflects off the scene and returns to the detector; the processing algorithm then infers the 3D structure of the scene by comparing the detected image to a known, pre-calibrated reference image. The sensor provides a dense 2D map across its field of view together with a depth map that preserves the 3D structure of the scene. The device operates at a high frame rate, so the sequence of output images can be viewed as full-motion video. This on-chip imager enables high-fidelity capture of the scene with low size, weight, and power requirements.
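The brief does not specify the reconstruction algorithm, but a common approach in active imagers of this kind is to locate, for each pixel, the shift (disparity) that best matches the detected image against the pre-calibrated reference image, then triangulate depth from that shift. A minimal NumPy sketch under that assumption; the function name and all parameters (`baseline`, `focal_px`, `block`, `max_disp`) are hypothetical, not from the source:

```python
import numpy as np

def depth_from_reference(detected, reference, baseline, focal_px,
                         block=9, max_disp=32):
    """Illustrative depth recovery by reference comparison:
    for each pixel, find the horizontal disparity d that minimizes the
    sum-of-squared-differences between a block of the detected image and
    the reference image, then triangulate z = focal_px * baseline / d."""
    h, w = detected.shape
    half = block // 2
    depth = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = detected[y - half:y + half + 1, x - half:x + half + 1]
            best_d, best_cost = 0, np.inf
            for d in range(max_disp):
                ref = reference[y - half:y + half + 1,
                                x - d - half:x - d + half + 1]
                cost = np.sum((patch - ref) ** 2)  # block-matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            if best_d > 0:  # zero disparity would put depth at infinity
                depth[y, x] = focal_px * baseline / best_d
    return depth
```

A real imager would implement this comparison in the on-chip electronics rather than in software; the sketch only illustrates the geometry relating the detected/reference shift to scene depth.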


Benefits

  • Indoor/outdoor, day/night operation with small size, weight, and power requirements
  • Extended maximum range and wide field of view at high resolution
  • Robust to featureless or feature-aliased environments and to sensor-platform motion
  • Entirely on-chip computation: no external processing required, and easy integration with other imaging devices