Neuromorphic Sensors: Architectures, IC Design Techniques, and Emerging Applications

90 minutes
2 Speakers
Presentation

Domain

VLSI Circuit and SoC Design - Neuromorphic

Prerequisites

Basic familiarity with CMOS sensor design and mixed-signal circuits, along with a keen interest in cutting-edge sensor technologies, is recommended but not mandatory.

Key Words

Neuromorphic Image Sensors, Metavision, CMOS Image Sensor, Event-Based Sensors, Dynamic Vision Sensors

Abstract

This tutorial on neuromorphic event-based sensors serves as a prelude to the exploration of neuromorphic engineering and brain-inspired systems. It introduces the concepts and implementation strategies behind neuromorphic sensors, focusing on their IC design, pixel architecture, event-driven processing pipeline, Metavision, and integration challenges. It also explores how neuromorphic sensors are enabling breakthroughs in cutting-edge fields such as robotics, augmented reality (AR), virtual reality (VR), automotive vision, and surveillance. The tutorial equips participants with the knowledge and tools to engage with the bio-inspired systems that are shaping the future of AI hardware.

The increasing demand for intelligent, low-power, real-time sensing systems imposes critical design challenges that conventional sensors struggle to meet. Neuromorphic vision technology marks a transformative leap in sensor design: it attempts to mimic the energy-efficient operation of the human brain, combining principles from neuroscience, mathematics, computer science, and electrical engineering to replicate the human visual system in silicon (an artificial retina). Traditional sensors operate on a continuous sampling basis, capturing data at fixed intervals known as frames. In contrast, event-based cameras enable fast imaging by responding only to changes in their field of view: each pixel acts like a neuron, activating independently and asynchronously when it detects a change in incident light and generating an 'event.' These events are then used to reconstruct the scene.
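The event-generation principle described above can be sketched in a few lines of code. The following is a minimal, frame-based simulation (not an actual sensor implementation, whose pixels operate asynchronously in continuous time): each pixel remembers a log-intensity reference level and emits a signed event whenever the change since its last event exceeds a contrast threshold. The function name `generate_events` and the threshold value are illustrative assumptions, not part of any sensor API.

```python
import numpy as np

def generate_events(frames, threshold=0.2, eps=1e-6):
    """Emit (t, x, y, polarity) events whenever a pixel's log-intensity
    changes by at least `threshold` since its last event (DVS-style)."""
    ref = np.log(frames[0] + eps)           # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log(frame + eps)
        diff = log_i - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1   # brighter (+1) or darker (-1)
            events.append((t, int(x), int(y), polarity))
            ref[y, x] = log_i[y, x]          # reset this pixel's reference
    return events

# Two 2x2 frames: only the top-left pixel brightens.
frames = [np.array([[10.0, 10.0], [10.0, 10.0]]),
          np.array([[20.0, 10.0], [10.0, 10.0]])]
print(generate_events(frames))   # -> [(1, 0, 0, 1)]: one ON event at the changed pixel
```

Note that unchanged pixels produce no output at all, which is the source of the sparsity and low data rate that event-based sensors exploit.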

Tutorial Preview

Get a sneak peek of what's in store