Event Based Vision Sensors

Reveal the invisible between the frames

Introduction

Traditional camera systems use a frame-based approach in which most of the captured data is identical from frame to frame, with typically only a few clusters of pixels changing value. The sensor constantly integrates the full field of view and transmits data from every pixel, even when that data matches the previous frame. For a largely stationary scene, this redundant pixel data consumes bandwidth and processing resources needlessly.

Event-based Vision Sensors (EVS) alleviate this issue by only sending data from pixels that have detected a change in intensity. This minimizes the volume of data transmitted over the sensor’s data bus and reduces the processing resources needed to analyze the image. In addition, each pixel responds autonomously to illuminance changes, allowing the sensor to detect small and/or high-frequency changes with lower bandwidth requirements.

Pixel Architecture

A pixel on an EVS sensor senses light much like the human eye does. Receptors in the retina convert light into an electrical signal that is processed in the brain: neuronal cells detect light and shade and send this information to the visual cortex when they detect a change in the scene.

In a similar way, light received by the EVS is captured inside the light-receiving unit (the pixel). This luminance signal passes through an amplification circuit and then through a comparator, where it is compared to a previously stored level. If the signal has increased or decreased beyond a threshold, an event is triggered. The resulting events can then be further processed elsewhere in the camera or vision system.
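To make the comparator behavior concrete, here is a minimal Python sketch of a single pixel's contrast-detection logic. The logarithmic response, the symmetric thresholds, and all names are illustrative assumptions, not the actual circuit parameters of any particular sensor.

import math

# Illustrative contrast thresholds; real sensors expose these as bias settings.
ON_THRESHOLD = 0.2    # log-intensity increase needed to fire an ON event
OFF_THRESHOLD = 0.2   # log-intensity decrease needed to fire an OFF event

class EvsPixel:
    """Simplified model of one EVS pixel: compare the current log-luminance
    against a stored reference and emit an event on a threshold crossing."""

    def __init__(self, initial_luminance):
        self.reference = math.log(initial_luminance)

    def update(self, luminance, timestamp_us):
        """Return (+1, t) for ON, (-1, t) for OFF, or None for no event."""
        level = math.log(luminance)
        delta = level - self.reference
        if delta >= ON_THRESHOLD:
            self.reference = level   # reset using the new illuminance as reference
            return (+1, timestamp_us)
        if delta <= -OFF_THRESHOLD:
            self.reference = level
            return (-1, timestamp_us)
        return None                  # no change detected: nothing is transmitted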

Instead of outputting a matrix of pixel values like a standard image sensor, the EVS sends a stream of events, each carrying the pixel’s coordinates (X, Y), the time of the event, and the light polarity (dark to bright or bright to dark) for every pixel that has detected a change. After a pixel triggers an event, it is reset using the new illuminance value as its reference. The result is an event data stream unlike that of traditional frame-based cameras, allowing equivalent frame rates of over 10,000 fps. (See Sony’s EVS technology explained: https://www.sony-semicon.com/en/technology/industry/evs.html)
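In code, such an event stream is naturally represented as a compact structured array rather than a dense image. The field names below ('x', 'y', 'p', 't') follow common conventions and are an illustrative assumption, not a specific sensor wire format.

import numpy as np

# One record per event: pixel coordinates, polarity, and a microsecond timestamp.
event_dtype = np.dtype([("x", np.uint16),
                        ("y", np.uint16),
                        ("p", np.int8),      # +1: dark-to-bright, -1: bright-to-dark
                        ("t", np.uint64)])   # timestamp in microseconds

events = np.array([(120, 45, +1, 1_000_002),
                   (121, 45, +1, 1_000_005),
                   (300, 210, -1, 1_000_011)], dtype=event_dtype)

# Unlike a frame, the stream is sparse and asynchronous: only changed pixels appear.
on_events = events[events["p"] == 1]
print(f"{len(events)} events total, {len(on_events)} ON events")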

About Prophesee 

Prophesee is the inventor of the world’s most advanced neuromorphic vision systems. Composed of patented Metavision® sensors and algorithms, these systems enable machines to see what was invisible to them until now.

Sony and Prophesee Partnership

Start evaluating the breakthrough stacked Event-based Vision Sensor released by Sony Semiconductor Solutions and realized in collaboration between Sony and Prophesee.

For the latest generation of EVS, Sony Semiconductor Solutions, a world leader in CMOS sensor technology and manufacturing, has collaborated with Prophesee to produce two new sensors. The 1 MP IMX636 and VGA IMX637 offer the industry’s smallest pixel size for an event-based vision sensor at 4.86 µm. Both include a built-in hardware event filter that removes unnecessary event data, such as periodic events caused by light flicker or events unlikely to correspond to moving objects.

These two sensors were made possible by combining the technical features of Sony’s stacked CMOS image sensor, whose Cu-Cu connections enable the small pixel size and excellent low-light performance, with Prophesee’s event-based Metavision® sensing technology, which delivers fast pixel response, high temporal resolution, and high-throughput data readout.

Prophesee Metavision®

With 10 to 1,000x less data generated, >120 dB dynamic range, and microsecond time resolution (equivalent to over 10,000 images per second), Prophesee Metavision® opens vast new potential in areas such as industrial automation, security and surveillance, mobile, IoT, and AR/VR. Its solutions improve safety, reliability, efficiency, and user experience across a broad range of use cases.

Since their inception 150 years ago, all conventional video tools have represented motion by capturing several still images each second. Displayed rapidly, such images create an illusion of continuous movement. From the flip book to the movie camera, the illusion became more convincing, but its basic structure never really changed.

For a computer, this representation of motion is of little use. The camera is blind between each frame, losing information on moving objects. Even when the camera is recording, each of its “snapshot” images contains no information about the motion of elements in the scene. Worse still, within each image, the same irrelevant background objects are repeatedly recorded, generating excessive unhelpful data.

Prophesee Releases Metavision® Intelligence Suite 4.0

The five-time award-winning Metavision® Intelligence suite (event-based vision software) is available for free and includes a commercial license, allowing completely free evaluation, development, and release of products. The suite provides a complete set of machine learning tools, new key open-source modules, ready-to-use applications, and code samples. Its free modules are available through C++ and Python APIs and include a comprehensive machine learning toolkit, with which engineers can easily develop computer vision applications on a PC for a wide range of markets, including industrial automation, IoT, surveillance, mobile, medical, automotive, and more.
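As a minimal sketch of what working with the suite’s Python API can look like, the snippet below iterates over a recorded event stream in fixed time slices. It assumes the metavision_core package and its EventsIterator class as documented in the Metavision SDK; verify module names and signatures against your installed SDK version.

# Minimal sketch using the Metavision SDK's Python API (names as documented
# for metavision_core; check them against your installed SDK version).
from metavision_core.event_io import EventsIterator

# Iterate over a recorded event file in 10 ms slices ("recording.raw" is a
# placeholder path; an empty input_path typically selects a live camera).
mv_iterator = EventsIterator(input_path="recording.raw", delta_t=10_000)

for evs in mv_iterator:
    # Each slice is a structured array with fields 'x', 'y', 'p', 't' (µs).
    if evs.size > 0:
        print(f"{evs.size} events between t={evs['t'][0]} and t={evs['t'][-1]} µs")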

Information, Not Raw Data

Inspired by the human retina, Prophesee’s patented Event-Based Vision sensor features a new class of pixel, each powered by its own embedded intelligent processing, allowing pixels to activate independently and generating 10 to 1,000x less data.

Capturing Information Between Frames

Watch a fast-moving object and you realize that traditional vision technology is a succession of pictures; between those pictures there is a gap, and that gap means blindness for machines.

Event-based sensors are not subject to this limitation, meaning that, for the first time, machines can see between images. This enables them to see much faster, on the millisecond time scale.

Event-Based Optical Flow: Understanding Motion, Pixel by Pixel

Rediscover this fundamental computer vision building block, but with an event twist. Understand motion far more efficiently through continuous pixel-by-pixel tracking rather than sequential frame-by-frame analysis. Get features only on moving objects and use 17x less power compared to traditional image-based approaches.
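One well-known way to recover motion from events (not necessarily Prophesee’s implementation) is to build a “time surface” holding each pixel’s most recent event timestamp and estimate normal flow from its spatial gradient: a moving edge sweeps timestamps across pixels, so velocity is the inverse of the timestamp gradient. A rough NumPy sketch, with all names illustrative:

import numpy as np

def normal_flow_from_time_surface(time_surface, x, y):
    """Estimate per-pixel normal flow (px/s) at (x, y) from a time surface,
    i.e. a 2-D map holding each pixel's latest event timestamp in seconds.
    Didactic sketch only; assumes (x, y) is not on the image border."""
    # Central differences of the timestamp map around (x, y), in s/px.
    dt_dx = (time_surface[y, x + 1] - time_surface[y, x - 1]) / 2.0
    dt_dy = (time_surface[y + 1, x] - time_surface[y - 1, x]) / 2.0
    grad_sq = dt_dx**2 + dt_dy**2
    if grad_sq == 0:
        return (0.0, 0.0)  # no recent motion around this pixel
    # Normal flow points along the gradient with magnitude 1 / |gradient|.
    return (dt_dx / grad_sq, dt_dy / grad_sq)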

Vibration Monitoring

Monitor the vibration frequencies of a target continuously, remotely, and non-invasively, with pixel precision. This frequency detection can be applied to every pixel in a scene: each movement of the target produces a measurable brightness change that creates an event, recording the pixel coordinates, the intensity polarity, and a timestamp. The result is a global, continuous understanding of vibration patterns for oscillations from 1 Hz into the kHz range, with 1-pixel accuracy.
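As a rough illustration of the principle, the sketch below estimates one pixel’s vibration frequency from the intervals between its successive ON events, assuming the idealized case of one ON event per oscillation cycle. This is a didactic simplification, not Prophesee’s frequency-analysis algorithm.

import numpy as np

def pixel_frequency_hz(events, x, y):
    """Estimate the dominant vibration frequency seen at one pixel.
    'events' is a structured array with fields 'x', 'y', 'p', 't' (µs),
    as in the event-stream sketch above. Under the idealized model, a
    vibrating edge produces one ON event per cycle at a given pixel, so
    the mean interval between successive ON timestamps gives the period."""
    mask = (events["x"] == x) & (events["y"] == y) & (events["p"] == 1)
    t_us = np.sort(events["t"][mask].astype(np.float64))
    if t_us.size < 2:
        return None  # not enough events to estimate a period
    period_us = np.mean(np.diff(t_us))
    return 1e6 / period_us  # convert microsecond period to Hz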

FSM-IMX636 Event-Based Vision Sensing (EVS) Development Kit

The FSM-IMX636, built around the innovative Sony and Prophesee IMX636 sensor, enables product developers, engineers, and innovators to easily test the potential of EVS technology in their applications.

Prototype quickly and capture ultra-fast-moving objects or even machine vibration frequencies at a fraction of the data rate, processing effort, and power consumption of conventional frame-based image sensing.

Explore EVS Technology with FSM-IMX636 Development Kit

Example Applications for Event Based Sensors

Industrial Automation

Industrial processes, inspection, monitoring, object identification, detection & tracking, handling, high speed motion control/robotics, AGV

IoT & Surveillance

Motion detection and analysis, intruder detection, traffic data acquisition, crowd management, people counting, always-on visual input, gesture detection, without concerns for privacy

Automotive & Mobility, Drones

Autonomous driving, emergency braking assist, driver assistance, collision avoidance, pedestrian protection, occupant identification and classification, driver monitoring systems, visual SLAM (simultaneous localization and mapping)

Medical

Live sample sterility testing for gene therapy, vision restoration, blood cell tracking

Available products

Metavision Gen4 Evaluation Kit 2 – HD S-Mount

Sensor evaluation kit, Prophesee, optics: S, 1 x 720P ATIS sensor module | 1 x S-mount lens (70° FOV) | 1 x USB-C to A...

Metavision Gen4 Evaluation Kit 2 – HD CS-Mount

Sensor evaluation kit, Prophesee, optics: CS, 1 x 720P ATIS sensor module | 1 x CS-mount lens (70° FOV) | 1 x USB-C to A...

Metavision Evaluation Kit 4 – HD

Sensor evaluation kit, Prophesee, USB3.0, optics: C, With Sony’s IMX636ES stacked Event-based Vision Sensor | Compatible with Prophesee Metavision Intelligence Suite 2.3 onward | Provided...

Metavision Evaluation Kit 3 – Gen4H – HD-CD-CS Mount

Sensor evaluation kit, Prophesee, USB3.0, optics: CS, Contrast Detection (CD) events | Compatible with Prophesee Metavision Intelligence Suite 2.2 onward | C/CS with S-mount adapter,...

Metavision Evaluation Kit 3 – Gen3M – VGA-CD

Sensor evaluation kit, Prophesee, USB3.0, optics: CS, Contrast Detection (CD) events | Compatible with Prophesee Metavision Intelligence Suite 2.2 onward | C/CS with S-mount adapter,...

Prophesee Metavision Packaged Sensor Gen3M VGA CD

Prophesee 0.31 MP, 640 x 480, 3/4, 15 x 15 µm, mini PBGA | Typical latency: 40-200 µs | Typical background activity: <1 mHz | Max. Bandwidth: 66...

LET US ANSWER ANY QUESTION YOU HAVE ABOUT IMAGE SENSOR TECHNOLOGIES OR REQUEST A SAMPLE TODAY.