Prophesee event-based vision – a new paradigm

We don’t use terms like paradigm shift lightly. Event-based vision (EBV) is shaking things up just as much as the arrival of frame-based and line-scan imaging once did. It’s that different, and it’s why we are excited to offer Prophesee sensors and cameras.

Event-based vision applications areas – Courtesy Prophesee

Application examples… and just enough about concepts

This informational blog skews towards application examples, with just enough about concepts and EBV technology to lend balance. Event-based vision is so new, and so different from previous vision technologies, that we believe our readers may appreciate an examples-driven approach to understanding this radically new branch of machine vision.

Unless you’re an old hand at event-based vision (EBV) …

…and as of this writing in Summer 2025, few could be – so let’s show a couple of short teaser videos before we explain the concepts or go deeper into applications.

Example 1: High-speed multiple-object counting and sizing

The following video shows event-based imaging used both to count particles or objects and to estimate their size, all in a high-speed environment with many concurrent targets.

Courtesy Prophesee

Example 2: Eye-tracking – only need to track pupil changes

Now consider eye-tracking: in the video below, we see synchronized side-by-side views of the same scene. The left side was obtained with a frame-based sensor. The right side used a Prophesee event-based sensor. Why “waste” bandwidth and processing resources separating a pupil from an iris, eyelids, and eyebrows, when the goal is eye-tracking?

Courtesy Prophesee

Radically increased speed; massively less data

The Prophesee Metavision sensor tracks “just” what changes in the field of view, instead of the frame-based approach of reading out whole sensor arrays, transmitting voluminous data from the camera to the host, and algorithmically searching for edges, blobs, or features.
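The principle can be sketched in a few lines of code. The toy model below is purely illustrative – it is not Prophesee’s sensor pipeline or Metavision SDK code – but it captures the core idea: a pixel emits an (x, y, timestamp, polarity) event only when its log intensity changes beyond a threshold, and static scenery generates no data at all.

```python
import math

def events_from_frames(prev, curr, t, threshold=0.2):
    """Conceptual model of an event pixel array: emit (x, y, t, polarity)
    only where the log-intensity change exceeds a threshold. Static
    portions of the scene produce no data at all."""
    events = []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            diff = math.log1p(c) - math.log1p(p)
            if abs(diff) > threshold:
                events.append((x, y, t, 1 if diff > 0 else -1))
    return events

# A static 4x4 scene in which a single pixel brightens:
prev = [[0] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
curr[1][2] = 255                              # the only change in the scene
print(events_from_frames(prev, curr, t=0))    # [(2, 1, 0, 1)]
```

In a real event sensor this comparison happens independently and asynchronously at every pixel, which is what yields microsecond-scale latency and the dramatic data reduction.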


Example 3: Object tracking

Not unlike the eye-tracking example above, but this time with multiple moving targets: vehicular traffic. With frame-based vision, bandwidth and processing are repeatedly spent on static pavement, light poles, guard rails, and other “clutter” comprising 80% of the field of view. With event-based vision, the sensor detects and transmits only the moving vehicles.

Whether counting traffic for congestion planning and reporting, or avoiding collisions aboard a given vehicle, we don’t care what the vehicle looks like – only that it’s there, moving along a certain trajectory at a detected speed.

Example 4: Surveillance and security

Prophesee named this video “Optical Flow Crowd”, which we suggest is best understood in the context of security and surveillance. It’s not unlike the vehicular flow example above, except that cars and trucks mostly stay in-lane, whereas pedestrians move at diverse angles, and their arm, leg, head, and torso movements also convey information.

The (computed) vector overlays indicate speed and directional changes, important for revealing potentially dangerous actions already underway or likely to emerge. For example, does a raised arm indicate a handgun or knife being readied, or is it just a prelude to scratching an itchy nose? Is there a pursuer turning towards another pedestrian, about whom a nearby policeman should be alerted?

Example 5: Vibration monitoring and preventative maintenance

Motorized equipment is typically serviced on a preventative maintenance schedule, a break-fix approach, or both, depending on costs, legal liabilities, and risks. What if one could inexpensively identify vibration patterns that reveal worn belts or bearings before a breakdown, and before there is preventable collateral damage to other components?

Courtesy Prophesee

Enough with the examples – how can I get event-based sensors?

1stVision is pleased to represent Prophesee with a wide range of sensors, evaluation kits, board-level and housed cameras, and an SDK designed for event-based vision applications.

Built on neuromorphic engineering principles inspired by the brain’s neural networks and human vision, Prophesee products are surprisingly affordable, enabling new fields of event-based vision, or improving on frame-based applications that may be done better, faster, or less expensively with an event-based approach.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you! We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics. What would you like to hear about? Drop a line to info@1stvision.com with the topics you’d like to know more about.

T2IR – Trigger to Image Reliability

Machine vision systems are widely accepted for their value-add in so many fields. Whether in manufacturing, pick-and-place, quality control, medicine, or other areas, vision systems save money, create competitive advantage, or both.

From “black box” uncertainties to confident outcomes using T2IR – Courtesy Teledyne DALSA

BUT deploying new systems can be tricky

For experienced machine vision system builders, sometimes the next new system is either “simple” in complexity, or a minor variant of previously built complex systems in which one has good confidence. Deployment, acceptance testing, and validation may then be straightforward, with little doubt about correctness or reliability. Congratulations if that’s the case.

For new undertakings, even for experienced vision system builders, there’s often some uncertainty in the “black box” nature of a new system. Can I trust the results as accurate? Am I double counting? Am I missing any frames? If the system overloads, how can I track which inspections were skipped while the system recovers? How to troubleshoot and debug? And rising QA standards are pushing us to verify performance – yikes!

T2IR: Trigger to Image Reliability can open up the black box

Take the mystery out of system deployment, tuning, and commissioning. T2IR’s powerful capabilities essentially open up the black box. And for users of many Teledyne DALSA cameras and framegrabber products, T2IR is free!

Two minute overview of T2IR: Trigger to Image Reliability – Courtesy Teledyne DALSA

Bundled with many Teledyne DALSA products

The T2IR features are available at no cost for customers who purchase (or already own) any of:

Cameras: Genie Nano, Linea, Piranha, and others

Frame Grabbers: Xtium and Xcelera

Software: Sapera LT SDK (for both Windows and Linux)

T2IR in a nutshell

T2IR, Trigger to Image Reliability, is a set of hardware and software features that help improve system reliability. It’s offered as a free benefit to Teledyne DALSA customers for many of their cameras and framegrabbers. It helps you get inside your system to audit and debug image flow.


Example One: Coping with unexpected triggers

Consider the following diagram:

Courtesy Teledyne DALSA

To the left of the illustration we see the expected normal trigger frequency. Each trigger – perhaps an electronic pulse from an actuator responding to a mechanical event – initiates the next camera image, readout, and processing, at intervals known to be sufficient for complete processing. No lost or partial images. No overtriggering. All as expected.

But in the middle of the illustration we see invalid triggers interspersed with valid ones. What would happen in your system if you didn’t plan for this possibility? Lost or partial images? Risk of shipping bad product to customers due to missed inspections? Damage to your system? What’s the source of the extra triggers? How do you debug the problem if it’s found during design and commissioning? How do you recover if the system is already in production?

With T2IR, the Sapera API may be programmed to detect invalid triggers that fall outside the expected timing intervals. An event can be generated for the application to manage: the application might be programmed to stop the system, tag the suspect images, note the timestamps, or otherwise make a controlled recovery.
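The detection logic is easy to illustrate generically. The sketch below is plain Python, not the Sapera API; it simply classifies incoming trigger timestamps against an expected period, which is the kind of out-of-interval check T2IR lets an application act on.

```python
def classify_triggers(timestamps_us, expected_period_us, tolerance_us):
    """Flag triggers that arrive outside the expected timing window.
    Intervals are measured against the last *valid* trigger, so a single
    spurious pulse does not cascade into rejecting everything after it."""
    valid, invalid = [timestamps_us[0]], []
    for t in timestamps_us[1:]:
        delta = t - valid[-1]
        if abs(delta - expected_period_us) <= tolerance_us:
            valid.append(t)
        else:
            invalid.append(t)   # here: tag the image, raise an event, log it...
    return valid, invalid

# Triggers expected every ~10 ms, with one spurious extra pulse at 25 ms:
ts = [0, 10_000, 20_000, 25_000, 30_000, 40_000]
valid, invalid = classify_triggers(ts, expected_period_us=10_000, tolerance_us=500)
print(invalid)    # [25000] -- the spurious trigger
```

In a real deployment, the equivalent of the `invalid` branch is where the application makes its controlled recovery: stop the line, quarantine the suspect image, or simply record the timestamp for later audit.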

Key takeaway: instead of the system blundering along with invalid triggers undetected, with possibly grave consequences, T2IR can help create a more robust system that recovers gracefully.

Courtesy Teledyne DALSA

Example Two: Tracking and Tracing Images – Tagging with timestamps

Consider the challenges of high-speed materials handling at, say, 3,000 parts per minute. Some such systems require multiple cameras – let’s say four per object – to make an accept, reject, or re-inspect decision on each component. For this hypothetical system, that’s 12,000 images per minute to coordinate!
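The arithmetic behind that figure, as a quick sanity check (the part rate and camera count are the hypothetical values above):

```python
# Hypothetical system from the text: 3,000 parts/minute, 4 cameras per part.
parts_per_minute = 3000
cameras_per_part = 4
images_per_minute = parts_per_minute * cameras_per_part
images_per_second = images_per_minute / 60
print(images_per_minute, images_per_second)   # 12000 images/min, 200 images/s
```

At 200 images per second there are only 5 ms, on average, between images – which is why unambiguous tagging matters.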

Correlating up to thousands of images per minute – Courtesy Teledyne DALSA

The illustration above shows the timestamps one might assign with T2IR, based on ticks in microseconds, according to known or assumed belt speed. But what if results seem corrupted using that method, or if there are doubts about whether the speed is truly constant?

With the Xtium frame grabber, one can use T2IR to choose among alternative timestamp bases besides the “hoped for” microsecond calculations:

Multiple timestamp tagging options – Courtesy Teledyne DALSA

So one might prefer to tag the images with an external trigger or a shaft encoder, which is physically coupled to the system’s movements and hence more precise than calculated assumptions.


Courtesy Teledyne DALSA

Sounds promising, how to learn more about T2IR?

Pursue any/all of the following:

Detailed primer: T2IR Whitepaper

Read further in this blog: Additional high-level illustrations and another example below

Speak with 1stVision: Sales engineers happy to speak with you – call 978-474-0044.


Diagnostic tool: a brief overview

Consider the following screenshot, in the context of an Xtium framegrabber together with T2IR:

Diagnostic T2IR window for Xtium framegrabber – Courtesy Teledyne DALSA

The point of the above annotated screenshot is to give a sense of the level of detail provided even at this top-level window – with drill-ins to inspect and control diverse parameters, generate reports, and more. This is “just” a teaser blog meant to whet the appetite – read the tutorial, speak with us, or see the manual for a deeper dive.


T2IR Elements and Benefits – in more detail

Courtesy Teledyne DALSA

Conclusion

Every major machine vision camera and software provider offers an SDK and at least some configuration and control features. But Trigger to Image Reliability – T2IR – takes traceability and control to another level. And for users of so many Teledyne DALSA cameras, framegrabbers, and software, it’s bundled in at no cost!


245 MP SVS-Vistek SHR811 Super High Resolution Cameras

245 MP SHR811 – Courtesy SVS-Vistek

245 MP cameras in both monochrome and color models

SVS-Vistek, with 35 years of experience developing machine vision cameras, has released its first SHR811 Super High Resolution cameras. Additional sensors and cameras will follow in the coming months. The first two SHR models – one monochrome, one color – are based on the Sony IMX811 CMOS sensor.

Highlights at a glance (Left); LCD pixels (Right) – Courtesy SVS-Vistek

The right-hand image above may look like just a color grid; in fact it’s a segment of a 245 MP image from an LCD inspection application. Imagine a large HDTV or other flat panel showing a test pattern as part of post-production quality acceptance testing. The segment shows just a subset grid of the activated pixels – at such resolution that machine vision algorithms can conclusively give each panel a clear pass or fail determination.

A frame rate of up to 12 fps might not sound impressive for certain applications, but for a 245 MP sensor it’s quite compelling. That’s achieved with the CoaXPress (CXP) interface.
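A back-of-envelope calculation shows why such an interface is needed. Assuming 8-bit pixels and ignoring protocol overhead (both simplifications):

```python
# Back-of-envelope data rate for a 245 MP sensor at 12 fps:
pixels = 245_000_000
fps = 12
bytes_per_pixel = 1                  # assuming 8-bit mono pixels
rate_gbytes_s = pixels * fps * bytes_per_pixel / 1e9
print(f"{rate_gbytes_s:.2f} GB/s")   # 2.94 GB/s
```

Roughly 2.94 GB/s sustained – far beyond what a GigE link (~0.12 GB/s) or even USB3 (~0.4 GB/s usable) can carry, hence CoaXPress.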

For the SHR811 CCX12 COLOR camera
For the SHR811 MCX12 MONOCHROME camera

Applications and Examples

The series name – SHR, for Super High Resolution – already suggests the applications for which these cameras are intended. You may have innovative applications of your own, but classical uses include:

  • Electronics and PCB inspection
  • Display inspection
  • Semiconductor wafer inspection
  • Microscopy
  • High-end surveillance
Applications and examples – Courtesy SVS-Vistek

Additional applications include microscopy and surveillance:

SHR applications –
Courtesy SVS-Vistek

Technical details

The Sony IMX811 is the remarkable sensor around which SVS-Vistek has designed the SHR811 launch-model camera.

This sensor has 62% higher pixel density than the highly successful Sony IMX411 sensor, at twice the frame rate and a similar sensor size – a classic example of Moore’s Law-style size reduction and performance improvement, as Sony builds on its track record of innovation.

1stVision overview of the SHR811MCX12 monochrome camera

Features

As one would expect, there is a comprehensive feature set, including:

SHR feature overview – Courtesy SVS-Vistek

To highlight one specific feature, consider the Sequencer capability. It allows a single trigger to begin a series of timed exposures, as described in the following short video:

Setting the Sequencer – Courtesy SVS-Vistek

For full specifications, go to the SVS-Vistek SHR camera family, drill in to a specific model, and see or download PDF Datasheet, Manual, Technical Drawing, and/or Sensor Spec. Or just call us at 978-474-0044 and let us guide you.


Collimated lighting important with telecentric lens

LTCLHP Collimated Light – Courtesy Opto Engineering

Machine vision practitioners, regardless of application or lens type, know that contrast is essential. Without sharp definition, features cannot be detected effectively.

When using a telecentric lens for precision optical 2-D measurements, ideally one should also use collimated lighting. Per the old adage about a chain being only as good as its weakest link, why invest in great lensing and then cut corners on lighting?


WITH collimated light expect high edge definition:

The cost of the light typically pays for itself in quality outcomes. Below, see red-framed enlargements of the same region of a part imaged with the same telecentric lens.

The left-hand image was taken with a conventional backlight – note how the light wraps around the edge, creating “confusion” and imprecision due to refracted light coming from all angles.

The right-hand image was obtained with a collimated backlight – with excellent edge definition.

Conventional backlight (left) vs. collimated backlight (right) – Courtesy Opto Engineering.

It all comes down to resolution

While telecentric imaging is a high-performance subset of the larger machine vision field, the same principles of resolution apply. It takes several pixels to confidently resolve any given feature – such as an edge – so any “gray areas” induced by lower-quality lighting or optics drag down system performance. See our blog and knowledge-base coverage of resolution for more details.
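As an illustration of that rule of thumb (the three-pixels-per-feature figure and the field-of-view numbers below are assumed examples, not universal constants):

```python
def min_feature_size(fov_mm, sensor_pixels, pixels_per_feature=3):
    """Smallest feature reliably resolved, using the rule of thumb that
    several pixels (here an assumed 3) must span a feature."""
    mm_per_pixel = fov_mm / sensor_pixels
    return pixels_per_feature * mm_per_pixel

# Hypothetical setup: 50 mm field of view across 2000 sensor pixels:
print(round(min_feature_size(50, 2000), 4))   # 0.075 mm
```

Any blur from poor lighting effectively widens each edge across more pixels, raising this minimum and degrading measurement precision.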

Collimated lighting in more detail

Above we see the compelling results of using collimated rather than diffuse light sources. But what is a collimated light, and how does it work so effectively?

UNLIKE a diffuse backlight, whose rays emanate towards the object at angles ranging from 0° to almost 180°, a collimated backlight sends rays with only very small deviations from perfectly parallel. Since parallel rays are also all that the telecentric lens accepts and transmits on to the camera sensor, stray rays are essentially eliminated.
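A simplified geometric model shows why the divergence angle matters so much: treat the edge blur as the penumbra cast across the gap between backlight and part. The angles and gap below are assumed examples, not vendor specifications.

```python
import math

def edge_blur_mm(standoff_mm, half_angle_deg):
    """Approximate penumbra (edge blur) cast by a backlight whose rays
    diverge by +/- half_angle_deg from parallel, across a standoff gap.
    Simplified geometry: blur ~= gap * tan(half-angle)."""
    return standoff_mm * math.tan(math.radians(half_angle_deg))

# Hypothetical 20 mm gap between part and backlight:
print(round(edge_blur_mm(20, 30), 2))    # 11.55 mm -- diffuse-ish source
print(round(edge_blur_mm(20, 0.5), 3))   # 0.175 mm -- collimated source
```

The two-orders-of-magnitude difference in edge blur is what the side-by-side images above make visible.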

The result is a high-contrast image that is easier to process with high reliability. Furthermore, exposure times are typically shorter, achieving the necessary illumination more quickly, thereby shortening cycle times and increasing overall throughput.

Many lights to choose from:

The video below shows a range of light types and models, including clearly labeled direct, diffuse, and collimated lights.

Several light types – including clearly labeled collimated lights

[Optional] Telecentric concepts overview

Below please compare the diagrams that show how light rays travel from the target position on the left, through the respective lenses, and on to the sensor position on the far right.

A telecentric lens is designed to ensure that the chief rays remain parallel to the optical axis. The key benefit is that (when properly focused and aligned) the measured size of the object is invariant to its distance from the lens. The lens effectively ignores light rays arriving from other angles of incidence, and thereby supports precise optical measurement systems – a branch of metrology.
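The invariance is easy to demonstrate numerically. The sketch below contrasts a thin-lens approximation of a conventional (entocentric) lens with an idealized telecentric lens; the focal length and magnification values are assumed examples.

```python
def apparent_size_entocentric(object_mm, distance_mm, focal_mm=25):
    """Conventional (entocentric) lens: image size shrinks with distance.
    Thin-lens approximation, magnification m = f / (d - f)."""
    return object_mm * focal_mm / (distance_mm - focal_mm)

def apparent_size_telecentric(object_mm, distance_mm, magnification=0.5):
    """Idealized telecentric lens: chief rays parallel to the axis, so
    measured size is independent of working distance (within its range)."""
    return object_mm * magnification

# The same 10 mm part measured at two working distances:
print(round(apparent_size_entocentric(10, 100), 3),
      round(apparent_size_entocentric(10, 110), 3))   # 3.333 2.941 -- size drifts
print(apparent_size_telecentric(10, 100),
      apparent_size_telecentric(10, 110))             # 5.0 5.0 -- size constant
```

With a conventional lens, a 10 mm change in working distance changes the apparent size by over 10%; the telecentric measurement doesn’t move at all, which is precisely what metrology applications require.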

If you’d like to go deeper on telecentrics, see the following two resources:

Telecentric concepts presented as a short blog.

Alternatively as a more comprehensive Powerpoint from our KnowledgeBase.

Video: Selecting a telecentric lens:

Call us at 978-474-0044 to tell us more about your application – and how we can guide you through telecentric lensing and lighting options.
