IDS uEye EVS Event Based Cameras – Use cases

IDS uEye EVS event based cameras
uEye EVS Event Based Cameras – Courtesy IDS Imaging

We introduced these event-based cameras in a previous blog, which remains a good entry point and overview. In this post we highlight use cases – and they are compelling.


But first, here is one graphic again to highlight the paradigm shift from frame-based to event-based imaging:

frame-based vs event-based paradigm
XCP-E event-based cameras utilize the Sony Prophesee sensor – Courtesy IDS Imaging

If you come from a frame-based imaging background – as most of us do – it’s worth taking the time to absorb the event-based model. It is that different, both at the technology level and in what it enables at the application level.


On to use cases and key takeaways…

Results instead of raw data: As noted in the paradigm comparison graphic above, the sensor is scene-driven. Observe the video analysis clip below: by picking up on motion ONLY, the camera delivers exactly and only what one wants – the people and suitcases passing through the field of view.

Results instead of raw data – Courtesy IDS Imaging

A frame-based approach to the same application would require complex, compute-intensive algorithms to separate the “moving stuff” from the “background stuff”. It may be doable the hard way, but it takes effort – and doesn’t perform as well.
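To make the contrast concrete, here is a minimal sketch of that frame-based workaround, using OpenCV’s MOG2 background subtractor (the video file name is a placeholder): every full frame must be transferred and scanned just to decide which pixels moved – the very work an event-based sensor does at the pixel level.

```python
import cv2

# Frame-based motion segmentation: every full frame must be read,
# transferred, and processed just to find the moving pixels.
cap = cv2.VideoCapture("concourse.mp4")  # hypothetical input clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)   # per-pixel moving/static decision
    moving = cv2.countNonZero(mask)
    print(f"moving pixels this frame: {moving}")

cap.release()
```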


Extremely high dynamic range

See in the dark: the Sony Prophesee IMX636 sensor detects contrast changes even at light levels as low as 0.08 lux.

Sensitive in very low light – Courtesy IDS Imaging

Detect extremely fast processes

Temporal resolution <100 µs: the minimum measurable time difference between two consecutive pixel events is less than 100 µs. That’s comparable to a traditional frame-based rate of more than 10,000 FPS – without motion blur.

High speed applications – Courtesy IDS Imaging
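The frame-rate equivalence is simple arithmetic; the sketch below (illustrative, in Python) just inverts the temporal resolution:

```python
# Equivalent frame rate for a given event temporal resolution.
temporal_resolution_s = 100e-6                # 100 microseconds per event
equivalent_fps = 1.0 / temporal_resolution_s  # one "sample" per resolution step
print(f"{equivalent_fps:,.0f} FPS equivalent")  # -> 10,000 FPS
```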



Efficient data processing

Only changes are captured – static areas are ignored. So there is (much) less data to process than with a frame-based approach. This saves memory, data transfer volumes, and compute time.
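A back-of-the-envelope comparison illustrates the savings. The activity fraction and bytes-per-event below are assumptions for illustration only; real event encodings (and real scenes) vary:

```python
# Rough bandwidth comparison, frame-based vs event-based (illustrative numbers).
width, height, fps, bytes_per_px = 1280, 720, 60, 1
frame_rate_bytes = width * height * fps * bytes_per_px  # every pixel, every frame

active_fraction = 0.02   # assume 2% of pixels change per frame-time
bytes_per_event = 8      # x, y, polarity, timestamp (format dependent)
event_rate_bytes = width * height * fps * active_fraction * bytes_per_event

print(f"frame-based: {frame_rate_bytes/1e6:.1f} MB/s")   # ~55.3 MB/s
print(f"event-based: {event_rate_bytes/1e6:.1f} MB/s")   # ~8.8 MB/s
```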

The astute reader will have already inferred that this – like the data-rate arithmetic above – is a corollary of the “results instead of raw data” message and video earlier in this blog. It’s such a key point that it bears repeating.

Less data generated means less data to process – Courtesy IDS Imaging

The following short video shows how the Sony Prophesee IMX636 sends less data by sensing only “what’s changed”: a pixel fires exactly and only when it sees motion at its position – and stays silent when it doesn’t.

Frame-based approach sends entire frame every time vs. event-based just sends each next change – Courtesy IDS Imaging
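Conceptually, each event is just a tuple of (x, y, polarity, timestamp). A common way to visualize a stream is to accumulate a short time slice into an image; the sketch below uses a few made-up events:

```python
import numpy as np

# Each event: x, y, polarity (+1/-1), timestamp in microseconds.
events = np.array([(12, 40, 1, 105), (13, 40, 1, 121), (12, 41, -1, 180)],
                  dtype=[("x", "<u2"), ("y", "<u2"), ("p", "<i2"), ("t", "<i8")])

# Accumulate a 10 ms slice into a signed image for display.
frame = np.zeros((480, 640), dtype=np.int32)
window = events[(events["t"] >= 0) & (events["t"] < 10_000)]
np.add.at(frame, (window["y"], window["x"]), window["p"])
print("nonzero (changed) pixels:", np.count_nonzero(frame))
```

Only the pixels touched by motion appear in the accumulated frame; everything else stays zero.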

Use cases

Some of the videos above suggest certain use cases, but let’s spell out a few:

Monitoring: Compared to CCTV, the IDS uEye XCP-E cameras are more compact and show only the action, not the steady state. Or combine the two, with the event-based camera logging the timestamps of interest.

Video analysis and Smart City people tracking: A level up from simple monitoring, people tracking doesn’t just detect motion but infers and projects trajectories, and can lead or assist in threat detection.

Drone detection: Just as with people tracking, an event-based camera picks a drone out of a field of static clutter, because it only sees what’s moving.

Gesture recognition: UI design opportunities based on pupil tracking, head motion, or hand and finger tracking.

Industrial applications: Monitor equipment vibration to optimize preventative maintenance and/or anticipate and avoid catastrophic breakdown.

Counting: E.g. pill production and sorting, food processing, or other conveyor applications with small, fast-moving items – see the sketch below.
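As a hedged illustration of the counting idea: items crossing a virtual line on a conveyor appear as bursts of events separated by quiet gaps, so counting bursts counts items. The gap threshold and timestamps below are synthetic:

```python
import numpy as np

# Hypothetical counting approach: items crossing a virtual line on a conveyor
# show up as bursts of events in a narrow row band of the sensor.
def count_crossings(event_times_us, gap_us=2_000):
    """Count distinct bursts of events, separated by quiet gaps."""
    if len(event_times_us) == 0:
        return 0
    gaps = np.diff(np.sort(event_times_us))
    return 1 + int(np.sum(gaps > gap_us))

# Three items passing the line, ~10 ms apart (synthetic timestamps).
times = np.concatenate([np.arange(0, 500, 50),
                        np.arange(10_000, 10_500, 50),
                        np.arange(20_000, 20_500, 50)])
print(count_crossings(times))   # -> 3
```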


Takeaway: If it moves, an event-based camera will find it.


See the entire family of IDS uEye XCP-E cameras. Call us at 978-474-0044. Tell us a little about your application and we’ll help you pick the ideal camera and accessories.

Contact us for a quote

#IDS #uEye #EventBased

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and component selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you! We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics. What would you like to hear about? Drop a line to info@1stvision.com with the topics you’d like to know more about.

Drone detection event-based cameras from Prophesee

Event-based cameras outperform frame-based approaches for many applications. We provided insight into the event-based paradigm in a recent blog. Or download our whitepaper on event-based sensing.

In this piece, we focus on drone detection, a task at which event-based imaging excels. For full impact, please view the following in full-screen mode using the “four corners” button.

Find the drone – Event-based approach beats frame-based method – Courtesy scientific paper attribution

As discussed in the event-based paradigm introductions linked above, a frame-based approach would struggle to track a drone moving through a visually complex environment (above left), having to parse drone shapes, orientations, occlusions, etc., even though most of the imagery is static.

Meanwhile, as seen in the event-based video (above right), the new paradigm only looks for “what’s changed”, which amounts to showing “what’s moving”. For drone detection, as well as perimeter intrusion, vibration monitoring, and similar applications, that’s ideal.
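One plausible, much-simplified pipeline: accumulate a few milliseconds of events into a binary map, then extract connected blobs of activity as moving-object candidates – static clutter contributes nothing. A sketch using OpenCV (sensor resolution and thresholds are assumptions):

```python
import cv2
import numpy as np

def moving_object_candidates(event_xs, event_ys, shape=(480, 640), min_area=20):
    """Accumulate events into a binary map and return bounding boxes of blobs."""
    mask = np.zeros(shape, dtype=np.uint8)
    mask[event_ys, event_xs] = 255
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8))  # bridge sparse events
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

# Synthetic cluster of events around (300, 200): one candidate box expected.
xs = np.random.randint(295, 306, 200)
ys = np.random.randint(195, 206, 200)
print(moving_object_candidates(xs, ys))
```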

1stVision represents Prophesee’s event-based sensors and cameras, built on neuromorphic engineering principles inspired by human vision. Call us at 978-474-0044 to learn more or request a quote.

Contact us for a quote

For some applications, one only needs an event-based sensor – problem solved. For other applications, one might combine different imaging approaches. Consider the juxtaposition of three methods shown below:

Visible, polarization, and event-based approaches – Courtesy Prophesee and EOPTIC

The multimodal approach above is used in a proprietary system developed by EOPTIC, which integrates visible, polarization, and event-based sensors. Certain applications need the best of all three – speed, detail, and situational awareness – for automated confidence and accuracy, for example.

Here’s another side-by-side video on drone detection and tracking:

Visible vs. (hybrid) event-based imaging – Courtesy Prophesee and NEUROBUS

The above-left video uses conventional frame-based imaging, where it’s quite hard to see the drone until it rises above the trees. But the event-based approach used by Prophesee’s customer Neurobus, together with their own neuromorphic technologies, identifies the drone even amidst the trees – a level of early warning that could make all the difference.

By the numbers:

Enough with the videos – it looks compelling, but can you quantify Prophesee event-based sensors?

Quantifying key attributes – Courtesy Prophesee

Ready to evaluate event-based vision in your application?

1stVision offers Prophesee Metavision® evaluation kits designed to help engineers and developers quickly assess event-based sensing for high-speed motion detection, drone tracking, robotics, and other dynamic vision applications. Each kit provides everything needed to get started with Prophesee’s Metavision technology, including hardware, software tools, and technical support from our experienced machine vision team. Request a quote to discuss kit availability, configuration options, and how we can help accelerate your proof-of-concept or system deployment.

Technical note: The GenX320 Starter Kit for Raspberry Pi 5 utilizes the Prophesee GenX320 sensor, expressly designed for event-based sensing.

Kit or camera? You choose.

The kits described and linked above are ideal for those pursuing embedded designs. If you prefer a full camera with a USB3 interface – still very compact at less than 5 cm per side – see the IDS uEye event-based cameras. You’ve got options.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and component selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you! We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics. What would you like to hear about? Drop a line to info@1stvision.com with the topics you’d like to know more about.

Prophesee event-based vision kits and Raspberry Pi

Event-based vision (EBV) is really taking off. We’ve previously provided an overview of the concepts and applications, as well as Prophesee products for EBV. Here’s a reminder diagram and short video for context; then we’ll dig into using Prophesee EBV kits with the Raspberry Pi.

Event-based vision is a new paradigm – Courtesy Prophesee

Frame-based vs. Event-based approach to eye tracking – for example:

So I want to try a Prophesee Metavision Evaluation Kit!

Enough theory already – I want to get hands-on with this! So how do I get started with the Prophesee Metavision Evaluation Kits?

1st Vision is the official US partner for Prophesee, so start with a quote and purchase a Raspberry Pi 5 CSI module with the 320×320 pixel GenX320 sensor, or the GenX320 Raspberry Pi 5 Module.

The main difference between the two is the lens mount: the M12 version allows you to change lenses yourself, while the M6 lens version gives you a smaller camera front and a wider field of view. You can see the variations here: https://www.1stvision.com/cameras/GenX320-Starter-Kit-for-Rasberry-Pi-5

You’ll need to purchase the Raspberry Pi itself elsewhere, as 1st Vision does not sell them.

Prophesee recommends the 8 GB version at a minimum, with the 16 GB version recommended for on-board vision computation.

Prophesee also recommends getting the active cooler, the 27 W power supply, and an NVMe adapter:

  1. Active Cooler – SC1148 (Buy a Raspberry Pi Active Cooler – Raspberry Pi)
  2. 27 W Power Supply (Buy a Raspberry Pi 27W USB-C Power Supply – Raspberry Pi)
  3. SSD Kit (Buy a Raspberry Pi SSD Kit – Raspberry Pi)

Software options

A further note: the Metavision 5 SDK will not run on the Raspberry Pi 5, as its CPU is insufficient for that computational load. You’ll need to use the Metavision 4 OpenEB SDK instead. So, to be clear, the SDK choices are:

SDK for Raspberry Pi: Metavision 4 OpenEB (no cost)
SDK for PC: Metavision 5 SDK (bundled offer or standalone purchase)

Metavision SDK options by processor preference
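For a feel of the API before a kit arrives: with OpenEB installed, reading a recorded event stream looks roughly like the sketch below. It is based on the metavision_core Python bindings; the file name is a placeholder and details vary by SDK version:

```python
from metavision_core.event_io import EventsIterator

# Iterate a recording in 10 ms slices; each slice is a numpy
# structured array of (x, y, p, t) events.
mv_iterator = EventsIterator(input_path="recording.raw", delta_t=10_000)

for events in mv_iterator:
    if events.size == 0:
        continue
    print(f"{events.size} events, t = {events['t'][0]}..{events['t'][-1]} us")
```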

If you would like to talk it through, just call us at 978-474-0044. Or use the link below to request we get back to you by either e-mail or phone.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and component selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you! We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics. What would you like to hear about? Drop a line to info@1stvision.com with the topics you’d like to know more about.

Whitepaper: Event-based sensing paradigm

Except for the sometimes-compelling line-scan imaging, machine vision has been dominated by frame-based approaches. (Compare Area-scan vs. Line-scan.) With an area-scan camera, the entire two-dimensional sensor array of x pixels by y pixels is read out and transmitted over the digital interface to the PC host. Whether USB3, GigE, CoaXPress, CameraLink, or any other interface, that’s a lot of image data to transport.

Download whitepaper
Event-based sensing as alternative to frame-based approach

If your application is about motion, why transmit the static pixels?

The question above is intentionally provocative, of course. One might ask, “do I have a choice?” With conventional sensors, one really doesn’t: their pixels just convert light to electrons according to the physics of CMOS, and readout circuits move the array of charges down the interface to the host PC for algorithmic interpretation. There’s nothing wrong with that! Thousands of effective machine vision applications use precisely that frame-based paradigm. Or the line-scan approach, arguably a close cousin of the area-scan model.

Consider the four-frame sequence to the left, for a candidate golf-swing analysis application. Per the legend, the blue-tinged golfer, club, and ball (marked up in post-processing) are undersampled, in the sense that phases of the swing fall between the frames.

Meanwhile the non-moving tree, grass, and sky are needlessly re-sampled in each frame.

It takes an expensive high-frame-rate sensor and interface to significantly increase the sample rate. Plus storage capacity for each frame. And/or processing capacity – for automated applications – to separate the motion segments from the static segments.

With event-based sensing, introduced below, one can achieve the equivalent of 10k fps – by just transmitting the pixels whose values change.

Images courtesy Prophesee Metavision.
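The undersampling is easy to quantify. The club-head speed below is an illustrative assumption:

```python
# Undersampling arithmetic for the golf-swing example (illustrative numbers).
club_speed_m_s = 40          # assumed driver head speed, roughly 90 mph
standard_fps = 30
event_equiv_fps = 10_000

print(f"at {standard_fps} fps the head moves "
      f"{club_speed_m_s / standard_fps * 100:.0f} cm between frames")
print(f"at {event_equiv_fps} fps equivalent: "
      f"{club_speed_m_s / event_equiv_fps * 1000:.0f} mm between samples")
```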

Event-based sensing only transmits the pixels that changed

Unlike photography for social media or commercial advertising, where real-looking images are usually the goal, for machine vision it’s all about effective (automated) applications. In motion-oriented applications, we’re just trying to automatically control the robot arm, drive the car, monitor the secure perimeter, track the intruder(s), monitor the vibration, …

We’re NOT worried about color rendering, pretty images, or the static portions in the field of view (FOV). With event-based sensing, “high temporal imaging” is possible, since one need only pay attention to the pixels whose values change.

Consider the short video below. The left side shows a succession of frame-based images of a machine driven by an electric motor and belt. But that image sequence is not a helpful basis for monitoring vibration, whether for scheduling (or skipping) maintenance or anticipating breakdowns.

The right-hand sequence was obtained with an event-based vision sensor (EVS), and clearly reveals components with both “medium” and “significant” vibration. Here those thresholds have triggered color-mapped pseudo-images to aid comprehension. But an automated application could map the coordinates to take action, such as gracefully shutting down the machine, scheduling maintenance according to calculated risk, etc.

Courtesy Prophesee Metavision
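As a hedged illustration of the principle (Prophesee’s SDK ships its own vibration tooling): the event rate at a vibrating spot oscillates with the motion, so a Fourier transform of binned event counts reveals the dominant frequency. The data below is synthetic:

```python
import numpy as np

# Synthetic event timestamps (us) from a point vibrating at ~120 Hz:
# events cluster on each swing, so the event-rate signal oscillates at 120 Hz.
rng = np.random.default_rng(0)
t = np.arange(0, 1_000_000, 50)                       # 1 s at 20 kHz sampling
rate = 1 + np.cos(2 * np.pi * 120 * t / 1e6)          # modulated event rate
event_times = t[rng.random(t.size) < rate * 0.05]

# Bin into a rate signal and take the FFT to find the dominant frequency.
counts, edges = np.histogram(event_times, bins=2000, range=(0, 1_000_000))
spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
freqs = np.fft.rfftfreq(len(counts), d=(edges[1] - edges[0]) / 1e6)
print(f"dominant vibration: {freqs[spectrum.argmax()]:.0f} Hz")
```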

Another example to help make it real:

Here’s another short video, which brings to mind applications like autonomous vehicles and security. It’s not meant to be pretty – it’s meant to show that the sensor detects and transmits just the pixels that correspond to change:

Courtesy Prophesee Metavision

Event-based sensing – it really is a different paradigm

Even (especially?) if you are seasoned at line-scan or area-scan imaging, it’s a paradigm shift to understand event-based sensing. Inspired by human vision, and built on the foundation of neuromorphic engineering, it’s a new technology – and it opens up new kinds of applications. Or alternative ways to address existing ones.

Download whitepaper
Event-based sensing as alternative to frame-based approach

Download the whitepaper and learn more about it! Or fill out our form below – we’ll follow up. Or just call us at 978-474-0044.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and component selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you! We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics. What would you like to hear about? Drop a line to info@1stvision.com with the topics you’d like to know more about.

#EVS

#event-based

#neuromorphic