uEye EVS Event Based Cameras – Courtesy IDS Imaging
We introduced these event-based cameras in a previous blog – still a great entry point and overview. In this new blog we’ll highlight use cases. They are pretty compelling.
But first, let's revisit a single graphic that highlights the paradigm shift from frame-based to event-based imaging:
XCP-E Event based cameras utilize the Sony Prophesee sensor – Courtesy IDS Imaging
If you come from a frame-based imaging background – as most of us do – it's worth wrapping your head around the event-based model. It's that different, both at the technology level and in what it enables at the application level.
On to use cases and key takeaways…
Results instead of raw data: Per the scene-driven remark in the paradigm comparison graphic above, observe the video analysis clip below. By picking up ONLY on motion, the camera delivers exactly and only what one wants – the people and suitcases passing through the field of view.
Results instead of raw data – Courtesy IDS Imaging
A frame-based approach to such an application would require complex algorithms to separate the "moving stuff" from the "background stuff", which is compute intensive. It may be doable the hard way, but it takes effort – and isn't as performant.
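For the programmatically inclined, here's a toy NumPy sketch (our illustration, not IDS code) of the frame-differencing step a frame-based pipeline must run in software – essentially the work an event-based sensor performs in silicon, per pixel, for free:

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=25):
    """Frame-differencing motion segmentation: a minimal stand-in for the
    background-subtraction step a frame-based pipeline runs in software."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold  # True where the scene changed

# Two synthetic 8-bit frames: a bright 2x2 blob moves one pixel to the right.
prev = np.zeros((4, 6), dtype=np.uint8)
prev[1:3, 1:3] = 200
curr = np.zeros((4, 6), dtype=np.uint8)
curr[1:3, 2:4] = 200

mask = motion_mask(prev, curr)
print(mask.sum())  # 4 pixels flagged as "moving" (trailing and leading edges)
```

Real deployments would need a proper background model, noise handling, and tuning – all of which the event-based sensor sidesteps by only reporting changes in the first place.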
Extremely high dynamic range
See in the dark. The Sony Prophesee IMX636 sensor recognizes contrast changes at light levels as low as 0.08 lux.
Sensitive in very low light – Courtesy IDS Imaging
Detect extremely fast processes
Temporal resolution is <100µs, i.e. the minimum measurable time difference between two consecutive pixel events is less than 100µs. That's comparable to a traditional frame-based rate of more than 10,000 FPS – without motion blur.
High speed applications – Courtesy IDS Imaging
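A quick back-of-envelope check of that equivalence – nothing more than arithmetic, but it makes the claim concrete:

```python
# Back-of-envelope check of the <100 microsecond claim: the frame rate a
# conventional camera would need to resolve the same time difference.
temporal_resolution_s = 100e-6            # 100 microseconds, in seconds
equivalent_fps = 1 / temporal_resolution_s
print(equivalent_fps)  # 10000.0 frames per second
```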
Efficient data processing
Only changes are captured – static areas are ignored. So there is (much) less data to process than with a frame-based approach. This saves memory, data transfer volumes, and compute time.
The astute reader will have already inferred that this is a corollary of the "results instead of raw data" message and video earlier in this blog. It's such a key point it bears repeating.
Less data generated means less data to process – Courtesy IDS Imaging
The following short video shows that the Sony Prophesee IMX636 is the key to sending less data, as it only senses "what's changed". Essentially, a pixel fires exactly and only when a change is sensed at that position.
Frame-based approach sends entire frame every time vs. event-based just sends each next change – Courtesy IDS Imaging
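To make "just sends each next change" concrete, here's a sketch of what an event stream looks like to software. The record layout below is illustrative – actual Prophesee streams use packed binary formats (EVT 2.0/3.0) – but each event carries the same four fields, and accumulating events into a time-windowed "event frame" is a common first processing step:

```python
from collections import namedtuple

# Illustrative event record; real Prophesee streams pack these fields
# into a binary format, but the information content is the same.
Event = namedtuple("Event", ["x", "y", "t_us", "polarity"])  # polarity: +1 brighter, -1 darker

def accumulate(events, width, height, window_us):
    """Build a simple 'event frame' by counting events per pixel
    inside a time window - a common first step before analysis."""
    frame = [[0] * width for _ in range(height)]
    t0 = events[0].t_us
    for ev in events:
        if ev.t_us - t0 < window_us:
            frame[ev.y][ev.x] += 1
    return frame

stream = [Event(1, 0, 10, +1), Event(1, 0, 55, -1), Event(2, 1, 120, +1)]
frame = accumulate(stream, width=4, height=2, window_us=100)
print(frame)  # [[0, 2, 0, 0], [0, 0, 0, 0]] - third event falls outside the window
```

Note how pixels with no activity contribute nothing at all – that's the data-volume saving in action.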
Use cases
Some of the videos above suggest certain use cases, but let’s spell out a few:
Monitoring: Compared to CCTV, the IDS uEye XCP-E cameras are more compact, and capture only action rather than steady-state as well. Or combine the two, with the event-based camera logging the timestamps of interest.
Video analysis and Smart City people tracking: A level up from simple monitoring, people tracking doesn’t just detect motion but infers/projects trajectories, and may lead or assist in threat detection.
Drone detection: Just as with people tracking, an event-based camera picks a drone out of a field of static clutter, since it only sees what's moving.
Gesture recognition: UI design opportunities via pupil tracking, head motion, and/or hand and finger tracking.
Industrial applications: Monitor equipment vibration to optimize preventative maintenance and/or anticipate and avoid catastrophic breakdown.
Counting: E.g. pill production and sorting, food processing, or other fast-but-small-items conveyor applications.
Takeaway: If it moves, an event-based camera will find it.
See the entire family of IDS uEye XCP-E cameras. Call us at 978-474-0044. Tell us a little about your application and we’ll help you pick the ideal camera and accessories.
About you: We want to hear from you! We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics… What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.
Frame grabbers interface between high-speed cameras and PCs to reliably transfer and buffer image data. They can also do various pre-processing and image transformations, improving throughput and reducing workload on the PC.
Teledyne DALSA is a recognized industry leader in frame grabbers and machine vision cameras. Their board-level frame grabbers are dependable and high-performance. Now there’s Teledyne DALSA’s XTIUM3-CLHS PX8 Camera Link HS Frame Grabber. It’s designed for maximum sustained throughput with high-speed image acquisition rates up to 8.6 GB/s and host transfer rates up to 12.5 GB/s.
High speed data transmission
Using the CLHS X-Protocol, the Xtium3-CLHS PX8 achieves over 97% packet efficiency with 64b/66b encoding. With 7-lane AOC cables, it sustains maximum input data rates at cable lengths beyond 30m. Data forwarding enables real-time redistribution of data to up to 12 computers, connecting with other Xtium3-CLHS grabbers via standard AOC cables. Image courtesy Teledyne DALSA.
Optimized performance and compatibility
The Xtium3 series leverages PCIe Gen4 architecture to deliver sustained throughput of 13.2 GB/s directly to host memory, minimizing CPU overhead and accelerating image processing. Its enhanced memory design supports area and line scan, monochrome and color cameras, and offers exceptional performance for Camera Link HS® and CoaXPress® interfaces.
Faster. More efficient. Higher-performance.
Thanks to Moore's law and industry innovation, machine vision practitioners benefit from electronic components such as cameras, frame grabbers, and computers that outperform their predecessors. If you already use CLHS and prior-generation frame grabbers, you may already know you need or want the XTIUM3-CLHS PX8.
Or are you at the design and brainstorming phase?
We’re always happy to provide a product quote, whether for single units or for multiples, of course. But we take pride in assisting our customers by guiding component selection across camera interfaces, sensor selection, lenses, frame grabbers, and more. Call us at 978-474-0044.
XTIUM3 family – more to follow
The XTIUM3-CLHS PX8 is the first member of the XTIUM3 family, continuing Teledyne DALSA's commitment to high-performance frame grabber innovation.
Short Wave Infrared (SWIR) imaging is enjoying double-digit growth rates, thanks to improving technologies and performance, and innovative applications. Unlike visible-light sensors, SWIR cameras can image through silicon, plastics, and other semitransparent materials. That’s really effective for many quality control applications, materials sorting and inspection, crop management, fruit sorting, medical applications, and more.
Visible vs. SWIR image pairs – Courtesy Allied Vision – a TKH Vision brand
Unlike CMOS sensors, which reliably yield high-quality images under a wide range of operating conditions, SWIR sensors typically need "tuning" relative to temperature and exposure duration. First-generation SWIR cameras sometimes generated images that, while useful, were a bit rough, with certain limitations at the extremes. SWIR camera manufacturers have been innovating to raise the performance of their cameras.
What’s the problem?
In short-wave infrared (SWIR) imaging applications, camera operating points such as exposure time, gain, and bit-depth need to be adapted depending on the inspection task at hand. Image sensor defects such as defective pixels and image non-uniformities – inherent to SWIR sensors – are sensitive to the aforementioned operating points.
Unless controlled, image quality can suffer
Consider the following image:
The gray field is intentionally unexciting as a flat field baseline without a target. The white dots are undesired defect pixels, an unfortunate characteristic that one can thankfully correct through interpolation. This image is meant to show “what we do NOT want”.
The four parameters – exposure time, temperature, bit-depth, and gain – may collectively be called the "operating point" of a SWIR sensor, as together they have a significant bearing on image quality. Through manual or automated adjustments, one can optimize image outcomes.
Harnessing variable parameters into manageable corrections – Courtesy Allied Vision – a TKH Vision brand
In this blog, we provide context for these concepts. And we introduce Dynamic Operating Point Optimization (DOPO) as an automated innovation available in the fx series of SWIR cameras offered by SVS Vistek / Allied Vision.
fx series SWIR cameras – Courtesy SVS Vistek / Allied Vision – a TKH Vision brand
Before Dynamic Operating Point Optimization (DOPO)
SWIR cameras with some image correction capabilities – prior to the DOPO we'll describe in the next section – certainly improved image quality, largely via defect pixel correction (DPC) and non-uniformity correction (NUC).
Defect pixel correction (DPC) is achieved by replacing the "hot" or "dead" pixel value with the average value of its nearest neighbors. As long as there isn't a cluster defect with multiple adjacent defect pixels (typically identified and rejected at manufacturing quality control), this is an effective solution.
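Nearest-neighbor DPC fits in a few lines. This is our illustration of the principle, not the camera's firmware:

```python
import numpy as np

def correct_defect_pixel(img, y, x):
    """Defect pixel correction (DPC) sketch: replace one flagged pixel
    with the mean of its valid nearest neighbours (3x3, edge-aware)."""
    h, w = img.shape
    neighbours = [img[j, i]
                  for j in (y - 1, y, y + 1)
                  for i in (x - 1, x, x + 1)
                  if 0 <= j < h and 0 <= i < w and (j, i) != (y, x)]
    img[y, x] = np.mean(neighbours)
    return img

flat = np.full((3, 3), 100.0)
flat[1, 1] = 4095.0              # a "hot" pixel saturating the flat field
corrected = correct_defect_pixel(flat, 1, 1)
print(corrected[1, 1])           # 100.0 - back in line with its neighbours
```

In a real camera the defect coordinates come from the factory defect map, and the interpolation runs in hardware, per frame.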
Non-uniformity correction (NUC) is a bit more complex, but worth understanding. The non-uniformities arise in thermal imaging due to variations in sensitivity among pixels. If uncorrected, the target image could be negatively impacted with striations, ghost images, flecks, etc.
Factory configuration of each camera, before finalizing testing and shipping, adapts for the nuanced differences among individual sensors. Correction tables are created and stored onboard the camera, so that the user receives a camera that already compensates for the variations.
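Those onboard correction tables are, in the textbook "two-point" formulation, per-pixel gain and offset maps derived from a dark reference and a uniformly illuminated flat reference. A generic sketch of the idea – not Allied Vision's exact onboard method:

```python
import numpy as np

# Two-point non-uniformity correction (NUC): per-pixel gain and offset
# maps computed from a dark frame and a flat-field frame.
dark = np.array([[10., 12.], [9., 11.]])       # per-pixel response, no light
flat = np.array([[110., 132.], [99., 121.]])   # per-pixel response, uniform light

target = 100.0                   # desired uniform flat-field level
gain = target / (flat - dark)    # per-pixel gain map
offset = -gain * dark            # per-pixel offset map

raw = flat                       # re-imaging the flat field...
corrected = gain * raw + offset  # ...now yields a uniform image
print(corrected)                 # every pixel == 100.0
```

The catch, as the next section explains, is that a single gain/offset pair is only exactly right at the operating point where it was measured.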
In reality it’s a bit more complicated
In fact defect pixels aren’t always simply hot or dead: they may appear only at certain operating points (exposure duration, temperature, gain, bit-depth, or combinations thereof).
Likewise for non-uniformity characteristics.
So the factory configuration mentioned above, while satisfactory for many applications, is a one-size-fits-all compromise, relative to the tools (then) available to the camera manufacturer and the price point the market would accept. Just as with t-shirts and socks, one size doesn't really fit every need.
Dynamic Operating Point Optimization (DOPO)
Allied Vision has introduced dynamic operating point optimization (DOPO) to further automate SWIR cameras’ capacity to adapt to changes brought about by exposure time, temperature, gain, and bit depth. Let’s examine the graphic below to understand DOPO and the added value it delivers.
First consider the Y-axis, “Image Quality”. Looking at the flat-field gray block, clearly one would prefer the artifact-free characteristics of the upper region.
Also note the X-axis, “Sensor Temperature / Exposure Time”, for an uncooled thermal sensor. (Note that some thermal cameras do have sensor cooling options, but that’s a topic for another blog.) See the black line “No correction” sloping from upper left to lower right, and how the number of image artifacts grows markedly with exposure time. Without correction the defect pixels and sensor non-uniformities are very apparent.
Flat-field image quality with and without corrections – Courtesy Allied Vision – a TKH Vision brand
Now look at the gray lines labeled “NUC+DPC”. For a factory calibrated camera optimized for a sensor at 30 degrees Celsius and a 25ms exposure, the NUC and DPC corrections indeed optimize the image effectively – right at that particular operating point. And it’s “not bad” for exposure times of 20ms or 15ms to the left, or 30ms or 35ms to the right. But the corrections are less effective the further one gets away from that calibration point.
Finally, let's look at the zig-zag red lines labeled "DOPO". Instead of the "one size best-guess" factory calibration represented by the grey lines, a DOPO-equipped camera is factory calibrated with up to 600 correction maps, varying each of exposure time, temperature, gain, and bit-depth across a range of steps, and building maps that represent all the stepwise permutations.
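To see how four swept parameters multiply out to hundreds of maps: the real calibration grid isn't published, so the step counts below are invented purely for illustration, but the combinatorics are the point:

```python
from itertools import product

# Hypothetical step counts (the actual calibration grid isn't published),
# chosen only to show how four swept parameters multiply to 600 maps.
exposures_ms = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]  # 10 steps
temps_c = [10, 20, 30, 40, 50]                          # 5 steps
gains_db = [0, 6, 12, 18]                               # 4 steps
bit_depths = [8, 10, 12]                                # 3 steps

operating_points = list(product(exposures_ms, temps_c, gains_db, bit_depths))
print(len(operating_points))  # 10 * 5 * 4 * 3 = 600 correction maps
```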
Takeaway: DOPO provides a set of correction tables, not just one
So with DOPO providing a set of correction tables, the camera can automatically apply the best-fit correction for whatever operating point is in use. That's the key point of DOPO: unlike a single-fit correction table, with so many calibrated corrections the best fit is never far off.
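Conceptually, that automatic selection is a nearest-neighbor lookup over the calibrated operating points. The sketch below is ours – the real camera does this in firmware, and the map names and step values are invented for the example:

```python
# Illustrative best-fit lookup over pre-computed correction maps, keyed by
# operating point. Map IDs and calibration points are invented.
calibration_points = {
    (15.0, 30): "map_A",   # (exposure_ms, sensor_temp_C) -> correction map id
    (25.0, 30): "map_B",
    (25.0, 40): "map_C",
    (35.0, 40): "map_D",
}

def best_fit_map(exposure_ms, temp_c):
    """Pick the calibrated map whose operating point is nearest the current one."""
    return min(calibration_points,
               key=lambda p: (p[0] - exposure_ms) ** 2 + (p[1] - temp_c) ** 2)

point = best_fit_map(exposure_ms=27.0, temp_c=31.0)
print(calibration_points[point])  # map_B - the closest calibrated operating point
```

With a dense enough calibration grid (up to 600 points, per the previous section), the nearest calibrated operating point is always close to the actual one – which is exactly why DOPO corrections stay effective across the whole operating range.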
Give us some brief idea of your application and we will contact you to discuss camera options.
Thermal imaging with SWIR cameras – plenty of choices
There are a number of options as one selects a SWIR camera. Is your choice driven mostly by performance under extreme conditions? Size? Cost? A combination of these?
Call us at 978-474-0044. We can guide you to a best-fit solution, according to your requirements.
The key message of this blog is to introduce Dynamic Operating Point Optimization – DOPO – as a set of factory calibration tables and the camera’s ability to switch amongst them. An equally important takeaway is that you may or may not need DOPO for a particular thermal imaging application. There are many SWIR options, in cameras and lenses, and we can help you choose.
All of us machine vision practitioners know a thing or two about camera lenses. Some of us are optical engineers. Others are self-taught through reading and experience. Others let their systems designers choose the lens.
Ever need a fast focus change?
If your application does fine with a fixed focal-length lens, or a mechanically adjustable focus, that's great. But some applications benefit from – or only become possible with – the ability to rapidly tune the focus. Enter liquid lenses, like Opto Engineering's EL5MP and EL12MP.
EL5MP liquid lens – Courtesy Opto Engineering
Liquid lenses – from theory to commercial availability
Leonhard Euler (Euler’s equations, anyone?) did groundbreaking work in fluid dynamics in the 1700s. In 1859 Thomas Sutton used a glass sphere filled with water to create a lens. So the concepts for liquid lenses aren’t new. But they’ve only been commercialized in the last 20 years. Here’s a short video (3 minutes) featuring an early leader in liquid lenses, with a nice overview of the key concepts:
From theory to practice – a 5MP and 12MP liquid lens series
If you need fast focus (a few milliseconds) and high reliability (more than a billion cycle lifetime), Opto Engineering offers both a 5MP liquid lens series as well as a 12MP series. Each series provides several focal length options:
6mm for the 5MP series only
8mm for the 5MP series only
12mm for BOTH the 5MP and 12MP series
16mm for BOTH the 5MP and 12MP series
25mm for BOTH the 5MP and 12MP series
35mm for 12MP series only
Working distance coverage range
Across the two series, near-side working distances range from 60 to 200mm, depending on the specific model. At the far side, the WD goes to infinity for each of the lenses. See the product comparison tables and data sheets at Opto Engineering EL5MP and EL12MP respectively.
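A useful rule of thumb for what "tunable focus" asks of the optics: liquid lenses are typically driven in optical power (diopters), and in a simplified thin-lens view, refocusing from infinity to an object at working distance d takes roughly 1/d diopters of extra power (d in meters). A quick calculation for the near end of the range above:

```python
# Simplified thin-lens approximation: relative to infinity focus,
# bringing an object at working distance d (meters) into focus takes
# roughly 1/d diopters of additional optical power.
def extra_power_diopters(working_distance_m):
    return 1.0 / working_distance_m

near_wd_m = 0.2   # 200 mm, from the near-side WD range above
print(extra_power_diopters(near_wd_m))  # 5.0 diopters
```

This is an approximation for intuition only; the actual power swing depends on the full optical design, so consult the lens data sheets for real figures.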
More specs
The 5MP series is designed for sensors up to 2/3″. One exception: the 6mm focal length model is for sensors up to 1/1.8″.
The 12MP series is for sensors up to 1.1″.
Basis for liquid lens – Courtesy Opto Engineering
Liquid lens advantages vs. mechanical focus – Courtesy Opto Engineering
Low distortion is another advantage
Liquid lens image (left) has almost no distortion – another huge benefit – Courtesy Opto Engineering
What are the focus demands of your application?
Do you know your application’s focus requirements? Could you build a more effective application with faster focus? Reduce lens service and replacement intervals by switching from a mechanical to a liquid lens? Call us at 978-474-0044 to discuss options or get a quote.
Video presentation on Opto Engineering liquid lenses
This tradeshow presentation runs 14 minutes, if you want a deeper dive:
Courtesy Opto Engineering
Note: Over the years, various operating principles for liquid lenses have been introduced.