Dynamic Operating Point Optimization – Explained!

Short-wave infrared (SWIR) imaging is enjoying double-digit growth rates, thanks to improving technologies and performance and to innovative applications. Unlike visible-light sensors, SWIR cameras can image through silicon, plastics, and other semitransparent materials. That makes them effective for many quality control applications, materials sorting and inspection, crop management, fruit sorting, medical applications, and more.

Visible vs. SWIR image pairs – Courtesy Allied Vision – a TKH Vision brand

Unlike visible-light CMOS sensors, which reliably deliver high-quality images across wide operating conditions, SWIR sensors typically need “tuning” relative to temperature and exposure duration. First-generation SWIR cameras sometimes generated images that, while useful, were a bit rough and showed limitations at the extremes. SWIR camera manufacturers have been innovating solutions to raise the performance of their cameras.

What’s the problem?

In short-wave infrared (SWIR) imaging applications, camera operating points such as exposure time, gain, and bit depth need to be adapted to the inspection task at hand. Image sensor defects such as defective pixels and image non-uniformities – inherent to SWIR sensors – are sensitive to those operating points.

Unless controlled, image quality can suffer

Consider the following image:

The gray field is intentionally unexciting: a flat-field baseline without a target. The white dots are undesired defect pixels, an unfortunate characteristic that can thankfully be corrected through interpolation. This image shows what we do NOT want.

The four parameters – exposure setting, temperature, bit depth, and gain – may collectively be called the “Operating Point” of a SWIR sensor, as together they have a significant bearing on image quality. Through manual or automated adjustments, one can optimize image outcomes.

Harnessing variable parameters into manageable corrections – Courtesy Allied Vision – a TKH Vision brand

In this blog, we provide context for these concepts. And we introduce Dynamic Operating Point Optimization (DOPO) as an automated innovation available in the fx series of SWIR cameras offered by SVS Vistek / Allied Vision.

fx series SWIR cameras – Courtesy SVS Vistek / Allied Vision – a TKH Vision brand

Before Dynamic Operating Point Optimization (DOPO)

SWIR cameras with some image correction capabilities – prior to the DOPO we’ll describe in the next section – certainly improved image quality, largely via defect pixel correction (DPC) and non-uniformity correction (NUC).

Defect pixel correction (DPC) replaces the “hot” or “dead” pixel value with the average of its nearest neighbors. As long as there isn’t a cluster defect with multiple adjacent defect pixels (typically identified and rejected at manufacturing quality control), this is an effective solution.
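As an illustrative sketch – not any camera’s on-board implementation – nearest-neighbor interpolation of an isolated defect pixel might look like this in Python, assuming a 2-D image array and a pre-computed defect map:

```python
import numpy as np

def correct_defect_pixels(img, defect_mask):
    """Replace each flagged pixel with the mean of its valid 8-neighbors.

    img: 2-D array of raw sensor values.
    defect_mask: boolean array, True where a pixel is hot or dead.
    """
    out = img.astype(float).copy()
    rows, cols = img.shape
    for r, c in zip(*np.nonzero(defect_mask)):
        # Gather in-bounds neighbors that are not themselves defective
        vals = []
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and not defect_mask[rr, cc]:
                    vals.append(img[rr, cc])
        if vals:  # isolated defect: interpolate (clusters need more care)
            out[r, c] = np.mean(vals)
    return out
```

The cluster-defect caveat from the text is visible in the code: every neighbor of a pixel in a large cluster would itself be defective, leaving nothing sound to average.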

Non-uniformity correction (NUC) is a bit more complex, but worth understanding. The non-uniformities arise in thermal imaging due to variations in sensitivity among pixels. If uncorrected, the target image can suffer striations, ghost images, flecks, and similar artifacts.
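To make the idea concrete, here is a minimal sketch of the textbook two-point NUC, in which a dark frame and a uniformly illuminated flat frame captured at calibration time yield a per-pixel offset and gain. This is a generic illustration, not Allied Vision’s specific implementation:

```python
import numpy as np

def two_point_nuc(raw, dark, flat):
    """Two-point non-uniformity correction.

    dark: frame captured with no illumination (per-pixel offset).
    flat: frame captured under uniform illumination (per-pixel response).
    Pixels are rescaled so a uniform scene yields a uniform image.
    """
    gain = flat.astype(float) - dark   # per-pixel responsivity
    gain[gain == 0] = np.nan           # guard unresponsive pixels; DPC handles them
    scale = np.nanmean(gain)           # normalize to the average responsivity
    return (raw.astype(float) - dark) * scale / gain
```

Applying this to the flat frame itself should return a perfectly uniform field, which is exactly the artifact-free gray block the blog holds up as the goal.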

Factory configuration of each camera, before finalizing testing and shipping, adapts for the nuanced differences among individual sensors. Correction tables are created and stored onboard the camera, so that the user receives a camera that already compensates for the variations.


In reality it’s a bit more complicated

In fact, defect pixels aren’t always simply hot or dead: they may appear only at certain operating points (exposure duration, temperature, gain, bit depth, or combinations thereof).

Likewise for non-uniformity characteristics.

So the factory configuration mentioned above, while satisfactory for many applications, is a one-size-fits-all compromise, constrained by the tools then available to the camera manufacturer and the price point the market would accept. Just as with t-shirts and socks, one size doesn’t really fit every need.

Dynamic Operating Point Optimization (DOPO)

Allied Vision has introduced dynamic operating point optimization (DOPO) to further automate SWIR cameras’ capacity to adapt to changes brought about by exposure time, temperature, gain, and bit depth. Let’s examine the graphic below to understand DOPO and the added value it delivers.

First consider the Y-axis, “Image Quality”. Looking at the flat-field gray block, clearly one would prefer the artifact-free characteristics of the upper region.

Also note the X-axis, “Sensor Temperature / Exposure Time”, for an uncooled thermal sensor. (Note that some thermal cameras do have sensor cooling options, but that’s a topic for another blog.) See the black line “No correction” sloping from upper left to lower right, and how the number of image artifacts grows markedly with exposure time. Without correction the defect pixels and sensor non-uniformities are very apparent.

Flat-field image quality with and without corrections – Courtesy Allied Vision – a TKH Vision brand

Now look at the gray lines labeled “NUC+DPC”. For a factory calibrated camera optimized for a sensor at 30 degrees Celsius and a 25ms exposure, the NUC and DPC corrections indeed optimize the image effectively – right at that particular operating point. And it’s “not bad” for exposure times of 20ms or 15ms to the left, or 30ms or 35ms to the right. But the corrections are less effective the further one gets away from that calibration point.

Finally, let’s look at the zig-zag red lines labeled “DOPO”. Instead of the “one size best-guess” factory calibration represented by the grey lines, a DOPO-equipped camera is factory calibrated with up to 600 correction maps, varying each of exposure time, temperature, gain, and bit depth across a range of steps and building maps that represent all the stepwise permutations.

Takeaway: DOPO provides a set of correction tables, not just one

So with DOPO providing a set of correction tables, the camera can automatically apply the best-fit correction for whatever operating point is in use. That’s the key point of DOPO. Unlike a single-fit correction table, with so many calibrated corrections under DOPO, the best fit is never far off.
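Conceptually, that selection step amounts to a nearest-neighbor lookup across the calibrated operating points. The sketch below is purely illustrative – the function name, the two-axis key, and the step weighting are our assumptions, not the camera’s firmware:

```python
def select_correction_map(maps, exposure_ms, temp_c):
    """Pick the calibrated correction map nearest the current operating point.

    maps: dict keyed by (exposure_ms, temp_c) calibration points,
          e.g. built from the stepwise permutations described above.
    """
    def distance(key):
        e, t = key
        # Normalize each axis so a 5 ms step and a 5 deg C step weigh equally
        return abs(e - exposure_ms) / 5.0 + abs(t - temp_c) / 5.0
    best_key = min(maps, key=distance)
    return maps[best_key]
```

With hundreds of calibration points, the distance to the nearest one stays small, which is why the red DOPO curve in the graphic hugs the top of the image-quality axis.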

contact us
Give us some brief idea of your application and we will contact you to discuss camera options.

Thermal imaging with SWIR cameras – plenty of choices

There are a number of options as one selects a SWIR camera. Is your choice driven mostly by performance under extreme conditions? Size? Cost? A combination of these?

Call us at 978-474-0044. We can guide you to a best-fit solution, according to your requirements.

We might recommend a DOPO equipped camera, such as one of the fxo series SWIR cameras:

DOPO equipped SWIR cameras – Courtesy SVS Vistek / Allied Vision – a TKH Vision brand

Or you might be best-served with a Goldeye camera, in cooled or uncooled models:

Goldeye available in uncooled and cooled models – Courtesy Allied Vision – a TKH Vision brand

Or an Alvium compact camera, whether housed or modular (for embedded designs), in USB / MIPI CSI-2 or GigE interfaces.

Alvium cameras – Courtesy Allied Vision – a TKH Vision brand

The key message of this blog is to introduce Dynamic Operating Point Optimization – DOPO – as a set of factory calibration tables and the camera’s ability to switch amongst them. An equally important takeaway is that you may or may not need DOPO for a particular thermal imaging application. There are many SWIR options, in cameras and lenses, and we can help you choose.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

Drone detection event-based cameras from Prophesee

Event-based cameras outperform frame-based approaches for many applications. We provided insight to the event-based paradigm in a recent blog. Or download our whitepaper on event-based sensing.

In this piece, we focus on drone detection, a task at which event-based imaging excels. For full impact, please view the following in full-screen mode using the “four corners” button.

Find the drone – Event-based approach beats frame-based method – Courtesy scientific paper attribution

As discussed in the event-based paradigm introductions linked above, frame-based approaches struggle to track a drone moving through a visually complex environment (above left), having to parse drone shapes and orientations, occlusions, etc., even when most of the imagery is static.

Meanwhile, as seen in the event-based video (above right), the new paradigm only looks for “what’s changed”, which amounts to showing “what’s moving?”. For drone detection, as well as other perimeter intrusion applications, vibration monitoring, etc., that’s ideal.

1stVision represents Prophesee’s event-based sensors and cameras, built on neuromorphic engineering principles inspired by human vision. Call us at 978-474-0044 to learn more or request a quote.

Contact us for a quote

For some applications, one only needs an event-based sensor – problem solved. For other applications, one might combine different imaging approaches. Consider the juxtaposition of three methods shown below:

Visible, polarization, and event-based approaches – Courtesy Prophesee and EOPTIC

The multimodal approach above is utilized in a proprietary system developed by EOPTIC, which integrates visible, polarization, and event-based sensors. Certain applications demand the best of speed, detail, and situational awareness at once – to boost automated confidence and accuracy, for example.

Here’s another side-by-side video on drone detection and tracking:

Visible vs. (hybrid) event-based imaging – Courtesy Prophesee and NEUROBUS

The above-left video uses conventional frame-based imaging, where it’s pretty hard to see the drone until it rises above the trees. But the event-based approach used by Prophesee’s customer Neurobus, together with their own neuromorphic technologies, identifies the drone event amidst the trees – a level of early warning that could make all the difference.

By the numbers:

Enough with the videos – looks compelling but can you quantify Prophesee event-based sensors for me please?

Quantifying key attributes – Courtesy Prophesee

Ready to evaluate event-based vision in your application?

1stVision offers Prophesee Metavision® evaluation kits designed to help engineers and developers quickly assess event-based sensing for high-speed motion detection, drone tracking, robotics, and other dynamic vision applications. Each kit provides everything needed to get started with Prophesee’s Metavision technology, including hardware, software tools, and technical support from our experienced machine vision team. Request a quote to discuss kit availability, configuration options, and how we can help accelerate your proof-of-concept or system deployment.

Technical note: The GenX320 Starter kit for Raspberry Pi 5 utilizes Prophesee’s GenX320 sensor, expressly designed for event-based sensing.

Kit or camera? You choose.

The kits described and linked above are ideal for those pursuing embedded designs. If you prefer a full camera – still very compact at less than 5cm per side – and you want a USB3 interface – see IDS uEye event-based cameras. You’ve got options.


AVT updates bring new Alvium features

Here are some cool new features – at least they’re cool if you already use AVT Alvium cameras and want to get even more out of them. And if you don’t, these features may persuade you to give Alvium a look for your next application.

We call out five specific new features (or feature sets):

  • Liquid lens autofocus controls – great for logistics applications: fast focus change
  • Power saving standby mode – heat minimization for embedded designs
  • Improved recovery from over-temperature power savings mode – automated recovery
  • More GenICam features for V4L2 Video for Linux – great to have Linux options
  • Additional registers and controls – if some DRA is good, more is better

… especially for the Alvium camera families, including USB3 and MIPI CSI-2, and 1 GigE and 5 GigE models.

Alvium USB3, MIPI CSI-2, 1 GigE and 5 GigE compact and powerful cameras – Courtesy AVT – a TKH brand

Call us at 978-474-0044 to speak to one of our experienced sales engineers. Or tell us what you’d like to know more about – whether concepts, features, or pricing – and we’ll get back to you:

Click to contact
Give us some brief idea of your application and we will contact you to discuss camera options.

Liquid Lens Autofocus Controls

If you’re new to liquid lenses, see our prior blog for examples and an overview. Liquid lenses can change focus within milliseconds, far faster than mechanically focused lenses.

Below you can see the hardware configuration that the new autofocus controls can utilize.

Courtesy AVT – a TKH Vision brand

So AVT provides the lens-controlling capability on the camera side, and you can optionally connect a liquid lens if that would help your application. Naturally, AVT Alvium cameras may also be used with conventional lenses, including S-mount, CS-mount, C-mount, closed, open, and bare-board variants – the range of options varies slightly by model. Please review when ordering, or confer with us, per the adage “measure twice, cut once”.

Power saving standby mode

There are at least two reasons why you might be interested in power savings. The layman’s view might be to preserve the environment or save on energy costs, but compact sensors and cameras don’t use much power, often only about 1 watt. The primary motivator, for embedded systems designers, is to reduce heat during periods when no imaging is required. That in turn enhances image quality and prolongs system life.

Power saving mode enabled vs. disabled – Courtesy AVT – a TKH Vision brand

Improved Recovery from over temperature mode

When the camera goes into over-temperature mode, it automatically cuts power draw as a self-protection mechanism. In firmware V13 this required a camera reboot to resume imaging; in V15 the camera resumes normal function without requiring a reboot.

Improved recovery from over temperature mode – Courtesy AVT – a TKH Vision brand

(More) GenICam features for V4L2 Video for Linux

If you favor video for Linux (V4L2) drivers and APIs for your development and production controls, below see GenICam features now available to you.

Courtesy AVT – a TKH Vision brand

Additional Registers and Controls

In addition to all the registers previously available on Alvium’s MIPI CSI-2 cameras, below are a number of new registers whose names suggest their meaning and use. One may control each feature through the GenICam APIs, V4L2 Video for Linux, or Direct Register Access (DRA) memory addressing – whichever method you prefer.

New registers available for DRA – Courtesy AVT – a TKH Vision brand

Manuals for all AVT cameras and SDKs are downloadable, of course. Drill in on any feature or attribute of interest.


JAI prism-based 5 GigE cameras for superior color

If monochrome sensors and methods aren’t enough for your application, a machine vision color camera may be needed. And if color is needed, is “good enough” from a single sensor with a Bayer filter all you need? Or do you need the precision of a prism-based 3 sensor camera, one for each of R, G, and B? See our whitepaper Considerations for Color Machine Vision Cameras.

Prism-based 3-sensor imaging vs. interpolated Bayer mosaic sensors

Bryce Bayer, the engineer at Eastman Kodak whose name is associated with his Bayer filter innovation, created a very compact and efficient way to layer a color filter atop a monochrome sensor. The vast majority of today’s color cameras – in both machine vision and consumer imaging – utilize precisely such a color filter mechanism to interpolate color. When the resolution is sufficiently fine, the rendered image is typically good enough for many applications.

But “good enough” for some isn’t the same as good enough for all

Interpolation is a form of estimation. A Bayer filter measures only one of Red, Green, and Blue at each pixel, so the missing values between the “true” measurements are estimated – typically as the average of the nearest measured values. The in-between values are computed, and may or may not match the true color present at the source.
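For intuition, here is the simplest possible version of that interpolation: estimating the missing green value at a red site of an RGGB mosaic as the average of its four measured green neighbors. The helper below is a hypothetical illustration of bilinear demosaicing, not any camera’s actual pipeline:

```python
def green_at_red(mosaic, r, c):
    """Estimate the missing green value at a red site of an RGGB mosaic.

    In the RGGB pattern, the pixels directly above, below, left, and
    right of a red site all carry green samples; averaging them is the
    simplest bilinear demosaic. The result is an estimate, not a
    measurement, so it can miss fine color detail such as a sharp edge.
    """
    up, down = mosaic[r - 1][c], mosaic[r + 1][c]
    left, right = mosaic[r][c - 1], mosaic[r][c + 1]
    return (up + down + left + right) / 4.0
```

If the true green value at that site happens to differ from its neighbors’ average – at a fine color edge, say – the estimate is simply wrong, which is exactly the accuracy gap a prism-based 3-sensor camera avoids by measuring all three channels at every pixel.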

For certain machine vision, industrial imaging, and medical applications, maximum color accuracy is essential.

What’s best for my application?

Read on, for more detail. Or give us a call at 978-474-0044. Or tell us about your requirements and we’ll contact you.

Contact us

For certain applications, color accuracy and fidelity is essential

Applications note provides further information – Courtesy JAI
See corresponding applications note for details – Courtesy JAI
See applications note for more – Courtesy JAI
All four images and associated texts above – Courtesy JAI

JAI adds 3 new 5.1 Mpix cameras to its Apex Series

5.1 Megapixel prism-based 3 sensor camera – Courtesy JAI

Previously, JAI’s Apex prism-based camera series included 1.6 Mpix and 3.2 Mpix models. Three new models join the series, at 5.1 Megapixels each. The new members all use the same Sony IMX548, one of the Pregius S sensors.

If the new 5.1 Mpix models all use the same sensors, why are there three models? Because there are three interface options, depending on your need for speed.

  • 5 GigE model: 32 fps
  • CoaXPress model: 75 fps
  • Camera Link model: 55 fps

Numerous features and benefits

There are many features designed into the Apex series cameras, including binning, single and multi-region ROI, chromatic aberration correction, and automatic level control. Download a manual for details. Or call us at 978-474-0044.

Feature highlight: Per-channel exposure control

Since the rationale for a 3-sensor prism camera is color performance, the per-channel exposure control feature helps achieve that goal. By adjusting the exposure time for each channel separately, the camera increases signal without amplifying noise, as raising analog gain would.
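As a back-of-the-envelope illustration – the function and its parameters are hypothetical, not JAI’s API – if a neutral gray reference yields unequal channel means, each channel’s exposure time can be scaled to equalize them, since signal grows roughly linearly with exposure within the sensor’s linear range:

```python
def balance_exposures(ch_means, base_exposure_us, target=None):
    """Derive per-channel exposure times that equalize R, G, and B levels.

    ch_means: mean pixel values per channel measured from a neutral
              reference, e.g. {"R": 90.0, "G": 120.0, "B": 60.0}.
    base_exposure_us: the exposure time used for that measurement.
    Lengthening exposure gathers more photons, raising signal without
    the read-noise amplification that analog gain would introduce.
    """
    target = target if target is not None else max(ch_means.values())
    # Signal scales ~linearly with exposure in the sensor's linear range
    return {ch: base_exposure_us * target / m for ch, m in ch_means.items()}
```

A weak blue channel, for example, simply gets a proportionally longer exposure rather than a noisier gain boost, which is the benefit the feature delivers.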

Per-channel exposure control – Courtesy JAI

Call us at 978-474-0044 to learn more about JAI Apex cameras. Tell us about your application goals and requirements, and we’ll help you determine the best camera, lens, lighting, filters, and software. It’s what we do.
