LWIR – Long Wave Infrared Imaging – Problems Solved

What application challenges can LWIR solve?

LWIR is the acronym, and it reminds us where on the electromagnetic spectrum we’re focused – wavelengths around 8 – 14 micrometers (8,000 – 14,000 nm). More descriptive is the term “thermal imaging”, which tells us we’re sensing temperatures not with a contact thermometer, but with non-contact sensors detecting emitted or radiated heat.

Remember COVID? Pre-screening for fever. Courtesy Teledyne DALSA.

Security, medical, fire detection, and environmental monitoring are common applications. More on applications further below. But first…

How does an LWIR camera work?

Most readers probably come to thermal imaging with some prior knowledge of, or experience in, visible imaging. Forget all that! Well, not all of it.

For visible imaging using CMOS sensors, photons enter pixel wells and generate a voltage. The array of adjacent pixels is read out as a digital representation of the scene passed through the lens and onto the sensor, according to the optics of the lens and the resolution of the sensor. Thermal camera sensors work differently!

Thermal cameras use a sensor called a microbolometer. The helpful part of the analogy to a CMOS sensor is that we still have an array of pixels, which determines the resolution of the camera and yields a 2D digital representation of the scene’s thermal characteristics.

But unlike a CMOS sensor, whose pixels react to photons, a microbolometer’s upper pixel surface, the detector, is composed of an IR-absorbing material such as vanadium oxide. Incident IR radiation heats the detector, and the intensity of exposure in turn changes its electrical resistance. That change in electrical resistance is measured and passed by an electrode to a silicon substrate and readout integrated circuit.

Vanadium oxide (VOx) pixel structure – Courtesy Teledyne DALSA

Just as with visible imaging, for machine vision it’s the digital representation of the scene that matters, as it’s algorithms that “consume” the image in order to take some action: danger vs. safe; good part vs. bad part; steer left, straight, or right – or brake; etc. Generating a pseudo-image for human consumption may well be unnecessary – or at least secondary.
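Here’s a minimal sketch of what that algorithmic “consumption” might look like in Python. The frame source, the linear counts-to-°C scaling, and the 40°C alarm threshold are all illustrative assumptions, not Calibir specifics:

```python
import numpy as np

def assess_frame(raw_counts: np.ndarray) -> str:
    """Classify a thermal frame as 'alarm' or 'ok'.

    Assumes 16-bit counts and a hypothetical linear mapping to degrees
    Celsius; real cameras provide calibrated radiometric conversions,
    so treat these constants as placeholders.
    """
    deg_c = raw_counts.astype(np.float32) * 0.01 - 50.0  # placeholder scale/offset
    hot_fraction = np.mean(deg_c > 40.0)                 # share of pixels above 40 degC
    return "alarm" if hot_fraction > 0.001 else "ok"

frame = np.random.randint(0, 65535, (480, 640), dtype=np.uint16)  # stand-in frame
print(assess_frame(frame))
```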

Applications in LWIR

Applications include but are not limited to:

  • Security e.g. intrusion detection
  • Health screening e.g. sensing who has a fever
  • Fire detection – detect heat from early combustion before smoke is detectable
  • Building heat loss – for energy management and insulation planning
  • Equipment monitoring e.g. heat signature may reveal worn bearings or need for lubrication
  • Food safety – monitor whether required cooking temperatures attained before serving

You get the idea – if the thing you care about generates a heat signature distinct from the other things around it, thermal imaging may be just the thing.

What if I wanted to buy an LWIR camera?

We could help you with that. Does your application’s thermal range lie between -25°C and +125°C? Would a frame rate of 30fps do the job? Does a GigE Vision interface appeal?

It’s likely we’d guide you to Teledyne DALSA’s Calibir GX cameras.

Calibir GX front and rear views – Courtesy Teledyne DALSA

Precision of Teledyne DALSA Calibir GX cameras

Per factory calibration, one already gets precision to +/- 3°C. For more precision, use a black body radiator and manage your own calibration to +/- 0.5°C!
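If you do manage your own calibration, the arithmetic is conceptually simple: view a black body at two known setpoints and solve for a linear gain/offset correction. A sketch, with made-up measurement values:

```python
# Two-point black body calibration sketch. The setpoints and camera
# readings below are illustrative values, not real measurements.
t_ref = (30.0, 100.0)   # black body setpoints, degrees C
t_cam = (28.6, 97.1)    # temperatures the camera reported (hypothetical)

gain = (t_ref[1] - t_ref[0]) / (t_cam[1] - t_cam[0])
offset = t_ref[0] - gain * t_cam[0]

def correct(reading_c: float) -> float:
    """Apply the linear correction to a camera temperature reading."""
    return gain * reading_c + offset

print(correct(97.1))  # ~100.0 after correction
```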

Thresholding with LUT

Sometimes one wants to emphasize only regions meeting certain criteria – in this case heat-based criteria. Consider the following image:

Everything between 38 and 41°C shown as red – Courtesy Teledyne DALSA

Teledyne DALSA Calibir GX control software lets users define their own lookup tables (LUTs). One may optionally show regions meeting certain temperature criteria in color, leaving the rest of the image in monochrome.
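Conceptually, such a LUT maps each incoming pixel value to an output color. The Calibir GX applies its LUT to raw sensor counts in-camera; purely for illustration, here’s the same idea in numpy, assuming a frame already converted to °C:

```python
import numpy as np

def highlight_band(deg_c: np.ndarray, lo: float = 38.0, hi: float = 41.0) -> np.ndarray:
    """Render a temperature map as grayscale, painting [lo, hi] degC red.

    A conceptual stand-in for the in-camera LUT described above.
    """
    span = np.ptp(deg_c) or 1.0  # avoid divide-by-zero on a flat frame
    gray = np.clip((deg_c - deg_c.min()) / span * 255, 0, 255).astype(np.uint8)
    rgb = np.stack([gray, gray, gray], axis=-1)
    rgb[(deg_c >= lo) & (deg_c <= hi)] = (255, 0, 0)  # red where the criterion is met
    return rgb
```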

Dynamic range

The “expressive power” of a camera is characterized by dynamic range. Just as the singers Enrico Caruso (opera) and Freddie Mercury (rock) were lauded for their range as well as their precision, in imaging we value dynamic range. Consider the image below of an electric heater element:

“Them” (left) vs. us (right) – Courtesy Teledyne DALSA

The left side of the image is from a 3rd party thermal imager – it’s pretty crude, essentially showing just hot vs. not-hot, with no continuum. The right side was obtained with a Teledyne DALSA Calibir GX – there we see very hot, hot, warm, slightly warm, and cool – a helpfully nuanced range. Enabled by a 21-bit ADC, the Teledyne DALSA Calibir GX is capable of a dynamic range spanning 1500°C.
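To put that in perspective, a quick back-of-the-envelope calculation – idealized, since a real response curve won’t spread 21 bits perfectly linearly across the scene range:

```python
counts = 2 ** 21        # 21-bit ADC -> 2,097,152 discrete levels
span_c = 1500.0         # scene temperature range, degrees C
print(span_c / counts)  # ~0.0007 degC per count, in this idealized model
```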

In this short blog we’ve called out just a few of the available features – call us at 978-474-0044 to tell us more about your application goals, and we can guide you to whichever hardware and software capabilities may be most helpful for you.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you! We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics. What would you like to hear about? Drop a line to info@1stvision.com with what topics you’d like to know more about.

Artificial intelligence in machine vision – today

This is not some blue-sky puff piece about how AI may one day be better / faster / cheaper at doing almost anything, at least in certain domains of expertise. This is about how AI is already better / faster / cheaper at doing certain things in the field of machine vision – today.

Classification of screw threads via AI – Courtesy Teledyne DALSA

Conventional machine vision

There are classical machine vision tools and methods, like edge detection, for which AI has nothing new to add. If the edge detection algorithm is working fine as programmed in your vision software, who needs AI? If it ain’t broke, don’t fix it. Presence / absence detection, 3D height calculation, and many other imaging techniques work just fine without AI. Fair enough.

From image processing to image recognition

As any branch of human activity evolves, the fundamental building blocks serve as foundations for higher-order operations that bring more value. Civil engineers build bridges, confident the underlying physics and materials science lets them choose among arch, suspension, cantilever, or cable-stayed designs.

So too with machine vision. As the field matures, value-added applications can be created by moving up the chunking level. The low-level tools still include edge-detection, for example, but we’d like to create application-level capabilities that solve problems without us having to tediously program up from the feature-detection level.

Traditional methods (left) vs. AI classification (right) – Courtesy Teledyne DALSA
Traditional Machine Vision Tools | AI Classification Algorithm
– Can’t discern surface damage vs. water droplets | – Ignores water droplets
– Are challenged by shading and perspective changes | – Invariant to surface changes and perspective
For the application images above, AI works better than traditional methods – Courtesy Teledyne DALSA

Briefly in the human cognition realm

Let’s tee this up with a scenario from human image recognition. Suppose you are driving your car along a quiet residential street. Up ahead you see a child run from a yard, across the sidewalk, and into the street.

While it may well be that the rods and cones in your retina, your visual cortex, and your brain used edge detection to process contrasting image segments to arrive at “biped mammal” – a child! – and then went on to evaluating risk and hitting the brakes, that isn’t how we usually talk about defensive driving. We just think in terms of accident avoidance, situational awareness, and braking/swerving – at a very high level.

Applications that behave intelligently

That’s how we increasingly would like our imaging applications to behave – intelligently and at a high level. We’re not claiming it’s “human equivalent” intelligence, or that the AI method is the same as the human method. All we’re saying is that AI, when well-managed and tested, has become a branch of engineering that can deliver effective results.

So as autonomous vehicles come to market, of course we want to be sure sufficient testing and certification is completed, as a matter of safety. But whether the safe-driving outcome is based on “AI” or “vision engineering”, or the melding of the two, what matters is the continuous sequence of system outputs like: “reduce following distance”, “swerve left 30 degrees”, and “brake hard”.

Neural Networks

One branch of AI, neural networks, has proven effective in many “recognition” and categorization applications. Is the thing being imaged an example of what we’re looking for, or can it be dismissed? If it is the sort of thing we’re looking for, is it of sub-type x, y, or z? “Good” item – retain. “Bad” item – reject. You get the idea.

From training to inference

With neural networks, instead of programming algorithms at a granular feature analysis level, one trains the network. Training may include showing “good” vs. “bad” images – without having to articulate what makes them good or bad – and letting the network infer the essential characteristics. In fact it’s sometimes possible to train only with “good” examples – in which case anomaly detection flags production images that deviate from the trained pool of good ones.

Deep Neural Network (DNN) example – Courtesy Teledyne DALSA
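Astrocyte itself is driven through its GUI rather than code, but the train-on-good-images idea is easy to illustrate generically. A sketch (not Astrocyte’s internals) using PCA reconstruction error as an anomaly score, with random arrays standing in for flattened images:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
good = rng.normal(0.5, 0.05, (200, 32 * 32))  # stand-ins for flattened "good" images

pca = PCA(n_components=16).fit(good)          # learn the subspace of "good"

def anomaly_score(img: np.ndarray) -> float:
    """Reconstruction error: large means the image deviates from the good pool."""
    flat = img.reshape(1, -1)
    recon = pca.inverse_transform(pca.transform(flat))
    return float(np.mean((flat - recon) ** 2))

threshold = np.percentile([anomaly_score(g) for g in good], 99)
test = rng.normal(0.9, 0.2, 32 * 32)          # an out-of-family "production" image
print(anomaly_score(test) > threshold)        # True -> flag as anomalous
```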

Enough theory – what products actually do this?

Teledyne DALSA Astrocyte software creates a deep neural network to perform a desired task. More accurately – Astrocyte provides a graphical user interface (GUI) and a neural network framework, such that an application-specific neural network can be developed by training it on sample images. With a suitable collection of images, Teledyne DALSA Astrocyte can create an effective AI model in under 10 minutes!

Gather images, Train the network, Deploy – Courtesy Teledyne DALSA

Mix and match tools

In the diagram above, we show an “all DALSA” tools view, for those who may already have expertise in either the Sapera or Sherlock SDKs. But one can mix and match. Images may alternatively be acquired with third-party tools – paid or open source. And one may not need rules-based processing beyond the neural network. Astrocyte builds the neural network at the heart of the application.
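As a hypothetical example of such mixing and matching – acquisition via open-source OpenCV feeding a trained network in the portable ONNX format. The file name “model.onnx”, the input tensor name, and the 224x224 input size are assumptions for illustration:

```python
import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")  # hypothetical exported network

cap = cv2.VideoCapture(0)                  # any third-party frame source
ok, frame = cap.read()
if ok:
    x = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    x = x.transpose(2, 0, 1)[None]         # HWC -> NCHW, batch of one
    scores = sess.run(None, {"input": x})[0]
    print("predicted class:", int(np.argmax(scores)))
cap.release()
```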


User-friendly AI

The key value proposition with Teledyne DALSA Astrocyte is that it’s user-friendly AI. The GUI used to configure the training and to validate the model requires no programming. And one doesn’t need special training in AI. Sure, it’s worth reading about the deep learning architectures supported. They include: Classification, Anomaly Detection, Object Detection, and Segmentation. And you’ll want to understand how the training and validation work. It’s powerful – it’s built by Teledyne DALSA’s software engineers standing on the shoulders of neural network researchers – but you don’t have to be a rocket scientist to add value in your field of work.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution! We’re big enough to carry the best cameras, and small enough to care about every image.


Kowa FC24M C-mount lens series

With 9 members in the Kowa FC24M lens series, focal lengths range from 6.5mm through 100mm. Ideal for 1.1″ sensors like the Sony IMX183, IMX530/540, IMX253, and IMX304, these C-mount lenses cover any sensor up to 14.1mm x 10.6mm with no vignetting. Their design is optimized for sensors with pixel sizes as small as 2.5µm – but of course they work great on larger pixels as well.

Kowa FC24M C-mount lenses – Courtesy Kowa
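Those specifications are self-consistent, as a quick arithmetic check shows:

```python
import math

w_mm, h_mm = 14.1, 10.6                         # max sensor dimensions covered
print(math.hypot(w_mm, h_mm))                   # ~17.6 mm diagonal: the 1.1" format class
pix_mm = 2.5 / 1000.0                           # 2.5 micron pixels
print((w_mm / pix_mm) * (h_mm / pix_mm) / 1e6)  # ~23.9 megapixels: the "24M" rating
```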

Lens selection

Machine vision veterans know that lens selection ranks right up there with camera/sensor choice, and lighting, as determinants in application success. For an introduction or refresher, see our knowledge base Guide to Key Considerations in Machine Vision Lens Selection.


Noteworthy features

Particularly compelling across the Kowa FC24M lens series is the floating mechanism system. Kowa’s longer name for this is the “close distance aberration compensation mechanism.” It delivers stable optical performance at various working distances: internal lens groups move independently of each other, maintaining alignment better than traditional lens designs.

Kowa FC24M lenses render sharp images with minimal distortion – Courtesy Kowa

Listing all the key features together:

  • Floating mechanism system (described above)
  • Wide working range… focusing as close as 15 cm MOD (minimum object distance)
  • Durable construction … ideal for industrial applications
  • Wide-band multi-coating – minimizes flare and ghosting from VIS through NIR
High resolution down to pixels as small as 2.5um – Courtesy Kowa

Video overview shows applications

Applications include manufacturing, medical, food processing, and more. View the short one-minute video:

Kowa FC24M key features and example applications – Courtesy Kowa

What’s in a family name?

Let’s unpack the Kowa FC24M lens series name:

F is for fixed. With focal lengths at 9 step sizes from 6.5mm – 100mm, lens design is kept simple and pricing is correspondingly competitive.

C is for C-mount. It’s one of the most popular camera/lens mounts in machine vision, with a lot of camera manufacturers offering diverse sensors designed into C-mount housings.

24M is for 24 Megapixels. Not so long ago it was cost prohibitive to consider sensors larger than 20M. But as with most things in the field of electronics, the price/performance ratio keeps moving in the user’s favor. Many applications benefit from sensors of this size.

And the model names?

Model names include LM6FC24M, LM8FC24M, …, LM100FC24M. The focal length is specified by the digit(s) just before the family name, e.g. the LM8FC24M has a focal length of 8mm. In fact that particular model is technically 8.5mm, but per industry convention one rounds or truncates to common de facto sizes.

LM8FC24M 8.5mm focal length – Courtesy Kowa
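That convention makes the nominal focal length easy to extract from a model name – a throwaway sketch of the pattern just described:

```python
import re

def focal_length_mm(model: str) -> int:
    """Pull the nominal focal length (mm) out of an FC24M model name."""
    m = re.fullmatch(r"LM(\d+)FC24M", model)
    if not m:
        raise ValueError(f"not an FC24M model name: {model}")
    return int(m.group(1))

print(focal_length_mm("LM8FC24M"))  # 8 (nominal; the actual optic is 8.5mm)
```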

See the full brochure for the Kowa FC24M lens series, or call us at 978-474-0044.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution! We’re big enough to carry the best cameras, and small enough to care about every image.


Machine vision problems solved with SWIR lighting

Some problems best solved outside the visible spectrum

Most of us think about vision with a human bias, since most of us are normally sighted with color stereo vision. We perceive distance, hues, shading, and intensity, for materials that emit or reflect light in the wavelengths 380 – 750 nm. Many machine vision problems can also be solved using monochrome or color light and sensors in the visible spectrum.

Human visible light – marked VIS – is just a small portion of what sensors can detect – Courtesy Edmund Optics

Many applications are best solved, or even only solved, in wavelengths that we cannot see with our own eyes. There are sensors that react to wavelengths in these other parts of the spectrum. Particularly interesting are short wave infrared (SWIR) and ultraviolet (UV). In this blog we focus on SWIR, with wavelengths in the range 0.9 – 1.7µm.

Examples in SWIR space

The same apple with visible vs. SWIR lighting and sensors – Courtesy Effilux

Food processing and agricultural applications are possible with SWIR. Consider the above images, where the visible image shows what appears to be a ripe apple in good condition. With SWIR imaging, a significant bruise is visible – as SWIR detects higher densities of water, which render as black or dark grey. Supplier yields determine profits, losses, and reputations. Apple suppliers benefit by automated sorting of apples that will travel to grocery shelves vs. lightly bruised fruit that can be profitably juiced or sauced.
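The sorting decision itself can be very simple. A sketch, assuming an 8-bit grayscale SWIR image cropped to one apple – the darkness threshold and 2% area criterion are illustrative placeholders, not tuned values:

```python
import numpy as np

def route_apple(swir: np.ndarray) -> str:
    """Decide shelf vs. juicer from a grayscale SWIR image of one apple.

    Bruises (high water density) render dark in SWIR. The 60/255
    darkness threshold and 2% area criterion are placeholders.
    """
    dark_fraction = np.mean(swir < 60)
    return "juicer" if dark_fraction > 0.02 else "shelf"
```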

Even clear fluids in opaque bottles render dark in SWIR light – Courtesy Effilux

Whether controlling the filling apparatus or quality-controlling the nominally filled bottles, SWIR light and sensors can see through glass or opaque plastic bottles, rendering fluids dark while air renders white. The detection side of the application is solved!
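Fill level follows from the same dark-fluid/light-air rendering. A sketch, assuming the frame is cropped to the bottle and using an illustrative brightness threshold:

```python
import numpy as np

def fill_fraction(swir: np.ndarray) -> float:
    """Estimate a bottle's fill level from a grayscale SWIR image.

    Assumes fluid renders dark and headspace light; the 128 threshold
    is an illustrative placeholder.
    """
    row_means = swir.mean(axis=1)   # average brightness of each image row
    dark_rows = row_means < 128     # rows dominated by fluid
    return float(dark_rows.mean())  # fraction of bottle height holding fluid
```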

Hyperspectral imaging

Yet another SWIR application is hyperspectral imaging. By identifying the spectral signature of every pixel in a scene, we can use light to discern the unique profile of substances. This in turn can identify the substance and permit object identification or process detection. Consider also multi-spectral imaging, an efficient sub-mode of hyperspectral imaging that only looks for certain bands sufficient to discern “all that’s needed”.

Multispectral and hyperspectral imaging – Courtesy Allied Vision Technologies
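One classic way to compare a pixel’s spectral signature against a reference substance is the spectral angle: the smaller the angle between the two spectra, the more alike they are. A minimal sketch – the four-band signatures and the 0.1-radian threshold are arbitrary illustrations:

```python
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Angle in radians between two spectra; smaller means more similar."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

ref = np.array([0.20, 0.50, 0.90, 0.40])   # reference signature across 4 SWIR bands
pix = np.array([0.22, 0.48, 0.95, 0.38])   # one pixel's measured spectrum
print(spectral_angle(pix, ref) < 0.1)      # True -> classify pixel as the substance
```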

How to do SWIR imaging

The SWIR images shown above are pseudo-images, where pixel values in the SWIR spectrum have been re-mapped into the visible spectrum along grey levels. But that’s just to help our understanding, as an automated machine vision application doesn’t need to show an image to a human operator.

In machine vision, an algorithm on the host PC interprets the pixel values to identify features and make actionable determinations, such as “move apple to juicer” or “continue filling bottle”.

Components for SWIR imaging

A SWIR application needs three component types: SWIR sensors and cameras, SWIR lighting, and SWIR lenses. For cameras and sensors, consider Allied Vision’s Goldeye series:

Goldeye SWIR cameras – Courtesy Allied Vision

Goldeye SWIR cameras are available as compact, rugged industrial models, or as advanced scientific versions. The former have optional thermoelectric cooling (TEC), while the latter are only available in cooled versions.


For SWIR lighting, consider Effilux bar and ring lights. They come in various wavelengths for both visible and SWIR applications. Contact us to discuss SWIR lighting options.

EFFI-FLEX bar light and EFFI-RING ring light – Courtesy Effilux

By emitting light in the SWIR range, directed to reflect off targets known to reveal features in the SWIR spectrum, one assembles the pieces necessary for a successful application.

Hyperspectral bar lights – Courtesy Effilux

And don’t forget the lens. One may also need a SWIR-specific lens, or a hybrid machine vision lens that passes both visible and SWIR wavelengths. Consider Computar VISWIR Lite Series Lenses or their VISWIR Hyper-APO Series Lenses. It’s beyond the scope of this short blog to go into SWIR lensing. Read our recent blog on Wide Band SWIR Lensing and Applications or speak with your lensing professional to be sure you get the right lens.

Takeaway

Whether SWIR or UV (more on that another time), the key point is that some machine vision problems are best solved outside the human-visible portions of the spectrum. While innovative users and manufacturers continue to push the boundaries, these areas are sufficiently mature that solutions can be created predictably. Think beyond the visible constraints!

Call us at 978-474-0044. Or follow the contact us link below to provide your information, and we’ll call you.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!