Machine vision software → Sapera Processing

Why read this article?

Generic reason: Compact overview of machine vision software categories and functionality.

Cost-driven reason: Discover that powerful software comes bundled at no cost to users of Teledyne DALSA cameras and frame grabbers. Not just a viewer and SDK – though of course those – but select image processing software too.


Software – build or buy?

Without software, machine vision is nowhere. The whole point of machine vision is to acquire an image and then process it with an algorithm that achieves something of value.

Whether it’s presence/absence detection, medical diagnostics, thermal imaging, autonomous navigation, pick and place, automated milling, or myriad other applications, the algorithm is expressed in software.

You might choose a powerful software library needing “just” parameterization by the user – or AI – or a software development kit (SDK) permitting nearly endless scope of programming innovation. Either way it’s the software that does the processing and delivers the results.

In this article, we survey build vs. buy arguments for several types of machine vision software. We make a case for Teledyne DALSA’s Sapera Software Suite – but it’s a useful read for anyone navigating machine vision software choices – wherever you choose to land.

Sapera Vision Software Suite – Courtesy Teledyne DALSA

Third party or vision library from same vendor?

Third party software

If you know and love some particular third party software, such as LabView, HALCON, MATLAB, or OpenCV, you may have developed code libraries and in-house expertise on which it makes sense to double-down. Even if there are development or run time licensing costs. Do the math on total cost of ownership.

Same vendor for camera and software

Unless the third party approach described above is your clear favorite, consider the benefits of one-stop shopping for your camera and your software. Benefits include:

  • License pricing: SDK and run-time license costs are structured to favor customers who source their cameras and software from the same provider.
  • Single-source simplicity: Since the hardware and software come from the same manufacturer, it just works. They’ve done all the compatibility validation in-house. And the feature name on the camera side is the same as the parameter name on the software side.
  • Technical support: When it all comes from one provider, if you have support questions there’s no finger pointing.

You – the customer/client – are the first party. It’s all about you. Let’s call the camera manufacturer the second party, since the camera and the sensor therein are at the heart of image acquisition. Should licensed software come from a third party, or from the camera manufacturer? It’s a good question.


Types/functions of machine vision software

While there are all-in-one and many-in-one packages, some software is modularized to fulfill certain functions, and may come free, bundled, discounted, open-source, or priced, according to market conditions and a developer’s business model. Before we get into commercial considerations, let’s briefly survey the functional side, including each of the following categories in turn:

  • Viewer / camera control
  • Acquisition control
  • Software development kit (SDK)
  • Machine vision library
  • AI training/learning as an alternative to programming

Point of view: Teledyne DALSA’s Sapera software packages by capability

Viewer / camera control – included in Sapera LT

When bringing a new camera online, after attaching the lens and cable, one initially needs to configure and view. Regardless of whether using GigE Vision, Camera Link, Camera Link HS, USB3 Vision, CoaXPress, or other standards, one must typically assign the camera a network address and set some camera parameters to establish communication.

A graphical user interface (GUI) viewer / camera control tool makes it easy to quickly get the camera up and running. The viewer capability streams live images so one can mount the camera and adjust aperture, focus, and imaging modes.

Every camera manufacturer and software provider offers such a tool. Teledyne DALSA calls theirs CamExpert, and it’s part of Sapera LT. It’s free for users of Teledyne DALSA 2D/3D cameras and frame grabbers.

CamExpert – Courtesy Teledyne DALSA

Acquisition control – included in Sapera LT

The next step up the chain is referred to as acquisition control. On the camera side this is about controlling the imaging modes and parameters to get the best possible image before passing it to the host PC. So one might select a color mode, whether to use HDR or not, gain controls, framerate or trigger settings, and so on.

On the communications side, one optimizes depending on whether there is a single camera on the bus or bandwidth is shared among several. Anyone offering acquisition control software must provide all these controls.

Controlling image acquisition with GUI tools – Courtesy Teledyne DALSA

Those with Sapera LT can utilize Teledyne DALSA’s patented TurboDrive, realizing speed gains of x1.5 to x3 under the GigE Vision protocol. This driver delivers added bandwidth without requiring special programming.
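For a feel of what such gains can mean in practice, here’s a quick back-of-envelope throughput estimate. The link rate and speedup figures below are illustrative assumptions, not measured Teledyne DALSA specifications.

```python
# Back-of-envelope GigE Vision throughput, with and without a link speedup.
# Figures here are illustrative assumptions, not measured data.

LINK_BYTES_PER_S = 115e6   # practical GigE payload, ~115 MB/s (assumption)

def max_fps(width, height, bytes_per_px=1, speedup=1.0):
    """Frames/s the link can carry; 'speedup' models a TurboDrive-style gain."""
    frame_bytes = width * height * bytes_per_px
    return LINK_BYTES_PER_S * speedup / frame_bytes

base = max_fps(2048, 1536)                   # 3 MP mono8 frames
boosted = max_fps(2048, 1536, speedup=1.5)   # conservative x1.5 gain
print(f"baseline: {base:.1f} fps, with x1.5 speedup: {boosted:.1f} fps")
```

For a 3 MP monochrome stream, even the conservative end of the x1.5 to x3 range moves the ceiling from the mid-30s to the mid-50s of frames per second.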

Software development kit (SDK) – included in Sapera LT

GUI viewers are great, but often one needs at least a degree of programming to fully integrate and control the acquisition process. Typically one uses a software development kit (SDK) for C++, C#, .NET, and/or Standard C. And one doesn’t have to start from scratch, as such SDKs almost always include programming examples and projects one may adapt and extend, to avoid re-inventing the wheel.

Teaser subset of code samples provided – Courtesy Teledyne DALSA
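To make the SDK idea concrete, here is the general shape of a grab-and-process loop. The class names below are hypothetical stand-ins, not the actual Sapera LT API – start from the SDK’s bundled examples for the real classes – but virtually every vision SDK example reduces to this pattern.

```python
# Shape of a typical SDK acquisition loop. 'MockCamera' is a hypothetical
# stand-in, not the actual Sapera LT API - consult the SDK's bundled
# examples for the real class and method names.

class MockCamera:
    """Minimal stand-in camera so the control flow below is runnable."""
    def __init__(self, frames):
        self._frames = list(frames)

    def grab(self):
        """Return the next frame, or None when the stream ends."""
        return self._frames.pop(0) if self._frames else None

def acquire(camera, process, max_frames=10):
    """The grab-process loop nearly every SDK example reduces to."""
    results = []
    for _ in range(max_frames):
        frame = camera.grab()             # blocking grab into a buffer
        if frame is None:                 # timeout / end of stream
            break
        results.append(process(frame))    # hand off to your algorithm
    return results

cam = MockCamera(frames=[[0, 255, 128], [10, 20, 30]])
print(acquire(cam, process=max))  # per-frame peak intensity: [255, 30]
```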

Sapera Vision Software allows royalty free run-time licenses for select image processing functions when combined with Teledyne DALSA hardware. If you’ve just got a few cameras, that may not be important to you. But if you are developing systems for sale to your own customers, this can bring substantial economies of scale.

Machine vision library

So you’ve got the image hitting the host PC just fine – now what? One needs to programmatically interpret the image. Unless you’ve thought up a totally new approach to image processing, there’s an excellent chance your application will need one or more of edge detection, bar code reading, blob analysis, flipping, rotation, cross-correlation, frame-averaging, calibration, or other standard methods.

A machine vision library is a toolbox of many tens of such functions pre-programmed and parameterized for your use. It allows you to marry your application-specific insights with proven machine vision processes, so that you can build out the value-add by standing on the shoulders of machine vision developers who provide you with a comprehensive toolbox.

No surprise – Teledyne DALSA has an offering in this space too. It’s called Sapera Processing. It includes all we’ve discussed above in terms of configuration and acquisition control. And it adds a suite of image processing tools, best understood across three categories:

  • Calibration – advanced configuration including compensation for geometric distortion
  • Image processing primitives – convolution functions, geometry functions, measurement, transforms, contour following, and more
  • Blob analysis – uses contrast to segment objects in a scene; determine centroid, length and area; min, max, and standard deviation; thresholding, and more
Just some of the free included image processing primitives – Courtesy Teledyne DALSA
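For a flavor of what’s inside such a toolbox, here is a minimal pure-Python sketch of blob analysis – threshold, connected components, then area and centroid per blob. A library like Sapera Processing provides optimized, validated versions of these operations; this sketch only illustrates the concept.

```python
# A minimal blob-analysis sketch: threshold, 4-connected components, then
# area and centroid per blob. Vision libraries ship optimized, validated
# versions of exactly these steps.

def blobs(img, thresh):
    """Return [(area, (row_centroid, col_centroid)), ...] per blob."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    out = []
    for r in range(h):
        for c in range(w):
            if img[r][c] > thresh and not seen[r][c]:
                stack, pix = [(r, c)], []
                seen[r][c] = True
                while stack:                      # flood fill (4-connected)
                    y, x = stack.pop()
                    pix.append((y, x))
                    for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and img[ny][nx] > thresh and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                area = len(pix)
                cy = sum(p[0] for p in pix) / area
                cx = sum(p[1] for p in pix) / area
                out.append((area, (cy, cx)))
    return out

scene = [[0, 0, 0, 0, 0],
         [0, 9, 9, 0, 0],
         [0, 9, 9, 0, 8],
         [0, 0, 0, 0, 8]]
print(blobs(scene, thresh=5))  # two blobs: a 2x2 square and a 1x2 bar
```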

So unless you skip ahead to the AI training/learning features of Astrocyte (next section), Sapera Processing is the programmer’s comprehensive toolbox to do it all. Viewer, camera configuration, acquisition control, and image evaluation and processing functions. From low-level controls if you want them, through parameterized machine vision functions refined, validated, and ready for your use.

AI training/learning as an alternative to programming

Prefer not to program if possible? Thanks to advances in AI, many machine vision applications may now be trained on good vs. bad images, such that the application learns. Once validated, each next production image is correctly processed based on the training sets and the automated inference engine.

No coding required – Courtesy Teledyne DALSA

Teledyne DALSA’s Astrocyte package makes training simple and cost-effective. Naturally one can combine it with parameterized controls and/or SDK programming, if desired. See our recent overview of AI in machine vision – and Astrocyte.
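To illustrate the train-then-infer workflow in miniature (this is not Astrocyte’s internals – real tools use deep networks – just the conceptual shape of labeling examples, training, and inferring on each new image), here is a toy nearest-centroid classifier over simple image statistics:

```python
# Conceptual train-then-infer sketch: a toy nearest-centroid classifier
# over simple image statistics. Real AI tools use deep networks, but the
# workflow - label examples, train, infer on production images - is the same.

def features(img):
    """Reduce a 2D image to (mean intensity, peak intensity)."""
    flat = [p for row in img for p in row]
    return (sum(flat) / len(flat), max(flat))

def train(good, bad):
    """Compute one feature centroid per class."""
    def centroid(samples):
        fs = [features(s) for s in samples]
        return tuple(sum(v) / len(v) for v in zip(*fs))
    return centroid(good), centroid(bad)

def infer(model, img):
    """Label a new image by its nearer class centroid."""
    g, b = model
    f = features(img)
    dist = lambda a: sum((x - y) ** 2 for x, y in zip(f, a))
    return "good" if dist(g) < dist(b) else "bad"

good = [[[200, 210], [205, 200]], [[195, 205], [210, 200]]]  # bright = good
bad = [[[40, 60], [50, 55]], [[45, 50], [60, 40]]]           # dark = bad
model = train(good, bad)
print(infer(model, [[198, 202], [207, 199]]))  # prints "good"
```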

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you! We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics. What would you like to hear about? Drop us a line with the topics you’d like to know more about.

Lens extension tube or close up ring increases magnification

Summary at a glance:

Need a close-up image your preferred sensor and lens can’t quite deliver? A glass-free extension tube or close up ring can change the optics to your advantage.

C-mount extension tube kit – Courtesy Edmund Optics

What’s an extension tube?

An extension tube is a metal tube one positions between the lens and the camera mount. It comes with the appropriate threads for both the lens and camera mount, so mechanically it’s an easy drop-in procedure.

By moving the lens away from the image plane, the magnification is increased. Sounds like magic! Well, almost. A little optical calculation is required – or use of formulas or tables prepared by others. It’s not the case that any tube of any length will surely yield success – one needs to understand the optics or bring in an expert who does.
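As a taste of that optical calculation: to first order (thin-lens model, lens focused at infinity), the magnification gained from an extension tube is simply the tube length divided by the focal length. Real multi-element lenses deviate somewhat, so vendor tables remain the final word.

```python
# First-order (thin lens) estimate of magnification from an extension tube:
# with the lens focused at infinity, m ~= tube_length / focal_length.
# Real multi-element lenses deviate somewhat - vendor tables are the final word.

def magnification_at_infinity_focus(focal_mm, tube_mm):
    return tube_mm / focal_mm

m = magnification_at_infinity_focus(focal_mm=12, tube_mm=5)
print(f"{m:.3f}x")  # ~0.417x for a 5mm tube on a 12mm lens
```

Note how close 0.417x lands to the 0.414x figure in the Moritex table discussed below – the thin-lens rule of thumb is a good sanity check before consulting the real data.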

S-mount extension tube kit – Courtesy Edmund Optics

Note: One can also just purchase a specific length extension tube. We’ve shown images of kits to make it clear there are lots of possibilities. And some may want to own a kit in order to experiment.


Sometimes an off-the-shelf lens matched to the sensor and camera you prefer suits your optical needs as well as your space constraints. By space we mean clearance from moving parts, or the ability to embed inside an attractively sized housing. Lucky you.

But you might need more magnification than one lens offers, yet not as much as the next lens in the series. Or you want to move the camera and lens assembly closer to the target. Or both. Read on to see how extension rings at varying step sizes can achieve this.

Navigating the specifications

Once clear on the concept, it’s often possible to read the datasheets and accompanying documentation to determine what size extension tube will deliver what results. Consider, for example, Moritex machine vision lenses. Drilling in on an arbitrary lens family, look at Moritex ML-U-SR Series 1.1″ Format Lenses, then, randomly, the ML-U1217SR-18C.

ML-U1217SR-18C 12mm lens optimized for 3.45um pixels and 12MP sensors – Courtesy Moritex

If you’ve clicked onto the page last linked above, you should see a PDF icon labeled “Close up ring list”. It’s a rather large table showing which extension tube lengths may be used with which members of the ML-U-SR lens series, to achieve what optical changes in the Field-Of-View (FOV). Here’s a small segment cropped from that table:

Field-Of-View changes with extension tubes of differing lengths – Courtesy Moritex

Compelling figures from the chart above:

Consider the f12mm lens in the rightmost column, and we’ll call out some highlights.

Extension tube length (mm)    WD (far)       Magnification
none                          baseline       0.111x
5                             86% closer     0.414x

A 5mm tube yields an 86% closer WD and nearly 4x the magnification!

Drum roll here…

Let’s expand on that table caption above for emphasis. For this particular 12mm lens, by using a 5mm extension tube, we can move the camera 86% closer to the target than with the unaugmented lens alone. And we nearly quadruple the magnification, from 0.111x to 0.414x. If you are constrained to a tight space, whether for a one-off system, or while building systems you’ll resell at scale, those can be game-changing factors.


Any downside?

As is often the case with engineering and physics, there are tradeoffs one should be aware of. In particular:

  • Less light reaches the focal plane – the effective f-number grows with magnification, and illumination falls with its square. If you have sufficient light this may have no negative consequences at all; but pushed to the limit, resolution can be impacted by diffraction.
  • Reduced depth of field – does the Z dimension have a lot of variance for your application? Is your application working with the center segment of the image or does it also look at the edge regions where field curvature and spherical aberrations may appear?
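To put a number on the light loss: the working f-number grows with magnification as N_eff = N × (1 + m), and illumination at the sensor falls with its square. A thin-lens estimate for the 12mm-lens example above (the f/2.8 value is an illustrative assumption):

```python
# Thin-lens light budget: working f-number N_eff = N * (1 + m), and
# illuminance at the sensor scales as 1/N_eff^2. Adequate for rough
# budgeting; the f/2.8 here is an illustrative assumption.

def light_fraction(f_number, m_before, m_after):
    """Fraction of the original light after magnification increases."""
    n0 = f_number * (1 + m_before)
    n1 = f_number * (1 + m_after)
    return (n0 / n1) ** 2   # note: f_number cancels in the ratio

frac = light_fraction(f_number=2.8, m_before=0.111, m_after=0.414)
print(f"{frac:.0%} of the original light reaches the sensor")
```

Going from 0.111x to 0.414x costs you roughly a third of your light – often fine, but worth budgeting before you commit to a tube length.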

We do this

Our team are machine vision veterans, with backgrounds in optics, hardware, lighting, software, and systems integration. We take pride in helping our customers find the right solution – and they come back to us for project after project. You don’t have to get a graduate degree in optics – we’ve done that for you.

Give a brief idea of your application and we’ll provide options.



Monochrome light better for machine vision than white light

Black and white vs. color sensor? Monochrome or polychrome light frequencies? Visible or non-visible frequencies? Machine vision systems builders have a lot of choices – and options!

Let’s suppose you are working in the visible spectrum. You recall the rule of thumb to favor monochrome over color sensors when doing measurement applications – for same-sized sensors.

So you’ve got a monochrome sensor that’s responsive in the range 380 – 700 nm. You put a suitable lens on your camera matched to the resolution requirements and figure “How easy, I can just use white light!”. You might have sufficient ambient light. Or you need supplemental LED lighting and choose white, since your target and sensor appear fine in white light – why overthink it? – you think.

Think again – monochrome may be better

Polychromatic (white) light is composed of all the colors of the ROYGBIV visible spectrum – red, orange, yellow, green, blue, indigo, and violet – including all the hues within each of those segments. We humans perceive it as simple white light, but glass lenses and CMOS sensor pixels see things a bit differently.

Chromatic aberration is not your friend

Unless you are building prisms intended to separate white light into its constituent color groups, you’d prefer a lens that performs “perfectly” to focus light from the image onto the sensor, without introducing any loss or distortion.

Lens performance in all its aspects is a worthwhile topic in its own right, but for purposes of this short article, let’s discuss chromatic aberration. The key point is that when light passes through a lens, it refracts (bends) differently in correlation with the wavelength. For “coarse” applications it may not be noticeable; but trace amounts of arsenic in one’s coffee might go unnoticed too – inquiring minds want to understand when it starts to matter.

Take a look at the following two-part illustration and subsequent remarks.

Transverse and longitudinal chromatic aberration – Courtesy Edmund Optics

In the illustrations above:

  • C denotes red light at 656 nm
  • d denotes yellow light at 587 nm
  • F denotes blue light at 486 nm

Figure 1, showing transverse chromatic aberration, illustrates that differing refraction by wavelength shifts the focal point(s). If a given point on your imaged object reflects or emits light in two or more of these wavelengths, the focal point of one might land in a different sensor pixel than another, creating blur and ambiguity in resolving the point. One wants the optical system to honor the real-world geometry as closely as possible – we don’t want a scatter plot where a single point could be attained.

Figure 2 shows longitudinal chromatic aberration, which is another way of telling the same story. The minimum blur spot is the span between whatever outermost rays correspond to wavelengths occurring in a given imaging instance.
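To see how large the effect can be, we can estimate the focal shift between the F and C lines using Cauchy’s dispersion approximation for BK7 glass and the thin-lens relation f ∝ 1/(n − 1). The 50mm nominal focal length is an illustrative assumption; the Cauchy coefficients are approximate published values for BK7.

```python
# Why F (486 nm) and C (656 nm) rays focus at different depths: refractive
# index varies with wavelength. Cauchy approximation for BK7 glass
# (A ~ 1.5046, B ~ 0.00420 um^2) plus the thin-lens relation f ~ 1/(n - 1).

A, B = 1.5046, 0.00420  # approximate Cauchy coefficients for BK7

def n(wavelength_um):
    """Refractive index via Cauchy's two-term formula."""
    return A + B / wavelength_um ** 2

def focal_mm(wavelength_um, f_design_mm=50.0, design_um=0.5876):
    """Scale a nominal focal length (defined at the d line) by 1/(n - 1)."""
    return f_design_mm * (n(design_um) - 1) / (n(wavelength_um) - 1)

fF, fC = focal_mm(0.4861), focal_mm(0.6563)
print(f"f(F)={fF:.2f} mm, f(C)={fC:.2f} mm, shift={fC - fF:.2f} mm")
```

For a simple 50mm singlet, blue and red focus nearly a millimeter apart – easily enough to smear a point across multiple 3.45µm pixels.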

We could go deeper, beyond single lenses to compound lenses; dig into advanced optics and how lens designers try to mitigate for chromatic aberration (since some users indeed want or need polychromatic light). But that’s for another day. The point here is that chromatic aberration exists, and it’s best avoided if one can.

So what’s the solution?

The good news is that a very easy way to completely overcome chromatic aberration is to use a single monochromatic wavelength! If your target object reflects or emits a given wavelength, to which your sensor is responsive, the lens will refract the light from a given point very precisely, with no wavelength-induced shifts.

Making it real

The illustration below shows that certain materials reflect certain wavelengths. Utilize such known properties to generate contrast essential for machine vision applications.

Red light reflects well from gold, copper, and silver – Courtesy CCS Inc.

In the illustration we see that blue light reflects well from silver (Ag) but not from copper (Cu) or gold (Au). Whereas red light reflects well from all three metals. The moral of the story is to use a wavelength matched to what your application is looking for.

Takeaway – in a nutshell

Per the carpenter’s guidance to “measure twice – cut once”, approach each new application thoughtfully to optimize outcomes.

Give us an idea of your application and we will contact you with lighting options and suggestions.



LWIR – Long Wave Infrared Imaging – Problems Solved

What applications challenges can LWIR solve?

LWIR is an apt acronym, as it reminds us where on the electromagnetic spectrum we’re focused – wavelengths of roughly 8 – 14 micrometers (8,000 – 14,000 nm). More descriptive is the term “thermal imaging”, which tells us we’re sensing temperatures not with a contact thermometer, but with non-contact sensors detecting emitted or radiated heat.

Remember COVID? Pre-screening for fever. Courtesy Teledyne DALSA.

Security, medical, fire detection, and environmental monitoring are common applications. More on applications further below. But first…

How does an LWIR camera work?

Most readers probably come to thermal imaging with some prior knowledge or experience in visible imaging. Forget all that! Well not all of it.

For visible imaging using CMOS sensors, photons enter pixel wells and generate a voltage. The array of adjacent pixels is read out as a digital representation of the scene passed through the lens onto the sensor, according to the optics of the lens and the resolution of the sensor. Thermal camera sensors work differently!

Thermal cameras use a sensor called a microbolometer. The helpful part of the analogy to a CMOS sensor is that we still have an array of pixels, which determines the resolution of the camera, yielding a 2D digital representation of the scene’s thermal characteristics.

But unlike a CMOS sensor, whose pixels react to photons, a microbolometer’s upper pixel surface – the detector – is comprised of IR-absorbing material such as vanadium oxide. The detector is heated by IR exposure, and the intensity of exposure in turn changes its electrical resistance. That change in resistance is measured and passed by an electrode to a silicon substrate and readout integrated circuit.

Vanadium oxide (VOx) pixel structure – Courtesy Teledyne DALSA
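In miniature, that readout chain amounts to: the detector warms, its resistance changes by roughly the temperature coefficient of resistance (TCR) per kelvin, and the circuit measures the change. VOx TCR values around -2%/K are commonly cited, but treat the exact figure below as an assumption:

```python
# Linearized microbolometer readout model: a temperature rise of the VOx
# detector changes its resistance by roughly TCR per kelvin. VOx TCR values
# around -2%/K are commonly cited; the exact figure here is an assumption.

TCR = -0.02  # fractional resistance change per kelvin (assumed)

def resistance_after(r0_ohms, delta_t_kelvin):
    """R = R0 * (1 + TCR * dT) - the quantity the readout IC digitizes."""
    return r0_ohms * (1 + TCR * delta_t_kelvin)

r = resistance_after(r0_ohms=100_000, delta_t_kelvin=0.5)
print(f"{r:.0f} ohms")  # a 0.5 K rise -> ~1% drop: 99000 ohms
```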

Just as with visible imaging, for machine vision it’s the digital representation of the scene that matters, as it is algorithms “consuming” the image in order to take some action: danger vs. safe; good part vs. bad part; steer left, straight, or right – or brake; etc. Generating a pseudo-image for human consumption may well be unnecessary – or at least secondary.

Applications in LWIR

Applications include but are not limited to:

  • Security e.g. intrusion detection
  • Health screening e.g. sensing who has a fever
  • Fire detection – detect heat from early combustion before smoke is detectable
  • Building heat loss – for energy management and insulation planning
  • Equipment monitoring e.g. heat signature may reveal worn bearings or need for lubrication
  • Food safety – monitor whether required cooking temperatures attained before serving

You get the idea – if the thing you care about generates a heat signature distinct from the other things around it, thermal imaging may be just the thing.

What if I wanted to buy an LWIR camera?

We could help you with that. Does your application’s thermal range lie between -25C and +125C? Would a frame rate of 30fps do the job? Does a GigE Vision interface appeal?

It’s likely we’d guide you to Teledyne DALSA’s Calibir GX cameras.

Calibir GX front and rear views – Courtesy Teledyne DALSA

Precision of Teledyne DALSA Calibir GX cameras

Per factory calibration, one already gets precision of +/- 3 degrees Celsius. For more precision, use a black body radiator and manage your own calibration to achieve +/- 0.5 degrees Celsius!
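A user-managed calibration of this sort typically reduces to a two-point linear fit: image black-body references at two known temperatures, then map raw sensor counts to degrees. The raw count values below are made up for illustration:

```python
# Sketch of a user-side two-point calibration: image two black-body
# references at known temperatures, then map raw counts to degrees with a
# linear fit. The raw count values below are made up for illustration.

def two_point_cal(raw_lo, t_lo, raw_hi, t_hi):
    """Return a function mapping raw sensor counts -> degrees Celsius."""
    slope = (t_hi - t_lo) / (raw_hi - raw_lo)
    return lambda raw: t_lo + slope * (raw - raw_lo)

to_celsius = two_point_cal(raw_lo=12000, t_lo=20.0, raw_hi=30000, t_hi=110.0)
print(to_celsius(21000))  # halfway up the raw span -> 65.0
```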

Thresholding with LUT

Sometimes one wants to emphasize only regions meeting certain criteria – in this case heat-based criteria. Consider the following image:

Everything between 38 and 41°C shown as red – Courtesy Teledyne DALSA

Teledyne DALSA Calibir GX control software lets users define their own lookup tables (LUTs). One may optionally show regions meeting certain temperatures in color, leaving the rest of the image in monochrome.
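Conceptually, such a LUT is just a table mapping each value to a display color – in-band values to red, everything else to grayscale. Here is a toy version (not the Sapera GUI’s actual table format), using an integer-degrees scale for simplicity:

```python
# Toy version of the thresholding LUT idea: temperatures in the band of
# interest render red, everything else as grayscale. The real camera LUT
# format differs; this only illustrates the concept.

def build_lut(t_lo, t_hi):
    """Map integer degrees C (0-255 scale here) to (R, G, B) tuples."""
    lut = []
    for t in range(256):
        if t_lo <= t <= t_hi:
            lut.append((255, 0, 0))    # in-band: red
        else:
            lut.append((t, t, t))      # out of band: grayscale
    return lut

lut = build_lut(38, 41)
print(lut[40], lut[100])  # (255, 0, 0) (100, 100, 100)
```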

Dynamic range

The “expressive power” of a camera is characterized by dynamic range. Just as the singers Enrico Caruso (opera) and Freddie Mercury (rock) were lauded for their range as well as their precision, in imaging we value dynamic range. Consider the image below of an electric heater element:

“Them” (left) vs. us (right) – Courtesy Teledyne DALSA

The left side of the image is from a 3rd party thermal imager – it’s pretty crude, essentially showing just hot vs. not-hot, with no continuum. The right side was obtained with a Teledyne DALSA Calibir GX – there we see very hot, hot, warm, slightly warm, and cool: a helpfully nuanced range. Enabled by a 21 bit ADC, the Teledyne DALSA Calibir GX is capable of a dynamic range spanning 1500°C.
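For perspective on what a 21 bit ADC buys: 2^21 distinct levels works out to roughly 126 dB, and sub-millikelvin quantization when spread across a 1500°C span. In practice noise, not quantization, sets the floor – but that headroom is what lets subtle and extreme temperatures coexist in one image.

```python
# What a 21-bit ADC buys: number of levels, equivalent dB, and the
# quantization step when the range is spread across a 1500 C span.

import math

levels = 2 ** 21
db = 20 * math.log10(levels)
per_count = 1500 / levels  # degrees C per ADC count over a 1500 C span
print(f"{levels} levels, {db:.0f} dB, {per_count * 1000:.2f} mC/count")
```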

In this short blog we’ve called out just a few of the available features – call us at 978-474-0044 to tell us more about your application goals, and we can guide you to whichever hardware and software capabilities may be most helpful for you.
