Machine vision software → Sapera Processing

Why read this article?

Generic reason: Compact overview of machine vision software categories and functionality.

Cost-driven reason: Discover that powerful software comes bundled at no cost to users of Teledyne DALSA cameras and frame grabbers. Not just a viewer and SDK – though of course those – but select image processing software too.


Software – build or buy?

Without software, machine vision is nowhere. The whole point of machine vision is to acquire an image and then process it with an algorithm that achieves something of value.

Whether it’s presence/absence detection, medical diagnostics, thermal imaging, autonomous navigation, pick and place, automated milling, or myriad other applications, the algorithm is expressed in software.

You might choose a powerful software library needing “just” parameterization by the user – or AI – or a software development kit (SDK) permitting nearly endless scope of programming innovation. Whichever you choose, it’s the software that does the processing and delivers the results.

In this article, we survey build vs. buy arguments for several types of machine vision software. We make a case for Teledyne DALSA’s Sapera Software Suite – but it’s a useful read for anyone navigating machine vision software choices – wherever you choose to land.

Sapera Vision Software Suite – Courtesy Teledyne DALSA

Third party or vision library from same vendor?

You – the customer/client – are the first party. It’s all about you. Let’s call the camera manufacturer the second party, since the camera and the sensor therein are at the heart of image acquisition. Should licensed software come from a third party, or from the camera manufacturer? It’s a good question.

Third party software

If you know and love some particular third party software, such as LabVIEW, HALCON, MATLAB, or OpenCV, you may have developed code libraries and in-house expertise on which it makes sense to double down – even if there are development or run-time licensing costs. Do the math on total cost of ownership.

Same vendor for camera and software

Unless the third party approach described above is your clear favorite, consider the benefits of one-stop shopping for your camera and your software. Benefits include:

  • License pricing: SDK and run-time license costs are structured to favor the customer who sources cameras and software from the same provider.
  • Single-source simplicity: Since the hardware and software come from the same manufacturer, it just works. They’ve done all the compatibility validation in-house. And the feature name on the camera side is the same as the parameter name on the software side.
  • Technical support: When it all comes from one provider, if you have support questions there’s no finger pointing.


Types/functions of machine vision software

While there are all-in-one and many-in-one packages, some software is modularized to fulfill certain functions, and may come free, bundled, discounted, open-source, or priced, according to market conditions and a developer’s business model. Before we get into commercial considerations, let’s briefly survey the functional side, including each of the following categories in turn:

  • Viewer / camera control
  • Acquisition control
  • Software development kit (SDK)
  • Machine vision library
  • AI training/learning as an alternative to programming

Point of view: Teledyne DALSA’s Sapera software packages by capability

Viewer / camera control – included in Sapera LT

When bringing a new camera online, after attaching the lens and cable, one initially needs to configure and view. Regardless of whether using GigE Vision, Camera Link, Camera Link HS, USB3 Vision, CoaXPress, or other standards, one must typically assign the camera a network address and set some camera parameters to establish communication.

A graphical user interface (GUI) viewer / camera-control tool makes it easy to quickly get the camera up and running. The viewer provides a live image stream so one can mount the camera and adjust aperture, focus, and imaging modes.
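As an aside, the focus aid behind such viewers can be as simple as a sharpness metric computed on each frame. Here is a minimal NumPy sketch (our own illustration, not Sapera code) using the variance of a discrete Laplacian – sharper images score higher:

```python
import numpy as np

def focus_score(img: np.ndarray) -> float:
    """Variance of a discrete Laplacian: higher means sharper focus."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

# Synthetic demo: a sharp checkerboard vs. a locally averaged (blurred) copy.
sharp = np.kron((np.indices((8, 8)).sum(axis=0) % 2) * 255.0, np.ones((8, 8)))
blurred = sharp.copy()
for _ in range(4):  # crude repeated box blur
    blurred = (blurred
               + np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0)
               + np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1)) / 5.0

assert focus_score(sharp) > focus_score(blurred)
```

While adjusting the lens, one would watch such a score on live frames and stop at the maximum.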

Every camera manufacturer and software provider offers such a tool. Teledyne DALSA calls theirs CamExpert, and it’s part of Sapera LT. It’s free for users of Teledyne DALSA 2D/3D cameras and frame grabbers.

CamExpert – Courtesy Teledyne DALSA

Acquisition control – included in Sapera LT

The next step up the chain is referred to as acquisition control. On the camera side this is about controlling the imaging modes and parameters to get the best possible image before passing it to the host PC. So one might select a color mode, enable or disable HDR, and set gain, frame rate, or trigger parameters, and so on.

On the communications side, one optimizes depending on whether there is a single camera on the bus or bandwidth is shared among several. Anyone offering acquisition control software must provide all these controls.
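To see why bandwidth sharing matters, consider the arithmetic. This back-of-the-envelope sketch (our own illustration; the ~90% usable-link figure is an assumption for protocol overhead, not a GigE Vision specification) checks whether two cameras fit on one GigE link:

```python
def required_bandwidth_mbps(width, height, bytes_per_pixel, fps):
    """Raw payload rate of one camera stream, in megabits per second."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

# Example: two 1920x1080 8-bit mono cameras at 30 fps sharing one GigE link.
per_camera = required_bandwidth_mbps(1920, 1080, 1, 30)
total = 2 * per_camera
link_budget = 1000 * 0.9  # assume ~90% of GigE usable after protocol overhead

print(f"per camera: {per_camera:.1f} Mb/s, total: {total:.1f} Mb/s")
print("fits" if total <= link_budget else "reduce fps, bit depth, or resolution")
```

Here the pair needs roughly 995 Mb/s, so they do not fit on one link at full rate – exactly the situation where frame-rate limits, triggering, or a second NIC port comes into play.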

Controlling image acquisition with GUI tools – Courtesy Teledyne DALSA

Those with Sapera LT can utilize Teledyne DALSA’s patented TurboDrive, realizing speed gains of 1.5x to 3x under the GigE Vision protocol. This driver brings added bandwidth without needing special programming.

Software development kit (SDK) – included in Sapera LT

GUI viewers are great, but often one needs at least a degree of programming to fully integrate and control the acquisition process. Typically one uses a software development kit (SDK) for C++, C#, .NET, and/or Standard C. And one doesn’t have to start from scratch, as such SDKs almost always include programming examples and projects one may adapt and extend, to avoid re-inventing the wheel.

Teaser subset of code samples provided – Courtesy Teledyne DALSA

Sapera Vision Software allows royalty-free run-time licenses for select image processing functions when combined with Teledyne DALSA hardware. If you’ve just got a few cameras, that may not be important to you. But if you are developing systems for sale to your own customers, this can bring substantial economies of scale.

Machine vision library

So you’ve got the image hitting the host PC just fine – now what? One needs to programmatically interpret the image. Unless you’ve thought up a totally new approach to image processing, there’s an excellent chance your application will need one or more of edge detection, bar code reading, blob analysis, flipping, rotation, cross-correlation, frame-averaging, calibration, or other standard methods.

A machine vision library is a toolbox of many tens of such functions pre-programmed and parameterized for your use. It allows you to marry your application-specific insights with proven machine vision processes, so that you can build out the value-add by standing on the shoulders of machine vision developers who provide you with a comprehensive toolbox.

No surprise – Teledyne DALSA has an offering in this space too. It’s called Sapera Processing. It includes all we’ve discussed above in terms of configuration and acquisition control. And it adds a suite of image processing tools. The suite’s tools are best understood across three categories:

  • Calibration – advanced configuration including compensation for geometric distortion
  • Image processing primitives – convolution functions, geometry functions, measurement, transforms, contour following, and more
  • Blob analysis – uses contrast to segment objects in a scene; determines centroid, length, and area; min, max, and standard deviation; thresholding; and more
Just some of the free included image processing primitives – Courtesy Teledyne DALSA
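To make the blob analysis category concrete, here is a minimal pure-NumPy sketch of the core idea – threshold on contrast, find connected components, report area and centroid. A library like Sapera Processing offers far more, but the principle looks like this (our own illustration, not Sapera code):

```python
import numpy as np
from collections import deque

def blob_stats(img: np.ndarray, threshold: float):
    """Threshold a grayscale image, then report area and centroid per blob
    (4-connected components), in the spirit of a blob-analysis tool."""
    mask = img > threshold
    seen = np.zeros_like(mask, dtype=bool)
    blobs = []
    h, w = mask.shape
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        # BFS flood fill to collect one connected component
        q, pixels = deque([(sy, sx)]), []
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            pixels.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        ys, xs = zip(*pixels)
        blobs.append({"area": len(pixels),
                      "centroid": (sum(ys) / len(ys), sum(xs) / len(xs))})
    return blobs

# Two bright rectangles on a dark background
img = np.zeros((20, 20))
img[2:6, 2:6] = 200      # 4x4 blob, centroid (3.5, 3.5)
img[10:16, 12:18] = 180  # 6x6 blob, centroid (12.5, 14.5)
stats = blob_stats(img, threshold=100)
print(stats)
```

From there, min/max intensity, standard deviation, or shape measures per blob are a few lines each – which is exactly why a validated library beats rewriting them.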

So unless you skip ahead to the AI training/learning features of Astrocyte (next section), Sapera Processing is the programmer’s comprehensive toolbox to do it all. Viewer, camera configuration, acquisition control, and image evaluation and processing functions. From low-level controls if you want them, through parameterized machine vision functions refined, validated, and ready for your use.

AI training/learning as an alternative to programming

Prefer not to program if possible? Thanks to advances in AI, many machine vision applications may now be trained on good vs. bad images, such that the application learns. Once validated, each next production image is correctly processed based on the training sets and the automated inference engine.

No coding required – Courtesy Teledyne DALSA

Teledyne DALSA’s Astrocyte package makes training simple and cost-effective. Naturally one can combine it with parameterized controls and/or SDK programming, if desired. See our recent overview of AI in machine vision – and Astrocyte.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection.  With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?  Drop us a line with the topics you’d like to know more about.

Artificial intelligence in machine vision – today

This is not some blue-sky puff piece about how AI may one day be better / faster / cheaper at doing almost anything at least in certain domains of expertise. This is about how AI is already better / faster / cheaper at doing certain things in the field of machine vision – today.

Classification of screw threads via AI – Courtesy Teledyne DALSA

Conventional machine vision

There are classical machine vision tools and methods, like edge detection, for which AI has nothing new to add. If the edge detection algorithm is working fine as programmed in your vision software, who needs AI? If it ain’t broke, don’t fix it. Presence / absence detection, 3D height calculation, and many other imaging techniques work just fine without AI. Fair enough.
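To illustrate just how far classical methods go without AI, here is a minimal edge detector – central-difference gradients plus a magnitude threshold – sketched in NumPy (our own illustration of the generic technique):

```python
import numpy as np

def edge_map(img: np.ndarray, threshold: float) -> np.ndarray:
    """Classic gradient-magnitude edge detection, no AI required:
    central differences, then threshold the magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > threshold

# A vertical step edge at column 8
img = np.zeros((16, 16))
img[:, 8:] = 255.0
edges = edge_map(img, threshold=50)
print(edges[:, 7:9].all(), edges[:, :6].any())  # True False
```

Deterministic, explainable, and fast – when this is all the application needs, there is indeed no reason to reach for AI.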

From image processing to image recognition

As any branch of human activity evolves, the fundamental building blocks serve as foundations for higher-order operations that bring more value. Civil engineers build bridges, confident the underlying physics and materials science lets them choose among arch, suspension, cantilever, or cable-stayed designs.

So too with machine vision. As the field matures, value-added applications can be created by moving up the chunking level. The low-level tools still include edge-detection, for example, but we’d like to create application-level capabilities that solve problems without us having to tediously program up from the feature-detection level.

Traditional methods (left) vs. AI classification (right) – Courtesy Teledyne DALSA
Traditional Machine Vision Tools:
  • Can’t discern surface damage vs. water droplets
  • Are challenged by shading and perspective changes

AI Classification Algorithm:
  • Ignores water droplets
  • Invariant to surface changes and perspective
For the application images above, AI works better than traditional methods – Courtesy Teledyne DALSA

Briefly in the human cognition realm

Let’s tee this up with a scenario from human image recognition. Suppose you are driving your car along a quiet residential street. Up ahead you see a child run from a yard, across the sidewalk, and into the street.

It may well be that the rods and cones in your retina, your visual cortex, and your brain used edge detection to process contrasting image segments to arrive at “biped mammal” – a child – and went on to evaluate risk and hit the brakes. But that isn’t how we usually talk about defensive driving. We just think in terms of accident avoidance, situational awareness, and braking/swerving – at a very high level.

Applications that behave intelligently

That’s how we increasingly would like our imaging applications to behave – intelligently and at a high level. We’re not claiming it’s “human equivalent” intelligence, or that the AI method is the same as the human method. All we’re saying is that AI, when well-managed and tested, has become a branch of engineering that can deliver effective results.

So as autonomous vehicles come to market, of course we want to be sure sufficient testing and certification are completed, as a matter of safety. But whether the safe-driving outcome is based on “AI” or “vision engineering”, or the melding of the two, what matters is the continuous sequence of system outputs like: “increase following distance”, “swerve left 30 degrees”, and “brake hard”.

Neural Networks

One branch of AI, neural networks, has proven effective in many “recognition” and categorization applications. Is the thing being imaged an example of what we’re looking for, or can it be dismissed? If it is the sort of thing we’re looking for, is it of sub-type x, y, or z? “Good” item – retain. “Bad” item – reject. You get the idea.

From training to inference

With neural networks, instead of programming algorithms at a granular feature analysis level, one trains the network. Training may include showing “good” vs. “bad” images – without having to articulate what makes them good or bad – and letting the network infer the essential characteristics. In fact it’s sometimes possible to train only with “good” examples – in which case anomaly detection flags production images that deviate from the trained pool of good ones.
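The anomaly-detection idea can be sketched in a few lines. This toy NumPy example (our own illustration; real tools learn image features themselves rather than taking hand-picked measurements) trains on “good” feature vectors only, then flags production samples that stray too far:

```python
import numpy as np

# Train on "good" samples only; flag production samples that deviate.
# Each sample here is a small feature vector (e.g. measurements from an image).
rng = np.random.default_rng(42)
good = rng.normal(loc=10.0, scale=0.5, size=(200, 3))  # training pool

mu, sigma = good.mean(axis=0), good.std(axis=0)

def is_anomaly(sample: np.ndarray, k: float = 4.0) -> bool:
    """Flag a sample if any feature lies more than k sigma from the
    mean of the good training pool."""
    return bool((np.abs(sample - mu) > k * sigma).any())

print(is_anomaly(np.array([10.1, 9.9, 10.0])))   # typical sample -> False
print(is_anomaly(np.array([10.0, 14.0, 10.0])))  # far-off feature -> True
```

No one ever articulated what makes a sample “bad” – the statistics of the good pool define normal, and everything else is an anomaly. Deep-learning anomaly detection follows the same logic in a learned feature space.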

Deep Neural Network (DNN) example – Courtesy Teledyne DALSA

Enough theory – what products actually do this?

Teledyne DALSA Astrocyte software creates a deep neural network to perform a desired task. More accurately – Astrocyte provides a graphical user interface (GUI) and a neural network framework, such that an application-specific neural network can be developed by training it on sample images. With a suitable collection of images, Teledyne DALSA Astrocyte can create an effective AI model in under 10 minutes!

Gather images, Train the network, Deploy – Courtesy Teledyne DALSA

Mix and match tools

In the diagram above, we show an “all DALSA” tools view, for those who may already have expertise in either Sapera or Sherlock SDKs. But one can mix and match. Images may alternatively be acquired with third party tools – paid or open source. And one may not need rules-based processing beyond the neural network. Astrocyte builds the neural network at the heart of the application.


User-friendly AI

The key value proposition with Teledyne DALSA Astrocyte is that it’s user-friendly AI. The GUI used to configure the training and to validate the model requires no programming. And one doesn’t need special training in AI. Sure, it’s worth reading about the deep learning architectures supported. They include: Classification, Anomaly Detection, Object Detection, and Segmentation. And you’ll want to understand how the training and validation work. It’s powerful – it’s built by Teledyne DALSA’s software engineers standing on the shoulders of neural network researchers – but you don’t have to be a rocket scientist to add value in your field of work.
