Artificial intelligence in machine vision – today

This is not some blue-sky puff piece about how AI may one day be better / faster / cheaper at doing almost anything at least in certain domains of expertise. This is about how AI is already better / faster / cheaper at doing certain things in the field of machine vision – today.

Classification of screw threads via AI – Courtesy Teledyne DALSA

Conventional machine vision

There are classical machine vision tools and methods, like edge detection, for which AI has nothing new to add. If the edge detection algorithm is working fine as programmed in your vision software, who needs AI? If it ain’t broke, don’t fix it. Presence / absence detection, 3D height calculation, and many other imaging techniques work just fine without AI. Fair enough.
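For readers who haven't seen it, here's roughly what that classical approach looks like in practice – a minimal sketch using OpenCV's Canny edge detector. The file path and thresholds are illustrative; in a real application they would be tuned to the part and the lighting.

```python
# Minimal classical edge detection with OpenCV -- no AI required.
# "part.png" is a placeholder path for illustration.
import cv2

image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Gaussian blur suppresses sensor noise before gradient computation.
blurred = cv2.GaussianBlur(image, (5, 5), 0)

# Canny finds edges from intensity gradients between two thresholds.
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

cv2.imwrite("edges.png", edges)
```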

From image processing to image recognition

As any branch of human activity evolves, the fundamental building blocks serve as foundations for higher-order operations that bring more value. Civil engineers build bridges, confident the underlying physics and materials science lets them choose among arch, suspension, cantilever, or cable-stayed designs.

So too with machine vision. As the field matures, value-added applications can be created by moving up the chunking level. The low-level tools still include edge detection, for example, but we'd like to create application-level capabilities that solve problems without having to tediously program up from the feature-detection level.

Traditional methods (left) vs. AI classification (right) – Courtesy Teledyne DALSA
Traditional Machine Vision Tools:
– Can't discern surface damage vs. water droplets
– Are challenged by shading and perspective changes

AI Classification Algorithm:
– Ignores water droplets
– Invariant to surface changes and perspective
For the application images above, AI works better than traditional methods – Courtesy Teledyne DALSA

Briefly in the human cognition realm

Let’s tee this up with a scenario from human image recognition. Suppose you are driving your car along a quiet residential street. Up ahead you see a child run from a yard, across the sidewalk, and into the street.

While the rods and cones in your retina, your visual cortex, and your brain may well use edge detection to process contrasting image segments – arriving at "biped mammal", then "child", then evaluating risk and hitting the brakes – that isn't how we usually talk about defensive driving. We just think in terms of accident avoidance, situational awareness, and braking or swerving – at a very high level.

Applications that behave intelligently

That’s how we increasingly would like our imaging applications to behave – intelligently and at a high level. We’re not claiming it’s “human equivalent” intelligence, or that the AI method is the same as the human method. All we’re saying is that AI, when well-managed and tested, has become a branch of engineering that can deliver effective results.

So as autonomous vehicles come to market, of course we want to be sure sufficient testing and certification is completed, as a matter of safety. But whether the safe-driving outcome is based on "AI" or "vision engineering", or the melding of the two, what matters is the continuous sequence of system outputs like: "increase following distance", "swerve left 30 degrees", and "brake hard".

Neural Networks

One branch of AI, neural networks, has proven effective in many “recognition” and categorization applications. Is the thing being imaged an example of what we’re looking for, or can it be dismissed? If it is the sort of thing we’re looking for, is it of sub-type x, y, or z? “Good” item – retain. “Bad” item – reject. You get the idea.

From training to inference

With neural networks, instead of programming algorithms at a granular feature analysis level, one trains the network. Training may include showing “good” vs. “bad” images – without having to articulate what makes them good or bad – and letting the network infer the essential characteristics. In fact it’s sometimes possible to train only with “good” examples – in which case anomaly detection flags production images that deviate from the trained pool of good ones.
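To make that idea concrete, below is a minimal, generic sketch of good-only anomaly detection: a small autoencoder is trained to reconstruct good images, and a production image with unusually high reconstruction error is flagged as an anomaly. This illustrates the general technique only – it is not how Astrocyte is implemented – and the network sizes, threshold, and random stand-in images are placeholders.

```python
# A minimal sketch of good-only anomaly detection with an autoencoder
# (PyTorch). Train on "good" images only; flag production images whose
# reconstruction error is unusually high.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder compresses 64x64 grayscale images to a small code.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder reconstructs the image from the code.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

good_batch = torch.rand(32, 1, 64, 64)  # stand-in for real "good" images

for epoch in range(100):                # learn to reconstruct good parts
    optimizer.zero_grad()
    loss = loss_fn(model(good_batch), good_batch)
    loss.backward()
    optimizer.step()

# At inference, a high reconstruction error suggests an anomaly.
with torch.no_grad():
    candidate = torch.rand(1, 1, 64, 64)
    error = loss_fn(model(candidate), candidate).item()
    threshold = 0.01  # calibrated on held-out good images in practice
    print("anomaly" if error > threshold else "good")
```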

Deep Neural Network (DNN) example – Courtesy Teledyne DALSA

Enough theory – what products actually do this?

Teledyne DALSA Astrocyte software creates a deep neural network to perform a desired task. More accurately – Astrocyte provides a graphical user interface (GUI) and a neural network framework, such that an application-specific neural network can be developed by training it on sample images. With a suitable collection of images, Teledyne DALSA Astrocyte can create an effective AI model in under 10 minutes!

Gather images, Train the network, Deploy – Courtesy Teledyne DALSA

Mix and match tools

In the diagram above, we show an "all DALSA" tools view, for those who may already have expertise in either the Sapera or Sherlock SDKs. But one can mix and match. Images may alternatively be acquired with third-party tools – paid or open source. And one may not need rules-based processing beyond the neural network. Astrocyte builds the neural network at the heart of the application.

Contact us

User-friendly AI

The key value proposition with Teledyne DALSA Astrocyte is that it's user-friendly AI. The GUI used to configure the training and to validate the model requires no programming. And one doesn't need special training in AI. Sure, it's worth reading about the deep learning architectures supported – Classification, Anomaly Detection, Object Detection, and Segmentation – and you'll want to understand how the training and validation work. It's powerful – built by Teledyne DALSA's software engineers standing on the shoulders of neural network researchers – but you don't have to be a rocket scientist to add value in your field of work.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution! We're big enough to carry the best cameras, and small enough to care about every image.

About you: We want to hear from you! We've built our brand on our know-how and like to educate the marketplace on imaging technology topics. What would you like to hear about? Drop a line to info@1stvision.com with the topics you'd like to know more about.

Machine vision lights as important as sensors and optics

Lighting matters as much as – or more than – camera (sensor) selection and optics (lensing). A sensor and lens that are "good enough", when used with good lighting, are often all one needs. Conversely, a superior sensor and lens with poor lighting can underperform. Read further for clear examples of why machine vision lights are as important as sensors and optics!

Assorted white and color LED lights – courtesy of Advanced Illumination

Why is lighting so important? Contrast is essential for human vision and machine vision alike. Nighttime hiking isn't very popular – for a reason – it's not safe and it's no fun if one can't see rocks, roots, or vistas. In machine vision, for the software to interpret the image, one first has to obtain a good image. And a good image is one with maximum contrast – such that pixels corresponding to real-world coordinates are saturated, unsaturated, or "in between", with the best spread of intensity achievable.

Only with contrast can one detect edges, identify features, and effectively interpret an image. Choosing a camera with a good sensor is important. So is an appropriately matched lens. But just as important is good lighting, well-aligned – to set up your application for success.
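To put a number on "good contrast", one can compute simple statistics on a grayscale capture – for example RMS contrast and a robust intensity spread. A minimal sketch follows; the file path is a placeholder, and any acceptance thresholds would be set per application.

```python
# Quantifying "a good image has contrast": RMS contrast and a robust
# histogram spread for an 8-bit grayscale image.
import cv2
import numpy as np

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

rms_contrast = gray.std() / 255.0        # 0 = flat image; higher = more spread
p2, p98 = np.percentile(gray, (2, 98))   # robust low/high intensity bounds
spread = (p98 - p2) / 255.0              # fraction of the dynamic range used

print(f"RMS contrast: {rms_contrast:.3f}, intensity spread: {spread:.3f}")
```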

What's the best light source? Unless you can count on the sun or ambient lighting – or have no other option – one may choose from several types of light:

  • Fluorescent
  • Quartz Halogen – Fiber Optics
  • LED – Light Emitting Diode
  • Metal Halide (Mercury)
  • Xenon (Strobe)
Courtesy of Advanced Illumination

By far the most popular light source is LED, as it is affordable, available in diverse wavelengths and shapes (bar lights, ring lights, etc.), stable, long-life, and checks most of the key boxes.

The other light types each have their place, but those places are more specialized. For comprehensive treatment of the topics summarized here, see “A Practical Guide to Machine Vision Lighting” in our Knowledgebase, courtesy of Advanced Illumination.

Download whitepaper

Lighting geometry and techniques: There's a tendency among newcomers to machine vision to underestimate lighting design for an application. Buying an LED and lighting up the target may fill sensor pixel wells, but not all images are equally useful. Consider images (b) and (c) below – the bar code in (c) shows high contrast between the black bars and the white field. Image (b) is somewhere between unusable and marginally usable, with reflections obscuring portions of the target and portions of the (should-be) white field appearing more grey than white.

Courtesy of Advanced Illumination

As shown in diagram (a) of Figure 22 above, understanding bright field vs. dark field concepts, as well as the specular qualities of the surface being imaged, can lead to radically different outcomes. A little lighting theory, together with some experimentation and tuning, is well worth the effort.

Now for a more complex example – below we could characterize images (a), (b), (c), and (d) as poor, marginal, good, and superior, respectively. Component cost is the same, but the outcomes sure are different!

Courtesy of Advanced Illumination

To learn more, download the whitepaper or call us at (978) 474-0044.

Contact us

Color light – above we showed monochrome examples – black and white… and grey levels in between. Many machine vision applications are in fact best addressed in the monochrome space, with no benefit from using color. But understanding what surfaces will reflect or absorb certain wavelengths is crucial to optimizing outcomes – regardless of whether working in monochrome, color, infrared (IR), or ultraviolet (UV).

Beating the same drum throughout, it’s about maximizing contrast. Consider the color wheel shown below. The most contrast is generated by taking advantage of opposing colors on the wheel. For example, green light best suppresses red reflection.

Courtesy of Advanced Illumination
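A toy calculation makes the color-wheel rule concrete. Treating illumination and surface reflectance as coarse (R, G, B) triples – illustrative numbers, not measured spectra – the reflected brightness is roughly their overlap, so a red feature goes dark under green light while a white field stays bright:

```python
# Toy illustration of the color-wheel rule: reflected intensity is
# roughly the overlap of the light's spectrum with the surface's
# reflectance. A red feature "lights up" under red light and goes dark
# under green light -- which is what creates contrast.
import numpy as np

# Crude 3-band (R, G, B) reflectance models -- illustrative numbers only.
red_feature = np.array([0.9, 0.1, 0.1])   # reflects mostly red
white_field = np.array([0.9, 0.9, 0.9])   # reflects everything

red_light   = np.array([1.0, 0.0, 0.0])
green_light = np.array([0.0, 1.0, 0.0])

for name, light in (("red light", red_light), ("green light", green_light)):
    feature = red_feature @ light          # brightness of the red feature
    field = white_field @ light            # brightness of the white field
    print(f"{name}: feature={feature:.2f}, field={field:.2f}, "
          f"contrast={abs(field - feature):.2f}")

# Green light yields the larger feature-vs-field contrast: the red
# feature appears dark -- "suppressed" -- against the bright field.
```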

One can use actual color light sources, or white light together with well-chosen wavelength "pass" or "block" filters. This is nicely illustrated in Fig. 36 below. Take a moment to correlate the configurations used for each of images (a) – (f), relative to the color wheel above. Depending on one's application goals, sometimes there are several possible combinations of sensor, lighting, and filters to achieve the desired result.

Courtesy of Advanced Illumination

Filters can help. Consider images (a) and (b) in Fig. 63 below. The same plastic 6-pack holder is shown in both images, but only image (b) reveals stress fields that, were the product to be shipped, might cause dropped product and reduced consumer confidence in one's brand. By designing in polarizing filters, this can be the basis for a value-added application, automating quality control in a way that might not otherwise have been achievable – or not at such a low cost.

Courtesy of Advanced Illumination

For a more comprehensive treatment of filter applications, see the filter-related documents in our Knowledgebase.


Powering the lights – should they be voltage-driven or current-driven? How are LEDs powered? When should one strobe vs. run in continuous mode? How does one integrate a light controller with the camera and software? These are all worth understanding – or worth having someone on your team, whether in-house or a trusted partner, who does.

For comprehensive treatment of the topics summarized here, see Advanced Illumination’s “A Practical Guide to Machine Vision Lighting” in our Knowledgebase:

Download whitepaper

This blog is intended to whet your appetite for lighting – but it only skims the surface. Machine vision lights are as important as sensors and optics. Please download the guide linked just above to deepen your knowledge. Or if you want help with a specific application, you may draw on the experience of our sales engineers and trusted partners.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Components needed for machine vision and industrial imaging systems

Machine vision and industrial imaging systems are used in applications ranging from automated quality control inspection, bottle filling, robot pick-and-place, autonomous drone or vehicle guidance, patient monitoring, agricultural irrigation controls, medical testing, and metrology, to countless more.

Imaging systems typically include at least a camera and lens, and often also one or more of: specialized lighting, adapter cards, cables, software, optical filters, a power supply, a mount, or an enclosure.

At 1stVision we've created a resource page intended to make sure that nothing in a planned imaging application has been missed. There are many aspects on which 1stVision can provide guidance. The main components to consider are indicated below.

Diverse cameras

Cameras: There are area scan cameras for visible, infrared, and ultraviolet light, used for static or motion situations.  There are line scan cameras, often used for high-speed continuous web inspection.  Thermal imaging detects or measures heat.  SWIR cameras can identify the presence or even the characteristics of liquids.  The “best” camera depends on the part of the spectrum being sensed, together with considerations around motion, lighting, surface characteristics, etc.

An assortment of lens types and manufacturers

Lens: The lens focuses light onto the sensor, mapping the targeted Field of View (FoV) from the real world onto the array of pixels. One must consider image format to pair a suitable lens to the camera. Lenses vary by the quality of their light-passing ability, how close to – or far from – the target they can be, their weight (it matters on a robot arm), vibration resistance, etc. See our resources on how to choose a machine vision lens. Speak with us if you'd like assistance, or use the lens selector to browse for yourself.
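As a concrete example of pairing lens to camera, a first-order focal length estimate falls out of the sensor width, working distance, and desired FoV. This is a thin-lens approximation with illustrative numbers; a lens selector refines from here.

```python
# First-order focal length estimate from field of view and working
# distance (thin-lens approximation, valid when FoV >> sensor width).
def focal_length_mm(sensor_width_mm: float,
                    working_distance_mm: float,
                    fov_width_mm: float) -> float:
    """f ~= sensor width * working distance / field-of-view width."""
    return sensor_width_mm * working_distance_mm / fov_width_mm

# Example: 2/3" sensor (~8.8 mm wide), 500 mm standoff, 100 mm FoV.
print(f"{focal_length_mm(8.8, 500, 100):.1f} mm")  # -> 44.0 mm
```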

Lighting: While ambient light is sufficient for some applications, specialized lighting may also be needed, to achieve sufficient contrast.  And it may not just be “white” light – Ultra-Violet (UV) or Infra-Red (IR) light, or other parts of the spectrum, sometimes work best to create contrast for a given application – or even to induce phosphorescence or scatter or some other helpful effect.  Additional lighting components may include strobe controllers or constant current drivers to provide adequate and consistent illumination. See also Lighting Techniques for Machine Vision.

Optical filter: There are many types of filters that can enhance application performance, or that are critical for success. For example, a "pass" filter only lets certain parts of the spectrum through, while a "block" filter excludes certain wavelengths. Polarizing filters reduce glare. And there are many more – for a conceptual overview, see our blog on how machine vision filters create or enhance contrast.

Don’t forget about interface adapters like frame grabbers and host adapters; cables; power supplies; tripod mounts; software; and enclosures. See the resource page to review all components one might need for an industrial imaging system, to be sure you haven’t forgotten anything.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Types of 3D imaging systems – and benefits of Time of Flight (ToF)

Time Of Flight Gets Precise: Whitepaper

2D imaging is long-proven for diverse applications from bar code reading to surface inspection, presence-absence detection, etc.  If you can solve your application goal in 2D, congratulations!

But some imaging applications are only well-solved in three dimensions.  Examples include robotic pick and place, palletization, drones, security applications, and patient monitoring, to name a few.

For such applications, one must select or construct a system that creates a 3D model of the object(s). Time of Flight (ToF) cameras from Lucid Vision Labs are one way to achieve cost-effective 3D imaging for many situations.

ToF system setup
ToF systems have a light source and a sensor.

ToF is not about objects flying around in space! It's about using the time of flight of light to ascertain object depth, based on measurable differences between the light projected onto an object and the light reflected back to the sensor. With sufficiently precise registration to object features, a 3D "point cloud" of x, y, z coordinates can be generated – a digital representation of the real-world objects. The point cloud is the essential data set enabling automated image processing, decisions, and actions.

In this latest whitepaper we go into depth to learn:
1. Types of 3D imaging systems
2. Passive stereo systems
3. Structured light systems
4. Time of Flight systems
Whitepaper table of contents
Download

Let’s briefly put ToF in context with other 3D imaging approaches:

Passive Stereo: Systems with two cameras a fixed distance apart can triangulate by matching features in both images and calculating the disparity from the midpoint. Or a robot-mounted single camera can take multiple images, as long as positional accuracy is sufficient to calibrate effectively.
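For a calibrated pair, the triangulation itself reduces to a simple relationship: depth Z = f·B/d, with focal length f (in pixels), baseline B, and disparity d. A minimal sketch with illustrative numbers:

```python
# Depth from disparity for a calibrated passive stereo pair: Z = f*B/d,
# where f is the focal length (pixels), B the baseline (meters), and
# d the disparity (pixels) between matched features.
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

# 1000 px focal length, 10 cm baseline, 25 px disparity -> 4 m depth.
print(f"{depth_from_disparity(1000, 0.10, 25):.2f} m")
```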

Challenges limiting passive stereo approaches include:

Occlusion: when part of the object(s) cannot be seen by one of the cameras, features cannot be matched and depth cannot be calculated.

ToF diagram
Occlusion occurs when a part of an object cannot be imaged by one of the cameras.

Few/faint features: If an object has few identifiable features, no matching correspondence pairs may be generated, also limiting essential depth calculations.

Structured Light: A clever response to the few/faint features challenge can be to project structured light patterns onto the surface.  There are both active stereo systems and calibrated projector systems.

Active stereo systems are like two-camera passive stereo systems, enhanced by the (active) projection of optical patterns, such as laser speckles or grids, onto the otherwise feature-poor surfaces.

ToF diagram
Active stereo example using laser speckle pattern to create texture on object.

Calibrated projector systems use a single camera, together with calibrated projection patterns, to triangulate from the vertex at the projector lens.  A laser line scanner is an example of such a system.

Besides custom systems, there are also pre-calibrated structured light systems available, which can provide low cost, highly accurate solutions.

Time of Flight (ToF): While structured light can provide surface height resolutions better than 10 μm, it is limited to short working distances. ToF can be ideal for applications such as people monitoring, obstacle avoidance, and materials handling, operating at working distances of 0.5 m – 5 m and beyond, with depth resolution requirements of 1 – 5 mm.

ToF systems measure the time it takes for light emitted from the device to reflect off objects in the scene and return to the sensor for each point of the image.  Some ToF systems use pulse-modulation (Direct ToF).  Others use continuous wave (CW) modulation, exploiting phase shift between emitted and reflected light waves to calculate distance.
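Both flavors reduce to short formulas: direct ToF halves the round-trip travel time of a light pulse, while CW ToF converts the phase shift at the modulation frequency into distance. A minimal sketch with illustrative values:

```python
# Distance for the two ToF flavors described above.
import math

C = 299_792_458.0  # speed of light, m/s

def direct_tof_distance(round_trip_s: float) -> float:
    # Pulse (direct) ToF: light travels out and back, so halve the path.
    return C * round_trip_s / 2

def cw_tof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    # Continuous-wave ToF: distance from the phase shift between emitted
    # and reflected modulation, unambiguous up to C / (2 * f_mod).
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

print(f"{direct_tof_distance(20e-9):.2f} m")          # 20 ns round trip ~ 3 m
print(f"{cw_tof_distance(math.pi / 2, 30e6):.2f} m")  # 90 deg at 30 MHz ~ 1.25 m
```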

The new Helios ToF 3D camera from LUCID Vision Labs uses Sony Semiconductor's DepthSense 3D technology. Download the whitepaper to learn the 4 key benefits of this camera, example applications, and its operating range and accuracy.

Download whitepaper

Have questions? Tell us more about your application and one of our sales engineers will contact you.

1st Vision's sales engineers have an average of 20 years' experience to assist in your camera selection. Representing the largest portfolio of industry-leading brands in imaging components, we can help you design the optimal vision solution for your application.