How filters create contrast in machine vision applications


Imaging outcomes depend crucially on contrast. Only by making a feature “pop” relative to the surrounding image field can machine vision software identify that feature reliably.

While sensor choice, lensing, and lighting are important aspects of building machine vision solutions with effective contrast creation, careful selection and application of filters can provide additional leverage in many applications. Filters are often overlooked or misunderstood, so here we provide a first look at machine vision filter concepts and benefits.

Before and after applying filters

In the 4 image pairs above, each left-half image was generated with the same sensor, lighting, and exposure duration as the corresponding right-half image. But the right-half images have had filters applied – for example to reduce glare or scratch-induced scatter, or to separate or block certain wavelengths. If your brain finds the left-half images difficult to discern, image processing software wouldn’t be “happy” with them either!

While there are also filtering benefits in color and SWIR imaging, it is worth noting that the examples above are shown in monochrome. Surprising to many, it can often be both more effective and less expensive to build machine vision solutions in the monochrome space – often with filters – than in color. This may seem counter-intuitive, since most humans enjoy color vision and use it effectively when driving, judging produce quality, choosing clothing that matches our skin tone, and so on. But compared to single-sensor color cameras, monochrome single-sensor cameras paired with appropriate filters:

  • can offer higher contrast and better resolution
  • provide better signal-to-noise ratio
  • can be narrowed to sensitivity in the near-ultraviolet, visible, and near-infrared spectra

These features give monochrome cameras a significant advantage when it comes to optical character recognition and verification, barcode reading, scratch or crack detection, wavelength separation and more. Depending on your application, monochrome cameras can be three times more efficient than color cameras.

Identify red vs. blue items

A color camera may be the first thought when separating items by color, but it can be more efficient and effective to use a monochrome camera with a color bandpass filter. As shown above, to brighten or highlight an item that is predominantly red, a red filter can be used to transmit only the red portion of the spectrum while blocking the rest of the light. The reverse also works: a blue filter passes blue wavelengths while blocking red and other wavelengths.
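To make the idea concrete, here is a minimal NumPy sketch of why a red bandpass filter brightens red items and darkens blue ones for a monochrome sensor. The `mono_with_bandpass` helper, the scene values, and the assumption of an ideal filter are all illustrative, not a model of any particular filter.

```python
import numpy as np

# Hypothetical reflected intensities (R, G, B) for three pixels:
# a red item, a blue item, and a gray background.
scene = np.array([
    [200,  30,  30],   # red item
    [ 30,  30, 200],   # blue item
    [120, 120, 120],   # gray background
], dtype=float)

def mono_with_bandpass(rgb, band):
    """Approximate what a monochrome sensor records behind an ideal
    color bandpass filter: only the passed band reaches the sensor."""
    idx = {"red": 0, "green": 1, "blue": 2}[band]
    return rgb[:, idx]

red_view = mono_with_bandpass(scene, "red")    # red item brightest
blue_view = mono_with_bandpass(scene, "blue")  # blue item brightest
```

With the red filter the red item stands out against the background while the blue item goes dark; swapping to the blue filter reverses the contrast.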

Here we have touched on just a few examples, to whet the appetite. We anticipate developing a Tech Brief with a more in-depth treatment of filters and their applications. We partner with Midwest Optical to offer you a wide range of filters for diverse application solutions.

Contact us

1st Vision’s sales engineers have an average of 20 years of experience to assist in your camera selection. Representing the largest portfolio of industry-leading brands in imaging components, we can help you design the optimal vision solution for your application.

About Us | 1stVision

1st Vision is the most experienced U.S. distributor of machine vision cameras, lenses, frame grabbers, cables, lighting, and software.

Types of 3D imaging systems – and benefits of Time of Flight (ToF)

Time Of Flight Gets Precise: Whitepaper

2D imaging is long-proven for diverse applications from bar code reading to surface inspection, presence-absence detection, etc.  If you can solve your application goal in 2D, congratulations!

But some imaging applications are only well-solved in three dimensions.  Examples include robotic pick and place, palletization, drones, security applications, and patient monitoring, to name a few.

For such applications, one must select or construct a system that creates a 3D model of the object(s).  Time of Flight (ToF) cameras from Lucid Vision Labs are one way to achieve cost-effective 3D imaging in many situations.

ToF systems setup
ToF systems have a light source and a sensor.

ToF is not about objects flying around in space! It uses the time of flight of light: differences in object depth are ascertained from measurable variances between light projected onto an object and the light reflected back to a sensor from that object.  With sufficiently precise registration to object features, a 3D “point cloud” of x,y,z coordinates can be generated – a digital representation of the real-world objects.  The point cloud is the essential data set enabling automated image processing, decisions, and actions.

In this latest whitepaper we go into depth to learn:
1. Types of 3D imaging systems
2. Passive stereo systems
3. Structured light systems
4. Time of Flight systems

Let’s briefly put ToF in context with other 3D imaging approaches:

Passive Stereo: Systems with two cameras a fixed distance apart can triangulate by matching features in both images and calculating the disparity from the midpoint.  Alternatively, a robot-mounted single camera can take multiple images from different positions, as long as positional accuracy is sufficient to calibrate effectively.
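The triangulation step can be sketched with the standard pinhole-stereo relationship, depth Z = f·B/d (focal length f in pixels, baseline B, disparity d in pixels). The function name and example numbers below are hypothetical.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: depth Z = f * B / d, with focal length f in pixels,
    baseline B in metres, and disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("no valid correspondence (non-positive disparity)")
    return focal_px * baseline_m / disparity_px

# e.g. 1000 px focal length, 10 cm baseline, 50 px disparity -> 2 m depth
print(depth_from_disparity(1000, 0.10, 50))  # 2.0
```

Note how depth resolution degrades with distance: the same one-pixel disparity error corresponds to a much larger depth error for far objects than for near ones.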

Challenges limiting passive stereo approaches include:

Occlusion: when part of the object(s) cannot be seen by one of the cameras, features cannot be matched and depth cannot be calculated.

Occlusion occurs when a part of an object cannot be imaged by one of the cameras.

Few/faint features: If an object has few identifiable features, no matching correspondence pairs may be generated, also limiting essential depth calculations.

Structured Light: A clever response to the few/faint features challenge can be to project structured light patterns onto the surface.  There are both active stereo systems and calibrated projector systems.

Active stereo systems are like two-camera passive stereo systems, enhanced by the (active) projection of optical patterns, such as laser speckles or grids, onto the otherwise feature-poor surfaces.

Active stereo example using laser speckle pattern to create texture on object.

Calibrated projector systems use a single camera, together with calibrated projection patterns, to triangulate from the vertex at the projector lens.  A laser line scanner is an example of such a system.

Besides custom systems, there are also pre-calibrated structured light systems available, which can provide low cost, highly accurate solutions.

Time of Flight (ToF): While structured light systems can provide surface height resolutions better than 10μm, they are limited to short working distances. ToF can be ideal for applications such as people monitoring, obstacle avoidance, and materials handling, operating at working distances of 0.5m – 5m and beyond, with depth resolution requirements of 1 – 5mm.

ToF systems measure the time it takes for light emitted from the device to reflect off objects in the scene and return to the sensor for each point of the image.  Some ToF systems use pulse-modulation (Direct ToF).  Others use continuous wave (CW) modulation, exploiting phase shift between emitted and reflected light waves to calculate distance.
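As a sketch of both modulation approaches: for pulse (direct) ToF the distance is d = c·t/2, and for CW modulation the measured phase shift φ maps to d = c·φ/(4π·f_mod), unambiguous only up to c/(2·f_mod). The function names and example values below are illustrative assumptions.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def direct_tof_distance(round_trip_s):
    """Pulse (direct) ToF: light travels out and back, so d = c * t / 2."""
    return C * round_trip_s / 2

def cw_tof_distance(phase_rad, mod_freq_hz):
    """CW ToF: a phase shift phi maps to d = c * phi / (4 * pi * f_mod),
    unambiguous only up to c / (2 * f_mod)."""
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

print(direct_tof_distance(6.67e-9))         # ~1.0 m for a ~6.67 ns round trip
print(cw_tof_distance(math.pi / 2, 100e6))  # ~0.375 m at 100 MHz modulation
```

The tiny round-trip times involved (a few nanoseconds per metre) are why ToF sensors need specialized timing or phase-measuring pixels rather than ordinary image sensors.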

The new Helios ToF 3D camera from LUCID Vision Labs uses Sony Semiconductor’s DepthSense 3D technology. Download the whitepaper to learn about 4 key benefits of this camera, example applications, and its operating range and accuracy.

Download the Time of Flight whitepaper

Have questions? Tell us more about your application and our sales engineer will contact you.


Ultra-Uniform lighting provides enhanced contrast for machine vision

The old saying that a chain is only as strong as its weakest link holds true in machine vision applications. A machine vision hardware solution typically consists of a camera, lens, and light to obtain an image for processing. If any one of these components is improperly selected, it becomes the weakest link. Lighting is a key component for producing features and contrast for image processing. Uniformity, brightness, wavelength, and specific geometries used in conjunction with each other are the key elements of a good lighting component. In this blog, we introduce Phlox machine vision lighting, which is unsurpassed in uniformity and has many key advantages.

Phlox lighting provides much more than just light, regardless of the application. Below are 5 main attributes making Phlox unique:


1 – Better uniformity: A precise mathematical model and micro prisms (< 30 µm) enable uniformity of up to +/- 5% across the backlight surface.


2 – Higher luminance: Up to 80% of the light injected is emitted onto the target surface, up to twice as much as light pipes using diffusion.


3 – More compact design: Phlox technology is well suited to manufacturing very thin light pipes (1 mm). This enables Phlox to create very low-profile products in response to the need to reduce bulk in various applications.

4 – Long life cycle: The combination of Phlox technology and the quality of the designs ensures constant control of temperature and provides an exceptional life cycle.


5 – Faster response: Fast delivery of standard products, and five (5) weeks on average to design and deliver a prototype or custom-made product.
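As an aside on what a uniformity figure like +/- 5% means: one common way to quantify backlight uniformity is +/-(max − min)/(max + min) over the emitting surface. The sketch below assumes that definition (vendors may measure differently) and uses synthetic intensity readings.

```python
import numpy as np

def uniformity_pct(image):
    """Illustrative uniformity metric: +/- (max - min) / (max + min) over
    the emitting surface, in percent.  (Assumed definition; vendors may
    use a different measurement protocol.)"""
    mx, mn = float(image.max()), float(image.min())
    return 100.0 * (mx - mn) / (mx + mn)

# Synthetic backlight readings around a 200-count nominal level:
backlight = np.array([195.0, 200.0, 199.0, 205.0, 201.0])
print(uniformity_pct(backlight))  # 2.5 -> uniform to within +/- 2.5%
```

A metric like this, computed on a flat-field image of the backlight, gives a quick acceptance check when comparing lighting components.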

How do Phlox back lights compare to the competition?

The following are application examples showing the increased contrast generated by Phlox back lights:

Less stray light & better contrast
Gear profile is prominent
Phlox backlights emit more direct light

In conjunction with the unparalleled uniformity, Phlox lighting can be configured with a variety of wavelengths optimizing contrast as seen in the examples below.

Using the right wavelength to create and enhance contrast

Phlox uses exclusive technology, providing many key advantages along with a 2-year warranty.

Phlox technology
Watch this short video detailing the engraving process

1stVision is the exclusive distributor in North America for Phlox products. Contact us to discuss your application with our experienced technical advisors – CLICK HERE


5 benefits of using strobed lighting for machine vision applications

Gardasoft controller for machine vision

Pulsing (aka strobing) a machine vision LED light is a powerful technique that can benefit machine vision systems in various ways.

This blog post outlines 5 benefits you will receive from pulsing an LED light head.  Gardasoft is an industry leader in strobe controllers capable of driving 3rd-party LED light heads or custom LED banks for machine vision.

1 – Increase the LED light output

It is common to use pulsed light to “freeze” motion for high-speed inspection.  But when the light is on only for short bursts, it is possible to increase the light output beyond the LED manufacturer’s specified maximum, using a technique called “overdrive”.  In many cases, the LED can be driven at up to 10X the constant-current power input, in turn providing brighter pulses of light.  When synchronized with the camera acquisition, a brighter scene is generated.
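The reason low duty cycles make overdrive possible is average thermal load: peak power times duty cycle. A quick illustrative calculation (not a substitute for the LED or controller datasheet) shows why a 10X pulse can be sustainable:

```python
def average_power_ratio(overdrive_factor, duty_cycle):
    """Average thermal load on the LED relative to running continuously at
    rated current: peak drive times duty cycle.  Illustrative only; always
    follow the LED and controller datasheets for real overdrive limits."""
    return overdrive_factor * duty_cycle

# A 10x pulse at a 10% duty cycle averages the same power as 1x continuous:
print(average_power_ratio(10, 0.10))  # 1.0
```

In practice the strobe controller enforces the pulse-width and duty-cycle limits so the peak current never exceeds what the LED head can dissipate.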

2 – Extend the life of the LED 

As mentioned in the first benefit, strobing an LED light head turns the LED on only for a short period of time.  In many cases the duty cycles are very low, which extends the life of the LED and slows its degradation, in turn keeping the scene at a consistent brightness for years.  (e.g. at a 10% duty cycle the LED is on only a tenth of the time, extending the lifetime of the light head correspondingly)

3 – Ambient Light control

Ambient light conditions frequently interfere with machine vision measurements, and these issues can be solved by pulsing and overdriving the system’s LEDs. For example, overdriving the LED by 200% doubles the light intensity and enables the camera exposure to be halved, reducing the effects of ambient light by a factor of 4.  The end result is that the camera’s exposure captures light predominantly from the given LED source and NOT ambient light.

4 – High speed imaging and Increased depth of field

Motion blur in images of fast-moving objects can be eliminated with appropriate pulsing of the light.  In some cases a short camera exposure will be good enough to freeze motion (read our blog on calculating camera exposure), but may suffer from low light intensity under constant illumination.  Overdriving a light can boost the output to up to 10x its brightness rating in short pulses.  The increased brightness can allow the whole system to run faster because of the reduced exposure times.  Higher light output may also allow the aperture to be reduced to give better depth of field.

Extended Depth of Field (DOF) is achieved with a brighter light allowing the f-stop to be turned down

Gardasoft controllers include patented SafePower and SafeSense technology, which prevents overdriving from damaging the light.

5 – Multi-lighting schemes & Computational Imaging

Lighting controllers can be used to reduce the number of camera stations. Several lights are set up at a single camera station and pulsed at different intensities and durations in a predefined sequence.

Generate edge and texture images using shape from shading

Each different lighting setup can highlight particular features in the image. Multiple measurements can be made at a single camera station instead of needing multiple stations, which reduces mechanical complexity and saves money. For example, sequentially triggering 3 different types of lighting could allow a single camera to acquire specific images for bar code reading, surface defect inspection, and a dimensional check in rapid succession.

Pulsing can also be used for computational imaging, where a component is illuminated sequentially by 4 different lights from different directions. The resulting images are combined to exclude the effect of random reflections from the component surface.  Contact us and ask for the white paper on computational imaging to learn more.
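A minimal sketch of the idea: if a specular glint moves as the light direction changes while the diffuse surface stays constant, a per-pixel minimum across the four exposures suppresses the glints. This is one simple composite for illustration; commercial computational-imaging packages use more elaborate combinations.

```python
import numpy as np

# Four synthetic 3x3 images of the same part, each lit from a different
# direction.  The diffuse surface reflects ~100 counts everywhere, but a
# specular glint (255) appears in a different place under each light.
base = np.full((3, 3), 100.0)
images = []
for i in range(4):
    img = base.copy()
    img.flat[i] = 255.0        # the glint moves with the light direction
    images.append(img)

# Per-pixel minimum across the sequence discards highlights that appear
# in only some of the exposures, keeping the stable diffuse signal.
composite = np.minimum.reduce(images)
print(composite.max())  # 100.0 -- glints removed
```

The same sequencing hardware that drives the four lights in turn is what makes this kind of multi-exposure combination practical at production line rates.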

The images on the right (top and bottom) were taken with bright field and dark field lighting. The left image is the result of computational imaging combining the two lighting techniques, allowing particles and water bubbles to be seen.

Pulsed multiple lighting schemes can also benefit line scan imaging by using different illumination sources to capture alternate lines. Individual images for each illumination source are then easily extracted using image processing software.
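Extracting one image per illumination source from such an interleaved line-scan acquisition is a simple slicing operation; the frame contents below are synthetic.

```python
import numpy as np

# Synthetic line-scan acquisition: 8 captured lines of 4 pixels, with two
# illumination sources alternating line by line (even lines: source A,
# odd lines: source B).
frame = np.arange(32, dtype=float).reshape(8, 4)

image_a = frame[0::2]  # lines captured under source A
image_b = frame[1::2]  # lines captured under source B

print(image_a.shape, image_b.shape)  # (4, 4) (4, 4)
```

Each deinterleaved image has half the line count of the raw acquisition, so the line rate is typically doubled to keep the effective resolution per illumination source.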

In conclusion, strobe controllers provide many benefits and can save more money across the overall setup than the cost of the controller itself!

1st Vision has additional white papers on the following topics.  Contact us and ask for any of these informative white papers – simply send an email and ask for one or all of them.
1 – Practical use of LED controllers
2 – Intelligent Lighting for Machine Vision Systems
3 – LED Strobe lighting for ITS systems
4 – Liquid Lens technology and controllers for machine vision
5 – Learn about computational imaging and how CCS Lighting can help

Contact us

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera selection.  With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Related Topics

Learn how liquid lenses keep continuous focus on machine vision cameras when the working distance changes.

White Paper – Key benefits in using LED lighting controllers for machine vision applications