5 benefits of using strobed lighting for machine vision applications

Gardasoft controller for machine vision

Pulsing (aka strobing) a machine vision LED light is a powerful technique that can benefit machine vision systems in various ways.

This blog post outlines 5 benefits you will receive from pulsing an LED light head.  Gardasoft is an industry leader in strobe controllers capable of driving third-party LED light heads or custom LED banks for machine vision.

1 – Increase the LED light output

It is common to use pulsed light to “freeze” motion for high-speed inspection.  But when the light is on for only a short burst, it is possible to increase the light output beyond the LED manufacturer’s specified maximum, using a technique called “overdrive”.  In many cases, the LED can be driven at up to 10X its rated constant current, producing much brighter pulses of light.  When these pulses are synchronized with the camera acquisition, a brighter scene is generated.

Gardasoft LED overdrive
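A minimal Python sketch of the headroom idea, assuming a simple average-current rule of thumb (the function name, the 10x cap and the rule itself are illustrative assumptions, not Gardasoft's published algorithm):

```python
def max_overdrive_factor(duty_cycle, limit=10.0):
    """Estimate a safe overdrive multiple for a pulsed LED.

    Rule of thumb (illustrative): keep the *average* current at or below
    the LED's rated continuous current, so the allowable peak-current
    multiple is roughly 1 / duty_cycle, capped at the controller's limit.
    Always follow the light and controller manufacturer's actual ratings.
    """
    if not 0 < duty_cycle <= 1:
        raise ValueError("duty cycle must be in (0, 1]")
    return min(1.0 / duty_cycle, limit)

# A 100 us pulse fired every 10 ms is a 1% duty cycle,
# leaving headroom up to the 10x cap in this simple model.
duty = 100e-6 / 10e-3
print(max_overdrive_factor(duty))  # -> 10.0
```

Real controllers enforce safe pulse limits in firmware; this sketch only shows why low duty cycles create the headroom for brighter pulses.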

2 – Extend the life of the LED 

As mentioned in the first benefit, strobing an LED light head turns the LED on for only a short period of time.  In many cases the duty cycle is very low, which slows degradation and extends the life of the LED, keeping the scene at a consistent brightness for years.  (e.g. at a duty cycle of only 10%, the lifetime of the LED head increases by roughly 10X)
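The lifetime arithmetic can be sketched in a few lines of Python. This assumes a simple linear on-time model (real degradation also depends on drive current and temperature):

```python
def lifetime_multiplier(pulse_width_s, period_s):
    """Rough on-time model: if the LED is lit only a fraction of the time,
    its usable service life scales with the inverse of that duty cycle."""
    duty_cycle = pulse_width_s / period_s
    return 1.0 / duty_cycle

# A 1 ms pulse every 10 ms is a 10% duty cycle, so the light head
# accumulates on-time roughly 10x more slowly than in continuous use.
print(lifetime_multiplier(1e-3, 10e-3))
```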

3 – Ambient Light control

Ambient light conditions frequently interfere with machine vision measurements, and these issues can be solved by pulsing and overdriving the system’s LEDs. For example, overdriving the LED to 200% doubles the light intensity and enables the camera exposure to be halved, reducing the effects of ambient light by a factor of 4.  The end result is that the camera’s exposure captures light predominantly from the given LED source and NOT ambient light.

4 – High-speed imaging and increased depth of field

Motion blur in images of fast-moving objects can be eliminated with appropriate pulsing of the light.  In some cases a defined camera exposure will be short enough to freeze motion (read our blog on calculating camera exposure), but the image may suffer in light intensity under constant illumination.  Overdriving a light can boost the output to up to 10x its brightness rating in short pulses.  The increased brightness can allow the whole system to be run faster because of the reduced exposure times.  Higher light output may also allow the aperture to be reduced to give better depth of field.

Extended depth of field (DOF) is achieved with a brighter light, allowing the f-stop to be turned down.

Gardasoft controllers include our patented SafePower and SafeSense technology which prevents over driving from damaging the light.

5 – Multi-lighting schemes & computational imaging

Lighting controllers can be used to reduce the number of camera stations. Several lights are set up at a single camera station and pulsed at different intensities and durations in a predefined sequence.

CCS America Shape from shading
Generate edge and texture images using shape from shading

Each lighting scheme can highlight particular features in the image. Multiple measurements can be made at a single camera station instead of needing multiple stations, reducing mechanical complexity and saving money. For example, sequentially triggering 3 different types of lighting could allow a single camera to acquire specific images for bar code reading, surface defect inspection and a dimensional check in rapid succession.
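A multi-light sequence like the bar code / defect / dimensional example can be sketched as a simple trigger-driven table. All names and values here are hypothetical, for illustration only:

```python
# Hypothetical sequence for one camera station: each hardware trigger
# fires the next (light, intensity %, pulse width us) step, cycling
# through the three inspection tasks.
SEQUENCE = [
    ("ring_light", 100, 50),   # bar code reading
    ("dark_field", 300, 20),   # surface defect inspection (overdriven)
    ("backlight",  100, 30),   # dimensional check
]

def next_step(trigger_count):
    """Return the lighting step for a given hardware trigger number."""
    return SEQUENCE[trigger_count % len(SEQUENCE)]

print(next_step(0))  # first trigger -> ring_light step
print(next_step(4))  # fifth trigger -> dark_field step again
```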

Pulsing can also be used for computational imaging, where a component is illuminated sequentially by 4 different lights from different directions. The resultant images are then combined to exclude the effect of random reflections from the component surface.  Contact us and ask for the white paper on computational imaging to learn more.

CCS computational imaging
The images on the right (top and bottom) were taken with bright field and dark field lighting. The left image is the result of computational imaging combining the lighting techniques, allowing particles and water bubbles to be seen.

Pulsed multiple lighting schemes can also benefit line scan imaging by using different illumination sources to capture alternate lines. Individual images for each illumination source are then easily extracted using image processing software.
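Extracting the per-source images from such an interleaved line-scan capture is straightforward array slicing. A minimal NumPy sketch (the array shape and light names are illustrative assumptions):

```python
import numpy as np

# Simulated line-scan frame in which even and odd lines were captured
# under two alternating illumination sources.
frame = np.arange(8 * 4).reshape(8, 4)   # 8 captured lines, 4 pixels wide

bright_field = frame[0::2]   # lines captured under light A
dark_field   = frame[1::2]   # lines captured under light B

# Each extracted image has half the line count of the raw capture.
assert bright_field.shape == (4, 4) and dark_field.shape == (4, 4)
```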

In conclusion, strobe controllers provide many benefits and can save more money in the overall setup than the cost of the controller itself!

1st Vision has additional white papers on the following topics.  Simply send an email and ask for one or all of these informative white papers:
1 – Practical use of LED controllers
2 – Intelligent Lighting for Machine Vision Systems
3 – LED Strobe lighting for ITS systems
4 – Liquid Lens technology and controllers for machine vision
5 – Learn about computational imaging and how CCS Lighting can help

Contact us

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera selection.  With a large portfolio of lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

Related Topics

Learn how liquid lenses keep continuous focus on machine vision cameras when the working distance changes.

White Paper – Key benefits in using LED lighting controllers for machine vision applications

Imaging Basics – Calculating Exposure time for machine vision cameras

calculate camera exposure

In any industrial camera application, one key setting is the exposure time of the camera.  If this is set arbitrarily, the resulting image may be blurry due to movement in the scene we are imaging.  To optimize our settings, we can calculate the minimum exposure time that eliminates blur while maximizing scene brightness.  In this blog post, we explain the effects of exposure and show how to calculate it for a given application.

First, let’s explain camera exposure.  Exposure time, or shutter speed, is the amount of time you let light fall on the image sensor. The longer the exposure time, the more you ‘expose’ the sensor, charging up the pixels to make them brighter.  Shutter speeds in photography cameras are usually given as a fraction of a second, like 1/60, 1/125 or 1/1000, a convention that comes from the film days.  In industrial cameras, exposure time is normally given in milliseconds, which is simply that fractional shutter speed expressed in milliseconds (i.e. 1/60 sec ≈ 0.0167 seconds, or 16.7 ms).
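The conversion is a one-liner; a small Python sketch:

```python
def shutter_to_ms(denominator):
    """Convert a photographic shutter speed of 1/denominator seconds
    to the exposure time in milliseconds used by industrial cameras."""
    return 1000.0 / denominator

print(shutter_to_ms(60))    # -> ~16.7 ms
print(shutter_to_ms(500))   # -> 2.0 ms
```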

So how does this relate to blur?  Blur is what you get when your object moves relative to the sensor, crossing 2 or more pixels during the exposure time.

You see this when you take a picture of something moving faster than the exposure time can freeze.  In the image to the left, we have a crisp picture of the batter, but the ball is moving very fast, causing it to appear blurry.  The exposure in this case was 1/500 sec (2 ms), but the ball moved many pixels during that time.

The faster the shutter speed, the less the object can move relative to where it started.  In machine vision, cameras are fixed so they don’t move; what we are worried about is the object moving during the exposure time.

Depending on the application, it may or may not be sensitive to blur.  For instance, say you have a camera with a pixel array of 1280 pixels in the x-axis, and your object spans 1000 pixels on the sensor.  If the object moves 1 pixel to the right during the exposure, it has moved 1 pixel out of 1000.  This is what we call “pixel blur”, yet visibly you cannot notice it.  If we have an application in which we’re just viewing a scene and no machine vision algorithms are making decisions on the image, then as long as the object moves only a very small fraction of its total size during the exposure, we probably don’t care!

pixel blur diagram
Array of pixels – Movement of an object during exposure across pixels = Pixel Blur

Now assume you are measuring this object using machine vision algorithms.  Movement becomes more significant, because you now have uncertainty in the actual size of the object.  If your tolerances are within 1/1000, you are still OK.  But if your object spans only 100 pixels and moves 1 pixel, a viewing application might still be fine, while a measurement application is now off by 1%, and that might not be tolerable!

In most cases, we want crisp images with no pixel blur.  The good news is that this is relatively easy to calculate!  To calculate blur, you need to know the following:

  • Camera resolution in pixels (in the direction of travel)
  • Field of view (FOV)
  • Speed of the object
  • Exposure time

Then you can calculate how many pixels the object will move during the exposure using the following formula:

B = Vp * Te * Np / FOV

Where:
B = Blur in pixels
Vp = part velocity
FOV = Field of view in the direction of motion
Te = Exposure time in seconds
Np = number of pixels spanning the field of view

For example, if Vp is 1 cm/sec, Te is 33 ms, Np is 640 pixels and the FOV is 10 cm, then:

B = 1 cm/sec * 0.033 sec * 640 pixels / 10 cm = 2.1 pixels
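The formula and its inverse (the longest exposure that keeps blur at or below a target) fit in a short Python sketch; the helper names are ours, and the only requirement is that velocity and FOV use the same units:

```python
def blur_pixels(vp, te, np_pixels, fov):
    """B = Vp * Te * Np / FOV  (vp and fov must share units, te in seconds)."""
    return vp * te * np_pixels / fov

def max_exposure_for_blur(vp, np_pixels, fov, max_blur=1.0):
    """Invert the formula: the longest exposure keeping blur <= max_blur."""
    return max_blur * fov / (vp * np_pixels)

# The worked example: 1 cm/s, 33 ms exposure, 640 px spanning a 10 cm FOV
print(blur_pixels(1.0, 0.033, 640, 10.0))      # -> ~2.1 pixels of blur
print(max_exposure_for_blur(1.0, 640, 10.0))   # -> 0.015625 s, i.e. ~15.6 ms
```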

In most cases, blurring becomes an issue beyond 1 pixel.  In precision measurements, even 1 pixel of blur may be too much, requiring a faster exposure time.

1st Vision has over 100 years of combined experience; contact us for help calculating the correct exposure.

Pixel blur calculator

Contact us

Related Blog posts that you may also find helpful are below: 

Imaging Basics: How to Calculate Resolution for Machine Vision

Imaging Basics – Calculating Lens Focal length

CCD vs CMOS industrial cameras – Learn how CMOS image sensors excel over CCD!

CMOS image sensors are now the image sensor of choice in machine vision industrial cameras!  But why is this?

Allied Vision conducted a nice comparison between CCD and CMOS cameras showing the advantages in the latest Manta cameras.

Until recently, CCD was generally recommended for better image quality, offering the following properties:

  • High pixel homogeneity, low fixed pattern noise (FPN)
  • Global shutters for machine vision applications requiring very short exposure times

In the past, CMOS image sensors were instead chosen for their existing advantages:

  • High frame rate and lower power consumption
  • No blooming or smear image artifacts, unlike CCD image sensors
  • High Dynamic Range (HDR) modes for acquisition of contrast-rich and extremely bright objects

Today, CMOS image sensors offer many more advantages in industrial cameras versus CCD image sensors, as detailed below.

The overall key advantages are better image quality than earlier CMOS sensors due to higher sensitivity, lower dark and spatial noise, and higher quantum efficiency (QE), as seen in the specifications comparing a CCD and a CMOS camera.

Sony ICX655 CCD vs Sony IMX264 CMOS sensor

Comparing the specifications between CCD and CMOS  industrial cameras, the advantages are clear.

  • Higher quantum efficiency (QE) – 64% vs 49%; higher is better at converting photons to electrons.
  • Pixel well depth (µe.sat) – 10613 electrons (e-) vs 6600 e-; a higher well depth is beneficial.
  • Dynamic range (DYN) – CMOS provides almost +17 dB more dynamic range, partly a result of the deeper pixel well combined with low noise.
  • Dark noise – CMOS is significantly lower than CCD, with only 2 electrons vs 12!
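A rough sketch of where dynamic-range figures come from: a common first-order estimate is 20·log10(full well / dark noise). Note this simple estimate will not exactly match published datasheet figures, which are based on measured noise:

```python
import math

def dynamic_range_db(full_well_e, dark_noise_e):
    """First-order dynamic range estimate: 20*log10(full well / dark noise)."""
    return 20.0 * math.log10(full_well_e / dark_noise_e)

# Using the well depth and dark noise listed above (electrons):
print(round(dynamic_range_db(10613, 2), 1))   # CMOS estimate: ~74.5 dB
print(round(dynamic_range_db(6600, 12), 1))   # CCD estimate:  ~54.8 dB
```

The sketch makes the mechanism clear: deep wells combined with very low dark noise are what widen the usable dynamic range.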

Images are always worth a thousand words!  Below are several comparison images contrasting the latest Allied Vision CMOS industrial cameras vs CCD industrial cameras.

The dynamic range of today’s CMOS image sensors derives from several of the characteristics above; they can provide higher-fidelity images with better dynamic range and lower dark noise, as seen in this image comparison of a couple of electronics parts.

The comparison above illustrates how higher contrast can be achieved with high dynamic range and low noise in the latest CMOS industrial cameras.

  • High noise in the CCD image causes low contrast between characters on the integrated circuit, whereas the CMOS sensor provides higher contrast.
  • Increased dynamic range from the CMOS image allows darker and brighter areas in an image to be seen.  The battery (left part) is not as saturated as in the CCD image, allowing more detail to be observed.

Current CMOS image sensors eliminate several artifacts and provide more useful images for processing.  The images below show an example of a PCB with illuminated LEDs imaged with a CCD vs a CMOS industrial camera.

CMOS images exhibit less blooming of bright areas (the LEDs in the image), no smearing (the vertical lines seen in the CCD image) and lower noise (as seen in the darker areas), providing higher overall contrast.

  • Smearing (the vertical lines seen in the CCD image) is eliminated with CMOS.  Smear has always been an inherent artifact of CCDs.
  • The dynamic range inherent to CMOS sensors allows the LEDs to saturate less than in the CCD image, so more detail can be seen.
  • Lower noise in the CMOS image, as seen in the bottom line graph, produces a cleaner image.

More advantages of new CMOS image sensors include:

  • Higher frame rates and shutter speeds than CCD, resulting in less image blur with fast-moving objects.
  • The much lower cost of CMOS sensors translates into much lower-cost cameras!
  • Improved global shutter efficiency.

CMOS image sensor manufacturers are also designing sensors that easily replace CCD sensors, making for an easy transition with lower cost and better performance.  Allied Vision has several new cameras replacing current CCDs, with more to come!  Below are a few popular cameras / image sensors that have recently been crossed over to CMOS image sensors.

The Sony ICX424 and Sony ICX445 (1/3″ sensors) found in the Manta G-032 and Manta G-125 cameras are now replaced by the Sony IMX273 in the Manta G-158 camera, keeping the same sensor size.  (Read more here)

The Sony ICX424 (1/3″ sensor) can also be replaced by the Sony IMX287 (1/2.9″ sensor), whose 6.9 µm pixels closely match the older ICX424’s 7.4 µm pixels.  The Allied Vision Manta G-040 is a nice solution with all the benefits of the latest CMOS image sensor technology.  View the short videos below for the highlights.

 

Contact us


Related Posts

What are the attributes to consider when selecting a camera and its performance?

Allied Vision Manta G-040 & G-158 provide great replacements to legacy CCD cameras

Upgrade your 5MP CCD (Sony ICX625) camera for higher performance with an Allied Vision Mako G-507 (IMX264)

 

3-CMOS machine vision cameras bring color fidelity to the market at half the price of previous models

JAI- 3CMOS Apex cameras

JAI Apex Series cameras

Single-sensor machine vision cameras use a mosaic filter placed on the sensor to create color images.  This is also called a ‘Bayer’ filter, named after Bryce Bayer, who invented it.  However, color images from this filter lose resolution and color fidelity compared to ‘true’ color images.  Spatial resolution is lost due to interpolation, while the Bayer filter pattern reduces true color representation, sensitivity and dynamic range.  To overcome these issues, multi-sensor (3-CCD / 3-CMOS) machine vision cameras can be used.

Historically, machine vision 3-CCD cameras were high cost, but that has changed with CMOS becoming the leading image sensor technology.  Machine vision 3-CMOS cameras now provide major benefits over Bayer cameras at a more attractive entry cost.

CMOS sensor technology has lowered the price of 3-sensor cameras by 50%, providing a better alternative to Bayer color cameras for many applications.  JAI’s Apex Series 3-CMOS cameras are the game changer for demanding color applications.  Contact us

Watch this video to learn more about 3-CCD/3-CMOS cameras

Machine vision 3-CMOS cameras vs Bayer cameras provide major benefits for color applications

Better color precision – Accurate RGB values are obtained for each pixel, so there is no interpolation/estimation of colors as found in Bayer cameras. This can be critical for paint/ink matching, printing inspection systems, digital pathology, or other applications where color values must be extremely accurate.

Better spatial resolution – The Bayer interpolation process also tends to blend edges and small details. While this can be pleasing to the eye, it can make spatial measurements or bar code reading imprecise or error-prone, forcing the use of more expensive high-resolution Bayer cameras or requiring a second monochrome camera for imaging these details.

JAI Apex 3-CMOS image vs JAI 5MP Bayer image

Higher sensitivity – The prism glass in the AP-3200T-USB and associated cameras has better light transmission properties than the polymer filters on a standard Bayer sensor.  This enables more light to reach the pixels for better overall sensitivity and lower lighting requirements.

Lower noise, higher dynamic range – White balancing on a JAI prism camera can be done on individual channels with shutter adjustments instead of adding gain to the image. This results in lower noise and higher usable dynamic range.

3ccd vs Bayer dynamic range

What about “improved” Bayer capabilities like 5×5 interpolation?

Several camera manufacturers claim vastly improved capabilities for color imaging, including 5×5 de-Bayering, color anti-aliasing, denoising and improved sharpness.  But consider the following: 5×5 interpolation means you are using an even larger area within the image to estimate each pixel’s color value. So while this can do a better job of “smoothing” color transitions to the eye, it can actually result in less-precise color values for image processing, especially where color variation is high.

This is illustrated in the following images, taken under identical conditions by a camera with 5×5 debayering and by a JAI Apex 3-CMOS camera.  The CIE L*a*b* reference chart provides a set of exact color values expected under specified lighting conditions.  The result: 5×5 debayering produced a 40% mismatch to the expected colors versus 13% for the JAI Apex 3-CMOS camera!

JAI 3-CMOS Apex camera matching

More advanced color imaging features

JAI’s Apex 3-CMOS machine vision cameras provide additional advanced features beyond excellent color fidelity, highlighted as follows:

  • Color space conversion:  Color data from the camera can be provided using built-in conversions to several color spaces including sRGB, Adobe RGB, CIE XYZ and HSI.  Custom RGB conversions can also be done using the camera’s color matrix circuit.
  • Color enhancer function:  Allows the 3-CMOS cameras to “boost” the intensity of 6 colors to help features stand out, such as the red color of blood vs surrounding tissue in a medical application.  Additionally, degrees of edge enhancement can be applied to increase the contrast of color boundaries.
  • Color binning:  While most Bayer cameras do not offer this, the prism architecture of the 3-CMOS cameras lets you easily bin pixels by 1×2, 2×1 and 2×2 to increase sensitivity, reduce shot noise and/or increase the frame rate.
  • Color temperature presets of 3200K, 5000K, 6500K and 7000K
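To see what 2×2 binning does to a single color channel, here is a small NumPy sketch. It illustrates the concept only, not the camera's in-sensor implementation:

```python
import numpy as np

def bin2x2(channel):
    """Illustrative 2x2 binning of one color channel: each output pixel is
    the sum of a 2x2 input neighborhood, trading resolution for signal."""
    h, w = channel.shape
    h, w = h - h % 2, w - w % 2                      # drop odd edge rows/cols
    blocks = channel[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3))                   # 4x the collected signal

img = np.ones((4, 6), dtype=np.uint16)
print(bin2x2(img).shape)   # half the resolution in each axis: (2, 3)
```

Summing four wells per output pixel is why binning raises sensitivity and lowers relative shot noise at the cost of spatial resolution.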

All of these features, along with the reduced cost of 3-CMOS color cameras, make this a very attractive solution for demanding color applications!  Applications in eye diagnostics, pathology, surgical imaging, meat/food inspection, print inspection and automotive color matching are a few that would highly benefit from the JAI Apex 3-CMOS camera series.

Contact us

Need proof that 3-CMOS / 3-CCD prism-based cameras will enhance your application?  Let’s discuss sending you a demo camera!

Currently there are 6 new CMOS models outlined below and full specifications can be found HERE.   

JAI Apex Series cameras

1st Vision is the leading provider of industrial imaging components, with over 100 years of combined imaging experience.  Do not hesitate to contact us regarding the new prices of the 3-CMOS cameras!

Be sure to visit our related blogs on 3-CCD and Prism based cameras

How does a 3CCD camera improve color accuracy and spatial resolution versus standard Bayer color cameras?

White Paper – Learn about High Dynamic Range (HDR) Imaging techniques for Machine Vision

White Paper – How does prism technology help to achieve superior color image quality?