Optotune liquid lenses – 5 case examples for machine vision

Optotune tunable lenses

Optotune & Gardasoft liquid lens controls

Liquid lens technology, with its ability to change focus in a matter of milliseconds, is opening up a host of new applications in both machine vision and the life sciences. It is gaining interest from a wide cross section of applications and adapts easily to standard machine vision lenses.

Liquid lens technology alone provides nice solutions, but when combined with advanced controls, many more applications can be solved.

To learn the fundamentals of liquid lens technology and download a comprehensive white paper, read our previous blog HERE.


In this blog, we will highlight several case application areas for liquid lens technology.

Case 1:  Applications requiring various focus points and extended depth of field:  This covers many applications, such as logistics, packaging, and code reading. Optotune liquid lenses provide the ability to use pre-set focus points, auto-focus, or feedback from distance sensors to the lens. In the example below, two presets are programmed and toggled to read 2D codes at various heights, essentially extending the depth of field.

extended DOF
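To make the idea concrete, here is a minimal Python sketch of the acquisition loop. The controller and camera objects, the decode function, and the two diopter values are hypothetical placeholders, not an actual Optotune or Gardasoft API; in a real setup the presets live in the lens controller and are typically selected by a trigger signal.

```python
# Minimal sketch: toggle between two focus presets and read codes at each height.
# NOTE: 'controller', 'camera', 'decode_2d_codes', and the diopter values below
# are hypothetical placeholders for whatever driver/SDK your setup actually uses.

PRESET_NEAR_DPT = 3.0   # assumed focal power for the taller package
PRESET_FAR_DPT = 1.0    # assumed focal power for the shorter package

def read_codes_at_two_heights(controller, camera, decode_2d_codes):
    """Grab one image at each stored focus preset and decode it."""
    results = []
    for focal_power in (PRESET_NEAR_DPT, PRESET_FAR_DPT):
        controller.set_focal_power(focal_power)   # liquid lens settles within a few ms
        image = camera.grab_frame()
        results.append(decode_2d_codes(image))
    return results
```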

Case 2:  3D imagery of transparent materials / hyperfocal (extended DOF) images:  Using an Optotune liquid lens in conjunction with a Gardasoft TR-CL180 controller, a sequence of images can be taken with the focus point stepped between each image. This technique is known as focus stacking. It can build up a 3D image of transparent samples such as cell tissue or liquid for analysis, and it can also be used to find particles suspended in liquids.

image stacking for cells

A Z-stack of images can also be used to extract 3D (depth-from-focus) data and compute a hyper-focus or extended depth of field (EDOF) image.

The EDOF technique requires taking a stack of individual, well-focused images, preferably synchronized with one flash per image. An example is shown below, with the rendered hyper-focus image at right.
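As a rough illustration of how the EDOF rendering step can work, here is a minimal sketch in Python with OpenCV and NumPy. It assumes the per-step images have already been captured, are grayscale, and are aligned; it is one simple approach, not the specific algorithm used by any particular vendor.

```python
import cv2
import numpy as np

def edof_from_stack(stack):
    """Fuse a focus stack (list of aligned grayscale images) into one
    extended-depth-of-field image by keeping, per pixel, the sharpest slice."""
    # Local sharpness per slice: absolute Laplacian response, slightly smoothed
    sharpness = []
    for img in stack:
        lap = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F)
        sharpness.append(cv2.GaussianBlur(np.abs(lap), (9, 9), 0))
    sharpness = np.stack(sharpness)           # shape: (slices, H, W)
    best = np.argmax(sharpness, axis=0)       # index of sharpest slice per pixel

    stack_arr = np.stack(stack)               # shape: (slices, H, W)
    rows, cols = np.indices(best.shape)
    edof = stack_arr[best, rows, cols]        # pick the sharpest pixel from each slice
    return edof, best                         # 'best' doubles as a coarse depth map
```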

Hyperfocus image

Case 3:  Lens inspection:  Liquid lenses can be used to inspect lenses, such as those in cell phones, for dust and scratches by looking through the lens stack.

Optotune liquid lens stack image

For this application, a liquid lens is used in conjunction with a telecentric lens, taking images at different focus heights through the lens stack.

Case 4:  Bottle / Container inspection:  Optotune liquid lenses can be used to image the bottoms of glass bottles or containers of various heights.

In this example, the camera is always positioned at the neck of the bottle, but the bottom sits at a different height depending on the container.

optotune lens - bottle inspection

Case 5:  Large surface inspections with variation in height:  Items ranging from PCBs to LCDs are not flat, have components of various heights, and need to be inspected at high magnification (typically using lenses with minimal DOF). Optotune liquid lenses are a perfect fit, using preset focus points.

pcb inspection

Machine vision applications using Optotune liquid lenses and controllers are endless!

These applications are just the tip of the iceberg and many more exist, but this should give you a good idea of the capabilities. Gardasoft TR-CL controllers are fully GigE Vision compliant, so any compatible GigE Vision client image processing software, such as Cognex VisionPro, Teledyne DALSA Sherlock, or National Instruments LabVIEW, can be used easily.
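For reference, a GigE Vision device such as the TR-CL180 can also be reached from a generic GenICam-aware library. Below is a rough sketch using the open-source harvesters Python package; the GenTL producer path and the "LensSetpoint" feature name are placeholders for illustration only, since the real node names come from the controller's own GenICam XML, and the exact harvesters calls vary with package versions.

```python
# Sketch only: setting a feature on a GigE Vision controller through GenICam,
# using the open-source 'harvesters' package. Producer path and feature name
# are placeholders; exact calls vary with harvesters versions.
from harvesters.core import Harvester

h = Harvester()
h.add_file('/path/to/GenTL_producer.cti')   # example path to a GenTL producer
h.update()

ia = h.create(0)                            # open the first discovered device
node_map = ia.remote_device.node_map
node_map.LensSetpoint.value = 2.5           # placeholder feature name and value

ia.destroy()
h.reset()
```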


1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Contact us for help with specifications and pricing.

Ph:  978-474-0044  /  info@1stvision.com  / www.1stvision.com

Related Video

Related Blog Posts

Learn how liquid lenses keep continuous focus on machine vision cameras when the working distance changes.

5 benefits of using strobed lighting for machine vision applications

Gardasoft controller for machine vision

Pulsing (aka strobing) a machine vision LED light is a powerful technique that can benefit machine vision systems in various ways.

This blog post outlines 5 benefits you will receive from pulsing an LED light head. Gardasoft is an industry leader in strobe controllers capable of driving 3rd party LED light heads or custom LED banks for machine vision.

1 – Increase the LED light output

It is common to use pulsed light to “freeze” motion for high-speed inspection. But when the light is on only in short bursts, it is possible to increase the light output beyond the LED manufacturer’s specified maximum using a technique called “overdrive”. In many cases, the LED can be driven at up to 10X the constant-current rating, in turn providing brighter pulses of light. When synchronized with the camera acquisition, a brighter scene is generated.

Gardasoft LED overdrive

2 – Extend the life of the LED 

As mentioned in the first benefit, strobing an LED light head only turns the LED on for a short period of time. In many cases, the duty cycle is very low, which extends the life of the LED and slows its degradation, in turn keeping the scene at a consistent brightness for years. (For example, if the duty cycle is only 10%, the operational lifetime of the LED head is extended by roughly a factor of 10.)
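A quick back-of-the-envelope check of that factor, as a sketch with made-up numbers for the rated life and pulse timing:

```python
# Rough lifetime estimate when strobing instead of running the LED continuously.
# The 50,000 h rated life and the pulse timing below are illustrative values only.
rated_life_hours = 50_000          # LED operating-hours rating (example value)
pulse_width_s = 0.0005             # 0.5 ms flash per trigger
trigger_rate_hz = 200              # 200 inspections per second

duty_cycle = pulse_width_s * trigger_rate_hz          # fraction of time the LED is on
calendar_life_hours = rated_life_hours / duty_cycle   # wall-clock hours to use up the rated life

print(f"duty cycle: {duty_cycle:.0%}")                 # -> 10%
print(f"calendar life: {calendar_life_hours:,.0f} h")  # -> 500,000 h, about 10x longer
```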

3 – Ambient Light control

Ambient light conditions frequently interfere with machine vision measurements, and these issues can be solved by pulsing and overdriving the system’s LEDs. For example, overdriving the LED by 200% doubles the light intensity and enables the camera exposure to be halved, thereby reducing the effects of ambient light by a factor of 4. The end result is that the camera’s exposure is dominated by light from the given LED source and NOT ambient light.

4 – High speed imaging and Increased depth of field

Motion blur in images of fast-moving objects can be eliminated with appropriate pulsing of the light. In some cases a suitably short camera exposure will be enough to freeze motion (read our blog on calculating camera exposure), but the image may suffer from low light intensity with constant illumination. “Overdriving” a light can boost the output to up to 10x its brightness rating in short pulses. The increased brightness can allow the whole system to run faster because of the reduced exposure times. Higher light output may also allow the aperture to be reduced to give better depth of field.

Extended depth of field (DOF) is achieved with a brighter light, allowing the lens to be stopped down.

Gardasoft controllers include Gardasoft’s patented SafePower and SafeSense technology, which prevents overdriving from damaging the light.

5 – Multi-Lighting Schemes & Computational Imaging

Lighting controllers can be used to reduce the number of camera stations. Several lights are set up at a single camera station and pulsed at different intensities and durations in a predefined sequence.

CCS America Shape from shading
Generate edge and texture images using shape from shading

Each different lighting scheme can highlight particular features in the image. Multiple measurements can be made at a single camera station instead of needing multiple stations, which reduces mechanical complexity and saves money. For example, sequentially triggering 3 different types of lighting could allow a single camera to acquire specific images for bar code reading, surface defect inspection, and a dimensional check in rapid succession.

Pulsing can also be used for computational imaging, where a component is illuminated sequentially by 4 different lights from different directions. The resultant images are combined to exclude the effect of random reflections from the component surface. Contact us and ask for the white paper on computational imaging to learn more.
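One very simple way to combine the four directional images (an illustrative sketch only, not the specific algorithm used in the white paper) is a per-pixel minimum, which suppresses specular glints that appear under only one lighting direction:

```python
import numpy as np

def suppress_reflections(north, east, south, west):
    """Combine four directionally lit images of the same, aligned scene by taking
    the per-pixel minimum, removing glints that appear in only one of the shots."""
    stack = np.stack([north, east, south, west]).astype(np.float32)
    return stack.min(axis=0).astype(np.uint8)
```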

CCS Computational imaging
The images on the right (top and bottom) were taken with bright field and dark field lighting. The left image is the result of computational imaging combining the lighting techniques, allowing particles and a water bubble to be seen.

Pulsed multiple lighting schemes can also benefit line scan imaging by using different illumination sources to capture alternate lines. Individual images for each illumination source are then easily extracted using image processing software.

In conclusion, strobe controllers provide many benefits and can save more money in the overall setup than the cost of the controller itself!

1st Vision has additional white papers on the following topics. Simply send us an email and ask for one or all of these informative white papers.
1 – Practical use of LED controllers
2 – Intelligent Lighting for Machine Vision Systems
3 – LED Strobe lighting for ITS systems
4 – Liquid Lens technology and controllers for machine vision
5 – Learn about computational imaging and how CCS Lighting can help

Contact us

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Related Topics

Learn how liquid lenses keep continuous focus on machine vision cameras when the working distance changes.

White Paper – Key benefits in using LED lighting controllers for machine vision applications

Imaging Basics – Calculating Exposure time for machine vision cameras

calculate camera exposure

In any industrial camera application, one key setting is the camera’s exposure time. If this is set arbitrarily, the resulting image may be blurry due to movement in the scene being imaged. To optimize the setup, we can calculate the longest exposure time that still avoids blur, which also maximizes scene brightness. In this blog post, we will help you understand the effects of exposure and calculate it for a given application.

First, let’s explain camera exposure. Exposure time, or shutter speed, is the amount of time you let light fall on the image sensor. The longer the exposure time, the more you ‘expose’ the sensor, charging up the pixels to make them brighter. Shutter speeds are usually given as a fraction of a second, like 1/60, 1/125, or 1/1000 of a second in photography cameras, a convention that comes from the film days. In industrial cameras, exposure time is normally given in milliseconds, which is simply the shutter speed expressed as a time (i.e. 1/60 sec ≈ 0.0167 seconds, or about 16.7 ms).

So how does this relate to blur? Blur is what you get when your object moves relative to the sensor, crossing 2 or more pixels during the exposure time.

You see this when you take a picture of something moving faster than the exposure time can freeze. In the image to the left, we have a crisp picture of the batter, but the ball is moving very fast, causing it to appear blurry. The exposure in this case was 1/500 sec (2 ms), but the ball moved across many pixels during that exposure.

The faster the shutter speed, the less the object moves relative to where it started. In machine vision, the camera is fixed, so what we are worried about is the object moving during the exposure time.

Depending on the application, it may or may not be sensitive to blur. For instance, say you have a camera with a pixel array of 1280 pixels in the x-axis, and your object spans 1000 pixels on the sensor. If the object moves 1 pixel to the right during the exposure, it has moved 1 pixel out of 1000. This is what we call “pixel blur”, but visibly you cannot notice it. If we are just viewing a scene and no machine vision algorithms are making decisions on the image, then as long as the object moves only a very small fraction of its size during the exposure, we probably don’t care!

pixel blur diagram
Array of pixels – Movement of an object during exposure across pixels = Pixel Blur

Now assume you are measuring this object using machine vision algorithms. Movement becomes more significant, because you now have uncertainty in the actual size of the object. If your tolerances are within 1 part in 1000, you are still OK. However, if your object spans only 100 pixels and it moves 1 pixel, a viewing application might still be fine, but a measurement application is now off by 1%, and that might not be tolerable!

pixel blur calc

In most cases, we want crisp images with no pixel blur. The good news is that this is relatively easy to calculate! To calculate blur, you need to know the following:

  • Camera resolution in pixels (in the direction of travel)
  • Field of view (FOV)
  • Speed of the object
  • Exposure time

Then you can calculate how many pixels the object will move during the exposure using the following formula:

B = Vp * Te * Np / FOV

Where:
B = Blur in pixels
Vp = part velocity
FOV = Field of view in the direction of motion
Te = Exposure time in seconds
Np = number of pixels spanning the field of view

For example, if Vp is 1 cm/sec, Te is 33 ms, Np is 640 pixels, and the FOV is 10 cm, then:

B = 1 cm/sec * .033 sec * 640 pixels / 10cm = 2.1 pixels
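Here is the same calculation as a small Python helper; the only requirement is that the velocity and field of view use the same length unit:

```python
def pixel_blur(part_velocity, exposure_s, pixels_across_fov, fov):
    """B = Vp * Te * Np / FOV  ->  blur in pixels.
    part_velocity and fov must share the same length unit (e.g. cm/sec and cm)."""
    return part_velocity * exposure_s * pixels_across_fov / fov

# The example from the text: 1 cm/sec, 33 ms exposure, 640 pixels across a 10 cm FOV
print(pixel_blur(1.0, 0.033, 640, 10.0))   # -> 2.112 pixels of blur
```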

In most cases, blurring becomes an issue beyond 1 pixel. For precision measurements, even 1 pixel of blur may be too much, requiring a faster exposure time.

1st Vision has over 100 years of combined experience. Contact us for help calculating the correct exposure.

Pixel blur calculator

Contact us

Related Blog posts that you may also find helpful are below: 

Imaging Basics: How to Calculate Resolution for Machine Vision

Imaging Basics – Calculating Lens Focal length

Which Industrial camera would you use in low light?

OK vs NG

Our job as imaging specialists is to help our customers make the best decisions on which industrial camera and image sensor work best for their application. This is not a trivial task, as there are many data points to consider, and in the end a good image comparison test provides the true answer. In this blog post, we conduct another image sensor comparison for low light applications, testing a long-time favorite, the e2v EV76C661 Near Infra Red (NIR) sensor, against the newer Sony Starvis IMX178 and Sony Pregius IMX174 image sensors, using IDS Imaging cameras.

An industrial camera can easily be selected based on resolution and frame rate, but comparing image sensor performance is more challenging. We can collect data points from the camera EMVA1288 test results and spectral response charts, but one cannot conclude what is best for the application from a single data point. In many cases, several data points need to be reviewed to start making an educated decision.

We started this review comparing 3 image sensors to determine which ones would perform best in low light applications.

Below is a chart comparing the e2v EV76C661 NIR, Sony Starvis IMX178, and Sony Pregius IMX174 image sensors found in the IDS Imaging UI-3240NIR, UI-3880CP, and UI-3060CP cameras, using EMVA1288 data as a starting point. This provides us with accurate image sensor data to evaluate.

image sensor comparison
Table 1: Sensor comparison data
Spectral response curves
Camera Spectral Response curves

We also look at the Quantum Efficiency (QE) curves for the sensors to see how each sensor performs over the light spectrum, as seen to the left. (As a note, QE is the conversion of photons to an electrical charge, i.e. electrons.)

For this comparison, our objective is to determine which sensor will perform best in low light applications with broadband light. From Table 1, the IMX178 has a very low absolute sensitivity threshold, needing only ~1 photon to make an adequate charge; however, its pixels are small (2.4 µm), so it may not gather light as well as sensors with larger pixels. It does have the best dark noise characteristics, however. In comparison, the e2v sensor needs 9.9 photons for its absolute sensitivity (not as good as 1 photon) but has a larger pixel size (bigger is better for collecting light). The IMX174 proves interesting as well, with the largest pixel at 5.86 µm and the highest QE at 533 nm.
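As a quick sanity check on the pixel-size argument, the light-collecting area scales with the square of the pixel pitch. The sketch below uses the pitches quoted above and is pure geometry; it ignores QE, microlenses, exposure, and other factors.

```python
# Relative light-collecting area per pixel, from the pixel pitches quoted above.
imx178_pitch_um = 2.4
imx174_pitch_um = 5.86

area_ratio = (imx174_pitch_um / imx178_pitch_um) ** 2
print(f"An IMX174 pixel has ~{area_ratio:.1f}x the area of an IMX178 pixel")  # -> ~6.0x
```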

Using the data from the spectral response curves, however, gives us more insight across the light spectrum. Given we are using a NIR-enhanced camera, we will have significantly more conversion of light into charge on the sensor across most of the light spectrum. In turn, we expect to see brighter images from the e2v NIR sensor in the IDS UI-3240NIR camera.

As a note, one more data point is the pixel well depth. Pixels with a smaller well depth saturate faster, making the image appear brighter, so if other variables are close, this may also be taken into consideration.

As one can see, this is not trivial. Evaluating many of the data points can give us some clues, but testing is really what it takes! So let’s now compare the images to see how they look.

The following images were taken with the same exposure, lens, and f-stop in an identical low light environment. In the second image, the e2v image sensor in the IDS UI-3240CP NIR provides the brightest image, as some of the data points started to indicate. The IDS UI-3060CP-M (IMX174) is second best.

IDS UI-3880CP (IMX178)
IDS UI-3240CP NIR (e2v )
IDS UI-3060CP-M (Sony Pregius IMX174)

In low light situations, we can always add camera gain, but we pay the price of adding noise to the image. Depending on the image sensor, some cameras can provide more gain than others, which is another factor to review. We also need to take read noise into account, as it gets amplified along with the gain. The next part of our test is to turn up the gain and see how the cameras compare.

The following set of images was taken again with the same lens, f-stop, and lighting, but with the gain at maximum for each camera.

IDS UI-3880CP with 14.5X gain
IDS UI-3240CP NIR with 4X gain
IDS UI-3060CP-M with 24X gain

The IDS UI-3060CP-M has the highest available gain but still keeps the read noise relatively low at 6 electrons. In low light, WITH gain, this gives us a nice image in nearly dark environments.

Conclusion
We can review data points until we are blue in the face, and they can be confusing. We can, however, take in all the data to help make more educated decisions on which cameras to test. For example, in the first test we had a good idea the NIR sensor would perform well from looking at the QE curves along with other data. In the second test, we could see that the UI-3060CP offers 24X gain versus the others while still keeping read noise low, indicating we would get a relatively clean image.

In the end, 1st Vision’s sales engineers will provide the needed information and help conduct testing for you! We spend a lot of time in our lab in order to provide first-hand information to our customers!

Contact us

1st Vision is the leading provider of industrial imaging components with over 100 years of combined imaging experience.  Do not hesitate to contact us to discuss your applications!

Related Blogs

How do I sort through all the new industrial camera image sensors to make a decision? Download the sensor cheat sheet!


Just a few footnotes regarding this blog post:

Magnification of the images differs due to sensor size. The working distance of the cameras was kept identical in all setups, and each camera was focused accordingly for that distance.

This topic can be very complex! If we were to dig even deeper, we’d take into consideration the charge conversion of the pixel, which affects sensitivity beyond QE alone. That’s probably another blog post!

As a reference, this image was taken with an iPhone and adjusted to best represent what the eye saw during our lab test. Note that the left container with markers was indistinguishable to the human eye.

Clipart courtesy of clipartextra.com