Liquid lens technology, with its ability to change focus on the order of milliseconds, is opening up a host of new applications in both machine vision and the life sciences. It is attracting growing interest across a wide cross section of industries and adapts easily to standard machine vision lenses.
Liquid lens technology alone provides elegant solutions, but when combined with advanced controllers, many more applications can be solved.
In this blog, we will highlight several application areas for liquid lens technology.
Case 1: Applications requiring multiple focus points and extended depth of field: This covers many applications, such as logistics, packaging and code reading on packages. Liquid lenses provide the ability to use pre-set focus points, auto-focus, or distance sensors that feed position data back to the lens. In the example below, two presets are programmed and toggled to read 2D codes at various heights, essentially extending the depth of field.
Case 2: 3D imagery of transparent materials / hyperfocal (extended DOF) images: When an Optotune liquid lens is used in conjunction with a Gardasoft TR-CL180 controller, a sequence of images can be taken with the focus point stepped between each image. This technique is known as focus stacking. It builds up a 3D image of transparent media such as cell tissue or liquid for analysis, and can also be used to find particles suspended in liquids.
A Z-stack of images can also be used to extract 3D data (depth from focus) and compute a hyperfocal or extended depth of field (EDOF) image.
The EDOF technique requires taking a stack of individually well-focused images, preferably synchronized with one flash per image. An example is shown below, with the rendered hyperfocal image at right.
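To make the focus-stacking idea concrete, here is a minimal NumPy sketch of one common EDOF merge strategy: estimate per-pixel sharpness in each slice with a Laplacian, then keep each pixel from its sharpest slice. This is an illustrative toy, not the TR-CL180 workflow; a production pipeline would align the slices, smooth the sharpness map, and handle image borders more carefully.

```python
import numpy as np

def edof_from_stack(stack):
    """Merge a focus stack (sequence of 2-D grayscale arrays, one per
    focus step) into a single extended-depth-of-field image by keeping,
    at each pixel, the value from the sharpest slice.

    Sharpness is estimated with a simple discrete Laplacian magnitude.
    Note np.roll wraps at the edges, so border pixels are approximate.
    Returns the merged image and the per-pixel slice index, which also
    serves as a coarse depth-from-focus map.
    """
    stack = np.asarray(stack, dtype=float)          # shape (n, h, w)
    lap = np.abs(
        4 * stack
        - np.roll(stack, 1, axis=1) - np.roll(stack, -1, axis=1)
        - np.roll(stack, 1, axis=2) - np.roll(stack, -1, axis=2)
    )
    best = np.argmax(lap, axis=0)                   # sharpest slice per pixel
    merged = np.take_along_axis(stack, best[None], axis=0)[0]
    return merged, best
```

The `best` index map is the same information used to build the 3D renderings mentioned above: each focus step corresponds to a known working distance, so the slice index per pixel translates directly into depth.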
Case 3: Lens inspection: Liquid lenses can be used to inspect other lenses, such as those in cell phones, for dust and scratches by focusing through the lens stack.
Case 4: Large surface inspections with variation in height: Items ranging from PCBs to LCDs are not flat, have components of varying height, and need to be inspected at high magnification (typically using lenses with minimal DOF). Optotune liquid lenses with preset focus points are a perfect solution.
Machine vision applications using Optotune liquid lenses and controllers are endless!
These applications are just the tip of the iceberg and many more exist, but this should give you a good idea of the capabilities. Gardasoft TR-CL controllers are fully GigE Vision compliant, so any compatible GigE Vision client image processing software, such as Cognex VisionPro, Teledyne DALSA Sherlock or National Instruments LabVIEW, can be used easily.
Pulsing (aka strobing) a machine vision LED light is a powerful technique that can benefit machine vision systems in various ways.
This blog post outlines 5 benefits of pulsing an LED light head. Gardasoft is an industry leader in strobe controllers capable of driving third-party LED light heads or custom LED banks for machine vision.
1 – Increase the LED light output
It is common to use pulsed light to “freeze” motion for high-speed inspection. But when the light is on for only short bursts, it is possible to increase the light output beyond the LED manufacturer's specified maximum, using a technique called “overdrive”. In many cases, the LED can be driven at up to 10X its rated constant current, in turn providing brighter pulses of light. When synchronized with the camera acquisition, a brighter scene is generated.
2 – Extend the life of the LED
As mentioned in the first benefit, strobing an LED light head only turns on the LED for a short period of time. In many cases the duty cycle is very low, which extends the life of the LED and slows its degradation, in turn keeping the scene at a consistent brightness for years. (For example, at a duty cycle of only 10%, the LED accumulates on-time ten times more slowly, extending its service life accordingly.)
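The duty-cycle arithmetic is simple enough to sketch. The snippet below is a rough model that assumes LED wear scales with accumulated on-time only; real LED ageing also depends on drive current and junction temperature, so treat the lifetime factor as an optimistic bound rather than a datasheet figure.

```python
def duty_cycle(pulse_us: float, period_us: float) -> float:
    """Fraction of each trigger period the LED is actually on."""
    return pulse_us / period_us

def lifetime_factor(pulse_us: float, period_us: float) -> float:
    """Rough service-life multiplier versus continuous operation,
    assuming wear scales with total on-time only (an assumption;
    overdrive current and temperature also matter)."""
    return 1.0 / duty_cycle(pulse_us, period_us)

# A 100 us flash every 1 ms trigger period is a 10% duty cycle,
# so on-time accumulates ~10x more slowly than with constant light.
```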
3 – Ambient Light control
Ambient light conditions frequently interfere with machine vision measurements, and these issues can be solved by pulsing and overdriving the system's LEDs. For example, overdriving the LED by 200% doubles the light intensity and enables the camera exposure to be halved, reducing the effects of ambient light by a factor of 4. The end result is that the camera's exposure captures light predominantly from the given LED source and NOT ambient light.
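A toy model makes the mechanism visible: during one exposure, the strobe contribution is intensity times exposure, and so is the ambient contribution. Doubling the strobe intensity while halving the exposure keeps the useful signal constant but halves the ambient light collected, so the ambient share of the image drops. The numbers below are illustrative units, not measurements.

```python
def ambient_fraction(strobe_intensity, ambient_intensity, exposure):
    """Toy model: fraction of the light collected during one exposure
    that comes from ambient sources rather than the strobe.
    Intensities are in arbitrary but consistent units."""
    strobe = strobe_intensity * exposure
    ambient = ambient_intensity * exposure
    return ambient / (strobe + ambient)

baseline  = ambient_fraction(1.0, 1.0, 1.0)   # strobe and ambient equal: 50%
overdrive = ambient_fraction(2.0, 1.0, 0.5)   # 2x intensity, half exposure: ~33%
```

Pushing the overdrive factor higher continues to shrink the ambient share, which is why heavily strobed systems can run with little or no shrouding.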
4 – High speed imaging and Increased depth of field
Motion blur in images from fast-moving objects can be eliminated with appropriate pulsing of the light. In some cases a defined camera exposure will be good enough to freeze motion (read our blog on calculating camera exposure), but the image may suffer in light intensity under constant illumination. “Overdriving” a light can boost the output to up to 10x its brightness rating in short pulses. The increased brightness can allow the whole system to be run faster because of the reduced exposure times. Higher light output may also allow the aperture to be reduced to give better depth of field.
Gardasoft controllers include our patented SafePower™ and SafeSense™ technology which prevents over driving from damaging the light.
5 – Multi-lighting schemes & computational imaging
Lighting controllers can be used to reduce the number of camera stations. Several lights are set up at a single camera station and pulsed at different intensities and durations in a predefined sequence.
Each lighting scheme can highlight particular features in the image. Multiple measurements can be made at a single camera station instead of needing multiple stations, which reduces mechanical complexity and saves money. For example, sequentially triggering 3 different types of lighting could allow a single camera to acquire specific images for bar code reading, surface defect inspection and a dimensional check in rapid succession.
Pulsed multiple lighting schemes can also benefit line scan imaging by using different illumination sources to capture alternate lines. Individual images for each illumination source are then easily extracted using image processing software.
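Extracting the per-source images from an interleaved line-scan frame is a one-liner with NumPy slicing. This sketch assumes the rows simply alternate between sources in acquisition order, which is the typical arrangement when a lighting controller sequences the flashes line by line.

```python
import numpy as np

def deinterleave_lines(frame, n_sources):
    """Split a line-scan frame whose rows alternate between n_sources
    illumination sources into one sub-image per source.
    Row i belongs to source (i mod n_sources)."""
    return [frame[i::n_sources] for i in range(n_sources)]

# Example: a 6-row frame captured under two alternating lights
frame = np.arange(12).reshape(6, 2)
bright_field, dark_field = deinterleave_lines(frame, 2)
```

Each sub-image has 1/n of the original line count, so line rate (or transport speed) is usually increased to compensate.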
In conclusion, strobe controllers provide many benefits and can save more money in an overall setup than the cost of the controller itself!
In any industrial camera application, one key setting is the exposure time of the camera. When this is set arbitrarily, the resulting image may be blurry due to movement in the scene we are imaging. To optimize our settings, we can calculate the minimum exposure time that eliminates blur while maximizing scene brightness. In this blog post, we will help you understand the effects of exposure and calculate it for a given application.
First, let’s explain camera exposure. Exposure time, or shutter speed, is the amount of time you let light fall on the image sensor. The longer the exposure time, the more you ‘expose’ the sensor, charging up the pixels to make them brighter. Shutter speeds in photography cameras are usually given as a fraction of a second, like 1/60th, 1/125th or 1/1000th of a second, a convention that comes from the film days. In industrial cameras, exposure time is normally given in milliseconds, simply the shutter-speed fraction expressed as a decimal (e.g. 1/60 sec ≈ 0.0167 seconds, or about 16.7 ms).
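The conversion between photographic shutter speeds and industrial-camera exposure times is trivial but worth pinning down, since mixing up the two conventions is a common source of confusion:

```python
def shutter_to_ms(denominator: float) -> float:
    """Convert a photographic shutter speed of 1/denominator seconds
    into the exposure time in milliseconds used by industrial cameras."""
    return 1000.0 / denominator

# 1/60 s -> ~16.7 ms; 1/1000 s -> 1 ms
```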
So how does this relate to blur? Blur is what you get when your object moves relative to the sensor, crossing 2 or more pixels during the exposure time.
You see this when you take a picture of something moving faster than the exposure time can freeze. In the image to the left, we have a crisp picture of the batter, but the ball is moving very fast, causing it to appear blurry. The exposure in this case was 1/500 sec (2 ms), but the ball moved many pixels during that time.
The faster the shutter speed, the less chance the object moves much relative to where it started. In machine vision, cameras are fixed so they don’t move, but what we are worried about is the effect of the object moving during exposure time.
Depending on the application, it may or may not be sensitive to blur. For instance, say you have a camera with a pixel array of 1280 pixels in the x-axis, and your object spans 1000 pixels on the sensor. If the object moves 1 pixel during the exposure, it has shifted 1 pixel out of 1000. This is what we call “pixel blur”, but visibly you cannot notice it. If we have an application in which we’re just viewing a scene, no machine vision algorithms are making decisions on the image, and the object moves only a very small fraction of its size during the exposure, we probably don’t care!
Now assume you are measuring this object using machine vision algorithms. Movement becomes more significant, because you now have uncertainty about the actual size of the object. If your tolerances are looser than 1 part in 1000, you are OK. However, if your object were only 100 pixels and it moved 1 pixel, a viewing application might still be fine, but a measurement application is now off by 1%, and that might not be tolerable!
In most cases, we want crisp images with no pixel blur. The good news is that this is relatively easy to calculate! To calculate blur, you need to know the following:
Camera resolution in pixels (in the direction of travel),
Field of View (FOV),
Speed of the object.
Then you can calculate how many pixels the object will move during the exposure using the following formula:
B = Vp * Te * Np / FOV
B = blur in pixels
Vp = part velocity
Te = exposure time in seconds
Np = number of pixels spanning the field of view
FOV = field of view in the direction of motion (same length unit as Vp)
In the example above, if Vp is 1 cm/sec, Te is 33 ms, Np is 640 pixels and FOV is 10 cm, then B = 1 × 0.033 × 640 / 10 ≈ 2.1 pixels.
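The formula drops straight into code. The sketch below also inverts it to give the longest exposure that keeps blur under a chosen limit, which is usually the number you actually want when configuring a camera:

```python
def blur_pixels(vp, te, np_pixels, fov):
    """Motion blur in pixels: B = Vp * Te * Np / FOV.
    vp (object speed) and fov must use the same length unit;
    te is the exposure time in seconds."""
    return vp * te * np_pixels / fov

def max_exposure(vp, np_pixels, fov, max_blur=1.0):
    """Longest exposure (seconds) keeping blur at or below max_blur pixels."""
    return max_blur * fov / (vp * np_pixels)

# The worked example from the text: 1 cm/s, 33 ms, 640 px across a 10 cm FOV
b  = blur_pixels(1.0, 0.033, 640, 10.0)   # ~2.1 pixels of blur
te = max_exposure(1.0, 640, 10.0)         # ~15.6 ms keeps blur under 1 pixel
```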
In most cases, blurring becomes an issue beyond 1 pixel. In precision measurements, even 1 pixel of blur may be too much, requiring a faster exposure time.
1st Vision has created an Excel sheet that makes this a bit easier; it is a handy tool. If you’d like a copy of the Excel sheet, please email me at email@example.com with the subject “Pixel Blur calculator”.
Our job as imaging specialists is to help our customers make the best decisions on which industrial camera and image sensor works best for their application. This is not a trivial task, as there are many data points to consider, and in the end a good image comparison test provides the true answer. In this blog post, we conduct another image sensor comparison for low light applications, testing a long-time favorite, the e2v EV76C661 near infrared (NIR) sensor, against the newer Sony Starvis IMX178 and Sony Pregius IMX174 image sensors using IDS Imaging cameras.
An industrial camera can easily be selected based on resolution and frame rate, but comparing image sensor performance is more challenging. We can collect data points from the camera's EMVA1288 test results and spectral response charts, but one cannot conclude what is best for the application from a single data point. In many cases, several data points need to be reviewed to make an educated decision.
We started this review comparing 3 image sensors to determine which ones would perform best in low light applications.
Below is a chart comparing the e2v EV76C661 NIR, Sony Starvis IMX178 and Sony Pregius IMX174 image sensors, found in the IDS Imaging UI-3240NIR, UI-3880CP and UI-3060CP cameras, using EMVA1288 data as a starting point. This provides us with accurate image sensor data to evaluate.
We also look at the quantum efficiency (QE) curves to see how each sensor performs across the light spectrum, as seen to the left. (As a note, QE is the efficiency of converting photons into electrical charge, i.e. electrons.)
For this comparison, our objective is to determine which sensor will perform best in low light applications with broadband light. From table 1, the IMX178 has very low absolute sensitivity (abs sensitivity), needing only ~1 photon to make an adequate charge; however, its pixels are small (2.4 µm), so it may not gather light as well as sensors with larger pixels. It does have the best dark noise characteristics, however. In comparison, the e2v sensor needs 9.9 photons for abs sensitivity (not as good as 1 photon) but has a larger pixel size (bigger is better for collecting light). The IMX174 proves interesting as well, with the largest pixel at 5.86 µm and the highest QE at 533 nm.
Using the data from the spectral response curves, however, gives us more insight across the light spectrum. Given we are using an NIR-enhanced camera, we will have significantly more conversion of light into charge on the sensor across most of the light spectrum. In turn, we expect to see brighter images from the e2v NIR sensor in the IDS UI-3240NIR camera.
As a note, one more data point is the pixel well depth. Smaller pixels saturate faster, making the image brighter, so if other variables are close, this may also be taken into consideration.
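As a rough illustration of the pixel-size argument, collection area scales with the square of the pixel pitch. Using the two pitches quoted above (5.86 µm for the IMX174, 2.4 µm for the IMX178), this first-order comparison deliberately ignores microlenses, fill factor and QE, so it is a clue, not a verdict:

```python
def pixel_area_ratio(pitch_a_um: float, pitch_b_um: float) -> float:
    """Ratio of photon-collection area between two square pixels.
    Ignores microlenses, fill factor and QE differences, so this is
    only a first-order indicator of relative light gathering."""
    return (pitch_a_um / pitch_b_um) ** 2

# IMX174 (5.86 um) vs IMX178 (2.4 um): roughly 6x the area per pixel
ratio = pixel_area_ratio(5.86, 2.4)
```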
As one can see, this is not trivial. Evaluating the many data points can give us some clues, but testing is really what it takes! So, let’s now compare the images to see how they look.
The following images were taken with the same exposure, lens and f-stop in an identical low-light environment. In the 2nd image, the e2v image sensor in the IDS UI-3240CP NIR provides the brighter image, as some of the data points started to indicate. The IDS UI-3060CP-M (IMX174) is second best.
In low light situations, we can always add camera gain, but we pay the price of adding noise to the image. Depending on the image sensor, some cameras can provide more gain than others, another factor to review when considering adding gain. We also need to take read noise into account, as it gets amplified along with the signal. The next part of our test is to turn up the gain and see how the cameras compare.
The following set of images was taken with the same lens, f-stop and lighting, but with gain at maximum for each camera.
The IDS UI-3060CP-M has the highest gain available, yet still keeps the read noise relatively low at 6 electrons. In low light WITH gain, this gives us a nice image in nearly dark environments.
We can review the data points until we are blue in the face, and they can be very confusing. We can, however, take in all the data to help make more educated decisions on which cameras to test. For example, in the first test we had a good idea the NIR sensor would perform well from looking at the QE curves along with other data. In our second test, we saw the UI-3060CP offered 24X gain versus the others while keeping read noise low, an indication we’d get a relatively clean image.
In the end, 1st Vision’s sales engineers will help provide the needed information and help conduct testing for you! We spend a lot of time in our lab in order to provide first hand information to our customers!
Magnification of the images differs due to sensor size. The working distance of the cameras was kept identical in all setups, and each was focused accordingly for that distance.
This topic can be very complex! If we were to dig even deeper, we’d take into consideration the charge conversion of the pixel, which affects sensitivity beyond just QE. That’s probably another blog post!
As a reference, this image was taken with an iPhone and adjusted to best represent what my eye saw during our lab test. Note that the left container with markers was indistinguishable to the human eye.