There is NO such thing as a “Megapixel” machine vision camera lens!.. Say what??


There has been a lot written about the ratings of machine vision lenses; 1stVision has created white papers that describe this in detail. However, the lens industry continues to use the marketing term "Megapixel Machine Vision Camera Lens."

Let’s get this out of the way right now. 

There is NO such thing as a Megapixel Machine Vision Camera Lens.

But since it is me against the world, let me explain why a 12 MP lens is sometimes really the same resolution as a 5 MP lens.

The first thing to understand is that lenses are evaluated on their resolving power, which is a spatial resolution.  For lenses used in the industrial imaging marketplace, this is normally given in "line pairs per mm" (LP/mm).  It is expressed this way because to resolve a pixel of size X um, you need a resolution of 1/(2X), where the factor of 2 comes from the Nyquist limit.  So to resolve a 5um pixel we need 1/(5um x 2) line pairs per um, which works out to 100 LP/mm.
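A minimal sketch of this calculation in Python (the function name is our own, for illustration):

```python
def required_lp_per_mm(pixel_size_um: float) -> float:
    """Minimum lens resolving power (LP/mm) needed to resolve a pixel
    of the given size, applying the Nyquist limit of 2 pixels per line pair."""
    pixel_size_mm = pixel_size_um / 1000.0
    return 1.0 / (2.0 * pixel_size_mm)

print(required_lp_per_mm(5.0))   # 100.0 LP/mm
print(required_lp_per_mm(3.45))  # ~144.9 LP/mm (the Sony Pregius example below)
```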

A lens's performance is shown in a plot like the one below, which plots contrast (modulation) vs. LP/mm.  This is called the Modulation Transfer Function (MTF).  Note that as the LP/mm increases and the lens can't resolve the detail as well, the contrast falls off.  This measurement varies with f-stop and the angle of light, so real MTF charts will indicate those parameters.  This is the only real way to empirically evaluate how a lens will perform.
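For reference, the modulation plotted on an MTF chart is computed from the brightest and darkest intensities of the imaged line-pair pattern; a minimal sketch:

```python
def modulation(i_max: float, i_min: float) -> float:
    """Contrast (modulation) of an imaged line-pair pattern:
    M = (Imax - Imin) / (Imax + Imin); 1.0 is perfect contrast, 0 is unresolved."""
    return (i_max - i_min) / (i_max + i_min)

# A lens rendering black/white lines as grey values 200 and 50:
print(modulation(200, 50))  # 0.6, i.e. 60% contrast at that spatial frequency
```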

You can visually compare lenses, but to truly compare Brand A vs. Brand B you would have to test them under identical conditions.  You can't compare Brand A's MTF vs. Brand B's unless you know the parameters used to test them (the same camera, the same lighting, the same focus, the same f-stop, the same gain, etc.).  Unfortunately, it's very hard to get that information from most lens manufacturers.

1, 3, 5, 9, 12 Megapixel lens?

Tamron 12MP MPY lenses (image courtesy of Computar)

What does this mean?  As an example, Sony has recently introduced a new line of image sensors with 5MP, 9MP, and 12MP resolutions.  Many clients have called and said, "I want to use the 12MP sensor, so please spec a lens that can do 12MP."  Unfortunately, this isn't the right way to think about it, as each of these sensors uses a 3.45um pixel.  They ALL need the same quality lens!  Why?  Because it is the size of the pixel, i.e. what you have to resolve, that dictates the quality of the lens!

In the above situation, the 5MP sensor needs a 2/3" format lens, the 9MP needs a 1" lens, and the 12MP needs a 1.1" format lens.  (Multiply the pixel size by the number of horizontal and vertical pixels to get the sensor dimensions, and from those the format; more on format HERE.)  However, every one of these sensors needs about 145 LP/mm of resolving power because of its 3.45um pixel size.  As much as I detest the nomenclature of "5MP lens" etc., I do appreciate what Fuji does, as they state, "... this series of high-resolution lenses delivers 3.45um pixel pitch (equivalent to 5MP) on a 2/3″ sensor."  Now this makes more sense!
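A sketch of the format arithmetic (the pixel counts below are the nominal resolutions of the Sony Pregius IMX250/IMX255/IMX253 family; check the datasheet for your actual sensor):

```python
import math

def sensor_dimensions_mm(h_pixels: int, v_pixels: int, pixel_um: float):
    """Active-area width, height, and diagonal in mm from pixel count and pixel size."""
    w = h_pixels * pixel_um / 1000.0
    h = v_pixels * pixel_um / 1000.0
    return w, h, math.hypot(w, h)

for name, (h, v) in {"5MP": (2448, 2048), "9MP": (4096, 2160), "12MP": (4096, 3000)}.items():
    w, ht, diag = sensor_dimensions_mm(h, v, 3.45)
    print(f"{name}: {w:.1f} x {ht:.1f} mm, diagonal {diag:.1f} mm")
# 5MP:   8.4 x  7.1 mm, diagonal 11.0 mm -> 2/3" format
# 9MP:  14.1 x  7.5 mm, diagonal 16.0 mm -> 1" format
# 12MP: 14.1 x 10.4 mm, diagonal 17.5 mm -> 1.1" format
```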

In turn, if you see a lens marketed as a "Megapixel Machine Vision" lens, question it!  Its performance really needs to be stated in terms of its capability to resolve the pixel size, in LP/mm!

Contact us

1stVision has a staff of machine vision veterans who are happy to explain this in more detail and help you specify the best lens for your application!   Contact 1st Vision!

Additional References:
For a comprehensive understanding of "How to Choose a Lens", download our whitepaper HERE.

Blog post:  Demystifying Lens performance specifications

Blog post:  Learn about FUJI’s HF-XA-5M (5 Megapixel) lens series which resolves 3.45um pixel pitch sensors! Perfect for cameras with Sony Pregius image sensors.

Use the 1st Vision lens selector, which allows you to filter by focal length, format, and manufacturer, to name a few criteria.

How much resolution do I lose using a color industrial camera in a mono mode? Is it really 4X?

Many clients call us about doing measurements on grey scale data, but want to use a color machine vision camera because they want the operator or client to see a more 'realistic' picture.  For instance, if you are looking at PCBs and need to read characters with good precision, but also need to see the colors on a ribbon cable, you are forced to use a color camera.

In these applications, you could extract a monochrome image from the color sensor for processing and use the color image for cataloging and visualization.  But the question is: how much data is lost by using a color camera in mono mode?

First, the user must understand how a color camera works and how it gets its picture.  Non-3-CCD cameras use a Bayer filter, a matrix of red, green, and blue filters with one filter over each pixel.  In each group of 4 pixels there are 2 green, 1 red, and 1 blue.  (The eye is most sensitive to green, so green gets more samples to simulate that response.)

Bayer image sensor

To get a color image out, each output pixel is computed as a weighted sum of its nearest-neighbor pixels, a process known as Bayer interpolation.  The color accuracy of these cameras depends on the original scene and on how the camera's algorithms interpolated the set of red, green, and blue values for each pixel.

To get monochrome out, one technique is to break the image down into Hue, Saturation, and Intensity, with the intensity taken as the grey scale value.  Again, this is a mathematical computation; the quality of the output depends on the original image and on the algorithms used to compute it.
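As a concrete illustration (not necessarily the algorithm any given camera uses), a common way to derive a grey value from the interpolated RGB data is a weighted luminance sum, for example with the ITU-R BT.601 weights:

```python
import numpy as np

def rgb_to_grey(rgb: np.ndarray) -> np.ndarray:
    """Grey scale from an interpolated RGB image (H x W x 3, uint8)
    using ITU-R BT.601 luminance weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights).round().astype(np.uint8)

# A single orange-ish pixel:
print(rgb_to_grey(np.array([[[200, 120, 40]]], dtype=np.uint8)))  # [[135]]
```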

Mono image sensor

An image such as the one above gives an algorithm a hard time, because the grey scale values flip between 0 and 255 at every pixel (assuming the checkerboard lines up with the pixel grid).  Since each output pixel is based on its nearest neighbors, you could be replacing a black pixel with the average of 4 white ones!

Grey scale image

On the other hand, if we had an image with a ramp of pixel values (in other words, each pixel is, say, 1 value less than the one next to it), the average of the nearest neighbors would be very close to the pixel it replaces.
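A toy demonstration of why the checkerboard is the worst case and the ramp the best case for nearest-neighbor averaging (illustrative only):

```python
import numpy as np

def neighbor_average(img: np.ndarray) -> np.ndarray:
    """Replace each interior pixel with the mean of its 4 nearest neighbors."""
    out = img.astype(np.float64)
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:]) / 4.0
    return out

checker = (np.indices((6, 6)).sum(axis=0) % 2) * 255  # 0/255 checkerboard
ramp = np.tile(np.arange(100.0, 106.0), (6, 1))       # values step by 1 per column

print(neighbor_average(checker)[2, 2])  # 255.0 where the original was 0: total loss
print(neighbor_average(ramp)[2, 2])     # 102.0 vs. original 102.0: nearly unchanged
```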

What does all this mean in real-world applications?  Let's take a look at two images, both from the same brand of camera: one uses the 5MP Sony Pregius IMX250 monochrome sensor, the other the color version of the same sensor.  The images were taken with the same exposure and an identical setup.  So how do they compare at the pixel level when we take the monochrome output from the color camera and compare it to the monochrome camera?

Grey Scale Analysis
(Left) Color image; (Right) Monochrome image

In the color image (left), if you expand the picture you can see that the middle bar of the E is wider; the transition is not as close to a step function as you would want it to be.  The vertical cross section is about 11 pixels, with more black than white.  In the monochrome image (right), the vertical cross section is closer to about 8 pixels.

Conclusion:

If you need pixel level measurement, and there is no need for a color image, USE A MONOCHROME MACHINE VISION CAMERA.

If you need to do OCR (as in this example), either of the above images, color or monochrome, would work just fine, provided you have enough pixels to start with and your spatial resolution is adequate.

CLICK HERE FOR A COMPLETE LIST OF MACHINE VISION CAMERAS

Do you lose 4x in resolution, as some people claim?  Not with the image I used above.  Maybe with the checkerboard pattern; but if you have multiple pixels across the feature you need to measure, you might be OK using a color camera.  It is really application dependent!  This post is to make you aware of the resolution loss specifically; 1st Vision can help with these decisions, so contact us for a discussion.

Contact us

1stVision is the leading provider of machine vision components and has a staff of experienced sales engineers to discuss your application.  Please do not hesitate to contact us for help with everything from calculating the resolution you need to calculating focal lengths for your application.

Related links and blog posts

How do 3CCD cameras improve color accuracy and spatial resolution over Bayer cameras?

Calculating resolution for machine vision

Use the 1st Vision camera filters to help ID the desired camera

How do I sort through all the new industrial camera image sensors to make a decision? Download the sensor cheat sheet!


The latest CMOS image sensor technology from Sony and ON-SEMI has continued to expand the industrial camera market.  Sony has now reached its 3rd-generation Pregius sensors, in addition to adding the low-light-performing Starvis sensor.  ON-SEMI has also continued with higher resolutions and has the next generation in the works.

Given all these new sensors, we are often asked, "What is the best image sensor and camera for my application?"

Although there are many general considerations in selecting a camera (interface, size, color vs. mono, etc.), it's best to start with the characteristics and performance of the image sensor.  Knowing the answers to questions about the amount of available light, dynamic range requirements, wavelengths involved, and the type of application, the right sensor can start to be identified.  From there, we can select a camera with the appropriate sensor that fits the other requirements, such as interface, frame rate, and bit depth.

In order to help pick a sensor, it's extremely important to have the image sensor data found on EMVA1288 data sheets.  We have compiled this data into a "cheat sheet" for download, along with lens recommendations and comments on how some sensors relate to each other and to older CCD sensors.

industrial camera image sensor cheat sheet

The data shows us that not all industrial camera image sensors are created equal!  Within the Sony Pregius family, the 1st- and 2nd-generation sensors each have unique characteristics.  The 1st generation provided great pixel well depth and dynamic range with 5.86um pixels.  The 2nd generation came along with smaller 3.45um pixels, improved sensitivity, and lower noise, but less well depth.  The next generation will have the best of both worlds... more to come on that front.

Using this data as an example: if we had an application with a fixed amount of light and wanted a relatively bright image (given a fixed aperture and considering only sensor characteristics), what sensor is best?  Answer: we'd probably look at Model A with its smaller well depth, as its pixels will saturate faster than Model C's.  Or possibly we have a very small amount of light?  Then we'd look at absolute (abs) sensitivity, which tells us the smallest number of photons (1.1 in this case) that starts to provide a useful signal.  (A sketch of two related figures of merit follows the comparison below.)

Example comparisons: 
industrial imager comparison
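To make such trade-offs concrete, here is a hedged sketch of two figures of merit you can derive from EMVA1288 numbers.  The sample values below are invented for illustration, not taken from any datasheet, and the dynamic range formula is the common simplification (full well over read noise); EMVA1288 formally uses saturation capacity over the absolute sensitivity threshold.

```python
import math

def dynamic_range_db(full_well_e: float, read_noise_e: float) -> float:
    """Approximate dynamic range in dB: 20*log10(full well / read noise),
    both expressed in electrons."""
    return 20.0 * math.log10(full_well_e / read_noise_e)

def max_snr_db(full_well_e: float) -> float:
    """Best-case SNR at saturation (shot-noise limited): SNR = sqrt(full well)."""
    return 20.0 * math.log10(math.sqrt(full_well_e))

# Invented example values in the spirit of the cheat sheet:
print(dynamic_range_db(30000, 7))  # ~72.6 dB: a deep-well, 1st-gen-style pixel
print(dynamic_range_db(10000, 2))  # ~74.0 dB: a smaller but quieter pixel
print(max_snr_db(30000))           # ~44.8 dB
```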
Don't let yourself get frustrated trying to figure this out on your own!  1st Vision's engineers have over 100 years of combined experience in the machine vision and imaging market!  Our team can help explain the various technical terms mentioned in this post and help select the best image sensor and camera for your application.

Contact 1st Vision

Related Blog posts

What are the attributes to consider when selecting a camera and its performance?

IMX174 vs IMX290 – Battle of the 2-megapixel image sensors: Sony Pregius IMX174 vs. Starvis IMX290

IMX174 vs CMOSIS CMV2000 – CMOS battle between 2MP Sony Pregius and CMOSIS

IMX250 vs ICX625 – 5MP sensor battle between Sony's older CCD and new CMOS models

What are global shutters and rolling shutters in machine vision cameras? How can we use lower cost rolling shutter cameras?

We are often asked, "What is the difference between a global and a rolling shutter image sensor in machine vision cameras?"  Although both take nice pictures, they are very different image sensors, each with pros and cons.  In the end, rolling shutter image sensors cost less, but are not always recommended for moving objects.

In this blog post, we will explain the differences between global and rolling shutter sensors used in machine vision cameras.  Additionally, we highlight how a rolling shutter camera capable of "Global Reset" can provide a low-cost solution for some applications with moving objects.

First, let's explain the differences between rolling and global shutter image sensors in machine vision cameras.

Global Shutter:  Image sensors with a global shutter allow all of the pixels to accumulate charge with the exposure starting and ending at the same time.  At the end of the exposure time the charge is read out simultaneously.  In turn, the image has no motion blur on moving objects, provided the exposure is short enough to stop pixel blur (a topic for another blog).
Global shutter image

Rolling shutter:  Image sensors with a rolling shutter do NOT expose all the pixels at the same time.  Instead, they expose the pixels row by row, with each row having a different start and end time.  The top row of the pixel array is the first to expose and read out its pixel data, followed by the 2nd, 3rd, and 4th rows and so on.  Each row's start and end points are delayed as the full sensor is read out.  The result on moving objects is a skewed image, as shown below (a rough estimate of the skew magnitude follows the figure).
Rolling shutter image
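As a back-of-envelope model (our own simplification, with invented example numbers): the last row starts exposing roughly one full readout time after the first, so a horizontally moving object shifts sideways by its speed times that delay:

```python
def rolling_shutter_skew_px(object_speed_px_per_s: float,
                            line_time_us: float,
                            num_rows: int) -> float:
    """Approximate horizontal skew in pixels between the top and bottom of a
    moving object: speed x (row readout delay accumulated over the frame)."""
    readout_delay_s = line_time_us * 1e-6 * num_rows
    return object_speed_px_per_s * readout_delay_s

# Object crossing the frame at 2000 px/s, 10 us line time, 3000-row sensor:
print(rolling_shutter_skew_px(2000, 10, 3000))  # 60.0 px of skew, clearly visible
```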

What are the pros and cons of each type of shutter?

Global Shutter:  
Pro:  Freeze Frame images with no blur on moving objects.

Con:  Global shutter sensors require a more complicated circuit architecture, which limits the pixel density for a given physical size.  In turn, sensors with a global shutter will have a larger image format, driving up lens cost.  The more complicated circuits also drive up the overall camera cost, making it more expensive than a rolling shutter sensor.

Rolling Shutter:
Pro:  Rolling shutter sensors have a simpler design with smaller pixels, allowing higher resolution in a smaller image format and the use of lower cost lenses.  The simpler pixel design also results in lower camera costs!  For example, Dalsa's 18MP Nano for under $600!

Con:  Image distortion occurs on moving objects due to the row-by-row integration and offset.  Smaller pixels may also require a higher quality lens, which is commonly gauged by the lens's Modulation Transfer Function (MTF).  This is really dependent on your application and can be discussed with a sales engineer; in turn, there may be a small trade-off to consider.

Is there a way to use a lower cost rolling shutter camera on moving objects?  Absolutely: by using the Global Reset mode found in various image sensors.

Using a rolling shutter sensor capable of "Global Reset", such as the AR1820HS found in the 18MP Teledyne Dalsa Nano C4900 camera, will eliminate the image distortion.

A typical rolling shutter image sensor, as described above, exposes the sensor rows separately with a delay, as depicted below.
rolling shutter mode

Using a rolling shutter in global reset mode, all rows start integrating at the same time, as shown below, eliminating the image distortion.  However, it is highly recommended to use a dedicated strobe synced with the start of image acquisition.  Without one, you may see a brightness gradient from top to bottom of the image, along with some pixel blur, because the later rows are exposed longer (see the sketch after the figure).
rolling shutter with global reset mode
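A minimal sketch of why that gradient appears (our own illustrative model, with invented numbers): under global reset every row starts integrating at t=0, but row i is not read out until i line-times later, so its effective exposure grows with row index unless a strobe defines the illumination window:

```python
def effective_exposure_us(base_exposure_us: float, line_time_us: float, row: int) -> float:
    """Effective exposure of a row under global reset: all rows start at t=0,
    but row i keeps collecting light until it is read out i line-times later."""
    return base_exposure_us + row * line_time_us

# 1000 us nominal exposure, 10 us line time:
print(effective_exposure_us(1000, 10, 0))     # 1000 us (top row)
print(effective_exposure_us(1000, 10, 3000))  # 31000 us (bottom row): much brighter
```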

A great camera to consider is the 18MP Teledyne Dalsa Nano C4900, which features the ON-SEMI AR1820HS sensor with this capability.  At a price point under $600, it is one of the lowest cost cameras per pixel on the market.

Contact us

1st Vision has over 100 years of combined experience and can help you with camera, lens and other peripheral recommendations.  If you have questions regarding the various sensor shutters, please do not hesitate to contact us!

Be sure to read our related blog posts:

What is a lens optical format? Can I use any machine vision camera with any format? NOT!

Demystifying Lens performance specifications – MTF