Spatial resolution is an essential machine vision concept


Spatial resolution is determined by the number of pixels in a CMOS or CCD sensor array. While generally speaking “more is better”, what really matters is slightly more complex than that. One needs to know the dimensions and characteristics of the real-world scene at which the camera is directed, as well as the size of the smallest feature(s) to be detected.

Choosing the right sensor requires understanding spatial resolution

The sensor-coverage fit of the lens is also relevant, as is the optical quality of the lens. Lighting, too, impacts the quality of the image.

But independent of lens and lighting, a key guideline is that each minimal real-world feature to be detected should appear in a 3×3 pixel grid in the image.  So if the real-world scene is X by Y meters, and the smallest feature to be detected is A by B centimeters, assuming the lens is matched to the sensor and the scene, it’s just a math problem to determine the number of pixels required on the sensor.

There is a comprehensive treatment of how to calculate resolution in this short article, including a link to a resolution calculator. Understanding these concepts will help you design an imaging system with enough capacity to solve your application, while not over-engineering the solution – enough is enough.

Finally, the above guideline is for monochrome imaging, which, to the surprise of newcomers to machine vision, often yields more effective and cost-efficient outcomes than color. Certainly some applications depend on color. The guideline for color imaging is that the minimal feature should occupy a 6×6 pixel grid.
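To make the “math problem” above concrete, here is a minimal Python sketch. It assumes a rectangular scene, a lens matched to the sensor, and the 3×3 (monochrome) or 6×6 (color) guideline; the function name and values are illustrative only, and a real design should add margin for lens distortion and aspect-ratio rounding.

```python
import math

def required_sensor_pixels(scene_w_m, scene_h_m, feature_w_m, feature_h_m,
                           pixels_per_feature=3):
    """Minimum horizontal and vertical pixel counts so the smallest feature
    spans the guideline grid (3 for monochrome, 6 for color)."""
    px_w = math.ceil(scene_w_m / feature_w_m * pixels_per_feature)
    px_h = math.ceil(scene_h_m / feature_h_m * pixels_per_feature)
    return px_w, px_h

# Example: a 2.0 m x 1.5 m scene with a 2 cm x 2 cm smallest feature, monochrome
print(required_sensor_pixels(2.0, 1.5, 0.02, 0.02))       # -> (300, 225)
# Same scene in color (6x6 guideline)
print(required_sensor_pixels(2.0, 1.5, 0.02, 0.02, 6))    # -> (600, 450)
```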

If you’d like someone to double-check your calculations, or to prepare them for you and recommend a sensor, camera, optics, and/or software, the sales engineers at 1stVision have the expertise to support you. Give us a brief idea of your application and we will contact you to discuss camera options.

Contact us

1st Vision’s sales engineers have an average of 20 years experience to assist in your camera selection.  Representing the largest portfolio of industry leading brands in imaging components, we can help you design the optimal vision solution for your application.

About Us | 1stVision

1st Vision is the most experienced distributor in the U.S. of machine vision cameras, lenses, frame grabbers, cables, lighting, and software.

Keys to Choosing the Best Image Sensor

Image sensors are the key component of any camera and vision system. This blog summarizes a tech brief addressing concepts essential to sensor performance relative to imaging applications. For a comprehensive analysis of the parameters, you may read the full tech brief.

Download Tech Brief - Choosing the Best Image Sensor

While there are many aspects to consider, here we outline 6 key parameters:

  1. Physical parameters


    Resolution: The amount of information per frame (image) is the product of the horizontal pixel count (x) and the vertical pixel count (y). While consumer cameras boast of resolution like car manufacturers tout horsepower, in machine vision one just needs enough resolution to solve the problem – but not more. Too much resolution means more sensor, more bandwidth, and more cost than you need. Takeaway: Match sensor resolution to optical resolution relative to the object(s) you must image.

    Aspect ratio: Whether 1:1, 3:2, or some other ratio, the optimal arrangement should correspond to the layout of your target’s field of view, so as not to buy more resolution than is needed for your application.



    Frame rate: If your target is moving quickly, you’ll need enough images per second to “freeze” the motion and to keep up with the physical space you are imaging. But as with resolution, one needs just enough speed to solve the problem, and no more, or you will over-specify and then need a faster computer, cabling, etc. A rough worked example appears after the “Optical format” point below.

    Optical format: One could write a thesis on this topic, but the key takeaway is to match the lens’ projection of focused light to the sensor’s array of pixels, so that it covers the sensor (and makes use of its resolution). Sensor sizes and lens sizes often have legacy names left over from TV standards now decades old, so we’ll skip the details in this blog but invite the reader to read the linked tech brief or speak with a sales engineer to ensure the best fit.
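As promised under “Frame rate” above, here is a rough, hypothetical sketch of how target speed constrains exposure time and frame rate. It assumes motion along the imaged width, a blur budget of roughly one pixel per exposure, and that the target must be captured at least once while it is inside the field of view; an actual system should apply additional margin.

```python
def motion_imaging_budget(fov_width_m, h_pixels, speed_m_s):
    """Back-of-the-envelope exposure and frame-rate limits for a moving target."""
    pixel_footprint_m = fov_width_m / h_pixels        # scene coverage of one pixel
    max_exposure_s = pixel_footprint_m / speed_m_s    # keep motion blur under ~1 pixel
    min_fps = speed_m_s / fov_width_m                 # capture before the target crosses the FOV
    return max_exposure_s, min_fps

# Example: 0.5 m field of view across 2048 pixels, target moving at 1 m/s
exposure, fps = motion_imaging_budget(0.5, 2048, 1.0)
print(f"max exposure ~{exposure * 1e6:.0f} us, min frame rate ~{fps:.1f} fps")
```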

  2. Quantum Efficiency and Dynamic Range


    Quantum Efficiency (QE): Sensors vary in their efficiency at converting photons to electrons, both by sensor quality and across wavelengths of light, so some sensors are better suited to certain applications than others.

    Typical QE response curve

    Dynamic Range (DR): Factors such as Full Well Capacity and Read Noise determine DR, which is the ratio of maximum signal to the minimum.  The greater the DR, the better the sensor can capture the range of bright to dark gradations from the application scene.
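As a quick numeric illustration of the relationship just described, the snippet below converts a full well capacity and read noise figure into a dynamic range ratio, a dB value, and an equivalent bit depth. The numbers are made up for illustration and are not taken from any particular sensor’s datasheet.

```python
import math

def dynamic_range(full_well_e, read_noise_e):
    """Dynamic range as a ratio, in decibels, and in equivalent bits."""
    ratio = full_well_e / read_noise_e
    return ratio, 20 * math.log10(ratio), math.log2(ratio)

# Example: 10,000 e- full well capacity and 2 e- read noise (illustrative values)
ratio, db, bits = dynamic_range(10_000, 2)
print(f"{ratio:.0f}:1  ~{db:.1f} dB  ~{bits:.1f} bits")   # 5000:1, ~74.0 dB, ~12.3 bits
```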

  3. Optical parameters

    While some seemingly-color applications can in fact be solved more easily and cost-effectively with monochrome, in either case each silicon-based pixel converts light (photons) into charge (electrons). Each pixel well has a maximum amount of charge it can hold before saturating. After each exposure, the amount of charge in a given pixel correlates to the amount of light that impinged on that pixel.

  4. Rolling vs. Global shutter

    Most current sensors support global shutter, where all pixel rows are exposed at once, eliminating the motion artifacts that row-by-row (rolling) exposure can introduce. But the on-sensor electronics to achieve global shutter carry certain costs, so for some applications it can still make sense to use rolling shutter sensors.

  5. Pixel Size

    Just as a wide-mouth bucket will catch more raindrops than a coffee cup, a larger physical pixel will admit more photons than a small one. Generally speaking, large pixels are preferred. But larger pixels require more silicon, at greater expense, to support a desired x by y array. Sensor manufacturers work to optimize this tradeoff with each new generation of sensors.

  6. Output modes

    While each sensor typically has a “standard” intended output at full resolution, many sensors offer additional switchable output modes such as Region of Interest (ROI), binning, or decimation. Such modes typically read out a defined subset (or combination) of the pixels at a higher frame rate, which can allow the same sensor and camera to serve two or more purposes. An example of binning would be a microscopy application in which a binned image at high speed is used to locate a target blob in a large field, after which the camera switches to full resolution for a high-quality detail image.
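To illustrate what binning does to the data, here is a minimal software sketch (using NumPy) that sums each 2×2 block of a monochrome frame into one value. On a real camera the combining happens on or near the sensor before readout, which is what yields the higher frame rate; this snippet only shows the effect on resolution and signal.

```python
import numpy as np

def bin_2x2(img):
    """Sum each 2x2 pixel block into one value, quartering the pixel count."""
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2]      # trim odd edges if any
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint16)
print(bin_2x2(frame).shape)   # (240, 320): quarter the pixels, ~4x the signal per output pixel
```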

For a more in-depth review of these concepts, including helpful images and diagrams, please download the tech brief.

Download tech brief - Choosing the Best Image Sensor


What Are the Benefits of CMOS vs CCD Machine Vision Cameras?

Industrial machine vision cameras historically used CCD image sensors, but the industrial imaging marketplace is transitioning to CMOS imagers. Why is this? Sony, the primary supplier of image sensors, announced in 2015 that it would stop making CCD image sensors, and the last-time-buy date has already passed. The market was nervous at first, until we experienced the new CMOS image sensor designs. The latest Sony Pregius image sensors provide increased performance at lower cost, making it compelling to update systems that use older CCD image sensors.

What is the difference between CCD and CMOS image sensors in machine vision cameras?

Both produce an image by converting light energy (photons) into an electrical charge, but the process is done very differently.

In CCD image sensors, each pixel collects charge, which is then moved across the chip through vertical and horizontal shift registers and sampled in the readout circuitry. Essentially it’s a bucket brigade moving the pixel information around, which takes time and power.

In CMOS sensors, each pixel has the readout circuitry located at the photosensitive site. The analog-to-digital circuitry samples the information very quickly and eliminates artifacts such as smear and blooming. The pixel architecture has also changed radically, making the photosensitive area more efficient at collecting light.

6 advantages of CMOS image sensors vs CCD

There are many advantages of CMOS versus CCD machine vision cameras, outlined below:
1 – Higher sensitivity due to the latest pixel architecture, which is beneficial in lower-light applications.
2 – Lower dark noise contributes to a higher-fidelity image.
3 – Improved pixel well depth (saturation capacity), providing higher dynamic range.
4 – Lower power consumption. This matters because lower heat dissipation means a cooler camera and less noise.
5 – Lower cost! 5 Megapixel cameras used to cost ~$2,500 and only achieve 15 fps; they now cost ~$450 with increased frame rates.
6 – Smaller pixels reduce the required sensor format, decreasing lens cost (see the sizing sketch below).
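To illustrate point 6, the sketch below computes the active-area size implied by a pixel count and pixel pitch; the resolutions and pitches used are illustrative, not tied to specific sensors. Smaller pixels shrink the sensor diagonal, which in turn allows a smaller (and typically cheaper) optical format lens.

```python
def sensor_dimensions_mm(h_pixels, v_pixels, pixel_size_um):
    """Active-area width, height, and diagonal implied by pixel count and pitch."""
    w = h_pixels * pixel_size_um / 1000.0
    h = v_pixels * pixel_size_um / 1000.0
    return w, h, (w**2 + h**2) ** 0.5

# Same 5 MP resolution at two pixel pitches (illustrative numbers)
print(sensor_dimensions_mm(2448, 2048, 3.45))  # ~8.4 x 7.1 mm, ~11.0 mm diagonal (~2/3" format)
print(sensor_dimensions_mm(2448, 2048, 5.5))   # ~13.5 x 11.3 mm, ~17.6 mm diagonal (> 1" format)
```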


Which CMOS image sensors cross over from existing CCD image sensors?

1stVision can help in the transition, starting by crossing over CCDs to CMOS using the following cross-reference chart. Once the sensor is identified, use the camera selector and select the sensor from the pull-down menu.

Sony CCD to CMOS cross reference chart

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera selection. With a large portfolio of lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

Ph:  978-474-0044  /  info@1stvision.com  / www.1stvision.com


CCD vs CMOS industrial cameras – Learn how CMOS image sensors excel over CCD!

CMOS image sensors used in machine vision industrial cameras are now the image sensor of choice! But why is this?

Allied Vision conducted a nice comparison between CCD and CMOS cameras showing the advantages in the latest Manta cameras.

Until recently, CCD was generally recommended for better image quality with the following properties:

  • High pixel homogeneity, low fixed pattern noise (FPN)
  • Global shutters for machine vision applications requiring very short exposure times

In the past, CMOS image sensors were chosen for these existing advantages:

  • High frame rates and lower power consumption
  • No blooming or smear image artifacts, unlike CCD image sensors
  • High Dynamic Range (HDR) modes for acquisition of contrast-rich and extremely bright objects.

Today, CMOS image sensors offer many more advantages in industrial cameras versus CCD image sensors, as detailed below.

The overall key advantage is better image quality than earlier CMOS sensors, thanks to higher sensitivity, lower dark noise and spatial noise, and higher quantum efficiency (QE), as seen in the specifications comparing a CCD and a CMOS camera.

Sony ICX655 CCD vs. a Sony IMX264 CMOS sensor

Comparing the specifications between CCD and CMOS  industrial cameras, the advantages are clear.

  • Higher Quantum Efficiency (QE) – 64% vs. 49%, where higher is better at converting photons to electrons.
  • Pixel well depth (saturation capacity) – 10,613 electrons (e-) vs. 6,600 e-, where a higher well depth is beneficial.
  • Dynamic range (DYN) – CMOS provides almost 17 dB more dynamic range, partly a result of the deeper pixel wells along with lower noise.
  • Dark noise – CMOS is significantly lower than CCD, with only 2 electrons vs. 12!

Images are always worth a thousand words!  Below are several comparison images contrasting the latest Allied Vision CMOS industrial cameras vs CCD industrial cameras.

The dynamic range of today’s CMOS image sensors is attributable to several of the characteristics above, and can provide higher-fidelity images with better dynamic range and lower dark noise, as seen in this image comparison of a couple of electronic parts.

The comparison above illustrates how higher contrast can be achieved with high dynamic range and low noise in the latest CMOS industrial cameras.

  • High noise in the CCD image causes low contrast between characters on the integrated circuit, whereas the CMOS sensor provides higher contrast.
  • Increased dynamic range in the CMOS image allows darker and brighter areas of an image to be seen. The battery (left part) is not as saturated as in the CCD image, allowing more detail to be observed.

Current CMOS image sensors eliminate several artifacts and provide more useful images for processing. The images below show a PCB with illuminated LEDs, imaged with a CCD vs. a CMOS industrial camera.

CMOS images show less blooming of bright areas (the LEDs in the image, for example), less smearing (the vertical lines seen in the CCD image), and lower noise (as seen in the darker areas), providing higher overall contrast.

  • Smearing (the vertical lines seen in the CCD image) is eliminated with CMOS. Smear has long been an inherent artifact of CCDs.
  • The dynamic range inherent to CMOS sensors keeps the LEDs from saturating as much as in the CCD image, allowing more detail to be seen.
  • Lower noise in the CMOS image, as seen in the bottom line graph, yields a cleaner image.

More advantages of new CMOS image sensors include:

  • Higher frame rates and shutter speeds than CCD, resulting in less image blur with fast-moving objects.
  • The much lower cost of CMOS sensors translates into much lower-cost cameras!
  • Improved global shutter efficiency.

CMOS image sensor manufacturers are also working to design sensors that easily replace CCD sensors, making for an easy transition with lower cost and better performance. Allied Vision has several new cameras replacing current CCDs, with more to come! Below are a few popular cameras / image sensors that have recently been crossed over to CMOS image sensors.

The Sony ICX424 and Sony ICX445 (1/3″ sensors), found in the Manta G-032 and Manta G-125 cameras, are now replaced by the Sony IMX273 in the Manta G-158 camera, keeping the same sensor size. (Read more here)

The Sony ICX424 (1/3″ sensor) can also be replaced by the Sony IMX287 (1/2.9″ sensor), whose 6.9 µm pixels closely match the older ICX424’s 7.4 µm pixels. The Allied Vision Manta G-040 is a nice solution with all the benefits of the latest CMOS image sensor technology. View the short videos below for the highlights.

Contact us
