Depth of Field – a balancing act

Most who are involved with imaging have at least some understanding of depth of field (DoF). DoF is the distance between the nearest and furthest points that are acceptably in focus. In portrait photography, one sometimes seeks a narrow depth of field to draw attention to the subject, while intentionally blurring the background to a “soft focus”. But in machine vision, it’s often preferred to maximize depth of field – that way if successive targets vary in their Z dimension – or if the camera is on a moving vehicle – the imaging system can keep processing without errors or waste.

Making it real

Suppose you need to see small features on an item whose height (Z dimension) varies. You estimate you need a 1″ depth of field. You know you've got plenty of light. So you set the lens to f/11, because the datasheet shows you'll reach the desired depth of field. But you can't resolve the details! What's up?

So I should maximize DoF, right?

Well generally speaking, yes – to a point: the point where diffraction begins to degrade resolution. Read on for a practical overview of the key concepts, plus a rule of thumb to guide you through this complex topic without much math.

Aperture, F/#, and Depth of Field

Aperture size and F/# are inversely correlated: a low f/# corresponds to a large aperture, and a high f/# signifies a small aperture. See our blog on F-numbers (aka F-stops) for how F-numbers are calculated, along with some practical guidance.
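If you'd like to see the relationship in numbers, here's a minimal Python sketch (illustrative figures only; your lens datasheet has the real values):

    # F-number is the ratio of focal length to aperture (entrance pupil) diameter.
    def f_number(focal_length_mm, aperture_diameter_mm):
        return focal_length_mm / aperture_diameter_mm

    # A 25 mm lens with a 12.5 mm entrance pupil is an f/2 lens:
    print(f_number(25, 12.5))  # 2.0

    # Each full stop (f/2 -> f/2.8 -> f/4 ...) halves the light gathered,
    # since light scales with aperture area, i.e. with 1/(f#)^2:
    for fnum in [2.0, 2.8, 4.0, 5.6, 8.0, 11.0]:
        relative_light = (2.0 / fnum) ** 2  # normalized to f/2
        print(f"f/{fnum}: {relative_light:.2f}x the light of f/2")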

Per the illustration below, a large aperture restricts DoF, while a small aperture maximizes the DoF. Please take a moment to compare the upper and lower variations in this diagram:

Correlation between aperture and Depth of Field – Courtesy Edmund Optics

If we maximize depth of field…

So let’s pursue maximizing depth of field for a moment. Narrow the aperture to the smallest setting (the largest F-number), and presto you’ve got maximal DoF! Done! Hmm, not so fast.

First challenge – do you have enough light?

Narrowing the aperture sounds great in theory, but each stop one narrows the aperture halves the amount of light. The camera sensor needs to receive sufficient photons in the pixel wells, according to the sensor's quantum efficiency, to create an image with the contrast necessary for processing. If there is no motion in your application, perhaps you can just take a longer exposure, or add supplemental lighting. But if you do have motion, or can't add more light, you may not be able to narrow the aperture as far as you hoped.
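To put a number on the penalty, here's a quick sketch for the hypothetical case of stopping down from f/4 to f/11:

    # Light scales with 1/(f#)^2, so stopping down from f/4 to f/11
    # (three full stops) costs roughly 8x the exposure time.
    old_fnum, new_fnum = 4.0, 11.0
    factor = (new_fnum / old_fnum) ** 2
    print(f"Exposure must increase ~{factor:.1f}x to compensate")  # ~7.6x

With a moving target, an 8x longer exposure usually means motion blur, which defeats the purpose.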

Second challenge – the Airy disk and diffraction

When light passes through an aperture, diffraction occurs – the bending of waves around the edge of the aperture. The pattern from a ray of light that falls upon the sensor takes the form of a bright circular area surrounded by a series of weakening concentric rings. This is called the Airy disk. Without going into the math, the Airy disk is the smallest point to which a beam of light can be focused.
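The standard approximation for the Airy disk diameter (to the first dark ring) is 2.44 x wavelength x f/#. A quick sketch, assuming green light at 550 nm:

    # Airy disk diameter ~= 2.44 * wavelength * f/# (to the first dark ring).
    def airy_disk_um(f_number, wavelength_nm=550):
        return 2.44 * (wavelength_nm / 1000.0) * f_number  # microns

    for fnum in [2.8, 5.6, 11.0]:
        print(f"f/{fnum}: Airy disk ~{airy_disk_um(fnum):.1f} um")
    # f/2.8 -> ~3.8 um, f/5.6 -> ~7.5 um, f/11 -> ~14.8 um

Note that at f/11 the focused spot is roughly 15 um across – which explains the opening scenario, where fine details vanished on a sensor with much smaller pixels.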

And while stopping down the aperture increases the DoF, our stated goal, it has the negative impact of increasing diffraction.

Diffraction increases as the aperture becomes smaller – Courtesy Edmund Optics

Diffraction limits

As the focused patterns from adjacent features – the details in your application that you want to discern – approach each other, they start to overlap. This creates interference, which in turn reduces contrast.

Every lens, no matter how well it is designed and manufactured, has a diffraction limit: the maximum resolving power of the lens, expressed in line pairs per millimeter. And there is no point generating Airy disk patterns from adjacent real-world features if those patterns are larger than the sensor's pixels, or the all-important contrast will not be achieved.
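For a diffraction-limited (i.e., otherwise perfect) lens, the textbook cutoff frequency is 1 / (wavelength x f/#). A sketch at 550 nm – real lenses fall somewhat short of these ideal numbers:

    # Diffraction-limited cutoff of an ideal lens: 1 / (wavelength * f#).
    wavelength_mm = 550e-6  # 550 nm expressed in millimeters
    for fnum in [2.8, 5.6, 11.0]:
        cutoff_lp_mm = 1.0 / (wavelength_mm * fnum)
        print(f"f/{fnum}: diffraction cutoff ~{cutoff_lp_mm:.0f} lp/mm")
    # f/2.8 -> ~649 lp/mm, f/5.6 -> ~325 lp/mm, f/11 -> ~165 lp/mm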


High magnification example

Suppose you have a candidate camera with 3.45um pixels, and you want to pair it with a machine vision lens capable of 2x, 3x, or 4x magnification. You’ll find the Airy disk is 9um across! Something must be changed – a sensor with larger pixels, or a different lens.
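Why does magnification make things worse? At magnification the effective ("working") f-number grows, roughly f/# x (1 + magnification). A sketch with assumed values (a nominal f/2.8 lens, 550 nm light) – the exact figures depend on your lens, but the trend matches the 9um example above:

    # Working f-number grows with magnification: f#_w ~= f# * (1 + mag),
    # which enlarges the Airy disk well beyond small pixels.
    pixel_um, nominal_fnum, wavelength_um = 3.45, 2.8, 0.55
    for mag in [2, 3, 4]:
        working_fnum = nominal_fnum * (1 + mag)
        airy_um = 2.44 * wavelength_um * working_fnum
        print(f"{mag}x: Airy disk ~{airy_um:.1f} um vs {pixel_um} um pixels")
    # 2x -> ~11.3 um, 3x -> ~15.0 um, 4x -> ~18.8 um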

As a rule of thumb, 1um resolution with machine vision lenses is about the best one can achieve. For higher resolution, there are specialized microscope lenses. Consult your lensing professional, who can guide you through sensor and lens selection in the context of your application.

Lens data sheets

Just a comment on lens manufacturers and the data they provide. While the machine vision field involves many details, it's quite transparent in terms of standards and performance data, and manufacturers' product datasheets contain a wealth of information. For example, take a look at Edmund Optics lenses, then pick any lens family, then any lens model. You'll find a clickable datasheet link where you can see MTF graphs showing resolution performance in LP/mm, DoF graphs at different F#s, etc.

Takeaway

Per the blog's title, depth of field is a balancing act between sharpness and blur. It's physics. Pursue the links embedded in the blog, or study optical theory, if you want to dig into the math. Or just call us at 978-474-0044.

Contact us

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Helios2 Ray Outdoor Time of Flight camera by Lucid Vision Labs

Helios2 Outdoor ToF camera – Courtesy Lucid Vision Labs

Time of Flight

The Time of Flight (ToF) method for 3D imaging isn’t new. Lucid Vision Labs is a longstanding leader in 3D ToF imaging. To brush up on ToF vs. other 3D methods, see a prior blog on Types of 3D imaging: Passive Stereo, Structured Light, and Time of Flight (ToF).

Helios2 Ray 3D camera

What is new are the Helios2 Ray 3D ToF outdoor* camera models. With working distances (WD) from 0.3 meters up to 8.3 meters, exterior applications like infrastructure inspection, environmental monitoring, and agriculture may be enabled – or enhanced – with these cameras. That WD in imperial units is from 1 foot up to 27 feet, providing tremendous flexibility to cover many applications.

(*) While rated for outdoor use, the Helios2 3D camera may also be used indoors, of course.

The camera uses a Sony DepthSense IMX556 CMOS back-illuminated ToF image sensor. It provides its own laser lighting via 940nm VCSEL laser diodes, which operate in the infrared (IR) spectrum, beyond the visible spectrum. So it’s independent of the ambient lighting conditions, and self-contained with no need for supplemental lighting.

Operating up to 30 fps, the camera and computer host build 3D point clouds your application can act upon. Dust and moisture protection to the IP67 standard is assured, with robust shock, vibration, and temperature performance as well. See specifications for details.

Example – Agriculture

Outdoor plants imaged in visible spectrum with conventional camera – Courtesy Lucid Vision Labs
Colorized pseudo-image from 3D point cloud – Courtesy Lucid Vision Labs

Example – Industrial

Visible spectrum image with sunlight and shadows – Courtesy Lucid Vision Labs
Pseudo-image from point cloud via Helios2 Ray – Courtesy Lucid Vision Labs

Arena SDK

The Arena SDK makes it easy to configure and control the camera and its images. It provides 2D and 3D views. With the 2D view one can see the intensity and depth of the scene. The 3D view shows the point cloud, which the user can rotate in real time. Of course, the point cloud data may also be processed algorithmically, to record quality measurements, guide a robot arm or vehicle, etc.
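To give a flavor of "processed algorithmically", here's a minimal sketch (plain NumPy, not Arena SDK code) that back-projects a depth map into an XYZ point cloud using the pinhole model; the intrinsics fx, fy, cx, cy are hypothetical stand-ins for your camera's real calibration values:

    import numpy as np

    def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
        # Pinhole back-projection: pixel (u, v) at depth z maps to (x, y, z)
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        return np.dstack((x, y, depth_m)).reshape(-1, 3)  # N x 3, in meters

    depth = np.full((480, 640), 2.0)  # fake flat scene 2 m from the camera
    cloud = depth_to_point_cloud(depth, 525.0, 525.0, 320.0, 240.0)
    print(cloud.shape)  # (307200, 3)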

Call us at 978-474-0044. Or follow the contact us link below to provide your information, and we’ll call you.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Teledyne Dalsa Linea2 4k 5GigE camera

The new Linea2 4k color camera with a 5GigE interface delivers RGB images at a max line rate of 42 kHz x3 (one line each for red, green, and blue). That's 5x the bandwidth of the popular 1 GigE Linea cameras.
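A quick back-of-envelope check makes the 5GigE requirement clear (assuming 4096-pixel lines and 8 bits per color channel):

    # Data rate = pixels per line x line rate x color channels
    pixels, line_rate_hz, channels = 4096, 42_000, 3
    bytes_per_s = pixels * line_rate_hz * channels
    print(f"{bytes_per_s / 1e6:.0f} MB/s = {bytes_per_s * 8 / 1e9:.1f} Gbit/s")
    # ~516 MB/s = ~4.1 Gbit/s: far beyond 1 GigE, comfortable on 5GigE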

Linea2 4k color cameras with 5GigE – courtesy Teledyne Dalsa

Perhaps you already use the Linea GigE cameras, at 1 GigE, and seek an upgrade path to higher performance in an existing application. Or you may have a new application for which Linea2 performance is the right fit. Either way, Linea2 builds on the foundation of Teledyne DALSA’s Linea family.

Why line scan?

While area scan is the right fit for certain applications, compare area scan to line scan for the hypothetical application illustrated below:

Area scan vs. Line scan – courtesy Teledyne DALSA

With an area scan solution, you'd need multiple cameras to cover the field of view (FOV), plus you'd have to manage lighting and frame rate to avoid smear and frame overlaps. With line scan, a single camera delivers high resolution without smear, making it ideal for inspecting a moving surface.
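Sizing a line scan camera is mostly arithmetic. Here's a sketch with hypothetical numbers (a 350 mm wide web moving at 2 m/s, imaged across 4096 pixels):

    # Required line rate for square pixels on a moving web
    web_width_mm, speed_mm_s, pixels = 350.0, 2000.0, 4096
    pixel_size_mm = web_width_mm / pixels       # cross-web sampling
    line_rate_hz = speed_mm_s / pixel_size_mm   # match down-web sampling
    print(f"{pixel_size_mm * 1000:.0f} um/pixel -> ~{line_rate_hz / 1e3:.1f} kHz")
    # ~85 um/pixel -> ~23.4 kHz, well within Linea2's 42 kHz maximum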

Call us at 978-474-0044 to tell us about your application, and we can guide you to a suitable line scan or area scan camera for your solution. Of course we also have the right lenses, lighting, and other components.

Sensor

The Trilinear CMOS line scan sensor is Teledyne’s own 4k color design, with outstanding spectral responsivity as shown below:

Linea2 Color responsivity – courtesy Teledyne DALSA

The integrated IR-cut filters ensure true-color response is delivered on the native RGB data outputs.

Interface

With a 5GigE Vision interface, the Linea2 provides 5x the bandwidth of the conventional GigE interface, but can use the same Cat5e or Cat6 network cables – and does not require a frame grabber.

Software

The Sapera LT software development kit is recommended, featuring:

  • Intuitive CamExpert graphical user interface for configuration and setup
  • Trigger-To-Image Reliability tool (T2IR) for system monitoring

Sapera LT has over 500,000 installations worldwide. Thanks to the 5GigE Vision interface, popular third party software is of course also compatible.

Applications

Application examples – courtesy Teledyne DALSA

While not limited to those listed below, known and suggested uses include:

  • Printing inspection
  • Web inspection
  • Food, recycling, and material sorting
  • Printed circuit board inspection
  • etc.

Call us at 978-474-0044. Or follow the contact us link below to provide your information, and we’ll call you.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Webcam vs. machine vision camera

Webcams aren’t (yet) found in Cracker Jack boxes, but they are very inexpensive. And they seem to perform ok for Zoom meetings or rendering a decent image of an office interior. So why not just use a webcam as the front end for a machine vision application?

Before we dig into the analysis and rationale, let's motivate the discussion with the following side-by-side images of the same printed circuit board (PCB):

Machine vision camera and lens vs. webcam – Courtesy 1stVision

Side-by-side images

In the image pair above, the left image was generated with a 20MP machine vision camera and a high resolution lens. The right image used a webcam with a consumer sensor and optics.

Both were used under identical lighting, and optimally positioned within their specified operating conditions, etc. In other words we tried to give the webcam a fair chance.

Even in the image pair above, the left image looks crisp with good contrast, while the right image has poor contrast. That's clear even at a wide field of view (FOV). But let's zoom in:

Clearly readable labeling and contact points (left) vs. poor contrast and fuzzy edges (right)

Which image would you prefer to pass to your machine vision software for processing? Exactly.

Machine vision cameras with lens mounts that accept lenses for different applications

Why is there such a big difference in performance?

We’re all so used to smartphones that take (seemingly) good images, and webcams that support our Zoom and Teams meetings, that we may have developed a bias towards thinking cameras have become both inexpensive and really good. It’s true that all cameras continue to trend less expensive over time, per megapixel delivered – just as with Moore’s law in computing power.

As for the seemingly-good perception, if the images above haven’t convinced you, it’s important to note that:

  1. Most webcam and smartphone images are wide-angle, with a large field of view (FOV)
  2. Firmware algorithms may smooth values among adjacent pixels to render “pleasing” images or speed up performance

Most machine vision applications, on the other hand, demand precise details – so firmware-smoothed regions may look nice on a Zoom call, but could totally miss the defect discovery that might be the very goal of your application!

Software

Finally, software (or the lack thereof) is at least as important as the image quality delivered by the lens and sensor. With a webcam, one just gets an image burped out, and nothing more.

Conversely, with a machine vision camera, not only is the camera image better, but one gets a software development kit (SDK). With the SDK, one can:

  • Configure the camera’s parameters relative to bandwidth and choice of image format, to manage performance requirements
  • Choose between streaming vs. triggering exposures (via hardware or software trigger) – trigger allows synchronizing to real world events or mechanisms such as conveyor belt movement, for example
  • Access machine vision library functions such as edge detection (see the sketch below), blob analysis, occlusion detection, and other sophisticated image analysis tools
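To make that last point concrete, here's an illustrative OpenCV sketch of the kind of downstream processing a machine vision pipeline runs on SDK-acquired images; the file names are hypothetical:

    import cv2

    # Canny edge detection stands in for the edge-detection functions above
    image = cv2.imread("pcb.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    assert image is not None, "could not load image"
    edges = cv2.Canny(image, threshold1=50, threshold2=150)
    cv2.imwrite("pcb_edges.png", edges)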

Proprietary SDKs vs. 3rd party SDKs

Speaking of SDKs, the camera manufacturers' own SDKs are often very powerful and user friendly. Just to name a few: Teledyne Dalsa offers Sapera, Allied Vision provides Vimba, and IDS Imaging supports both IDS Lighthouse and IDS Peak.

Compare to Apple or Microsoft in the computing sector – they provide bundled software like Safari and Edge, respectively. They work hard on interoperability of their laptops, tablets, and smartphones, to make it attractive for users to see benefits from staying within a specific manufacturer’s product families. Machine vision camera companies do the same thing – and many users like those benefits.

Vision standards – Courtesy Association for Advancing Automation

Some users prefer 3rd party SDKs that help maintain independence to choose cameras best-suited to a given task. Thanks to machine vision industry standards like GigE Vision, USB3 Vision, Camera Link, and GenICam, 3rd party SDKs like MATLAB, OpenCV, Halcon, LabVIEW, and CVB provide powerful functionality that is vendor-neutral relative to the camera manufacturer.


For a deeper dive into machine vision cameras vs. webcams, including the benefits of lens selection, exposure controls, and design-in availability over time, see our article: “Why shouldn’t I buy a $69 webcam for my machine vision application?” Or just call us at 978-474-0044.

In summary, yes a webcam is a camera. For a sufficiently “coarse” area scan application, such as presence/absence detection at low resolution – a webcam might be good enough. Otherwise note that machine vision cameras – like most electronics – are declining in price over time for a given resolution, and the performance benefits – including software controls – are very compelling.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!