Explained: Trifecta of lens f-stop, wavelength and Airy disc

In this blog we tackle a set of issues well known to experts. The topic is complex enough to be non-obvious, yet easy enough to grasp through this short tutorial. And it's better learned from a no-cost article than through trial and error.

As an alternative to reading on, let us help you get the optics right for your application. Or read on and then let us help you anyway. Helping machine vision customers choose optimal components is what we do. We've staked our reputation on it.

Aperture size and F-stop

Most understand that the F-stop on a lens specifies the size of the aperture. Follow that last link to reveal the arithmetic, if you like, but the key practical point is that F-stop values are inversely correlated with aperture size. So a large F-number like f/8 indicates a narrow aperture, while a small F-number like f/1.4 corresponds to a large aperture. Some lens designs span a wider range of F-numbers than others, but the inverse correlation always applies.

Iris controls the aperture – Courtesy Edmund Optics
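To make the relationship concrete, here's a minimal Python sketch of the f-number arithmetic (N = focal length / aperture diameter); the 25 mm focal length is just an illustrative value:

```python
# Minimal sketch: f-number N = focal_length / aperture_diameter,
# so aperture_diameter = focal_length / N. Hypothetical 25 mm lens.
focal_length_mm = 25.0

for f_number in (1.4, 2.8, 5.6, 8.0):
    aperture_mm = focal_length_mm / f_number
    print(f"f/{f_number}: aperture diameter ≈ {aperture_mm:.1f} mm")

# f/1.4 → ~17.9 mm, f/8 → ~3.1 mm: the larger the F-number,
# the narrower the aperture.
```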

Maximizing contrast might seem to suggest a large aperture

For machine vision it's always important to maximize contrast. The target object can only be discerned when it is sufficiently contrasted against the background or other objects. Effective lighting and lensing are crucial, in addition to a camera sensor that's up to the task.

“Maximizing light” (without over-saturating) is often a challenge, unless one adds artificial light. That would tend to suggest using a large aperture to let more light pass while still keeping exposure time short enough to “freeze” motion or maximize frames per second.

So for the moment, let's hold the thought that a large aperture sounds promising. Spoiler alert: we'll soften that position in light of what follows.

Depth of Field – DoF

While a large aperture seems attractive so far, one argument against that is depth of field (DoF). In particular, the narrowest effective aperture maximizes depth of field, while the largest aperture minimizes DoF.

Correlation of aperture size and depth of field – Courtesy Edmund Optics

Depending on the lens design, the difference in DoF between the largest and smallest apertures may vary from as little as a few millimeters to as much as many centimeters. Your application knowledge will tell you how much wiggle room you've got on DoF.
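For a feel of the magnitudes, here's a hedged sketch using the common first-order approximation DoF ≈ 2·N·c·u²/f² (valid when subject distance u is large relative to focal length f and the DoF is small relative to u). The working distance, focal length, and circle of confusion below are hypothetical:

```python
# Rough sketch of how aperture drives depth of field, using the common
# first-order approximation DoF ≈ 2·N·c·u² / f². N is the f-number,
# c the circle of confusion. All values are hypothetical.
f_mm = 25.0   # focal length
u_mm = 500.0  # working distance
c_mm = 0.011  # circle of confusion, e.g. a few small sensor pixels

for N in (1.4, 2.8, 8.0):
    dof_mm = 2 * N * c_mm * u_mm**2 / f_mm**2
    print(f"f/{N}: DoF ≈ {dof_mm:.1f} mm")

# f/1.4 gives only ~12 mm of usable depth; f/8 gives ~70 mm.
```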

So what’s the sweet spot for aperture?

Barring further arguments to the contrary, the largest aperture that still provides sufficient depth of field is a good rule of thumb.

Where do diffraction limits and the Airy disc come into it?

Optics is a branch of physics. And just as temperature has absolute zero and gases obey Boyle's law, there are certain constraints and limits that apply to optics.

Whenever light passes through an aperture, diffraction occurs – the bending of waves around the edge of the aperture. The pattern a ray of light forms when it falls upon the sensor is a bright circular core surrounded by a series of weakening concentric rings. This is called the Airy disc. Without going deep into the math, the Airy disc is the smallest spot to which a beam of light can be focused.
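For those who do want a taste of the math, the standard result is that the Airy disc diameter (to the first dark ring) is approximately 2.44 · λ · N, where λ is the wavelength and N the f-number. A quick sketch:

```python
# Sketch of the standard Airy disc formula: diameter to the first
# dark ring ≈ 2.44 · λ · N, with wavelength λ and f-number N.
wavelength_um = 0.55  # green light, 550 nm

for N in (1.4, 2.8, 8.0, 16.0):
    airy_um = 2.44 * wavelength_um * N
    print(f"f/{N}: Airy disc ≈ {airy_um:.1f} µm")

# f/1.4 → ~1.9 µm, f/8 → ~10.7 µm: stopping down inflates the spot.
```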

And while stopping down the aperture increases the DoF, our stated goal, it has the negative impact of increasing diffraction.

Correlation of aperture to diffraction pattern – Courtesy Edmund Optics

Diffraction limits

As the focused patterns from adjacent features – the details in your application you want to discern – draw near each other, they start to overlap. This creates interference, which in turn reduces contrast.

Every lens, no matter how well it is designed and manufactured, has a diffraction limit: the maximum resolving power of the lens, expressed in line pairs per millimeter. And if the Airy disc patterns generated by adjacent real-world features grow larger than the sensor's pixels, the all-important contrast will not be achieved.
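As a rough illustration of where that limit bites, the diffraction cutoff frequency is commonly approximated as 1/(λ·N) line pairs per mm, while a sensor with pixel pitch p can sample at most 1/(2p) lp/mm (Nyquist). The sketch below assumes a 2.74 µm pixel pitch, typical of current small-pixel sensors:

```python
# Sketch comparing a lens's diffraction-limited cutoff (≈ 1/(λ·N) lp/mm)
# with what the sensor can sample (Nyquist ≈ 1/(2p) lp/mm for pitch p).
wavelength_mm = 550e-6    # 550 nm, in mm
pixel_pitch_mm = 2.74e-3  # 2.74 µm, in mm

nyquist = 1 / (2 * pixel_pitch_mm)  # ≈ 182 lp/mm

for N in (2.8, 8.0, 16.0):
    cutoff = 1 / (wavelength_mm * N)
    verdict = "sensor-limited" if cutoff > nyquist else "diffraction-limited"
    print(f"f/{N}: cutoff ≈ {cutoff:.0f} lp/mm vs Nyquist {nyquist:.0f} → {verdict}")

# At f/2.8 the lens out-resolves the pixels; by f/16 the diffraction
# cutoff (≈ 114 lp/mm) falls below Nyquist, so contrast collapses first.
```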

And wavelength’s a factor too?

Indeed wavelength is also a contributor to contrast and the Airy disc. As sighted beings, we tend to default to thinking of light as white light or daylight, a composite segment of the spectrum running from indigo through blue, green, yellow, and orange to red – roughly 380 nm to 780 nm. Below 380 nm lies ultraviolet (UV), the next segment of the spectrum. Above 780 nm the next segment is infrared (IR).
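Plugging the visible band's two ends into the same Airy formula sketched above shows why wavelength matters, here at a fixed f/2.8:

```python
# Same Airy formula as above (2.44 · λ · N), evaluated at the edges of
# the visible band at a fixed f/2.8, to isolate the wavelength effect.
for wavelength_um, name in ((0.45, "blue"), (0.65, "red")):
    print(f"{name}: Airy disc ≈ {2.44 * wavelength_um * 2.8:.1f} µm")

# Blue (~3.1 µm) focuses to a markedly smaller spot than red (~4.4 µm).
```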

Monochrome light better than white light

An additional consideration relative to the Airy disc is that monochrome light is better than white light. When light passes through a lens, it refracts (bends) by an amount that varies with wavelength. This is referred to as chromatic aberration.

Transverse and longitudinal chromatic aberration – Courtesy Edmund Optics

If a given point on your imaged object reflects or emits light at two or more wavelengths, the focal point for one wavelength might land in a different sensor pixel than the other, creating blur and confusion about how to resolve the point.

An easy way to completely overcome chromatic aberration is to use a single monochromatic wavelength! If your target object reflects or emits a given wavelength, to which your sensor is responsive, the lens will refract the light from a given point very precisely, with no wavelength-induced shifts.

Or call us at 978-474-0044

The moral of the story

The takeaway is that aperture (F-stop) and wavelength each have a bearing on the Airy disc – the trifecta of our title – and that one wants to choose and configure the optics and lighting to optimize the Airy disc. This leads to effective application performance – a must-have. But it can also lead to cost savings, as lower-cost lenses, lighting, and sensors, optimally configured, may perform better than higher-cost components chosen without sufficient understanding of these principles.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

EO HPI+ Fixed Focal Length Lenses

HPI+ Fixed Focal Length Lenses – Courtesy Edmund Optics

Front-loading the article by unpacking the acronyms:

EO = Edmund Optics, longstanding innovators in machine vision lensing

HP = High Performance

I = Denotes “instrumentation” – Streamlined mechanical designs and fixed apertures

+ = Targeted for larger 4th gen SONY Pregius sensors: 24.5MP 1.2” IMX530 and IMX540 sensors

Fixed Focal Length Lenses… ok, no acronym to unpack there… but worth noting that fixed focal length lenses, with fewer moving parts, offer high performance at lower manufacturing cost – which translates to a compelling value proposition.

With 18 members in the EO HPI+ Fixed Focal Length Lens family, it’s possible to get the optimal fit in focal length and F-stop. These industrial lenses are built for exceptional performance in demanding factory automation (FA) and machine vision environments. The locking focus and iris rings prevent accidental adjustments.

Contact us for a quote

SONY Pregius sensors – once more with feeling

While not the only player in the sensor space, SONY remains one of the most innovative and respected manufacturers, regularly superseding its own prior releases through incremental and disruptive innovation. As we write this, there are four generations of SONY Pregius sensors. The 4th generation Pregius S captures up to 4x as much light as Sony's own highly praised 2nd generation Pregius from just a few years ago!

Surface- vs back-illuminated image sensors – courtesy SONY Semiconductor Solutions Corporation

24.5MP 1.2” SONY IMX530 and SONY IMX540

Consider the SONY IMX540 sensor for a moment. It's designed into at least 17 different camera models carried by 1stVision, across three different camera manufacturers: Allied Vision, IDS Imaging, and JAI.

First few rows of 1stVision’s camera offerings using Sony IMX540 sensor

At almost 25MP, with 2.74µm square pixels, yet only a 1.2″ diagonal, it's suited to the C-mount lens format. That's a robust mount design that's widely popular in machine vision, so adopters of cameras with this sensor and mount have a wide range of lenses from which to choose. That in turn offers a range of choices along the price/performance spectrum.
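As a sanity check on that 1.2″ figure, here's a quick calculation assuming the IMX540's published 5328 × 4608 resolution:

```python
# Quick check of the 1.2" format figure, assuming the IMX540's
# published 5328 × 4608 resolution and 2.74 µm pixel pitch.
import math

w_px, h_px, pitch_um = 5328, 4608, 2.74
diag_mm = math.hypot(w_px, h_px) * pitch_um / 1000
print(f"Sensor diagonal ≈ {diag_mm:.1f} mm")  # ≈ 19.3 mm, the 1.2" type
```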


EO HPI+ FFL Lens Performance

Machine vision pros know that lens performance is often characterized by the modulation transfer function (MTF), the magnitude of the optical transfer function. The shape and position of the curve say a lot about lens quality and performance. It's also useful when comparing lenses from different manufacturers – or even lenses from different product families by the same manufacturer.
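For intuition, a single point on an MTF curve is simply the contrast the lens transfers at one spatial frequency. A minimal sketch, with illustrative grey-level values:

```python
# Sketch of what one MTF data point means: the Michelson contrast of an
# imaged line-pair target at a given spatial frequency. Values are
# illustrative, not measurements of any particular lens.
def mtf(i_max: float, i_min: float) -> float:
    """Michelson contrast of an imaged line-pair pattern."""
    return (i_max - i_min) / (i_max + i_min)

# A lens rendering a target as 200/55 grey levels at 50 lp/mm:
print(f"MTF @ 50 lp/mm ≈ {mtf(200, 55):.2f}")  # ≈ 0.57
```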

Here’s the MTF curve for one of the Edmund Optics lenses:

25mm, f/2.8: Identical to 29-278 – Courtesy Edmund Optics

That’s just a representative example. We’ve got the MTF curves for each lens… either on our website datasheets or on request.


AT Sensors – 3D Families Comparison


Preamble on 3D options

If you aren't yet doing 3D imaging – but wonder if you should – see our TechBrief: Which Type of 3D imaging is best for my application? 3D machine vision provides contactless measurement. Learn the key approaches and their capabilities, constraints, and comparative costs.

Types of 3D imaging covered in the TechBrief linked above include:

  • Triangulation
  • Structured light
  • Time of Flight (ToF), and
  • Stereo vision

Since triangulation is fast, accurate, and affordable for a wide range of applications to which it's ideally suited, let's presume for the rest of this blog that you are pursuing that approach.

From real space to 3D point cloud model – Image courtesy of AT Sensors

AT Sensor families

AT Sensors' 3D laser profilers feature cutting-edge 3D laser sensors offering high-speed, high-precision measurement, with IP54 or IP67 protection for industrial environments. AT Sensors' 3D laser profilers are factory calibrated, with profile speeds up to 200 kHz and profile resolutions up to 4096 points/profile. All AT Sensors laser profilers support the latest GigE Vision/GenICam 3D standard.
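Those headline figures imply an enormous raw data rate, which is why the table footnote further below cautions that usable profile speed may be interface-limited. A hedged back-of-envelope sketch (the 2 bytes per Z value is our assumption, and the spec maxima generally don't co-occur):

```python
# Back-of-envelope: raw data rate if a profiler ran at maximum profile
# speed AND maximum profile resolution simultaneously. 2 bytes per Z
# value is an assumption, not an AT Sensors figure.
profiles_per_s = 200_000
points_per_profile = 4096
bytes_per_point = 2  # assumed

raw_gbps = profiles_per_s * points_per_profile * bytes_per_point * 8 / 1e9
print(f"Raw rate ≈ {raw_gbps:.1f} Gbit/s")  # ≈ 13.1 Gbit/s

# Conversely, a 1 Gbit/s link sustains at most about:
max_profiles = 1e9 / (points_per_profile * bytes_per_point * 8)
print(f"≈ {max_profiles:,.0f} profiles/s at full 4096-point resolution")
```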

AT Sensors’ families – click to go to product pages

The AT Sensors series of 3D sensors includes the high-performance CS 3D sensors, modular MCS 3D sensors, high-precision XCS 3D sensors, and ECS 3D sensors with an excellent price-performance ratio. These innovative 3D sensors suit numerous industries and applications, such as inspection, automotive, infrastructure, and food and beverage.

Instructive short video featuring XCS dual-head inspection of Pin Grid Arrays (PGA) – Courtesy AT Sensors

How to choose?

If you’d like a very high level overview of the 4 AT Sensors 3D product families, consider:

AT Sensors 3D product families – a simplistic but helpful differentiation

*Internal values. The actual usable profile speed may be limited by the 1 Gbit/s interface.

Click to contact
Give us some brief idea of your application and we will contact you to discuss 3D options.

Field of View and Flexible configuration options:

AT Sensors 3D families: differentiated FoV, sensor count, configurability, and laser options

IP protection class and suggested applications

AT Sensors 3D families: IP protection class, software coverage, and suggested applications

How to choose? (in summary)

As you can see above, or by following the product detail pages, AT Sensors offers a wide range of 3D laser triangulation scanners, each of which builds a point cloud model of your targets. With four product families, and member breakouts within each family, there's likely a model that meets your needs for speed, resolution, IP class, and configurability.

At the risk of oversimplifying the AT Sensors 3D families:

ECS Series: Economical – fewer features, but if it does the job, the low cost is attractive

CS Series: Standard workhorse – high performance with several models available

XCS Series: High precision – think eXtra with the X – optional dual head sensors

MCS Series: Modular – also high performance but custom configurable too

Let us help you find the best fit!


What are the factors in 3D laser triangulation line rates?

When designing an application, one reads the specifications to determine whether a candidate solution will satisfy the application's requirements. Let's say you want to design an application to do laser profiling of your continuously moving target(s). You know Teledyne DALSA is well regarded for their Z-Trak 3D Laser Profiler. In the specifications you may see that up to 3.3K profiles per second are achievable – but what factors could influence that rate?

What factors affect the line rate?

When choosing a pickup truck or SUV, engine displacement and horsepower matter. But so does whether you plan to tow a trailer of a certain weight, and whether the terrain is hilly or flat.

With an area scan camera, the maximum frame rate is specified for reading out all pixels at full resolution. Faster rates can be achieved by reading out only partial rows with a reduced area of interest. One must match camera and interface capabilities to application requirements.

Laser triangulation is an effective 3D technique

Here too one must read the specifications – and think about application requirements.

Figure 1: Key laser profiler terms and concepts in relation to each other – Courtesy Teledyne DALSA

What considerations affect 3D triangulation laser profilers?

Data volume: With reference to Figure 2 below, the number of pixels per row (X) and the frequency of scans in the Y dimension, together with the number of bytes per pixel, determine the data volume. Ultimately you need what you need, and may purchase a line scanner with a wider or narrower field of view, a faster or slower interface, or a more intense laser, accordingly. Required resolution has a bearing on data volumes too, and that's the key consideration we'll go into further below.

Figure 2: Each laser profile scan delivers X pixels’ Z values to build Y essentially continuous slices – Courtesy Teledyne DALSA

Resolution has a bearing on data volumes and application performance

Presumably it's clear that application performance requires a certain precision of resolution. In the Y dimension, how frequently do you need each successive data slice in order to track feature changes over time? In the Z dimension, how finely do you need to resolve changes in object height? And in the X dimension, how many points must be captured, at what resolution?

While you might be prepared to negotiate resolution tolerances as an engineering tradeoff on performance or cost or risk, generally speaking you’ve got certain resolutions you are aiming for if the technology and budget can achieve it.
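To make the Y-dimension question concrete, here's a minimal sketch with hypothetical numbers: the required profile (scan) rate is simply conveyor speed divided by the desired Y spacing between slices:

```python
# Y-dimension arithmetic: required profile rate is set by conveyor
# speed and the desired spacing between successive profiles.
# Values are hypothetical.
belt_speed_mm_s = 500.0  # conveyor speed
y_resolution_mm = 0.1    # desired spacing between profiles

required_rate_hz = belt_speed_mm_s / y_resolution_mm
print(f"Required profile rate ≈ {required_rate_hz:,.0f} Hz")  # 5,000/s
```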

We’re warming up to the key point of this article – how line rate varies according to application features. Consider Figure 3 below, noting the trapezoidal shape for 3 respective fields of view, in correlation with working distance.

Figure 3: The working distance range over which the Z dimension may vary also impacts the resolution achievable for each value in the X dimension – Courtesy Teledyne DALSA.

Trapezoid bottom width and required X dimension resolution

To drive this final point home, consider both Figure 2 and Figure 3. Figure 2, among other things, reminds us that we need to capture each successive scan from the Y dimension at precisely timed intervals. Otherwise how would we usefully track the changes in height in the Z dimension as the target moves down the conveyance?

That means that regardless of target height, each scan must always take exactly the same time as every other scan – it cannot vary. But per Figure 3, whether using a short, medium, or long working distance, X pixels corresponding to target features high up in the trapezoidal FoV yield a de facto higher resolution than the same X pixels lower down.

Suppose the top of the trapezoid is 50cm wide, and the bottom of the trapezoid is 100cm wide. For any given short span along a line in the X dimension, the real space mapped into a sensor pixel will be 2x as long for targets sampled at the bottom of the FoV.

Since the required minimum resolution and precision is an application requirement, the whole system must be configured for sufficient resolution when sampling at the bottom of the trapezoid. So one must purchase a system that covers the required resolution, and deploy it in such a way that the "worst case" sampling at the limits of the system stays within requirements. One must sample as many points as needed at the bottom of the FoV, and that impacts line scan rate.
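Here's that worst-case arithmetic as a sketch, using the 50 cm / 100 cm trapezoid from above and a hypothetical 4096-point profile:

```python
# The 50 cm / 100 cm trapezoid from the paragraphs above, with a
# hypothetical 4096-point profile: real-space length per point at the
# top vs the bottom of the measurement range.
points = 4096
for label, fov_mm in (("top of FoV", 500.0), ("bottom of FoV", 1000.0)):
    print(f"{label}: ≈ {fov_mm / points:.3f} mm/point")

# top ≈ 0.122 mm/point, bottom ≈ 0.244 mm/point. The bottom is the
# worst case, so it must still satisfy the required X resolution.
```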

Height of object matters too

Not only the position of the object in the FoV matters – but also the maximum height of any object whose Z dimension you need to detect. Let’s illustrate the point:

Figure 4. The maximum height anticipated matters too – Courtesy Teledyne DALSA

Consider the item labeled Object in Figure 4. Your application's object(s) may of course be shaped differently, but this generic object serves discussion purposes just fine. In this conceptual application, there's a continuous conveyor belt (the dark grey surface) moving at constant speed in the Y dimension. Whenever no Object is present – i.e., in the gaps between Object_N and Object_N+1 – we expect the profiler to deliver a Z value of 0 for each pixel. But when an Object is present, we anticipate positive values corresponding to the height of the object. That's the whole point of the 3D application.

Important note re. camera sensor in 2D

While the laser emits a flat line as it exits the projector, the reflection sensed inside the camera is two-dimensional. The camera sensor is a rectangular grid or array of pixels, typically in a CMOS chip, similar to that used in an area-scan camera. If one needs all the data from the sensor, the higher data volume takes longer to transfer than if one only needs a subset. If you know your application’s design well, you may be able to achieve optimized performance by avoiding the transfer of “empty” data.

Now let’s do a thought experiment where we re-imagine the Object towards two different extremes:

Extreme 1: Imagine the Object flattened down to a few sheets of paper in a tight stack, or perhaps the flap of a cardboard box.

Extreme 2: Imagine the Object is stretched up to the height of a full box, as high in the Z dimension as in the X dimension shown.

If the Object would never be higher than Extreme 1, only a few rows of the camera sensor will register non-zero values. These can be read out quickly, without bothering to read out the unused rows, yielding a relatively fast line rate.

But if the Object(s) will sometimes be at Extreme 2, many or most of the pixel rows in the camera sensor will register non-zero values, as the reflected laser line ranges up to the full height of the Object. Consequently more rows must be read out from the camera sensor in order to build the laser profile.

1. The application must be designed to perform for the tallest anticipated Object, as well as the width of the Object in the X dimension and the speed of motion in the Y dimension.

2. All other things being equal, shorter objects, utilizing less camera sensor real estate, will support faster line rates than taller objects – as the sketch below illustrates.
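Here's a hedged sketch of that scaling; the per-row readout time is a made-up figure, and real sensors add further overheads:

```python
# Why object height caps line rate: if only the rows the laser line can
# occupy are read out, scan time scales roughly with row count.
# The per-row readout time is hypothetical.
row_readout_us = 2.0  # assumed per-row readout time, microseconds

for label, rows in (("Extreme 1 (flat object)", 64),
                    ("Extreme 2 (tall object)", 1024)):
    max_rate_hz = 1e6 / (rows * row_readout_us)
    print(f"{label}: ≈ {max_rate_hz:,.0f} profiles/s")

# 64 rows → ~7,800 profiles/s; 1024 rows → ~490 profiles/s.
```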

Summary points regarding object height

By carefully planning your FoV, knowing your timing constraints, and selecting a laser profiler model within its performance range, you can optimize your outcomes.

Click to contact
Give us some brief idea of your application and we will contact you to discuss camera options.

Also consider – interface capacity; exposure time

Just as with area scan cameras, output rates may be limited by interface bandwidth, exposure duration, or data volume.

Interface limits: Whether using GigE Vision, USB3 Vision, Camera Link HS – whatever – the interface standard, camera settings, cable, and PC adapter card together determine a maximum sustainable throughput, typically expressed in gigabits per second (Gbps). Your required data volume is a function of resolution, bit depth, and frame or line rate. Be sure to understand the maximum practical throughput, choosing components accordingly.
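A minimal bandwidth sanity check, with hypothetical numbers:

```python
# Does the intended line rate fit through the link? Hypothetical values.
points_per_profile = 2048
bytes_per_point = 2
profile_rate_hz = 10_000
link_gbps = 1.0  # e.g., a 1 Gbit/s GigE link

needed_gbps = points_per_profile * bytes_per_point * 8 * profile_rate_hz / 1e9
print(f"Needed ≈ {needed_gbps:.2f} Gbit/s vs {link_gbps} Gbit/s available")

# ≈ 0.33 Gbit/s — fits; quadruple the profile rate and it no longer does.
```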

Exposure duration: Even without readout timing considerations (whether readout overlaps the start of the next exposure, or readout n completes before exposure n+1 starts), if there are, say, 100 exposures per second, one cannot receive more than 100 datasets per second – even if the camera is capable of faster rates.

That may seem obvious to experienced machine vision application designers, but it bears mentioning for anyone new to this. Every application needs to achieve good contrast between the imaging subject and its background. And once lighting and lensing are optimized, exposure time is the last variable to control. Ideally, lighting and lensing, together with the camera sensor, permit exposures brief enough that exposure time meets application objectives.

But whether manually parameterized or under auto-exposure control, one has to do the math and/or the empirical testing to ensure your achievable line rates aren't exposure-limited.
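Putting the two ceilings together, a sketch (hypothetical values; real cameras add readout and overhead terms):

```python
# Achievable line rate is bounded by both the exposure time and the
# interface, whichever is lower. Values are hypothetical.
exposure_s = 0.5e-3                 # 0.5 ms exposure
exposure_limit_hz = 1 / exposure_s  # 2,000 profiles/s

interface_limit_hz = 5_000  # e.g., from a bandwidth check like the one above

achievable_hz = min(exposure_limit_hz, interface_limit_hz)
print(f"Achievable ≈ {achievable_hz:,.0f} profiles/s (exposure-limited)")
```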

Planning for your laser profiler application

Some months ago we wrote a blog summarizing Teledyne DALSA's Z-Trak line scan product families. Besides highlighting the characteristics of three distinct product families, we provided a worksheet to help users identify key application requirements for line scanning. It's worth offering that same worksheet again below. Consider printing the page or copying it into a spreadsheet, and fill in the values for your known or evolving application.

3D application key attributes

The moral of the story…

The takeaway is that the scan rate you’ll achieve for your application is more complex to determine than just reading a spec sheet about a laser profiler’s maximum performance. Your application configuration and constraints factor into overall performance.
