Collimated lighting is important with a telecentric lens

LTCLHP Collimated Light – Courtesy Opto Engineering

Machine vision practitioners, regardless of application or lens type, know that contrast is essential. Without sharp definition, features cannot be detected effectively.

When using a telecentric lens for precision optical 2-D measurements, ideally one should also use collimated lighting. Per the old adage about a chain being only as good as its weakest link, why invest in great lensing and then cut corners on lighting?


WITH collimated light, expect high edge definition:

The cost of the light typically pays for itself in the quality of outcomes. Below, see red-framed enlargements of the same region of a part, imaged by the same telecentric lens.

The left-hand image was taken with a conventional backlight – note how the light wraps around the edge, creating “confusion” and imprecision due to refracted light coming from all angles.

The right-hand image was obtained with a collimated backlight – with excellent edge definition.

Conventional backlight (left) vs. collimated backlight (right) – Courtesy Opto Engineering.

It all comes down to resolution

While telecentric imaging is a high-performance subset of machine vision in general, the same principles of resolution apply. It takes several pixels to confidently resolve any given feature – such as an edge – so any “gray areas” induced by lower-quality lighting or optics drag down system performance. See our blog and knowledge-base coverage of resolution for more details.

Collimated lighting in more detail

Above we see the results of using “diffuse” vs. “collimated” light sources, which are compelling. But what is a collimated light and how does it work so effectively?

UNLIKE a diffuse backlight, whose rays emanate towards the object at angles ranging from 0° to almost 180°, a collimated backlight sends rays with only very small deviations from perfectly parallel. Since parallel rays are also all that the telecentric lens accepts and transmits on to the camera sensor, stray rays are essentially eliminated.

The result is a high-contrast image that is easier to process with high reliability. Furthermore, shutter speeds can typically be faster, since the sensor achieves the necessary exposure more quickly, thereby shortening cycle times and increasing overall throughput.

Many lights to choose from:

The video below shows a range of light types and models, including clearly labeled direct, diffuse, and collimated lights.

Several light types – including clearly labeled collimated lights

[Optional] Telecentric concepts overview

Below please compare the diagrams that show how light rays travel from the target position on the left, through the respective lenses, and on to the sensor position on the far right.

A telecentric lens is designed to ensure that the chief rays remain parallel to the optical axis. The key benefit is that (when properly focused and aligned) the system’s magnification is invariant to the distance of the object from the lens. The lens effectively ignores light rays arriving from other angles of incidence, and thereby supports precise optical measurement systems – a branch of metrology.
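As a concrete illustration of why that matters for measurement: with fixed, distance-invariant magnification, a single scale factor converts pixels to millimeters anywhere in the working range. Here is a minimal Python sketch; the pixel pitch and magnification values are hypothetical, chosen only for illustration:

```python
# Minimal sketch: converting a pixel measurement to real-world units with a
# telecentric lens. Because magnification is fixed and distance-invariant,
# one scale factor applies across the working range. Values are hypothetical.

PIXEL_PITCH_UM = 3.45   # sensor pixel pitch in micrometers (assumed)
MAGNIFICATION = 0.5     # telecentric lens magnification (assumed)

def pixels_to_mm(pixel_count: float) -> float:
    """Convert a measured span in pixels to millimeters in object space."""
    object_pixel_mm = (PIXEL_PITCH_UM / 1000.0) / MAGNIFICATION
    return pixel_count * object_pixel_mm

# e.g. an edge-to-edge span measured as 1200 pixels:
print(f"{pixels_to_mm(1200):.3f} mm")  # -> 8.280 mm
```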

If you’d like to go deeper on telecentrics, see the following two resources:

Telecentric concepts presented as a short blog.

Alternatively, as a more comprehensive PowerPoint from our KnowledgeBase.

Video: Selecting a telecentric lens:

Call us at 978-474-0044 to tell us more about your application – and how we can guide you through telecentric lensing and lighting options.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

How to calculate line rate on a line scan camera based on conveyor speed

Unless one calculates and sets the line rate correctly, there’s a risk of blur and sub-optimal performance – and/or of purchasing a line scan camera that’s not up to the task, or one that’s overkill and costs more than needed.

Line Scan concept – Courtesy Teledyne DALSA

Optional line scan review or introduction

Skip to the next section if you know line scan concepts already. Otherwise…

Perhaps you know about area scan imaging, where a 2D image is generated with a global shutter, exposing all pixels on a 2D sensor concurrently. And you’d like to understand line scan imaging by way of comparing it to area scan. See our blog What is the difference between an Area Scan and a Line Scan Camera?

30 minute informative overview of Line Scan imaging – Courtesy Teledyne DALSA

Maybe you prefer seeing a specific high-end product overview and application suggestions, such as the Teledyne DALSA 16k TDI line scan camera with 1MHz line rate. Or a view to tens of different line scan models, varying not only by manufacturer, but by sensor size and resolution, interface, and whether monochrome or color.

Either you recall how to determine resolution requirements in terms of pixel size relative to defect size, or you’ve chased the link in this sentence for a tutorial. So we’ll keep this blog as simple as possible, dealing with line rate calculation only.

Line scan cameras – Courtesy Teledyne DALSA

Calculate the line rate

Getting the line rate right is the application of the Goldilocks principle to line scanning.

  • Line rate too slow: a blurred image due to an overly long exposure, and/or missed segments due to skipped “slices”
  • Line rate too fast: oversampling can create confusion by identifying the same feature as two distinct features

Why we need to get the line rate right

A rotary encoder is typically used to synchronize the motion of the conveyor or web with the line scan camera (and lighting, if pulsed). Naturally the system cannot be operated faster than the maximum line rate, but it may sometimes operate more slowly. This may happen during ramp-up or slow-down phases – when one may still need to obtain imaging – or by operator choice, to conserve energy or avoid stressing mechanical systems.

Naming the variables … with example values

Resolution A = what one pixel covers in object space; FOV / pixel count; e.g. with a 550mm FOV and a 2k sensor, A = 550/2000 = 0.275 mm per pixel

Transport speed T = speed of the conveyor or web in mm per second; e.g. T = 4000 mm/sec

Sampling frequency F = T / A; for the example values above, F = 4000 / 0.275 = 14545.45 Hz ≈ 14.5kHz; spelled out: Frequency = Transport_speed / Pixel_spatial_resolution (what 1 pixel equals in target space)

For the example figures used above, a line scan camera with 2k resolution and a line scan frequency of about 14.5 kHz will be sufficient.
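For those who prefer a few lines of code to a calculator, here is a minimal Python sketch of the same arithmetic, using the example values above (the variable names are ours, chosen for readability):

```python
# Minimal sketch of the line rate calculation above. Values mirror the
# worked example: 550 mm FOV, 2k-pixel line sensor, 4000 mm/sec transport.

FOV_MM = 550.0                 # field of view across the web, in mm
SENSOR_PIXELS = 2000           # pixels across the line scan sensor
TRANSPORT_MM_PER_SEC = 4000.0  # conveyor/web speed

def line_rate_hz(fov_mm: float, pixels: int, speed_mm_s: float) -> float:
    """Required line rate = transport speed / object-space pixel size."""
    resolution_mm_per_pixel = fov_mm / pixels    # A = 550/2000 = 0.275 mm
    return speed_mm_s / resolution_mm_per_pixel  # F = T / A

rate = line_rate_hz(FOV_MM, SENSOR_PIXELS, TRANSPORT_MM_PER_SEC)
print(f"Required line rate: {rate:.0f} Hz (~{rate/1000:.1f} kHz)")
# -> Required line rate: 14545 Hz (~14.5 kHz)
```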

Download spreadsheet with labeled fields and examples:

Just click here, or on the image below, to download the spreadsheet calculator. It includes clearly labeled fields, and examples, as the companion piece for this blog:

Not included here… but happy to show you how

We’ve kept this blog intentionally lean, to avoid information overload. Additional values may also be calculated, of course, such as:

Data rate in MB/sec: useful to confirm the camera interface can sustain the data rate

Frame time: the time to acquire each scanned image, which bounds the time available to process it. Important to be sure the PC and image processing software are up to the task – based on empirical experience or by conferring with your software provider.
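As a taste of those follow-on calculations, here is a hypothetical Python sketch continuing the example above; the 8-bit monochrome format and 1024-line frame height are assumptions chosen only for illustration:

```python
# Hypothetical follow-on calculations, continuing the 2k @ ~14.5 kHz example.
# Assumes 8-bit monochrome (1 byte/pixel) and frames of 1024 lines.

LINE_RATE_HZ = 14545.0
PIXELS_PER_LINE = 2000
BYTES_PER_PIXEL = 1     # 8-bit mono (assumed)
LINES_PER_FRAME = 1024  # frame height in lines (assumed)

data_rate_mb_s = LINE_RATE_HZ * PIXELS_PER_LINE * BYTES_PER_PIXEL / 1e6
frame_time_s = LINES_PER_FRAME / LINE_RATE_HZ

print(f"Data rate:  {data_rate_mb_s:.1f} MB/sec")  # ~29.1 MB/sec
print(f"Frame time: {frame_time_s*1000:.1f} ms")   # ~70.4 ms per frame
```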

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

Explained: Trifecta of lens f-stop, wavelength and Airy disc

In this blog we tackle a set of issues well-known to experts. It’s complex enough to be non-obvious, but easy enough to understand through this short tutorial. And better to learn via a no-cost article rather than through trial and error.

As an alternative to reading on, let us help you get the optics right for your application. Or read on, and then let us help you anyway. Helping machine vision customers choose optimal components is what we do – we’ve staked our reputation on it.

Aperture size and F-stop

Most understand that the F-stop on a lens specifies the size of the aperture. Follow that last link to reveal the arithmetic calculations, if you like, but the key thing to keep in mind at the practical level is that F-stop values are inversely correlated with the size of the aperture. So a large F-number like f/8 indicates a narrow aperture, while a small F-number like f/1.4 corresponds to a large aperture. Some lens designs span a wider range of F-numbers than others, but the inverse correlation always applies.

Iris controls the aperture – Courtesy Edmund Optics
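For those who want the arithmetic spelled out: the F-number N equals focal length divided by aperture diameter (N = f/D). A minimal Python sketch, assuming a hypothetical 50 mm lens, makes the inverse correlation concrete:

```python
# The f-number is focal length divided by aperture diameter: N = f / D.
# A hypothetical 50 mm focal length, illustrating the inverse correlation.

FOCAL_LENGTH_MM = 50.0  # assumed for illustration

def aperture_diameter_mm(f_number: float) -> float:
    """Aperture diameter D = f / N."""
    return FOCAL_LENGTH_MM / f_number

for n in (1.4, 2.8, 8.0):
    print(f"f/{n:g}: aperture diameter = {aperture_diameter_mm(n):.1f} mm")
# f/1.4: 35.7 mm   f/2.8: 17.9 mm   f/8: 6.2 mm  (larger N -> smaller aperture)
```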

Maximizing contrast might seem to suggest a large aperture

For machine vision it’s always important to maximize contrast. The target object can only be discerned when it is sufficiently contrasted against the background or other objects. Effective lighting and lensing is crucial, in addition to a camera sensor that’s up to the task.

“Maximizing light” (without over-saturating) is often a challenge, unless one adds artificial light. That would tend to suggest using a large aperture to let more light pass while still keeping exposure time short enough to “freeze” motion or maximize frames per second.

So for the moment, let’s hold the thought that a large aperture sounds promising. Spoiler alert: we’ll soften our position in light of what follows.

Depth of Field – DoF

While a large aperture seems attractive so far, one argument against that is depth of field (DoF). In particular, the narrowest effective aperture maximizes depth of field, while the largest aperture minimizes DoF.

Correlation of aperture size and depth of field – Courtesy Edmund Optics

Depending on the lens design, the difference in DoF between largest vs. smallest aperture may vary from as little as a few millimeters to as great as many centimeters. Your applications knowledge will inform you how much wiggle room you’ve got on DoF.

So what’s the sweet spot for aperture?

Barring further arguments to the contrary, the largest aperture that still provides sufficient depth of field is a good rule of thumb.

Where do diffraction limits and the Airy disc come into it?

Optics is a branch of physics. And just like absolute zero in the realm of temperature, Boyle’s law with respect to gases, etc., there are certain constraints and limits that apply to optics.

Whenever light passes through an aperture, diffraction occurs – the bending of waves around the edge of the aperture. The pattern from a ray of light that falls upon the sensor takes the form of a bright circular area surrounded by a series of weakening concentric rings. This is called the Airy disk. Without going into the math, the Airy disk is the smallest point to which a beam of light can be focused.

And while stopping down the aperture increases the DoF, our stated goal, it has the negative impact of increasing diffraction.

Correlation of aperture to diffraction pattern – Courtesy Edmund Optics

Diffraction limits

As the focused patterns from adjacent features – details in your application that you want to discern – near each other, they start to overlap. This creates interference, which in turn reduces contrast.

Every lens, no matter how well it is designed and manufactured, has a diffraction limit: the maximum resolving power of the lens, expressed in line pairs per millimeter. And if the Airy disk patterns generated from adjacent real-world features grow larger than the sensor’s pixels, the all-important contrast will not be achieved.
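To put rough numbers on this: the Airy disk diameter is commonly approximated as 2.44 · λ · N, and the diffraction-limited cutoff frequency as 1/(λ · N) line pairs per mm. A minimal Python sketch, assuming green 520 nm light and a common 3.45 µm pixel (both values chosen only for illustration):

```python
# Rough diffraction numbers: Airy disk diameter ~ 2.44 * wavelength * N,
# diffraction-limited cutoff ~ 1 / (wavelength * N) in line pairs per mm.
# Wavelength and pixel size below are assumptions for illustration.

WAVELENGTH_UM = 0.520  # 520 nm green light (assumed)
PIXEL_SIZE_UM = 3.45   # a common sensor pixel size (assumed)

for n in (1.4, 2.8, 8.0):
    airy_um = 2.44 * WAVELENGTH_UM * n           # Airy disk diameter, in µm
    cutoff_lp_mm = 1000.0 / (WAVELENGTH_UM * n)  # cutoff frequency, lp/mm
    flag = "  <- exceeds pixel size" if airy_um > PIXEL_SIZE_UM else ""
    print(f"f/{n:g}: Airy {airy_um:.2f} um, cutoff {cutoff_lp_mm:.0f} lp/mm{flag}")

# f/1.4: Airy 1.78 um, cutoff 1374 lp/mm
# f/2.8: Airy 3.55 um, cutoff 687 lp/mm   <- exceeds pixel size
# f/8:   Airy 10.15 um, cutoff 240 lp/mm  <- exceeds pixel size
```

At f/8 the Airy disk spans roughly three of these pixels – exactly the resolving-power cost of stopping down described above.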

And wavelength’s a factor too?

Indeed, wavelength is also a contributor to contrast and the Airy disc. As beings who see, we tend to default to thinking of light as white light or daylight – a composite segment of the spectrum spanning indigo, blue, green, yellow, orange, and red, from about 380 nm to 780 nm. Below 380 nm we find ultraviolet light (UV) in the next segment of the spectrum. Above 780 nm the next segment is infrared (IR).

Monochrome light better than white light

An additional topic relative to the Airy disc is that monochrome light is better than white light. When light passes through a lens, it refracts (bends) differently in correlation with the wavelength. This is referred to as chromatic aberration.

Transverse and longitudinal chromatic aberration – Courtesy Edmund Optics

If a given point on your imaged object reflects or emits light in two or more of the wavelengths, the focal point of one might land in a different sensor pixel than the other, creating blur and confusion about how to resolve the point.

An easy way to completely overcome chromatic aberration is to use a single monochromatic wavelength! If your target object reflects or emits a given wavelength, to which your sensor is responsive, the lens will refract the light from a given point very precisely, with no wavelength-induced shifts.

Or call us at 978-474-0044

The moral of the story

The takeaway is that the trifecta of aperture (F-stop), wavelength, and the Airy disc are intertwined: aperture and wavelength each have a bearing on the Airy disc, and one wants to choose and configure the optics and lighting to optimize it. This leads to effective application performance – a must-have. But it can also lead to cost savings, as lower-cost lenses, lighting, and sensors, optimally configured, may perform better than higher-cost components chosen without sufficient understanding of these principles.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

FPD-Link III vs GMSL2 vs CSI-2 vs USB considerations for deployment

New interface options arrive so frequently that trying to keep up can feel like drinking water from a fire hose. While data transfer rates are often the first characteristic identified for each interface, it’s important to also note distance capabilities, power requirements, EMI reduction, and cost.

Which interfaces are we talking about here?

This piece is NOT about GigE Vision or Camera Link. Those are both great interfaces suited to medium and long-haul distances, are well understood in the industry, and don’t require further explanation here.

We’re talking about embedded and short-haul interface considerations

Before we define and compare the interfaces, what’s the motivation? Declining component costs and rising performance are driving innovative vision applications such as driver assistance cameras and other embedded vision systems. There is “crossover” from formerly specialized technologies into machine vision, with new camera families and capabilities, and it’s worth understanding the options.

Alvium camera with FPD-Link or GMSL interface – Courtesy Allied Vision Technologies

How shall we get a handle on all this?

Each interface has standards committees, manufacturers, volumes of documentation, conferences, and catalogs behind it. One could go deep on any of this. But this is meant to be an introduction and overview, so we take the following approach.

  • Let’s identify each of the 4 interfaces by name, acronym, and a few characteristics
  • While some of the links jump to a specific standard’s full evolution (e.g. FPD-Link including Gen 1, 2, and 3), per the blog header it’s the current standards as of Fall 2024 that are compelling for machine vision applications: CSI-2, GMSL2, and FPD-Link III, respectively
  • Then we compare and contrast, with a focus on rules of thumb and practical guidance

If at any point you’ve had enough reading and prefer to just talk it through:

FPD-Link III – Flat Panel Display Link

A free and open standard, FPD-Link has classically been used to connect a graphics processing unit (GPU) to a laptop screen, LCD TV, or similar display.

FPD-Link automotive applications schematic – Courtesy Texas Instruments

FPD-Link has subsequently become widely adopted in the automotive industry, for backup cameras, navigation systems, and driver-assistance systems. FPD-Link exceeds the automotive standards for temperature ranges and electrical transients, making it attractive for harsh environments. That’s why it’s interesting for embedded machine vision too.

GMSL2 – Gigabit Multimedia Serial Link

GMSL – Courtesy Analog Devices

GMSL is widely used for video distribution in cars. It is an asymmetric full-duplex technology: asymmetric in that it is designed to move larger volumes of data downstream and smaller volumes upstream, while also carrying power and control data bi-directionally. Cable length can be up to 15m.

CSI-2 – Camera Serial Interface (Gen. 2)

CSI-2 registered logo – Courtesy MIPI Alliance

As the Mobile Industry Processor Interface (MIPI) standard for communications between a camera and a host processor, CSI-2 is the sweet spot among the CSI standards for camera applications. CSI-2 is attractive for its low power requirements and low electromagnetic interference (EMI). Cable length is limited to about 0.5m between camera and processor.

USB – USB3 Vision

USB3 Vision registered logo – Courtesy Association for Advancing Automation

USB3 Vision is an imaging standard for industrial cameras, built on top of USB 3.0. USB3 Vision has the same plug-and-play characteristics as GigE Vision, including power over the cable and GenICam compliance. Passive cable lengths are supported up to 5m (greater distances are possible with active cables).

Compare and contrast

In the spirit of keeping this piece a blog, in this compare-and-contrast segment we call out some highlights and rules of thumb. That, together with engaging us in dialogue, may well be enough guidance to help most users find the right interface for their application. Our business is built on adding value through our deep knowledge of machine vision cameras, interfaces, software, cables, lighting, lensing, and applications.

CABLE LENGTHS COMPARED(*):

  • CSI-2 is limited to 0.5m
  • USB3 Vision passive cables to 5m
  • FPD-Link distances may be up to 10m
  • GMSL cables may be up to 15m

(*) The above guidance is rule-of-thumb. There can be variances between manufacturers, system setup, and intended use, so check with us for an overall design consultation. There is no cost to you – our sales engineers are engineers first and foremost.

BANDWIDTH COMPARED(#):

  • USB3 to 3.6 Gb/sec
  • FPD-Link to 4.26 Gb/sec
  • GMSL to 6 Gb/sec
  • CSI-2 to 10 Gb/sec

(#) Bandwidth can also vary by manufacturer and configuration, especially for MIPI and SerDes (Serializer/Deserializer) implementations, and per chipset choices. Check with us for details before finalizing your choices.

RULES OF THUMB:

  • CSI-2 often ideal if you are building your own instrument(s) with short cable length
  • USB3 is also good for building one’s own instruments when longer distances are needed
  • FPD-Link has great EMI characteristics
  • GMSL is also a good choice for EMI performance
  • If torn between FPD-Link vs. GMSL, note that there are more devices in the GMSL universe, which might skew towards easier sourcing of other components
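To tie the rule-of-thumb numbers together, here is a minimal Python sketch of a first-pass screen based only on the cable-length and bandwidth figures quoted above. Real selection should also weigh EMI, power, cost, and ecosystem – treat this as illustration, not a decision tool:

```python
# First-pass screen using the rule-of-thumb figures from this post only.
# Real-world limits vary by manufacturer, chipset, and setup - check with us.

INTERFACES = {
    # name:         (max cable length in meters, bandwidth in Gb/sec)
    "CSI-2":        (0.5, 10.0),
    "USB3 Vision":  (5.0, 3.6),
    "FPD-Link III": (10.0, 4.26),
    "GMSL2":        (15.0, 6.0),
}

def candidates(cable_m: float, bandwidth_gbps: float) -> list[str]:
    """Return interfaces whose rule-of-thumb limits satisfy both needs."""
    return [name for name, (max_len, max_bw) in INTERFACES.items()
            if cable_m <= max_len and bandwidth_gbps <= max_bw]

# e.g. a camera 8 m from the processor needing ~4 Gb/sec:
print(candidates(cable_m=8.0, bandwidth_gbps=4.0))  # ['FPD-Link III', 'GMSL2']
```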

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.