To learn what kinds of applications are well-suited for a Contact Image Sensor
To see the unique features only found in the Teledyne DALSA AxCIS series
You already know (or can catch up quickly):
Contact Image Sensors don’t actually contact the things they are imaging. But they get as close as 15 mm (0.59 inches)! So they are ideal for space-constrained applications.
And they aren’t interchangeable with line scan cameras; rather, they are a variant on line scan concepts. They share the requirement that “something is moving” and that the sensor array is a single row of pixels.
Applications for Contact Image Sensing
Courtesy Teledyne DALSA
Why Teledyne DALSA AxCIS in particular?
You may want to review the whole Teledyne DALSA AxCIS series, and the datasheet details. Go for it! Geek out. Full transparency as always.
Or maybe you’d like a little help on what we think is special about the Teledyne DALSA AxCIS series?
T2IR – Trigger to Image Reliability
This is a Teledyne DALSA proprietary innovation that helps to de-mystify what’s happening inside a complex vision system. It uses hardware and software to improve reliability. In high level terms, T2IR monitors from trigger through image capture, and on to host memory transfer, aiming to protect against data loss. And to provide insights for system tuning if needed. T2IR is compatible with many Teledyne DALSA cameras and frame grabbers – including the AxCIS series.
About you: We want to hear from you! We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics… What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about
Unless one calculates and sets the line rate correctly, there’s a risk of blur and sub-optimal performance. There’s also the risk of purchasing a line scan camera that’s not up to the task, or one that’s overkill and costs more than necessary.
Perhaps you know about area scan imaging, where a 2D image is generated with a global shutter, exposing all pixels on a 2D sensor concurrently. And you’d like to understand line scan imaging by way of comparing it to area scan. See our blog What is the difference between an Area Scan and a Line Scan Camera?
30 minute informative overview of Line Scan imaging – Courtesy Teledyne DALSA
Maybe you prefer seeing a specific high-end product overview and application suggestions, such as the Teledyne DALSA 16k TDI line scan camera with 1MHz line rate. Or a view to tens of different line scan models, varying not only by manufacturer, but by sensor size and resolution, interface, and whether monochrome or color.
Either you recall how to determine resolution requirements in terms of pixel size relative to defect size, or you’ve chased the link in this sentence for a tutorial. So we’ll keep this blog as simple as possible, dealing with line rate calculation only.
Line scan cameras – Courtesy Teledyne DALSA
Calculate the line rate
Getting the line rate right is the application of the Goldilocks principle to line scanning.
Line rate too slow…
Line rate too fast…
Blurred images due to overly long exposure, and/or missed segments due to skipped “slices”
Oversampling can create confusion by identifying the same feature as two distinct features
Why we need to get the line rate right
A rotary encoder is typically used to synchronize the motion of the conveyor or web with the line scan camera (and lighting if pulsed). Naturally the system cannot be operated faster than the maximum line speed, but it may sometimes operate more slowly. This may happen during ramp up or slow down phases – when one may still need to obtain imaging – or by operator choice to conserve energy or avoid stressing mechanical systems.
Naming the variables … with example values
Resolution A = object space correlation to sensor; FOV / pixel count; e.g. with a 550mm FOV and a 2k sensor, A = 550/2000 = 0.275 mm per pixel
Transport speed T = speed of motion in mm per sec; e.g. T = 4000 mm/sec
Sampling frequency F = T / A; for the example values above, F = 4000 / 0.275 = 14545 Hz ≈ 14.5kHz; spelled out: Frequency = Transport_speed / Pixel_spatial_resolution (what 1 pixel equals in target space)
For the example figures used above, a line scan camera with 2k resolution and a line scan frequency of about 14.5 kHz will be sufficient.
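The calculation above can be sketched in a few lines of code. The figures below are the same hypothetical example values used in this blog (550 mm FOV, 2k sensor, 4000 mm/sec transport), not figures from any particular application:

```python
def line_rate_hz(fov_mm: float, sensor_pixels: int, transport_mm_per_s: float) -> float:
    """Required line rate F = T / A, where A is the object-space size of one pixel."""
    pixel_size_mm = fov_mm / sensor_pixels        # A: mm of object space per pixel
    return transport_mm_per_s / pixel_size_mm     # F: lines (slices) per second

# Example from this blog: 550 mm FOV, 2k sensor, web moving at 4000 mm/sec
rate = line_rate_hz(550, 2000, 4000)
print(f"Required line rate: {rate / 1000:.1f} kHz")   # about 14.5 kHz
```

A camera whose maximum line rate meets or exceeds this figure, driven by the encoder, will sample each “slice” exactly once at full line speed.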
Just click here, or on the image below, to download the spreadsheet calculator. It includes clearly labeled fields, and examples, as the companion piece for this blog:
Not included here… but happy to show you how
We’ve kept this blog intentionally lean, to avoid information overload. Additional values may also be calculated, of course, such as:
Data rate in MB / sec: Useful to confirm camera interface can sustain the data rate
Frame time: The amount of time to process each scanned image. Important to be sure the PC and image processing software are up to the task – based on empirical experience or by conferring with software provider.
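The two additional values above follow directly from the line rate. A minimal sketch, using the same hypothetical example figures (2k pixels per line at roughly 14.5 kHz, assuming 8-bit monochrome, i.e. 1 byte per pixel, and a hypothetical 1000-line frame):

```python
def data_rate_mb_per_s(pixels_per_line: int, line_rate_hz: float,
                       bytes_per_pixel: int = 1) -> float:
    """Sustained data rate the camera interface must carry, in MB/sec."""
    return pixels_per_line * bytes_per_pixel * line_rate_hz / 1e6

def frame_time_s(lines_per_frame: int, line_rate_hz: float) -> float:
    """Time to acquire one scanned image of the given height, in seconds."""
    return lines_per_frame / line_rate_hz

print(f"Data rate:  {data_rate_mb_per_s(2000, 14545):.1f} MB/sec")
print(f"Frame time: {frame_time_s(1000, 14545) * 1000:.1f} ms")
```

If the data rate exceeds what the chosen interface (GigE, USB3, Camera Link, CoaXPress, …) can sustain, revisit the component choices before purchasing.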
Superior vibration resistance and mechanical design
Two of six available ML-M-HR lens models – Courtesy Moritex
Value Engineering
Every field with diverse product offerings has its own broad mix from which to choose. Some offerings are generalized; some fit a specific niche. When designing a machine vision application, and choosing components like cameras, sensors, lenses, lighting, etc., each component has to be at least slightly better than “good enough”, but need not exceed requirements. The whole field of value engineering has evolved to guide practitioners in achieving required functionality while also respecting budgetary goals.
Larger FA series design goals which ML-M-HR series share – Courtesy Moritex
Moritex ML-M-HR lens series
The Moritex ML-M-HR series is part of Moritex’s larger FA Series of lenses. The “FA” stands for Factory Automation, which suggests points including:
Robust mechanical engineering
Quality designed to deliver reliable results and stand the test of time
Priced to permit volume purchases by achieving return on investment
Of course, the lenses are not constrained to factory automation, and you may purchase as few as you need. The factory automation insight just helps to understand their design heritage and largest market.
Specifics:
Key attributes of the Moritex ML-M-HR lens series
Pixel size and resolving capacity
Designed for 4.5 µm pixels, the lenses may of course also be used with larger pixels, but aren’t suitable for smaller. So they are an ideal fit for Sony Pregius 3rd generation sensors.
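The reason pixel size matters is sampling: a lens must resolve at least the sensor’s Nyquist frequency, which follows directly from pixel pitch. A minimal sketch (the 4.5 µm figure is from this blog; the comparison values are simply other common pixel pitches, shown for illustration):

```python
def sensor_nyquist_lp_per_mm(pixel_pitch_um: float) -> float:
    """A pixel pitch p can sample at most 1/(2p) line pairs per mm."""
    return 1000.0 / (2.0 * pixel_pitch_um)

# A lens designed for 4.5 um pixels must resolve about 111 lp/mm;
# smaller pixels demand more resolving power from the lens.
for pitch in (4.5, 3.45, 2.74):
    print(f"{pitch} um pixels -> {sensor_nyquist_lp_per_mm(pitch):.0f} lp/mm at Nyquist")
```

This is why a lens designed for 4.5 µm pixels works fine with larger pixels (an easier task) but not smaller ones.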
With wide-range anti-reflective (AR) coatings, the lenses provide consistent transmission from visible (Vis) through near infrared (NIR) wavelengths, i.e. 400 ~ 1100nm.
Six member family
There are six choices in the Moritex ML-M-HR series, spanning focal lengths from 8mm to 50mm, at typical intervals. The link in the previous sentence takes you to the detailed table – and quote request buttons.
Robotics application is just an example – Courtesy Moritex
In this blog we tackle a set of issues well-known to experts. It’s complex enough to be non-obvious, but easy enough to understand through this short tutorial. And better to learn via a no-cost article rather than through trial and error.
As an alternative to reading on, let us help you get the optics right for your application. Or read on and then let us help you anyway. Helping machine vision customers choose optimal components is what we do. We’ve staked our reputation on it.
Most understand that the F-stop on a lens specifies the size of the aperture. Follow that last link to reveal the arithmetic calculations, if you like, but the key thing to keep in mind at the practical level is that F-stop values are inversely correlated with the size of the aperture. So a large F-number like f/8 indicates a narrow aperture, while a small F-number like f/1.4 corresponds to a large aperture. Some lens designs span a wider range of F-numbers than others, but the inverse correlation always applies.
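The inverse correlation is easy to see from the definition: the f-number is focal length divided by aperture diameter. A quick sketch (the 50 mm focal length is just an illustrative choice, not a specific product):

```python
def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    """f-number N = focal length / aperture diameter, so diameter = f / N."""
    return focal_length_mm / f_number

# For a hypothetical 50 mm lens: larger f-number -> smaller opening
for n in (1.4, 2.8, 8):
    print(f"f/{n}: aperture diameter {aperture_diameter_mm(50, n):.1f} mm")
```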
Iris controls the aperture – Courtesy Edmund Optics
Maximizing contrast might seem to suggest a large aperture
For machine vision it’s always important to maximize contrast. The target object can only be discerned when it is sufficiently contrasted against the background or other objects. Effective lighting and lensing is crucial, in addition to a camera sensor that’s up to the task.
“Maximizing light” (without over-saturating) is often a challenge, unless one adds artificial light. That would tend to suggest using a large aperture to let more light pass while still keeping exposure time short enough to “freeze” motion or maximize frames per second.
So for the moment, let’s hold the thought that a large aperture sounds promising. Spoiler alert: we’ll soften our position in light of what follows.
Depth of Field – DoF
While a large aperture seems attractive so far, one argument against that is depth of field (DoF). In particular, the narrowest effective aperture maximizes depth of field, while the largest aperture minimizes DoF.
Correlation of aperture size and depth of field – Courtesy Edmund Optics
Depending on the lens design, the difference in DoF between largest vs. smallest aperture may vary from as little as a few millimeters to as great as many centimeters. Your applications knowledge will inform you how much wiggle room you’ve got on DoF.
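To put rough numbers on that trade-off, a commonly used thin-lens approximation is DoF ≈ 2·N·c·(m+1)/m², where N is the f-number, c the permissible circle of confusion, and m the magnification. The sketch below is an estimate under that approximation only, with a hypothetical circle of confusion equal to one 4.5 µm pixel and a hypothetical magnification of 0.1:

```python
def dof_mm(f_number: float, coc_mm: float, magnification: float) -> float:
    """Approximate depth of field: DoF = 2 * N * c * (m + 1) / m**2."""
    return 2 * f_number * coc_mm * (magnification + 1) / magnification ** 2

# Hypothetical setup: c = 0.0045 mm (one 4.5 um pixel), magnification 0.1
for n in (1.4, 2.8, 8):
    print(f"f/{n}: DoF approx {dof_mm(n, 0.0045, 0.1):.1f} mm")
```

The numbers are approximate, but the trend is the point: stopping down from f/1.4 to f/8 multiplies the usable depth of field several times over.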
So what’s the sweet spot for aperture?
Barring further arguments to the contrary, the largest aperture that still provides sufficient depth of field is a good rule of thumb.
Where do diffraction limits and the Airy disc come into it?
Optics is a branch of physics. And just like absolute zero in the realm of temperature, Boyle’s law with respect to gases, etc., there are certain constraints and limits that apply to optics.
Whenever light passes through an aperture, diffraction occurs – the bending of waves around the edge of the aperture. The pattern from a ray of light that falls upon the sensor takes the form of a bright circular area surrounded by a series of weakening concentric rings. This is called the Airy disk. Without going into the math, the Airy disk is the smallest point to which a beam of light can be focused.
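Without going deep into the math, the diameter of that central bright area (to the first dark ring) is well approximated by d ≈ 2.44 · λ · N, where λ is the wavelength and N the f-number. A minimal sketch, using 550 nm (green, mid-visible) as an illustrative wavelength:

```python
def airy_disk_diameter_um(wavelength_nm: float, f_number: float) -> float:
    """Diameter of the Airy disk's central spot: d = 2.44 * lambda * N."""
    return 2.44 * wavelength_nm * 1e-3 * f_number

# At 550 nm: the spot grows linearly as the aperture is stopped down
for n in (2, 4, 8):
    print(f"f/{n}: Airy disk diameter {airy_disk_diameter_um(550, n):.1f} um")
```

Note that at f/8 the spot is already more than twice the size of a 4.5 µm pixel, which previews the diffraction-limit discussion below.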
And while stopping down the aperture increases the DoF, our stated goal, it has the negative impact of increasing diffraction.
Correlation of aperture to diffraction pattern – Courtesy Edmund Optics
Diffraction limits
As the focused patterns from adjacent details you want to discern draw nearer to each other, they start to overlap. This creates interference, which in turn reduces contrast.
Every lens, no matter how well it is designed and manufactured, has a diffraction limit, the maximum resolving power of the lens – expressed in line pairs per millimeter. If the Airy disk patterns generated by adjacent real-world features are larger than the sensor’s pixels, the all-important contrast will not be achieved.
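A common rule of thumb for the diffraction-limited cutoff is roughly 1/(λ·N) line pairs per mm. The sketch below pairs that with the sensor’s Nyquist limit from pixel pitch, again using illustrative values (550 nm light, 4.5 µm pixels):

```python
def diffraction_limit_lp_mm(wavelength_nm: float, f_number: float) -> float:
    """Approximate diffraction-limited resolution: 1 / (lambda * N), in lp/mm."""
    wavelength_mm = wavelength_nm * 1e-6
    return 1.0 / (wavelength_mm * f_number)

def sensor_limit_lp_mm(pixel_pitch_um: float) -> float:
    """Nyquist limit of the sensor: 1 / (2 * pixel pitch), in lp/mm."""
    return 1000.0 / (2.0 * pixel_pitch_um)

# Illustrative check: does f/8 at 550 nm out-resolve a 4.5 um pixel sensor?
lens = diffraction_limit_lp_mm(550, 8)
sensor = sensor_limit_lp_mm(4.5)
print(f"Lens limit:   {lens:.0f} lp/mm")
print(f"Sensor limit: {sensor:.0f} lp/mm")
```

When the lens limit comfortably exceeds the sensor limit, diffraction is not the bottleneck at that aperture; as you stop down further, the lens figure falls and can drop below the sensor’s.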
And wavelength’s a factor too?
Indeed wavelength is also a contributor to contrast and the Airy disk. As beings who see, we tend to default to thinking of light as white light or daylight, which is a composite segment of the spectrum, ranging from indigo through blue, green, yellow, and orange to red. That’s from about 380 nm to 780 nm. Below 380 nm we find ultraviolet light (UV) in the next segment of the spectrum. Above 780 nm the next segment is infrared (IR).
Transverse and longitudinal chromatic aberration – Courtesy Edmund Optics
If a given point on your imaged object reflects or emits light at two or more wavelengths, the focal point of one might land in a different sensor pixel than the other, creating blur and confusion on how to resolve the point.
An easy way to completely overcome chromatic aberration is to use a single monochromatic wavelength! If your target object reflects or emits a given wavelength, to which your sensor is responsive, the lens will refract the light from a given point very precisely, with no wavelength-induced shifts.
The takeaway point is that aperture (F-stop) and wavelength each have a bearing on the Airy disk, and that one wants to choose and configure the optics and lighting to optimize the Airy disk. This leads to effective applications performance – a must have. But it can also lead to cost-savings, as lower cost lenses, lighting, and sensors, optimally configured, may perform better than higher cost components chosen without sufficient understanding of these principles.