Whitepaper: Event-based sensing paradigm

With the exception of line-scan imaging, which is compelling for certain applications, machine vision has been dominated by frame-based approaches. (Compare Area-scan vs. Line-scan.) With an area-scan camera, the entire two-dimensional sensor array of x pixels by y pixels is read out and transmitted over the digital interface to the host PC. Whether that interface is USB3, GigE, CoaXPress, Camera Link, or any other, that's a lot of image data to transport.

Download whitepaper
Event-based sensing as alternative to frame-based approach

If your application is about motion, why transmit the static pixels?

The question above is intentionally provocative, of course. One might ask, "do I have a choice?" With conventional sensors, one really doesn't: their pixels simply convert light to electrons according to the physics of CMOS, and readout circuits move the array of charges down the interface to the host PC for algorithmic interpretation. There's nothing wrong with that! Thousands of effective machine vision applications use precisely that frame-based paradigm – or the line-scan approach, arguably a close cousin of the area-scan model.

Consider the four-frame sequence to the left, in the context of a candidate golf-swing analysis application. Per the legend's post-processing markup, the blue-tinged golfer, club, and ball are undersampled: there are phases of the swing that no frame captures.

Meanwhile the non-moving tree, grass, and sky are needlessly re-sampled in each frame.

It takes an expensive high-frame-rate sensor and interface to significantly increase the sample rate. Plus storage capacity for each frame. And/or processing capacity – for automated applications – to separate the motion segments from the static segments.

With event-based sensing, introduced below, one can achieve the equivalent of 10k fps – by just transmitting the pixels whose values change.

Images courtesy Prophesee Metavision.

Event-based sensing only transmits the pixels that changed

Unlike photography for social media or commercial advertising, where real-looking images are usually the goal, for machine vision it’s all about effective (automated) applications. In motion-oriented applications, we’re just trying to automatically control the robot arm, drive the car, monitor the secure perimeter, track the intruder(s), monitor the vibration, …

We're NOT worried about color rendering, pretty images, or the static portions of the field of view (FOV). With event-based sensing, "high temporal resolution" imaging becomes possible, since one need only pay attention to the pixels whose values change.
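To make that concrete, here's a minimal Python sketch of the principle – not vendor code, and the threshold and frame values are arbitrary. It compares two frames and emits (x, y, t, polarity) events only where the log-intensity change exceeds a threshold. A real event pixel does this asynchronously in analog circuitry rather than by differencing frames; the comparison below just illustrates what "transmit only the changes" means:

```python
import numpy as np

def frame_to_events(prev, curr, t, threshold=0.2):
    """Emit (x, y, t, polarity) events where the log-intensity change
    between two frames exceeds a threshold. Illustrative only: real
    event pixels respond asynchronously, per pixel, in analog."""
    eps = 1.0  # offset keeps log() finite for zero-valued pixels
    delta = np.log(curr.astype(np.float64) + eps) - np.log(prev.astype(np.float64) + eps)
    ys, xs = np.nonzero(np.abs(delta) > threshold)
    polarity = np.sign(delta[ys, xs]).astype(np.int8)  # +1 brighter, -1 darker
    return [(int(x), int(y), t, int(p)) for x, y, p in zip(xs, ys, polarity)]

# A static scene produces zero events; only the changed pixel is reported.
prev = np.full((4, 4), 100, dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 180  # one pixel brightened, e.g. by motion
print(frame_to_events(prev, curr, t=0.001))  # -> [(2, 1, 0.001, 1)]
```

Notice the payoff: the 15 unchanged pixels contribute nothing to the output, which is the whole point of the paradigm.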

Consider the short video below. The left side shows a succession of frame-based images of a machine driven by an electric motor and belt. But that frame-based sequence is not a helpful basis for monitoring vibration, whether the goal is scheduling (or skipping) maintenance or anticipating breakdowns.

The right-hand sequence was obtained with an event-based vision sensor (EVS), and it clearly reveals components with both "medium" and "significant" vibration. Here those thresholds drive color-mapped pseudo-images to aid comprehension, but an automated application could map the flagged coordinates to actions, such as gracefully shutting down the machine or scheduling maintenance according to calculated risk.

Courtesy Prophesee Metavision
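As a rough illustration of how such an application might work – a generic sketch, not the Metavision SDK, with made-up rate thresholds – one could bin events into a per-pixel event-rate map over a time window and threshold it into severity classes:

```python
import numpy as np

def vibration_severity(events, shape, window_s, medium_hz=50.0, significant_hz=200.0):
    """Bin one time window of (x, y, t, polarity) events into a per-pixel
    event rate, then classify: 0 = quiet, 1 = medium, 2 = significant."""
    rate = np.zeros(shape, dtype=np.float64)
    for x, y, _t, _polarity in events:  # all events within the window
        rate[y, x] += 1.0
    rate /= window_s  # events per second, per pixel
    severity = np.zeros(shape, dtype=np.uint8)
    severity[rate >= medium_hz] = 1
    severity[rate >= significant_hz] = 2
    return severity  # an application maps severity == 2 coordinates to action
```

A vibrating edge toggles its pixels bright/dark at the vibration frequency, so per-pixel event rate is a workable first-order proxy for vibration intensity.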

Another example to help make it real:

Here's another short video, which brings to mind applications like autonomous vehicles and security. It's not meant to be pretty – it's meant to show that the sensor detects and transmits just the pixels that correspond to change:

Courtesy Prophesee Metavision

Event-based sensing – it really is a different paradigm

Even (especially?) if you are seasoned at line-scan or area-scan imaging, understanding event-based sensing requires a paradigm shift. Inspired by human vision and built on the foundation of neuromorphic engineering, it's a new technology – and it opens up new kinds of applications, as well as alternative ways to address existing ones.

Download whitepaper
Event-based sensing as alternative to frame-based approach

Download the whitepaper and learn more about it! Or fill out our form below – we’ll follow up. Or just call us at 978-474-0044.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

#EVS

#event-based

#neuromorphic

Guidelines selecting machine vision camera interface

Machine Vision Interfaces

Industrial machine vision camera interfaces continue to develop, allowing cameras to transfer megapixel images at extremely high frame rates. These advancements open up endless applications; however, each machine vision camera interface has its own pros and cons.

Selecting the best digital camera interface means weighing several considerations against your application. The main ones are:

  1. Bandwidth (Resolution and frame rate)
  2. Cable Length
  3. Cost
  4. Complexity

Updated whitepaper available

For a comprehensive treatment of these issues, links to various standards and products, and a helpful comparative table, download our freshly updated whitepaper Camera Interfaces Explained:

Download whitepaper
Download whitepaper Camera Interfaces Explained

Some of what’s in that whitepaper – just a teaser view

Bandwidth:  This is one of the biggest factors in selecting an interface, as it is essentially the size of the pipe through which image data flows. Required bandwidth can be calculated as (resolution) x (frame rate) x (bit depth): pixels per frame, times frames per second, times bits per pixel, gives total megabits per second (Mb/s). Large frames at high speeds require a large data pipe! If the interface can't keep up, you'll be bandwidth limited and will need to reduce the frame rate, the image size, or both.
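As a worked example – the camera figures here are hypothetical; substitute your own specs:

```python
# Required bandwidth = (resolution) x (frame rate) x (bit depth)
width, height = 2448, 2048  # a 5 MP sensor (hypothetical)
fps = 100                   # desired frame rate
bit_depth = 8               # bits per pixel (Mono8)

mbits_per_sec = width * height * fps * bit_depth / 1e6
print(f"Required: {mbits_per_sec:,.0f} Mb/s")
# ~4,011 Mb/s: well beyond GigE (~1,000 Mb/s), comfortable for 10GigE or CXP.
```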

Cable Length:  The application dictates the distance between the camera and the industrial computer. In factory automation applications, cameras can in most cases be located within a few meters of the computer, whereas a stadium sports-analytics application may require hundreds of meters.

Assorted machine vision cables – Courtesy CEI

Cost:  Budgets must also be considered. Interfaces such as USB are very low cost, whereas a CoaXPress interface requires a roughly $2K frame grabber plus more expensive cables.

Complexity:  Not all interfaces are plug and play; some require more complex configuration. If you are leaning toward an interface that uses a frame grabber and have no vision experience, you may want to engage a certified systems integrator.

Digital machine vision camera interfaces

Beyond the headline bandwidth, cable length, and cost figures, each interface has its own pros and cons, outlined as follows:

USB2.0 is an older standard for machine vision cameras, now superseded by USB3.0 / 3.1. Early on it was popular because cameras could easily plug and play with standard USB ports. It remains a great low-cost option for lower-frame-rate applications. Click here for USB2 cameras.

USB3.0 / 3.1 is the next revision of USB2.0, allowing higher data rates and plug-and-play capability; cameras ratified under the AIA's "USB3 Vision" standard plug and play with 3rd-party software following the GenICam standard. Cable lengths are limited to 5 meters, but this can be overcome with active and optical cables. Click here for USB3 cameras

GigE Vision was introduced in 2006 and is a widely accepted standard, also following GenICam. It is the most popular high-bandwidth interface, offering plug-and-play capability and long cable lengths, and Power over Ethernet (PoE) allows a single cable to carry both data and power for a simpler installation. GigE is not as fast as USB3.0, but it has the benefit of 100-meter cable lengths. Click here for GigE cameras.

5GigE (aka NBASE-T) and 10GigE are to GigE Vision what USB3 was to USB2: the next iteration of the standard, providing more bandwidth. Both follow the same GigE Vision standards, now at higher data rates. Specific NIC cards are required to handle the interface. Click here for 5 GigE cameras.

The following interfaces typically require frame grabbers:

CoaXPress (CXP) is a relatively new standard, released in 2010 and supported by GenICam, that uses coax cable to transmit data, trigger signals, and power over a single cable. It is scalable via additional coax cables, supporting 25 Gb/s (3125 MB/s) and higher with CXP-12, for extremely high bandwidth with cable lengths of 100+ meters depending on the configuration. This interface requires a frame grabber, which adds cost and some complexity to the overall setup. Click here for CoaXPress cameras

Camera Link is a well-established, dedicated machine vision standard released in 2000, allowing high-speed communication between cameras and frame grabbers. It includes provisions for data, communications, camera timing, and real-time signaling to the camera. As with CXP, a frame grabber is required, adding cost and some complexity, and cable lengths are limited to 10 meters. Longer runs can be achieved with active and fiber-optic cable solutions, which add further cost. Click here for CameraLink cameras

Camera Link HS is a dedicated machine vision standard that takes key aspects of Camera Link and expands upon them with more features. It is a scalable high-speed interface with reliable data transfer and long cable lengths, up to 300+ meters over low-cost fiber connections. As with CXP and Camera Link, a frame grabber is required, adding cost. Click here for Cameralink HS cameras


Only in the full whitepaper:

A single-table, one-page comparison of key interface attributes, including data throughput, cable lengths, and powering options.

DOWNLOAD WHITEPAPER TO VIEW COMPREHENSIVE TABLE

Helpful tips and practical advice

Emerging standards updates

For a comprehensive treatment of these issues download our freshly updated whitepaper Camera Interfaces Explained:

Download whitepaper
Download whitepaper Camera Interfaces Explained

If you prefer to be guided, just call us at 978-474-0044. Tell our sales engineer a bit about your application, and we’ll help guide you to a best-fit solution.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

Alvium Frame – better than a bare board

OEM vision system builders often favor bare board cameras. They are typically very small and compact, attractively priced, and easy to fit into small spaces. Ideal for embedding into systems of your own design.

Alvium housed camera (red) vs. Alvium Frame in metallic aluminum – Courtesy Allied Vision

But sensor alignment can be challenging – enter the Alvium Frame

With a traditional housed camera, the sensor is typically aligned within the camera to very precise tolerances. That ensures that when the lens is mounted, the field of view is optimally transferred through the optics and onto the sensor, without introducing tip/tilt/focus losses.

Conversely, a board-level camera is wonderfully compact: "just" a sensor and supporting electronics on a printed circuit board (PCB). But the integrator, often an OEM systems developer, is then left to align the PCB-mounted sensor and the corresponding lens on their own. Even the most skilled mechanical engineers and systems builders can find optical alignment challenging. The tolerances are very fine, and besides optics knowledge and experience, it often takes special instrumentation and tooling to set, test, and tune the alignment – and then to ensure the positioning remains stable against vibration once deployed in the target system.

Alvium Frame – sized like a board, aligned like a camera

Allied Vision’s innovation with the Alvium Frame was to recognize a market for something almost as small as a board level camera, yet sensor-aligned in a frame. And with helpful heat dissipation characteristics besides.

Click to contact
Give us some brief idea of your application and we will contact you to discuss camera options.

What happens if the sensor isn’t well-aligned?

Image quality can be negatively affected by getting the geometry wrong. Optics is all about mapping the real-world target through a lens onto the 2D array of pixels in the sensor. So one wants true orientation without rotation about the optical (Z) axis, proper depth along the Z axis, and no tip or tilt.

Consider the following illustration of a sensor rotation off by just 1 degree. While the camera body and the PCB being imaged were squared to each other, for this illustration the sensor was rotated by 1 degree around the Z axis. A sensor like this would not pass the alignment process in manufacturing production and quality control! Notice how the white line slopes down to the right for this misaligned sensor.

Whether your application is PCB inspection, optometry, lasers, or any precision work, optical alignment matters.

Misaligned sensor causes horizontal white line to appear sloped – Courtesy Allied Vision
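The droop is easy to quantify. For a hypothetical sensor 2048 pixels wide, a 1-degree rotation about the Z axis makes a truly horizontal edge fall by width x tan(1°) across the image:

```python
import math

width_px = 2048  # hypothetical sensor width in pixels
droop_px = width_px * math.tan(math.radians(1.0))
print(f"A level edge drops ~{droop_px:.0f} px across the frame")  # ~36 px
```

Thirty-odd pixels of slope is a serious error for any measurement or inspection task that assumes a squared-up sensor.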

Sensor tip/tilt impacts Modulation Transfer Function (MTF)

In our blogs, newsletters, Tech Briefs, and Knowledge Base, we periodically talk about the Modulation Transfer Function (MTF). It’s an important measure of lens performance. One typically seeks a lens that’s more than good enough for the task at hand, just as one’s camera, sensor, lighting, and data rates each have to be up to the job – a system is no better than its weakest link.

But a lens’ reported MTF is based on testing instances of the lens mounted on cameras with precisely aligned sensors. Any tip or tilt of the sensor away from the orthogonal plane might not greatly impact the central area of the image. But even a little tip or tilt, with the top leaning forward and the bottom backward, or the other way around, may lead to out-of-focus conditions away from the center of the sensor.
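To get a feel for the magnitudes, here's a rough geometric check with assumed numbers, taking the circle of confusion as one pixel (a common machine vision convention):

```python
import math

f_number = 2.8        # lens working f-number (assumed)
coc_um = 3.45         # circle of confusion ~ one 3.45 um pixel (assumed)
half_height_mm = 5.5  # half of an ~11 mm-tall sensor (assumed)
tilt_deg = 0.3        # sensor tilt about the X axis (assumed)

depth_of_focus_um = 2 * f_number * coc_um  # total sensor-side depth of focus
edge_shift_um = half_height_mm * 1000 * math.tan(math.radians(tilt_deg))
print(f"Edge shift ~{edge_shift_um:.0f} um vs depth of focus of ±{depth_of_focus_um / 2:.0f} um")
# ~29 um of shift against a ±10 um focus budget: the top and bottom edges go soft.
```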

So even if the lens' MTF is very good, an insufficiently aligned sensor can drag down imaging performance. That is why it's important to know the engineering tolerances of every optical component – or to trust that your provider publishes and warrants their tolerances.

Alvium camera family – more than 200 variants

First came the "original" housed Alvium camera, with more than 10 models in each of the 1GigE and 5GigE interface versions. These compact cameras are attractively priced, feature-rich, and offered with a range of sensors.

Alvium G1 with lens – Courtesy Allied Vision

Variations are available through the Alvium Flex concept, with options for Open Housing, Flex Frame, and Bare Board. Designed for space-constrained applications, the Alvium Flex cameras offer USB and MIPI CSI-2 interfaces. Still feature-rich and with many sensor options, they are available in single units, small quantities, or OEM volumes.

Alvium Flex options – Courtesy Allied Vision

In another nod to OEM customers, the USB3 version of the Alvium Flex Frame has a choice of two interface positions, a rear exit (180°) or a side exit (90°), as shown below:

Interface options helpful for tight spaces – Courtesy Allied Vision

So the Flex Frame offering fills a niche between housed and board-level cameras. It's close in size to board level, but with the alignment benefits of a housed camera.

So what are the alignment tolerances?

Let’s use the diagram and definition of terms below as a framework:

Sensor shift and rotation framework – Courtesy Allied Vision

For housed-sensor Alvium models, Allied Vision asserts the following manufacturing accuracy for sensor positioning:

Sensor positioning accuracy for housed-sensor Alvium models – Courtesy Allied Vision

For Alvium Frame models, the table below shows sensor positioning tolerances:

Sensor positioning accuracy for Alvium Frame models – Courtesy Allied Vision

Did I read that right? Frame model tolerances tighter than housed?

Exactly. Sensor shift in the x/y axes is +/- 90 µm for Alvium Frame models, a fully 60 µm tighter tolerance than the +/- 150 µm for sensors in the housed Alvium models.

Likewise in the z axis: the optical back focal length is held within 0 to -50 µm for the Alvium Frame models, while the housed Alvium models are factory calibrated to within 0 to -100 µm.
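One way to make these tolerances tangible is to convert them into pixels for a given pixel pitch (the 2.5 µm pitch below is just an assumed example):

```python
pixel_um = 2.5  # assumed pixel pitch
for model, shift_um in [("Alvium Frame", 90), ("Alvium housed", 150)]:
    print(f"{model}: +/-{shift_um} um shift = +/-{shift_um / pixel_um:.0f} px of x/y offset")
# Frame: +/-36 px; housed: +/-60 px -- either way, worth calibrating in
# software if the application needs pixel-level registration.
```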

Other features of the Alvium Frame

In addition to the optically important sensor alignment, the frame helps to dissipate heat via the metallic surface area, a benefit over a bare-board approach. The frame has 4 M2 screw holes for easy mounting. And there are two options to help you align the Alvium Frame within your own system:

Two options to align Alvium Frame units – Courtesy Allied Vision

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.

What can you see with a 67MP camera?

Remember when machine vision pioneers got stuff done with VGA sensors at 0.3MP? And the industry got really excited with 1MP sensors? Moore’s law keeps driving capacities and performance up, and relative costs down. With the Teledyne e2v Emerald 67MP sensor, cameras like the Genie Nano-10GigE-8200 open up new possibilities.

12MP sensor image – Courtesy Teledyne DALSA
67MP sensor image – Courtesy Teledyne DALSA

So what? The 67MP view above right doesn't appear massively compelling…

Well, at this view, without zooming in, we'd agree…

But at 400% zoom, below, look at the pixelation differences:

Both images below show the same target region, with the same lighting and lens, each zoomed (with GIMP) to 400%. There is so much pixelation in the 12MP image that it raises doubts about effective edge detection on either the identifying digits (33) or the metal-rimmed holes. The 67MP image, by contrast, has far less pixelation, passing a readily usable image to the host for processing. How much resolution does your application require?

12MP zoomed 400%
67MP zoomed 400%
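A back-of-envelope way to answer that question: divide the field of view by the smallest feature you must resolve, then multiply by a pixels-per-feature rule of thumb. The numbers below are placeholders – substitute your own:

```python
fov_mm = 400.0      # field of view across (placeholder)
feature_mm = 0.2    # smallest detail to resolve (placeholder)
px_per_feature = 4  # common rule of thumb for reliable edge detection

pixels_across = fov_mm / feature_mm * px_per_feature
print(f"Need ~{pixels_across:.0f} px across the FOV")
# 8,000 px across -> roughly a 64-67 MP sensor if the FOV is square.
```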

Important “aside”: Sensor format and lens quality also important

Sensor format refers to the physical size of the sensor, together with its pixel size and pixel density. Of course the lens must physically mount to the camera body (e.g. S-mount, C-mount, M42, etc.), but it must also project an image circle that appropriately covers the sensor's pixel array. The Genie Nano-10GigE-8200 uses the Teledyne e2v Emerald 67M, which packs just over 67 million square pixels, each only 2.5 µm wide and high, onto a CMOS sensor of roughly 20.5 mm x 20.5 mm – in a camera body measuring only 59 mm x 59 mm in cross-section.

Consider other good-quality cameras and sensors with pixel sizes in the 4 – 5 µm range. That leads EITHER to fewer pixels overall in a sensor array of the same size, OR to a much larger sensor to accommodate the same number of pixels. The former may limit what can be accomplished with a single camera. The latter necessarily makes the camera body larger, the lens mount larger, and the lens more expensive to manufacture.
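The geometry follows directly from pixel count and pitch. A quick sketch using the Emerald 67M's 2.5 µm pitch, and assuming an 8192 x 8192 array:

```python
import math

px = 8192        # assumed 8192 x 8192 array (~67 MP)
pitch_um = 2.5   # Emerald 67M pixel pitch

side_mm = px * pitch_um / 1000    # ~20.5 mm per side
diag_mm = side_mm * math.sqrt(2)  # ~29 mm: the minimum lens image circle
print(f"{side_mm:.1f} x {side_mm:.1f} mm active area, image circle >= {diag_mm:.1f} mm")

# The same pixel count at a 4.5 um pitch would need a ~36.9 mm-square sensor:
print(f"{px * 4.5 / 1000:.1f} mm per side at 4.5 um pitch")
```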

The lens quality, typically expressed via the Modulation Transfer Function (MTF), is also important. Not all lenses are created equal! A "good" quality lens may be enough for certain applications. For more demanding applications, one would be wasting a large-format sensor if the lens' performance falls below the sensor's capabilities.

Two different lenses were used to take the above images, both fitting the sensor size; however, the right image was taken with a lens designed for smaller pixels than the left. – Courtesy Teledyne DALSA

The high-level views of the test chart above tease at the point we’re making, but it really pops if we zoom in. Look at the difference in contrast in the two images below!

Lens nominally a fit for the sensor format and mount type, but NOT designed for 2.5 µm pixels.
Lens designed for 2.5 µm pixels.

The takeaway of this segment is that lensing matters! The machine vision field serves users tremendously well through its ecosystem of specialized sensor, camera, lens, and lighting suppliers. Even within the same supplier's lineup, there are often sensors or lenses pitched at differing performance requirements. Consider our Knowledge Base guide on Lens Quality Considerations. Or call us at 978-474-0044.


Another example:

Below, see the same concentric rings of a test chart under the same lighting. The left image was obtained with a good 12MP sensor and a good-quality lens matched to the sensor format and pixel size. The right image used the 67MP sensor in the Genie Nano-10GigE-8200, also with a well-matched lens.

12MP sensor, zoomed 1600%
67MP sensor, zoomed to same FOV

If you need a single-camera solution for a large target, with high levels of detail, there’s no way around it – one needs enough pixels. Together with a well-suited lens.

Teledyne DALSA 10GigE Genie Nano
Genie Nano 10GigE 8200 – Courtesy Teledyne DALSA

The Genie Nano 10GigE 8200, in both monochrome and color versions, is more affordable than you might think.

Once more with feeling…

Which of the following images will lead to the more effective outcomes? Choose your sensor, camera, lens, and lighting accordingly. Call us at 978-474-0044. Our sales engineers love to create solutions for our customers.

1st Vision's sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

About you: We want to hear from you!  We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics…  What would you like to hear about?… Drop a line to info@1stvision.com with what topics you’d like to know more about.