Ensenso 3D for logistics applications

Previously we’ve written about Ensenso series members, such as the B-Series for close-up work, the C-Series for color, or the whole Ensenso family (B, C, N, S, X, and XR). Or you may have read 3D scanning overviews. 3D applications are myriad, spanning medicine, industry, robotics, and more.

Whether you are new to applying imaging to logistics, or looking to upgrade current systems, 3D machine vision continues to drive innovation and opportunities.

Materials handling in a warehouse – Courtesy IDS Imaging
Contact us

In this piece we focus on logistics. Consider:

  • Conveyor object inspection and classification – depth data enables detection, sorting, and volume measurement
  • Bin-picking and parts handling – accurate depth perception helps robots identify and locate items in bulk containers
  • (De-)palletizing automation – 3D vision supports robot arms in stacking and unstacking pallets
  • Loading / unloading trucks – 3D object localization improves automation
Some popular logistics tasks supported by 3D imaging
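As a concrete illustration of the volume-measurement task above, here is a minimal sketch of estimating an object's volume from a top-down depth map. The function name, the flat-belt assumption, and the orthographic pixel-footprint model are our own simplifications for illustration, not part of any Ensenso API.

```python
import numpy as np

def object_volume(depth_map, background_depth, pixel_area_mm2):
    """Estimate an object's volume from a top-down depth map.

    depth_map: 2D array of distances (mm) from camera to surface.
    background_depth: distance (mm) to the empty conveyor belt.
    pixel_area_mm2: real-world area covered by one pixel (mm^2).
    """
    # Height of the object above the belt at each pixel (values below
    # the belt plane are noise, so clamp them to zero).
    heights = np.clip(background_depth - depth_map, 0, None)
    # Volume = sum of per-pixel columns (height x footprint area).
    return heights.sum() * pixel_area_mm2  # in mm^3

# A toy 4x4 scene: belt at 1000 mm, a 2x2-pixel box whose top is 900 mm away.
belt = np.full((4, 4), 1000.0)
belt[1:3, 1:3] = 900.0  # box is 100 mm tall
print(object_volume(belt, 1000.0, pixel_area_mm2=25.0))  # 10000.0 (mm^3)
```

In a real system the belt plane would be calibrated rather than assumed flat, and the pixel footprint grows with distance for a perspective camera.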

Application areas

  • Detect and recognize
  • Bin picking
  • De-palletizing

Detect and recognize

The ability to accurately detect moving objects to select, sort, verify, steer, or count can enhance (or create new) applications. Ensenso C’s high-luminance projector enables high pattern contrast for single-shot images. Video courtesy of IDS Imaging.


Bin picking

Regardless of a robot’s gripping sensitivity, speed, and range of motion, 3D imaging accuracy is central to success. Ensenso C’s integrated RGB sensor can make all the difference for color-dependent applications. Video courtesy of IDS Imaging.


De-palletizing

De-palletizing might seem like a straightforward operation, but the system must detect object size, rotation, and position, even with varied and densely stacked goods. Ensenso supports all of those requirements – even from a distance. Video courtesy of IDS Imaging.


How does stereo imaging work?

Two-eyed humans and other animals, as well as two-camera stereo systems, use triangulation to achieve depth perception. If a given point on an object’s surface is offset more from one sensor than another, the collection of all such measurements can be used to create a point cloud model of the 3D scene.

Note the differential offsets for the projection beams of two cones – Courtesy IDS Imaging
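The triangulation described above reduces, for a rectified two-camera system, to the classic relation Z = f·B/d: depth is focal length times baseline divided by disparity. The following sketch assumes rectified images and known calibration values; the numbers are illustrative, not Ensenso specifications.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_mm):
    """Classic stereo triangulation: Z = f * B / d.

    disparity_px: horizontal offset (pixels) of the same scene point
        between the left and right rectified images.
    focal_length_px: focal length expressed in pixels.
    baseline_mm: distance between the two camera centers.
    """
    d = np.asarray(disparity_px, dtype=float)
    return focal_length_px * baseline_mm / d

# Example: 1300 px focal length, 100 mm baseline.
# A nearby point shifts more between the two views than a far one.
print(depth_from_disparity(65.0, 1300.0, 100.0))  # 2000.0 (mm, i.e. 2 m)
print(depth_from_disparity(13.0, 1300.0, 100.0))  # 10000.0 (mm, i.e. 10 m)
```

Applying this per pixel over a dense disparity map yields the point cloud mentioned above.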

You’ve got options – multiple stereo imaging setups

IDS Imaging Ensenso 3D cameras and camera systems are built for industrial 3D imaging, with a GigE interface for ease of setup. There are monochrome and color options, as well as hybrid/blended systems; short-distance systems that resolve down to a few millimeters; long-distance systems with working distances up to 5 meters; modular pre-housed systems; and ruggedized systems for harsh environments.

Ensenso product family – Courtesy IDS Imaging
Short distance applications – Courtesy IDS Imaging
Ensenso XR with working distance to 5m – Courtesy IDS Imaging

Want some help with your logistics systems planning?

Call us at 978-474-0044. Our sales engineers come from diverse machine vision backgrounds, and we stake our reputation on helping clients select the best components and systems.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera and components selection. With a large portfolio of cameras, lenses, cables, NIC cards and industrial computers, we can provide a full vision solution!

About you: We want to hear from you! We’ve built our brand on our know-how and like to educate the marketplace on imaging technology topics. What would you like to hear about? Drop a line to info@1stvision.com with the topics you’d like to know more about.

Drone detection event-based cameras from Prophesee

Event-based cameras outperform frame-based approaches for many applications. We provided insight into the event-based paradigm in a recent blog. Or download our whitepaper on event-based sensing.

In this piece, we focus on drone detection, a task at which event-based imaging excels. For full impact, please view the following in full-screen mode using the “four corners” button.

Find the drone – Event-based approach beats frame-based method – Courtesy scientific paper attribution

As discussed in the event-based paradigm introductions linked above, frame-based approaches struggle to track a drone moving through a visually complex environment (above left), having to parse every frame for drone shapes, orientations, occlusions, etc., even when most of the imagery is static.

Meanwhile, as seen in the event-based video (above right), the new paradigm only looks for “what’s changed”, which amounts to showing “what’s moving?”. For drone detection, as well as other perimeter intrusion applications, vibration monitoring, etc., that’s ideal.
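The “what’s changed?” principle can be mimicked in a few lines. Real event sensors respond asynchronously, per pixel, to log-intensity changes; the frame-pair approximation below, the function name, and the threshold value are our own simplifications for illustration, not Prophesee’s implementation.

```python
import numpy as np

def events_from_frames(prev_frame, frame, threshold=0.15):
    """Approximate an event camera on a frame pair: emit an event wherever
    log intensity changed by more than a contrast threshold.

    Returns (ys, xs, polarities): pixel coordinates plus +1 (brighter)
    or -1 (darker). Static pixels produce nothing at all.
    """
    eps = 1e-6  # avoid log(0)
    delta = np.log(frame + eps) - np.log(prev_frame + eps)
    ys, xs = np.nonzero(np.abs(delta) > threshold)
    polarities = np.sign(delta[ys, xs]).astype(int)
    return ys, xs, polarities

# Static background with one "drone" pixel that brightens:
prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[2, 3] = 0.9
ys, xs, pol = events_from_frames(prev, curr)
print(ys, xs, pol)  # only the changed pixel fires: y=2, x=3, polarity +1
```

Notice that the static 15 pixels generate no data at all, which is exactly why event streams stay sparse in perimeter-monitoring scenes.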

1stVision represents Prophesee’s event-based sensors and cameras, built on neuromorphic engineering principles inspired by human vision. Call us at 978-474-0044 to learn more or request a quote.

Contact us for a quote

For some applications, one only needs an event-based sensor – problem solved. For other applications, one might combine different imaging approaches. Consider the juxtaposition of three methods shown below:

Visible, polarization, and event-based approaches – Courtesy Prophesee and EOPTIC

The multimodal approach above is used in a proprietary system developed by EOPTIC, which integrates visible, polarization, and event-based sensors. Certain applications demand the best of speed, detail, and situational awareness together – for automated confidence and accuracy, for example.

Here’s another side-by-side video on drone detection and tracking:

Visible vs. (hybrid) event-based imaging – Courtesy Prophesee and NEUROBUS

The above-left video uses conventional frame-based imaging, where it’s quite hard to see the drone until it rises above the trees. But the event-based approach used by Prophesee’s customer Neurobus, together with their own neuromorphic technologies, identifies the drone amidst the trees – a level of early warning that could make all the difference.

By the numbers:

Enough with the videos – looks compelling but can you quantify Prophesee event-based sensors for me please?

Quantifying key attributes – Courtesy Prophesee

Ready to evaluate event-based vision in your application?

1stVision offers Prophesee Metavision® evaluation kits designed to help engineers and developers quickly assess event-based sensing for high-speed motion detection, drone tracking, robotics, and other dynamic vision applications. Each kit provides everything needed to get started with Prophesee’s Metavision technology, including hardware, software tools, and technical support from our experienced machine vision team. Request a quote to discuss kit availability, configuration options, and how we can help accelerate your proof-of-concept or system deployment.

Technical note: The GenX320 Starter Kit for Raspberry Pi 5 utilizes Prophesee’s GenX320 sensor, expressly designed for event-based sensing.

Kit or camera? You choose.

The kits described and linked above are ideal for those pursuing embedded designs. If you prefer a full camera – still very compact at less than 5 cm per side – with a USB3 interface, see IDS uEye event-based cameras. You’ve got options.


AVT updates bring new Alvium features

Here are some cool new features. At least they’re cool if you already use AVT Alvium cameras and want to get even more out of them. Conversely, the features may prompt you to give Alvium a look for your next application.

We call out five specific new features (or feature sets):

  • Liquid lens autofocus controls – great for logistics applications: fast focus change
  • Power saving standby mode – heat minimization for embedded designs
  • Improved recovery from over-temperature power savings mode – automated recovery
  • More GenICam features for V4L2 Video for Linux – great to have Linux options
  • Additional registers and controls – if some DRA is good, more is better

… especially for the Alvium camera families, including the USB3, MIPI CSI-2, 1 GigE, and 5 GigE models.

Alvium USB3, MIPI CSI-2, 1 GigE and 5 GigE compact and powerful cameras – Courtesy AVT – a TKH brand

Call us at 978-474-0044 to speak to one of our experienced sales engineers. Or tell us what you’d like to know more about – whether concepts, features, or pricing – and we’ll get back to you:

Click to contact
Give us some brief idea of your application and we will contact you to discuss camera options.

Liquid Lens Autofocus Controls

If you’re new to liquid lenses, see our prior blog for examples and an overview. Liquid lenses can change focus within milliseconds, far faster than mechanically focused lenses.

Below you can see the hardware configuration, which the new autofocus controls can utilize.

Courtesy AVT – a TKH Vision brand

So AVT provides the lens-controlling capability on the camera side, and you can optionally connect a liquid lens if that would help your application. Naturally, AVT Alvium cameras may also be used with conventional lenses, including S-mount, CS-mount, and C-mount, in closed, open, and bare-board housings – the range of options varies slightly by model. Please review when ordering, or confer with us, per the adage “measure twice, cut once”.

Power saving standby mode

There are at least two reasons why you might be interested in power savings. The layman’s view might be to preserve the environment or save on energy costs – but compact sensors and cameras don’t use much power, often around 1 watt. The primary motivator, for embedded systems designers, is to reduce heat during periods when no imaging is required. That in turn enhances image quality and prolongs system life.

Power saving mode enabled vs. disabled – Courtesy AVT – a TKH Vision brand

Improved recovery from over-temperature mode

When the camera goes into over-temperature mode, it automatically cuts power draw as a self-protection mechanism. In firmware V13 this required a camera reboot to resume imaging; now in V15 the camera resumes normal function without requiring a reboot.

Improved recovery from over temperature mode – Courtesy AVT – a TKH Vision brand

(More) GenICam features for V4L2 Video for Linux

If you favor Video for Linux (V4L2) drivers and APIs for your development and production controls, see below the GenICam features now available to you.

Courtesy AVT – a TKH Vision brand

Additional Registers and Controls

In addition to all the registers previously available on Alvium’s MIPI CSI-2 cameras, below are a number of new registers, whose names suggest their meaning and use. Each feature can be controlled through the GenICam APIs, V4L2 Video for Linux, or Direct Register Access (DRA) memory addressing – whichever method you prefer.

New registers available for DRA – Courtesy AVT – a TKH Vision brand

Manuals for all AVT cameras and SDKs are downloadable, of course. Drill in on any feature or attribute of interest.


JAI prism-based 5 GigE cameras for superior color

If monochrome sensors and methods aren’t enough for your application, a machine vision color camera may be needed. And if color is needed, is “good enough” from a single sensor with a Bayer filter all you need? Or do you need the precision of a prism-based 3-sensor camera, with one sensor for each of R, G, and B? See our whitepaper Considerations for Color Machine Vision Cameras.

Prism-based 3-sensor imaging vs. interpolated Bayer mosaic sensors

Bryce Bayer, the engineer at Eastman Kodak whose name is associated with his Bayer filter innovation, created a very compact and efficient way to layer a color filter atop a monochrome sensor. The vast majority of today’s color cameras – in both machine vision and consumer imaging – utilize precisely such a color filter mechanism to interpolate color. When the resolution is sufficiently fine, the rendered image is typically good enough for many applications.

But “good enough” for some isn’t the same as good enough for all

Interpolation is a form of estimation – a Bayer filter’s design presumes that the red, green, and blue values between the “true” measurements of those colors are the averages of the values at the measured points. So the in-between values are computed, and may or may not correspond to the true color present at the source.
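The averaging just described is easy to see in code. Here is a minimal sketch of bilinear demosaicing for one missing value: estimating green at a red pixel of an RGGB mosaic from its four green neighbors. The function name and the toy mosaic values are ours for illustration; production demosaicing algorithms are considerably more sophisticated.

```python
import numpy as np

def green_at_red(bayer, y, x):
    """Estimate the missing green value at a red pixel (RGGB mosaic)
    by averaging its four green neighbors -- the simplest form of
    bilinear demosaicing.
    """
    return (bayer[y - 1, x] + bayer[y + 1, x] +
            bayer[y, x - 1] + bayer[y, x + 1]) / 4.0

# Toy RGGB mosaic: green sites alternate around each red site.
bayer = np.array([
    [120,  60, 118,  62],   # R  G  R  G
    [ 58,  30,  57,  31],   # G  B  G  B
    [119,  61, 121,  60],   # R  G  R  G
    [ 59,  29,  58,  30],   # G  B  G  B
], dtype=float)

# The red pixel at (2, 2) has no green measurement of its own;
# its green value is estimated from the four adjacent green pixels.
print(green_at_red(bayer, 2, 2))  # (57 + 58 + 61 + 60) / 4 = 59.0
```

The estimate is only correct when color varies smoothly; at sharp color edges the average lands between the true values, which is exactly the accuracy loss a 3-sensor prism camera avoids by measuring all three channels at every pixel.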

For certain machine vision, industrial imaging, and medical applications, maximum color accuracy is essential.

What’s best for my application?

Read on, for more detail. Or give us a call at 978-474-0044. Or tell us about your requirements and we’ll contact you.

Contact us

For certain applications, color accuracy and fidelity is essential

Applications note provides further information – Courtesy JAI
See corresponding applications note for details – Courtesy JAI
See applications note for more – Courtesy JAI
All four images and associated texts above – Courtesy JAI

JAI adds 3 new 5.1 Mpix cameras to its Apex Series

5.1 Megapixel prism-based 3 sensor camera – Courtesy JAI

Previously JAI’s Apex prism-based camera series included 1.6 Mpix and 3.2 Mpix models. Three new models join the series, at 5.1 Megapixels each. The new members all use the same Sony IMX548, one of the Pregius S sensors.

If the new 5.1 Mpix models all use the same sensors, why are there three models? Because there are three interface options, depending on your need for speed.

  • 5 GigE model: 32 fps
  • CoaXPress model: 75 fps
  • Camera Link model: 55 fps
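A back-of-envelope data-rate calculation shows why the interface matters. Assuming 8 bits per color channel and no compression (our assumption for illustration, not a JAI specification), three 5.1 Mpix sensors generate the following raw rates:

```python
# Back-of-envelope data-rate check: why the same 5.1 Mpix, 3-sensor
# camera tops out at different frame rates on different interfaces.
# Assumes 8 bits per color channel and no compression (an assumption
# for illustration, not a JAI spec).

MPIX = 5.1e6          # pixels per sensor
CHANNELS = 3          # one sensor each for R, G, B
BITS_PER_CHANNEL = 8

def gbits_per_second(fps):
    return MPIX * CHANNELS * BITS_PER_CHANNEL * fps / 1e9

for name, fps in [("5 GigE", 32), ("Camera Link", 55), ("CoaXPress", 75)]:
    print(f"{name:12s} {fps:3d} fps -> {gbits_per_second(fps):5.1f} Gbit/s")
```

At 32 fps the raw stream is roughly 3.9 Gbit/s, which fits within a 5 Gbit/s Ethernet link; the higher frame rates need the greater bandwidth of Camera Link and CoaXPress.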

Numerous features and benefits

There are many features designed into the Apex series cameras, including binning, single and multi-region ROI, chromatic aberration correction, and automatic level control. Download a manual for details. Or call us at 978-474-0044.

Feature highlight: Per-channel exposure control

Since the rationale for a 3-sensor prism camera is color performance, the per-channel exposure control feature helps to achieve that goal. By adjusting the exposure time for each channel separately, the camera increases signal without amplifying noise.

Per-channel exposure control – Courtesy JAI
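The exposure-versus-gain distinction above can be made concrete with a simplified shot-noise model (our own toy model for illustration, not JAI's sensor characterization): longer exposure collects more photons and improves signal-to-noise ratio, while digital gain scales signal and noise together and leaves SNR unchanged.

```python
import numpy as np

def snr_db(signal_e, read_noise_e=5.0, gain=1.0):
    """Simplified sensor model: shot noise grows as sqrt(signal),
    read noise is fixed, and digital gain scales both signal and
    noise -- so gain cannot improve SNR, but more exposure can.
    """
    noise_e = np.sqrt(signal_e + read_noise_e**2)  # shot + read noise
    return 20 * np.log10((gain * signal_e) / (gain * noise_e))

# A weak blue channel collecting 1000 electrons:
base = snr_db(1000)
# Option A: double that channel's exposure (collect 2x the photons).
more_exposure = snr_db(2000)
# Option B: apply 2x digital gain instead (signal and noise both double).
more_gain = snr_db(1000, gain=2.0)

print(f"base:        {base:.1f} dB")
print(f"2x exposure: {more_exposure:.1f} dB")  # improves SNR
print(f"2x gain:     {more_gain:.1f} dB")      # unchanged from base
```

This is why balancing the channels via exposure time, rather than per-channel gain, is the better route to accurate color in low signal conditions.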

Call us at 978-474-0044 to learn more about JAI Apex cameras. Tell us about your application goals and requirements, and we’ll help you determine the best camera, lens, lighting, filters, and software. It’s what we do.
