Types of 3D imaging systems – and benefits of Time of Flight (ToF)

Time Of Flight Gets Precise: Whitepaper

2D imaging is long-proven for diverse applications, from bar code reading to surface inspection and presence/absence detection.  If you can solve your application in 2D, congratulations!

But some imaging applications are only well-solved in three dimensions.  Examples include robotic pick and place, palletization, drones, security applications, and patient monitoring, to name a few.

For such applications, one must select or construct a system that creates a 3D model of the object(s).  Time of Flight (ToF) cameras from LUCID Vision Labs are one way to achieve cost-effective 3D imaging for many situations.

ToF system setup: a light source and a sensor.

ToF is not about objects flying around in space! It is about using the time of flight of light to determine object depth, based on measurable differences between the light projected onto an object and the light reflected back from that object to a sensor.  With sufficiently precise measurements across the object's features, a 3D “point cloud” of x,y,z coordinates can be generated: a digital representation of the real-world object.  The point cloud is the essential data set that enables automated image processing, decisions, and actions.
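
To make the point cloud idea concrete, below is a minimal sketch of how a per-pixel depth map (what a ToF camera fundamentally measures) can be back-projected into x,y,z points, assuming a simple pinhole camera model.  The intrinsics fx, fy, cx, and cy are illustrative placeholders, not values from any particular camera.

```python
# Minimal sketch: back-projecting a ToF depth map into an x,y,z point cloud
# using a pinhole camera model.  The intrinsics (fx, fy, cx, cy) are
# illustrative placeholders; a real camera reports its own calibration.
import numpy as np

def depth_to_point_cloud(depth_m, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Back-project a depth map (meters, shape HxW) into an Nx3 point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel columns / rows
    z = depth_m
    x = (u - cx) * z / fx   # lateral position from pixel column
    y = (v - cy) * z / fy   # vertical position from pixel row
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth return

# Example: a synthetic 480x640 depth map of a flat surface 1.5 m away
cloud = depth_to_point_cloud(np.full((480, 640), 1.5))
print(cloud.shape)  # (307200, 3)
```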

In this latest whitepaper we go into depth to learn:
1. Types of 3D imaging systems
2. Passive stereo systems
3. Structured light systems
4. Time of Flight systems

Let’s briefly put ToF in context with other 3D imaging approaches:

Passive Stereo: Systems with two cameras mounted a fixed distance apart can triangulate depth by matching features in both images and calculating the disparity between the matched points.  Alternatively, a single robot-mounted camera can take multiple images from different positions, as long as positional accuracy is sufficient to calibrate effectively.
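
As a rough illustration of the triangulation step, the sketch below uses the standard relation Z = f * B / d between disparity and depth.  The focal length (in pixels) and baseline are assumed example values, not tied to any particular stereo rig.

```python
# Minimal sketch of the passive-stereo depth relation Z = f * B / d, where
# f is the focal length in pixels, B the baseline between the two cameras,
# and d the disparity of a matched feature.  Values are illustrative only.
def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.10):
    """Depth in meters for one matched feature pair."""
    if disparity_px <= 0:
        raise ValueError("Feature could not be matched (no valid disparity).")
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(50.0))  # 2.0 m
print(depth_from_disparity(25.0))  # 4.0 m: smaller disparity means farther away
```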

Challenges limiting passive stereo approaches include:

Occlusion: when part of the object(s) cannot be seen by one of the cameras, features cannot be matched and depth cannot be calculated.

Occlusion occurs when a part of an object cannot be imaged by one of the cameras.

Few/faint features: If an object has few identifiable features, matching correspondence pairs may not be found, which likewise prevents the essential depth calculation.

Structured Light: A clever response to the few/faint features challenge can be to project structured light patterns onto the surface.  There are both active stereo systems and calibrated projector systems.

Active stereo systems are like two-camera passive stereo systems, enhanced by the (active) projection of optical patterns, such as laser speckles or grids, onto the otherwise feature-poor surfaces.

Active stereo example using a laser speckle pattern to create texture on an object.

Calibrated projector systems use a single camera, together with calibrated projection patterns, to triangulate from the vertex at the projector lens.  A laser line scanner is an example of such a system.

Besides custom systems, there are also pre-calibrated structured light systems available, which can provide low cost, highly accurate solutions.

Time of Flight (ToF): While structured light systems can provide surface height resolution better than 10μm, they are limited to short working distances. ToF can be ideal for applications such as people monitoring, obstacle avoidance, and materials handling, operating at working distances of 0.5m – 5m and beyond, with depth resolution requirements of 1 – 5mm.

ToF systems measure the time it takes for light emitted from the device to reflect off objects in the scene and return to the sensor, for each point of the image.  Some ToF systems use pulse modulation (Direct ToF).  Others use continuous wave (CW) modulation, exploiting the phase shift between emitted and reflected light waves to calculate distance.
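
Both measurement principles reduce to simple distance formulas, sketched below.  The factor of two accounts for the light's round trip, and the 30 MHz modulation frequency is an assumed example value; note that the modulation frequency also sets a CW system's unambiguous range to c / (2f), about 5 m at 30 MHz.

```python
# Minimal sketch of the two ToF distance calculations described above.
# The division by two accounts for the round trip (emitter -> object -> sensor);
# the modulation frequency is an illustrative value, not a product specification.
import math

C = 299_792_458.0  # speed of light, m/s

def distance_pulsed(round_trip_time_s):
    """Direct ToF: distance from the measured round-trip time of a light pulse."""
    return C * round_trip_time_s / 2.0

def distance_cw(phase_shift_rad, mod_freq_hz=30e6):
    """CW ToF: distance from the phase shift between the emitted and
    reflected modulation signal."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

print(distance_pulsed(10e-9))    # ~1.5 m for a 10 ns round trip
print(distance_cw(math.pi / 2))  # ~1.25 m at 30 MHz modulation
```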

The new Helios ToF 3D camera from LUCID Vision Labs uses Sony Semiconductor’s DepthSense 3D technology.  Download the whitepaper to learn about four key benefits of this camera, example applications, and its operating range and accuracy.

Download the whitepaper: Time Of Flight Gets Precise

Have questions? Tell us more about your application and our sales engineer will contact you.

1st Vision’s sales engineers have an average of 20 years of experience to assist in your camera selection.  Representing the largest portfolio of industry-leading brands in imaging components, we can help you design the optimal vision solution for your application.

What is a 3D camera and how is it used in machine vision?

A 3D profile sensor (aka camera) relies on 3D laser triangulation techniques that have been around for a long time but, until now, were expensive. A decade ago, 3D laser triangulation meant assembling separate components (lasers, lighting, optics, and algorithms) into complicated setups to capture 3D information. Today this has been simplified into a single package: the Teledyne Dalsa Z-Trak profile sensor combines the optics, laser, and camera into one unit with comprehensive free software.

Ask us for a quote on Z-Trak!

How does the Z-Trak Profile sensor capture 3D information?
As shown in the image below, a laser stripe is projected onto the object and imaged by an image sensor.  The position of the laser stripe in the image provides both lateral and depth information, giving X-axis and Z-axis data.  Moving the object in the Y-scan direction then provides the Y-axis data, yielding full X, Y & Z dimensional information.
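
The sketch below illustrates that triangulation arithmetic in simplified form, assuming the camera views the laser plane at a known fixed angle.  The pixel scale, viewing angle, and scan step are illustrative values, not the Z-Trak's actual calibration model.

```python
# Minimal sketch of single-profile laser triangulation under a simplified
# geometry: the laser sheet is vertical and the camera views it at a known
# angle.  All constants are illustrative, not a real sensor's calibration.
import math

def profile_point(col_px, row_px, scan_index,
                  px_to_mm=0.05, cam_angle_deg=30.0, scan_step_mm=0.2):
    """Convert one detected laser-stripe pixel into an (X, Y, Z) point in mm."""
    x_mm = col_px * px_to_mm  # lateral position along the stripe (X axis)
    # Vertical displacement of the stripe in the image encodes height (Z axis):
    z_mm = row_px * px_to_mm / math.tan(math.radians(cam_angle_deg))
    y_mm = scan_index * scan_step_mm  # motion in the Y-scan direction (Y axis)
    return x_mm, y_mm, z_mm

# Example: stripe detected at column 400, displaced 12 px, on the 50th scan line
print(profile_point(400, 12, 50))  # (20.0, 10.0, ~1.04) in mm
```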

What applications does 3D laser triangulation solve?
Z-Trak laser profile cameras are GigE Vision compliant, permitting them to be used with any image processing software that supports 16-bit acquisition over the GigE Vision protocol. Many applications can be solved using third-party and open-platform software development packages such as Dalsa Sapera Processing 3D, Sherlock 8 3D, Stemmer CVB, GenICam tools, and MVTec HALCON.
A partial list of applications is as follows:

Teledyne Dalsa provides free software packages consisting of Sapera Processing with run-time licenses and Sherlock 3D. Easy-to-use demo programs are also included. A few examples using the Sapera source code are as follows:

Full specifications, data sheets, and the manual for the Teledyne Dalsa Z-Trak can be found HERE, or request a quote HERE.

Click to contact
Give us a brief idea of your application and we will contact you to discuss camera options.

1st Vision’s sales engineers have over 100 years of combined experience to assist in your camera selection.  With a large portfolio of lenses, cables, NIC cards, and industrial computers, we can provide a full vision solution!

Ph:  978-474-0044  /  info@1stvision.com  / www.1stvision.com