Robotic Vision: What Is a 3D Camera?

Abstract

In the past decade, 3D sensors have become some of the most versatile and widely used sensors in robotics. In many robot applications, they are the preferred choice for tasks such as near-field object detection and avoidance, surface and object inspection, and map creation. This article focuses on three of the most common 3D sensing technologies: CMOS stereo vision (active and passive), structured light, and time-of-flight (ToF). Although laser radar (LiDAR) data is also 3D, this article does not cover LiDAR.

Stereo Sensors

These widely used sensors offer a combination of good overall performance and low cost. Stereo 3D sensors come in two basic types: passive and active.

Stereo sensors from Intel RealSense

Passive stereo 3D sensors are the most affordable among all 3D sensors. They use readily available and cost-effective off-the-shelf components, and they offer various baseline options for users to choose the most suitable perception range for their use cases. However, most passive stereo 3D sensors rely on visible light to function effectively, which means they perform poorly in low-light or no-light conditions.

Active stereo 3D sensors add an infrared (IR) pattern projector, which improves the accuracy of captured 3D data and increases reliability in low-light conditions. However, the range of action for the IR projector on these sensors is limited, making them suitable only for near and medium-range applications. With the addition of the IR pattern projector, these sensors effectively become hybrid stereo/structured light sensors.
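Both passive and active stereo recover depth by triangulating the disparity between the two camera views. A minimal sketch of the underlying pinhole-model relation; the focal length and baseline values below are illustrative, not taken from any specific sensor:

```python
# Sketch of stereo depth recovery from disparity for a rectified stereo pair.
# Focal length and baseline below are illustrative, not vendor-specific.

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth Z = f * B / d (pinhole model, rectified images)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point that shifts 64 px between the two cameras of a 50 mm baseline rig
# with a 640 px focal length lies 0.5 m away.
print(depth_from_disparity(64.0, 640.0, 0.050))  # 0.5
```

The inverse relationship between disparity and depth is also why depth resolution degrades quadratically with distance: far-away points differ by fractions of a pixel.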

Some examples of stereo 3D sensors include:

Stereolabs Zed – Offers large and small passive stereo 3D sensors with options for wide and narrow baselines. Zed Mini and Zed 2 have integrated inertial measurement units (IMU) for dynamic data streaming.

Ensenso – Industrial-grade stereo 3D sensors with robust build and Gigabit Ethernet connectivity.

Occipital Structure PRO – Active stereo 3D sensor with a custom infrared projector and built-in IMU for capturing dynamic data. Available as an embeddable module or standalone packaged sensor.

Intel RealSense D Series (D415, D435, D435i, D455) – A series of cost-effective active stereo 3D sensors. D435i includes an integrated IMU for dynamic data, while the latest D455 has an extended range suitable for medium-range robot navigation.

Structured Light Sensors

Structured light is the most common 3D perception method in robotics, thanks to the early popularity of PrimeSense sensors, which powered the first-generation Microsoft Kinect.

Structured light sensor

Structured light 3D sensors combine low cost with high-fidelity 3D data capture and perform well under most lighting conditions, with one notable exception: direct or bright sunlight. Structured light sensors use an infrared light source that is overwhelmed by the infrared component of sunlight at the same wavelengths. Like stereo vision 3D sensors, structured light sensors are offered with various baseline lengths, suiting tasks with different sensor-to-target distances.

Some examples of structured light 3D sensors include:

Orbbec Astra – A low-cost structured light 3D sensor commonly used as a replacement for the original Microsoft Kinect.

Photoneo – Robust 3D sensors for industrial applications; the PhoXi series offers various baseline lengths.

Zivid – A range of structured light 3D sensors with multiple baseline lengths. Zivid One series uses USB3 connectivity, while Zivid Two uses CAT 6A Ethernet connectivity.

Time-of-Flight (ToF) Sensors

ToF sensors operate on the same principle as LiDAR, sometimes called optical radar: they emit pulses of infrared light and record the time the light takes to return. Although they use infrared light sources like structured light sensors do, ToF sensors are less sensitive to interference from bright light and indirect sunlight because of how they send and receive their signals. A ToF camera illuminates the entire scene at once, allowing the sensor to determine the depth of every point and produce a range image in which each pixel encodes the distance to the corresponding point in the scene.
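The round-trip timing described above reduces to a one-line formula, distance = c·t/2 (the pulse travels to the target and back). A minimal sketch; the pulse time below is an illustrative value, not a real sensor reading:

```python
# Sketch of direct time-of-flight depth: distance = c * t / 2.
C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(round_trip_s: float) -> float:
    """Depth from the round-trip time of a light pulse."""
    return C * round_trip_s / 2.0

# A pulse returning after ~33.4 ns corresponds to a target ~5 m away.
print(tof_depth_m(33.356e-9))
```

The nanosecond timescales involved are why many consumer ToF cameras measure phase shift of a modulated signal rather than timing individual pulses directly.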

ToF sensors typically have broader range capabilities than structured light sensors and can operate accurately over longer distances. However, they cannot provide high-fidelity 3D images like structured light or stereo 3D sensors, making them more suitable for navigation-assisted tasks rather than surface inspection tasks.

3D vision ToF sensor

Some examples of ToF sensors include:

Vi-LiDAR – A distributor for several Chinese LiDAR manufacturers, offering a range of cost-effective LiDAR and ToF solutions with after-sales support.

PMD Pico – A simple, compact ToF 3D sensor available as an embedded module or standalone packaged sensor.

Asus Xtion 2 – Becoming harder to find; however, like the original Microsoft Kinect, PrimeSense Carmine, and original Occipital Structure Sensor, the Xtion 2 is compatible with OpenNI for depth-driven application development.

Sensor Comparison

When comparing stereo vision, structured light, and ToF sensors, the primary factors to consider in choosing a 3D sensor include:

  • Sensor Performance
  • Impact on Host System
  • Environmental Limitations

Let’s delve into each factor in detail and then examine the differences between each type of sensor.

Sensor Performance

The main parameters to explore here are perception range, resolution, and reliability.

Perception range might be the most critical parameter, as different robot tasks are defined by the range at which they occur. For example, for high-speed environment map generation in autonomous vehicles, rotating LiDAR devices are often used because their detection range can exceed 100 meters, allowing them to observe distant obstacles well before the autonomous vehicle reaches them. However, the same LiDAR devices may perform poorly in visual inspection tasks that require sub-meter-range sensors to support higher detail capture and accuracy.

3D sensor comparison

For stereo and structured light 3D sensors, range is a function of the baseline between the sensor's optical elements. As a rule of thumb, the wider the baseline, the farther the range; narrower baselines yield shorter perception ranges. Sensors with wide baselines may miss objects less than 2 meters from the sensor, while sensors with short baselines cannot see as far but may begin perceiving as close as 0.25 meters. Baseline choice is therefore a trade-off between near-range and far-range perception: a single fixed baseline cannot give you both.
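This trade-off falls directly out of the triangulation relation Z = f·B/d: the nearest and farthest resolvable depths are set by the largest and smallest disparities the matcher can measure. A sketch with assumed (hypothetical) focal length and disparity search limits:

```python
# Illustrative sketch of how baseline bounds a stereo sensor's working range.
# Focal length and disparity search window below are hypothetical values.

def working_range_m(baseline_m: float, focal_px: float,
                    max_disp_px: float, min_disp_px: float) -> tuple:
    """Nearest and farthest depths a rectified stereo pair can resolve,
    given its disparity search limits (Z = f * B / d)."""
    near = focal_px * baseline_m / max_disp_px  # largest disparity -> closest point
    far = focal_px * baseline_m / min_disp_px   # smallest disparity -> farthest point
    return near, far

for b in (0.05, 0.10, 0.20):  # 5 cm, 10 cm, 20 cm baselines
    near, far = working_range_m(b, 640.0, 128.0, 2.0)
    print(f"baseline {b:.2f} m -> range {near:.2f} m to {far:.1f} m")
```

Doubling the baseline pushes both the near and far limits out by the same factor, which is exactly the one-or-the-other choice described above.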

The range of most 3D sensors starts from approximately 0.5 to 1 meter from the sensor, extending up to 3 to 6 meters at most. Note that at the far end of the sensor’s range, resolution and accuracy may rapidly decrease. ToF 3D sensors typically have a larger range capability than structured light or stereo 3D sensors and can reliably operate at distances up to 25 meters from the sensor. However, ToF sensors often suffer from excessive noise and may offer lower resolution than other 3D technologies in a similar price range.

Impact on Host System

Here, we’ll explore key parameters such as power consumption, computational resource usage, heat dissipation, data transmission, and form factor. All robot hosts have limited computing and power budgets, and robots, drones, and other vision-enabled devices typically involve resource balancing, with multiple sensors, motors, actuators, and other components competing for the same limited system resources. Different sensors burden the system differently. For example, sensors with active infrared (IR) elements draw more power than passive stereo 3D sensors, while stereo sensors with high dynamic range (HDR) cameras may require more computational resources because they capture richer image data. It is therefore crucial to account for the other components that need system resources when making sensor decisions. Similarly, data transmission and storage are often limited on robots and drones; the sheer volume of data captured and streamed by HDR-equipped sensors can stress even powerful WiFi networks.
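To make the data-transmission point concrete, here is a rough back-of-envelope estimate of an uncompressed depth stream's bandwidth; the resolution, bit depth, and frame rate are illustrative, not tied to any particular sensor:

```python
# Back-of-envelope bandwidth of a raw depth stream; numbers are illustrative.

def stream_mbps(width: int, height: int, bytes_per_px: int, fps: int) -> float:
    """Uncompressed data rate in megabits per second."""
    return width * height * bytes_per_px * fps * 8 / 1e6

# A 1280x720 16-bit depth stream at 30 fps:
print(f"{stream_mbps(1280, 720, 2, 30):.0f} Mbps")  # 442 Mbps
```

At roughly 442 Mbps before any color or IR streams are added, a single raw depth stream can already saturate many wireless links, which is why on-host compression is so common.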

Finally, industrial design considerations might limit the available options for sensors. These considerations include the size and weight of the sensor (especially important for lightweight devices like drones), as well as the sensor’s heat dissipation capacity. The latter is particularly important for sensors with infrared lasers, as they depend on operating within a specific temperature range to maintain their operational bandwidth. Even a deviation of just a few degrees Fahrenheit from the operating temperature range can cause significant variations in sensor readings, rendering the generated data useless. Therefore, factors such as ventilation or thermal insulation for these sensors must be considered in the usage environment.

Environmental Limitations

In this section, we’ll explore two major parameters: environmental lighting and connectivity. Many sensors' sensitivity to environmental lighting was mentioned earlier, but it is worth emphasizing again: lighting issues are one of the most common causes of 3D sensor failure in robots. Robots that perform well in the lab under consistent lighting can suddenly encounter severe and intermittent failures when placed in real-world environments with varying, inconsistent lighting. Understanding all the lighting variations possible in the deployment environment, and accounting for them when selecting sensors, is therefore crucial.

Connectivity is also an important consideration. If sensor data is processed and used locally, connectivity is not an issue. However, for robots deployed in low-bandwidth environments, sensor data must be compressed or filtered on the host before transmission, or a low-bandwidth sensor type (such as ToF) must be chosen. It is also essential to test data throughput under realistic bandwidth limits, as lab environments may not fully reveal the problem. A chart mapping common robot tasks to the 3D sensor types suited to each scenario can be a handy selection aid.
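As one simple example of host-side filtering before transmission, a depth frame can be decimated to cut bandwidth; this is a minimal sketch only, and real pipelines would more likely apply a proper smoothing filter or a compression codec:

```python
# Sketch: decimating a depth frame on the host before transmission.
# A 2x2 subsample cuts bandwidth 4x at the cost of spatial resolution.
import numpy as np

def decimate(depth: np.ndarray, factor: int = 2) -> np.ndarray:
    """Keep every `factor`-th pixel in both dimensions."""
    return depth[::factor, ::factor]

frame = np.zeros((720, 1280), dtype=np.uint16)  # one raw 16-bit depth frame
small = decimate(frame, 2)
print(frame.nbytes // small.nbytes)  # bandwidth reduction factor: 4
```

The same trade-off applies to frame rate: halving fps halves bandwidth with no loss of per-frame detail, which is often the better first lever for navigation tasks.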
