Why LiDAR is the “eyes” of driverless cars

Author: Neuvition, Inc. Release time: 2021-07-31 05:19:43

For a car to drive itself, it needs the equivalent of what a walking human has: eyes to see, a brain to decide, and legs to act.

For a car, the “eyes” can be cameras, conventional radars (i.e., radio-wave radars), and LiDARs; generally, all three serve this same role.

The car’s “brain” is its processing chip. Based on the scenes captured by the “eyes”, and often combined with data sources such as maps, it performs rigorous calculations and ultimately decides how the wheels should move.

LiDAR (a combination of “light” and “radar”) is a sensor designed to quickly build point clouds. By using light to measure distance, LiDAR can collect samples very quickly, up to 1.5 million data points per second. This sampling rate allows the technology to be deployed in applications such as autonomous vehicles.
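The distance measurement itself is simple: a pulse of light travels to the target and back, and the range is half the round-trip time multiplied by the speed of light. A minimal sketch in Python (the 1.5 million points per second figure above describes sensor throughput, not this formula):

```python
# Minimal sketch: converting a laser pulse's round-trip time into a range.
C = 299_792_458.0  # speed of light in m/s

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to target = (speed of light * round-trip time) / 2."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after 200 nanoseconds corresponds to roughly 30 m.
print(range_from_time_of_flight(200e-9))  # ~29.98
```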

A LiDAR characterizes the surrounding environment through several main parameters: the number of scan lines, point density, horizontal and vertical fields of view, detection range, scanning frequency, and accuracy. In addition to position and distance information, a LiDAR also provides the intensity of each return, from which subsequent algorithms can judge the reflectivity of the scanned object before proceeding to the next processing step.

By detecting the spatial orientation and distance of target objects, describing a 3D environment model through the point cloud, providing the target’s laser reflection intensity, and giving a detailed description of the detected target’s shape, LiDAR performs well not only under good lighting conditions but also at night and in extreme conditions such as rainy weather. Overall, LiDAR sensors score well on indicators such as accuracy, resolution, sensitivity, dynamic range, field of view, active detection, low false-alarm rate, temperature adaptability, adaptability to darkness and bad weather, and signal-processing capability.

Still, it is difficult to achieve safe autonomous driving with only a single type of sensor and a single technology. This reminds us that key sensors cannot be removed from even the most basic sensing scheme, and that redundant configuration of multiple sensor types, together with information fusion, is also required.
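To make the parameters above concrete, here is a minimal Python sketch with an assumed, hypothetical data layout of one (x, y, z, intensity) tuple per return. It shows the kind of range-and-intensity pre-filtering a downstream perception algorithm might apply before judging a target’s reflectivity; the thresholds are illustrative, not values from any particular sensor:

```python
# Hypothetical point-cloud layout: rows of (x, y, z, intensity).
import math
from typing import List, Tuple

Point = Tuple[float, float, float, float]  # x, y, z in metres; intensity 0-255

def filter_points(cloud: List[Point],
                  max_range_m: float = 120.0,
                  min_intensity: float = 20.0) -> List[Point]:
    """Keep returns within the detection range and above an intensity floor."""
    kept = []
    for x, y, z, intensity in cloud:
        distance = math.sqrt(x * x + y * y + z * z)
        if distance <= max_range_m and intensity >= min_intensity:
            kept.append((x, y, z, intensity))
    return kept

cloud = [(12.0, 0.5, -1.2, 180.0),   # strong return, e.g. a road sign
         (80.0, -3.0, 0.4, 8.0),     # weak return, likely noise
         (150.0, 10.0, 2.0, 90.0)]   # beyond the configured range
print(filter_points(cloud))  # keeps only the first point
```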

Perhaps the biggest challenge LiDAR must overcome is its high equipment cost. Although the cost of the technology has fallen greatly since its early applications, it remains an important obstacle to widespread adoption.

Finally, although we treat LiDAR as a component of computer vision, point clouds are rendered purely from geometry. By contrast, the human eye recognizes other physical properties of objects besides shape, such as color and texture. Today’s LiDAR systems cannot tell the difference between a paper bag and a rock, a distinction that should be considered when the sensor interprets and tries to avoid obstacles.
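One common way to supply the missing color and texture is the sensor fusion mentioned earlier: projecting each LiDAR point into a calibrated camera image and attaching that pixel’s color to the point. The sketch below uses a simple pinhole camera model with placeholder intrinsic parameters (fx, fy, cx, cy); a real pipeline would also apply the camera-to-LiDAR extrinsic transform and lens distortion correction:

```python
# Minimal sketch of camera-LiDAR fusion via pinhole projection.
# Intrinsics are illustrative placeholders, not real calibration values.
# Assumed camera frame: z forward, x right, y down.

def project_to_pixel(x: float, y: float, z: float,
                     fx: float = 1000.0, fy: float = 1000.0,
                     cx: float = 640.0, cy: float = 360.0):
    """Return (u, v) pixel coordinates, or None if the point is behind the camera."""
    if z <= 0:
        return None
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v)

# A point 20 m ahead and 1 m to the left lands left of image centre;
# the RGB value at that pixel could then be attached to the LiDAR point.
print(project_to_pixel(-1.0, 0.2, 20.0))  # (590.0, 370.0)
```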