Is It Time for Another Look at AV Sensor Systems?

New advances enabling software-based, real-time calibration between cameras now let stereovision – a reliable, fast, and highly accurate 3D imaging technique – reach distances beyond 1,000 m.

Brad Rosen

July 7, 2023

NODAR stereovision generates images from two cameras installed in sideview mirrors.

The rules of the road for autonomous vehicles are very much an international work in progress, as we can tell by the latest update to regulations for automatic lane-keeping systems (ALKS).

UN Regulation No. 157 was amended to allow assisted lane-keeping at higher speeds, increasing the limit from 37 mph (60 km/h) to 80 mph (130 km/h) as of January 2023. The regulation also states that the vehicle must be able to detect obstacles 150 m (492 ft.) ahead.

The standard represents “a new milestone in mobility,” according to the United Nations Economic Commission for Europe (UNECE) because it marks the first global regulation for Level 3 vehicle automation.

Allowing autonomous cars to travel that much faster clearly has significant implications for safe autonomous driving. Doubling speed doubles momentum and quadruples kinetic energy, so braking distance roughly quadruples as well – and the jump from 60 km/h to 130 km/h more than doubles speed, hence the extension of the minimum forward detection range from 46 m (150 ft.) to 150 m. And the faster a vehicle travels, the more severe any collision is likely to be.
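
As a rough back-of-envelope check – assuming constant, hard braking at about 9 m/s² and ignoring reaction time; both figures are illustrative assumptions, not values from the regulation – the scaling works out like this:

```python
# Back-of-envelope: how stopping distance scales with speed.
# Assumes constant deceleration (braking distance ~ v^2) and ignores
# reaction time; the 9.0 m/s^2 deceleration is an illustrative dry-road value.

V_OLD_KMH = 60.0   # previous ALKS speed limit
V_NEW_KMH = 130.0  # amended ALKS speed limit
DECEL = 9.0        # assumed braking deceleration, m/s^2

def braking_distance_m(speed_kmh: float, decel: float = DECEL) -> float:
    """Braking distance d = v^2 / (2a) under constant deceleration."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * v / (2.0 * decel)

ke_ratio = (V_NEW_KMH / V_OLD_KMH) ** 2
d_old = braking_distance_m(V_OLD_KMH)
d_new = braking_distance_m(V_NEW_KMH)

print(f"Kinetic energy ratio (130 vs. 60 km/h): {ke_ratio:.1f}x")   # ~4.7x
print(f"Braking distance at 60 km/h:  {d_old:.0f} m")               # ~15 m
print(f"Braking distance at 130 km/h: {d_new:.0f} m")               # ~72 m
print(f"Braking distance ratio: {d_new / d_old:.1f}x")              # ~4.7x
```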

What are the implications of this new rule for autonomous-vehicle manufacturers?

First, the new rule highlights the importance of vehicle sensor systems and their ability to “see” long distances. A self-driving vehicle requires enough time to detect an obstacle, decide whether it is a threat and take corrective action. For a vehicle traveling at 130 km/h, this means the perception system must detect an obstruction at 150 m, leaving roughly 4 seconds to make and execute a decision.
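
Put concretely, the time budget at 130 km/h over the regulation’s 150-m detection range works out as follows (a simple calculation, not a figure taken from the regulation itself):

```python
# Time budget to handle an obstacle first detected 150 m ahead at 130 km/h.
SPEED_KMH = 130.0
DETECTION_RANGE_M = 150.0

speed_ms = SPEED_KMH / 3.6                      # ~36.1 m/s
time_budget_s = DETECTION_RANGE_M / speed_ms    # ~4.2 s

print(f"Closing speed: {speed_ms:.1f} m/s")
print(f"Time to cover 150 m: {time_budget_s:.1f} s")
```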

A distance of 150 m is a long way from which to detect a relatively small yet dangerous object in the road – a tire, a fallen motorcyclist or a mattress that has dropped from a vehicle. As humans, we can see such obstacles as long as we are paying attention, but the task is harder for a sensor system: it must first determine where the road plane lies, then have enough resolution to see an object sitting above that plane and, further, enough data to estimate the object’s size.
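
To make that sequence concrete, here is a minimal sketch in Python of the core idea – fit a plane to points assumed to lie on the road, then flag anything that protrudes above it. The synthetic points, the least-squares plane fit and the height threshold are illustrative assumptions, not a description of any production perception stack:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3D points (x: forward, y: lateral, z: up), all in meters.
road = np.column_stack([
    rng.uniform(5, 180, 2000),       # forward distance
    rng.uniform(-3, 3, 2000),        # lateral position
    rng.normal(0.0, 0.02, 2000),     # road surface with ~2 cm measurement noise
])
obstacle = np.column_stack([
    rng.uniform(148, 150, 50),       # a small object roughly 150 m ahead
    rng.uniform(-0.3, 0.3, 50),
    rng.uniform(0.10, 0.14, 50),     # ~12 cm above the road surface
])
points = np.vstack([road, obstacle])

# 1) Estimate the road plane z = a*x + b*y + c by least squares.
#    (Here the road points are known; a real system would use a robust
#    fit such as RANSAC to separate road from non-road returns.)
A = np.column_stack([road[:, 0], road[:, 1], np.ones(len(road))])
(a, b, c), *_ = np.linalg.lstsq(A, road[:, 2], rcond=None)

# 2) Height of every point above the fitted plane.
height = points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)

# 3) Flag points protruding more than 8 cm above the plane as obstacle candidates.
mask = height > 0.08
flagged = points[mask]
print(f"Flagged {mask.sum()} points at ~{flagged[:, 0].mean():.0f} m, "
      f"mean height {100 * height[mask].mean():.0f} cm")
```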

This turns out to be extremely difficult for existing sensors on the market. 

The two major 3D sensor categories available today – lidar and camera-based systems – work in fundamentally different ways. How well does lidar perform at 130 km/h? How well do cameras? Car makers need to know the strengths and weaknesses of each sensor type to understand whether they can satisfy Level 3 regulations.

Steady on With Level 3 ALKS

Driving automation is categorized into levels 0 through 5, with Level 3 defined as conditional automation. At that level, vehicles are self-driving, but drivers must stay ready to take over within a specified timeframe. A Level 3 ALKS keeps an autonomous vehicle moving steadily and safely along the road while the driver is free to, say, catch up on social media or read a book. The vehicle senses the lanes on the road and the vehicles surrounding it, and automatically makes steering, acceleration and braking adjustments to keep moving safely forward.

That process illustrates why the capabilities of the different sensor systems matter. Lidar cannot detect objects in the roadway if they are too small and/or too far away. Lidar is also limited in how much power it can transmit by standards that protect human eye safety, which caps further improvements in performance. Rain, fog, dust and snow are also problematic for lidar: they scatter the laser light and cause false returns, which can prevent an ALKS from operating safely.

Stereovision, the use of two cameras to build a 3D picture of the world, has been used since the late 1800s. New advances enabling software-based, real-time calibration between cameras now let this reliable, fast, and highly accurate 3D imaging technique reach distances beyond 1,000 m (3,280 ft.). In this approach, called “untethered” stereovision, the cameras can be mounted independently of one another while the system remains impervious to the vibrations caused by a bumpy road or the engine of a car, truck or even heavy machinery.
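
One way to see why continuous, software-based calibration matters: to first order, a small relative pointing (yaw) error between the two cameras shifts matched features by roughly the focal length times that angle, and the shift adds directly to the measured disparity. The focal length and disparity tolerance below are assumed figures for illustration only:

```python
import math

# A small relative yaw error d_theta between the two cameras shifts image
# features by roughly f * d_theta pixels near the image center, and that
# shift adds directly to the measured disparity. Figures are illustrative.

F_PX = 3000.0             # assumed focal length in pixels
MAX_DISPARITY_ERR = 0.1   # assumed tolerable disparity error, in pixels

max_yaw_error_rad = MAX_DISPARITY_ERR / F_PX
print(f"Required relative yaw accuracy: {max_yaw_error_rad * 1e6:.0f} microradians "
      f"(~{math.degrees(max_yaw_error_rad) * 3600:.0f} arcseconds)")
```

Holding two independently mounted cameras to that kind of mutual alignment mechanically, on a vibrating vehicle, is impractical – which is what continuous software calibration replaces.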

The farther apart the stereovision cameras are positioned on the vehicle, the longer the sensing range of the system. For instance, stereovision cameras mounted behind the windshield with a separation (the “baseline”) of just 60 cm (24 ins.) have been demonstrated to detect an object sitting only 12 cm (5 ins.) above the roadway at a distance of 172 m (564 ft.) in daylight and 130 m (426 ft.) at night.
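
The geometry behind that trade-off is standard stereo triangulation: depth is inversely proportional to disparity, and depth error grows roughly with the square of range while shrinking as baseline and focal length grow. A sketch, with the focal length and disparity-noise values chosen purely for illustration:

```python
# Stereo triangulation: Z = f * B / d, where Z is depth (m), f is focal
# length (px), B is the baseline (m) and d is disparity (px). For a small
# disparity error dd, the depth error is roughly dZ = (Z^2 / (f * B)) * dd,
# so a longer baseline directly improves long-range accuracy.
# Focal length and disparity-noise figures are assumed for illustration.

F_PX = 3000.0             # assumed focal length in pixels
DISPARITY_NOISE_PX = 0.1  # assumed disparity uncertainty in pixels

def depth_error_m(range_m: float, baseline_m: float) -> float:
    """Approximate depth error at a given range and baseline."""
    return (range_m ** 2 / (F_PX * baseline_m)) * DISPARITY_NOISE_PX

for baseline_m in (0.2, 0.6, 1.2):   # 20 cm, 60 cm and 1.2 m baselines
    err = depth_error_m(150.0, baseline_m)
    print(f"baseline {baseline_m:.1f} m -> ~{err:.1f} m depth error at 150 m")
```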

Not only can untethered stereovision systems detect very small objects at long range, but because of their precise auto-calibration, high resolution and advanced algorithms, they can also outperform other sensors at night and in bad weather. With only low-beam headlights active, an untethered stereovision system can “see” several hundred meters ahead on a highway, ensuring safe operation at night and at highway speeds. Similarly, untethered stereo sensors receive ample information from the surrounding scene when confronted with heavy rain, snow or dense fog, and have recently demonstrated the ability to “see” better than a human in these conditions. In such bad weather, lidar systems often fail because the laser light emanating from the sensor bounces off the precipitation rather than the objects in the scene surrounding the vehicle.

The Case for Stereovision


The fact that camera-based stereovision can detect objects as small as 12 cm high from 172 m away and works well at night and in bad weather makes this technology perfectly suited to ALKS and high-speed autonomous highway operation. Camera-based stereovision sensors offer high-resolution, high-frame-rate 3D data – roughly 50 times more data per second than lidar. These sensors provide the precision, accuracy and speed needed to detect a fallen motorcycle, a lost piece of lumber or a stray tire in the middle of a road in time for the car to avoid a crash safely. Because stereovision cameras are mounted independently on the vehicle, they give car manufacturers flexibility in choosing mounting locations: the stereo cameras could be embedded in the roof, side mirrors or headlights, in addition to behind the windshield.
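
As a back-of-envelope illustration of that data-rate comparison – the camera resolution, frame rate and lidar point rate below are generic assumed figures, not measurements of any particular product:

```python
# Rough data-rate comparison between a stereo camera pair and a lidar.
# All figures below are generic assumptions, not measured specifications.

CAM_WIDTH, CAM_HEIGHT = 1920, 1080   # assumed 2 MP cameras
CAM_FPS = 30                         # assumed frame rate
LIDAR_POINTS_PER_SEC = 1_200_000     # assumed lidar point rate

# Dense stereo matching can yield up to one depth estimate per pixel per frame.
stereo_points_per_sec = CAM_WIDTH * CAM_HEIGHT * CAM_FPS

ratio = stereo_points_per_sec / LIDAR_POINTS_PER_SEC
print(f"Stereo depth estimates per second: {stereo_points_per_sec:,}")
print(f"Lidar points per second:           {LIDAR_POINTS_PER_SEC:,}")
print(f"Ratio: ~{ratio:.0f}x")     # ~50x under these assumptions
```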

As UNECE considers regulations a “strategic area for the future of mobility,” the framework for global autonomous driving standards will continue to grow, and always with safety “at the core.” Its regulators aim to harness advanced technologies such as ALKS with one ultimate goal: to cut down crashes everywhere with “a holistic, safe system approach to road safety.” That’s a goal every autonomous car maker and OEM should be glad to embrace.

Brad Rosen is chief operating officer of NODAR, developer of a camera-based 3D vision system for autonomous vehicles.
