The Dead Bug Problem
Source: Advantage Autoglass
Lidar on the Road
We often get asked how our sensor performs when rain, dust, snow, dirt, or other particles obscure the lidar window, much like particles on a vehicle windshield. These environmental obscurants can block or deflect laser beams and decrease sensor visibility.
Today, we’re breaking down optical aperture and sharing one of the ways we’ve minimized the impact of obscurants: large optical apertures.
The OS1, fresh from a shower with refractive beam obscurants (aka water droplets)
What is Optical Aperture and Why is it Important?
Camera lenses are a simplified version of our lidar lenses, so let’s consider the camera case first to build some intuition. In cameras, the aperture is the opening that controls how much light reaches the lens system and the image sensor. The aperture is typically located deep inside the lens, but opening and closing it also changes the size of the light-collecting area on the front lens surface. This is called the “front lens aperture” (or “entrance pupil” in optics design circles), and for the purposes of this article, “aperture” refers to the front lens aperture, not the mechanical aperture buried inside the lens.
The aperture size also affects the impact obscurants have on the image sensor. If an opaque obscurant (like dirt or a bug) is larger than the aperture, it will block the aperture completely and blind the camera. On the other hand, if the aperture is larger than the obscurant, light is only partially attenuated and the image shows only partial degradation.
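This geometry can be sketched in a few lines. The function below is a simplified model (our illustration, not Ouster’s math): it assumes both the aperture and the obscurant are circles, with the obscurant centered on the aperture, and compares their areas.

```python
# Fraction of a circular aperture blocked by a centered, opaque, circular
# obscurant. Simplifying assumptions: both shapes are circles, and the
# obscurant is either fully inside the aperture or fully covering it.
def blocked_fraction(obscurant_mm: float, aperture_mm: float) -> float:
    if obscurant_mm >= aperture_mm:
        return 1.0  # obscurant covers the whole aperture: that channel is blind
    return (obscurant_mm / aperture_mm) ** 2  # ratio of areas

# A 5 mm bug completely blinds a 3 mm aperture...
print(blocked_fraction(5.0, 3.0))   # 1.0
# ...but blocks only a quarter of a 10 mm aperture's light-collecting area.
print(blocked_fraction(5.0, 10.0))  # 0.25
```

The diameters here are hypothetical round numbers chosen to show the effect, not specifications of any particular sensor.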
Source: Kurt Munger
Camera lens with partial obstruction - the larger aperture allows light through and the image shows partial impact
In our lidar sensors, a lens system expands and collimates the VCSEL lasers. The optical aperture directly relates to the diameter of the laser beam as it exits the lens. In the receiver, the direction of light is reversed, but it also has an optical aperture that light passes through before being focused at the detector array, just like a camera.
Similar to cameras, lidar sensors with smaller optical apertures are less resilient to obscurants. Opaque or refractive obscurants, like a raindrop, can deflect laser light, attenuate the lidar signal, and thereby reduce range in that pixel. With a larger beam aperture, the signal is only partially attenuated rather than completely blocked, and the point cloud shows minimal impact.
Small apertures are more susceptible to obscurants blocking all light
That is all to say, a small optical aperture poses an issue for any camera or lidar operating in an environment where dust, rain, mud, snow and other obscurants are to be expected.
Field Test: The Dead Bug Problem
We conducted a series of tests to demonstrate how the large aperture of the OS1 is minimally impacted by even relatively large window obscurants, like a large insect.
Our setup consisted of an OS1-64 lidar sensor and a calibrated 10% reflective Lambertian target. To benchmark performance, we gathered intensity values (the return strength of the laser pulse) with a clean sensor window. Then, we placed “bugs” measuring 5 mm in diameter on the sensor window facing the target board and measured the new intensity values. We compared the two sets of values to determine the impact on signal intensity.
To measure intensity degradation over range, we moved the lidar sensor farther from the target and repeated this process at various distances.
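The comparison described above reduces to a simple calculation. The sketch below uses hypothetical intensity readings in arbitrary units; the function and values are our illustration of the method, not Ouster’s test data.

```python
# Percent intensity drop between a clean-window baseline and an obscured
# run, each averaged over repeated measurements at one distance.
def intensity_drop_pct(clean_samples, obscured_samples):
    clean = sum(clean_samples) / len(clean_samples)
    obscured = sum(obscured_samples) / len(obscured_samples)
    return 100.0 * (clean - obscured) / clean

# Hypothetical readings (arbitrary units), n=3 at a single distance:
print(intensity_drop_pct([100, 102, 98], [52, 54, 53]))  # 47.0
```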
The OS1 and a calibrated 10% Lambertian target board. Test conducted at multiple distances: 6m, 12m, and 18m.
We used stickers to characterize the effect of obscurant size on performance, and covered beams facing the target board
Using Ouster Studio software, we highlighted the target board in the point cloud and gathered intensity values
The Field Test Results
This experiment was done with a sample size of n=3 at each distance
Even with opaque obscurants on the OS1-64, the sensor still maintained vision, though with a 47% reduction in intensity. This makes intuitive sense: a 5 mm bug blocks less than half of the beam at the aperture, and halving the photons reduces range by only ~15%. Our large aperture allowed enough of the beam to pass around the bug to identify the target, with only a relatively minor impact on maximum range.
We’ll talk more about the photon count (intensity) to implied range relationship in a future post.
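In the meantime, one toy model consistent with the numbers above (our assumption, not a statement of the OS1’s actual detection model): if detectable signal falls off as 1/R⁴, then maximum range scales with the fourth root of the available photon budget, and halving the photons costs roughly 16% of maximum range, close to the ~15% quoted.

```python
# Toy model: max range scales as photon_fraction ** (1 / exponent).
# An exponent of 4 (signal falling as 1/R^4) reproduces the ~15% figure;
# the exponent is an assumption here, not a measured sensor property.
def range_scale(photon_fraction: float, exponent: float = 4.0) -> float:
    """Relative max range when only `photon_fraction` of photons survive."""
    return photon_fraction ** (1.0 / exponent)

print(f"{(1 - range_scale(0.5)) * 100:.0f}% range loss")  # 16% range loss
```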
The MEMS Problem
Particles can easily block the small apertures of MEMS scanners
Micro-Electro-Mechanical Systems (MEMS) lidar has a significant disadvantage: the oscillating mirror modules have apertures of only 1-4 mm in diameter. In comparison, multi-beam flash lidars have apertures anywhere from 2 to 10x greater in diameter and 4 to 100x greater in area, with correspondingly reduced sensitivity to obscurants.
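The jump from the diameter ratio to the area ratio is just the square law for circular apertures, which the quoted figures reflect:

```python
# Collecting area grows with the square of aperture diameter, so a
# 2-10x diameter advantage becomes a 4-100x area advantage.
for diameter_ratio in (2, 10):
    print(f"{diameter_ratio}x diameter -> {diameter_ratio ** 2}x area")
```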
The aperture in a MEMS system is constrained by the size of the mirror used to scan the laser beam. Intuitively, increasing the mirror size looks like a simple fix, but in practice large MEMS mirrors have fundamental difficulty oscillating quickly. The result is a trade-off among field-of-view, frame rate, and reliability that few manufacturers are willing to make.
It Only Gets Better From Here
Metrics like robustness to obscurants often get lost in performance conversations that focus on range, resolution, and field-of-view. But these metrics are important and can be a dealbreaker when picking a sensor for safety-critical applications. We considered this from day one in the design of our sensor, and we’re improving this robustness further with our OS2. The OS2 has an aperture that’s twice as large as the OS1’s, and is even more resilient to obscurants for the long-range sensing it is designed for. Stay tuned for our launch later this year!