Imaging In Autonomous Vehicles

The push to introduce autonomous vehicles is already well underway. As a result, driving is slowly changing from a human-directed activity involving a series of subjective judgments based on sensory perceptions into a technology-directed activity involving a series of algorithm-driven calculations based on sensor measurements.

Over time, this transformation has the potential to effect vast improvements in public safety and well-being. Self-driving vehicles will be capable of helping passengers park more easily and avoid traffic, while also reducing the likelihood of accidents and crashes.

Even so, autonomous vehicles will only be as good as the technology with which their makers equip them, and the technologies involved in autonomous driving are still a work in progress. Fully autonomous vehicles that operate safely in complex and unpredictable environments will not arrive in one giant leap but in stages. That is, manufacturers are working incrementally, introducing ever higher levels of automation that require ever less human input.[i]

They have already encountered bumps in the road. In March 2018, an autonomous vehicle owned by Uber, the ride-sharing and technology company, struck and killed a pedestrian in Tempe, Arizona. Subsequent investigations found that error on the part of the vehicle's human safety driver had been a contributing factor in the crash.[ii] Even so, the event has spurred debate about how best to improve self-driving technology.

This debate centers largely on the imaging systems used in autonomous vehicles. That focus is logical, given that safe driving necessarily involves imaging – i.e., the process of creating visual representations of one's surroundings. But it also raises difficulties. Developing imaging systems for robots and other machines has proved unexpectedly challenging, and scientists and engineers have yet to create imaging technology good enough to replace human visual processing. Algorithms are simply not as good as the brain at differentiating objects from backgrounds, distinguishing objects in motion, identifying partly concealed or occluded objects, recognizing objects that have been damaged or otherwise altered, and understanding how objects move and behave in three-dimensional space.[iii]

This essay will examine some of the ways that technology developers and suppliers have tried to optimize imaging systems.

Visual spectrum

Given that vehicle manufacturers are moving in the direction of replacing human operators, the obvious starting point for developers of imaging systems is the visual spectrum. In other words, autonomous vehicles need systems that can perceive and measure the same things that humans see, using the same narrow range of electromagnetic frequencies.

The easiest way to do this is to use existing technology – namely, cameras. Cameras already monitor the environment in the human visual spectrum, and they are already available in digital forms that can share data easily with other computerized systems. They are also easy to pair with image sensor processors (ISPs) that can recognize key environmental features such as traffic signals on the street or brake lights on other vehicles.[iv-i][iv-ii]
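To make that idea concrete, here is a deliberately simplified sketch of one color-based cue a vision system might use to flag candidate brake lights in a single camera frame. It is a toy illustration, not a description of how commercial ISPs actually work; the OpenCV calls are standard, but the file name, HSV thresholds, and area cutoff are assumptions invented for the example.

import cv2
import numpy as np

# Load one frame from a forward-facing camera (file name is hypothetical).
frame = cv2.imread("road_frame.jpg")

# Convert to HSV, where "redness" is easier to isolate than in raw BGR values.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Red hues wrap around 0 degrees, so two ranges are combined.
# These thresholds are illustrative guesses, not tuned values.
lower_red1, upper_red1 = np.array([0, 120, 120]), np.array([10, 255, 255])
lower_red2, upper_red2 = np.array([170, 120, 120]), np.array([180, 255, 255])
mask = cv2.inRange(hsv, lower_red1, upper_red1) | cv2.inRange(hsv, lower_red2, upper_red2)

# Group bright-red pixels into blobs; sufficiently large blobs become candidates.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]
print(f"Found {len(candidates)} candidate brake-light regions")

A production system would of course combine many such cues with learned models and frame-to-frame tracking rather than relying on a single color threshold.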

Nevertheless, cameras equipped with image-processing capabilities have drawbacks. As noted above, they fall short of the human visual processing system. They also perform less well in extreme heat or cold, in rain and other types of precipitation, or in light that is too variable, too low, or too bright. As a result, the visual images they generate tend to be less accurate – and therefore less capable of prompting the vehicle's other systems to respond in ways that maximize safety and convenience – when environmental conditions are suboptimal.[v-i][v-ii]

Meanwhile, camera-ISP pairings also pose logistical concerns. Because they generate large amounts of data and use them to guide the vehicle, they require more bandwidth than the imaging standards currently used by vehicle manufacturers can provide. BitFlow, based in Boston, recently put CoaXPress (CXP) technology forward as a possible solution. In an article published in June 2019, it said the high-speed version of this point-to-point serial communication standard for video data transmission was better than current standards at supporting autonomous vehicles that collect data from cameras and other sensors and then integrate it in a centralized, high-performance artificial intelligence (AI) computer that generates operational commands.[vi]
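A rough back-of-the-envelope calculation shows why bandwidth becomes a concern. The camera parameters below (resolution, bit depth, frame rate, number of cameras) are assumptions chosen purely for illustration, not figures from the cited article; the per-link rates for CXP-6 (6.25 Gbit/s) and CXP-12 (12.5 Gbit/s) are the nominal speeds defined by the CoaXPress standard.

# Back-of-the-envelope data-rate estimate for uncompressed camera streams.
# All camera parameters below are illustrative assumptions.
width, height = 1920, 1080      # pixels per frame
bits_per_pixel = 12             # raw sensor bit depth
frames_per_second = 30
num_cameras = 6                 # e.g., a surround-view rig

bits_per_second = width * height * bits_per_pixel * frames_per_second * num_cameras
gbps = bits_per_second / 1e9
print(f"Raw, uncompressed: {gbps:.1f} Gbit/s total")   # ~4.5 Gbit/s

# Nominal per-link rates from the CoaXPress standard.
cxp6_link, cxp12_link = 6.25, 12.5   # Gbit/s
print(f"Links needed at CXP-6:  {gbps / cxp6_link:.2f}")
print(f"Links needed at CXP-12: {gbps / cxp12_link:.2f}")

Even this modest, uncompressed setup approaches the capacity of a single high-speed link, which is why interface bandwidth features so prominently in the debate.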

Thermal imaging

Other technology providers have argued, though, that focusing on systems capable of handling large amounts of data from the visual spectrum misses the point. They believe that the deficiencies of visual data are significant enough to justify the development of sensors that collect other types of information.

This includes thermal imaging systems – namely, equipment that measures infrared electromagnetic radiation, which has longer wavelengths than visible light. This type of system is designed to detect the heat signatures that make animals and people stand out from the background. It has clear advantages, in that it performs better in conditions where lighting is low or variable.[vii] But such equipment is expensive, and manufacturers have not yet found a way to produce it both economically and at scale.[viii]
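The physics behind the heat-signature claim can be illustrated with Wien's displacement law, which relates a body's temperature to the wavelength at which its thermal emission peaks. The short calculation below is a standard textbook relation, not something drawn from the cited sources, and the example temperatures are assumptions.

# Wien's displacement law: lambda_peak = b / T, with b ~ 2.898e-3 m*K.
WIEN_B = 2.898e-3  # m*K

def peak_wavelength_um(temperature_kelvin):
    """Wavelength (in micrometres) at which blackbody emission peaks."""
    return WIEN_B / temperature_kelvin * 1e6

# A person at roughly skin temperature (~305 K) emits most strongly near 9.5 um,
# squarely in the long-wave infrared band thermal cameras sense; being warmer than
# the background, the person also radiates more power, which is what makes the
# heat signature stand out to the sensor.
print(f"Person (~305 K): {peak_wavelength_um(305):.1f} um")
print(f"Cool road surface (~285 K): {peak_wavelength_um(285):.1f} um")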

Nevertheless, FLIR Systems of Oregon believes it is close to a solution. Earlier this year, it pointed out that it had already succeeded in turning out nearly 2 million units of its Lepton thermal camera, as well as more than 500,000 automotive-qualified thermal sensors. It said it expected costs to keep falling and offered to help other manufacturers by making its machine-learning thermal dataset available at no cost.[ix]

Radar and LiDAR

Meanwhile, some technology providers have turned to other methods. They have developed systems that use radio waves or infrared lasers – that is, radar or LiDAR – to detect objects and shapes and to estimate speed and direction of movement. These systems give autonomous vehicles the ability to generate images of their surroundings and adjust their operations accordingly.

Each of these techniques has its own strengths and weaknesses. Radar, for example, is effective in all weather conditions and lighting levels. However, it cannot pick up light-based signals such as traffic lights, and it typically generates relatively low-resolution images.[x] Researchers and developers have therefore been working to improve resolution. Arbe, an Israeli company, has made significant progress on this front: CEO Kobi Marenko claimed recently that Arbe’s Phoenix radar system was capable of generating exactly the kind of ultra-high-resolution data on distance, speed, and horizontal and vertical positioning needed to support autonomous vehicle operations.[xi]
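Two standard radar relations help explain both the resolution limitation and the speed measurement: range resolution is set by the transmitted bandwidth (ΔR = c / 2B), and radial speed follows from the Doppler shift (v = f_d · λ / 2). The 77 GHz carrier and 1 GHz bandwidth below are typical figures for automotive radar, used here as assumptions rather than as specifications of any particular product.

C = 3.0e8  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    """Smallest range separation a radar can resolve: delta_R = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

def radial_speed_mps(doppler_hz, carrier_hz):
    """Radial speed from the Doppler shift of a monostatic radar: v = f_d * lambda / 2."""
    wavelength = C / carrier_hz
    return doppler_hz * wavelength / 2.0

# Illustrative automotive-radar figures (assumed, not product specs).
print(f"Range resolution at 1 GHz bandwidth: {range_resolution_m(1e9):.2f} m")
print(f"Speed for a 15 kHz Doppler shift at 77 GHz: {radial_speed_mps(15e3, 77e9):.1f} m/s")

Improving angular and range resolution, as Arbe and others are attempting, essentially means pushing these physical limits with wider bandwidths and larger antenna arrays.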

LiDAR, for its part, also has clear advantages. Like radar, it can operate in all lighting conditions. It generates data that are higher in resolution than most radar systems produce, and its datasets are digitized – and therefore easy to integrate with other computers and automated equipment. Even so, it tends to be short in range, unpredictable in reliability, and very high in cost.[xii] It also works poorly in rain and snow, and, like radar, it cannot register light-based signals.[xiii]
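LiDAR ranging itself is straightforward time-of-flight arithmetic: the sensor times a laser pulse's round trip and halves the distance light travels in that interval (R = c·t / 2). The pulse time and timing-jitter figures below are illustrative assumptions, not the specifications of any product.

C = 3.0e8  # speed of light, m/s

def range_from_round_trip_m(round_trip_seconds):
    """Target distance from a pulse's round-trip time: R = c * t / 2."""
    return C * round_trip_seconds / 2.0

# A return arriving about 1.33 microseconds after the pulse left corresponds to ~200 m.
print(f"{range_from_round_trip_m(1.33e-6):.0f} m")

# Timing precision dominates range accuracy: a 1 ns timing error is ~15 cm of range.
print(f"Range error for 1 ns timing error: {range_from_round_trip_m(1e-9) * 100:.0f} cm")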

Researchers and engineers are working to overcome some of these limitations. Kevin Mak, a principal analyst at Strategy Analytics, recently highlighted the progress made by Luminar Technologies, based in Orlando, Florida. He told ITU News that the company was now offering LiDAR-based solutions that carried a price tag of less than $1,000 per unit. Prices are likely to fall even further as engineers develop solid-state LiDAR to replace the current mechanical units, he added.[xiv]

Conclusion

All of the imaging systems described here have deficiencies. So far, developers and manufacturers of autonomous vehicles have tried to compensate for these weaknesses by using two or more of them in tandem, outfitting their units with some combination of cameras, thermal sensors, radar, and LiDAR.[xv]

Even enthusiasts for one type of system over another acknowledge the need for multiple types of sensors. As Marenko wrote in an article talking up Arbe’s Phoenix radar system: “The reason several technologies are used is because each has strengths and weaknesses, and the combinations complement one another. When used independently, no sensor is completely reliable.”
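One simple way to see why complementary sensors help is to treat each one as an imperfect detector and combine their outputs. The sketch below is a toy independence-based fusion rule with made-up detection probabilities; it is not any vendor's algorithm, but it captures the arithmetic of redundancy.

def fused_detection_probability(sensor_probs):
    """Toy fusion rule: probability that at least one independent sensor detects
    the object, i.e. 1 minus the product of the individual miss rates."""
    miss = 1.0
    for p in sensor_probs:
        miss *= (1.0 - p)
    return 1.0 - miss

# Made-up per-sensor detection probabilities for a pedestrian at night in rain.
camera, thermal, radar, lidar = 0.60, 0.90, 0.80, 0.50
print(f"Fused: {fused_detection_probability([camera, thermal, radar, lidar]):.3f}")  # ~0.996

Real fusion stacks are far more sophisticated, weighing each sensor's error characteristics rather than assuming independence, but even this crude model shows why no single sensor needs to be perfect when several are combined.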

This multi-pronged approach is likely to drive research and development as the shift from human-directed vehicles to autonomous vehicles continues. The future may owe a great deal to firms such as Montana’s TechLink, which was formed to market new technologies of military origin.

TechLink recently offered to help private-sector companies acquire licenses for a three-dimensional, stereoscopic imaging system developed by the Air Force to pilot unmanned aerial vehicles (UAVs), also known as drones. In a post on its website, the company noted that the system “fuses visible, infrared, and multispectral images from multiple cameras for maneuvering unmanned or autonomous vehicles.” It called the technology a good fit for self-driving vehicles, saying it could be “embedded in the grille of an autonomous vehicle in order to provide enhanced safety through additional awareness.”
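Stereoscopic imaging recovers depth by triangulating the same point as seen from two cameras: depth Z = f·B / d, where f is the focal length in pixels, B is the baseline between the cameras, and d is the disparity in pixels. The sketch below uses hypothetical camera parameters purely to illustrate the relationship; it is not based on the Air Force system's specifications.

def depth_from_disparity_m(focal_length_px, baseline_m, disparity_px):
    """Depth of a point from stereo disparity: Z = f * B / d."""
    return focal_length_px * baseline_m / disparity_px

# Hypothetical stereo rig: 1000-pixel focal length, cameras 0.30 m apart.
f_px, baseline = 1000.0, 0.30
for disparity in (30.0, 10.0, 3.0):
    depth = depth_from_disparity_m(f_px, baseline, disparity)
    print(f"disparity {disparity:>4.0f} px -> depth {depth:5.1f} m")

Because disparity shrinks with distance, depth accuracy degrades quickly for far-away objects, which is one more reason stereo vision is usually fused with radar or LiDAR rather than used alone.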

Author Bio:

   Gregory Miller is a writer with DO Supply (https://www.dosupply.com) who covers Robotics, Artificial Intelligence and Automation. When not writing, he enjoys hiking, rock climbing and opining about the virtues of coffee.

 
