I don’t have a solution for flying cars, but there is something that’s making unmanned autonomous vehicles a reality. Multiple sensor fusion holds the answer to the strict safety requirements and fickle driving conditions of self-driving cars. However, designers should still weigh the wide variety of pros and cons associated with diverse instrumentation.
Why Multiple Sensor Fusion is Necessary
Every engineer knows that having the right tool for the job is important. Without it, your work can range from frustrating to impossible. For autonomous vehicles, the “right tool” seems to be an amalgamation of several different smaller tools. Every car experiences varying road conditions that are best addressed using different sensors.
One great example is rear-view cameras, which are required on new cars in the US from 2018 onward. In the past, these cameras were simply used to show drivers what was behind them. However, designers have now combined ultrasonic sensors with rear-view cameras to enable self-parking. Ultrasonic sensors are good for judging distances, and cameras are better at seeing obstacles. Together, they allow a car to park itself.
Cars need a wide range of sensors in order to fully understand their environment. Ultrasonic is good for judging a car’s distance to objects, but only at short ranges (~5-10 m). Radar can detect objects at long ranges regardless of the weather but has low resolution. LIDAR has high resolution but loses sight in heavy snow and rain. Every sensor has its strengths and weaknesses, but when combined, you get a stronger system overall. While this is an appealing solution, it also has many challenges.
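The complementary coverage described above can be sketched in code. This is a minimal illustration, not production logic: the sensor names, range limits, and weather flags below are rough assumptions for the sake of the example.

```python
# Illustrative sketch of complementary sensor coverage.
# The range limits and all-weather flags are assumptions, not spec values.
SENSORS = {
    # name: (max useful range in meters, reliable in heavy precipitation?)
    "ultrasonic": (10.0, True),
    "radar":      (250.0, True),
    "lidar":      (200.0, False),
    "camera":     (150.0, False),
}

def usable_sensors(target_range_m, heavy_precipitation):
    """Return the sensors expected to give reliable readings at this range."""
    return [
        name for name, (max_range, all_weather) in SENSORS.items()
        if target_range_m <= max_range and (all_weather or not heavy_precipitation)
    ]
```

For example, at a 5 m target in heavy snow only ultrasonic and radar remain usable, while at 100 m in clear weather radar, LIDAR, and the camera all contribute.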
Multiple sensors will give your car a better picture of its environment.
The pros of multiple sensor fusion are many. The primary benefits are fewer false positives and negatives, and better risk mitigation through redundancy.
False positives and negatives are the banes of a self-driving system. No matter how well you’ve programmed your processor, bad data will lead it to make incorrect choices. Bad things can happen when you have too few sensors.
With 3 or more sensors, your processor can determine when one sensor is acting up. In addition, using several sensors can give you environmental information. If the LIDAR signal becomes scattered and the passive visual sensor shows only white, your car may be in a snowstorm. In this scenario, your car can be programmed to rely more heavily on its radar and ultrasonic sensors. All in all, using a variety of 3 or more sensors will help your processor know whether the data it’s receiving is accurate. It’s apparent that multiple sensors allow for the kind of redundancy that self-driving cars will need.
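The cross-checking idea above can be sketched with three distance sensors measuring the same obstacle: if one reading sits far from the other two, flag that sensor as suspect and fuse only the agreeing pair. The 0.5 m tolerance is an assumption chosen purely for illustration.

```python
# Minimal sketch of cross-checking three distance sensors.
# The tolerance value is an assumption; real systems would tune it per sensor.
def fused_distance(readings, tolerance_m=0.5):
    """Fuse three distance readings (meters).

    Returns (fused_value, suspect_index or None), where suspect_index
    identifies a sensor that disagrees with the other two.
    """
    assert len(readings) == 3
    order = sorted(range(3), key=lambda i: readings[i])
    lo_i, mid_i, hi_i = order
    median = readings[mid_i]
    # One reading far above the median while the other two agree: high outlier.
    if readings[hi_i] - median > tolerance_m and median - readings[lo_i] <= tolerance_m:
        return (readings[lo_i] + median) / 2, hi_i
    # One reading far below the median while the other two agree: low outlier.
    if median - readings[lo_i] > tolerance_m and readings[hi_i] - median <= tolerance_m:
        return (median + readings[hi_i]) / 2, lo_i
    # No clear single outlier: fall back to the median.
    return median, None
```

With readings of 10.0 m, 10.2 m, and 14.0 m, the third sensor is flagged and the fused estimate comes from the two that agree; with three consistent readings, the median is used and nothing is flagged.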
Multiple sensor fusion comes with a complex set of problems as well as solutions. The most difficult hurdles are high cost, heavy processing loads, and increased system complexity.
The first problem you’ll find when looking into using lots of sensors is price. Passive visual systems are low cost, but ultrasonic and radar arrays are more expensive. Then there’s LIDAR. While LIDAR manufacturers have made recent advances, their sensors are still very expensive. Currently, if you want to include every kind of sensor available, they’ll cost you more than you can sell the car for.
At least you don’t have to mount this on any cars.
You’re probably already trying to make your code as efficient as possible to save processing power. More sensors will demand even faster processors. The processors on the market right now fall far short of the performance that fully autonomous systems require. The industry will need much more advanced processors in order to realize unmanned autonomous vehicles.
The hunger for processing power has pushed many developers to start using centralized microcontroller units (MCUs) instead of distributed electronic control units (ECUs). These MCUs come with their own complexities, such as memory concerns. Since all systems share the same memory, you’ll have to use a memory protection unit (MPU) to meet safety requirements. These MPUs can be very difficult to handle without the right tools. You’ll need to learn how to deal with MCUs if you want to use multiple sensors. You’ll also probably need some help with the libraries that process sensor data. You might be able to write a library for a LIDAR array or an ultrasonic sensor, but writing libraries for more than two sensors takes up a ton of time. You’ll need to choose good, optimized, standard libraries to help you efficiently process data from your instruments.
Fully autonomous vehicles will not be possible without using a wide variety of sensors. These sensors will add redundancy and take over for each other in case of failure. Sensor developers are already working on ways to lower the costs of their products. You should also be prepared to learn more about MCUs and MPUs, as these will be necessary to run a multi-sensor system.
While you will have to help overcome some of the challenges facing multiple sensor fusion, you don’t have to do it alone. TASKING has created a variety of tools to help developers like you tackle the task of fully autonomous driving. You may not be able to control the wind or the rain, but with TASKING’s tools, your sensors will be able to accurately navigate your vehicle through them.
Have more questions about multiple sensor fusion? Call an expert at TASKING.