Autonomous Transportation Systems are undoubtedly the biggest development in modern technology. Compared to robots with dexterous manipulation capabilities, the Autonomous Vehicle and other Wheeled Mobile Robots are fairly simple from a control point of view: you control steering (lateral movement) and acceleration/braking (fore-aft movement), simultaneously. The environment and speeds these robots operate in are highly complicated, however. There are complex man-made signs, signals, and markings that form a system of constraints; there are other vehicles piloted by human drivers who may or may not follow the law; there are pedestrians who enter the roadway intentionally, and animals and debris that do not. Then there are weather conditions that compromise perceptual range and detail (fog, for instance, dramatically reduces both).
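
To make the "two control inputs" point concrete, here is a minimal sketch of a kinematic bicycle model in Python. The names and values (`step`, a 2.7 m wheelbase, a 20 ms time step) are illustrative assumptions, not any particular vehicle's API; the point is simply that the entire actuation space reduces to one steering angle and one longitudinal acceleration.

```python
import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float = 0.0        # position along the road (m)
    y: float = 0.0        # lateral position (m)
    heading: float = 0.0  # yaw angle (rad)
    speed: float = 0.0    # forward speed (m/s)

def step(state: VehicleState, steer: float, accel: float,
         wheelbase: float = 2.7, dt: float = 0.02) -> VehicleState:
    """Advance a kinematic bicycle model by one time step.

    steer: front-wheel steering angle (rad)  -- the lateral input
    accel: longitudinal acceleration (m/s^2) -- the throttle/brake input
    """
    new_speed = max(0.0, state.speed + accel * dt)
    new_heading = state.heading + (state.speed / wheelbase) * math.tan(steer) * dt
    new_x = state.x + state.speed * math.cos(state.heading) * dt
    new_y = state.y + state.speed * math.sin(state.heading) * dt
    return VehicleState(new_x, new_y, new_heading, new_speed)
```

Two scalars in, a full trajectory out: the control interface really is that small. Everything hard about the problem lives in deciding what those two scalars should be at each instant.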

Current efforts in Autonomous Vehicle projects are not showing signs of converging on the level of confidence consumers require for safety. The key approach has been to structure the environment so the car needs less reasoning power. This is done by scanning the environment, often with LiDAR combined with computer vision, and post-processing the data to reconstruct it with appropriate markers. The result is a prior interpretation of the environment the Autonomous Vehicle can consume directly, so it avoids having to interpret signs and markings itself and at least knows where to look for signals. The problem is: does the car understand the environment, or does it just follow instructions? What happens when something changes, something is wrong, or the mapping is incomplete?
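
A toy sketch of that failure mode, in Python: a prior map promises certain features at the car's pose, live perception reports what it actually sees, and the mismatch is exactly where "following instructions" stops working and genuine interpretation is needed. The feature labels and the `reconcile` helper are hypothetical, invented for illustration.

```python
def reconcile(map_features: set, perceived_features: set) -> dict:
    """Compare what the prior map promises against what perception sees.

    missing:    mapped but not observed (stale or wrong map)
    unexpected: observed but not mapped (the world changed)
    """
    return {
        "missing": map_features - perceived_features,
        "unexpected": perceived_features - map_features,
    }

# Example: the map promises a stop sign the sensors cannot confirm,
# while a traffic cone has appeared that the map knows nothing about.
report = reconcile({"stop_sign", "lane_marking"},
                   {"lane_marking", "traffic_cone"})
print(report)  # {'missing': {'stop_sign'}, 'unexpected': {'traffic_cone'}}
```

A car that merely follows the map has no principled answer for either set; resolving them requires reasoning about the scene itself, which is precisely the capability the structured-environment approach was meant to avoid.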
