Human nature vs. automated vehicles
The idea behind self-driving vehicles is to outfit them with cameras and sensors that track all surrounding objects so the vehicle can react appropriately. However, this is incredibly complex. Programming a vehicle to follow a list of rules of the road won’t make it drive as well as a human does. Humans can make judgment calls that an automated vehicle can’t, such as making eye contact with other road users to confirm who has right of way.
Imagine driving down a dual carriageway, with one lane blocked off with cones, when suddenly a child runs into the road. Human nature tells you to swerve through the cones to save the child. But would a self-driving vehicle do the same, given that this would mean breaking the law? Is it even possible to build human thought processes and reactions into an automated vehicle?
Chris Gerdes, a professor of Mechanical Engineering at Stanford University and Director of the Center for Automotive Research at Stanford (CARS), commented: “We need to take a step back and say, ‘We need to think about traffic codes reflecting actual behaviour to avoid putting the programmer in a situation of deciding what is safe versus what is legal.’” So, if they’re so complex, what’s the point? Government data identifies driver behaviour or error as a factor in 94% of crashes. Amongst many other benefits, self-driving vehicles will help reduce driver error. They should also have the potential to reduce risky and dangerous driving, including drink driving and speeding.
While we’re certainly on the road to an autonomous future, the consensus across the car world is that true self-driving vehicles are still a way off. The most immediate change we can expect to see is the transition to electric vehicles (EVs).