Driverless cars: are you ready?
The most ambitious leap in the field of transportation is no longer a distant vision. Hussein Dia, Associate Professor at Swinburne University of Technology, sets the scene…
The transition to fully driverless cars is still several years away, but vehicle automation has already started to change the way we think about transportation, and it is set to disrupt business models throughout the automotive industry.
Driverless cars are also likely to create new business opportunities with a broad reach, touching companies and sectors well beyond the automotive industry and giving rise to a wide range of new products and services.
New business models
We currently have Uber developing a driverless vehicle, and Google advancing its driverless car and investigating a ridesharing model. Meanwhile, Apple is reportedly gearing up to challenge Tesla in electric cars and Silicon Valley is extending its reach into the auto industry.
These developments signal the creation of entirely new shared-economy businesses that will tap into a market in which smart mobility is seamlessly integrated into our lives. Consider, for example, the opportunity to provide mobility as a service using shared, on-demand driverless vehicle fleets. Research by Deloitte shows that car ownership is making less and less sense to many people, especially in urban areas.
Individuals are finding it difficult to justify tying up capital in an under-utilised asset that sits idle for 20 to 22 hours every day. Driverless on-demand shared vehicles offer a sensible alternative to a second car for many people, and as the trend becomes more widespread it may begin to challenge the case for owning a first car at all.
A recent study by the International Transport Forum modelled the impacts of shared driverless vehicle fleets for the city of Lisbon in Portugal. It showed that the city’s mobility needs could be met with only 35 per cent of today’s vehicles during peak hours, when shared driverless vehicles complement high-capacity rail. Over 24 hours, the city would need only 10 per cent of its existing cars to meet its transportation needs.
The Lisbon study also found that while the overall volume of car travel would likely increase (because the vehicles will need to re-position after they drop off passengers), the driverless vehicles could still be turned into a major positive in the fight against air pollution if they were all-electric.
It also found that a shared self-driving fleet replacing cars and buses would remove the need for all on-street parking, freeing an area equivalent to 210 soccer fields, or almost 20 per cent of the total kerb-to-kerb street space. Other studies have shown that dynamic ridesharing with driverless vehicles could lift vehicle utilisation to as much as eight hours per day.
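To put rough numbers on that utilisation gap, here is a small illustrative calculation in Python; the daily driving hours are taken from the figures quoted above, and the exact values are indicative only.

```python
# Rough comparison of vehicle utilisation, using the figures quoted above.
# The daily hours are illustrative assumptions, not study outputs.

HOURS_PER_DAY = 24

# A privately owned car idle for 20-22 hours a day is driven 2-4 hours.
private_use_hours = (2, 4)

# Dynamic ridesharing with driverless vehicles: up to 8 hours per day.
shared_use_hours = 8

for hours in private_use_hours:
    print(f"Private car, {hours} h/day in use: "
          f"{hours / HOURS_PER_DAY:.0%} utilisation")

print(f"Shared driverless vehicle, {shared_use_hours} h/day in use: "
      f"{shared_use_hours / HOURS_PER_DAY:.0%} utilisation")
```

Even at the upper end, a private car is in use for around 17 per cent of the day, whereas a shared driverless vehicle at eight hours per day reaches roughly a third.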
Car insurance
A recent study by McKinsey on disruptive technologies suggests that up to 90 per cent of all accidents could be prevented by driverless vehicles. So why buy insurance if automation makes accidents far less likely?
“The truth is, if it’s a safer way of driving, it’s good for society and it’s bad for our insurance business,” the US business magnate Warren Buffett said recently when asked about the impact driverless vehicles may have on his car insurance subsidiary. “Anything that cuts accidents by 30 per cent, 40 per cent, 50 per cent would be wonderful, but we won’t be holding a party at our insurance company.”
Other studies have speculated that premiums could fall by as much as 75 per cent, especially if drivers are no longer required to get coverage and liability shifts from drivers to manufacturers and technology companies. Under this scenario, insurers might move away from covering private customers against risk tied to “human error” and towards covering manufacturers and mobility providers against technical failure. A RAND Corporation report also predicts that drivers might end up covering themselves with health insurance instead of vehicle insurance.
Will driverless vehicles destroy the very idea of ownership?
Does all this mean car ownership is passé? In some ways, you may not own every facet of your driverless car anyway. Vehicle manufacturers are arguing that since they own the software that runs a connected vehicle, they also own the machine that runs that program.
In comments submitted to the US Copyright Office, they argue that purchasers are only licensing the product, and that it would be unsafe for them to modify the vehicle’s programming or even make a repair. The Copyright Office is currently holding a hearing on the issue. If it rules in favour of the manufacturers, it will set a precedent that could change the whole landscape of vehicle ownership.
Not everyone will be excited by this vision; many remain sceptical that we are on the cusp of a transformation in mobility. Others simply still want to drive, and not everyone will want to rideshare on a daily basis. Many might also argue that better investment in public transport would achieve similar outcomes.
Whether you embrace or object to these scenarios, the reality is that driverless vehicles are coming, and they will have socio-economic and other effects on our society, some good and some bad.
I see them, along with other urban transport technologies, as having a role in delivering new mobility solutions as part of a holistic approach to improving road safety and promoting low-carbon mobility. The market will ultimately determine whether they succeed.
The regulatory challenge
What procedure could be used to verify that self-driving software complies with road rules and safety requirements? Should the AI self-driving software pass a benchmark test, developed specifically for autonomous vehicles, before it can be recognised as a legal driver? Who should develop such a test, and what should it include?
Make no mistake, car manufacturers and technology companies are working towards a vision of fully autonomous vehicles, and that vision includes taking the human driver out of the loop. They have already made huge advancements in this space. The self-driving software that has been developed, based on “deep neural networks”, includes millions of virtual neurons that mimic the brain. The on-board computers have impressive supercomputing power packed inside hardware the size of a lunchbox.
The neural nets do not include any explicit programming to detect objects in the world. Rather, they are trained to recognise and classify objects using millions of images and examples from data sets representing real-world driving situations.
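To make the idea of training rather than programming concrete, here is a minimal sketch of the approach, written in PyTorch (the article does not specify any particular framework); the network size, the four object classes and the random stand-in images are illustrative assumptions, not a real driving data set or a production self-driving model.

```python
# A minimal sketch of learning to classify images from labelled examples.
# The data below are random placeholders standing in for the millions of
# real-world driving images a production system would be trained on.
import torch
import torch.nn as nn

NUM_CLASSES = 4   # e.g. car, pedestrian, cyclist, background (illustrative)
IMAGE_SIZE = 64   # tiny images, for demonstration only

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * (IMAGE_SIZE // 4) ** 2, NUM_CLASSES),
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder batch of "images" and human-assigned labels.
images = torch.randn(32, 3, IMAGE_SIZE, IMAGE_SIZE)
labels = torch.randint(0, NUM_CLASSES, (32,))

for step in range(10):              # a real system trains for far longer
    logits = model(images)          # forward pass: predict a class per image
    loss = loss_fn(logits, labels)  # compare predictions with human labels
    optimizer.zero_grad()
    loss.backward()                 # adjust the weights to reduce the error
    optimizer.step()
```

Nothing in the code tells the network what a car or a pedestrian looks like; the behaviour emerges from repeatedly adjusting the weights so the predictions match the human labels.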
But the driving task is much more complex than object detection, and detection is not the same as understanding. For example, if a human is driving down a suburban street and sees a soccer ball roll out in front of the car, the driver would probably stop immediately since a child might be close behind.
Even with advanced AI, would a self-driving vehicle know how to react? What about those situations where an accident is unavoidable? Should the car minimise the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Should it be given the choice to select between these extremes?
These are not routine situations, and because there are few real-world examples to learn from, they are difficult to cover with deep-learning training. How can such situations be included in a benchmark test?
Turing tests
The question of whether a machine could “think” has been an active area of research since the 1950s, when Alan Turing first proposed his eponymous test. The basis of the Turing Test is that a human interrogator is asked to distinguish which of two chat-room participants is a computer, and which is a real human. If the interrogator cannot distinguish computer from human, then the computer is considered to have passed the test.
The Turing Test has many limitations and is now considered obsolete. But a group of researchers has proposed a similar test based on machine vision, better suited to evaluating today’s AI: a framework for a Visual Turing Test, in which computers would answer increasingly complex questions about a scene.
The test calls for human test-designers to draw up a list of attributes that a picture might have. Images would first be hand-scored by humans against these criteria, and a computer vision system would then be shown the same pictures, without the “answers”, to determine whether it could pick out what the humans had spotted.
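A minimal sketch of how such a test might be scored is shown below; the questions, the human answers and the placeholder vision_system function are hypothetical, invented purely to illustrate the agreement measure.

```python
# Scoring sketch for a Visual Turing Test: humans answer yes/no questions
# about a scene, a vision system answers the same questions without seeing
# those answers, and the level of agreement is measured.

human_answers = {
    "is there a pedestrian in the scene?": True,
    "is the pedestrian crossing the road?": False,
    "is there a vehicle in the adjacent lane?": True,
}

def vision_system(question: str) -> bool:
    """Placeholder for a model that answers questions about one image."""
    return True  # a real system would inspect the image here

matches = sum(
    vision_system(question) == answer
    for question, answer in human_answers.items()
)
score = matches / len(human_answers)
print(f"Agreement with human annotators: {score:.0%}")
```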
A few vision benchmark data sets are used today to test the performance of neural nets in terms of detection and classification accuracy. The KITTI data set, for example, has been used extensively as a benchmark for self-driving object detection. Baidu, the dominant search-engine company in China and also a leader in self-driving software, is reported to have achieved the best detection score on this data set, at around 90 per cent.
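Detection scores of this kind are typically computed by matching predicted bounding boxes against hand-labelled ones using intersection over union (IoU). The sketch below illustrates that idea; the boxes and the 0.5 threshold are illustrative and do not reproduce KITTI’s exact evaluation protocol.

```python
# A predicted box counts as a hit if it overlaps a ground-truth box
# strongly enough, measured by intersection over union (IoU).

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

ground_truth = [(10, 10, 50, 50), (60, 20, 90, 60)]     # hand-labelled boxes
predictions = [(12, 11, 49, 52), (100, 100, 120, 130)]  # model output

hits = sum(
    any(iou(pred, gt) >= 0.5 for gt in ground_truth)
    for pred in predictions
)
print(f"Detections matched: {hits}/{len(predictions)}")
```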
A modified Visual Turing Test can potentially be used to test the self-driving software if it’s tailored to the multi-sensor inputs available to the car’s computer, and is made relevant to the challenges of driving. But putting together such a test would not be easy. This is further complicated by the ethical questions surrounding self-driving cars. There are also challenges in managing the interface between driver and computer when an acceptable response requires broader knowledge of the world.
Policy remains the last major hurdle to putting driverless cars on the road. Whether the final benchmark bears any resemblance to a Turing-like test, or something else we have not yet imagined, remains to be seen. As with other fast-moving innovations, policymakers and regulators are struggling to keep pace. Regulators need to engage the public and create a testing and legal framework to verify compliance. They also need to ensure that it is flexible but robust.
Without this, a human will always need to be in the driver’s seat and fully autonomous vehicles would go nowhere fast.