Autonomous Vehicles – The Future Of Transportation

Autonomous vehicles will change the future of transportation. We will discuss the ethical implications.

Autonomous vehicles are one of the most interesting topics in human-robot interaction right now. While they are not humanoid in shape, they are one of the biggest, deadliest and most promising robots available to the general public. They have the potential to dramatically change how we get from point A to B and what infrastructure we need. But they also raise many ethical issues, besides a long list of technical challenges.

Driving a car is inherently dangerous once you drive at a practical speed. Your autonomous vehicle needs to be able to deal with all sorts of traffic situations, weather conditions and even unpredictable human operators. This is an enormous challenge.

I talked with Professor Robert Sparrow from Monash University, Professor Tracy Hresko Pearl and Professor Alan R. Wagner about the future of transportation with autonomous vehicles. The core question is: when will it become illegal for humans to drive cars?

Transcript

The transcript of the episode is available as a PDF. You can also follow the episode with subtitles through Descript.

HRI-Podcast-Episode-001-Autonomous-Vehicles-Future-Transportation-Transcript

ISSN 2703-4054

Transcript:

Welcome to the Human-Robot Interaction Podcast. I’m your host, Christoph Bartneck, and I’m fascinated with how humans and robots interact with each other. In this series we will explore this relationship with philosophers, engineers, psychologists and artists. We will look into the ethical implications of this new technology and how we can make it work for humans. We will shine light on the technologies that make robots intelligent and useful. We will also look at the people behind the science. Why are they interested in human-robot interaction? What career path did they take? And what does it mean to be a postgraduate student in this area? This is a lot of ground to cover. So let’s get started with one of the most interesting topics in human-robot interaction right now: autonomous vehicles. While they are not human in shape, they are one of the biggest, deadliest and most promising robots available to the general public. They have the potential to dramatically change how we get from point A to B and what infrastructure we need. But they also raise many ethical issues besides a long list of technical challenges. In France, Germany and England, terrorists used vehicles as weapons and drove them into crowds of people, killing and injuring many. At the same time, hackers were able to remotely control a Jeep Cherokee. This does raise considerable security concerns. Driving a car is inherently dangerous once you drive at a practical speed. The autonomous vehicle needs to be able to deal with all sorts of traffic situations, weather conditions and even unpredictable human operators. This is an enormous challenge. Despite the recent advances that Waymo and practically every major car manufacturer have made, we have already encountered several fatalities with autonomous vehicles. In 2018 an Uber car driving in autonomous mode even killed a pedestrian crossing the street. Professor Rob Sparrow from Monash University in Australia wrote an article called “When Human Beings Are Like Drunk Robots: Driverless Vehicles, Ethics and the Future of Transport”. I talked to Robert in Sydney during a symposium on ethics in robotics and AI. Rob, how do you see people interacting with autonomous vehicles?

[00:03:02] Robert: So what’s of interest here is how the system as a whole, the sort of human plus robot or human plus AI system, will function when the machines are less than perfect. You know, when one is trying to reproduce human performance at some task with an AI or a robot, usually people can get a fair way, but they can’t always produce perfect task performance. In fact, in some contexts, you might wonder whether there’s any such thing as perfect driving performance. How does the system as a whole operate when the machine only works part of the time?

[00:03:49] Christoph: Part of the time? Are we not talking about autonomous vehicles? Are they not supposed to drive all the time? The sad news is that today’s autonomous vehicles are unable to drive all the time. Governments have specified levels of autonomy, ranging from zero (no driving automation) to five (full driving automation), to better manage the legal aspects of today’s autonomous vehicles. Level zero represents traditional vehicles without any automatic functionality. At level one, the vehicle has one type of automatic functionality, for example, braking automatically when encountering an obstacle. At level two, the vehicle can perform both braking and accelerating functions as well as changing lanes. However, the driver has to monitor the system at all times and be ready to take control whenever necessary. For example, all Tesla vehicles are officially considered level two automation. At level three, the driver does not need to monitor the system at all times. Under certain circumstances, the system can work autonomously. The system gives the driver time, for example, ten seconds, before handing back control. In 2018 the Audi A8 claimed to be the first car capable of level three automation. At level four, the vehicle can perform all driving functions under standard circumstances. Non-standard conditions would include bad weather, such as snow or heavy rain. Finally, at level five, the vehicle can perform all driving functions in all circumstances. But now back to Rob Sparrow.
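As a side note for the technically minded: the six levels map naturally onto a small data structure. Below is a minimal sketch in Python; the level names follow the SAE J3016 standard, but the field names and example strings are illustrative choices of mine, not an official schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    """One of the six SAE J3016 levels of driving automation."""
    level: int
    name: str
    driver_must_monitor: bool  # must a human watch the road at all times?
    example: str

SAE_LEVELS = [
    AutomationLevel(0, "no driving automation", True,
                    "a traditional vehicle without automatic functionality"),
    AutomationLevel(1, "driver assistance", True,
                    "one automatic function, e.g. automatic braking"),
    AutomationLevel(2, "partial automation", True,
                    "braking, accelerating and lane changes (e.g. Tesla)"),
    AutomationLevel(3, "conditional automation", False,
                    "autonomous under certain circumstances, hands control "
                    "back with warning time (e.g. the 2018 Audi A8 claim)"),
    AutomationLevel(4, "high automation", False,
                    "all driving functions under standard conditions"),
    AutomationLevel(5, "full automation", False,
                    "all driving functions in all circumstances"),
]

for lvl in SAE_LEVELS:
    duty = "driver must monitor" if lvl.driver_must_monitor else "no constant monitoring"
    print(f"Level {lvl.level} ({lvl.name}): {duty}; {lvl.example}")
```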

[00:05:37] Robert: And this is actually a really hard problem in systems design, because one thing we know is that human beings come to rely on machines very quickly. They over-rely on machines. And so the sort of naive solution, which is for the human being to take over the task when the machine is failing, actually works much less effectively than people might think. Because when the driverless vehicle, for instance, says, beep beep, driving context exceeds specifications, please take control, the person who is nominally in charge of the vehicle may not be paying attention, and that is likely to generate accidents.

[00:06:25] Alan: The problem is that humans are just not cognitively built to be able to re-engage the autonomous vehicle on very short notice in very dynamic situations. We are not meant for that and we shouldn’t be asked to do it. The simple fact that we’ve bought a Tesla, or whatever car, and have agreed to terms that we probably never read should not imperil us.

[00:06:45] Robert: I mean, this sort of problem of what’s called automation bias is quite well known, has been studied for a long time and has really, I guess, now led some of the people doing driverless car research to think that these systems won’t be safe if they rely on human beings at all. That you simply cannot expect human drivers to take control in a short amount of time in a dangerous situation. And so really, the performance you need from a machine needs to be at least as good as the performance you’d get out of a human driver.

[00:07:28] Christoph: In 2018 we had several accidents with autonomous vehicles that demonstrated that their performance is not yet where we want it to be. First, in March, an autonomous Uber car in Tempe, Arizona killed Elaine Herzberg as she was crossing the road at night. The Uber taxi drove in autonomous mode and the backup human driver, Rafaela Vasquez, failed to take back control in time. She was distracted watching television on her phone. Still, the vehicle had detected Elaine around six seconds prior to impact but only decided to trigger the emergency brakes around one second prior to impact. Unfortunately, the emergency braking system was disabled that day, as “emergency braking manoeuvres are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior”, according to Uber. The second accident, on October 19th, had a happier but still very enlightening result. The driver of a Waymo car took back control to avoid a car that had cut into its lane. The driver changed into the other lane, unaware that a motorcyclist was riding in it. The motorcyclist was hurt but survived the crash. Waymo claimed that its simulation, based on the data gathered from the vehicle, showed that the car would have avoided the accident if it had been left in control.
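To get a feeling for why one second of warning is far too little, here is a back-of-the-envelope calculation. The speed and deceleration figures are illustrative assumptions, not values from the accident report.

```python
# Back-of-the-envelope stopping arithmetic for the scenario above.
# All figures are illustrative assumptions, not accident-report values.

speed_kmh = 60.0                 # assumed vehicle speed
decel = 7.0                      # assumed hard-braking deceleration, m/s^2

speed_ms = speed_kmh / 3.6                # km/h to m/s (about 16.7 m/s)
time_to_stop = speed_ms / decel           # seconds of braking to reach zero
dist_to_stop = speed_ms**2 / (2 * decel)  # braking distance in metres

for warning in (6.0, 1.0):                # seconds between detection and impact
    covered = speed_ms * warning          # metres travelled during the warning
    verdict = "enough time to stop" if warning > time_to_stop else "too late"
    print(f"{warning:.0f} s warning: travels {covered:.0f} m; "
          f"a full stop needs {time_to_stop:.1f} s over {dist_to_stop:.0f} m "
          f"({verdict})")
```

Under these assumptions a full stop takes about 2.4 seconds, so six seconds of warning is ample while one second guarantees an impact, even before adding any human or mechanical reaction time.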

[00:08:58] Christoph: What are your thoughts on these accidents, Rob?

[00:09:01] Robert: Well again, I think that’s entirely predictable. Systems involving human beings and machines are quite complicated, and you can get both circumstances. You can get cases where people over-rely on the machine to the detriment of the performance of the system. But you can also get the situation where people have an exaggerated sense of their own abilities and take over in ways that degrade the performance of the system as a whole. I think that’s actually an early example of a scenario that we predict in our paper called “When human beings are like drunk robots”. Which is that you can imagine the motorcyclist in that case actually taking legal action against the human being who took control of the vehicle, pointing out that they wouldn’t have been injured except for that person’s actions. And that is, I think, the sort of scenario that is likely to eventually render driving illegal: once the machines outperform human beings in the driving task. And I think they need to be able to do that in order for it to be ethical to put them on the roads. Once they do achieve that level of performance, then human beings taking control will generate accidents. And where those accidents involve third parties, eventually people are going to say: Look, you’ve got to stop taking control of these vehicles; when you drive, you kill people. And so I think the future of driverless vehicles is actually cars with no steering wheels. And then I think that has some really interesting implications for things like the psychology of vehicle ownership, how people relate to other people in vehicles and, indeed, the entire nature of our urban infrastructure.

[00:10:58] Christoph: When we slowly stop driving ourselves and let the machines take care of us, we are likely to also become less skilled drivers. How will this transition period work out?

[00:11:11] Robert: So it is a really hard problem. And, you know, in some ways, the easiest solution would be to move very rapidly to a fully automated driving system. That is actually a much easier performance scenario for driverless vehicles, if they didn’t have to worry about the unpredictable behaviour of human beings. But politically, I think that’s unlikely to happen. I guess I think that if the machines aren’t safer, there really are three possibilities here. One is that we introduce driving systems that do fail, and we hope that human beings can take control. For the reasons we’ve already discussed, I think that’s actually quite dangerous, in that relying upon human beings to continue to concentrate and pay attention to the road situation whilst not actually being in control of the vehicle is really unlikely to work, and, I think, quite dangerous. So if you’re not going to be able to rely upon people in most of the contexts in which accidents might occur, I think the performance of the driverless vehicle has to be better than the average human driver before you should be allowed to put it on the road. I mean, if you said: Look, the average human driver has so many accidents, my car has more accidents than that, we shouldn’t accept that vehicle on the road. If you can show that your vehicle outperforms the average human driver, then it seems to me that it’s unethical for people to drive in that circumstance. It’s actually slightly more complicated than that, because driver performance varies quite a bit. Most accidents are caused by a small minority of drivers, young men, sometimes people in advanced old age, so most drivers are better than the average driver. Which means that actually, you might have a situation where replacing the entire vehicle fleet with driverless vehicles would reduce the rate of accidents but would still expose most drivers to a slightly higher risk of death. That’s quite a classical philosophical problem in how we make trade-offs between the value of the total aggregate consequences, or utility, and our treatment of individuals. My suspicion is that once vehicles can outperform the average driver, they will very quickly move past that to outperform any human driver, and then I think it will be unethical for human beings to drive. I think politically it’ll be quite hard for governments to legislate to prohibit driving, not least because it will require everybody to buy new cars. But I think it is quite plausible to insist that from some particular date, you’re not allowed to have a steering wheel in a car that’s intended for use on the public roads. When that date will occur, I’m really not sure. It’s still bizarrely hard for someone not at the heart of this research programme of driverless vehicles to get a real sense of the capacity of these systems. I’ve been to lots of driverless vehicle conferences now. I’ve seen multiple people with PhDs from Stanford and MIT, and some of them say: Look, we’re just two years away from having a vehicle that is safer than the average human driver. Others say it’ll be another fifty years before these cars are on the road. As a philosopher, I’m not especially well qualified to make that judgement, but I am confident that if they’re not safer than human beings, they shouldn’t be on the road. Once they are safer than human beings, it’ll be unethical for us to drive.
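Rob’s point that a driverless fleet could beat the average driver while still leaving most individual drivers worse off is easy to miss. Here is a small numerical sketch; all figures in it are made up purely for illustration.

```python
# Made-up figures to illustrate the mean-versus-most-drivers gap.
n_drivers = 1000
risky = 100                      # small minority of high-risk drivers
safe = n_drivers - risky         # the other 90% of drivers

risk_safe = 1.0                  # expected crashes per million km (invented)
risk_risky = 15.0                # the high-risk minority (invented)
risk_av = 2.0                    # a hypothetical autonomous vehicle

mean_risk = (safe * risk_safe + risky * risk_risky) / n_drivers
print(f"average human driver risk: {mean_risk:.1f}")   # 2.4
print(f"autonomous vehicle risk:   {risk_av:.1f}")     # beats the average

# Replacing every driver lowers the total number of crashes...
total_human = safe * risk_safe + risky * risk_risky    # 2400
total_av = n_drivers * risk_av                         # 2000
print(f"total crashes: humans {total_human:.0f}, AVs {total_av:.0f}")

# ...yet 90% of drivers individually move from risk 1.0 to risk 2.0,
# the aggregate-versus-individual trade-off Rob describes.
print(f"drivers made individually worse off: {safe / n_drivers:.0%}")
```

Because the small high-risk minority drags the average up to 2.4, a hypothetical vehicle at risk level 2.0 beats the average and reduces total crashes, yet the 90% of drivers at risk level 1.0 would individually be safer driving themselves.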

[00:15:20] Christoph: Not everybody agrees that driving a car should become illegal. Here’s Alan Wagner from the Pennsylvania State University.

[00:15:29] Alan: So I sort of feel that we should have a right to be able to drive. We should have a right to take certain risks, understanding that there are risks. Autonomous vehicles, or any type of robot, shouldn’t be used as a method to prevent us from taking on these risks. You could imagine, and this has been written about in science fiction, right, these robots that sort of lock you into your home to prevent you from any kind of fall or getting sick or any kind of danger that comes to you. As human beings we should have our own autonomy, and driving may be one of those things we should retain the right to do.

[00:16:09] Tracy: As a law professor, I would push back on that and say that you don’t have a legal right to drive. You have a desire to drive. But at least in the United States, you have no legal right to do so. The state retains its right to revoke your licence for any number of reasons. You know, my hope is that private driving tracks will become a very popular recreational destination in short order. So anybody looking for new investments, I would put you there.

[00:16:41] Robert: It’s important to acknowledge that we already make these trade-offs. The moment you get on the roads in a car, you place the lives of pedestrians at risk, and the faster you drive, the more risk you place people under. So, as a society, when we’ve accepted that people should be allowed to drive at one hundred kilometres an hour rather than thirty kilometres an hour, we’ve already said we are willing to accept a number of fatalities. You could reduce the road toll to almost zero by simply speed-limiting all cars to thirty kilometres an hour. And then we would live in a society with a very, very, very low road toll. But it would take a long time to get to work. I mean, it’s not just engineers building driverless vehicles that are trading off human lives for the performance of the system. That’s absolutely part of the road industry.

[00:17:43] Alan: Many people die on the roads every year. In America I think it’s sixty-five to seventy thousand a year. We could eliminate all those deaths, right? We could force everyone to wear a helmet, we could have roll cages in every car, and we could set the speed limits to five miles an hour. But we choose not to. We choose to sacrifice sixty-five to seventy thousand people a year, partly for expediency, but also because we feel like we should have the autonomy to take some of these risks. And in many ways this comes down to a public safety versus individual rights debate. It seems like a slippery slope, arguing that for safety’s sake you should take away a right. You could also argue that for safety’s sake you should not be able to eat cheeseburgers, because they’re fattening, they clog your arteries and they cause greater health care risks. They might even be more dangerous than driving in a car to a certain place. But I would argue that you should have the right to choose the type of food that you want, even if it’s bad for you, just as you should have the right to choose how you arrive at a location.

[00:18:53] Tracy: I think we will get to the point. Not soon, not in the next ten years. But thirty years from now, I think this is going to be a very interesting political debate. I think that the risk introduced by a single driver onto a public road is going to be so comparatively high that, from a regulatory perspective, we’re going to be left with little choice but to ban humans from the road. That’s the right decision from a risk-profile perspective. Politically, I think this might be a very unpopular thing to do, and the joke that I make in America is that people who have their keys might be like gun owners, right, saying: You’re going to have to pry my human-driven car from my cold, dead hands. I’m optimistic, though, that by the time we reach that point, the nature of driving and of car ownership will have changed so dramatically that it won’t be the kind of horrible scenario that people currently envision. Look, I think that my children are going to have a very different relationship to cars and driving than people my age do. I think that they’ll be raised in a world with driverless cars and won’t be so attached to their keys. My name is Tracy Pearl, and I am a professor of law at Texas Tech University School of Law in Lubbock, Texas.

[00:20:11] Robert: As for the future of transport, there’s a vision that I would like to see realised, and then there’s a vision that I, in some ways, suspect might be realised, and they’re quite different visions. So I think the best possible outcome for this technology is to use it to solve what’s called the last-mile transport problem. Rail systems are very efficient movers of people into a central location, so most people’s commute takes them into a city or another place where a lot of people are working, and there’s often a train for part of the way. But getting from your house to the station, and getting from the station in the city to your office block, those require walking, and, you know, maybe it’s raining, maybe you can’t afford the time, so people tend to drive. And you see these cities where everybody is doing the same commute along the freeway. It would be much more efficient if people were in buses or on trains. Driverless vehicle technology would enable a scenario where you had essentially fleets of driverless minibuses that people interacted with through an app on their phone. They’d say: Look, I need to be at the railway station by nine AM, here’s where I live. And an autonomous vehicle would come and pick them up, along with, I don’t know, four other people in their neighbourhood along the route, take them to the railway station, they’d commute on the train into town and, if necessary, the same process would happen at the other end. That would essentially reduce the number of vehicles on the road. It would really change urban infrastructure massively, because you wouldn’t need private motor vehicles, for instance. All houses could lose their garages; the shopping malls and shopping centres could lose all the car parking space around them. There’s a very attractive future where people spend less time in vehicles and there are fewer cars on the road. Unfortunately, I believe that scenario will only come about if governments regulate, because the other scenario is, you might still lose the private motor vehicle and have people doing most of their travel in autonomous vehicles, something like Uber, but with no driver in the car. Drop a pin on your phone, a car comes and picks you up, but drives you by yourself to your workplace. And so we’ve got the same number of vehicle miles being driven, though there might be fewer vehicles on the roads. But environmentally, that scenario is much less attractive. And indeed, there are likely to be many more trips occurring as well, because, for instance, there are a lot of people who can’t drive at the moment. And part of the attraction of driverless vehicle technology is that you could be blind, or you could be a young child, and safely travel in an automobile. All the people who don’t have driver’s licences could take car trips, so you expand the population of people who can potentially travel. You also make things possible like, you know, I want a pizza and I can have it delivered by an autonomous vehicle. Unfortunately, I think that scenario is in some ways more likely if there’s a policy vacuum in this area.

[00:24:26] Christoph: In New Zealand, we have the problem that many tourists are unfamiliar with driving on the left side of the road, and this does cause some accidents. Autonomous vehicles could solve this problem and at the same time enable the tourists to enjoy the scenery. This could be a great business model for rental companies. Lyft and Uber are working in this direction, but it seems odd that the major rental car companies do not yet seem to have engaged with this new technology. But this is only a small proportion of the road traffic. Autonomous vehicles have the potential to dramatically change how we relate to cars, ownership, and transportation.

[00:25:13] Tracy: You know, there’s some really interesting data out of the United States showing that young people, teenagers, are now actually getting their licences at much lower rates than they did before. I grew up in Hawaii, and fifteen was the age at which you could get your licence. I remember that my friends and I, the day we turned fifteen, went down to our department of motor vehicles and got our licences. We don’t see that now with teenagers, because with the availability of things like Uber and Lyft it’s just so much easier now to secure transportation for yourself. You can do it from an app on your phone. So I think that trend is likely to continue. I think it’s going to become economically inefficient to own your own vehicle. I think that’s one of the many societal changes that’s going to be ushered in by an era of autonomous vehicles. I think we won’t need parking lots anymore. We won’t need to structure our urban centres in the way that we do. I think that’s really exciting, and I think people don’t realise the full spectrum of changes that are coming.

[00:26:13] Christoph: It seems absolutely necessary for us as a society to have reliable and valid data on the progress that autonomous vehicles make. Tesla and others report statistics that could be interpreted to mean that their cars already drive more safely than an average human being. But we have to keep in mind that the test drives only happen under conditions that the companies consider safe for their cars and other traffic participants. These are usually country roads or freeways; inner cities or bad weather conditions are often avoided, and hence the available statistics can be misleading. For a country to pass legislation that would dramatically alter how we relate to cars and traffic requires considerable societal discussion and consensus building. Recently, the results of a large-scale study on the moral dilemmas around autonomous vehicles were published in Nature. Several million participants made choices between two unfavourable options. The autonomous car would, for example, either run into a wall and kill its driver, or run into a group of children. Such scenarios are often referred to as trolley problems. By systematically varying factors, the researchers were able to shed light on the moral preferences of people around the world. They do, for example, tend to spare the young over the old and the many over the few. But the study also showed that the participants tended to save men over women.
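To give a feeling for what systematically varying factors means in practice, here is a toy sketch of the tallying logic behind such a study. The attributes, dilemmas and responses below are invented for illustration and are not drawn from the actual Moral Machine dataset.

```python
from collections import Counter

# Each toy dilemma pits group A against group B; the respondent picks
# which group the car should spare. Attributes and answers are invented.
dilemmas = [
    ({"age": "young", "count": 2}, {"age": "old", "count": 2}, "A"),
    ({"age": "old", "count": 5}, {"age": "young", "count": 1}, "B"),
    ({"age": "young", "count": 1}, {"age": "young", "count": 3}, "B"),
]

spared = Counter()     # how often each attribute value was on the spared side
contested = Counter()  # how often that value was actually at stake

for side_a, side_b, choice in dilemmas:
    chosen, other = (side_a, side_b) if choice == "A" else (side_b, side_a)
    if chosen["age"] != other["age"]:      # age only counts when it differs
        contested[chosen["age"]] += 1
        contested[other["age"]] += 1
        spared[chosen["age"]] += 1
    if chosen["count"] != other["count"]:  # same for group size
        winner = "many" if chosen["count"] > other["count"] else "few"
        contested["many"] += 1
        contested["few"] += 1
        spared[winner] += 1

for value in contested:
    print(f"spared '{value}' in {spared[value]} of {contested[value]} contested dilemmas")
```

Because each dilemma only differs in a few attributes, aggregating millions of such choices lets researchers estimate how strongly each factor, such as age or group size, sways the decision.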

[00:27:53] Robert: I guess one thing to keep in mind about the Moral Machine study is that, from a philosophical perspective, and actually the authors are conscious of this, ethics both shouldn’t be and isn’t a popularity contest. You can’t really settle ethical questions by polling people. So, for instance, you can imagine that if you performed a similar exercise, I don’t know, one hundred and fifty years ago, then no number of African Americans would outweigh the life of one white person, for instance, and no one would say: Well, that’s the behaviour we should instantiate in our machines. I mean, it’s useful, even vital, to see what people do think, and philosophers are increasingly interested in doing empirical work on people’s intuitions. But I would really resist the suggestion that we can settle ethical questions by simply polling people. You know, philosophy and ethics are about more than that. They’re about thinking deeply about matters and considering arguments that mightn’t occur to most people. Now, having said that, you are absolutely right that in public policy people do make decisions about how much to spend, for instance, to ensure that people don’t die on construction sites. For every skyscraper you see in a city, people have been killed building it, and you can reduce the number of deaths through policy choices. It’s just that those policy choices are either contrary to the interests of wealthy and powerful people, or expensive, or both. So yes, at one level we do already place a value on human life. There are good reasons, though, for not wanting to extend that and make it explicit. There is something about a fundamental moral equality of every human being that I think we need to hold on to. And we need to resist the idea that some lives are worth more than others. So I would be very hostile to suggestions that we should allow people to pay their way out of accidents or that we should be privileging certain classes of citizens. Those judgements are notoriously unreliable and bigoted. I mean, in lots of societies, for instance, it’s pretty clear that people value the lives of women less than the lives of men. And that’s not something I think we should be building into our automated systems.

[00:30:32] Tracy: Yes. So, I mean, look, I’m with you. I’m a torts professor, so I am very risk-averse, and I would be in favour of taking humans off the road next year. But politically, in the United States, I just can’t imagine the electorate being willing to put down their keys. I think ten years from now it’s going to be slightly different. Twenty years from now, maybe a lot different. It will happen, and I agree with you that it should happen sooner rather than later. But that’s going to be a decision not for professors to make, unfortunately.

[00:31:07] Christoph: Autonomous vehicles will play an important role in our society, and there is so much more to know and to talk about. So join me again next week for the next episode of the Human-Robot Interaction Podcast, focusing on autonomous vehicles. Thank you for listening.

Author: bartneck

Dr. Christoph Bartneck is an associate professor and director of postgraduate studies at the HIT Lab NZ of the University of Canterbury. He has a background in Industrial Design and Human-Computer Interaction, and his projects and studies have been published in leading journals, newspapers, and conferences. His interests lie in the fields of Human-Computer Interaction, Science and Technology Studies, and Visual Design.