Will self-driving cars mean that robots will make ethical decisions for humans?

Ethics of Robotics and Self-Driving Cars

Written by Chloe L.

As companies continue working on self-driving technologies, that reality is getting closer. In September, Uber began testing self-driving cars with volunteer riders in Pittsburgh, and Ford recently announced that it expects to start selling driverless cars by 2025.

As if the idea of self-driving cars weren't hard enough for people to fathom, a variety of ethical questions make the prospect of autonomous driving even harder to grapple with.

To Prick or to Spare

Have you heard of the artist and roboticist Alexander Reben? Neither had we, until a recent National Public Radio (NPR) segment, "A Robot That Harms: When Machines Make Life Or Death Decisions," discussed his work.

Reben created a robot arm that decides whether or not to prick someone’s finger. When a willing participant places a finger near the robot, it runs an algorithm to decide whether to draw blood. In essence, the robot is deciding whether or not to injure a person.
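The NPR segment doesn't describe how Reben's algorithm actually works, so purely as an illustration, here is a minimal sketch of a machine making a harm-or-spare choice, assuming the decision comes down to nothing more than chance. The function name decide_to_prick and the probability value are our own assumptions, not Reben's design.

import random

# Illustrative only: Reben's real decision logic isn't public.
# Here the "choice" to harm is nothing more than a weighted coin flip,
# which is precisely what makes handing it to a machine unsettling.
PRICK_PROBABILITY = 0.5  # assumed value, not taken from the source

def decide_to_prick(probability: float = PRICK_PROBABILITY) -> bool:
    """Return True if the robot 'chooses' to prick the finger."""
    return random.random() < probability

if decide_to_prick():
    print("The robot pricks the finger.")
else:
    print("The robot spares the finger.")

The point of the sketch is not the code itself but the fact that, however simple the rule, the decision to harm a human has been delegated to a machine.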

While the consequences of Reben’s robot are minor, the decision it makes, whether or not to harm a human, is one that self-driving cars may also need to make.

Whose best interest?

Though self-driving cars are expected to be safer, that doesn’t mean they will be perfect. So when an accident is unavoidable, how will the robot decide how to handle it? Who would it protect: the passengers or others?

Say two cars are about to collide: would a robot risk crashing into a wall or a tree rather than injure a person in another car or a pedestrian?

And if a robot were able to make the most ethical decision, is that what people really want? Do they want pure ethics or ethical self-interest?

The same NPR segment reported the findings of a recent MIT Media Lab poll that aimed to answer that question. Half of the participants said they would be likely to buy a driverless car that prioritized passenger safety above all else, but only 19 percent said they'd buy a car programmed to save the most lives.

So if there is an accident, who’s responsible?

If a human is no longer responsible for the car’s driving, would people need driver’s insurance anymore? Wouldn’t the company that sells the car and supplies the software driving it bear more responsibility than the person riding in it?

If people could use cars as needed and pay only for that usage rather than the cost of ownership and insurance, they could save thousands of dollars every year.

However, if drivers no longer need insurance because the company is responsible for covering claims, would the robot then make safety decisions in the best interest of the company?

In closing, the jury is still out on self-driving cars. However, there are certainly important discussions to be had and good questions to consider.

Most importantly, what is your opinion on all of this? Would you rather make the decisions behind the wheel yourself, or leave them up to a robot? We’d love to hear your thoughts; we have plenty of questions of our own and are eager to hear what others have to say.
