What Can the Trolley Problem Teach Self-Driving Car Engineers?

OK, tell me if you’ve heard this one before. A trolley, a diverging track, a fat man, a crowd, a broken brake. Let the trolley keep speeding along, and it will smash into the crowd, obliterating everyone in its way. Throw the switch, and the trolley careens into the fat man instead, KOing him, permanently, on impact.

That is, of course, the classic trolley problem, devised in 1967 by the philosopher Philippa Foot. Almost 50 years later, researchers in the Scalable Cooperation group at the Massachusetts Institute of Technology Media Lab revived and revised the moral quandary. It was 2016, so the trolley was now a self-driving car, and the trolley “switch” was the car’s programming, written by godlike engineers. MIT’s “Moral Machine” asked users to decide whether to, say, kill an elderly woman crossing the street or an elderly man, or five dogs, or five slightly tubby male pedestrians. Here, the decision is no longer a split-second one but something programmed into the car in advance: the sort of (theoretically) informed prejudgment that helps train all artificial intelligence.

Two years on, those researchers have collected a heck of a lot of data about people’s killing preferences: some 39.6 million judgment calls in 10 languages from millions of people in 233 different countries and territories, according to a paper published in Nature today. Encoded inside are different cultures’ various answers to the ethical knots of the trolley problem.

For example: participants from eastern countries like Japan, Taiwan, Saudi Arabia, and Indonesia were more likely to favor sparing the lawful, or those walking with a green light. Participants in western countries like the US, Canada, Norway, and Germany tended to prefer inaction, letting the car continue on its path. And participants in Latin American countries, like Nicaragua and Mexico, were more into the idea of sparing the fit, the young, and individuals of higher status. (You can play with a fun map version of the work here.)

Across the globe, some major trends do emerge. Moral Machine participants were more likely to say they would spare humans over animals, save more lives over fewer, and keep the young walking among us.
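
To make that concrete: purely as illustration, and not anything drawn from the paper’s methods or any real self-driving stack, here is a minimal Python sketch of what encoding those three global preferences as a fixed, lexicographic tradeoff could look like. Every name in it (Outcome, preference_key, the sample numbers) is hypothetical.

```python
# Illustrative only: rank hypothetical crash outcomes by the three
# global preferences the Moral Machine paper reports (spare humans
# over animals, spare more lives, spare the young). Not real AV code.
from dataclasses import dataclass

@dataclass
class Outcome:
    humans_spared: int     # humans who survive in this outcome
    animals_spared: int    # animals who survive
    avg_age_spared: float  # mean age of the humans spared

def preference_key(o: Outcome):
    # Lexicographic order: humans first, then total lives, then youth.
    # Negating the counts makes "more is better" sort toward min().
    return (-o.humans_spared,
            -(o.humans_spared + o.animals_spared),
            o.avg_age_spared)

outcomes = [
    Outcome(humans_spared=1, animals_spared=5, avg_age_spared=70.0),
    Outcome(humans_spared=2, animals_spared=0, avg_age_spared=30.0),
]
best = min(outcomes, key=preference_key)
print(best)  # the two-human outcome wins under these preferences
```

The point of the sketch is only that these are prejudgments: the ordering is baked in before any wheel ever turns, which is exactly what makes the survey data contentious.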

The point here, the researchers say, is to initiate a conversation about ethics in technology, and to guide those who will eventually make the big decisions about AV morality. As they see it, self-driving car crashes are inevitable, and so is programming them to make tradeoffs. “The main goal is to capture how the public reaction is going to be once those accidents happen,” says Edmond Awad, an MIT Media Lab postdoctoral associate who worked on the paper. “We think of this as a big forum, where experts can look and say, ‘This is how the public will react.’”

So what do the people actually building this technology think about the trolley problem? I’ve asked lots of AV developers this question over the years, and the response is generally: sigh.

“The bottom line is, from an engineering perspective, solving the trolley problem is not something that’s heavily focused on for two reasons,” says Karl Iagnemma, the president of Aptiv Automated Mobility and cofounder of the autonomous vehicle company nuTonomy. “First, because it’s not clear what the right solution is, or if a solution even exists. And second, because the incidence of events like this is vanishingly small, and driverless cars should make them even less likely without a human behind the wheel.”
