
Should your self-driving car kill you to save a school bus full of kids?

[Image: Volvo autonomous car. Credit: Volvo]
It’s the near future and you’re reading this on your way to work in your self-driving car. The human driver of the car in front of yours slams on the brakes. Your car reacts at nearly the speed of light, so it has time to realize that the stopping distance is too short and to see that the lane next to you is empty.

A quick swerve barely interrupts your morning browse of the headlines. The system works.

Now it’s ten years later. Human-driven cars have been banned from the major commuter routes because they’re unsafe at any speed. Wouldn’t you know it, exactly the same situation comes up. This time, though, your car accelerates and slams itself into a nearby abutment, knowing full well that the safety equipment isn’t going to save you.

Your car murdered you. As it should have.


In this second scenario, not only are all the cars on the highway autonomous, they are also networked. The cars know one another’s states and plans. They can – and should – be programmed to act in such a way that the overall outcome is the best possible: more humans saved, fewer injured. It’s just like a simulation in which a computer is given some distressing multi-car situation and has to figure out what combined set of actions would be best. But now, it’s real.
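To make that concrete, here is a minimal sketch in Python of the kind of joint computation described above. Everything in it is invented for illustration: the cars, their candidate maneuvers, and the toy harm model are stand-ins for the physics and injury prediction a real system would need.

```python
# A toy sketch (not any real vehicle stack) of centralized planning for
# networked cars: search the joint action space and pick the combination
# that minimizes predicted harm.
from itertools import product

# Hypothetical data: each car's candidate maneuvers.
maneuvers = {
    "car_a": ["brake", "swerve_left"],
    "car_b": ["brake", "accelerate"],
    "bus":   ["brake"],
}

def predicted_casualties(joint_action: dict) -> float:
    """Stand-in for a physics/injury model; returns expected casualties."""
    # Toy rule: two vehicles braking into each other is worst; a swerve
    # helps overall, but the swerving car's occupant bears the risk.
    if joint_action["car_a"] == "brake" and joint_action["car_b"] == "brake":
        return 5.0
    if "swerve_left" in joint_action.values():
        return 1.0
    return 2.0

def best_joint_plan(options: dict):
    """Exhaustively score every joint plan and return the least harmful."""
    cars = list(options)
    best_plan, best_cost = None, float("inf")
    for combo in product(*(options[c] for c in cars)):
        plan = dict(zip(cars, combo))
        cost = predicted_casualties(plan)
        if cost < best_cost:
            best_plan, best_cost = plan, cost
    return best_plan, best_cost

plan, cost = best_joint_plan(maneuvers)
print(plan, cost)
```

The exhaustive search is only workable for a toy example, but the moral logic is the point: once the objective is “fewest casualties overall,” the plan that scores best may be the one that sacrifices a particular car’s occupant.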

Unfortunately for you, the networked cars figured out that to save the busload of children, you had to be sacrificed.

The two scenarios represent programs embodying different moral philosophies, a topic scientists and philosophers are now beginning to notice: The MIT Technology Review cites a study about whether people are OK with their cars making such decisions. (Result: yes, so long as the respondents are not the ones sacrificed.) This summer a workshop at Stanford considered some of these questions, as did an Oxford University Rhodes Scholar, Ameen Barghi. In fact, I posed some of these questions a year ago. But this process has just begun.

Meanwhile, the problems get complex quickly.

In the first near future scenario, each self-driving car is designed to maximize the safety of its occupants. That’s all the cars can do because they don’t know what any other car is going to do.

[Image: 2010 autonomous Audi TTS at Pikes Peak. Credit: Audi]

The engineers who wrote the first scenario’s car program thought of it as a set of accident-avoidance routines. But it embodies a moral imperative: prioritize preserving the life of this car’s passengers. That’s really all that the designers of the first generation of self-driving cars can do, even though focusing only on one’s own welfare without considering the effect on others is what we would normally call immoral.

But once cars are networked, it would be immoral and irresponsible to continue to take self-preservation as the highest value. If a human acted that way, we might well sympathize, explaining it as a result of what we think of as genetic wiring. But we also admire those who put themselves at risk for the sake of others, whether they’re medical personnel flocking to Ebola sites, teachers who step in front of a gunman entering a classroom, or soldiers who throw themselves on hand grenades. We recognize their ultimate sacrifice while wondering if we would manage to do the same.


Self-driving cars will have two moral advantages over us: when networked they can see more of a situation than any individual human can, and they can be hard-wired to steel their nerves when it comes time to make the ultimate sacrifice…of their passengers.

Networked self-driving cars can in these ways overcome weaknesses in the moral decisions made by human drivers. But this will require their human programmers to make moral decisions based on values about which humans will not, and perhaps cannot, agree.

For example, perhaps the networked results show that either of two cars could be sacrificed with equal overall results. One has a twenty-five-year-old mother in it. The other has a seventy-year-old childless man in it. Do we program our cars to always prefer the life of someone young? Of a parent? Do we give extra weight to the life of a medical worker beginning a journey to an Ebola-stricken area, or a renowned violinist, or a promising scientist, or a beloved children’s author? Or should we simply say that all lives are of equal worth? That may well be the most moral decision, but it is not the one we make when deciding which patients will get the next available organ for transplantation. In fact, should we program our cars so that if they have to kill someone, they should do it in the way least likely to damage their transplantable organs? Do we prefer to sacrifice the person who, by speeding or by failing to get her brakes inspected, caused the accident?
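To see how directly those judgments would live in the software, consider a hypothetical fragment (every field and weight below is invented for illustration): the commented-out lines are precisely the contested preferences above, and “all lives are of equal worth” is the special case where the function always returns 1.0.

```python
# Hypothetical occupant records for the two cars in the tie described above.
occupants = [
    {"car": "car_a", "age": 25, "is_parent": True},
    {"car": "car_b", "age": 70, "is_parent": False},
]

def moral_weight(person: dict) -> float:
    """Tie-breaking weight for a life; returning 1.0 for everyone
    encodes 'all lives are of equal worth.'"""
    weight = 1.0
    # Each commented-out line is one of the contested preferences:
    # weight *= 1.5 if person["age"] < 30 else 1.0   # prefer the young
    # weight *= 1.2 if person["is_parent"] else 1.0  # prefer parents
    return weight

for person in occupants:
    print(person["car"], moral_weight(person))
```

Whoever decides which of those lines to uncomment is, in effect, legislating morality.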

[Image: Volvo autonomous driving. Credit: Volvo]

These same questions arise in every situation in which we require our machines to make decisions. An autonomous killing drone, especially when networked with other drones, can know more about a complex situation than human pilots can. But how are we going to decide what counts as an “acceptable risk” of civilian casualties? It’s entirely plausible that the moral answer is “zero,” even though that is not the answer we give when it’s a human finger on the bomb-release trigger. And that, of course, ignores the inevitability of programming failures, a risk that has been highlighted recently by Stephen Hawking, Elon Musk, and Bill Gates, and rather more vividly by the Terminator and RoboCop series.

The behavior of programmable machines is an extension of human desires, will, and assumptions. So of course the programs themselves express moral preferences. As more of our lives are wrapped into autonomous machines, we’ll have to take the moral dimension of our programmed devices more seriously. These decisions are too important to be left to the commercial entities that are doing the programming. It’s just not clear who should be settling these difficult questions of morality.

David Weinberger writes about the effect of technology on ideas. He is the author of Small Pieces Loosely Joined and Everything Is Miscellaneous, and is the co-author of The Cluetrain Manifesto. His most recent book, Too Big to Know, is about the Internet’s effect on how and what we know.

Dr. Weinberger is a senior researcher at the Berkman Center. He has been a philosophy professor, journalist, strategic marketing consultant to high tech companies, Internet entrepreneur, advisor to several presidential campaigns, and a Franklin Fellow at the US State Department. He was for four years the co-director of the Harvard Library Innovation Lab, focusing on the future of libraries.
