Ethical Impacts

The implementation of autonomous vehicles raises a number of ethical questions with potentially life-and-death stakes, one of the most controversial being the pre-programmed decision of who dies in an accident. Other ethical issues include who becomes liable for an accident, whether passengers should get input on who dies in an accident, and the potential for vehicles to be hacked.

One of the largest ethical issues with autonomous vehicles is the question of who gets killed in an accident. Currently, many autonomous vehicles, if forced to choose between killing their own passengers or someone else, will choose someone else. This ties in with Achille Mbembe’s idea of necropolitics, in which companies and government agencies are choosing not who lives, but rather who dies, in order to pursue their political agenda (Mbembe). Companies would rather kill those outside their vehicles because they want to protect the customers who paid for their product. In the end, it is all about money for the large corporations developing autonomous vehicles, even if they must make some unethical decisions along the way.

One way to bring more humanity into this decision of life and death is to allow the customer to choose between three settings that determine how the car will act in the case of an accident. Those settings are egoistic, altruistic, and impartial, which correspond to having the passengers survive, having others survive, and leaving the outcome to chance, respectively (Lau). While this choice can take a toll on one’s conscience, it arguably makes the outcome more ethical by letting a human make the decision rather than leaving it solely to an algorithm. It does, however, raise the question of responsibility.
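The three settings described by Lau amount to a simple decision policy. As a minimal sketch of the idea (the names `EthicsSetting` and `choose_protected_party` are hypothetical, not from any real vehicle software), the mapping could look like this:

```python
from enum import Enum
import random


class EthicsSetting(Enum):
    """Hypothetical owner-selected setting, per Lau's three options."""
    EGOISTIC = "egoistic"      # prioritize the vehicle's own passengers
    ALTRUISTIC = "altruistic"  # prioritize those outside the vehicle
    IMPARTIAL = "impartial"    # leave the outcome to chance


def choose_protected_party(setting: EthicsSetting,
                           rng: random.Random) -> str:
    """Return which party the car attempts to protect in an
    unavoidable collision, under the chosen setting."""
    if setting is EthicsSetting.EGOISTIC:
        return "passengers"
    if setting is EthicsSetting.ALTRUISTIC:
        return "others"
    # IMPARTIAL: the outcome is left to chance
    return rng.choice(["passengers", "others"])
```

The sketch makes the ethical trade-off explicit: the human owner, not the manufacturer's algorithm, fixes the policy in advance, while the impartial option delegates the choice to randomness.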

Who should be held liable in the case of an accident with self-driving cars? Should it be the “driver,” the manufacturer, or the person who wrote the algorithm? If the car is fully autonomous, it would seem unjust to hold any driver or passenger liable for an accident, especially if the sole passenger is a child who is not even fit to drive. The company, however, may also not be in a position where it should be held liable, especially in a situation that could never reasonably have been accounted for. It is also unlikely that a company would willingly sell a product for which it could be held accountable for potentially millions of dollars in damages. The same goes for the person who wrote the algorithm. This lack of liability from any party is dangerous, and the question needs to be resolved before these cars are fully released.

A final issue likely to arise with these new vehicles is their vulnerability to hacking. While almost every car today has some sort of electronic component, none has quite the capabilities of a fully autonomous car. The problem with these extra capabilities, though, is that they make the car more susceptible to being hacked, allowing the driving algorithm to be tampered with or data to be stolen (Hansson et al.). This can lead to a variety of problems, one of the most significant being the potential for terrorism. Cyberterrorism is one of the fastest-growing forms of terror attack, and this new vulnerability adds yet another way for cyberattacks to manifest in physical harm.