Capurro, Rafael
 Incorporated contributions
Capurro (30/10/09)
 Usage domain
Information ethics
 German Roboethik  

As Capurro and Nagenborg (2009) state, ethics and robotics are two academic disciplines, one dealing with the moral norms and values underlying, implicitly or explicitly, human behaviour, and the other aiming at the production of artificial agents, mostly as physical devices, with some degree of autonomy based on rules and programmes set up by their creators. Since the first robots arrived on the stage in the play by Karel Čapek (1920), visions of a world inhabited by humans and robots have given rise to countless utopian and dystopian stories, songs, movies, and video games.


Human-robot interaction raises serious ethical questions right now that are theoretically less ambitious but practically more important than the possibility of creating moral machines that would be more than machines with an ethical code. But even though the invention and development of robotic technologies take place at a global level, involving diverse cultures and therefore also diverse systems of values, beliefs and expectations, intercultural roboethics is still in its infancy, no less than intercultural robotics (®Intercultural Information Ethics).

Roughly speaking, the following ethical theories, moral values and principles predominate in Western and Eastern traditions, raising different questions with regard to human-robot interaction:

  • Europe: Deontology (Autonomy, Human Dignity, Privacy, Anthropocentrism): Scepticism with regard to robots
  • USA (and Anglo-Saxon tradition): Utilitarian ethics: will robots make “us” happier?
  • Eastern Tradition (Buddhism): Robots as one more partner in the global interaction of things

The difference between morality and ethics should be understood as follows:

  • Ethics as critical reflection (or problematization) of morality
  • Ethics is the science of morals as robotics is the science of robots

Different ontic or concrete historical moral traditions are, for instance:

  • in Japan: /Seken/ (traditional Japanese morality), /Shakai/ (imported Western morality) and /Ikai/ (old animistic tradition)
  • in the “Far West”: Ethics of the Good (Plato, Aristotle), Christian Ethics, Utilitarian Ethics, Deontological Ethics (Kant)

The ontological dimension, Being or (Buddhist) Nothingness, can be conceived as the space of open possibilities that allows us to criticize concrete or ‘ontic’ moralities. The human relation to this ontological dimension is always based on basic moods (such as sadness, happiness, astonishment), through which the uniqueness of the world and of human existence is experienced differently in different cultures. A future intercultural roboethics should reflect on the ontic as well as the ontological dimensions of creating and using robots in different cultural contexts and with regard to different goals. Trends, contributions and a bibliography focused on this crossroads can be found in the aforementioned book edited by Capurro and Nagenborg.



  • ČAPEK, Karel (1920). R.U.R. (Rossumovi univerzální roboti). [English translation: R.U.R. (Rossum's Universal Robots), New York: Pocket Books, 1970.]
  • CAPURRO, Rafael and NAGENBORG, Michael (Eds.) (2009), Introduction. In: Ethics and Robotics. Berlin: Akademische Verlagsgesellschaft.
Incorporated entries
Rafael Capurro (30/10/2009)
[It corresponds to the first version of the article, which is now shown in the left column.]

Entries under work
Sienna Archer (December 2019, contribution elaborated within the Seminar "A Journey through Philosophy and Information" facilitated by J.M.Díaz at the Hochschule München)

Ethical Issues Broached in the Topic of Self-Driving Cars


This entry outlines the ethical issues encountered in the topic of self-driving cars and other automated vehicles (AVs). The topics covered are job displacement, responsibility, algorithmic bias and the trolley problem. The purpose of addressing these problems is not to propose solutions but to explain some of the possible decisions and, through examples, to show the pros and cons and the ethical concerns that accompany each possible outcome.


With the prospect of fully autonomous vehicles becoming a reality, the associated ethical debates have to be addressed. In the following report the ethical concerns of job displacement, responsibility for the actions of the AV, and the problems associated with algorithmic bias and the trolley problem are addressed. There are other ethical problems associated with this topic which will not be addressed in this report but which still need to be considered: what happens if the AV is publicly owned? Is there an existential threat to the insurance industry? How can hacking be prevented and personal information safeguarded? No one knows what is in store for the future of AVs, but it is clear there is much debate to be had about the ethical issues that arise when dealing with them.


Job Displacement

With driving being such a large part of modern society, the introduction of AVs will likely affect many areas, including people's jobs, as technological advances have long brought substantial change to the workplace (Pettigrew, 2018). As more companies embrace automation, human jobs are likely to be displaced by AVs and artificial intelligence (AI). AVs are potential replacements for humans as they can perform certain tasks faster, cheaper and more efficiently, constituting a major disruptive technological change that is likely to have enormous implications for many workers around the world as jobs involving driving become progressively redundant (Pettigrew, Fritschi and Norman, 2018). This raises the ethical question of whether introducing AVs into the workplace is justified when doing so could result in the loss of many jobs around the world.

There have been many revolutions in modern history that have created a complete change in society. The scientific revolution in the 18th century, the industrial revolution in the 19th and the current internet revolution have all sparked changes in what jobs are available. The internet revolution, for example, has recently allowed retailers such as eBay and Amazon to edge out brick-and-mortar retailers, who have much higher operating expenses, one of the largest of which is labour. In this case, however, it has led to a change in the available job market rather than a major loss in job availability. Since the start of these revolutions, a recurrent fear has been that automation and technological advance will produce mass unemployment, even though that prediction has so far proven incorrect (González-Fierro, 2019).

The three primary ways in which AVs are being introduced into society are: public transport becoming autonomous in the form of self-driving trains and buses; ride-share companies developing autonomous fleets; and the purchase of personal vehicles with autonomous features (Eby et al., 2016). There are estimates that AVs will be widely used by around 2040 (Litman, 2018). There have been projections that, globally, there will be wide-scale redundancies among drivers in many industries, including the trucking, taxi, ride-share, courier and food-delivery industries. Workers in related industries such as warehousing and manufacturing are also projected to be affected (Hanna, 2017; Snyder, 2016). The Department of Commerce estimates that one in nine workers are currently in occupations that will be affected by the introduction of AVs (Beede, Powers and Ingram, 2017). Under these circumstances, the ethical principles concerned are those of government and company responsibility: should they be required to provide workers with retraining, or should they implement processes for relocating jobs? The question remains whether they will be able to cope with these changes.

With the rise of AVs, many advantages and disadvantages are anticipated, and there are different estimates for when autonomous vehicles will be in widespread use. There is limited evidence to suggest that job loss is an actual concern: in one survey it was ranked behind other perceived problems relating to safety, liability, security and privacy (König and Neumayr, 2017). However, it is still reported as a concern when raised; in a survey published in the International Journal of Environmental Research and Public Health (Pettigrew, Fritschi and Norman, 2018), 60% of respondents reported being at least moderately concerned about job loss, and 71% were at least moderately concerned about loss of driving skills in relation to AVs. It is also to be noted that around half believed that the introduction of AVs would result in increased jobs in technical areas. Overall, it is unlikely that job displacement will be a major ethical concern in relation to robot ethics, but it is still something to consider.




Responsibility

With the development of AVs, the question of who is responsible for the actions taken by the AV arises, especially with respect to ethical concerns. This is most notably a concern when the AV has to make decisions about whom to save or protect in the event of a collision or unforeseen obstacle. Even today, cars can perform complex tasks related to braking and steering, often without the awareness of the driver (Heaps, 2009). This raises the question of who exactly has the power to decide who lives and who dies, and in what situation. Four of the options are: the manufacturer, the individual who owns the AV, the insurer with a minimal-damage approach, or laws and policy.


The idea of making manufacturers responsible for these ethical decisions makes sense, as it is they who develop the software that directly controls the actions of the AV when an ethical decision needs to be made. This would follow traditional product-liability notions, as the manufacturer is currently considered "ultimately responsible for the final product" (Marchant and Lindor, 2012). This means that if a product design defect leads to some sort of harm, the manufacturer is liable for that harm. However, this raises the question of whether a decision a human driver could equally have made counts as a "defect", putting the resulting product liability into question.

Another view to consider is that the individual owner/user is responsible, an already well-established concept in driving accidents (Hevelke and Nida-Rümelin, 2014). However, in the case of AVs, is it ethical to presume the 'driver' is responsible when they have no role in the decision-making process? This leads to the idea of the owner being able to program the AV to act in certain ways in certain situations: in an Open Roboethics Initiative survey, roughly 64% of people said they would prefer the car to protect their own lives and those of their passengers before any pedestrian (Open Roboethics Initiative, 2014). From a statistical standpoint, the average driver is likely to have a collision roughly once every 17.9 years (Toups, 2011), so is it ethical to assume that, just by operating an AV, the driver is agreeing to be liable at some point?

A responsible party which would aim to minimise the economic impact of an accident would be the insurance company. A self-driving car under an insurer's influence would always choose the option causing the least amount of damage. Two main issues arise with this solution, however: why should the owner of a vehicle be subject to the ethical values of some other entity when there exists no morally "right" answer, and is the most economically viable option always the best one? The most economically viable option could create improper incentives (Tuffley, 2014), since an AV that aims to minimise overall damage will target people and objects that are less likely to suffer costly injuries.

The legislature might be in the best position to meet the legal and ethical demands of self-driving cars, as legislation would standardise behaviour, and more people would be prompted to adopt AVs whose liability might otherwise have been too great (Morris, 2006). This would increase the overall safety of roads: according to the Eno Center for Transportation, as many as 4.2 million accidents could be avoided if 90% of vehicles in the U.S. were self-driving (TIME, 2017).

Algorithmic Bias and the Trolley Problem

When developing AI algorithms, every effort should be made to avoid bias and discrimination and to choose the most ethical option. During the operation of an AV some accidents are unavoidable, and AVs will therefore need to engage in crash-optimization: choosing the course of action that will likely lead to the least amount of harm. This is not a simple decision, and it involves many ethical questions. Outlined below are a few such situations and the ensuing ethical dilemmas which could arise. Often brought up alongside AVs and ethical dilemmas is the "trolley problem"; however, the decisions outlined here may be considered even more difficult, because they are premeditated.
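To make the premeditation point concrete, crash-optimization can be caricatured as a rule that picks the manoeuvre with the lowest pre-assigned harm score. The following sketch is purely illustrative, not any real AV planner: the manoeuvre names and harm scores are hypothetical, and the point of the section is precisely that whoever assigns those scores has already made the contested ethical decision in advance.

```python
# Illustrative sketch of a naive "least harm" decision rule.
# The harm scores are hypothetical inputs; assigning them IS the
# premeditated ethical choice discussed in the text.
from dataclasses import dataclass


@dataclass
class Manoeuvre:
    name: str
    estimated_harm: float  # hypothetical aggregate harm score


def least_harm(options):
    # "Crash-optimization": pick whichever option was scored lowest.
    return min(options, key=lambda m: m.estimated_harm)


choices = [
    Manoeuvre("swerve_left", 0.8),
    Manoeuvre("swerve_right", 0.6),
    Manoeuvre("brake_straight", 0.9),
]
print(least_harm(choices).name)  # -> swerve_right
```

The code itself is trivial; the ethical weight lies entirely in the `estimated_harm` numbers, which is why the scenarios below ask who should be allowed to set them.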

One scenario, brought up by Patrick Lin, is the case where the autonomous car encounters a terrible choice: swerve left and kill an eight-year-old child, or swerve right and kill an 80-year-old grandmother (Lin, 2019). If either way someone is going to die, what would be the ethically correct decision? A human driver would make an instinctive, unpremeditated decision, but with AVs the response has to be premeditated, which leads to the question: what do you program the behaviour to be if the car ever encounters such a situation? There are several views on this question, all of which give justifiable answers, which is why it is considered a moral dilemma.

The term "the lesser of two evils" can be brought up in this context. To some people, swerving towards the grandmother could be seen as the better option, as the child has her whole life ahead of her while the grandmother has presumably already had a full life. However, according to relevant professional codes of ethics, either choice is morally incorrect. For example, the Institute of Electrical and Electronics Engineers (IEEE) requires its members "to treat fairly all persons and to not engage in acts of discrimination based on race, religion, gender, disability, age, national origin, sexual orientation, gender identity, or gender expression" (IEEE, 2019). Despite the disparity of life experience between the old and the young, that does not make age an appropriate basis for different treatment, since this would seem to be the same as any other type of discrimination. In many countries it would currently be illegal for companies to stipulate this kind of bias: in Germany, for example, the right to life and human dignity is set forth in the first two articles of the very first chapter of the nation's constitution (Basic Law for the Federal Republic of Germany, 2019), and in the United States similar conclusions follow from the Fourteenth Amendment to its constitution.

Another solution is to make no decision at all in such a case, allowing both victims to be struck; but this allows two people to die when at least one death could have been avoided. The situation can also be modified to make the "no decision at all" stance seem much worse: what if it were one hundred deaths compared to one? Another option is to make a choice that is arbitrary, unpredictable and without prejudice to either person (Lin et al., 2019). However, this too can be considered morally objectionable, in that a choice between lives is made without any deliberation at all, when there are potentially reasons to prefer one over the other. This dilemma is not easily solvable, which is why ethical deliberation is needed when developing AVs.

Another case can be put forward: always choose to protect the driver. In the case of the young child versus the grandmother, the better option for the driver would then be to hit the lighter object, the child. In this specific case the choice between the child and the grandmother seems irrelevant to the driver, but consider instead a choice between two vehicles: should the AV choose the heavier vehicle, say a truck, protecting the other driver, or the lighter vehicle, say a motorcycle, protecting the occupant of the car (Maurer et al., n.d.)? This also leads to the question of whether to hit a car with a higher safety rating, minimising the overall injury, or, in the case of having to hit a motorcyclist, one with or without a helmet. In both cases the choice probably does not matter much to the occupant of the AV, who is crashing either way, but it does matter to those being hit. In the motorcyclist example, the rider without a helmet would probably not survive, so should the car be programmed to hit the rider wearing the helmet? That essentially penalises motorcyclists who wear helmets, which in turn could encourage some motorcyclists not to wear them. The argument could then be made, if this is the case, to hit the motorcyclist without a helmet instead, as they acted recklessly and are therefore more deserving of harm (Maurer et al., n.d.).

Another situation which brings about ethical considerations is one in which an AV is driving along a narrow road alongside a cliff and a school bus appears around the corner, partially in its lane (Maurer et al., n.d.). The AV calculates that there is no possible outcome in which harm is avoided entirely. One of the standard ethical viewpoints on this situation is to optimise results, that is, maximise survivors, no matter who they are, and minimise harm. The two main choices available to the AV are to crash into the bus, endangering the lives of those on board, or to drive off the cliff, saving everyone on the bus. On the minimal-damage view, the solution would be to drive off the cliff and sacrifice the driver, since it is better that only one person should die rather than several. However, as usual, this leads to another ethical dilemma: the question of who makes the choice of sacrifice. If the car were being driven manually and the driver had the chance to weigh up the decision, they might still choose self-sacrifice; but it is one thing to make that choice willingly and quite another for a machine to make it without the driver's consent, or even foreknowledge that self-sacrifice was a possibility.

Situations such as these are often related to "the trolley problem" (Cathcart, 2013), a case in which a choice has to be made between letting five people die or pulling a lever and killing one person. In relation to AVs, consider a situation where a car is being manually driven, intentionally or not, towards five pedestrians: should the crash-avoidance system take over and swerve, even if the only alternative is to kill one other pedestrian? On the maximise-survivors stance the answer would be yes, swerve and kill that one person; however, it could also be argued that there is a moral distinction between killing and letting die. If the AV does not take control, is it responsible for letting those five people die? In making the decision to take control, it is choosing to kill one person. The difference between this and the trolley problem is that here everything is premeditated, whereas the original problem relies on instinctive reactions. It can be argued either way: deaths should be minimised, yet there is a moral difference between letting die and killing, especially in the eyes of the law (Maurer et al., n.d.).



Conclusion

Even though some of these scenarios may seem like a "one in a million shot", it is estimated that over one billion passenger cars travel the streets and roads of the world today (Worldometers, 2020). With these rudimentary numbers, a one-in-a-million scenario could still happen roughly a thousand times. Even if there is only the tiniest chance that an AV could encounter a problem requiring an ethical decision, that decision needs to be considered beforehand, since the AV cannot make its own. This means that the seemingly impossible decisions surrounding the dilemmas mentioned, job loss, accountability and algorithmic bias among others, will need to be made if AVs are going to be not something of the future but of the present.


Beede, D., Powers, R. and Ingram, C. (2017). The Employment Impact of Autonomous Vehicles. SSRN Electronic Journal.

Worldometers (2020). Cars Produced in the World. Worldometers.info. [Accessed 17 Jan. 2020].

Cathcart, T.: The Trolley Problem, or Would You Throw the Fat Guy Off the Bridge? Workman Publishing Company, New York (2013)

Tuffley, D. (2014). Self-Driving Cars Need Adjustable Ethics Set by Owners. The Conversation, 24 Aug. 2014.

Toups, D. (2011). How Many Times Will You Crash Your Car? Forbes, 27 July 2011.

Eby, D., Molnar, L., Zhang, L., St. Louis, R., Zanier, N., Kostyniuk, L. and Stanciu, S. (2016). Use, perceptions, and benefits of automotive technologies among aging drivers. Injury Epidemiology, 3(1).

Eric Morris, From Horse to Horsepower: The External Costs of Transportation in the 19th Century City (2006) (M.A. Thesis, UCLA)

Marchant, G.E. and Lindor, R.A. (2012). The Coming Collision Between Autonomous Vehicles and the Liability System. Santa Clara Law Review, 52, pp.1321, 1329.

Basic Law for the Federal Republic of Germany (2019). [online] Available at: [Accessed 29 Dec. 2019].

González-Fierro, M. (2019). 10 Ethical Issues of Artificial Intelligence and Robotics. [online] Available at: [Accessed 27 Dec. 2019].

Hanna, M.J. Policy memorandum: The case for adopting autonomous vehicles technology and supporting research in artificial intelligence. J. Sci. Policy Gov. 2017, 11, 1.

Heaps, R. (2009). 8 great new advances in auto technology. [online] Bankrate. Available at: [Accessed 26 Dec. 2019].

Hevelke, A. and Nida-Rümelin, J. (2014). Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Science and Engineering Ethics, 21(3), pp.619-630.

IEEE (2019). IEEE Code of Ethics. [online] Available at: [Accessed 27 Dec. 2019].

König, M. and Neumayr, L. (2017). Users’ resistance towards radical innovations: The case of the self-driving car. Transportation Research Part F: Traffic Psychology and Behaviour, 44, pp.42-52.

Lin, P. (2019). The Ethics of Autonomous Cars. [online] The Atlantic. Available at: [Accessed 27 Dec. 2019].

Lin, P., Lin, P., So, A., Staff, W., Grey, J., Goode, L., Calore, M., Staff, W., Ceres, P. and Strampe, L. (2019). The Robot Car of Tomorrow May Just Be Programmed to Hit You. [online] WIRED. Available at: [Accessed 27 Dec. 2019].

Litman, T. (2013). Changing North American vehicle-travel price sensitivities: Implications for transport and energy policy. Transport Policy, 28, pp.2-10.

Maurer, M., Gerdes, J., Lenz, B. and Winner, H. (n.d.). Autonomous Driving. pp.69-86.

Open Roboethics Initiative (2014). If Death by Autonomous Car is Unavoidable, Who Should Die? Reader Poll Results. Robohub, 23 June 2014.

Pettigrew, S., Fritschi, L. and Norman, R. (2018). The Potential Implications of Autonomous Vehicles in and around the Workplace. International Journal of Environmental Research and Public Health, 15(9), p.1876.

Snyder, R. Implications of autonomous vehicles: A planner’s perspective. Inst. Transp. Eng. J. 2016, 86, 25

TIME (2017). Artificial Intelligence: The Future of Humankind. 1st ed. Time Inc. Books.