Autonomous vehicles represent the first large-scale social manifestation of Artificial Intelligence for the general public. While the media may focus on Terminator scenarios steeped in dystopian science fiction, the vehicles we’ll be riding in regularly within the next seven to ten years offer a genuine opportunity for individuals and society to address the larger ethical issues around these intelligent technologies today.
While it’s easy to get caught up in the classic ethical Tunnel problem regarding self-driving cars, it’s critical to examine larger issues regarding personal data access and how all the vehicles and systems we’ll be interacting with in the future align with our values on an ongoing basis.
To address these and similar issues, The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems (AI/AS) recently finalized the first draft of Ethically Aligned Design, a code of ethics for the algorithmic era created by over one hundred AI/ethics thought leaders who are members of The IEEE Global Initiative. The document contains over eighty pragmatic Issues and Candidate Recommendations that technologists can use in their work today to create a positive future.
Autonomous vehicles will interact with multiple sets of stakeholders once they’re in widespread use. These include manufacturers, technicians, and a variety of end users. This will be especially true of vehicles that are shared and need to adapt to various users’ values. These adaptations may be as simple as recognizing how fast a vehicle should operate so as not to make a sensitive rider sick, or allowing for various privacy settings based on a rider’s preference for how their data may be shared with a manufacturer or its partners. How should manufacturers creating autonomous vehicles face the ethical challenges associated with this kind of scenario?
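As a rough illustration of what such per-rider adaptation could look like in practice, consider a minimal sketch of a rider profile a shared vehicle might store. All names and fields here are hypothetical, not drawn from any actual manufacturer's system:

```python
from dataclasses import dataclass

@dataclass
class RiderProfile:
    """Hypothetical per-rider settings a shared vehicle might store."""
    max_comfort_speed_kmh: int = 120   # cap to avoid motion sickness
    share_trip_data: bool = False      # consent to share data with the manufacturer
    share_with_partners: bool = False  # consent to share data with third parties

def effective_speed_limit(profiles, legal_limit_kmh):
    """Honor the most sensitive rider's comfort cap, never exceeding the legal limit."""
    caps = [p.max_comfort_speed_kmh for p in profiles] + [legal_limit_kmh]
    return min(caps)

# Two riders share a vehicle; one is sensitive to speed:
riders = [RiderProfile(max_comfort_speed_kmh=90), RiderProfile()]
print(effective_speed_limit(riders, legal_limit_kmh=100))  # → 90
```

The design choice worth noting is that conflicting preferences are resolved by taking the most protective setting, one simple answer to the question of whose values a shared vehicle should honor.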
Moral Overload as an Autonomous Issue
As an example, here’s an excerpt from Ethically Aligned Design provided by the Embedding Values Into Autonomous Intelligent Systems (AIS) Committee on how to deal with these situations:
Moral overload – AIS are usually subject to a multiplicity of norms and values that may conflict with each other.
An autonomous system is often built with many constraints and goals in mind. These include legal requirements, monetary interests, and also social and moral values. Which constraints should designers prioritize? If they decide to prioritize social and moral norms of end users (and other stakeholders), how would they do that?
Our recommended best practice is to prioritize the values that reflect the shared set of values of the larger stakeholder groups. For example, a self-driving vehicle’s prioritization of one factor over another in its decision making will need to reflect the priority order of values of its target user population, even if this order is in conflict with that of an individual designer, manufacturer, or client. For example, the Common Good Principle could be used as a guideline to resolve differences in the priority order of different stakeholder groups.
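The excerpt above calls for reflecting the priority order of values of the target user population rather than that of any individual. One simple way to derive such a shared order from individual stakeholder rankings is a Borda-style count; this is an illustrative sketch, not a method the document itself prescribes:

```python
from collections import defaultdict

def shared_priority_order(rankings):
    """Combine individual value rankings into one shared order using a
    Borda-style count: a value ranked first by a stakeholder earns the
    most points, last the fewest. Ties break alphabetically."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, value in enumerate(ranking):
            scores[value] += n - position
    return sorted(scores, key=lambda v: (-scores[v], v))

# Three stakeholders rank the same values differently:
stakeholders = [
    ["safety", "privacy", "speed"],
    ["safety", "speed", "privacy"],
    ["privacy", "safety", "speed"],
]
print(shared_priority_order(stakeholders))  # → ['safety', 'privacy', 'speed']
```

Note that the aggregate order need not match any single stakeholder's ranking, which is exactly the situation the excerpt describes when a population's priorities conflict with an individual designer's.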
We also recommend that the priority order of values considered at the design stage of autonomous systems have a clear and explicit rationale. Having an explicitly stated rationale for value decisions, especially when these values are in conflict with one another, not only encourages the designers to reflect on the values being implemented in the system, but also provides a grounding and a point of reference for a third party to understand the thought process of the designer(s). The Common Good Principle mentioned above can help formulate such rationale.
We also acknowledge that, depending on the autonomous system in question, the priority order of values can dynamically change from one context of use to the next, or even within the same system over time. Approaches such as interactive machine learning (IML), or direct questioning and modeling of user responses can be employed to incorporate user input into the system. These techniques could be used to capture changing user values.
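The direct-questioning approach mentioned above can be sketched as a simple preference update: the system asks the user a pairwise question ("privacy or speed?") and nudges its value weights toward the answer. The function and learning rate below are hypothetical, intended only to show the shape of the idea:

```python
def update_value_weights(weights, preferred, other, learning_rate=0.1):
    """Nudge the weight of the value the user preferred in a pairwise
    question, then renormalize so the weights still sum to 1."""
    weights = dict(weights)
    shift = learning_rate * weights[other]
    weights[preferred] += shift
    weights[other] -= shift
    total = sum(weights.values())
    return {v: w / total for v, w in weights.items()}

w = {"privacy": 0.5, "speed": 0.5}
# User answers "privacy" when asked "privacy vs. speed?":
w = update_value_weights(w, preferred="privacy", other="speed")
print(w)  # → {'privacy': 0.55, 'speed': 0.45}
```

Because each answer only nudges the weights, the priority order can drift over time as the user's responses change, matching the dynamic, context-dependent ordering the excerpt describes.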
The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Ethically Aligned Design: A Vision For Prioritizing Wellbeing With Artificial Intelligence And Autonomous Systems, Version 1. IEEE, 2016. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html.
Setting the Human Standard for AI/AS Ethics
Along with creating and evolving Ethically Aligned Design, members of The IEEE Global Initiative are tasked with making recommendations for potential Standards Projects based on their work. Currently there are three Working Groups focused on these areas that all affect aspects of autonomous vehicles.
- IEEE P7000: Model Process for Addressing Ethical Concerns During System Design

Here’s a description of this work:
Engineers, technologists and other project stakeholders need a methodology for identifying, analyzing and reconciling ethical concerns of end users at the beginning of systems and software life cycles. The purpose of this standard is to enable the pragmatic application of this type of Value-Based System Design methodology which demonstrates that conceptual analysis of values and an extensive feasibility analysis can help to refine ethical system requirements in systems and software life cycles.
- IEEE P7001: Transparency of Autonomous Systems
Here’s a description of this work, which has particular relevance to self-driving vehicles:
A key concern over autonomous systems (AS) is that their operation must be transparent to a wide range of stakeholders, for different reasons. (i) For users, transparency is important because it builds trust in the system, by providing a simple way for the user to understand what the system is doing and why. If we take a care robot as an example, transparency means the user can quickly understand what the robot might do in different circumstances; if the robot does anything unexpected, the user should be able to ask the robot ‘why did you just do that?’. (ii) For validation and certification of an AS, transparency is important because it exposes the system’s processes for scrutiny. (iii) If accidents occur, the AS will need to be transparent to an accident investigator; the internal processes that led to the accident need to be traceable. (iv) Following an accident, lawyers or other expert witnesses, who may be required to give evidence, require transparency to inform their evidence. And (v) for disruptive technologies, such as driverless cars, a certain level of transparency to wider society is needed in order to build public confidence in the technology.
Finally, IEEE P7002, focused on creating a Data Privacy Process, also deeply relates to the issues of how autonomous vehicles may use rider data, especially considering how many new models can track facial or biometric data.
Our Machines, Ourselves
When it comes to recognizing and provably aligning with end users’ values while building autonomous cars or other autonomous or intelligent technologies, the critical task is to incorporate these ethical considerations into systems and devices today, so that we can answer the question:
How will machines know what we value, if we don’t know ourselves?
It’s time to identify and imbue the values that will increase human wellbeing in the algorithmic era.
Contributing Author: Dr.-Ing. Konstantinos Karachalios, Managing Director, IEEE Standards Association
Konstantinos Karachalios is Managing Director of the IEEE Standards Association and a member of IEEE’s Management Council. He is also a Director at Large for ANSI. He holds a PhD in Nuclear Reactor Safety and has extensive experience with intellectual property matters and governance issues in the global knowledge economy. He considers the launch of IEEE’s Internet Initiative and of the technology ethics programs described in this article to be concrete implementations of IEEE’s tagline, “advancing technology for humanity”.
I am glad to hear of the new and enhanced advancements in technology of today. It is impressive what can be done to make technology work with us in order to make our lives easier; however, there are a couple of things that concern me with the AI system.
1) If the AI system can continuously learn from our actions, is it a good idea to introduce it to society yet? My thought is this: if the machine is built to adjust to our ethical thinking and our responses, is there a chance of "confusing" the machine? The problem is that society today has yet to settle on, or even agree upon, what is ethical and unethical in dealing with others. If our views on the matter keep changing, will it affect the machine's way of interacting?
2) If the end user is able to ask the machine questions when it does something unusual, would there be any potential physical risk to the human? Could they ask questions in time to prevent bodily harm to themselves?
With the increased amount of hacking and malfunctioning of today's machinery and internet, there may be real cause for concern about safety.
Exactly what we need to think about and predicate our solutions on: “ethics”!