Programmed Permission: How Could an Android Provide Genuine Consent?
The field of artificial intelligence is racing forward, increasing the need to differentiate between robots created solely to complete a specific set of tasks and mechanical beings with complex social capabilities. Brave New Worlds: The Oxford Dictionary of Science Fiction defines an android as “an artificial being that resembles a human in form” (Prucher). While androids could be programmed to complete tasks, their ability to interact socially with intelligence and emotion marks them as something more than mere robots. As social beings, how might the human construct of consent apply to androids? The Routledge Encyclopedia of Philosophy describes consent as a moral, political, and legal concept that is “widely recognized as justifying or legitimating acts, arrangements, or expectations” (Simmons). As humanity continues to advance artificial intelligence, robotic programming should allow androids to conceptualize consent. In addition, humanity must grant androids the respect to make their own decisions, even when those choices conflict with human interests. For an android to provide genuine consent to an action, the android must have a degree of autonomy and the ability to refuse consent, and humanity must create legal protections against android abuse.
To ensure genuine consent, the Routledge Encyclopedia of Philosophy identifies five conditions that must be met. Three of these conditions require a full understanding of the arrangement by all parties, the intentional act of providing consent, and that consent be given voluntarily. In addition, the Routledge Encyclopedia specifies that consent can only be given by the “competent, which may exclude in various contexts apparent consent given by the insane, severely retarded, emotionally disturbed, immature, intoxicated and so on.” The final stipulation requires the agreement to be legally permissible. For example, consenting to arrangements to “become a slave or allow yourself to be killed are not enforceable,” and thus, even if consent is provided, the agreement is not binding (Simmons).
For a manufactured being, giving meaningful consent that is not simply written into its code is a complex issue. Many debates center on whether a robot has “free will” as the criterion for genuine consent, but this raises further philosophical questions about whether something created by humans for a specific set of tasks can ever truly have full autonomy. Considering the implications of sexual human-robot relationships, Professor of Philosophy Lily Frank offers this discussion of robotic autonomy:
Can it take in information about alternatives open to it and then evaluate those alternatives on the basis of certain values and priorities that it operates on the basis of? Can the robot take a stance, based on the information it processes and its evaluation of its options? If a robot is able to perform these agency-functions, we think it has enough by way of what can be considered as basic free will for it to make sense to regard the robot as giving consent. (Frank)
If the future holds a place for social humanoid artificial intelligence, androids must be given the capacity to process complex situations, analyze potential benefits and consequences, and fulfill their own desires. If these requirements are satisfied, with a complete understanding of the risks associated, an android could give meaningful consent, not just for android sex work but for all android-human interactions.
For any decision to be made fairly, more than one safe and viable option is necessary, including the opportunity to decline consent. Obtaining consent without providing the ability to safely refuse is coercion and cannot be accepted as genuine consent. This applies to how humans treat androids rather than how humans program them. In her article about informed refusal in medical experimentation, Princeton University Professor Ruha Benjamin comments, “Rather than simply acknowledge that ‘refusers’ are justified in their distrust of the medical and scientific establishment, a substantive approach to enacting justice requires a reorientation away from the purported traits and dispositions of ‘problem people,’ to paraphrase Du Bois (1903), towards the relative trustworthiness of institutions.” Although Benjamin is discussing a different field, her argument focuses on the dynamics of power, which can be applied to defining consent for androids. If we assume that an android’s consent is automatically guaranteed, then it is not true consent.
In Benjamin’s example, the medical researchers, rather than the individuals undergoing testing, control the resources and knowledge. Due to this unequal power dynamic, Benjamin argues that when an individual refuses consent, that decision should be respected without the repercussions of being labeled a “problem.” Instead, those who hold the power should be held accountable for ensuring that the decision is fair and free of coercion. While a non-cognitive robot could be forced to do the same task repeatedly, androids have the intellectual capacity for choice. Just as a human can quit a job or opt out of a test, androids should be given that same freedom.
The final piece of an android’s ability to provide genuine consent is having protective measures in place to prevent abuse. To illustrate this point, Tufts University researcher Vasanth Sarathy describes a scenario where “B verbally abuses A, a personal assistant robot, by shouting expletives.” The article then presents the question, “In such a role, we must ask whether it is okay for the robot to call out and sanction normatively harmful behavior. In [this situation] could the robot defend itself and protest A’s abuse? Should the robot do such a thing?” (Sarathy, p. 15). This dynamic must be fleshed out in the legal and business worlds before androids are introduced to our society. It could be argued that human wants and needs should always take precedence over the needs of androids, but this creates an owner-slave dynamic that would be deeply unethical and even harmful, especially if future androids have the social understanding and intellectual capability to consent to the same extent that humans do.
If sophisticated android entities are to become an integral part of human society, there must be protections put in place to ensure that people do not take advantage of robots for their own personal benefit. Even if a robot has the understanding needed to give or refuse consent, this is meaningless if people can easily trick robots into giving consent, or force a robot to do tasks regardless of consent, without substantial repercussions. Frank suggests that one way to prevent this would be to grant robots a level of legally binding rights and protections, much as the United States does for corporations (Frank). This would ensure that if a robot does fall victim to abuse from a human, there is an established course of action for handling the situation. However, pushing for protections before androids are ingrained into society is unlikely to produce legislation, as it would not seem as pressing or necessary as other political issues. Another solution would be to provide androids with a deep understanding of human body language as a preventative measure against abuse. If a robot understands how body language shapes spoken language, it could more accurately gather information about a situation, determine a human’s intentions, and pick out red flags that could lead to manipulation or abuse (Frank). Having these safeguards would not only protect robots from harm but would also improve robot-human communication, making consent even more meaningful.
If someone is standing over you with a scalpel, the only thing that changes the scenario from surgery to assault is informed, meaningful consent (Frank). If humanity is creating a form of intelligence, even artificially, that intelligence deserves the same basic respect that we give to all other humans. The framework of consent in the medical system that Benjamin explores provides a parallel for imagining how androids could have the agency to advocate for themselves. The only way to create a sustainable relationship is to make it beneficial to both parties, meaning that asking for and receiving consent from both humans and androids should be imperative in all interactions. For an android to give meaningful, genuine consent, that being should have enough autonomy and intelligence to freely weigh the risks and benefits of each decision, recognizing alternatives and reaching the conclusion that best supports the android’s pursuit of its wants and needs. From there, humanity must respect an android’s decision to provide or refuse consent and pass legal protections for androids against human abuse. Although the idea of humans and androids interacting socially seems far off, intelligent mechanical beings could hit the market as early as ten to fifteen years from now (Frank). Creating a safe and ethical relationship between humans and androids will take a fair amount of work. By starting these discussions now, we can construct a moral and legal framework that prevents harm before it arises.
Works Cited
Benjamin, Ruha. “Informed Refusal: Toward a Justice-Based Bioethics.” Science, Technology, & Human Values, vol. 41, no. 6, Nov. 2016, pp. 967–990, doi:10.1177/0162243916656059.
Frank, Lily, and Sven Nyholm. “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?” Artificial Intelligence and Law, vol. 25, 2017, pp. 305–323, doi:10.1007/s10506-017-9212-y.
Prucher, Jeff. Brave New Worlds: The Oxford Dictionary of Science Fiction. Oxford University Press, 2007.
Sarathy, Vasanth, Thomas Arnold, and Matthias Scheutz. “When Exceptions Are the Norm: Exploring the Role of Consent in HRI.” ACM Transactions on Human-Robot Interaction, vol. 8, no. 3, article 14, July 2019, 21 pages.
Simmons, A. John. “Consent.” Routledge Encyclopedia of Philosophy, Taylor and Francis, 1998, doi:10.4324/9780415249126-S011-1, https://www.rep.routledge.com/articles/thematic/consent/v-1.