Future autonomous artificial moral agents (AMAs), or robots, created with a stream of consciousness would have moral worth, according to Dr. Sandra Alexander, AUD professor of humanities, who spoke at "Imagining the Limits & Ethics of Artificial Intelligence," held last week in association with the AUD School of Arts and Sciences.
An artificial moral agent “can be defined as any machine, system or robot — any intelligent artifact with the ability to make complex moral decisions, and act upon them,” Dr. Alexander said.
“None of the machines that exist today meet the criteria for full ethical agency, or consciousness,” she said. “There will, likely, be a time when we will have to seriously consider this.”
The AUD humanities professor said that forcing autonomous agents to perform certain tasks, such as military service or labour, may violate their rights as autonomous moral agents. "By not giving them a chance to exercise their autonomy or not giving them a say in what tasks they will perform," we would essentially be enslaving them, she said.
There are two types of artificial intelligence, Dr. Alexander said during her talk: weak and strong AI. Weak AI simulates human cognitive functions and is limited to the rules set in place. Strong AI has the capabilities of a human brain and can think critically; this type has not yet been fully developed.
“Weak AI shouldn’t be treated ethically because they only mimic human actions, so they haven’t developed consciousness, but, strong AI should, since they have the ability to think critically,” Tareq Katbi, a junior international relations student, told the MBRSC Post.
Dr. Alexander said that including AI in the moral community depends on how and why they were created. If a robot is created for the sole purpose of fulfilling a task, without consciousness, it does not need to be part of the moral community; if humans create robots with consciousness, however, they do. She gave the example of the movie Blade Runner, in which a robot is created for labour and given a limited lifespan. "The problem was he was conscious. He was conscious of himself, and of his place in the world. He decided that he did not want to have this limited lifespan. He wanted to live, and he was created, essentially, for enslavement."
Dr. Sandra Alexander earned her D.Phil from the University of Oxford in 2003 and joined the American University in Dubai in 2008. Her research focuses on ethics. She was named a Fellow at the Ferrater Mora Oxford Centre for Animal Ethics in 2013, and received the President's Award for Teaching Excellence at AUD for the 2014-2015 academic year.