Saturday, April 9, 2022

546. AI, moral agents and patients

There is talk of Artificial Intelligence (AI), of robots overtaking the human being in intelligence and coming to dominate it. If that were the case, could robots develop a sense of morality, virtues, so as not to harm us and to prevent humanity from being overwhelmed? Robots are agents: they act, such as automated soldiers, drones and medical operators. AI also includes many things that do not act, but sit in boxes and spout the results of algorithms.

I follow Damasio (2003) in seeing the human body and mind as a homeostatic system, keeping the physical and mental organism within bounds of viability. Emotions serve to trigger beneficial actions and obstruct harmful actions and influences, contributing to survival. Emotions and intelligence are intertwined in guiding human action (Nussbaum, 2001).

It seems that to be moral, robots would have to be conscious. How would we know whether that is the case? We could have something like a ‘Turing test’: if we cannot, in their observed behaviour, distinguish robots without inner morals from those with them, can we not call robots with moral conduct but without the corresponding mental states and emotions ‘moral’?

In the development of robots there is a ‘top-down’ approach, programming rules into robots, and a ‘bottom-up’ approach of self-learning, in which robots develop their own internal structure in interaction with their environment, mimicking evolution. If benevolence and morality in humans arose from their beneficial effect on group survival in evolution, how could that work for robots? What selection environment could be devised, in an evolutionary process with robots interacting with each other and with humans, for robots to develop benevolence and morality? Could robots not, on the contrary, develop a destructive stance toward humans, seeing them as a threat to their survival? How do we decide on a selection environment conducive to outcomes that are beneficial to humans?

Someone involved in developing robots said that it would be best to aim at the optimal use of the complementary skills of robots and people. That would require some social skills in robots, which will take considerable time to develop. It is predictable that the logic of markets would lead, in the shorter term, to the acceptance of cheaper robots without such skills. This would yield an undirected, haphazard, uncertain and risky development of the stance of robots towards people.

Danaher (2019a) presented a different view of robots and the threat they may pose, even if they are benevolent and function very well. He distinguished between ‘moral agents’, who act, take responsibility for the morality of their actions or the lack of it, and recognise the moral agency of others, and ‘moral patients’, who passively profit from the blessings of technology and the moral beneficence of others. AI can enable agency, but it also furthers the patiency of people. Agency and patiency are not all or nothing, but more or less of both. There is, however, a tendency for people to shift from moral agency to moral patiency. It has been going on for some time, but it is accelerating, and can reach its pinnacle in the use of robots.

Danaher gives the example of cars. They used to enable agency, in getting us to places, but with the gimmicks of GPS, route planning and automated driving they contribute to moral patiency, to the point of leaving care against accidents to the robot. That could, we hope, still enable activities such as sitting in the back of the car and reading a book, but how many people will do that? It contributes to the overall surrender of life by people to robots.

Danaher (2019b) also asked whether humans can be friends with robots, and answered in the affirmative. He goes back to Aristotle’s view of the dimensions of friendship: mutuality, honesty/authenticity, equality/no dominance, and diversity of interactions, all of which one can doubt with respect to robots. However, human-human friendships are also seldom equal, and seldom extend across the full range of human life. If a relationship does not cover the full range of life, this can be a blessing. Danaher gives the example of contact via the internet. That can make the contact shallow, but it can also help in avoiding prejudices that hinder direct relationships, such as those of race, colour, class and education, which fall outside the internet contact.

It is more difficult to assign mutuality and authenticity to robots. Robots may evolve by adapting to circumstances, but what if those circumstances radically change? I agree that robots can still be friends in the sense of yielding benefits and doing pleasurable things together. Danaher cites Julie Carpenter’s work on bomb disposal squads who see their robots as friends and even honour them with funerals when they ‘fall’. Robots can compensate for disabilities, and one can become ‘friends’ with them in the way that a lame person can become friends with his or her guide dog.

However, though less blatantly than with sex robots, people may turn from human friends to robot friends because they are more obedient and less contrary. That would be disastrous for humanity, since one needs precisely the opposition of the other to develop one’s identity.

On the other hand, with their potential wealth of knowledge robots may act as intermediaries between humans, and Danaher argues that robots can be used to ‘outsource’ activities that form obstacles to human-human friendship. He gives the example where the ongoing pressure from one side of a friendship to play tennis may irritate the other and block the friendship, whereas now one can play tennis with the robot. Of course, one can also seek a human friend who likes to play tennis, even if that friendship is limited to the joint enjoyment of tennis.

Summing up: robots can be friends even if that is not a ‘deep’ friendship, and they can facilitate human-human friendship.

  

Danaher, J. 2019a, ‘The rise of robots and the crisis of moral patiency’, AI and Society.

Danaher, J. 2019b, ‘The philosophical case for robot friendship’, Journal of Posthuman Studies.

Damasio, A. 2003, Looking for Spinoza: Joy, Sorrow and the Feeling Brain, Orlando, FL: Harcourt.

Nussbaum, M. 2001, Upheavals of Thought, Cambridge: Cambridge University Press.

  
