Is the Best Intelligence Authentic, Not Artificial?
Artificial Intelligence Can Impact Moral Decision Making
Exploring the Ethics
Artificial assistants promote efficiency, but do they facilitate authenticity?
Within many professions, the use of Artificial Intelligence (AI) can make jobs easier and more efficient. It can save companies time and money, but sometimes at human cost.
Although we worry that AI will replace jobs for people who need them, in many cases it can augment rather than replace human labor by performing mundane or time-consuming tasks that employees are delighted to delegate to digital assistants.
But . . . what about other job responsibilities like working in teams, interacting with others, and making collaborative business decisions?
The reality is that no worker earns “Employee of the Month” simply by going through the motions.
- Along the same lines, can AI be designed to pursue not merely competence but excellence?
- And what role do designers play in programming professionalism?
Research reveals why these questions are important:
Zaixuan Zhang et al. (2022) examined the link between AI and moral dilemmas, evaluating how people perceive ethical decision-making by AI.
They found that AIs are perceived as more likely than humans to make utilitarian choices when faced with moral dilemmas.
They described the utilitarian approach as one that accepts harm and focuses on outcomes, as compared to the deontological approach, which rejects harm, focusing instead on the nature of the moral action.
Zhang et al. also found that perceived warmth explains the perceived differences in the way humans and AI make decisions, differences that were evident across a variety of moral dilemmas.
Taking a different angle, Jonathan Gratch and Nathanael J. Fast (2022) examined the extent to which AI assistants might facilitate unethical behavior.
They explored the new ways in which AI is trained to exercise and experience power through performing interpersonal tasks such as negotiating deals, interviewing and hiring workers, and even managing and evaluating work. They also considered the extent to which such personalization permits users to dictate the ethical values that drive AI behavior.
Gratch and Fast recognize that acting through agents (indirect agency) has the potential to weaken ethical judgment: people believe they are behaving ethically, yet they show less benevolence toward the recipients of their power, find themselves less blameworthy for ethical lapses, and expect fewer negative consequences from unethical behavior.
Gratch and Fast then examined research illustrating that, across a wide variety of social tasks, individuals may behave less ethically and may be more willing to deceive others when they are interacting through AI.