Mat Chivers & Cansu Canca



When we try to navigate the potential impacts of artificial intelligence (AI), we invariably ask: Is it for the good? Often, implicit in the question is: Is it good for humans and humanity? But those two questions might differ significantly. In fact, while an “ethical AI” is, by definition, for the good, it might not necessarily be good for humans and humanity in all circumstances. Could we then reframe this question and ask instead: Is AI for the good—and good for whom? Displacing the anthropocentric perspective could prove insightful in leading us to a more ethical world order.

Humans are among the estimated 8.7 million species living on Earth. Yet, looking at the order we impose on the world and on other beings, it appears that we have come to view our place within this complex ecosystem as the absolute royals of the kingdom. This becomes apparent in our cruel treatment of animals, in our systematic destruction of nature, and in our disregard for the effects of our culinary, cosmetic, and other preferences. We act in ways that give absolute priority to ourselves. What we ignore in the process is that our well-being as individuals, as societies, and as a species is often deeply intertwined with the wider ecosystem we exist in relationship with.

As we design AI systems to assist us with our decision-making, we face the daunting task of integrating value trade-offs into them. As AI systems grow more capable of making “autonomous” decisions, these value trade-offs will increasingly matter in how AI systems weigh various competing demands. More specifically, AI systems will have to weigh the value of human well-being against the well-being of other beings (including that of AI agents, if and when AI systems acquire moral status). Such value-laden decisions will have significant consequences for how we allocate resources and structure our society. As we design an “ethical AI”, we must recognize that such an AI might not put absolute value on what is good for humans. Yet, if we were instead to choose a “human-centric AI” for fear of losing our crown, we might be committing a moral crime. With the development of AI, humanity faces perhaps its biggest moral challenge: endorsing what is right despite perpetually becoming vulnerable in the face of it.