04 December 2019

AI and Humanity: between mutualism and antagonism

  • By John E. Jackson

Professor John E. Jackson is a retired Navy Captain and the E.A. Sperry Chair of Unmanned and Robotic Systems at the U.S. Naval War College. He is also the program manager for the Chief of Naval Operations' professional reading program. His main research interests are in the areas of national security decision-making, logistics, and unmanned and robotic systems. His latest book is “One Nation, Under Drones”, published by the Naval Institute Press. During the conference on the Ethics and Law of AI, organized by Fondazione Leonardo, he underlined one of the paradoxes of the emerging governance of new technologies.

Among the many phrases that resonate with me is a quote from Eliezer Yudkowsky, who said that by far the greatest danger of Artificial Intelligence (AI) is that people conclude too early that they understand it. I suspect that very few, if any, software developers and computer engineers would claim to fully understand AI, but I am not sure the other seven billion people on the globe are equally aware of their collective lack of knowledge in this area. The risk is that complacency will set in among the general population, delaying vital progress in studying, understanding, and controlling the spread of AI and machine-learning applications in virtually every aspect of life.

Investigating the ethics of AI seems to me, in large part, a clarion call to validate and guarantee the essence of everything it means to be human. One wonders whether the concept of humanity has ever been similarly challenged in recorded history: perhaps the future will see the emergence of a new species, a co-evolved human-machine hybrid capable of dramatically new, and perhaps even undreamed-of, forms of calculation, cognition, emotion and even consciousness itself. The question, then, is: how can we ensure that our coevolution with AI is mutualistic rather than antagonistic?

Ideally, we should strive for full transparency in every algorithm, in order to understand how an AI came to make the recommendations it is making. Such transparency, however, is unlikely to be allowed by commercial entities, which see an algorithm as a resource that may have cost millions to develop and that represents a competitive advantage. If we consider the application of AI in the security domain, complete transparency would allow adversaries to understand, and perhaps counter, military systems crucial to the defense of a sovereign state. If neither the commercial nor the defense side is willing to submit to review by some supra-national organization, who will open their “black boxes” to protect humanity’s future?
