Among the many phrases that resonate with me is a quote from Eliezer Yudkowsky, who said that by far the greatest danger of artificial intelligence (AI) is that people conclude too early that they understand it. I suspect that very few, if any, software developers and computer engineers would claim to fully understand AI, but I am not sure whether the other seven billion people on the globe are equally aware of their collective lack of knowledge in this area. The risk is that complacency will set in among the general population, delaying vital progress in studying, understanding, and controlling the spread of AI and machine learning applications in virtually every aspect of life.
Investigating the ethics of AI seems to me, in large part, to be a clarion call to validate and guarantee the essence of everything it means to be human. One wonders whether the concept of humanity has ever been similarly challenged in recorded history: perhaps the future will see the emergence of a new species, a co-evolved human-machine hybrid capable of dramatically new, perhaps even undreamed-of, forms of calculation, cognition, emotion, and even consciousness itself. The question, then, is this: how can we ensure that our co-evolution with AI is mutualistic rather than antagonistic?
Ideally, we should strive for full transparency in every algorithm, so that we can understand how an AI arrived at the recommendations it makes. Such transparency, however, is unlikely to be permitted by commercial entities, which regard an algorithm as a resource that may have cost millions to develop and that represents a competitive advantage. If we turn to the application of AI in the security domain, complete transparency would allow adversaries to understand, and perhaps counter, military systems crucial to the defense of a sovereign state. If neither the commercial nor the defense side is willing to submit to review by some supra-national organization, who will open their “black boxes” to protect humanity’s future?