Increasingly sophisticated algorithms are used to automate important strategic and investment decisions by states and private corporations, while robots, in various forms, are beginning to populate our factories, homes and streets. The wave of technological innovation seen in the new millennium and the renewed interest in A.I. research, already dubbed the fourth industrial revolution, promise to radically transform the labour market. In today's world, as The Economist puts it, the most important resource is no longer oil but digital data.
The digital plane is becoming ever more integrated with the physical one, and the development of A.I. has already opened up great possibilities for human civilisation to advance. The spread of increasingly advanced information systems, however, also conceals pitfalls that must be addressed collectively if we are to unleash the full potential of smart machines. A.I. poses enormous ethical problems: some machines must make practical decisions on which human lives can depend, as in the case of self-driving cars and military drones. At the same time, the development of machine learning techniques raises delicate questions about the neutrality and operational transparency of algorithms: how can we be sure that our software will not act with prejudice and against human values?
Such ethical issues, combined with concerns about personal privacy, have triggered a sweeping international debate on the new moral challenges posed by the technology of the future. Large hi-tech companies have begun to adopt guidelines to ensure the transparent, reliable development of their A.I. research. At the public level, however, the regulatory environment remains anarchic and underdeveloped. Discussions on the ethical codes of new technologies are moving forward almost entirely in the private arena, without direct democratic accountability. Many grey areas remain in the legislation that should regulate the role of machines in our society. Specifying the legal status of A.I., moving from an ethical to a legislative level, is important not only to prevent the inevitable social and cultural problems of automation, but also to offer important opportunities for growth, steering research so as to make A.I. a beneficial force in our society.
The aim of the conference was to stimulate a debate on an A.I. Code of Ethics in Italy, underlining the need for and benefits of a more formalised legal framework. A new technological humanism that keeps man at the centre of the age of machines requires the creation of a reliable context of [...]
During the conference on the ethics and law of artificial intelligence, Fondazione Leonardo Civiltà delle Macchine presented a manifesto of ethical guidelines and legal proposals, aimed at offering a framework of good practice to the AI-related industry and at guiding legislators in outlining a new set [...]