
IEEE issues three new artificial intelligence standards addressing high-level ethical issues

via: 博客园     time: 2017/11/23 17:35:18     reads: 759

IEEE has announced three new standards for the development of artificial intelligence.

IEEE describes itself as "the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity."
The report, "Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems," is divided into eight sections addressing the development of AI: general principles; embedding values into autonomous intelligent systems; methodologies to guide ethical research and design; safety and beneficence of artificial general intelligence and artificial superintelligence; personal data and individual access control; reframing autonomous weapons systems; economics/humanitarian issues; and law.


The general principles address high-level ethical concerns that apply to all types of AI and autonomous systems. Three main factors were considered in formulating them: embodying human rights; prioritizing the maximum benefit to humanity and the natural environment; and mitigating the risks and negative effects of artificial intelligence.

The principle of human benefit requires considering how to ensure that AI does not infringe human rights. The principle of responsibility concerns how to ensure that AI is accountable: to resolve questions of fault and avoid public confusion, an AI system must be able to account, at the procedural level, for why it operates in a particular way. The principle of transparency means that the functioning of autonomous systems must be transparent: an AI is transparent when people can discover how and why it made a specific decision.

On how to embed human norms and ethical values in AI systems, the report states that as AI systems become more autonomous in making decisions and manipulating their environment, it is crucial that they adopt, learn, and comply with the norms and values of the societies and communities they serve. Embedding values into an AI system can be approached in three steps:

First, identify the norms and values ​​of a particular society or group;

Second, compile these norms and values into the AI system;

Third, assess the validity of the norms and values written into the AI system, i.e., whether they are consistent with real-world norms and values.

Related research has long been under way under names such as machine morality, machine ethics, moral machines, value alignment, artificial morality, safe AI, and friendly AI. Even so, building computer systems that recognize and understand human norms and values, and take them into account when making decisions, remains an open challenge. Two main approaches currently exist: the top-down path and the bottom-up path. Research in this area needs to be strengthened.

The report also points to problems that must be addressed in its other sections, such as methodologies to guide ethical research and design, the safety and beneficence of artificial general intelligence and artificial superintelligence, reframing autonomous weapons systems, and economic/humanitarian issues. The report brought together more than 100 thought leaders in artificial intelligence from academia, science, and government-related sectors, combining expertise in areas such as AI, ethics, philosophy, and policy. A revised edition of EAD (Ethically Aligned Design) will be released by the end of 2017, expanding the report to thirteen chapters with contributions from more than 250 global thought leaders.

Satoshi Tadokoro, president of the IEEE Robotics and Automation Society, explained why such standards are needed: "Robots and automated systems will bring major innovations to society. Recently, the public has paid increasing attention both to the social problems that may occur and to the great potential benefits. Unfortunately, these discussions sometimes include false information drawn from fictional and imaginative sources."

Tadokoro continued: "The IEEE will introduce knowledge and wisdom based on accepted facts of science and technology to help shape public decision-making and maximize overall human benefit." Alongside the ethics report, three new AI standards were introduced, each led by domain experts.

The first standard concerns the ethically driven "nudging" of robotic, intelligent, and autonomous systems. It explores nudging, which in the world of artificial intelligence refers to the subtle actions by which an AI can influence human behavior.

The second standard is "Fail-Safe Design of Autonomous and Semi-Autonomous Systems." It covers autonomous technologies that could endanger humans if they fail; the most obvious current example is self-driving cars.

The third standard is "Well-being Metrics for Ethical AI and Autonomous Systems." It describes how the benefits of advanced artificial intelligence can be measured in terms of human well-being.

These standards may be needed sooner than we think, as companies such as OpenAI and DeepMind push AI forward ever faster, even creating AI systems capable of self-learning and of expanding their own capabilities. Experts warn that such artificial intelligence could destabilize the world, lead to mass unemployment and war, and even be turned toward creating "killer weapons." Recent high-profile United Nations debates have prompted serious consideration of tighter regulation of artificial intelligence used as a weapon.

Summary from Lei Feng Network: 2017 has been the fastest-growing year for AI, and talk of an "AI threat" has ebbed and flowed. The issues include AI misuse, AI transparency, algorithmic fairness, AI ethics, AI regulation and accountability, AI trust, and more. We are seeing relevant standards and norms being developed for some of them, but the future development of AI still needs more discussion and dialogue. On the one hand, laws and policies must not hinder innovation; on the other, emerging issues such as ethics, responsibility, safety, and privacy need to be tackled across disciplines, through close cooperation among governments, enterprises, non-governmental organizations, and the public, to jointly mitigate AI's negative effects and create a future in which people and AI trust each other.

