
The Dissolution of Google's AI Ethics Committee: How Do We Make Artificial Intelligence Distinguish Right from Wrong?

via: 博客园     time: 2019/4/17 13:32:12

According to foreign media reports, on March 26 Kent Walker, Google's senior vice president of global affairs, announced on the Google blog that, in order to implement Google's seven AI principles and improve the company's internal governance structures and processes, Google had established the ATEAC committee to handle some of the trickiest problems raised by artificial intelligence.

The committee had eight members, including economists, psychologists, and a former US Deputy Secretary of State, but less than a week after it was established, one expert voluntarily withdrew. This week, Google employees launched a petition asking the company to remove another committee member over that member's opposition to laws protecting the rights of gay and transgender people.

In response to the protest, Google decided to completely disband the body, known as the Advanced Technology External Advisory Council (ATEAC), stating: "It is clear that in the current environment, ATEAC cannot operate in the way we want, so we are disbanding it and returning to our original plan. We will continue to do our work on the important issues raised by AI, and we will find different ways to get outside opinions on these issues."

Although Google's AI ethics committee collapsed within the week it was established, that does not make the idea of an AI ethics committee a failure or meaningless. In fact, an artificial intelligence ethics committee is especially necessary, and where AI ethics goes from here is still worth pondering.

How can moral risks be avoided in the balance between AI and ethics?

From Isaac Asimov's "Three Laws of Robotics" to films and series such as Westworld, Ex Machina, The Matrix, and Terminator, the ethical problems of artificial intelligence have long been objects of reflection. Artificial intelligence can liberate humans from labor, but it may also harm human interests and lead the public toward an abyss.

Artificial intelligence algorithms come from data. When the data carries latent values, hidden problems such as prejudice are absorbed by the system; who, then, will maintain social consensus and ensure fairness? When a self-evolving machine develops a sense of autonomy and escapes human control, can humans, as its creators, still keep a firm grasp on the overall situation? And when immature artificial intelligence technology runs into an anomaly, makes mistakes, or is even illegally intruded upon and harms human beings, who should bear the responsibility?

Real-world failures, such as casualties caused by Google's self-driving cars, robots injuring people, and insurance companies being accused of bias for using Facebook data to predict accident rates, are all examples of the ethical and moral risks of artificial intelligence.

Facing the ethical issues raised by artificial intelligence, the technology giants have set up ethics committees to meet the challenge. In 2016, Amazon, Microsoft, Google, IBM, and Facebook jointly founded the non-profit Partnership on AI, and Apple joined the organization in January 2017.

The consensus among the technology giants is that, rather than having the outside world impose restrictions on AI development, it is better to establish an ethics committee that supervises independently. An AI ethics committee is both a form of oversight and protection and a standard for risk assessment, and it can help AI companies make ethical decisions that benefit society.

As Professor Rosalind Picard, director of the Affective Computing Research Group at MIT, put it: "The greater the freedom of a machine, the more it will need moral standards." And scholars have offered new thinking on how to let artificial intelligence avoid these risks.

"Ethical Machines: How to Make Robots Distinguish Right and Wrong" in American Cognitive Science philosopher Colin · Allen and technical ethics expert Wendell · Wallach emphasizes two dimensions —— autonomy dimension for moral-related facts And the sensitivity dimension, gives an understanding framework for the increasingly complex AMAs (artificial ethics) trajectory, and “evaluates the ethical acceptability of the options they face”. They believe that the “top-down” and “bottom-up” patterns are the best choice for machine moral development. The bottom-up from the bottom of the data learning experience and the pre-programming with certain rules are combined from top to bottom.

Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute in the United States, put forward the concept of "friendly artificial intelligence," arguing that "friendliness" should be injected into a machine's intelligent system from the very beginning.

On December 12, 2017, the Institute of Electrical and Electronics Engineers (IEEE) released the second version of its Ethically Aligned Design guidelines, which state that the ethical design, development, and application of artificial intelligence should follow these general principles: human rights (ensure that AI does not infringe internationally recognized human rights); well-being (prioritize indicators of human well-being in AI's design and use); accountability (ensure that AI's designers and operators are responsible and accountable); and transparency (ensure that AI operates in a transparent manner).

Scholars and experts from various countries have continually proposed new ethical principles for artificial intelligence, but the people-centered idea behind them has never changed. In recent days, the EU has also submitted its own answer on the ethics of artificial intelligence.

Moral Reconstruction in the Age of Artificial Intelligence: Trustworthy AI

On April 9, the European Union issued AI ethics guidelines setting out seven principles that companies and government agencies should follow in developing AI, and calling for "trustworthy artificial intelligence." "Trustworthy AI" is a moral reconstruction for the age of artificial intelligence, and it points out an ethical direction for AI's development.

The EU's draft AI ethics code states that trustworthy AI has two components: first, it should respect fundamental rights, applicable regulations, and core principles and values, ensuring an "ethical purpose"; second, it should be technically robust and reliable, because even with good intentions, a lack of technical mastery can cause unintentional harm.

[Image: The EU proposes a draft code of ethics and a framework for trustworthy AI]

The seven key principles are as follows:

Human agency and oversight: Artificial intelligence should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene in or oversee every decision the software makes.

Technical robustness and safety: Artificial intelligence should be safe and accurate. It should not be easily compromised by external attacks (such as adversarial examples), and it should be reasonably reliable.

Privacy and data governance: Personal data collected by artificial intelligence systems should be secure and private. It should not be accessible to just anyone, and it should not be easy to steal.

Transparency: The data and algorithms used to create an artificial intelligence system should be accessible, and the decisions made by the software should be "understood and traced by human beings." In other words, operators should be able to explain the decisions their AI systems make (a minimal illustration in code appears after this list).

Diversity, non-discrimination and fairness: The services provided by artificial intelligence should be accessible to all, regardless of age, gender, ethnicity or other characteristics. Similarly, the system should not be biased in these areas.

Environmental and societal well-being: Artificial intelligence systems should be sustainable (i.e., they should be ecologically responsible) and should "promote positive social change."

Accountability: Artificial intelligence systems should be auditable and covered by existing corporate whistleblower protections. The possible negative impacts of a system should be identified and reported in advance.
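
What the transparency and accountability principles might look like in running code is easiest to see with a small example. The following is a minimal sketch, assuming a hypothetical credit-scoring system: each decision is appended to an audit file together with its inputs and a per-feature rationale, so that a human operator can later trace, explain, and audit it. The field names, file format, and model name are all invented for illustration and are not taken from the EU guidelines.

    # Purely illustrative: one hypothetical way to record AI decisions so that
    # humans can later trace, explain, and audit them. Every field here is an
    # assumption, not a requirement from the EU guidelines.
    import json
    import time
    from typing import Any, Dict

    def log_decision(model_version: str,
                     inputs: Dict[str, Any],
                     decision: str,
                     rationale: Dict[str, float],
                     path: str = "decision_audit.jsonl") -> None:
        # Append one decision record to a JSON Lines audit file.
        record = {
            "timestamp": time.time(),        # when the decision was made
            "model_version": model_version,  # which model made it
            "inputs": inputs,                # what the system saw
            "decision": decision,            # what it decided
            "rationale": rationale,          # e.g. per-feature contributions
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

    log_decision(
        model_version="credit-model-0.1",  # hypothetical model name
        inputs={"income": 42000, "tenure_years": 3},
        decision="approve",
        rationale={"income": 0.6, "tenure_years": 0.4},
    )

An append-only record of this kind is one simple way a system could be made auditable after the fact, rather than relying on reconstructing decisions from memory.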

Statistics show that the United States ranks first in the world with 2,039 artificial intelligence companies, while China (excluding Hong Kong, Macao, and Taiwan) has 1,040; together the two countries account for more than half of the world's AI companies. The 2019 Global AI Talent Report shows that more than half of the world's 36,524 top AI talents are in China and the United States. The two countries hold the advantage in AI talent and enterprises, chasing each other so closely that it is hard to say which is ahead; by taking the lead on ethical norms, the EU may have found itself a good strategy.
