The New York Times reveals Facebook's content cleanup work: even “tireless” AI cannot complete the task

via: 博客园     time: 2019/5/27 17:33:04

Editor's note: A series of recent scandals has put the social networking giant Facebook on the rim of a volcano. The excuse that the platform is neutral and cannot interfere with user-generated content no longer holds. But manually reviewing the flood of content that 2 billion users generate every day is pure fantasy. Even with AI assisting, there will always be situations the AI has never seen. It is a game of cat and mouse, and it is like Sisyphus pushing his boulder up the mountain: whenever he nears the top, the stone slips from his hands and must be pushed up all over again, endless labor. The CTO, whose job was once to explore new frontiers of AI for Facebook's future, now has to shoulder this heavy burden. Cade Metz and Mike Isaac reported on Facebook's content cleanup efforts for The New York Times.

Original title: Artificial Intelligence and the Job of Cleaning Up Facebook

The New York Times reveals the inside story

For half an hour, we sat with Mike Schroepfer, Facebook's chief technology officer, in a conference room at Facebook headquarters, surrounded by whiteboards covered in blue and red marker, discussing the technical difficulty of removing harmful content from the social network. Then we brought up a video that showed just how intractable the challenge is: the mosque shootings in Christchurch, New Zealand.

In March of this year, a gunman killed 51 people at two mosques there and live-streamed the attack on Facebook. It took the company roughly an hour to scrub the video from its site, but by then the bloody footage had already spread across social media.

Schroepfer went silent. Something seemed to well up in his eyes.

A minute later, trying to keep his voice even, he said: “We are working on this right now. It won't be fixed overnight. But I don't want to be having this same conversation six months from now. We can do much better than this.”

The question is whether that is really true, or whether Facebook is just talking.

For the past three years, the social network has been under scrutiny for the proliferation of false, misleading, and inappropriate content on its platform. CEO Mark Zuckerberg has pointed to one technology he says can help eliminate problematic posts: artificial intelligence.

Last year, testifying before Congress, Zuckerberg said Facebook was developing machine-based systems to “identify specific categories of bad activity,” and declared that “in five to ten years, we will have AI tools” to detect and remove hate speech. He has repeated those words since, in the media, on earnings calls with Wall Street, and at Facebook's own events.

Schroepfer, known inside the company as Schrep, is the person leading this project for Facebook, heading the teams that build the automated tools for sorting through and deleting millions of such posts. But the task, he acknowledged over three recent interviews, is like Sisyphus pushing his stone uphill: seemingly futile.

That is because every time Schroepfer and his more than 150 engineering specialists build an AI solution that flags and scrubs harmful material, new and suspicious posts that the AI system has never seen before, and therefore cannot catch, appear in their place. The task is made harder because “bad activity” is often in the eye of the beholder: even humans, never mind machines, cannot agree on what it is.

In the interviews, Schroepfer conceded that AI alone cannot cure Facebook's ills. He said: “I do believe we are now entering the final stage.” But “I don't think it's ‘everything has been solved,’ and we can all pack up and go home.”

But the pressure keeps coming. In the past week, after criticism over the Christchurch video, Facebook revised its policies to restrict the use of its live-streaming service. On Wednesday, at a summit in Paris with French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern, the company signed a pledge to re-examine its tools for identifying violent content.

The 44-year-old Schroepfer now sits in a position he never wanted. For years, his job was to help Facebook build a first-class AI lab, a place where the brightest minds would tackle technical challenges such as teaching machines to pick out faces in photos. He and Zuckerberg wanted an AI operation to rival Google's, which was widely regarded as home to the most formidable AI researchers. So he recruited PhDs from New York University, the University of London, and Paris VI.

But gradually his role has shifted to that of remover of threats and harmful content. Now he and the researchers he recruited spend much of their time using AI to identify and delete death threats, videos of suicide, misinformation, and outright lies.

John Lilly, a former CEO of Mozilla and now a venture capitalist at Greylock Partners, studied computer science with Schroepfer in the mid-1990s. He said: “We've never faced anything like this. There is no one to ask how to solve these problems.”

Facebook let us talk to Schroepfer because it wants to show how AI is catching the objectionable content, and presumably because it is interested in humanizing its executives. According to many people who know him, the CTO is quick to show his feelings.

Jocelyn Goldfein, a venture capitalist at Zetta Venture Partners who worked with Schroepfer at Facebook, attested: “I have seen Schrep cry at work. I don't think it's inappropriate to say so.”

Still, few could have predicted how Schroepfer would react to our questions. In two of the interviews, he opened with an optimistic message that AI could be the solution, then grew emotional. At one point he said that coming in to work was sometimes a struggle. Each time he spoke about the scale of the problems Facebook faces and his responsibility for fixing them, he choked up.

Speaking of the problematic posts, he said: “It will never go to zero.”

“What a heavy burden, what a huge responsibility”

On a Sunday in December 2013, Clément Farabet walked into a top-floor suite at Harrah's casino hotel in Lake Tahoe, Nevada. Inside, he was welcomed by Schroepfer and Zuckerberg.

Zuckerberg wasn't wearing shoes. For the next 30 minutes, the CEO paced back and forth in his socks, talking with Farabet, an AI researcher at New York University. Zuckerberg said that AI was “the next big thing” and “the next step for Facebook.” Schroepfer sat on the couch, occasionally chiming in to underline a point.

They had come to town to recruit AI talent. That year Lake Tahoe hosted NIPS (the Neural Information Processing Systems conference), a professional AI research conference that draws top researchers from around the world every year. Facebook's leadership had just brought in Yann LeCun, the New York University scholar regarded as one of the fathers of the modern AI movement, to found Facebook's AI lab. Farabet, who saw LeCun as a mentor, was one of their remaining recruiting targets.

Speaking of Zuckerberg, Farabet said: “He basically wanted everyone. He knew the name of every researcher in the field.”

It was a heady time for Facebook, but the trajectory and mission of its AI work would soon change.

At the time, Silicon Valley's biggest technology companies, from Google to Twitter, were racing to make AI the backbone of their businesses. The technology had been neglected by Internet companies for years. But at universities, researchers like LeCun had quietly been nurturing an AI technique called neural networks: complex mathematical systems that can learn tasks by analyzing massive amounts of data. To the surprise of many in Silicon Valley, these unwieldy, somewhat mysterious systems had finally begun to work.
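To make that idea concrete, here is a minimal sketch of a neural network learning a task purely from examples. It uses PyTorch; the synthetic data, tiny architecture, and task are invented for illustration and have nothing to do with Facebook's systems.

```python
# A toy neural network that learns a task purely by analyzing examples.
# The data, architecture, and task are invented for this illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for "massive amounts of data": 1,000 points with 2 features.
# The label is 1 when both features share the same sign, else 0.
X = torch.randn(1000, 2)
y = ((X[:, 0] * X[:, 1]) > 0).float().unsqueeze(1)

# A small feed-forward network with one hidden layer.
model = nn.Sequential(
    nn.Linear(2, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

# "Learning" is just this loop: adjust the weights, over and over,
# to shrink the gap between predictions and labels.
for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

accuracy = ((torch.sigmoid(model(X)) > 0.5).float() == y).float().mean()
print(f"training accuracy: {accuracy.item():.2%}")
```

No rule about signs is ever written down; the network infers the pattern from the examples alone, which is the property that made the technique so attractive to companies sitting on mountains of data.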

Schroepfer and Zuckerberg wanted to push Facebook into this race, seeing the rapidly improving technology as something the company had to seize. AI could help the social network identify faces in photos and videos posted to the site, Schroepfer said, and could also be used to better target ads, organize the News Feed, and translate languages. AI could even power digital assistants such as “chatbots” that let businesses interact with customers.

Schroepfer said: “We set out to recruit the best people in the world. We wanted to build a new kind of research lab.”

Since 2013, Schroepfer had been recruiting researchers who specialize in neural networks, at a time when stars in the field commanded pay packages worth millions, even tens of millions, of dollars over four or five years. Facebook never did land Farabet that Sunday in 2013; he went on to found an AI startup that Twitter later acquired. But Schroepfer poached dozens of top researchers from Google, NYU, and the University of Montreal.

Schroepfer also built a second organization, the Applied Machine Learning team, to turn the Facebook AI lab's technology into real-world applications such as face recognition, language translation, and augmented reality tools.

At the end of 2015, the AI work began to change course. The catalyst was the terrorist attacks in Paris, in which Islamic militants killed 130 people and injured roughly 500. Afterward, according to anonymous sources, Zuckerberg asked the Applied Machine Learning team what Facebook could do to fight terrorism.

In response, the team used technology developed inside the new Facebook AI lab to build a system for spotting terrorist activity on the social network. The tool analyzed Facebook posts that mentioned the Islamic State or Al Qaeda and flagged those most likely to violate the company's counterterrorism policy. Human reviewers then made the final call on the flagged posts.
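The general shape of such a pipeline, a model scoring posts so that only the riskiest reach human reviewers, can be sketched in a few lines. Everything below (the watchword list, the toy scoring heuristic, the threshold) is invented for illustration; Facebook's actual system is far more sophisticated.

```python
# A sketch of the "flag, then human review" pattern described above.
# The watchword list, scoring heuristic, and threshold are invented
# for illustration; the real system is far more sophisticated.
from dataclasses import dataclass

WATCHWORDS = {"islamic state", "al qaeda"}  # terms that trigger scoring

@dataclass
class Post:
    post_id: int
    text: str

def risk_score(post: Post) -> float:
    """Stand-in for a trained classifier estimating P(policy violation)."""
    text = post.text.lower()
    hits = sum(term in text for term in WATCHWORDS)
    return min(1.0, 0.4 * hits)  # toy heuristic, not a real model

def triage(posts: list[Post], threshold: float = 0.5) -> list[Post]:
    """Send only the likeliest violations to human reviewers."""
    return [p for p in posts if risk_score(p) >= threshold]

review_queue = triage([
    Post(1, "Analysis: how the Islamic State and Al Qaeda differ"),
    Post(2, "Photos from my cousin's wedding"),
])
for post in review_queue:
    print(f"post {post.post_id} -> human review queue")
```

Note that the first post, a harmless news analysis, gets flagged too: a classifier only estimates likelihood, which is exactly why humans make the final call.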

That was the turning point in Facebook's use of AI to police its own content.

The work soon gathered powerful momentum. In November 2016, Trump was elected president of the United States, and critics rounded on Facebook as a hotbed of false information, information that may have influenced the vote and paved the way for Trump's victory.

Though the company at first denied it had played a role in spreading misinformation and swaying the election, in early 2017 it began shifting technical resources toward automatically identifying a broad range of harmful content, including nude photos and fake accounts. It also created dozens of “integrity” positions devoted to fighting harmful content in different corners of the site.

By 2017, detecting harmful content had become the focus of the Applied Machine Learning team. Schroepfer said: “The number one priority of our content-understanding work is clearly integrity.”

Then, in March 2018, The New York Times and others reported that the British political consultancy Cambridge Analytica had harvested the information of millions of Facebook users without their consent and supplied voter profiles to Trump's campaign team. Outrage at the social network boiled over.

Schroepfer was soon called in to deal with the fallout. In April 2018, he was dispatched to London to face a British parliamentary committee and answer questions about the Cambridge Analytica scandal. There, committee members grilled him for four hours.

During the hearing, which was live-streamed around the world, Labour politician Ian Lucas asked the ashen-faced executive: “Mr. Schroepfer, are you yourself honest? I still don't believe that your company has integrity.”

Forest Key, the CEO of virtual reality startup Pixvana, has known Schroepfer since the 1990s, when the two worked together at a film-effects startup. He said: “I can hardly imagine it. What a heavy burden this is. What a huge responsibility this is.”

The challenge of using AI to police Facebook's content grinds on, and Schroepfer's burden remains heavy.

“Persuading engineers not to give up”

When he first arrived at Facebook, Schroepfer was seen as a problem solver.

Schroepfer grew up in Delray Beach, Florida, where his parents ran a 1,000-watt FM radio station that played rock and, later, R&B. In 1993, he moved to Stanford, California. He studied computer science as both an undergraduate and a graduate student, falling in with technologists such as Lilly and Adam Nash (now a Dropbox executive).

After graduating, Schroepfer stayed in Silicon Valley and built a career on hard technical problems. He first made his name at a film-effects startup, then founded a company that developed software for large data centers, which Sun Microsystems later acquired. In 2005, he joined Mozilla as vice president of engineering. The nonprofit's browser was challenging the monopoly of Microsoft's Internet Explorer, and few technical undertakings of the day were bigger.

Mozilla co-founder Mike Shaver, who worked with Schroepfer for several years, said: “A browser is a complex product, and the competitive landscape then was unbelievable. Even that early in his career, I never doubted his ability to handle it.”

In 2008, Facebook co-founder Dustin Moskovitz stepped down as head of engineering, and Schroepfer joined to take over the role. Facebook was then serving roughly 200 million users, and his job was to keep the site running as that number kept climbing. The work involved managing thousands of engineers and tens of thousands of computer servers around the world.

Schroepfer said: “Most of the work was like a bus rolling downhill with all four wheels coming off. The question is how to keep it going.” A big part of his day was spent “talking to engineers to calm them down and keep them from despairing,” because they were grappling with problems all day long.

Over the next few years, his team developed a string of new technologies to keep a service that big running (Facebook now has more than 2 billion users). They introduced new programming tools that helped the company deliver Facebook to laptops and phones faster and more reliably, and custom-designed servers that smoothed the operation of its vast data-center networks. In the end, Facebook sharply reduced its service disruptions.

Schroepfer said: “I can't remember the last time I talked to an engineer who was burned out over a scaling problem.”

Thanks to those efforts, Schroepfer's responsibilities kept growing. In 2013, he was promoted to CTO, and his work turned toward the future: tracking the new technology areas the company should explore. Want to know how important his role is? His desk sits next to Zuckerberg's, sandwiched between the CEO and COO Sheryl Sandberg.

Of Schroepfer, Zuckerberg said: “He is a good embodiment of how our people think and operate. Schrep's superpower is teaching and building teams across different problem areas. I haven't worked with anyone else who can do that the way he does.”

So it was no surprise that Zuckerberg turned to Schroepfer to deal with all of the harmful content on Facebook.

Broccoli vs. marijuana

On a recent afternoon, in a Facebook meeting room, Schroepfer pulled up two pictures on his laptop: one of broccoli, the other of clustered marijuana buds. Everyone stared at the images. Some of us couldn't say for sure which was which.

Schroepfer showed the pictures to make a point. Even though some of us struggled to tell them apart, Facebook's AI systems can now spot the patterns across thousands of images that distinguish marijuana buds from broccoli. Once the AI flags a marijuana picture (many are attached to Facebook ads that use the images to sell marijuana through the social network), the company can find the post and delete it.
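For a sense of how such a classifier is typically built, here is a minimal sketch that fine-tunes a pretrained image network to separate two classes. The folder layout, model choice, and training settings are placeholder assumptions, not a description of Facebook's production system.

```python
# A sketch of a binary image classifier (broccoli vs. marijuana)
# built by fine-tuning a pretrained network. The data path, model,
# and settings are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical layout: data/broccoli/*.jpg and data/marijuana/*.jpg
dataset = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a network pretrained on ImageNet, then swap its final
# layer so it predicts two classes instead of a thousand.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few passes over the labeled examples
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The pattern matters more than the specifics: the network learns its distinguishing features from thousands of labeled examples rather than hand-written rules, which is also why it fails on imagery unlike anything it has seen before.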

Schroepfer said: “Now we can catch this kind of thing proactively.”

The trouble is that the marijuana-versus-broccoli test signifies not just progress but also the limits Facebook is running into. Schroepfer's team has built AI systems the company uses to identify and remove images of marijuana, nudity, and terrorism-related content. But the systems don't catch every picture, because the unexpected keeps arriving, which means millions of nude, marijuana-related, and terrorism-related posts keep reaching the eyes of Facebook users.

And identifying rogue images is one of the easier tasks for AI. Building systems that identify fake news or hate speech is harder. Fake news can easily be dressed up to look genuine. Hate speech is problematic because machines struggle with the subtleties of language: the nuances differ from language to language, and the context of a conversation evolves quickly, making it hard for machines to keep up.

Delip Rao, research director of the AI Foundation, a nonprofit that explores how artificial intelligence can fight disinformation, described the challenge as “an arms race.” AI is built from what has come before, but too often there is nothing yet to learn from: behavior changes, and attackers invent new techniques. It is, plainly, a game of cat and mouse.

Rao said: “Sometimes you are one step ahead of the people causing harm. Sometimes they are a step ahead of you.”

That afternoon, Schroepfer tried to answer our questions about the cat-and-mouse game with figures. He said Facebook now automatically removes 96% of the nudity on the social network. Hate speech is tougher, he said: the company currently catches just 51% of it (a number Facebook later said has risen to 65%).
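Figures like these correspond to what Facebook's enforcement reports call a “proactive rate”: of all the violating content acted on, the share that its automated systems found before any user reported it. A minimal calculation, with counts invented purely to mirror the article's percentages:

```python
# "Proactive rate": of all violating content acted on, the share that
# automated systems found before any user reported it. The counts
# below are invented purely to mirror the percentages in the article.
def proactive_rate(found_by_ai: int, reported_by_users: int) -> float:
    total_actioned = found_by_ai + reported_by_users
    return found_by_ai / total_actioned

print(f"{proactive_rate(960, 40):.0%}")   # -> 96%, as with nudity
print(f"{proactive_rate(650, 350):.0%}")  # -> 65%, as with hate speech
```

One caveat built into the metric: it counts only content that was caught by someone. Violating posts that neither the AI nor any user ever flags appear nowhere in the denominator.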

Schroepfer acknowledged there is an element of an arms race. He said that although Facebook automatically detects and removes problematic live streams, it failed to recognize the New Zealand video in March because it resembled nothing ever uploaded to the social network before. The video was shot from a first-person perspective, like a computer game.

When designing systems to identify graphic violence, Facebook generally has to work from existing imagery: people kicking cats, dogs attacking people, cars hitting pedestrians, one person swinging a baseball bat at another, and so on. But, he said: “not much of it resembled this video.”

The novelty of the shooting video is part of why it was so shocking, Schroepfer said. “That is also why it wasn't immediately flagged,” he added, saying he had watched the video several times to figure out how Facebook could identify the next one.

Finally he said: “I really wish I had never seen those things.”

Original link: https://nytlicensing.com/story/pLTjoQ94/

Translator: boxi.
