Facebook: We’re better at policing nudity than hate speech


SAN FRANCISCO — Getting rid of racist, sexist, and other hateful remarks on Facebook is more challenging than weeding out other types of unacceptable posts because computer programs still stumble over the nuances of human language, the social network giant revealed on Tuesday.

Facebook’s self-assessment showed its policing system is far better at scrubbing graphic violence, gratuitous nudity, and terrorist propaganda. Automated tools detected 86 percent to 99.5 percent of the violations Facebook identified in those categories.

For hate speech, Facebook’s human reviewers and computer algorithms identified just 38 percent of the violations. The rest came after Facebook users flagged the offending content for review.

Facebook also disclosed that it disabled nearly 1.3 billion fake accounts in the six months ending in March. Had the company failed to do so, its user base would have swelled beyond its current 2.2 billion. Fake accounts have gotten more attention in recent months after it was revealed that Russian agents used them to buy ads to try to influence the 2016 elections.

Even after all that disabling, though, Facebook has said that 3 percent to 4 percent of its active monthly users are fake, meaning up to 88 million fake accounts slip through.

The report was Facebook’s first breakdown of how much material it removes for violating its policies. The statistics cover a relatively short period, from October 2017 through March of this year, and do not disclose how long it takes Facebook to remove material violating its standards. The report also did not cover how much inappropriate content Facebook missed.

“Even if they remove 100 million posts that are offensive, there will be one or two that have some really bad stuff and those will be the ones everyone winds up talking about on the cable-TV news,” said Timothy Carone, who teaches about technology at the University of Notre Dame.

The report also did not address how Facebook is tackling another vexing issue — the proliferation of fake news stories planted by Russian agents and other fabricators trying to sway elections and public opinion.

It was not surprising that Facebook’s automated programs have the greatest difficulty distinguishing between permissible opinions and despicable language that crosses the line, Carone said.

“It’s like trying to figure out the equivalent between screaming ‘Fire!’ in a crowded theater when there is none and the equivalent of saying something that is uncomfortable but qualifies as free speech,” he said.

Facebook said it removed 2.5 million pieces of content deemed unacceptable hate speech during the first three months of this year, up from 1.6 million during the previous quarter. The company credited better detection, even as it said computer programs have trouble understanding context and tone of language.

Facebook took down 3.4 million pieces of graphic violence during the first three months of this year, nearly triple the 1.2 million during the previous three months.

In this case, better detection was only part of the reason. Facebook said users were more aggressively posting images of violence in places like war-torn Syria.

The increased transparency came as the Menlo Park, California-based company tried to make amends for a privacy scandal triggered by loose policies that allowed a data-mining company with ties to President Donald Trump’s 2016 campaign to harvest personal information on as many as 87 million users. The content screening has nothing to do with privacy protection, though; it is aimed at maintaining a family-friendly atmosphere for users and advertisers.

source: technology.inquirer.net