Privacy questions as humans reviewed user audio at Facebook

NEW YORK (AP) — Facebook has paid contractors to transcribe audio clips from users of its Messenger service, raising privacy concerns for a company with a history of privacy lapses.

The practice was, until recently, common in the tech industry. Companies say the use of humans helps improve their services. But users aren't typically aware that humans and not just computers are reviewing audio.

Transcriptions done by humans raise bigger concerns because of the potential for rogue employees or contractors to leak details. The practice at Google emerged after some of its Dutch-language audio snippets were leaked. More than 1,000 recordings were obtained by Belgian broadcaster VRT NWS, which noted that some contained sensitive personal conversations — as well as information that identified the person speaking.

“We feel we have some control over machines,” said Jamie Winterton, director of strategy at Arizona State University’s Global Security Initiative. “You have no control over humans that way. There’s no way once a human knows something to drag that piece of data to the recycling bin.”

Jeffrey Chester, executive director for the Center for Digital Democracy privacy-advocacy group, said it’s bad enough that Facebook uses artificial intelligence as part of its data-monitoring activities. He said the use of humans as well is “even more alarming.”

Tim Bajarin, tech columnist and president of Creative Strategies, said it’s a bigger problem when “what those humans are doing with it is outside of what its intended purpose is.”

Facebook said audio snippets reviewed by contractors were masked so as not to reveal anyone’s identity. It said it stopped the practice a week ago. The development was reported earlier by Bloomberg.

Google said it has suspended human transcription worldwide while it investigates the Dutch leaks. Amazon said it still uses humans, but users can decline, or opt out of, the human transcriptions. Published reports say Apple also has used humans, but has stopped.

A report last week said Microsoft also uses human transcribers with some Skype conversations and commands spoken to Microsoft’s digital assistant, Cortana. Microsoft told tech news site Motherboard that it has safeguards such as stripping identifying data and requiring non-disclosure agreements with contractors and their employees. Yet details leaked to Motherboard.

It makes sense to use human transcribers to train artificial intelligence systems, Winterton said. But the issue is that companies are leading people to believe that only machines are listening to audio, causing miscommunication and distrust, she said.

“Communicating to users through your privacy policy is legal but not ethical,” she said.

The companies’ privacy policies — usually long, dense documents — often permit the use of customer data to improve products and services, but the language can be opaque.

“We collect the content, communications and other information you provide when you use our Products, including when you sign up for an account, create or share content, and message or communicate with others,” Facebook’s data-use policy reads. It does not specifically mention audio, voice or the use of transcribers.

Bajarin said tech companies need to use multiple methods to refine artificial intelligence software, as digital voice assistants and voice-to-text technology are still new. But he said being clearer about the human involvement is “the very least” companies could do.

“They should be very clear on what their policies are and if consumer messages or whatever it is are going to be seen,” he said. “If humans are part of the process for analysis that needs to be stated as well.”

Irish data-protection regulators say they’re seeking more details from Facebook to assess compliance with European data regulations. The agency’s statement says it’s also had “ongoing engagement with Google, Apple and Microsoft” over the issue, though Amazon wasn’t mentioned.

Facebook is already under scrutiny for a variety of other ways it has misused user data. It agreed to a $5 billion fine to settle a U.S. Federal Trade Commission probe of its privacy practices.