Exclusive: Brands blast Twitter for ads next to child pornography accounts

[September 29, 2022] By Sheila Dang and Katie Paul
		 
(Reuters) - Some major advertisers including
		Dyson, Mazda, Forbes and PBS Kids have suspended their marketing 
		campaigns or removed their ads from parts of Twitter because their 
		promotions appeared alongside tweets soliciting child pornography, the 
		companies told Reuters. 
		 
		Brands ranging from Walt Disney Co, NBCUniversal and Coca-Cola Co to a 
		children's hospital were among more than 30 advertisers that appeared on 
		the profile pages of Twitter accounts peddling links to the exploitative 
		material, according to a Reuters review of accounts identified in new 
		research about child sex abuse online from cybersecurity group Ghost 
		Data. 
		 
Some of the tweets included keywords related to "rape" and "teens," and
		appeared alongside promoted tweets from corporate advertisers, the 
		Reuters review found. In one example, a promoted tweet for shoe and 
		accessories brand Cole Haan appeared next to a tweet in which a user 
		said they were "trading teen/child" content. 
		 
		"We're horrified," David Maddocks, brand president at Cole Haan, told 
		Reuters after being notified that the company's ads appeared alongside 
		such tweets. "Either Twitter is going to fix this, or we'll fix it by 
		any means we can, which includes not buying Twitter ads." 
		  
						
		
		  
						
		 
		In another example, a user tweeted searching for content of "Yung girls 
		ONLY, NO Boys," which was immediately followed by a promoted tweet for 
		Texas-based Scottish Rite Children's Hospital. Scottish Rite did not 
		return multiple requests for comment. 
		 
		In a statement, Twitter spokesperson Celeste Carswell said the company 
		"has zero tolerance for child sexual exploitation" and is investing more 
		resources dedicated to child safety, including hiring for new positions 
		to write policy and implement solutions. 
		 
		She added that Twitter is working closely with its advertising clients 
		and partners to investigate and take steps to prevent the situation from 
		happening again. 
		 
		Twitter's challenges in identifying child abuse content were first 
		reported in an investigation by tech news site The Verge in late August. 
		The emerging pushback from advertisers that are critical to Twitter's 
		revenue stream is reported here by Reuters for the first time. 
		 
		Like all social media platforms, Twitter bans depictions of child sexual 
		exploitation, which are illegal in most countries. But it permits adult 
		content generally and is home to a thriving exchange of pornographic 
		imagery, which comprises about 13% of all content on Twitter, according 
		to an internal company document seen by Reuters. 
		 
		Twitter declined to comment on the volume of adult content on the 
		platform. 
		 
Ghost Data identified more than 500 accounts that openly shared or
		requested child sexual abuse material over a 20-day period this month. 
		Twitter failed to remove more than 70% of the accounts during the study 
		period, according to the group, which shared the findings exclusively 
		with Reuters. 
		 
		Reuters could not independently confirm the accuracy of Ghost Data's 
findings in full, but reviewed dozens of accounts that remained online
		and were soliciting materials for "13+" and "young looking nudes." 
		  
						
		
		  
						
		 
		After Reuters shared a sample of 20 accounts with Twitter last Thursday, 
		the company removed about 300 additional accounts from the network, but 
		more than 100 others still remained on the site the following day, 
		according to Ghost Data and a Reuters review. 
		 
On Monday, Reuters shared the full list of more than 500 accounts furnished by Ghost Data. Twitter reviewed the accounts and permanently suspended them for violating its rules, Carswell said on Tuesday.
		 
		In an email to advertisers on Wednesday morning, ahead of the 
		publication of this story, Twitter said it "discovered that ads were 
		running within Profiles that were involved with publicly selling or 
		soliciting child sexual abuse material." 
		 
		Andrea Stroppa, the founder of Ghost Data, said the study was an attempt 
		to assess Twitter's ability to remove the material. He said he 
		personally funded the research after receiving a tip about the topic. 
		 
		Twitter's transparency reports on its website show it suspended more 
		than 1 million accounts last year for child sexual exploitation. 
						
		It made about 87,000 reports to the National Center for Missing and 
		Exploited Children, a government-funded non-profit that facilitates 
		information sharing with law enforcement, according to that 
		organization's annual report. 
						
		"Twitter needs to fix this problem ASAP, and until they do, we are going 
		to cease any further paid activity on Twitter," said a spokesperson for 
		Forbes. 
						
		
		  
						
		
(Photo: A promoted tweet on the Twitter app is displayed on a mobile phone near a Twitter logo, in this illustration picture taken Sept. 8, 2022. REUTERS/Florence Lo/Illustration/File Photo)
            
			
			  
            "There is no place for this type of content online," a spokesperson 
			for carmaker Mazda USA said in a statement to Reuters, adding that 
			in response, the company is now prohibiting its ads from appearing 
			on Twitter profile pages. 
			 
			A Disney spokesperson called the content "reprehensible" and said 
			they are "doubling-down on our efforts to ensure that the digital 
			platforms on which we advertise, and the media buyers we use, 
			strengthen their efforts to prevent such errors from recurring." 
			 
A spokesperson for Coca-Cola, which had a promoted tweet appear on an account tracked by the researchers, said it did not condone the material being associated with its brand, adding that "any breach of these standards is unacceptable and taken very seriously."
			 
			NBCUniversal said it has asked Twitter to remove the ads associated 
			with the inappropriate content. 
			 
			CODE WORDS 
			 
			Twitter is hardly alone in grappling with moderation failures 
			related to child safety online. Child welfare advocates say the 
			number of known child sexual abuse images has soared from thousands 
			to tens of millions in recent years, as predators have used social 
			networks including Meta's Facebook and Instagram to groom victims 
			and exchange explicit images. 
			 
			For the accounts identified by Ghost Data, nearly all the traders of 
			child sexual abuse material marketed the materials on Twitter, then 
			instructed buyers to reach them on messaging services such as 
			Discord and Telegram in order to complete payment and receive the 
			files, which were stored on cloud storage services like New 
			Zealand-based Mega and U.S.-based Dropbox, according to the group's 
			report. 
			 
			A Discord spokesperson said the company had banned one server and 
			one user for violating its rules against sharing links or content 
			that sexualize children. 
			  
            
			  
			 
			Mega said a link referenced in the Ghost Data report was created in 
			early August and soon after deleted by the user, which it declined 
			to identify. Mega said it permanently closed the user's account two 
			days later. 
			 
			Dropbox and Telegram said they use a variety of tools to moderate 
			content but did not provide additional detail on how they would 
			respond to the report. 
			 
Still, the reaction from advertisers poses a risk to Twitter's
			business, which earns more than 90% of its revenue by selling 
			digital advertising placements to brands seeking to market products 
			to the service's 237 million daily active users.  
			 
Twitter is also battling Tesla CEO and billionaire Elon Musk in court, as Musk attempts to back out of a $44 billion deal to buy the social media company over complaints about the prevalence of spam accounts and their impact on the business.
			 
			A team of Twitter employees concluded in a report dated February 
			2021 that the company needed more investment to identify and remove 
			child exploitation material at scale, noting the company had a 
			backlog of cases to review for possible reporting to law 
			enforcement. 
			 
			"While the amount of (child sexual exploitation content) has grown 
			exponentially, Twitter's investment in technologies to detect and 
			manage the growth has not," according to the report, which was 
			prepared by an internal team to provide an overview about the state 
			of child exploitation material on Twitter and receive legal advice 
			on the proposed strategies. 
			 
			"Recent reports about Twitter provide an outdated, moment in time 
			glance at just one aspect of our work in this space, and is not an 
			accurate reflection of where we are today," Carswell said. 
			 
			The traffickers often use code words such as "cp" for child 
			pornography and are "intentionally as vague as possible," to avoid 
			detection, according to the internal documents. The more that 
			Twitter cracks down on certain keywords, the more that users are 
			nudged to use obfuscated text, which "tend to be harder for 
			(Twitter) to automate against," the documents said. 
			  
            
			  
			 
Ghost Data's Stroppa said such tricks would complicate efforts to hunt down the materials, but noted that his small team of five researchers, working without access to Twitter's internal resources, was able to find hundreds of accounts within 20 days.
			 
			Twitter did not respond to a request for further comment.  
			 
			(Reporting by Sheila Dang in New York and Katie Paul in Palo Alto; 
			Additional reporting by Dawn Chmielewski in Los Angeles; Editing by 
			Kenneth Li and Edward Tobin) 
[© 2022 Thomson Reuters. All rights reserved.]
This material may not be published, broadcast, rewritten or redistributed.
Thomson Reuters is solely responsible for this content.