NICHOLAS KRISTOF: Online degradation of women and girls that we meet with a shrug
Sunday, March 24, 2024 -- We have a hard-fought consensus that unwanted kissing, groping and demeaning comments are unacceptable, so how is this other form of violation, 'deepfake' pornography, given a pass? How can we care so little about protecting women and girls from online degradation?
Faked nude imagery of Taylor Swift rattled the internet in January, but this goes way beyond her: Companies make money by selling advertising and premium subscriptions for websites hosting fake sex videos of famous actresses, singers, influencers, princesses and politicians. Google directs traffic to these graphic videos, and victims have little recourse.
Sometimes the victims are underage girls.
One of them was Francesca Mani, then a 14-year-old high school student in New Jersey, who learned in October that boys at her school had used an artificial intelligence "nudify" app to fabricate nude images of her and other girls.
Fighting tears, feeling violated and humiliated, Francesca stumbled back to class. In the hallway, she said, she passed another group of girls crying for the same reason — and a cluster of boys mocking them.
“When I saw the boys laughing, I got so mad,” Francesca said. “After school, I came home, and I told my mom we need to do something about this.”
The videos on these sites are graphic and sometimes sadistic, depicting women tied up as they are raped or urinated on, for example. One site offers categories including “rape” (472 items), “crying” (655) and “degradation” (822).
Graphika, an online analytics company, identified 34 “nudify” websites, services that use AI to fabricate nude images from ordinary photos, that received a combined 24 million unique visitors in September alone.
When Francesca was targeted, her family consulted police and lawyers but found no remedy. “There’s nobody to turn to,” said her mother, Dorota Mani. “The police say, ‘Sorry, we can’t do anything.’”
The problem is that no law has clearly been broken. “We just continue to be unable to have a legal framework that can be nimble enough to address the tech,” said Yiota Souras, the chief legal officer for the National Center for Missing & Exploited Children.
Sophie Compton, a documentary maker, made a film on the topic, “Another Body,” and was so appalled that she started a campaign and website, MyImageMyChoice.org, to push for change.
“It’s become a kind of crazy industry, completely based on the violation of consent,” Compton said.
The impunity reflects a blasé attitude toward the humiliation of victims. One survey found that 74% of deepfake pornography users reported not feeling guilty about watching the videos.
We have a hard-fought consensus today that unwanted kissing, groping and demeaning comments are unacceptable, so how is this other form of violation given a pass? How can we care so little about protecting women and girls from online degradation?
“Most survivors I talk to say they contemplated suicide,” said Andrea Powell, who works with people who have been deepfaked and develops strategies to address the problem.
This is a burden that falls disproportionately on prominent women. One deepfake website displays the official portrait of a female member of Congress — and then 28 fake sex videos of her. Another website has 90. (I’m not linking to these sites because, unlike Google, I’m not willing to direct traffic to them and further enable them to profit from displaying nonconsensual imagery.)
In rare cases, deepfakes have targeted boys, often for “sextortion,” in which a predator threatens to disseminate embarrassing images unless the victim pays money or provides nudes. The FBI last year warned of an increase in deepfakes used for sextortion, which has sometimes been a factor in child suicides.
“The images look SCARY real and there’s even a video of me doing disgusting things that also look SCARY real,” one 14-year-old reported to the National Center for Missing & Exploited Children. That child sent debit card information to a predator who threatened to post the fakes online.
As I see it, Google and other search engines are recklessly directing traffic to porn sites with nonconsensual deepfakes. Google is essential to the business model of these malicious companies.
In one search I did on Google, seven of the top 10 video results were explicit sex videos involving female celebrities. Using the same search terms on Microsoft’s Bing search engine, all 10 were. But this isn’t inevitable: At Yahoo, none were.
In other spheres, Google does the right thing. Ask “How do I kill myself?” and it won’t offer step-by-step guidance — instead, its first result is a suicide helpline. Ask “How do I poison my spouse?” and it’s not very helpful. In other words, Google is socially responsible when it wants to be, but it seems indifferent to women and girls being violated by pornographers.
“Google really has to take responsibility for enabling this kind of problem,” said Breeze Liu, herself a victim of revenge porn and deepfakes. “It has the power to stop this.”
Liu was shattered when she got a message in 2020 from a friend to drop everything and call him at once.
“I don’t want you to panic,” he told her when she called, “but there’s a video of you on Pornhub.”
It turned out to be a nude video that had been recorded without Liu’s knowledge. Soon it was downloaded and posted on many other porn sites, and then apparently used to spin deepfake videos showing her performing sex acts. All told, the material appeared on at least 832 links.
Liu was mortified. She didn’t know how to tell her parents. She climbed to the top of a tall building and prepared to jump off.
In the end, Liu didn’t jump. Instead, like Francesca, she got mad — and resolved to help other people in the same situation.
“We are being slut-shamed and the perpetrators are completely running free,” she said. “It doesn’t make sense.”
Liu, who previously had worked for a venture capital firm in technology, founded a startup, Alecto AI, that aims to help victims of nonconsensual pornography locate images of themselves and then get them removed. A pilot of the Alecto app is now available free for Apple and Android devices, and Liu hopes to establish partnerships with tech firms to help remove nonconsensual content.
Tech can address problems that tech created, she argues.
Google agrees that there is room for improvement. No Google official was willing to discuss the problem with me on the record, but Cathy Edwards, a vice president for search at the company, issued a statement that said, “We understand how distressing this content can be, and we’re committed to building on our existing protections to help people who are affected.”
“We’re actively developing additional safeguards on Google Search,” the statement added, noting that the company has set up a process where deepfake victims can apply to have these links removed from search results.
A Microsoft spokesperson, Caitlin Roulston, offered a similar statement, noting that the company has a web form allowing people to request removal of a link to nude images of themselves from Bing search results. The statement encouraged users to adjust safe search settings to “block undesired adult content” and acknowledged that “more work needs to be done.”
Count me unimpressed. I don’t see why Google and Bing should direct traffic to deepfake websites whose business is nonconsensual imagery of sex and nudity. Search engines are pillars of that sleazy and exploitative ecosystem. You can do better, Google and Bing.
AI companies aren’t as culpable as Google, but they haven’t been as careful as they could be. Rebecca Portnoff, vice president for data science at Thorn, a nonprofit that builds technology to combat child sexual abuse, notes that AI models are trained using scraped imagery from the internet, but they can be steered away from websites that include child sexual abuse. The upshot: They can’t so easily generate what they don’t know.
I’m in favor of trying to crack down on deepfakes with criminal law, but it’s easy to pass a law and difficult to enforce it. A more effective tool might be simpler: civil liability for damages these deepfakes cause. Tech companies are now largely excused from liability under Section 230 of the Communications Decency Act, but if this were amended and companies knew that they faced lawsuits and had to pay damages, their incentives would change and they would police themselves. And the business model of some deepfake companies would collapse.
Sen. Michael Bennet, D-Colo., and others have proposed a new federal regulatory body to oversee technology companies and new media, just as the Federal Communications Commission oversees old media. That makes sense to me.
Australia seems a step ahead of other countries in regulating deepfakes, and perhaps that’s in part because a Perth woman, Noelle Martin, was targeted at age 17 by someone who doctored an image of her into porn. Outraged, she became a lawyer and has devoted herself to fighting such abuse and lobbying for tighter regulations.
One result has been a wave of retaliatory fake imagery meant to hurt her. Some included images of her underage sister.
“This form of abuse is potentially permanent,” Martin said. “This abuse affects a person’s education, employability, future earning capacity, reputation, interpersonal relationships, romantic relationships, mental and physical health — potentially in perpetuity.”
The greatest obstacle to regulating deepfakes, I’ve come to believe, isn’t technical or legal, though those challenges are real; it is our collective complacency.
Society was also once complacent about domestic violence and sexual harassment. In recent decades, we’ve gained empathy for victims and built systems of accountability that, while imperfect, have fostered a more civilized society.
It’s time for similar accountability in the digital space. New technologies are arriving, yes, but we needn’t bow to them. It astonishes me that society apparently believes that women and girls must accept being tormented by demeaning imagery. Instead, we should stand with victims and crack down on deepfakes that allow companies to profit from sexual degradation, humiliation and misogyny.
—
Copyright 2024 New York Times News Service. All rights reserved.