What started as a ski vacation Instagram post ended in financial ruin for a French interior designer after scammers used AI to convince her she was in a relationship with Brad Pitt.
The 18-month scam targeted Anne, 53, who received an initial message from someone posing as Jane Etta Pitt, Brad's mother, claiming her son "needed a woman like you."
Not long after, Anne began talking to what she believed was the Hollywood star himself, complete with AI-generated photos and videos.
"We're talking about Brad Pitt here and I was stunned," Anne told French media. "At first, I thought it was fake, but I didn't really understand what was happening to me."
The relationship deepened over months of daily contact, with the fake Pitt sending poems, declarations of love, and eventually a marriage proposal.
"There are so few men who write to you like that," Anne said. "I loved the man I was talking to. He knew how to talk to women and it was always very well put together."
The scammers' tactics proved so convincing that Anne eventually divorced her millionaire entrepreneur husband.
After building rapport, the scammers began extracting money with a modest request: €9,000 for supposed customs fees on luxury gifts. The demands escalated when the impersonator claimed to need cancer treatment while his accounts were frozen due to his divorce from Angelina Jolie.
A fabricated doctor's message about Pitt's condition prompted Anne to transfer €800,000 to a Turkish account.

"It cost me to do it, but I thought I might be saving a man's life," she said. When her daughter recognized the scam, Anne refused to believe it: "You'll see when he's here in person, then you'll ask for forgiveness."
Her illusions were shattered when she saw news coverage of the real Brad Pitt with his partner Inés de Ramon in summer 2024.
Even then, the scammers tried to maintain control, sending fake news alerts dismissing those reports and claiming Pitt was actually dating an unnamed "very special person." In a final roll of the dice, someone posing as an FBI agent extracted another €5,000 by offering to help her escape the scheme.
The aftermath proved devastating: three suicide attempts led to hospitalization for depression.
Anne opened up about her experience to French broadcaster TF1, but the interview was later taken down after she faced intense cyberbullying.
Now living with a friend after selling her furniture, she has filed criminal complaints and launched a crowdfunding campaign for legal aid.
A tragic situation, though Anne is certainly not alone. Her story parallels a huge surge in AI-powered fraud worldwide.
Spanish authorities recently arrested five people who stole €325,000 from two women through similar Brad Pitt impersonations.
Speaking about AI fraud last year, McAfee's Chief Technology Officer Steve Grobman explained why these scams succeed: "Cybercriminals are able to use generative AI for fake voices and deepfakes in ways that used to require much more sophistication."
It's not just individuals who are in the scammers' crosshairs, but businesses, too. In Hong Kong last year, fraudsters stole $25.6 million from a multinational company using AI-generated executive impersonators on video calls.
Superintendent Baron Chan Shun-ching described how "the employee was lured into a video conference that was said to have many participants. The realistic appearance of the individuals led the employee to execute 15 transactions to five local bank accounts."
Would you be able to spot an AI scam?
Most people would fancy their chances of spotting an AI scam, but research says otherwise.
Studies show people struggle to distinguish real faces from AI-generated ones, and synthetic voices fool roughly a quarter of listeners. That evidence dates from last year, and AI image, voice, and video synthesis have advanced considerably since.
Synthesia, an AI video platform that generates realistic human avatars speaking multiple languages, now backed by Nvidia, recently doubled its valuation to $2.1 billion. Video and voice synthesis platforms like Synthesia and ElevenLabs are among the tools fraudsters use to launch deepfake scams.
Synthesia itself acknowledges the risk, recently demonstrating its commitment to preventing misuse through a rigorous public red-team test, which showed how its compliance controls successfully block attempts to create non-consensual deepfakes or to use avatars for harmful content such as promoting suicide or gambling.
Whether such measures are effective at preventing misuse, the jury is still out.
As companies and individuals wrestle with compellingly real AI-generated media, the human cost, illustrated by Anne's devastating experience, will likely rise.