In a chilling demonstration of how far artificial intelligence has come, and how easily it can be abused, a sophisticated voice-cloning scam shook Italy's elite in 2025. The targets? Some of the country's most prominent business figures, including fashion magnate Giorgio Armani and Patrizio Bertelli, chairman of the Prada Group.
The con unfolded like a Hollywood thriller. Scammers equipped with advanced AI voice-cloning software replicated the voice of Italian Defense Minister Guido Crosetto. Using expertly crafted deepfake audio, they called multiple high-profile individuals and urgently requested money to secure the release of journalists who had supposedly been kidnapped abroad. The calls were strikingly convincing, complete with natural speech pauses, matching tonal nuances, and authoritative phrasing. At least one victim, former Inter Milan president Massimo Moratti, wired a substantial sum before the fraud came to light.
Italian authorities swiftly launched an investigation after irregularities were spotted, leading to the freezing of nearly €1 million in suspicious transactions. According to law enforcement sources, the scam was part of a larger international fraud ring that appears to have been testing a new wave of AI-powered deception.
What makes this incident particularly unsettling is the psychological precision used in the attack. The scammers didn’t just impersonate a generic government official—they cloned a recognizable national figure, exploiting his position of power and public familiarity to trigger emotional and impulsive responses in their victims. These weren’t random phishing emails or dubious links. These were seemingly urgent and personal requests from someone the victims trusted.
The defense minister himself has since issued a public statement, confirming the misuse of his identity and condemning the incident. “It is a grave and disturbing abuse of technology,” Crosetto stated. “No voice should be borrowed to commit crimes.”
Experts warn that this is only the beginning. As generative AI becomes more accessible and realistic, impersonation attacks may increase in frequency and sophistication. Cybersecurity firms across Europe are now urging businesses and individuals to verify any unusual requests through multiple channels, even when the voice on the other end of the line sounds familiar.
"This isn't the scammer from the 2000s asking for your bank account," said Dr. Lucia Vassallo, a cybersecurity analyst based in Milan. "This is the future of fraud—hyper-personalized, ultra-convincing, and increasingly difficult to detect."
The broader implications of this scam touch not just on personal security, but national security and political stability. In the wrong hands, AI voice-cloning could be used to issue false military orders, manipulate elections, or discredit leaders—all with a few seconds of source audio.
Italy’s financial watchdog and national security agencies are now coordinating with Interpol and Europol in hopes of tracing the criminal network behind the scam. In the meantime, the incident has become a cautionary tale across Europe and beyond, sounding the alarm on just how fragile reality can become in an age of synthetic voices and artificial trust.
For Italian business leaders, it is a costly lesson. In the age of AI, hearing is no longer believing.