The Rise of AI Voice Phishing

(Image courtesy of CSO Online)

The rise of AI voice phishing poses a new and dangerous threat to the world.

Karyss Park, Photojournalist

Is a loved one suddenly ringing you up for money? Maybe they've lost their credit card or need some extra cash. Even worse, perhaps they're being threatened and need you to pay a ransom for their freedom. Something feels off, but you can't just leave them hanging. Wait! Before handing over the money, consider that you may be the target of an AI voice phishing attack.

Voice phishing has long been one of the many methods scammers use over the phone, but recently a newer and more dangerous form has emerged: AI voice phishing. In a voice phishing scheme, scammers impersonate companies, government agencies, specific individuals, and others to demand money and information (www.ag.state.mn.us). Although voice phishers will target anyone they can, many of their victims are older adults or specific people on whom the scammers have gathered intel, making them easier to deceive or blackmail.

With AI voice phishing, scammers can replicate an individual's voice and make it say almost anything. They use computer programs to edit voice clips, producing natural-sounding speech and rearranging words to serve their purpose. Because the technology can imitate almost anyone nearly perfectly, scammers use it to impersonate victims' loved ones, counting on strong emotional bonds to override rational judgment and make victims more likely to give in. In March, CNN featured a man targeted by an AI voice phishing attack. After hearing what sounded like his daughter crying and pleading for help over the phone, he immediately drove off to deliver a large ransom under the scammers' instructions. Fortunately, his wife caught on and stopped him from handing over the money, but the incident illustrates the very real danger of AI voice phishing technology.

In 2022, individuals lost about $11 million to these scams, and AI voice phishing has the potential to become catastrophically harmful to companies and people's lives (Wright & Schwartz). Mrs. Stedman, YLHS' government teacher, believes "AI voice phishing is scary because most people don't even know what it is. It bothers [her] to think of all the ways this could be used negatively." The number of victims keeps growing, and AI technologies are only becoming more advanced. Other AI technologies, like ChatGPT and AI art generators, have also proven to be double-edged swords, stirring controversy over their use for cheating in schools and for stealing intellectual property such as art and writing.

Ultimately, appropriate countermeasures must be developed to regulate AI technologies more broadly and combat their detrimental effects.