Since its inception, voice recognition technology has been seen as a savior for cybersecurity. However, reports indicate that the same software used by voice identification systems can work against the very people it was meant to protect. A watertight $243,000 voice fraud case is already before the courts, as reported by the Wall Street Journal.
The conventional wisdom has been that a signature can easily be forged, and voice recognition technology, which became popular a few years back, seemed to have come to the rescue.
In an article for Quartz, writer Ephrat Livni explains that cases of cybercriminals using artificial intelligence to rob victims of huge sums of money through hard-to-decipher voice calls have been on the rise. In other words, crooks are now able to use AI to fake a person's voice for illegal ends.
A case in point confirming this new form of swindling occurred in March this year. Fraudsters used AI-powered software to impersonate an executive director of a UK-based energy firm and convinced a member of his accounting staff to urgently send a large sum of money to a Hungarian supplier.
The staff member then suspected that something was not right and requested a refund. While waiting for the reversal, another suspicious call came in from Australia, urgently pressing for more money to be transferred to a certain company, but this time he did not respond.
It later turned out that the executive had never requested any transaction and that the calls had been artificially generated using the software.
The customary cautionary advice has been: never trust what you see, as signatures can be faked; not to mention that deepfake videos can be used for impersonation to issue false instructions.
Now there is a new warning: be careful, as what you hear may not be who you think it is. AI-powered voice impersonation is real and already on the rise.