AI fraud: types and methods of fighting

Forklog / 01.07.2024 / 16:37

Artificial intelligence significantly simplifies workflows and boosts productivity. However, it has also become a tool for scammers, who use it to produce audio, video, messages, and deepfakes cheaply and convincingly, TechCrunch writes.

Below, we cover popular AI-assisted scams and options for dealing with the perpetrators.

Cloning the voice of family members and friends

Speech synthesis has existed for decades, but thanks to generative artificial intelligence, scammers can now create a fake from just a few seconds of audio. Anyone whose voice has been broadcast publicly, for example in a news report or a YouTube video, can be cloned.

Scammers can generate convincing fake versions of a potential victim's friends or relatives. A request for help is a common pretext, the outlet notes. For example, a parent may receive a voice message from an unknown number in which their "son" says his belongings were stolen during a trip and asks them to send money.

In January, an unknown actor cloned the voice of US President Joe Biden and made a series of calls to voters in New Hampshire urging them to boycott the vote. As a result, the Federal Communications Commission proposed a $6 million fine for the fake calls.

Another example: in the United States, a physical education teacher was accused of using a voice deepfake to frame the school principal and get him fired. Independent experts determined that the audio was fabricated because it had "an even tone, an unusually clear background, and the absence of consistent breathing sounds or pauses."

How to deal with voice cloning?

The technology has advanced so far that recognizing a fake voice is extremely difficult. Sometimes even experts cannot do it.

Any message arriving from an unknown number, email address, or account should automatically be treated as suspicious, TechCrunch advises. If someone introduces themselves as a friend or relative, verify it by contacting that person through another communication channel.

Personalized Phishing and Email Spam

Artificial intelligence can generate text much faster and in many languages, including text tailored to a specific person. Security company SlashNext reports that phishing email attacks have increased by 856% over the past year, and by 4,151% since the launch of ChatGPT in 2022.

According to the publication, personalized letters are especially dangerous. Because personal data leaks occur regularly, much of this information is publicly available. Scammers can use details about a person's recent location, purchases, or habits, and a language model will write thousands of tailored emails in seconds. The journalists give a possible example of such a message:

"Hello, Doris! I'm from the Etsy promotions team. Here is a 50% discount on the item you were recently interested in! And shipping to your Bellingham address is free if you use this link."

Such a letter looks plausible, and a neural network can produce them in bulk for different recipients, the article says.

How to deal with email spam?

Do not click on questionable links or download unfamiliar attachments. If you are not absolutely certain of the sender's authenticity and identity, it is better not to open the message or any links inside it.

Identity substitution attacks

Using leaked personal data, scammers can attempt to take over an account on virtually any service or exchange. The publication notes that a neural network can create a fake copy of the user's identity and use personal information to contact the support service with an account-recovery request.

As a rule, restoring access requires data such as a date of birth and a phone number. Sometimes a selfie is needed, but AI can handle that task as well, TechCrunch emphasizes.

How to deal with identity fraud?

Use multi-factor authentication. Any suspicious activity on the account will then trigger a notification to your email or directly to your phone.

Deepfakes and blackmail

AI makes it possible to superimpose any person's face onto another body and thereby produce a believable photo or video. This type of fraud is used for blackmail and for spreading fakes.

https://forklog.com/cryptorium/ai/chto-takoe-dipfake

For example, in January an employee of a multinational company in Hong Kong transferred $25 million to fraudsters after a video call with "colleagues" who turned out to be fake. The victim had received an invitation, allegedly from the company's financial director, to an online video meeting to discuss a confidential transaction.

All of the "company executives" present on the call were deepfake imitations. They convinced the employee to transfer $25.5 million of the company's funds to five bank accounts.

How to deal with deepfake and blackmail?

Instead of complying with a blackmailer's demands, it is better to contact the police and ask the hosting companies to remove the images.

https://forklog.com/exclusive/ai/fejki-dipfejki-i-chat-boty-kak-raspoznat-ii-manipulyatsii-v-internete

Recall that in June, Elliptic experts noted an increase in the use of AI in crypto crime.

In early June, a Dogecoin graphic designer known as DogeDesigner discovered a YouTube livestream featuring a deepfake of Tesla and SpaceX CEO Elon Musk promoting a cryptocurrency scam.
