How underground groups use stolen identities and deepfakes

These bogus videos are already being used to cause problems for public figures. Celebrities, high-ranking government officials, and well-known corporate personalities are the easiest targets because large amounts of high-definition photos and videos of them are available online. Social engineering scams using their faces and voices are already spreading.

Given the tools and the deepfake technology available, we can expect even more attacks and scams aimed at manipulating victims with voice and video spoofs.

How deepfakes can affect existing attacks, fraud and monetization patterns

Criminals can adapt deepfake technology to their current malicious activities, and we are already seeing the first wave of these attacks. Below is a list of existing attacks alongside attacks we can expect in the near future:

IM scams. Impersonating someone to request money transfers has been a popular scam for years, and now criminals can use deepfakes in video calls. For example, they might impersonate a person and contact their friends and family to request a transfer or a simple top-up of a phone balance.

BEC. Business email compromise was already quite successful even without deepfakes. Now attackers can use fake videos in calls, impersonate executives or business partners, and request money transfers.

Account creation. Criminals can use deepfakes to bypass identity verification services and create accounts at banks, financial institutions, and even government services on behalf of others, using copies of stolen ID documents. They can exploit a victim's identity to defeat verification processes that are often conducted via video calls. Such accounts can later be used for money laundering and other malicious activities.

Account hijacking. Criminals can hijack accounts that require identification through video calls. They can take over a victim's financial account and simply withdraw or transfer funds. Some financial institutions require online video verification to enable certain features in their online banking apps; such checks can also be the target of deepfake attacks.

Blackmail. Using deepfake videos, malicious actors can mount more convincing extortion attacks, even fabricating false evidence with deepfake technology.

Disinformation campaigns. Deepfake videos also enable more effective disinformation campaigns and can be used to manipulate public opinion. Some attacks, such as pump-and-dump schemes, rely on messages from famous people, and these messages can now be created with deepfake technology. Such schemes can have financial, political, and even reputational consequences.

Technical support scams. Scammers can use deepfaked false identities to socially engineer unsuspecting users into sharing payment credentials or granting access to IT resources.

Social engineering attacks. Malicious actors can use deepfakes to manipulate the friends, family, or co-workers of the person they are impersonating. Social engineering attacks, such as those Kevin Mitnick was famous for, can therefore take on a new dimension.

Takeover of Internet of Things (IoT) devices. Devices that use voice or face recognition, such as Amazon's Alexa and many smartphone models, will be on deepfake attackers' target lists.

Conclusions and safety recommendations

We are already seeing the first wave of criminal and malicious activities using deepfake. However, it is likely that more serious attacks will occur in the future because of the following issues:

  1. There is enough content on social media to create deepfake models for millions of people. People in every country, city, village or specific social group have their social media exposed to the world.
  2. All technology pillars are in place. Implementation of attacks does not require significant investment, and attacks can be carried out not only by nation states and corporations, but also by private individuals and small criminal groups.
  3. Actors can already impersonate politicians, senior executives, and celebrities and steal their identities. This can significantly increase the success rate of certain attacks, such as financial scams, short-lived disinformation campaigns, public opinion manipulation, and extortion.
  4. Ordinary people's identities can also be stolen or recreated from publicly exposed media. Cybercriminals can impersonate victims or use their identities for malicious activities.
  5. Deepfake models can also generate identities of people who never existed. These identities can be used in a variety of fraud schemes. Traces of such use have already been spotted in the wild.

What can individuals and organizations do to counter and mitigate the effects of deepfake attacks? We have some recommendations for casual users as well as organizations that use biometric templates for validation and authentication. Some of these validation methods can also be automated and widely implemented.

  • The multi-factor authentication approach should be the standard for authenticating sensitive or critical accounts.
  • Organizations should authenticate a user by three basic factors: something the user has, something the user knows, and something the user is. Make sure the “something” items are chosen wisely.
  • For financial organizations, staff awareness training using relevant samples and the know-your-customer (KYC) principle are essential. Deepfake technology is not perfect, and employees should be alert to specific red flags.
  • Social media users should minimize the exposure of high-quality personal images.
  • When verifying sensitive accounts (for example bank or business profiles), users should prioritize the use of biometric templates that are less visible to the public, such as irises and fingerprints.
  • Significant policy changes are needed to tackle this problem on a larger scale. These policies should address the use of up-to-date and previously disclosed biometric data. They also need to take into account the current state of cybercrime and prepare for the future.
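The three-factor model in the recommendations above can be sketched as a simple verification flow. This is a minimal illustration, not a reference implementation: the function names, the demo salt, and the 0.90 biometric threshold are all assumptions, and a real system would use a standard TOTP library and a liveness-aware biometric matcher rather than the placeholders here.

```python
import hashlib
import hmac

def check_knowledge(password: str, stored_hash: str) -> bool:
    # "Something the user knows": compare a salted hash in constant time.
    # The salt here is a fixed demo value; real systems use per-user salts.
    digest = hashlib.sha256(b"demo-salt:" + password.encode()).hexdigest()
    return hmac.compare_digest(digest, stored_hash)

def check_possession(submitted_code: str, expected_code: str) -> bool:
    # "Something the user has": a one-time code from a registered device.
    return hmac.compare_digest(submitted_code, expected_code)

def check_inherence(match_score: float, threshold: float = 0.90) -> bool:
    # "Something the user is": a score from a biometric matcher. To resist
    # deepfakes, the matcher should require liveness, not a static face match.
    return match_score >= threshold

def authenticate(password: str, stored_hash: str,
                 code: str, expected_code: str,
                 biometric_score: float) -> bool:
    # Require all three factors; failing any single one denies access.
    return (check_knowledge(password, stored_hash)
            and check_possession(code, expected_code)
            and check_inherence(biometric_score))
```

The point of the sketch is the conjunction: a deepfake may defeat the biometric factor alone, but it does not grant the attacker the password or the registered device.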

The security implications of deepfake technology, and of attacks that use it, are real and harmful. As we have shown, the potential victims of these attacks are not only organizations and senior management but also ordinary people. Given the wide availability of the necessary tools and services, these techniques are accessible to less tech-savvy attackers and groups, meaning malicious activities can be carried out at scale.
