Generative artificial intelligence - unfortunately also a superpower for cyber criminals
Protect yourself from cybercriminals, who are now using even more insidious methods to defraud people thanks to self-learning AI and data theft.
LKA NRW

Generative artificial intelligence (generative AI) not only opens up opportunities for innovative developments, it also gives criminals new ways to commit offences, particularly in the area of cybercrime. Its ability to generate text, images and even speech harbors many potential dangers.

Cybercrime with the help of generative AI

Social engineering

Social engineering exploits human traits such as helpfulness, trust, fear or respect for authority in order to manipulate people. In this way, cyber criminals trick victims into disclosing confidential information, bypassing security functions, making bank transfers or installing malware on a private device or a computer in the company network. Attackers exploit the "human factor" as the supposedly weakest link in the security chain to realize their criminal intentions. Generative AI can be used in such scenarios to carry out personalized attacks that are tailored to the individual behavior or preferences of potential victims.


Phishing

Perpetrators can use AI to make phishing attacks more effective, for example by mass-producing personalized phishing emails tailored to their potential victims. This increases the likelihood that recipients will disclose sensitive information.

Deepfake

Deepfake AI enables the creation of realistic-looking fake media content, for example manipulated videos or audio recordings. Cyber criminals use this technology to defraud people. Possible offences range from fraud (CEO fraud, call center fraud, the grandchild trick) and blackmail to defamation, insults, false accusations, the distribution of (fake) pornographic content and politically motivated acts.


Social bots

AI social bots are computer programs created with the help of AI to imitate human-like behavior in social networks. They can be used by cybercriminals to spread disinformation, manipulate political opinions or for phishing purposes. Social bots can automatically create content, publish comments or even fake interactions with real users.

Online fraud/fake stores

AI can also be used to set up fake stores by automatically generating visually convincing but fraudulent websites. In addition, fake product reviews can be generated to make the fake store appear more authentic.

Money laundering

AI can be used in various ways for money laundering. For example, it can carry out automated transactions in small, hard-to-trace steps to disguise the flow of money. AI can also be used to steal or falsify identities in order to open bank accounts or initiate money transfers.


All of these methods are conceivable, frightening scenarios with serious consequences. They only hint at the potential financial damage, the psychological toll on victims and the possible impact on our society as a whole.

Generative AI - what exactly is it?

When people talk about AI, they usually mean "generative AI". It encompasses all machine learning architectures and technologies that can generate new content based on previously analyzed examples ("training"). This is where generative models differ from classic machine learning models, which only examine data for existing patterns.

Generative models are used in a wide variety of areas, such as computer vision, object detection, natural language processing and the creation of creative content.

Application examples

Text generation - the computer trains itself and learns continuously

Generative models can automatically create human-like texts. These models are trained using extensive data sets and can then independently produce new, coherent and context-rich texts. The possible applications range from simple sentences to complex, thematically diverse paragraphs. The best-known example is ChatGPT.
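
How little effort this requires can be sketched in a few lines of Python using the freely available transformers library; the small GPT-2 model and the prompt chosen here are only illustrative examples:

    # Minimal text generation sketch (model and prompt are illustrative).
    from transformers import pipeline

    # Load a small, publicly available language model.
    generator = pipeline("text-generation", model="gpt2")

    # The model continues the prompt with newly generated, coherent text.
    result = generator(
        "Artificial intelligence is changing everyday life because",
        max_new_tokens=40,
    )
    print(result[0]["generated_text"])

Large commercial systems such as ChatGPT work on the same principle, only with far larger models and training data.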

Style transfer and modification - the "swirl face"

Generative models can transfer the style of a template to another image or video. This makes it possible to change the artistic character of an image or video, for example by transferring the style of a famous painting to a photo. AI models can learn to extract stylistic features and apply them to other visual content. AI can also be used to undo filters that were applied to an image. This enabled investigators to identify and arrest the offender known as "Swirl Face", who had obscured his face with a swirl filter.

Image generation

AI can also independently create realistic images of faces, artworks or scenes. This AI is trained using extensive data sets that provide it with a wide range of visual information. There are no longer any limits to the creation of images. The best-known examples of such AI are Stable Diffusion and Midjourney.
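
As a rough illustration, a publicly available model such as Stable Diffusion can be driven with a few lines of Python via the diffusers library; the model name and prompt below are only examples:

    # Minimal image generation sketch (model name and prompt are examples).
    from diffusers import StableDiffusionPipeline

    # Download a publicly available Stable Diffusion checkpoint.
    pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")

    # Generate an image from a plain-text description and save it.
    image = pipe("photorealistic portrait of a person who does not exist").images[0]
    image.save("generated_face.png")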

Data augmentation

Data augmentation refers to a process in which generative models are used to expand and improve existing data sets. This is usually done by transforming and varying existing data to generate new examples that help the model during the learning phase. In language processing, for example, variations in sentence structure, word choice or grammar are introduced. The purpose of data augmentation is to improve the robustness and capability of a model by training it with a wider variety of data, which is particularly useful when the original data set is small. This allows the model to adapt better to different situations and to generalize better.
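
The principle can be illustrated with a small Python sketch: new training examples are derived from existing sentences by varying the word choice (the sentence and the synonym table here are made up purely for illustration):

    # Minimal data augmentation sketch for text (synonym table is illustrative).
    import random

    SYNONYMS = {
        "quick": ["fast", "rapid"],
        "message": ["e-mail", "note"],
        "unusual": ["strange", "unexpected"],
    }

    def augment(sentence, n_variants=3):
        """Create simple variants of a sentence by swapping in synonyms."""
        variants = []
        for _ in range(n_variants):
            words = [random.choice(SYNONYMS.get(w, [w])) for w in sentence.split()]
            variants.append(" ".join(words))
        return variants

    print(augment("a quick message about an unusual login"))

Image data can be varied in the same way, for example by rotating, cropping or recoloring existing pictures.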

How to protect yourself against data theft

Cyber criminals are also using AI

The increasing use of AI is also having an impact on crime, as it opens up new ways for criminals to deceive people. In this context, it is crucial to follow the tried and tested prevention tips for online behavior, such as sharing as little personal data as possible and not posting photos of your children.

Be careful with personal data online

The police generally advise users to be careful with personal data online. A healthy degree of skepticism towards unexpected messages, links or requests, including when browsing websites, provides effective protection against cybercrime, even when AI is at work in the background.

When using social media in particular, it is important to think critically about what you disclose. Freely available images, videos and voice recordings can be used by AI systems as training material. Cyber criminals could use this content to deceive employers, friends or relatives, for example.

Check your user settings

It is therefore important to check the privacy settings of your user accounts and adjust them if necessary. Practical tips on optimizing these settings can be found on the website of the EU initiative klicksafe, under the information on the respective providers, and on saferinternet.at under "Privacy Guides".

Further information

The Federal Office for Information Security also provides prevention tips and detailed information on the topic of deepfake.

BSI on the topic of deepfakes

On the website of the Police Crime Prevention Programme (ProPK), you will also find prevention tips on the topic of deepfakes.

ProPK on the topic of deepfakes

We also recommend that all citizens secure their digital accounts with strong, unique passwords. Further information is available from the NRW state campaign www.mach-dein-passwort-stark.de.

In urgent cases: Police emergency number 110