Europol warning as criminals commandeer AI chatbots



Europol this week issued a stark warning highlighting the risks posed by criminals as they get to grips with the new wave of advanced AI chatbots.

In a post shared online this week, Europe’s law enforcement agency described how tools such as OpenAI’s ChatGPT and GPT-4, and Google’s Bard, will be increasingly used by criminals looking for new ways to con members of the public.

It identified three specific areas that concern it most.

First up is fraud and social engineering, where emails are sent to targets in the hope of getting them to download a malware-infected file or click on a link that takes them to an equally dangerous website.

Phishing emails, as they’re known, are usually full of grammatical errors and spelling mistakes and end up in the junk folder. Even those that do make it to the inbox are so appallingly written that the recipient is able to quickly discard them without a second thought.

But AI chatbots are able to create well-written messages free of sloppy errors, allowing criminals to send out convincing emails that recipients will have to examine with extra attention when checking their messages.

Europol said the advanced chatbots’ ability to “reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups,” adding that such a capability can be “abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.”

A more convincing form of disinformation is also set to proliferate, with the new wave of chatbots excelling at creating authentic-sounding text at speed and scale, Europol said, adding: “This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.”

Thirdly, Europol cited coding as a new area being seized upon by cybercriminals to create malicious software. “In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages,” the agency pointed out. “For a potential criminal with little technical knowledge, this is an invaluable resource to produce malicious code.”

It said the situation “provides a grim outlook” for those on the right side of the law as nefarious activity online becomes harder to detect.

The AI chatbot craze took off in November 2022 when Microsoft-backed OpenAI released its impressive ChatGPT tool. An improved version, GPT-4, was released just recently, while Google has also unveiled its own similar tool, called Bard. All three are noted for their impressive ability to create natural-sounding text with just a few prompts, with the technology likely to assist or even replace a slew of different jobs in the coming years.

Other similar AI-based technology lets you create original images, videos, and audio with just a few text prompts, highlighting how no form of media will escape AI’s impact as the technology continues to improve.

Some leading voices have understandable concerns about its rapid rise, with a recent open letter signed by Elon Musk, Apple co-founder Steve Wozniak, and various experts claiming AI systems with human-competitive intelligence can pose “profound risks to society and humanity.” The letter called for a six-month pause to allow for the creation and implementation of safety protocols for the advanced tools, adding that if handled in the right way, “humanity can enjoy a flourishing future with AI.”
