Is ChatGPT safe? Here are the risks to consider


There’s no doubt that ChatGPT is a revolutionary advance in the usefulness and potential of any computer or smartphone connected to the internet, but is it safe to use?

There are some big concerns about the overall evolution of generative AI, with some tech leaders even calling for a pause in development. But for the individual, safety is a relative term, particularly when it comes to tools. So, here’s everything to consider before you jump in.

Privacy and financial leaks

In at least one instance, chat history between users was mixed up. On March 20, 2023, ChatGPT creator OpenAI discovered a problem, and ChatGPT was down for several hours. Around that time, a few ChatGPT users saw other people’s conversation history instead of their own. Possibly more concerning was the news that payment-related information from ChatGPT Plus subscribers might have leaked as well.

OpenAI published a report on the incident and corrected the bug that caused the problem. That doesn’t mean new issues won’t arise in the future. With any online service, there is a risk of accidental leaks like this, as well as cybersecurity breaches from the growing army of hackers.

OpenAI’s privacy policy

According to OpenAI’s privacy policy, your contact details, transaction history, network activity, content, location, and login credentials might be shared with affiliates, vendors and service providers, law enforcement, and parties involved in transactions.

In some cases, this is unavoidable. OpenAI might use third-party payment processors, for example, so some sharing of transaction details is to be expected. The company must also comply with legal obligations, and some of this data might be used for research.

Even when it’s easy to justify collecting data, the potential for misuse and leaks is a valid safety concern. OpenAI’s ChatGPT FAQ suggests you not share sensitive information and warns that prompts can’t be deleted.

ChatGPT as a hacking tool


On the subject of cybersecurity, some experts are concerned about ChatGPT’s potential use as a hacking tool. It’s clear that the advanced chatbot can help anyone write a very official-sounding document, and ChatGPT could be called upon to construct a convincing phishing email.

The AI is also a good teacher, making it easy to learn new skills with ChatGPT, possibly even dangerous ones, such as offensive programming techniques and details of network infrastructure. The combination of ChatGPT and dark web forums could lead to numerous novel attacks that further strain the already stretched resources of cybersecurity researchers.

For example, someone on Twitter posted an example of asking GPT-4 to write instructions for hacking a computer, and it responded in terrifying detail.

Well, that was fast…

I just helped create the first jailbreak for ChatGPT-4 that gets around the content filters every time

credit to @vaibhavk97 for the idea, I just generalized it to make it work on ChatGPT

here's GPT-4 writing instructions on how to hack someone's computer pic.twitter.com/EC2ce4HRBH

— Alex (@alexalbert__) March 16, 2023

ChatGPT can write code based on plain English requests, allowing anyone to generate a program. With the new ChatGPT plug-ins feature, the AI can even run self-generated code.
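To give a sense of how low the barrier is, here’s a minimal sketch of code generation through OpenAI’s API, using the openai Python package’s chat completions interface as it existed at the time; the model name, prompt, and placeholder key are only illustrative:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; requires a real OpenAI API key

# Describe the program you want in plain English.
request = "Write a Python script that renames every .txt file in a folder to lowercase."

# Send the request to the chat completions endpoint.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # example model; GPT-4 is available the same way
    messages=[{"role": "user", "content": request}],
)

# The generated program comes back as plain text, ready to copy and run.
print(response["choices"][0]["message"]["content"])

A few lines like these are all it takes, which is exactly why the ability to run self-generated code raises the stakes.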

OpenAI sandboxed this capability to prevent dangerous uses, but we’ve already seen an example of OpenAI’s GPT-3 API being hacked. OpenAI must be very careful with security as the plug-in feature and internet access are rolled out to more people.

ChatGPT and job safety

ChatGPT has been worrying teachers because it makes plagiarism incredibly easy. OpenAI trained its chatbot on exactly the kind of information students are expected to learn and write essays about to prove they’ve mastered a subject.

While that’s not a safety concern, teachers also need to be aware that ChatGPT can educate students on a broad range of topics, providing one-on-one attention and instant answers to questions. In the future, AI might be called upon to help teach students in overcrowded classrooms or to assist with tutoring.

For authors, ChatGPT could seem threatening. In a matter of seconds, it can generate thousands of words, a task that takes hours for a person, even a professional writer.

At the moment, there are still enough errors to make it more useful as a research or writing tool than as a replacement for authors. If those accuracy issues are resolved, AI could begin taking jobs.

ChatGPT has a vast number of uses, and more are being discovered every day. Beyond communication and learning, ChatGPT can even analyze a photo of a hand-drawn app and write a program to create it, as shown in OpenAI’s demonstration of the new capabilities of GPT-4.

ChatGPT scams

It isn’t OpenAI’s fault, but a side effect of any exciting new technology is a surge in scams that promise greater access or new features. Since access to ChatGPT is still limited and sometimes slow, there’s a strong demand for more ChatGPT goodness.

Each new update brings expanded capabilities, some of which require a membership and have limited availability. ChatGPT fervor provides fertile ground for scams. Offers of free, unlimited access at the fastest speed and with the best new features are hard to pass up.

Unfortunately, the old saying still holds — if it sounds too good to be true, it probably is. Be wary of ChatGPT offers that come via email or social media. It’s best to check trusted media outlets for news or go directly to OpenAI to confirm any invitations or deals that sound iffy.

ChatGPT is both powerful and terrifying. As one of the first examples of a publicly available AI with good language skills, its challenges and successes should serve as a wake-up call for everyone. It’s important to use caution with new AI technology. It’s too easy to get caught up in the excitement and forget that you’re dealing with an online service that can be hacked or misused.

Slow and steady wins the race

OpenAI is aware of the need to proceed more slowly as ChatGPT gains more skills and internet access. Moving too quickly could lead to backlash and potential regulatory burdens.
