ChatGPT isn’t yet clever enough to find its own flaws, so its creator is turning to humans for help.
OpenAI unveiled a bug bounty program on Tuesday, encouraging people to locate and report vulnerabilities and bugs in its artificial-intelligence systems, such as ChatGPT and GPT-4.
In a post on its website outlining details of the program, OpenAI said that rewards will range from $200 for low-severity findings up to $20,000 for what it called “exceptional discoveries.”
The Microsoft-backed company said that its ambition is to create AI systems that “benefit everyone,” adding: “To that end, we invest heavily in research and engineering to ensure our AI systems are safe and secure. However, as with any complex technology, we understand that vulnerabilities and flaws can emerge.”
Addressing security researchers interested in getting involved in the program, OpenAI said it recognizes “the critical importance of security and view[s] it as a collaborative effort. By sharing your findings, you will play a crucial role in making our technology safer for everyone.”
With more and more people taking ChatGPT and other OpenAI products for a spin, the company is keen to quickly track down any potential issues to ensure the systems run smoothly and to prevent any weaknesses from being exploited for nefarious purposes. OpenAI therefore hopes that by engaging with the tech community it can resolve any issues before they become more serious problems.
The California-based company has already had one scare, when a flaw exposed the titles of some users’ conversations that should have stayed private.
Sam Altman, CEO of OpenAI, said after the incident last month that he considered the privacy mishap a “significant issue,” adding: “We feel awful about this.” It’s now been fixed.
The blunder became a bigger problem for OpenAI when Italy’s data-protection regulator raised serious concerns over the privacy breach and temporarily banned ChatGPT while it carries out a thorough investigation. The regulator is also demanding details of the measures OpenAI intends to take to prevent a repeat of the incident.