GPT-4 is now official: OpenAI announced the model on Tuesday, highlighting improvements in accuracy, creative expression, and collaboration, along with safer content generation.
ChatGPT Plus users can try the new model starting today, and developers can access it through the API. OpenAI president and co-founder Greg Brockman plans to discuss some of GPT-4's capabilities and limitations with developers in a livestream demo at 1 p.m. PT.
Among its new features, the latest iteration of the GPT language model accepts new modes of input: in addition to text, you can now upload images for analysis and receive answers as text. GPT-4 can also produce more creative results when given a more detailed prompt.
The language model also now supports up to 25,000 words of text, which suggests greater accuracy. Prior models could handle only about 1,000 words at a time, and users were advised to keep prompts to around 500 words to avoid confusing the ChatGPT generator.
GPT-4 was developed over the course of six months and was trained on Microsoft Azure AI supercomputers. OpenAI claims this training has made the model “safer and more aligned,” making it 82% less likely to respond to prompts for negative content and 40% more likely to generate desired information.
However, the company notes that limitations, including “social biases, hallucinations, and adversarial prompts,” remain in the language model, and it continues to address them through “transparency, user education, and wider AI literacy.”
OpenAI detailed its collaboration with several brands that have built app features using GPT-4, including Duolingo, which has deepened its language conversations; Be My Eyes, which has transformed visual accessibility; and Stripe, which has updated its user experience to combat fraud. Other brands and organizations include Morgan Stanley, Khan Academy, and the Government of Iceland.