Google has announced the launch of Gemini, its most capable artificial intelligence model to date, in three versions: Gemini Ultra, the largest and most capable; Gemini Pro, which is versatile across a wide range of tasks; and Gemini Nano, designed for on-device and mobile tasks. Google plans to license Gemini to customers through Google Cloud for use in their own applications, in a challenge to OpenAI's ChatGPT.

Gemini Ultra excels at MMLU (massive multitask language understanding), outperforming human experts across subjects such as math, physics, history, law, medicine, and ethics, according to Google. It is expected to power Google products like the Bard chatbot and the Search Generative Experience. Google aims to monetize its AI work and plans to offer Gemini Pro through its cloud services.

“Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research,” wrote CEO Sundar Pichai in a blog post on Wednesday. “It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across, and combine different types of information including text, code, audio, image, and video.”

[Infographic: how Google's Gemini AI compares with ChatGPT. Image: Google]

Starting December 13, developers and enterprises can access Gemini Pro via the Gemini API in Google AI Studio or Google Cloud Vertex AI, while Android developers can build with Gemini Nano. Gemini will also enhance Google's Bard chatbot, which uses Gemini Pro for more advanced reasoning, planning, and understanding. An upcoming Bard Advanced, powered by Gemini Ultra, is set to launch next year and will likely be positioned to challenge OpenAI's GPT-4.
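For developers, the Gemini API is also reachable over plain REST. Below is a minimal Python sketch using only the standard library, assuming an API key generated in Google AI Studio; the endpoint path and JSON schema here follow Google's public generateContent REST reference at launch, so check the current documentation before relying on them.

```python
import json
import urllib.request

# Gemini Pro REST endpoint (per Google's generateContent reference).
API_URL = ("https://generativelanguage.googleapis.com/"
           "v1beta/models/gemini-pro:generateContent")


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build a POST request carrying the prompt in the expected JSON body."""
    body = json.dumps({"contents": [{"parts": [{"text": prompt}]}]})
    return urllib.request.Request(
        f"{API_URL}?key={api_key}",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def ask_gemini(prompt: str, api_key: str) -> str:
    """Send the request and return the first candidate's text."""
    with urllib.request.urlopen(build_request(prompt, api_key)) as resp:
        data = json.load(resp)
    return data["candidates"][0]["content"]["parts"][0]["text"]
```

Splitting request construction from sending keeps the sketch testable without a network call; `ask_gemini("Explain TPUs", key)` would perform the actual round trip.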

Despite questions about how Bard will be monetized, Google has emphasized creating a good user experience and has not provided specifics on pricing or access to Bard Advanced. The Gemini models, particularly Gemini Ultra, have undergone extensive testing and safety evaluations, according to Google. And while Ultra is the largest of the three models, Google claims it is more cost-effective and efficient than its predecessors.

Google also introduced its next-generation tensor processing unit, TPU v5p, for training AI models. The chip promises better price-performance than TPU v4. The announcement follows recent custom-silicon moves by cloud rivals Amazon and Microsoft.

The launch of Gemini, after a reported delay, underscores Google’s commitment to advancing AI capabilities. The company has been under scrutiny for how it plans to turn AI into profitable ventures, and the introduction of Gemini aligns with its strategy to offer AI services through Google Cloud. The technical details of Gemini will be further outlined in a forthcoming white paper, providing insights into its capabilities and innovations.


By HS