In December, we kicked off the agentic era by releasing an experimental version of Gemini 2.0 Flash — our highly efficient workhorse model for developers, with low latency and enhanced performance. Earlier this year, we updated 2.0 Flash Thinking Experimental in Google AI Studio, which improved its performance by combining Flash's speed with the ability to reason through more complex problems.
And last week, we made an updated 2.0 Flash available to all users of the Gemini app on desktop and mobile, helping everyone discover new ways to create, interact and collaborate with Gemini.
Today, we're making the updated Gemini 2.0 Flash generally available via the Gemini API in Google AI Studio and Vertex AI. Developers can now build production applications with 2.0 Flash.
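For developers who want a feel for what that looks like, here is a minimal sketch of calling 2.0 Flash through the Gemini API; the `google-genai` Python SDK, the `GEMINI_API_KEY` environment variable and the prompt are illustrative assumptions rather than part of this announcement.

```python
# Minimal sketch: call Gemini 2.0 Flash through the Gemini API.
# Assumes `pip install google-genai` and an API key from Google AI Studio
# exported as GEMINI_API_KEY (both assumptions for this example).
import os
from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Summarize the trade-offs between model size and latency.",
)
print(response.text)
```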
We're also releasing an experimental version of Gemini 2.0 Pro, our best model yet for coding performance and complex prompts. It's available in Google AI Studio and Vertex AI, and in the Gemini app for Gemini Advanced users.
We're releasing a new model, Gemini 2.0 Flash-Lite, our most cost-efficient model yet, in public preview in Google AI Studio and Vertex AI.
Finally, 2.0 Flash Thinking Experimental will be available to Gemini app users in the model dropdown on desktop and mobile.
All of these models will feature multimodal input with text output on release, with more modalities ready for general availability in the coming months. More information, including specifics about pricing, can be found in the Google for Developers blog. Looking ahead, we're working on more updates and improved capabilities for the Gemini 2.0 family of models.
2.0 Flash: a new update for general availability
First introduced at I/O 2024, the Flash series of models is popular with developers as a powerful workhorse model, optimal for high-volume, high-frequency tasks at scale and highly capable of multimodal reasoning across vast amounts of information with a context window of 1 million tokens. We've been thrilled to see its reception by the developer community.
2.0 Flash is now generally available to more people across our AI products, alongside improved performance in key benchmarks, with image generation and text-to-speech coming soon.
Try Gemini 2.0 Flash in the Gemini app or the Gemini API in Google AI Studio and Vertex AI. Pricing details can be found in the Google for Developers blog.
2.0 Pro Experimental: our best model yet for coding performance and complex prompts
As we've continued to share early, experimental versions of Gemini 2.0 like Gemini-Exp-1206, we've gotten excellent feedback from developers about its strengths and best use cases, like coding.
Today, we're releasing an experimental version of Gemini 2.0 Pro that responds to that feedback. It has the strongest coding performance and ability to handle complex prompts, with better understanding and reasoning of world knowledge, than any model we've released so far. It comes with our largest context window at 2 million tokens, which enables it to comprehensively analyze and understand vast amounts of information, as well as the ability to call tools like Google Search and code execution.
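As a rough illustration of that tool calling, the sketch below grounds a 2.0 Pro Experimental request with Google Search using the same Python SDK; the experimental model identifier and the exact configuration shape are assumptions, so check Google AI Studio for the current names.

```python
# Sketch: ground a Gemini 2.0 Pro Experimental request with Google Search.
# The model ID below is an assumed experimental identifier; the SDK usage
# follows the public google-genai Python package.
import os
from google import genai
from google.genai import types

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemini-2.0-pro-exp-02-05",  # assumed model name for illustration
    contents="What changed in the latest stable release of Python?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
```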