Google Launches Gemini 2.0, Shows Off More Project Astra Assistant With Glasses


To close out 2024, Google announced Gemini 2.0, its newest AI model that will power essentially all of its AI endeavors going forward. As part of the announcement, Google showed off a bunch of future plans, like its Project Astra assistant that will likely live within a pair of smart glasses at some point soon.

For now, Gemini 2.0 is really here for developers before it launches more openly to consumers. If you happen to be a developer in the AI space, you’ll want to read this blog post from Google on what you can do with Gemini 2.0 and how to get started. For the rest of us, Gemini 2.0 is currently only available as Gemini 2.0 Flash on desktop and the mobile web. It should launch in the Gemini mobile app “soon” and in more Google products you might use next year.

As far as capabilities go, the simplest description of this new model is that it is twice as fast as Gemini 1.5 Pro. That’s something, considering this is only Gemini 2.0 Flash. Here’s how Google describes the new 2.0 release:

Notably, 2.0 Flash even outperforms 1.5 Pro on key benchmarks, at twice the speed. 2.0 Flash also comes with new capabilities. In addition to supporting multimodal inputs like images, video and audio, 2.0 Flash now supports multimodal output like natively generated images mixed with text and steerable text-to-speech (TTS) multilingual audio. It can also natively call tools like Google Search, code execution as well as third-party user-defined functions.

Google is prototyping several ideas with Gemini 2.0 that you should at least be aware of: Project Astra, its research prototype exploring future capabilities of a universal AI assistant; a new Project Mariner, which “explores the future of human-agent interaction, starting with your browser”; and Jules, an AI-powered code agent for developers.

Project Astra

Look, there’s a lot of Gemini 2.0 stuff to cover and most of it is not the type of thing we would typically report on. But one thing is – Project Astra. You see, we all watched Google Assistant slowly shed its feature set and frustrate long-time users, only to witness the rise of Gemini as a potential replacement, a role it has sucked at for the most part.

And that’s where Project Astra comes in. Google believes this is the potential future of an assistant in your hands, at least on Android for now. Project Astra was first introduced at Google I/O and is now powered by Gemini 2.0, which could let it live on your phone as an assistant that accompanies you throughout the day.

Here’s how Google describes it living with you and what it can do:

  • Better dialogue: Project Astra now has the ability to converse in multiple languages and in mixed languages, with a better understanding of accents and uncommon words.
  • New tool use: With Gemini 2.0, Project Astra can use Google Search, Lens and Maps, making it more useful as an assistant in your everyday life.
  • Better memory: We’ve improved Project Astra’s ability to remember things while keeping you in control. It now has up to 10 minutes of in-session memory and can remember more conversations you had with it in the past, so it is better personalized to you.
  • Improved latency: With new streaming capabilities and native audio understanding, the agent can understand language at about the latency of human conversation.

Since that’s a bit of nerd speak, the video below shows Project Astra in practice, out in the real world. It’s basically an assistant that can also use your camera to help you as you need it. And yes, Google plans to enable it to work within smart glasses at some point. In this video, you’ll see those glasses, which are in “prototype” form.

What do you guys think? Is AI finally starting to look like the future? Are you more scared than ever?

To read more about Project Mariner and Jules, Google has a whole blog post.

This post was last modified on December 11, 2024 8:43 am