Highlights from Google I/O 2023: Here’s a List of Everything Announced
Google delivered a rapid-fire succession of announcements during the main-day keynote of its I/O developer conference, including plenty of reveals of projects it has been working on.
We know you don’t always have time to sit through a two-hour presentation, so here is a list of the most important news from the keynote, with short links to each item. Here we go:
Immersive View for Routes
Google Maps is getting a new feature called “Immersive View for Routes” in select cities. The tool puts all the information a user might need in one place: traffic simulations, bike lanes, tricky crossings, parking, and more.
Read More: Top Google Map Features
Magic Editor and Magic Compose
We often want to change something about a photo we just took, and Google’s Magic Editor tool now uses AI to make more intricate edits to specific portions of a photo, such as the foreground or background, and can fill in the gaps left behind. You can recrop the photo or even reposition the subject to get a better frame. Check it out.
A new feature called Magic Compose was also shown off today; it can rewrite a text message in a variety of styles. The feature “could make the message sound more positive or professional, or it could be fun and make the message sound like it was written by your favorite playwright aka Shakespeare.” Sara has more.
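Google hasn’t published how Magic Compose works under the hood, but style rewriting of this kind is commonly framed as prompting a text model with a style instruction. A minimal sketch, where `call_llm` and the style prompts are hypothetical stand-ins for any text-generation backend:

```python
# Hypothetical sketch only: Google has not published Magic Compose's
# internals. Style rewriting is framed here as prompt construction;
# `call_llm` is a stand-in for any text-generation backend.
STYLES = {
    "professional": "Rewrite the message in a formal, professional tone.",
    "positive": "Rewrite the message in an upbeat, positive tone.",
    "shakespeare": "Rewrite the message as Shakespeare might have written it.",
}

def compose(message: str, style: str, call_llm) -> str:
    """Build a style-rewrite prompt and hand it to a text model."""
    if style not in STYLES:
        raise ValueError(f"unknown style: {style!r}")
    prompt = f"{STYLES[style]}\n\nMessage: {message}"
    return call_llm(prompt)

# Stub model so the sketch runs without a real LLM backend: it simply
# echoes the last line of the prompt it receives.
echo = lambda prompt: prompt.splitlines()[-1]
print(compose("running late, sorry", "positive", echo))
```

Swapping the stub for a real model call is the only change needed to turn the sketch into something functional; the prompt-per-style table is the core idea.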
PaLM 2
Federico examines PaLM 2, Google’s latest large language model (LLM). According to him, “PaLM 2 will power Google’s updated Bard chat tool, the company’s competitor to OpenAI’s ChatGPT, and serve as the base model for most of the new AI features the company is announcing today.” PaLM 2 also includes improved support for writing and debugging code. There’s more here. Kyle digs into PaLM 2 as well, taking a closer look at the model through the lens of a Google research paper.
Bard Gets Smarter
Google is not only removing Bard’s waitlist and making it available in over 180 countries and territories in English, but also adding support for Japanese and Korean, with the goal of supporting 40 languages in the near future. Also new is the ability for Bard to display images in its replies.
Furthermore, Google is collaborating with Adobe on some art-generating capabilities in Bard. “Bard users will be able to generate images through Firefly and then modify them using Express,” Kyle explains. “Users will be able to select from themes, typefaces, and stock pictures, as well as other materials from the Express collection, within Bard.”
Google’s Workspace suite is also getting an AI boost, with automatic table generation (but not formula generation) in Sheets and image creation in Slides and Meet. Table generation is simpler at first, but Federico points out that there is more to come on using AI to build formulas. The new capabilities for Slides and Meet let you type the kind of visualization you want and have the AI construct it, including personalized backgrounds for Google Meet.
MusicLM
Google’s latest AI effort, MusicLM, turns text into music. Kyle notes that if he were hosting a dinner party, he could use the tool to generate several variations of “soulful jazz for a dinner party.”
Sidekick
Darrell is interested in Sidekick, a new tool announced today aimed at helping users write better prompts, potentially usurping the one thing people are supposed to do best in the entire generative AI loop. Sidekick will appear in Google Docs as a side panel, “constantly engaged in reading and processing your entire document as you type, providing contextual suggestions that refer specifically to what you’ve typed.”
Codey
Codey is Google’s newest code-generation and code-completion tool. It’s part of a suite of AI-centric coding tools introduced today and is Google’s answer to GitHub’s Copilot, offering a chat interface for asking questions about code. Codey is trained to respond to questions about programming, and Google Cloud in particular. Federico has further details.
Search Updates
Google Search has two new capabilities that help users better grasp the content and context of an image in search results. According to Sara, these include extra information via an “About this Image” feature, as well as new markup in the image file itself that will allow photographs to be labelled as “AI-generated.” Both are extensions of existing work meant to provide more transparency on whether an image is credible or AI-generated, though they are not the end of the road in solving the larger problem of AI-driven image misinformation.
Aisha has more on Search, including the fact that Google is testing an AI-powered conversational mode. “Users will see suggested next steps when doing a search and will be shown an AI-powered snapshot of key information to consider, with links to dig deeper,” she writes. When a user taps a suggested next step, Search switches to a new conversational mode where they can ask Google more questions about the topic they’re researching, with context carried over from one query to the next.
There was also the launch of a new “Insights” filter, which will soon appear at the top of some search results when they “benefit from the experiences of others,” according to Google. Postings on discussion forums, question-and-answer sites, and social media platforms, including those with video, are examples. Consider how much easier it would be to find Reddit links or YouTube videos, Sara says.
A3 Supercomputer VMs
A new A3 supercomputer virtual machine has arrived in town. “This A3 has been specifically designed to handle the considerable demands of these resource-intensive use cases,” Ron writes. It pairs Nvidia’s H100 GPUs with Google’s dedicated data-centre infrastructure, promising high performance and low latency at a more reasonable price than such a package would normally command.
Google also unveiled new AI models for Vertex AI, its fully managed AI service, including Imagen, a text-to-image model. Kyle notes that Imagen was previewed in November via Google’s AI Test Kitchen app. It can generate and edit images, as well as caption existing ones.
Also Read: Project Magi
Find My Device
Building on Apple and Google’s collaboration on a new specification for Bluetooth tracker safety, Google announced its own set of enhancements to its Find My Device network. These include proactive alerts when your phone detects an unknown tracker travelling with you, with support for Apple’s AirTag and other Bluetooth trackers.
Google’s goal with the changes is to “increase the safety of their respective user bases by making these alerts work across platforms in the same way,” a reference to Apple’s work to make AirTags more secure following accusations that they were being used for stalking. “It would also be available for Android devices,” Sarah writes.
Pixel 7a
Google’s Pixel 7a arrives May 11 at $499, $100 less than the Pixel 7. Like the Pixel 6a, it features a 6.1-inch display, as opposed to the Pixel 7’s 6.4 inches. It is also launching in India. Its camera has a higher pixel density than the 7 Pro’s, though Brian says, “I really miss the 7 Pro’s flexibility and zoom, but I was able to take some good photos around my neighbourhood with the 7a’s cameras.” Its new chip enables features such as Face Unblur and Super Res Zoom. The complete breakdown can be found here.
Project Tailwind
The name sounds more like a covert government assignment, but Google’s Project Tailwind is an AI-powered notebook application built to automatically organize and summarize notes taken by users. The tool is accessible via Labs, Google’s updated home for experimental projects.
This is how it works: users select files from Google Drive, and Project Tailwind creates a private AI model with expertise in that information, along with a bespoke interface for sorting through notes and documents. Check it out.
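Tailwind’s actual pipeline is not public, but the flow described above, selecting documents and then querying an assistant grounded only in them, can be sketched with a toy keyword index standing in for the “private AI model”:

```python
# Illustrative sketch only: Tailwind's real system is not documented.
# A tiny inverted index stands in for the private, document-grounded
# model: select notes, index them, then answer look-ups from them alone.
from collections import defaultdict

def build_index(documents: dict[str, str]) -> dict[str, set[str]]:
    """Map each lowercase word to the note titles that contain it."""
    index = defaultdict(set)
    for title, text in documents.items():
        for word in text.lower().split():
            index[word].add(title)
    return index

def lookup(index: dict[str, set[str]], query: str) -> set[str]:
    """Return titles of notes containing every word in the query."""
    hits = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*hits) if hits else set()

# Stand-ins for files a user might pick from Google Drive.
notes = {
    "biology": "mitochondria are the powerhouse of the cell",
    "history": "the printing press changed the spread of ideas",
}
index = build_index(notes)
print(lookup(index, "the cell"))   # {'biology'}
```

A production system would swap the keyword index for embeddings and a language model, but the grounding idea, answering only from the user’s selected documents, is the same.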
AI-Generated Wallpapers
Now that you’ve got that new Pixel 7a, you’ll want to make it look nice! This fall, Google will roll out AI-generated wallpapers that let Android users describe their vision by responding to suggested prompts. The feature will create new, unique wallpapers using Google’s text-to-image diffusion models, and your Android system’s colour palette will automatically match the wallpaper you’ve chosen. There’s more here.
Wear OS 4
Wear OS 4 is the latest version of Google’s wearable operating system. You’ll notice improved battery life and functionality, as well as additional accessibility capabilities like text-to-speech. Developers can now create new Wear OS watch faces and distribute them on Google Play using new tools.
Watch for the release of Wear OS 4 later this year. Read more. There are also new smartwatch apps, including improvements to Google’s own offerings such as Gmail and Calendar, as well as updates to WhatsApp, Peloton, and Spotify.
Also Read: Google Bard First Experimental Update
Universal Translator
Google is also previewing a high-tech new way to translate that can dub videos into a different language and sync the speaker’s lips to words they never said. It’s called “Universal Translator,” and Devin writes that it is “an example of something that has only recently become possible because of advances in AI, but also comes with serious risks that need to be thought about from the start.”
This is how it works: The “experimental” service takes an input video, in this case, an English lecture from an online course, transcribes it, translates it, regenerates it (matching style and tone), and then edits the video so that the speaker’s lips more closely match the new audio. More on this later.
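Google hasn’t published the models behind this experimental service, but the stage ordering described above can be sketched as a simple pipeline; every function here is a hypothetical stand-in, with strings used in place of real audio and video:

```python
# Illustrative sketch only: each stage stands in for a model Google has
# not publicly documented. The point is the order of operations:
# transcribe -> translate -> synthesize speech -> re-sync the lips.
def transcribe(video):
    video["transcript"] = video["audio"]               # speech-to-text
    return video

def translate(video, target="es"):
    # machine translation into the target language (tagged, not real)
    video["translation"] = f"[{target}] " + video["transcript"]
    return video

def synthesize(video):
    # text-to-speech, regenerating the speaker's style and tone
    video["dubbed_audio"] = video["translation"]
    return video

def lip_sync(video):
    # re-render mouth movements to match the new audio track
    video["frames"] = f"frames synced to: {video['dubbed_audio']}"
    return video

def universal_translate(video, target="es"):
    """Run the four described stages in order over one clip."""
    for stage in (transcribe,
                  lambda v: translate(v, target),
                  synthesize,
                  lip_sync):
        video = stage(video)
    return video

clip = {"audio": "welcome to the course", "frames": "raw frames"}
print(universal_translate(clip)["dubbed_audio"])   # [es] welcome to the course
```

The interesting property, and the source of the risks Devin flags, is that the final stage rewrites the visual track itself, so the output video shows the speaker convincingly saying words they never recorded.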
Pixel Tablet
You knew it was coming, and we can finally confirm that the Pixel Tablet has arrived. Brian thought the UI resembled a “giant Nest Home Hub,” but he liked the base and styling.
In addition, Brian notes that the Pixel Tablet “isn’t just a tablet: it’s a smart home controller/hub, teleconferencing device, and video streaming machine,” which is particularly relevant given that tablets are mostly used in the home. It’s a better way to watch YouTube videos, but it won’t replace your TV. This link provides additional information.
Pixel Fold
One of the major reveals had already happened: as Brian covered, Google used May 4th (aka “May the Fourth Be With You” day) to announce that it is releasing a foldable Pixel phone. In a recent piece, Brian delves into the phone, which he says Google has been working on for five years.
He also mentions that “the true secret sauce in the Pixel Fold experience is, predictably, the software.” Apps maintain continuity as they transition between the exterior and interior displays.