In the past month, Google’s co-founders, Larry Page and Sergey Brin, participated in multiple meetings with business leaders. The topic at hand was a competitor’s brand-new chatbot, a striking artificial intelligence tool that appeared to have the potential to become the first notable threat in decades to Google’s $149 billion search business.
Page and Brin, who had not spent much time at Google since stepping back from their daily roles at the company in 2019, reviewed Google’s AI product strategy, according to two people with knowledge of the meetings who were not authorized to discuss them. They approved plans and ideas that would add chatbot features to Google Search, and they offered guidance to the company’s executives, who have placed AI at the forefront of their strategic planning.
The re-engagement of Google’s founders, which took place at the invitation of the company’s current CEO, Sundar Pichai, underscored the sense of urgency that many Google executives feel about artificial intelligence (AI), and about one chatbot in particular: ChatGPT.
Released two months ago by a small San Francisco company called OpenAI, the bot wowed users with the ease with which it explained difficult concepts and generated new ideas from scratch. It appeared that it could offer a new way to search for information on the internet, a crucial consideration for Google.
The arrival of this cutting-edge AI technology has jolted Google out of its routine. Pichai declared a “code red,” upending established plans and accelerating the development of AI. According to a slide presentation reviewed by The New York Times, as well as two people with knowledge of Google’s plans who were not authorized to discuss them, the company intends to unveil more than twenty new products and demonstrate a version of its search engine with chatbot features this year.
D. Sivakumar, a former Google research director who helped found Tonita, a company that builds search technology for e-commerce businesses, said that “this is a moment of considerable vulnerability for Google.” With ChatGPT, OpenAI has “placed a stake in the ground,” demonstrating, “Here’s what a fascinating new search experience could look like.” Sivakumar went on to say that Google had overcome previous challenges and could draw on its arsenal of AI to remain competitive.
According to two people familiar with the situation, Page and Brin have taken a hands-off approach to Google ever since stepping aside from their day-to-day responsibilities at the company. While they pursued other ventures, such as flying-car companies and disaster relief efforts, they left the running of the business to Pichai at Alphabet, Google’s parent company.
According to one person, the primary purpose of their trips to the company’s offices in Silicon Valley over the past few years has been to check in on the status of the so-called moonshot initiatives that Alphabet refers to as “Other Bets.” Up until quite recently, they did not have a significant amount of involvement with the search engine.
However, they have long expressed interest in incorporating AI into Google’s products. Vic Gundotra, a former senior vice president at Google, recalled an incident in which he gave Larry Page a demonstration of a new Gmail feature around 2008. But Page was dissatisfied with the effort, asking, “Why can’t it automatically write that email for you?” In 2014, Google acquired DeepMind, a major artificial intelligence research laboratory based in London.
According to the slide presentation, Google’s Advanced Technology Review Council, a panel of executives that includes Jeff Dean, the company’s senior vice president of research and artificial intelligence, and Kent Walker, Google’s president of global affairs and chief legal officer, met less than two weeks after the launch of ChatGPT to discuss the company’s initiatives.
They went over plans for products expected to be introduced at Google’s company conference in May, including Image Generation Studio, which creates and edits images, and a third version of AI Test Kitchen, an experimental app for testing product prototypes. According to the slides, other image and video projects were in development, including a feature called Shopping Try-on, a green-screen feature for creating backgrounds on YouTube, a wallpaper maker for the Pixel smartphone, an application called Maya that visualizes 3D shoes, and a tool that could summarize videos by generating a new one.
Google has also compiled a list of artificial intelligence (AI) programs that it intends to make available to software developers and other businesses. These programs, which include image-creation technologies, could increase revenue for Google’s Cloud division. The presentation also describes a tool called MakerSuite that will help other organizations develop their own AI prototypes in a web browser, with two “Pro” versions of MakerSuite planned.
According to the presentation, Google also plans to announce in May a new tool that will make it simpler to develop applications for Android smartphones. The tool, dubbed Colab + Android Studio, will generate, complete, and fix code. Another code generation and completion tool, PaLM-Coder 2, has also been in the works for some time.
Executives at Google want to reestablish the company’s reputation as an innovator in the field of artificial intelligence (AI). LaMDA, which stands for Language Model for Dialogue Applications, is the name of the company’s chatbot that it has already made available to a limited number of users. This chatbot is designed to compete with ChatGPT and has been the subject of intensive research and development by the company over the past decade.
According to a statement made by a Google spokesperson named Lily Lin, “We continue to test our AI technology internally to ensure that it is both helpful and safe, and we look forward to sharing more experiences with the outside world in the near future.” She went on to say that AI will be beneficial to individuals, businesses, and communities and that Google is taking into consideration the effects that the technology would have on wider society.
Because the artificial intelligence systems developed by Google, OpenAI, and others rely on so-called large language models trained on online information, they can occasionally make incorrect assertions and display racist, sexist, and otherwise biased views.
That has been enough to make companies extremely cautious about offering the technology to the general public. However, a number of newer companies, such as You.com and Perplexity.ai, now offer online search engines that allow users to ask questions through an online chatbot, much like ChatGPT. According to a story published by The Information, Microsoft is also working on a new version of its Bing search engine featuring similar technology.
According to the presentation reviewed by The Times, Pichai has attempted to speed up product approval reviews. By establishing a fast-track review process called the “Green Lane” program, the company has pushed groups of employees who work to ensure that its technology is fair and ethical to approve its new AI technology more swiftly.
According to the presentation, the company will “recalibrate” the level of risk it is willing to take when releasing the technology, and it will also find ways for teams developing AI to conduct their own reviews.
The implications of Google’s shift toward a more streamlined strategy are not yet fully understood. According to an analysis compiled by Google, the company’s technology falls behind OpenAI’s self-reported metrics when it comes to identifying content that is hateful, toxic, sexual, or violent. OpenAI’s tools outperformed Google’s in every category, although both fell short of human accuracy in evaluating content.
In the slide presentation, Google identified the key risks posed by the technology as antitrust, privacy, and copyright violations. It stated that steps are required to limit those hazards, such as filtering answers to remove content protected by copyright and preventing the AI from sharing personally identifying information.
The chatbot search demonstration that Google hopes to carry out this year places a high priority on eliminating misinformation, protecting users’ safety, and correcting factual errors. For new services and products, according to the presentation, the company has set a lower bar: it will strive to mitigate problems related to hate, toxicity, danger, and misinformation rather than prevent them entirely.
To curb the spread of hate speech, for instance, the company plans to prohibit the use of particular words, and it will work to reduce the impact of other potential problems.
Google is prepared for governments to scrutinize its AI technologies for signs of problems like these. The company has, in recent years, been the focus of numerous investigations and legal actions alleging anti-competitive business practices. The presentation states that it is preparing for “increasing pressure on AI regulatory initiatives due to mounting concerns about disinformation, harmful content, bias, and copyright.”