Google Warns About Health and Finance AI Content
As Google prepares to launch its own chatbot-integrated search feature — a major push to compete with Microsoft’s ChatGPT-integrated Bing — the search giant has quietly issued new warnings to publishers interested in running AI-generated content, updating its guidelines for how such content should be handled.
Specifically, Google is alerting publishers that its search team will look at AI-generated articles about “health, civic, or financial information” with heightened scrutiny. In other words, these are the areas where you really want to get things right.
In its newly released FAQ, Google acknowledges the risk of “AI content that potentially propagates disinformation or violates agreement on crucial topics,” noting that “these challenges occur in both human-generated and AI-generated content.”

“However the content is produced, our systems look to surface high-quality information from reliable sources rather than information that contradicts well-established consensus on important topics,” the FAQ continues. “In areas where the quality of the information is of the utmost significance, such as health, civic, or financial information, our systems place an even higher focus on indications of trustworthiness.”
To Google’s credit, the caution is an appropriate one.
Google remains, without a doubt, the most popular search engine, even as the company joins the AI arms race itself. Large publications are already using generative AI to produce content, and the Hustle Bro cult is already urging its followers to use the freely available tool to spin up personal content mills.
As one of the preeminent curators of our digital lives, Google has to adapt to new technologies that transform how online content is made. Generative AI, despite its very obvious shortcomings, is already doing just that, and Google has little choice but to keep up.
That said, Google is hardly a neutral party in this fight. Given that its own chatbot-infused search has already been shown to be blatantly wrong — in an advertisement, of all places — it is probably wise for the company to get ahead of the many problems likely to come in a digital landscape packed full of cheap, fast, extremely confident-sounding but often wrong AI content. Google is, after all, scrambling to keep its head above water in an AI market led by Microsoft and OpenAI.
To that end, it shouldn’t come as a surprise that Google has singled out health and finance as content of particular concern. That’s not only because of the general importance of these areas but also because of the bleak reality that existing generative AI tools consistently get this kind of content wrong. Large language models (LLMs) are notoriously terrible with numbers, as CNET’s embarrassingly error-ridden AI-generated financial advice made clear. Medical professionals, meanwhile, have found that ChatGPT makes up diagnoses and even treatments, complete with fake sources for its purported findings.
And when it comes to political content, a number of experts have warned that ChatGPT’s wide availability makes it well suited to turning our online world into a propaganda nightmare. To which I say: cheers.
Google, though, suggests that we shouldn’t fret too much about the situation. After all, it has been sorting out content like this for years.
Per the new FAQ, “Our focus on the quality of material, rather than how information is produced, is a beneficial guidance that has helped us deliver trustworthy, high-quality results to users for years.”
Noted — but Google will hopefully forgive us for having our fair share of concerns, especially given that it isn’t requiring content providers to mark anything as AI-generated.
“AI or automation disclosures are useful for content where someone would ask, ‘How was this created?’” the company wrote. “Take into consideration including these where doing so would be consistent with reasonable expectations.”
Honor code, y’all. That’s sure to go well.