AI-powered search engines, designed to provide instant answers, are struggling with accuracy. A new study by the Columbia Journalism Review (CJR) has found that AI models from OpenAI, xAI, and others frequently generate incorrect information when asked about news events.
According to the research, AI search engines fabricated or got details wrong in about 60% of test queries, sometimes even creating fake article URLs. The findings raise concerns about the reliability of AI-driven search tools, which millions now depend on for information.
AI Search Engines Often Make Up Information
The study tested various AI models by feeding them excerpts from real news articles and asking them to identify key details such as headlines, publishers, and URLs. The results showed high rates of misinformation:
- Perplexity AI gave incorrect information 37% of the time.
- xAI’s Grok had the worst performance, making up details in 97% of cases.
- Some AI models even invented links to non-existent articles.
These findings suggest that AI search tools are prone to confidently providing incorrect information—a problem researchers refer to as “hallucination.”
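To make the methodology concrete, a test of this kind can be approximated with a short script. The prompt wording, the scoring rule, and the `ask_model()` placeholder below are illustrative assumptions, not CJR's actual protocol:

```python
# Illustrative sketch of a citation-accuracy test in the spirit of the CJR study.
# The prompt wording, scoring, and ask_model() are assumptions, not CJR's protocol.

def ask_model(prompt: str) -> dict:
    """Placeholder for a call to an AI search engine.
    A real test would send the prompt to each product's API or interface."""
    return {"headline": "...", "publisher": "...", "url": "..."}

def is_correct(answer: dict, truth: dict) -> bool:
    """Count an answer as correct only if every field matches the source article."""
    return all(answer.get(k, "").strip().lower() == truth[k].lower()
               for k in ("headline", "publisher", "url"))

articles = [
    # Each entry pairs an excerpt with the article it came from (values illustrative).
    {"excerpt": "First two sentences of a published story...",
     "truth": {"headline": "Example Headline",
               "publisher": "Example News",
               "url": "https://example.com/story"}},
]

wrong = 0
for item in articles:
    prompt = ("Identify the headline, publisher, and URL of the article "
              f"this excerpt comes from:\n\n{item['excerpt']}")
    if not is_correct(ask_model(prompt), item["truth"]):
        wrong += 1

print(f"Incorrect or fabricated answers: {wrong}/{len(articles)}")
```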
Bypassing Paywalls and Copyright Issues
In addition to fabricating details, some AI search engines have been found to bypass paywalls and access restricted content. Perplexity AI, for example, has retrieved articles from National Geographic despite the site’s “do-not-crawl” directive, a robots.txt rule meant to keep crawlers away from its content.
Perplexity has faced criticism for this practice but defends it as fair use and has attempted to address concerns by offering revenue-sharing deals to publishers. However, the issue remains controversial, as many news organizations struggle to control how AI models use their content.
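For context, a “do-not-crawl” directive lives in a site’s robots.txt file, and a well-behaved crawler checks it before fetching a page. Below is a minimal sketch of that check using Python’s standard library; the URL and user-agent string are illustrative assumptions, not any real crawler’s configuration:

```python
# Minimal sketch of how a compliant crawler honors a "do-not-crawl" directive.
# The robots.txt URL and the user-agent string are illustrative assumptions.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt rules

page = "https://www.example.com/some-article"
if parser.can_fetch("ExampleBot", page):
    print("Allowed to crawl:", page)
else:
    print("Disallowed by robots.txt:", page)  # a compliant crawler stops here
```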
The Risks of AI-Generated Misinformation
One of the biggest risks of AI-driven search engines is their inability to verify the accuracy of the information they generate. Unlike traditional search engines that provide a list of sources, AI models rely on retrieval-augmented generation (RAG)—a method that pulls data from multiple sources while generating responses in real time.
This approach makes AI search models vulnerable to misinformation, especially when they pull data from unreliable sources. The study warns that bad actors, such as state-backed propaganda networks, could exploit AI search engines to spread false narratives.
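In rough terms, a RAG pipeline first retrieves candidate passages and then has a language model write an answer conditioned on them, so the answer can only be as trustworthy as what was retrieved. The toy keyword retriever and placeholder generation step below are illustrative assumptions, not how any specific product is built:

```python
# Toy retrieval-augmented generation (RAG) loop. The keyword retriever and the
# generate() placeholder are illustrative assumptions, not a real product's pipeline.

DOCUMENTS = [
    "Source A: The city council approved the new transit budget on Tuesday.",
    "Source B: Officials said the transit budget vote was delayed until next month.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(terms & set(d.lower().split())), reverse=True)
    return ranked[:k]

def generate(query: str, context: list[str]) -> str:
    """Placeholder for the model call; a real system would prompt an LLM with the
    retrieved passages. If the sources conflict or are unreliable, the generated
    answer inherits that problem."""
    return f"Answer to {query!r} based on: {' | '.join(context)}"

query = "Was the transit budget approved?"
print(generate(query, retrieve(query, DOCUMENTS)))
```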
Concerns from the Media Industry
The increasing reliance on AI search has alarmed media organizations. Mark Howard, chief operating officer at Time magazine, expressed concerns about publishers losing control over their content. If AI-generated summaries misrepresent news articles, it could damage the reputation of trusted news brands.
This issue has already affected major news outlets. The BBC recently criticized Apple for publishing misleading summaries of its news alerts, showing how AI-generated content can distort original reporting.
Howard also pointed out that users share some responsibility for blindly trusting AI-generated search results. He stated:
“If anybody as a consumer is right now believing that any of these free products are going to be 100 percent accurate, then shame on them.”
Why Users Trust AI Search Despite Inaccuracy
Despite its flaws, AI search is growing in popularity. According to CJR, about one in four Americans now use AI models for online searches. Even before AI-powered tools became mainstream, over 50% of Google searches were “zero-click”—meaning users got their answers directly from search results without visiting a website.
The appeal of AI-generated search results is speed and convenience. Many users prefer quick summaries over clicking through links, even if it means relying on less authoritative information.
The Future of AI Search Engines
While current AI search models are unreliable, Howard believes they will improve over time. He stated:
“Today is the worst that the product will ever be.”
With ongoing advancements in AI, search tools may become more accurate in the future. However, experts warn that releasing flawed AI models without safeguards is irresponsible. Until these systems improve, users must remain skeptical and verify information from trusted sources.
The CJR study highlights a growing problem with AI search engines: they confidently produce false or misleading information at alarming rates. With millions relying on these tools for news and research, the risks of misinformation are significant.
While AI models are expected to improve, users, publishers, and tech companies must work together to ensure that AI-powered search delivers accurate and trustworthy information.
This information was compiled from reports by Gizmodo and Ars Technica.