Redefining AI Power: OpenAI Concerned as GPT-4 Can Read Faces
OpenAI's well-known AI-powered chatbot ChatGPT, used for a wide variety of purposes, has progressed beyond text processing. Its newest iteration, GPT-4, introduces a novel and intriguing feature: image analysis.
In addition to conversing with the bot in text, users can now ask it questions about photographs, have it describe images, and even use it to identify the faces of specific people. Promising potential uses of this technology include helping users recognize and resolve problems shown in photographs, such as diagnosing a mysterious rash or fixing a car's engine.
Jonathan Mosen, the chief executive of an employment agency for blind people, was one of the early users of this upgraded version, having had the opportunity to try the visual analysis capability while traveling. Going beyond the limitations of traditional image analysis software, he was able to use ChatGPT to identify the various dispensers in a hotel bathroom and learn their contents in detail.
OpenAI is wary of the dangers that could come with facial recognition, though. Although the chatbot's visual analysis can recognize only a limited number of famous people, the company is aware of the ethical and legal issues surrounding face recognition technology, particularly with regard to privacy and consent. Because of this, the app no longer gives Mosen information about people's faces.
According to Sandhini Agarwal, a policy researcher at OpenAI, the organization wants an open conversation with the general public about incorporating visual analysis capabilities into its chatbot. To set clear norms and safety measures, it is eager to solicit feedback and democratic involvement from users. Additionally, OpenAI's nonprofit arm is exploring ways to involve the public in establishing guidelines for AI systems to ensure ethical and responsible behavior.
Because the model's training data includes images as well as text gathered from the internet, visual analysis is a logical evolution for ChatGPT. OpenAI acknowledges several possible difficulties, such as "hallucinations," in which the system responds to images with inaccurate or misleading information. For instance, when shown a photo of a person on the verge of stardom, the chatbot might mistakenly supply the name of a different famous person.
The visual analysis tool is also available to Microsoft, a major OpenAI investor, which is currently testing it in a limited rollout of its Bing chatbot. Both companies are taking precautions to safeguard user privacy and address these issues before a wider release.