Responding to concerns raised by activists and parents, OpenAI has formed a dedicated child safety team to study ways of preventing misuse or abuse of its AI tools by children. The newly announced team will work with internal groups and outside partners to handle matters involving underage users, managing processes, incidents, and reviews related to child safety on the platform.
OpenAI is also hiring a child safety enforcement specialist to join the team. This person will play a key role in applying OpenAI's policies to AI-generated content and will oversee review processes for content deemed "sensitive," particularly content relating to children.
The creation of the team, which follows OpenAI's partnership with Common Sense Media and the signing of its first education customer, also suggests the company is wary both of violating policies on minors' use of AI and of attracting negative press.
Children and adolescents are increasingly turning to GenAI tools for help not only with schoolwork but also with personal problems. According to a survey by the Center for Democracy and Technology, 29% of kids report having used ChatGPT to deal with anxiety or mental health concerns, 22% have used it for issues with friends, and 16% have sought its help with family conflicts.
Many observers see this as a growing risk.
Last summer, schools and colleges rushed to ban ChatGPT over fears of plagiarism and misinformation. Some have since lifted their bans, but skeptics still question GenAI's potential for good. They point to surveys such as one from the U.K. Safer Internet Centre, which found that more than half of children (53%) had seen peers use GenAI in a harmful way, for example to create convincing false information or images intended to upset someone.
In September, OpenAI released documentation for using ChatGPT in classrooms, including suggested prompts and an FAQ to guide educators using GenAI as a teaching tool. In one of its support articles, OpenAI acknowledged that its tools, ChatGPT included, may generate content that is not appropriate for all audiences or ages, and advised caution when exposing children to them, even children who meet the age requirements.
Calls for guidelines on children's use of GenAI are growing.
Last year, UNESCO called for governments to establish regulations for the use of GenAI in education. These regulations would include setting age limits for users and ensuring data protection and user privacy. “Generative AI has the potential to greatly benefit human development, but it is important to acknowledge that it can also have negative consequences and perpetuate bias,” stated Audrey Azoulay, the director-general of UNESCO, in a press release. “Public engagement and government regulations are essential for integrating it into education.”