Koo Releases New Safety Features for Proactive Content Moderation
The Indian microblogging platform Koo has announced new proactive content moderation capabilities aimed at improving user safety and security on social media. The company says its internally built systems can detect and block child sexual abuse material and nudity in under five seconds, flag misinformation, prevent the publication of hate speech, and hide toxic comments.
Koo’s Safety Features Details
According to the company, Koo's in-house “No Nudity Algorithm” proactively detects and blocks any attempt by a user to upload images or videos containing nudity, sexual content, or child sexual abuse material. Koo says the system can identify and block such content in under five seconds. A user who posts sexually explicit content is immediately blocked from uploading further content, being discovered by other users, appearing in trending posts, and interacting with other users on the platform.
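The enforcement flow described above, detect on upload, then restrict the account across several surfaces, can be sketched roughly as follows. This is a hypothetical illustration only: Koo has not published its classifier, threshold, or data model, so `NUDITY_THRESHOLD`, `Account`, and `handle_upload` are all assumed names.

```python
# Hypothetical sketch of the upload-time enforcement flow the article
# describes. The classifier itself is a placeholder; only the consequences
# (blocked uploads, discovery, trending, interaction) come from the article.
from dataclasses import dataclass

NUDITY_THRESHOLD = 0.9  # assumed confidence cutoff, not a published value

@dataclass
class Account:
    user_id: str
    can_upload: bool = True
    discoverable: bool = True
    trending_eligible: bool = True
    can_interact: bool = True

def handle_upload(account: Account, nudity_score: float) -> bool:
    """Reject the upload and restrict the account when the score is too high.

    `nudity_score` stands in for the output of the platform's image/video
    classifier. Returns True if the upload is accepted, False if rejected.
    """
    if nudity_score >= NUDITY_THRESHOLD:
        account.can_upload = False
        account.discoverable = False
        account.trending_eligible = False
        account.can_interact = False
        return False
    return True
```

In a real system the restrictions would be persisted and enforced by separate services; collapsing them into one function here just makes the described policy explicit.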
Toxic Comments and Hate Speech
According to Koo, its in-house technology proactively detects hate speech and toxic comments in under 10 seconds and hides them from the general audience.
Users see a warning message when viewing particularly graphic or violent content.
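The two behaviours above, hiding toxic comments and showing a warning on graphic content, amount to a simple per-comment moderation decision. A minimal sketch, with an assumed toxicity score and threshold (neither is published by Koo):

```python
# Illustrative sketch of the comment-moderation behaviour described above.
# The toxicity score would come from a classifier; the 0.8 cutoff is assumed.

def moderate_comment(comment: dict, toxicity: float, graphic: bool,
                     toxicity_threshold: float = 0.8) -> dict:
    """Return a copy of the comment annotated with moderation decisions."""
    result = dict(comment)
    # Hide the comment from the public feed if it scores as toxic.
    result["hidden"] = toxicity >= toxicity_threshold
    # Graphic or violent content stays visible but behind a warning.
    result["warning"] = ("This content may be graphic or violent."
                         if graphic else None)
    return result
```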
Impersonation
Using an internal algorithm called the “MisRep Algorithm,” Koo continuously scans the platform for profiles that use the names, photos, videos, or descriptions of well-known people in an attempt to impersonate them. When such a profile is detected, the system promptly removes the celebrity's images and videos from it and flags the account for further review of any potentially inappropriate behavior.
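The impersonation workflow, scan a profile, strip matched celebrity media, flag the account, can be sketched as below. The matching logic and data model are assumptions for illustration; only the two actions (remove media, flag for review) come from the article.

```python
# Hypothetical sketch of the "MisRep" scan described above. In practice the
# match would use perceptual hashing or face recognition, not exact lookup.

def screen_profile(profile: dict, known_celebrity_media: set) -> dict:
    """Return a copy of the profile with celebrity media removed and,
    if any was found, a flag set for human review."""
    result = dict(profile)
    media = result.get("media", [])
    matched = [m for m in media if m in known_celebrity_media]
    if matched:
        result["media"] = [m for m in media if m not in known_celebrity_media]
        result["flagged_for_review"] = True
    return result
```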
Misinformation and Disinformation
The company said it built an algorithm called the “Misinfo and Disinfo Algorithm” that continuously and automatically scans all viral and reported posts on the platform against public and private databases of known fake news. When the algorithm identifies misinformation or disinformation in a post, it labels the post, making it harder for false information to spread on the platform.
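The check-and-label step described above can be sketched as follows. This is an assumption-laden illustration: the article says only that posts are checked against fake-news sources and labelled, so the exact-match lookup and field names here are placeholders.

```python
# Illustrative sketch of the misinformation-labelling step described above.
# A real system would use fuzzy claim matching against fact-check feeds,
# not an exact set lookup.

def label_post(post: dict, fact_check_db: set) -> dict:
    """Return a copy of the post, labelled if its claim matches known fake news."""
    result = dict(post)
    if result.get("claim") in fact_check_db:
        result["label"] = "Misinformation"
        result["reduced_distribution"] = True  # limits further spread
    return result
```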