ChatGPT may moderate illegal content on sites like Facebook


GPT-4 — the large language model (LLM) that powers ChatGPT Plus — may soon take on a new role as an online moderator, policing forums and social networks for nefarious content that shouldn’t see the light of day. That’s according to a new blog post from ChatGPT developer OpenAI, which says this could offer “a more positive vision of the future of digital platforms.”

By enlisting artificial intelligence (AI) instead of human moderators, OpenAI says GPT-4 can enable “much faster iteration on policy changes, reducing the cycle from months to hours.” On top of that, “GPT-4 is also able to interpret rules and nuances in long content policy documentation and adapt instantly to policy updates, resulting in more consistent labeling,” OpenAI claims.


For example, the blog post explains that moderation teams could assign labels to content to explain whether it falls within or outside a given platform’s rules. GPT-4 could then take the same data set and assign its own labels, without knowing the answers beforehand.

The moderators could then compare the two sets of labels and use any discrepancies to reduce confusion and add clarification to their rules. In other words, GPT-4 could act as an everyday user and gauge whether the rules make sense.
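
OpenAI’s blog post doesn’t include code, but the loop it describes is easy to picture. Below is a minimal sketch in Python, assuming the official `openai` client library; the policy text, the ALLOWED/VIOLATION label set, and the example posts are hypothetical placeholders, not OpenAI’s actual setup.

```python
# A minimal sketch of the label-and-compare loop described above,
# assuming the standard `openai` Python client. The policy wording,
# labels, and sample posts are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """Label each post with exactly one of: ALLOWED, VIOLATION.
A post is a VIOLATION if it threatens violence or offers illegal
goods for sale; otherwise it is ALLOWED. Reply with the label only."""

def gpt_label(post: str) -> str:
    """Ask the model to apply the written policy to a single post."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": post},
        ],
        temperature=0,  # keep labeling as deterministic as possible
    )
    return response.choices[0].message.content.strip()

# Human-assigned labels for the same data set (hypothetical examples).
human_labeled = [
    ("Selling unlicensed firearms, DM me.", "VIOLATION"),
    ("Any tips for repotting succulents?", "ALLOWED"),
]

# Disagreements between the two label sets are the useful signal:
# they flag places where the written policy may need clarification.
for post, human_label in human_labeled:
    model_label = gpt_label(post)
    if model_label != human_label:
        print(f"Discrepancy on {post!r}: human={human_label}, model={model_label}")
```

In practice, the interesting output isn’t the labels themselves but the disagreements, which OpenAI suggests moderation teams can feed back into clearer policy wording before the next pass.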

The human toll


Right now, content moderation on various websites is performed by humans, which exposes them to potentially illegal, violent, or otherwise harmful content on a regular basis. We’ve repeatedly seen the awful toll that content moderation can take on people, with Facebook paying $52 million to moderators who suffered from PTSD due to the traumas of their job.

Reducing the burden on human moderators could help to improve their working conditions, and since AIs like GPT-4 are immune to the kind of mental stress that humans feel when handling troublesome content, they could be deployed without worrying about burnout and PTSD.

However, it does raise the question of whether using AI in this manner would result in job losses. Content moderation is not always a fun job, but it is a job nonetheless, and if GPT-4 takes over from humans in this area, there will likely be concern that former content moderators will simply be made redundant rather than reassigned to other roles.

OpenAI does not mention this possibility in its blog post, leaving the decision to individual content platforms. But that silence might not do much to allay fears that AI will be deployed by large companies simply as a cost-saving measure, with little concern for the aftermath.

Still, if AI can reduce or eliminate the mental devastation faced by the overworked and underappreciated teams who moderate content on the websites used by billions of people every day, there could be some good in all this. It remains to be seen whether that will be tempered by equally devastating redundancies.
