OpenAI said its technology is capable of moderating content, a task that could help businesses become more efficient and that highlights a possible use case for buzzy artificial-intelligence tools that haven’t yet generated huge revenue for many companies.
The startup said its latest technology, GPT-4, which powers ChatGPT, can be used both to develop policies on appropriate content and to label or make decisions about posts more quickly. The company has been testing the technology and has invited customers to experiment with it as well. OpenAI said its tools can help businesses perform six months of work in just a day or so.
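As a rough illustration of how a customer might apply this, the sketch below makes a single policy-based labeling call through OpenAI's Python SDK. The policy text, label set, and model name are illustrative assumptions for this example, not OpenAI's published moderation setup.

```python
# Minimal sketch: classify one post against a written content policy
# using GPT-4 via the OpenAI Python SDK (v1+). The policy, labels, and
# model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """\
Label the user post with exactly one of: ALLOW, REVIEW, REMOVE.
REMOVE: direct threats or calls for violence.
REVIEW: borderline content a human should see.
ALLOW: everything else.
Reply with the label on the first line and a one-sentence reason on the second.
"""

def label_post(post: str) -> str:
    """Ask the model to judge one post against the policy above."""
    response = client.chat.completions.create(
        model="gpt-4",   # assumed; any capable chat model would do
        temperature=0,   # keep labels stable enough to audit
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(label_post("That referee should never work another game."))
```

In practice, and in line with OpenAI's own caveats below, a workflow like this would have human reviewers sampling the model's labels rather than trusting them outright.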
Andrea Vallone, who works on policy at OpenAI, said the startup has found that GPT-4 does an efficient job of moderation, a role that often falls to small armies of human workers. Those jobs can sometimes be traumatic for the people performing them. At the same time, many large companies, including Meta Platforms Inc., already use artificial intelligence to help with moderation alongside staffers. Using technology to interpret the nuances of human writing can be challenging, and OpenAI has stressed that the process should not be fully automated.
Drafting content-moderation policies and labeling content are often lengthy undertakings. OpenAI’s tools will help reduce the “delta between the need and the solution,” Vallone said.
Ideally, the technology will free up a company’s human workers to focus on more complicated decisions, such as the most extreme cases of potential content violations and how policies should be refined, she said. “We’ve continued to have human review to verify some of the model judgments,” she said. “I think it’s important to have humans in the loop always.”