Google Bard Workers Expected to Audit AI Answers in 3 Minutes: Docs

July 12, 2023

Google's Bard is trained by contractors who are expected to rapidly review answers, per Bloomberg.

Instruction documents showed the contractors were given deadlines as short as three minutes.

They work under pressure and are given minimal training, Bloomberg added.


Google's Bard is reportedly trained by thousands of contractors under pressure to review answers generated by the AI chatbot in as little as three minutes.

The accuracy of Google's rival to OpenAI's ChatGPT depends on contractors at companies such as Appen and Accenture, who are given minimal training and earn as little as $14 an hour, Bloomberg reported, citing several contractors. The workers requested anonymity, the publication added.

Bard was first announced by Google in February after the launch of ChatGPT put the company on high alert. The OpenAI chatbot accumulated 100 million users within two months and posed a direct threat to Google's search business as Microsoft poured billions into OpenAI.

Though chatbots like Bard and ChatGPT depend on the large language models underlying them to generate responses, humans are also involved in reviewing responses to ensure they're reliable and accurate.

However, the workload of the humans reviewing Bard's responses has grown larger and more complex, Bloomberg reported, citing internal documents and six contractors.

The instruction documents were published by Bloomberg and reviewed by Insider.

"As it stands right now, people are scared, stressed, underpaid, don't know what's going on," one contractor told Bloomberg. "And that culture of fear is not conducive to getting the quality and the teamwork that you want out of all of us."

The report shines a light on how seriously Google is taking the threat from OpenAI, as an AI arms race accelerates between the two companies, each hoping to take the lead in rolling out AI to the world.

The contractors' tasks often involve rating responses for "helpfulness" on a scale from "not at all helpful" to "extremely helpful," partly by gauging how up-to-date each response is.

In a statement, a Google spokesperson told Insider: "Connecting people to high-quality information is core to our mission. We undertake extensive work to build our AI products responsibly, including rigorous testing, training, and feedback processes we've honed for years to emphasize factuality and reduce biases.

"Human evaluation – from individuals internal and external to Google – is one of many approaches we use to improve our products."

The spokesperson added that "ratings don't directly impact the output of our models and they are by no means the only way we promote accuracy."

"Teams across Google with specialized skill sets – from engineering, to user experience, to trust and safety experts – use a range of techniques to build these products and continuously improve their quality and accuracy," the spokesperson said.

Appen and Accenture did not immediately respond to Insider's request for comment.

Source: Business Insider