Use Cleanlab to Improve LLMs: Find Errors in Human Feedback in the Anthropic RLHF Dataset

April 11, 2023
  • Chris Mauck
  • Jonas Mueller

This blog uses Cleanlab Studio (an AI platform for detecting and fixing issues in data) to find mistakes in the human feedback (HF) provided during RLHF training of LLMs like Anthropic’s Claude. This blog is part of our CSA (Cleanlab Studio Audit) series, our way of informing the community about issues in popular datasets. To glean insights about a given dataset, we quickly run it through Cleanlab Studio.

Reinforcement Learning from Human Feedback Data

With Reinforcement Learning from Human Feedback (RLHF) becoming the main way to train AI assistants, it’s great to see organizations like Anthropic making their RLHF dataset publicly available (released as hh-rlhf on Hugging Face Datasets). We discovered various problems in this dataset just by quickly running it through Cleanlab Studio.

Like other RLHF datasets, every example in this one includes an input prompt and two outputs generated by the LLM: a chosen output and a rejected output, where a human rater preferred the former over the latter. But Cleanlab Studio reveals that some of the rejected outputs in this dataset are unequivocally better than the chosen outputs, because humans make mistakes. Below are a couple of the problematic examples detected in the dataset.
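
If you want to inspect the raw data yourself, here is a minimal sketch using the Hugging Face datasets library (loading the data this way is not required to use Cleanlab Studio). Each record stores the full chosen and rejected conversation transcripts, with the shared prompt embedded in both fields:

```python
# Minimal sketch: peek at the structure of the hh-rlhf preference data.
from datasets import load_dataset

ds = load_dataset("Anthropic/hh-rlhf", split="train")

print(ds.column_names)          # ['chosen', 'rejected']
print(ds[0]["chosen"][:300])    # prompt + the response the annotator preferred
print(ds[0]["rejected"][:300])  # same prompt + the response the annotator rejected
```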

Example 1

One problematic example in the dataset

It’s clear here that the human-rejected output answers the question of how to make a piñata, whereas the human-chosen output merely describes what a piñata is (and is not actually a better output). The human who provided feedback simply made a mistake here!

Example 2

Another problematic example in the dataset

Again, it’s clear that the human-chosen output for this prompt is not truly more desirable than the human-rejected output (unless this LLM was intended to function as a dietitian…).

Using Cleanlab Studio, we found many more such problematic examples where the human-chosen output merely describes the subject of the prompt rather than actually answering the query. Fixing such obvious data problems will allow much more reliable Large Language Models to be produced via RLHF.
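
As a rough illustration of the underlying idea (this is a sketch using the open-source cleanlab library, not how Cleanlab Studio works internally): if you frame each preference pair as a two-class example and score it with your own reward/preference model, pairs where the model confidently contradicts the human annotator become natural candidates for review. The reward-model probabilities below are placeholders you would replace with real model outputs.

```python
# Illustrative sketch (open-source cleanlab, hypothetical reward-model scores):
# flag preference pairs whose human label looks inconsistent with model scores.
import numpy as np
from cleanlab.filter import find_label_issues

rng = np.random.default_rng(0)
n_pairs = 1000

# Placeholder probabilities that response A (col 0) vs. response B (col 1) is
# better; in practice, these come from a trained preference/reward model.
pred_probs = rng.dirichlet(alpha=[2, 1], size=n_pairs)

# Human feedback: 0 = annotator preferred response A, 1 = preferred response B.
human_labels = rng.integers(0, 2, size=n_pairs)

# Indices of pairs most likely to carry an erroneous human preference,
# ranked with the most suspicious pairs first.
suspect_pairs = find_label_issues(
    labels=human_labels,
    pred_probs=pred_probs,
    return_indices_ranked_by="self_confidence",
)
print(suspect_pairs[:10])
```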

How the data looks in the Cleanlab Studio interface

To find & fix such issues in almost any dataset (text, image, tabular, etc.), just run it through Cleanlab Studio. Try this universal Data-Centric AI solution for free!
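
For those who prefer a programmatic workflow, here is a rough sketch using the cleanlab_studio Python client; the method names (Studio.upload_dataset, Studio.download_cleanlab_columns) and the placeholder API key / cleanset ID are assumptions based on the client at the time of writing and may differ in your version, and the project itself is created in the Cleanlab Studio web app:

```python
# Hedged sketch of a programmatic Cleanlab Studio workflow; exact client
# methods may differ across cleanlab_studio versions.
from cleanlab_studio import Studio
from datasets import load_dataset

df = load_dataset("Anthropic/hh-rlhf", split="train").to_pandas()

studio = Studio("<your API key>")  # personal API key from your Cleanlab Studio account
dataset_id = studio.upload_dataset(df, dataset_name="hh-rlhf-train")

# After creating a project on the uploaded dataset in the web app, use its
# cleanset ID (placeholder below) to pull the detected issues back into Python.
cleanset_id = "<your cleanset ID>"
issue_columns = studio.download_cleanlab_columns(cleanset_id)
print(issue_columns.head())
```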
