Letter from the CEO: Announcing our Series A and Cleanlab's Trustworthy Language Model

October 10, 2023
  • Curtis Northcutt

When I was a child, I had an elementary school teacher who would make up answers to questions she didn’t know, to spare herself the embarrassment of admitting that to children. My young mind was a sponge, and I quickly learned many wrong things… and made many mistakes.

To avoid those mistakes, I spent more and more time trying to avoid learning wrong things. Over time, I discovered that the key to accurate and reliable systems of thought had less to do with how good my memory was and more to do with how well I could determine whether the information presented to me was accurate. I discovered it’s harder to unlearn than to learn.

Years later, I spent the bulk of my twenties at MIT solving this problem during my PhD… inventing a family of theory and algorithms that enable any labeled dataset to automatically find errors in itself, so that analytics and AI systems can be built on top of clean data, avoiding learning from bad data in the first place and thereby producing much more accurate and reliable results. The theory and algorithms I developed became a sub-field of machine learning called confident learning, realized in a small open-source package called “cleanlab”. That was five years ago.
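The core intuition behind confident learning can be stated simply: compare each example’s given label against the model’s predicted probabilities, and flag the label as a likely error when the model’s confidence in that label falls below the class’s average self-confidence. Here is a minimal pure-Python sketch of that idea — a deliberate simplification for illustration, not the open-source cleanlab library’s actual implementation:

```python
# Simplified sketch of confident-learning-style label error detection.
# NOT the cleanlab library's implementation -- just the core idea:
# flag examples whose predicted probability for their given label
# falls below that class's average self-confidence threshold.

def find_likely_label_errors(labels, pred_probs):
    """labels: list of int class labels; pred_probs: list of probability vectors."""
    n_classes = len(pred_probs[0])
    # Per-class threshold: mean predicted probability of class c over
    # the examples actually labeled c (the class's self-confidence).
    thresholds = []
    for c in range(n_classes):
        confs = [p[c] for y, p in zip(labels, pred_probs) if y == c]
        thresholds.append(sum(confs) / len(confs) if confs else 0.0)
    # Flag examples whose confidence in their own label is below threshold.
    return [p[y] < thresholds[y] for y, p in zip(labels, pred_probs)]

# Example: the third data point is labeled 0, but the model is confident it's 1.
labels = [0, 0, 0, 1, 1]
pred_probs = [
    [0.9, 0.1],
    [0.8, 0.2],
    [0.1, 0.9],  # likely mislabeled
    [0.2, 0.8],
    [0.2, 0.8],
]
issues = find_likely_label_errors(labels, pred_probs)
print(issues)  # only the third entry is flagged
```

The real method also handles class imbalance, noisy probability estimates, and out-of-sample prediction via cross-validation, which this sketch omits.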

While I was developing Cleanlab at MIT, I worked and interned at Google, Oculus, Amazon, Microsoft, and Meta/Facebook (details here). Over and over, I saw the promise of data-driven analytics, LLM, and AI solutions choke on ambiguous data, label errors, outliers, duplicates, and more. Over time, my “PhD internships” became “enterprise POCs”: Cleanlab was the answer to the biggest and most expensive problem in enterprise AI and analytics: trust and reliability.

There are no LLMs without data. There is no AI without data. There is no analytics without data. Learning cannot exist without data to learn from. The two are inextricably connected.

Today, it’s my great privilege to announce our $30M raise in combined seed and Series A funding, enabling Cleanlab to go to market with automated data curation for enterprise AI and analytics, beyond the hundreds of companies already using Cleanlab today. The Series A funding was co-led by Menlo Ventures and TQ Ventures, with participation from existing investor Bain Capital Ventures, new investor Databricks Ventures, and angel investors including Freddy Kerrest (founder of Okta), AME Cloud Ventures (founder of Yahoo), Preston-Werner Ventures (founder of GitHub), Essence VC, Lane VC, Avid Larizadeh Duggan, and Kearny Jackson.

Addressing the biggest problem in analytics and AI: reliability

Every day, more companies rely on data to make decisions. Their customers rely on products built with machine learning models trained on unreliable data. Enterprises rely on analytics and business intelligence to improve internal operations, allocate hiring budgets, determine salaries, and pursue investments. Enterprise revenue and customer success are tied (now more than ever) to data-driven analytics, AI, and LLM models, but enterprise data is full of issues: ambiguous data, wrongly labeled data, outliers, duplicates, and more. Bad data costs the U.S. alone over $3 trillion annually, and 80% of the time enterprises spend on AI solutions goes into the data that drives them.

We have a singular mission at Cleanlab: to make the world’s data-driven systems work reliably by improving the data they rely on. We solve this with Cleanlab Studio, a no-code, automated data curation and auto-improvement platform that integrates easily with enterprise ML and data pipelines, is simple to set up and use through APIs and web interfaces, and supports every major data modality (text, image, tabular/CSV/Excel/JSON, etc.).

Cleanlab Studio automatically adds smart metadata (label issues, outliers, ambiguity, etc.) that increases the dollar value of every data point in enterprise datasets, improving the reliability of data-driven analytics, LLM, and AI decisions.

We invent our data-centric AI algorithms in-house, building on the three founders’ computer science PhDs at MIT.

Announcing Cleanlab TLM to mitigate hallucinations and add reliability to LLM outputs

I’m ecstatic to use this funding announcement to introduce the Cleanlab TLM (Trustworthy Language Model), now available within Cleanlab Studio. Cleanlab TLM addresses hallucinations and unreliable LLM outputs by providing a trustworthiness score alongside high-quality outputs from LLMs like ChatGPT, Falcon, and similar models. Cleanlab TLM extends Cleanlab Studio’s capability to add intelligent metadata, helping automate reliability and quality assurance for systems that rely on LLM outputs, synthetic data, and generated content. Cleanlab’s Trustworthy Language Model is available to try in beta today with Cleanlab Studio at https://cleanlab.ai.

Related Blogs
Overcoming Hallucinations with the Trustworthy Language Model
Announcing Cleanlab's Trustworthy Language Model. TLM overcomes hallucinations, the biggest barrier to productionizing GenAI, by adding a trust score to every LLM output.
Read more
Benchmarking Hallucination Detection Methods in RAG
Evaluating state-of-the-art tools to automatically catch incorrect responses from a RAG system.
Read more
Letter from the CEO: Announcing Our Seed Funding and the Launch of Cleanlab Studio for Enterprise
Cleanlab Studio for Enterprise launches to automate data curation for LLMs and the modern AI stack with $5 million in seed funding from Bain Capital Ventures.
Read more