One of the more prolific AI and machine learning development platforms, Weights & Biases has secured a new tranche of cash from ex-GitHub CEO Nat Friedman and former Y Combinator partner Daniel Gross.
Friedman and Gross, alongside existing investors Coatue, Insight Partners, Felicis, Bond, BloombergBeta and Sapphire, have invested $50 million in Weights & Biases in a strategic round that values the company at $1.25 billion. Bringing the startup’s total raised to $250 million, the investment comes as Weights & Biases prepares to launch Prompts, a new product designed to help users monitor and evaluate the performance of large language models (LLMs) such as OpenAI’s GPT-4.
The $50 million investment is far smaller than Weights & Biases’ previous haul, its Series C, which came in at around $135 million. But Lavanya Shukla, VP of growth at Weights & Biases, described it as opportunistic.
“We believe that giving employees machine learning tools should be table-stakes for CTOs and their teams,” she told TechCrunch in an email interview. “By tackling testing, security and reliability, Weights & Biases sits at a critical point along the development of a successful machine learning model.”
Lukas Biewald and Chris Van Pelt co-founded Weights & Biases in 2017, after spending years working on tools for machine learning engineers and data scientists. The two previously launched Figure Eight, formerly known as CrowdFlower, to recruit crowdworkers to label training data for machine learning algorithms. (Figure Eight was acquired by Appen in 2019 for $175 million.)
“The two identified a bigger problem: That machine learning practitioners didn’t have a great system of record for their experiments,” Shukla said. “This highly experimental yet crucial science was being logged in spreadsheets and degraded screenshots.”
So Biewald and Van Pelt joined forces with a third co-founder, Shawn Lewis, a developer and Google alumnus, in an attempt to solve that problem. Over the next several years, they built the MVP for Weights & Biases: workflows to support the machine learning development life cycle.
Weights & Biases occupies a category of platforms known as MLOps, or machine learning operations, which enable data scientists to create new machine learning models and run them through repeatable, automated workflows that deploy them into production. As the demand for AI has grown, so, too, has the demand for MLOps platforms. Allied Market Research estimates that the MLOps segment will be worth $23.1 billion by 2031.
New MLOps platforms emerge on the regular. To name a few, there’s Seldon, FedML, Qwak, Galileo, Striveworks, Arize, Comet and Tecton. That’s ignoring offerings from incumbents like Azure, AWS and Google Cloud.
But what differentiates Weights & Biases is its approach to MLOps, Shukla claims.
First, all of Weights & Biases’ products were co-designed with partners and customers to ensure the tools meet their needs, Shukla says. Second, the platform places an emphasis on tools to interrogate the datasets used to train models, allowing customers to check for issues that might arise, like biases and the presence of personally identifiable information — ideally before those datasets go into production.
“Weights & Biases is the leading machine learning platform to help developers build better models faster,” Shukla said. “We build lightweight, interoperable tools to quickly track experiments, version and iterate on datasets, evaluate model performance, reproduce models, visualize results and spot regressions, and share findings with colleagues. This lets machine learning engineers quickly iterate on their machine learning pipelines with the confidence that their datasets and models are tracked and versioned in a reliable system of record.”
Whatever other advantages Weights & Biases has, first-mover advantage is almost certainly one of them.
The platform’s tooling is integrated into over 20,000 open source repositories, Shukla claims, and Weights & Biases has been cited in hundreds of machine learning academic research papers. It’s also the toolset of choice for high-profile, well-funded generative AI model builders, including OpenAI, Aleph Alpha, Cohere, Anthropic and Hugging Face.
“OpenAI trains all models on Weights & Biases. With hundreds of employees running thousands of experiments, it is critical that OpenAI has a way to test, identify issues and debug their models quickly,” Shukla said. “OpenAI also has to do a lot of training runs on small subsets of their data. Thanks to Weights & Biases, they were able to train GPT-4 faster.”
Beyond the generative AI cohort, Weights & Biases has 700,000 users — up from 100,000 in 2021 — and more than 1,000 paying customers. Its team, meanwhile, has grown to over 200 people, most based at its headquarters in San Francisco.
Weights & Biases is aiming to grow that customer base further with Prompts, the aforementioned new product, which allows users to interrogate an LLM’s outputs and fine-tune the models themselves.
“LLMs may reduce the number of people you need to train models, but they will increase the number of people who companies need to fine-tune, interface and build apps with those models,” Shukla said. “The goal of Prompts is also to serve a new class of users and change how big labs build machine learning models. In addition to prompt engineers and fine-tuners, researchers and companies building unique internal models will have more tools to improve their models.”
As for Weights & Biases, it’ll have a reason to continue building out its MLOps suite.