AI Training Data Scarcity Isn’t the Problem It’s Made Out to Be
By: mpost io|2025/05/06 23:30:01
Today’s artificial intelligence models can do some amazing things. It’s almost as if they have magical powers, but of course they don’t. Rather than magic, AI models run on data – lots and lots of it. But there are growing concerns that a scarcity of this data might cause AI’s rapid pace of innovation to run out of steam.

In recent months, multiple experts have warned that the world is exhausting the supply of fresh data to train the next generation of models. A shortage would be especially challenging for the development of large language models, the engines that power generative AI chatbots and image generators. They’re trained on vast amounts of data, and each new leap in performance requires more and more of it to fuel their advances.

These concerns over AI training data scarcity have already pushed some businesses toward alternative solutions, such as using AI to create synthetic training data, partnering with media companies to use their content, and deploying “internet of things” devices that provide real-time insights into consumer behavior.

However, there are convincing reasons to think these fears are overblown. Most likely, the AI industry will never be short of data, because developers can always fall back on the single biggest source of information the world has ever known – the public internet.

Mountains of Data

Most AI developers already source their training data from the public internet. OpenAI’s GPT-3 model, a forerunner of the engine behind the viral ChatGPT chatbot that first introduced generative AI to the masses, is said to have been trained largely on data from Common Crawl, an archive of content sourced from across the public internet. Some 410 billion tokens’ worth of information, drawn from virtually everything posted online up to that point, was fed into the model, giving it the knowledge it needed to respond to almost any question we could think to ask.
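To make the “410 billion tokens” figure above concrete, here is a toy sketch of how raw text volume maps to training tokens. Real models use subword tokenizers such as byte-pair encoding, so this whitespace split is only a crude stand-in for the idea, not how GPT-3 actually counted its data.

```python
# Rough illustration of how raw text volume maps to training "tokens".
# Production tokenizers (e.g. byte-pair encoding) split text into subwords;
# a whitespace split is a simple approximation for intuition only.

def estimate_tokens(text: str) -> int:
    """Approximate token count: one token per whitespace-separated word."""
    return len(text.split())

sample = "Common Crawl archives petabytes of public web pages every month."
print(estimate_tokens(sample))  # 10 words -> roughly 10 tokens
```

Scaled up, a corpus of hundreds of billions of tokens corresponds to on the order of a terabyte of plain text, which is why only the public web comes close to supplying it.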
Web data is a broad term covering basically everything posted online, including government reports, scientific research, news articles and social media content. It’s an amazingly rich and diverse dataset, reflecting everything from public sentiment to consumer trends, the state of the global economy and DIY instructional content.

The internet is an ideal stomping ground for AI models, not just because it’s so vast, but also because it’s so accessible. Using specialized tools such as Bright Data’s Scraping Browser, it’s possible to collect data from millions of websites in real time, including many that actively try to prevent bots from doing so. With features including Captcha solvers, automated retries, APIs, and a vast network of proxy IPs, developers can sidestep even the most robust bot-blocking mechanisms employed on sites like eBay and Facebook, and help themselves to vast troves of information. Bright Data’s platform also integrates with data processing workflows, allowing for seamless structuring, cleaning and training at scale.

It’s not actually clear how much data is available on the internet today. In 2018, International Data Corp. estimated that the total amount of data posted online would reach 175 zettabytes by the end of 2025, while a more recent figure from Statista ups that estimate to 181 zettabytes. Suffice to say, it’s a mountain of information, and it’s getting exponentially bigger over time.

Challenges and Ethical Questions

Developers still face major challenges in feeding this information into their AI models. Web data is notoriously messy and unstructured, riddled with inconsistencies and missing values. It requires intensive processing and “cleaning” before algorithms can make sense of it.
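The “cleaning” step described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual pipeline; the record schema (`url`, `text`) is hypothetical, and the two regular expressions stand in for the much heavier normalization real pipelines apply.

```python
import re

# Minimal sketch of cleaning scraped web records: strip leftover HTML
# markup, collapse whitespace, and drop records with missing values.
# The field names here are invented for illustration.

def clean_record(record: dict):
    """Return a cleaned copy of the record, or None if 'text' is missing."""
    text = record.get("text")
    if not text:
        return None  # missing value: discard rather than train on it
    text = re.sub(r"<[^>]+>", " ", text)      # remove leftover HTML tags
    text = re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace
    return {**record, "text": text}

raw = [
    {"url": "https://example.com/a", "text": "<p>Hello,   world!</p>"},
    {"url": "https://example.com/b", "text": None},
]
cleaned = [r for r in (clean_record(x) for x in raw) if r]
print(cleaned[0]["text"])  # "Hello, world!"
```

Even this toy version shows why cleaning is costly at scale: every rule must run over billions of records, and each discarded record shrinks the usable corpus.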
In addition, web data often contains inaccurate and irrelevant details that can skew the outputs of AI models and fuel so-called “hallucinations.” There are also ethical questions around scraping internet data, especially with regard to copyrighted materials and what constitutes “fair use.” While companies like OpenAI argue they should be allowed to scrape any information that’s freely available to consume online, many content creators say that doing so is far from fair, as those companies are ultimately profiting from their work – while potentially putting them out of a job.

Despite the ongoing ambiguity over which web data can and can’t be used for training AI, there’s no denying its importance. In Bright Data’s recent State of Public Web Data Report, 88% of developers surveyed agreed that public web data is “critical” for the development of AI models, thanks to its accessibility and incredible diversity. That explains why 72% of developers are concerned that this data may become increasingly difficult to access over the next five years, due to the efforts of Big Tech companies like Meta, Amazon and Google, which would much prefer to sell their data exclusively to high-ticket enterprise partners.

The Case for Using Web Data

The above challenges explain why there has been so much talk about synthetic data as an alternative to what’s available online. An emerging debate weighs the benefits of synthetic data against internet scraping, with some solid arguments in favor of the former. Advocates of synthetic data point to benefits such as increased privacy, reduced bias and greater accuracy. Moreover, it’s ideally structured for AI models from the get-go, meaning developers don’t have to invest resources in reformatting and labeling it before models can read it.
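The synthetic-data pitch above can be illustrated with a toy generator. The schema, value ranges and probabilities below are entirely invented; the point is only that synthetic records come out already structured and labeled, with no real person’s data involved and no cleaning step required.

```python
import random

# Toy illustration of synthetic training data: every record is emitted
# already structured, already labeled, and privacy-free. The schema and
# distributions here are made up for this example.

random.seed(42)  # reproducible output for the sketch

def synthetic_purchase() -> dict:
    """Emit one structured, ready-to-use synthetic training record."""
    return {
        "age_bracket": random.choice(["18-25", "26-40", "41-65"]),
        "basket_value": round(random.uniform(5.0, 200.0), 2),
        "repeat_customer": random.random() < 0.3,
    }

dataset = [synthetic_purchase() for _ in range(1000)]
print(len(dataset))  # 1000 records, no scraping or cleaning needed
```

The flip side, discussed next, is that such data only reflects whatever distribution its author chose to encode.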
On the other hand, over-reliance on synthetic datasets can lead to model collapse, and in any case there is an equally strong argument for the superiority of public web data. For one thing, it’s hard to beat the sheer diversity and richness of web-based data, which is invaluable for training AI models that must handle the complexity and uncertainty of real-world scenarios. It can also help create more trustworthy AI models, thanks to its mix of human perspectives and its freshness, especially when models can access it in real time.

In one recent interview, Bright Data’s CEO Or Lenchner stressed that the best way to ensure accuracy in AI outputs is to source data from a variety of public sources with established reliability. When an AI model relies on only one or a handful of sources, its knowledge is likely to be incomplete, he argued. “Having multiple sources provides the ability to cross-reference data and build a more balanced and well-represented dataset,” Lenchner said.

What’s more, developers can be more confident that it’s acceptable to use data sourced from the web. In a legal decision last winter, a federal judge ruled in favor of Bright Data, which had been sued by Meta over its web scraping activities. The judge found that while Facebook’s and Instagram’s terms of service prohibit users with an account from scraping those websites, there is no legal basis to bar logged-off users from accessing publicly available data on those platforms.

Public data also has the advantage of being organic. Synthetic datasets are more likely to omit smaller cultures and the intricacies of their behavior, whereas public data generated by real people is as authentic as it gets, translating into better-informed AI models and superior performance.

No Future Without the Web

Finally, it’s important to note that the nature of AI is changing too.
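The cross-referencing idea Lenchner describes can be sketched as a simple majority vote across sources. This is a hypothetical illustration of the principle, not Bright Data’s method; the source values are invented, and real systems would also weight sources by their established reliability.

```python
from collections import Counter

# Sketch of cross-referencing: when several independent sources report
# a value, keep the one most of them agree on, treating lone outliers
# as likely errors. Inputs below are made up for illustration.

def cross_reference(values: list) -> str:
    """Return the value reported by the majority of sources."""
    return Counter(values).most_common(1)[0][0]

reports = ["open", "open", "closed"]  # three sources, one outlier
print(cross_reference(reports))  # "open"
```

A single-source model would have no way to detect the outlier at all, which is the incompleteness Lenchner warns about.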
As Lenchner pointed out, AI agents are playing a much greater role in AI use, helping to gather and process data for AI training. The advantage goes beyond eliminating burdensome manual work for developers, he said: the speed at which AI agents operate means AI models can expand their knowledge in real time. “AI agents can transform industries as they allow AI systems to access and learn from constantly changing datasets on the web instead of relying on static and manually processed data,” Lenchner said. “This can lead to banking or cybersecurity AI chatbots, for example, that are capable of coming up with decisions that reflect the most recent realities.”

These days, almost everyone is accustomed to using the internet constantly. It has become a critical resource, giving us access to thousands of essential services and enabling work, communication and more. If AI systems are ever to surpass the capabilities of humans, they need access to the same resources, and the web is the most important of them all.

The post AI Training Data Scarcity Isn’t the Problem It’s Made Out to Be appeared first on Metaverse Post.