Explainers
May 13, 2025
Defining New Primitives in an AI-Native Economy

Introduction 

Decentralized Finance (DeFi), built on a set of simple yet powerful economic primitives, ignited an exponential growth story and revolutionized finance by turning blockchain networks into global, permissionless markets. In DeFi’s rise, a few key metrics became the lingua franca of value: Total Value Locked (TVL), Annual Percentage Yield (APY/APR), and Liquidity. These simple indicators galvanized participation and trust. For example, in 2020 DeFi’s TVL – the dollar value of assets locked in protocols – exploded 14× and then quadrupled again in 2021, surpassing $112 billion at its peak. High yields (some platforms advertised APYs of 3,000% during the yield farming craze) drew in liquidity, and the depth of liquidity in pools signaled lower slippage and more efficient markets. In short, TVL told us “how much skin is in the game,” APR told us “how much you can earn,” and liquidity indicated “how easily assets trade.” Despite their flaws and eventual distortions, these metrics offered a clear, quantifiable way to assess value and growth, bootstrapping a financial ecosystem from zero to billions in value practically overnight. By turning user participation into direct financial opportunity, DeFi created a self-reinforcing flywheel of adoption that made it go viral, driving mass participation.

Today, AI stands at a similar crossroads. But unlike DeFi, the AI narrative has been dominated by large, generalized models trained on vast internet-scale datasets. These models often fail to deliver effective results for niche domains, specialized tasks, or individual user needs. Their one-size-fits-all approach is powerful yet brittle; generalized yet misaligned. This paradigm needs to shift. The next era of AI should not be defined by the sheer size or generality of models. Instead, the focus should shift bottom-up, toward smaller, highly specialized models. Such tailored AI requires an entirely new kind of data: high-quality, human-aligned, and domain-specific. But acquiring this data isn’t as straightforward as scraping the web; it requires active, intentional contributions from individuals, domain experts, and communities.

To drive this next era of specialized, human-aligned AI, we need an incentive flywheel akin to what DeFi built for finance. This means introducing new, AI-native primitives that measure data quality, model performance, agent reliability, and alignment incentives: metrics that directly reflect the true value of data as an asset, not merely as an input.

In this article, we'll explore precisely these new primitives that can form the backbone of an AI-native economy. We'll illustrate how AI can flourish if we build the right economic infrastructure: one that generates new, high-quality data, properly incentivizes its creation and usage, and is built around individuals. We'll also examine how platforms like LazAI are pioneering these AI-native frameworks, leading the charge in establishing a new paradigm to price and reward data, to fuel the next great leap in AI innovation.

DeFi’s Incentive Flywheel: TVL, Yield, and Liquidity – A Quick Recap

DeFi didn’t rise by accident – it grew because its design made participation profitable and transparent. Key metrics such as Total Value Locked (TVL), Annual Percentage Yield (APY/APR), and Liquidity weren’t just numbers; they were the primitives that aligned user behavior with network growth. Together, these metrics formed a virtuous cycle that attracted users and capital, which in turn spurred further innovation.

  • Total Value Locked (TVL): TVL measures the total capital deposited in DeFi protocols (in lending pools, liquidity pools, etc.). It became the de facto “market cap” for DeFi projects. A rapidly growing TVL was seen as a sign of user trust and protocol health. For instance, the DeFi boom of 2020–2021 saw TVL climb from under $10B to over $100B in a year, and to $150 billion by 2023, demonstrating how much value participants were willing to lock into decentralized apps. High TVL created a gravity effect: more capital meant more liquidity and stability, attracting yet more users in search of opportunities. As critics noted, chasing TVL for its own sake led some protocols to offer unsustainable incentives – essentially “buying” TVL – which in turn could mask inefficiencies. Still, without TVL, the early DeFi narrative would lack a concrete way to track adoption.

  • Annual Percentage Yield (APY) / Rate (APR): The promise of yield turned participation into tangible opportunity. DeFi protocols began offering eye-popping APRs to those who supplied liquidity or funds. For example, the launch of Compound’s COMP token in mid-2020 pioneered yield farming, rewarding users who provided liquidity with governance tokens – an innovative twist that sparked a frenzy of activity. Suddenly, using a platform wasn’t just a service; it was an investment. High APYs drew in yield-seekers, which boosted TVL further. This reward mechanism bootstrapped network growth by directly incentivizing early adopters with lucrative returns.
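The yield math behind those headline numbers is simple compounding. As a minimal sketch (plain Python, no protocol specifics assumed), here is how a nominal APR converts into the effective APY that DeFi dashboards advertise:

```python
def apy_from_apr(apr: float, periods_per_year: int) -> float:
    """Convert a nominal APR into the effective APY under periodic compounding."""
    return (1 + apr / periods_per_year) ** periods_per_year - 1

# A 100% APR compounded daily is roughly a 171% effective APY:
print(round(apy_from_apr(1.0, 365), 2))  # 1.71
```

This gap between quoted APR and realized APY is one reason headline yield figures in DeFi often looked even larger than the underlying rates.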

  • Liquidity: In finance, liquidity is the ease of moving assets without drastic price changes – essential for healthy markets. In DeFi, liquidity was often bootstrapped via liquidity mining programs (users earned tokens for supplying liquidity). Deep liquidity in decentralized exchanges and lending pools meant users could trade or borrow with low friction, improving user experience. High liquidity begets more volume and utility, which in turn attracts more liquidity – a classic positive feedback loop. It also enabled composability: developers could build new products (derivatives, aggregators, etc.) on top of liquid markets, driving innovation. Liquidity, therefore, acted as the network’s lifeblood, fueling both adoption and the emergence of new services.
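The link between pool depth and slippage falls out of the constant-product formula (x·y = k) used by automated market makers. A minimal, fee-free sketch showing the same trade executed against deep versus shallow liquidity:

```python
def swap_out(x_reserve: float, y_reserve: float, dx: float) -> float:
    """Tokens received when swapping dx of X into a constant-product pool (no fees)."""
    k = x_reserve * y_reserve
    return y_reserve - k / (x_reserve + dx)

# Identical 10,000-token trades against pools of different depth:
deep = swap_out(1_000_000, 1_000_000, 10_000)   # ~9,901 out (~1% slippage)
shallow = swap_out(100_000, 100_000, 10_000)    # ~9,091 out (~9% slippage)
```

The deeper pool fills the same order at a far better price, which is exactly why liquidity depth became a first-order signal of market quality.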

Together, these primitives created a powerful incentive flywheel. Participants who added value (by locking assets or providing liquidity) were immediately rewarded (via high yields and token incentives), which encouraged even more participation. This turned individual participation into broad opportunity – users earned profits and governance influence – and those opportunities cultivated network effects as thousands of users joined in. The results were striking: DeFi’s user base and capital base exploded (over 10 million active DeFi users by 2024), and the sector’s value surged nearly 30-fold in a few years. Clearly, aligning incentives at scale – effectively turning users into stakeholders – was the key to DeFi’s exponential rise.

What the AI Economy Lacks Today

If DeFi demonstrated how bottom-up participation and aligned incentives could bootstrap a financial revolution, the AI economy today still lacks the foundational primitives needed to support a similar shift. It is currently dominated by large, generalized models trained on massive, scraped datasets. These foundation models, impressive in scale, aim to solve for everything and often serve no one particularly well. Their one-size-fits-all architecture struggles to adapt to niche domains, cultural nuances, or individual preferences. This results in brittle outputs, blind spots, and a growing sense of misalignment with real-world needs.

The next generation of AI won’t be defined by scale alone; it will be defined by context – by how well models understand and serve specific domains, specialized communities, and diverse human perspectives. But this kind of contextual intelligence requires a different kind of input: high-quality, human-aligned data. And that’s exactly what’s missing today. There is no widely accepted mechanism to measure, identify, value, or prioritize such data. No open process exists for individuals, communities, or domain experts to contribute their perspectives and improve the intelligence systems that increasingly shape their world. As a result, value continues to concentrate in the hands of a few infrastructure providers, while the broader population remains disconnected from the AI economy’s upside. Until we design new primitives that can surface, verify, and reward high-value contributions (data, feedback, alignment signals), we cannot unlock the participatory growth loop that made DeFi thrive.

In summary, we must ask the same questions of AI that DeFi answered for finance:

What do we measure to capture the value being created, and how do we create a self-reinforcing flywheel of adoption that boosts bottom-up participation around individual-centric data?

To unlock an “AI-native economy” analogous to DeFi, we need to define new primitives that can turn participation into opportunity for AI, thereby catalyzing the network effects that have so far eluded this sector.

The AI-Native Stack: New Primitives for a New Economy

We’re no longer just moving tokens between wallets; we’re moving data into models, model outputs into decisions, and AI agents into action. This calls for new metrics and primitives that can quantify intelligence and alignment the way DeFi metrics quantified capital. LazAI, for example, is building a next-generation blockchain network designed to solve the AI data alignment problem by introducing new asset standards for AI data, model behavior, and agent interaction.

Below we outline several key primitives that could define value in an AI-driven on-chain economy:

  • Verifiable Data (the new “Liquidity”): Data is to AI what liquidity is to DeFi – the lifeblood of the system. In AI, especially large-model AI, having the right data is everything. But raw data can be misleading or of poor quality; we need verifiable, high-quality data on-chain. A possible primitive here is “Proof of Data (PoD) / Proof of Data Value (PoDV).” This concept would measure data contributions not just by volume, but by quality and impact on AI performance. Think of it as a counterpart to liquidity mining: contributors who supply useful data (or labels/feedback) earn rewards proportional to the value their data adds. Early designs of such systems are emerging. For example, one blockchain project’s Proof of Data (PoD) consensus treats data as the primary resource for validation, analogous to how proof-of-work treats energy or proof-of-stake treats capital. In that system, nodes are rewarded based on the volume, quality, and relevance of the data they contribute.

Translating this to a general AI economy, we might see “Total Data Value Locked (TDVL)” as a metric: an aggregate measure of all valuable data available to the network, weighted by verifiability and usefulness. Verifiable data pools could even be traded like liquidity pools – e.g., a pool of validated medical images for an on-chain diagnostic AI might have a quantified value and usage rate. Data provenance (knowing where data came from and how it has been modified) would be a critical part of this metric, ensuring that the data feeding AI models is trustworthy and traceable. In essence, if liquidity was about available capital, verifiable data is about available knowledge. A metric like Proof of Data Value (PoDV) could capture how much useful knowledge is secured in the network, and anchoring data on-chain (as implemented through LazAI’s Data Anchoring Tokens, or DATs) makes this possible, establishing data liquidity as a measurable, incentivized economic layer.
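As a thought experiment, a PoDV-style score and a TDVL aggregate might be composed like the sketch below. The scoring function, field names, and weights are assumptions for illustration, not a specification of any live protocol:

```python
from dataclasses import dataclass

@dataclass
class DataContribution:
    # Illustrative fields: scores in [0, 1] assigned by hypothetical validators.
    volume: float          # notional size of the contribution
    quality: float         # validator-assessed quality score
    verifiability: float   # provenance / audit score

def podv_score(c: DataContribution) -> float:
    """Hypothetical Proof of Data Value: volume weighted by quality and provenance."""
    return c.volume * c.quality * c.verifiability

def tdvl(pool: list[DataContribution]) -> float:
    """Hypothetical Total Data Value Locked: sum of PoDV scores across a data pool."""
    return sum(podv_score(c) for c in pool)

pool = [
    DataContribution(volume=100, quality=0.9, verifiability=1.0),  # score 90
    DataContribution(volume=500, quality=0.2, verifiability=0.5),  # score 50
]
```

Note the design choice baked into the multiplication: a small, well-verified, high-quality contribution outscores a five-times-larger dump of low-quality data, which is precisely the incentive a data economy needs.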

  • Model Performance (a new Asset Class): In the AI economy, trained models (or AI services) themselves become assets – one could even call them a new asset class alongside tokens and NFTs. A well-trained AI model is valuable because of the intelligence encapsulated in its weights. But how do we represent and measure that value on-chain? We might need on-chain performance benchmarks or certifications for models. For example, a model’s accuracy on a standard dataset, or its win rate in a competitive task, could be logged to the blockchain as a kind of performance score. Think of it as an on-chain “credit rating” or KPI for AI models, updated as the model is fine-tuned or as data is added. Projects like Oraichain have explored bringing AI model APIs on-chain with reliability scores (using test cases to verify that an AI’s output meets expected results). In an AI-native DeFi (“AiFi”), one could imagine staking on model performance – e.g., if you believe your model is high-performing, you stake tokens; if independent on-chain audits confirm the performance, you earn rewards (and if the model underperforms, you lose your stake). This would align incentives for developers to honestly report and continuously improve models. Another idea is tokenized model NFTs that carry performance metadata – so the “floor price” of a model NFT might reflect its utility. We already see glimpses of this: certain AI marketplaces allow buying and selling model access tokens, and protocols like LayerAI (formerly CryptoGPT) explicitly treat data and AI models as emerging asset classes in a global AI economy. In summary, where DeFi asked “how much money is locked?”, AI-DeFi will ask “how much intelligence is locked?” – not just in terms of compute power (though that too), but in the efficacy and value of models running on the network. New metrics might include “Proof of Model Quality” or an index of on-chain AI performance improvements over time.
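A toy settlement rule for such performance staking could look like the following. The tolerance, reward rate, and slashing formula are purely illustrative assumptions, not any protocol's actual mechanism:

```python
def settle_model_stake(stake: float, claimed: float, audited: float,
                       tolerance: float = 0.01, reward_rate: float = 0.10) -> float:
    """
    Hypothetical settlement for a model-performance stake: if an independent
    on-chain audit confirms the claimed benchmark score (within a tolerance),
    the staker earns a reward; otherwise the stake is slashed in proportion
    to how badly the claim was overstated. Returns the staker's payout.
    """
    if audited + tolerance >= claimed:
        return stake + stake * reward_rate       # claim held up: stake plus reward
    shortfall = min((claimed - audited) / claimed, 1.0)
    return stake * (1 - shortfall)               # overclaim: proportional slash

print(settle_model_stake(100, claimed=0.90, audited=0.91))  # honest claim: 110.0
print(settle_model_stake(100, claimed=0.90, audited=0.45))  # 50% overclaim: 50.0
```

The asymmetry is deliberate: truthful reporting earns a modest yield, while exaggeration loses capital, making honesty the profit-maximizing strategy.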

  • Agent Behavior & Utility (On-Chain AI Agents): One of the most exciting and challenging additions in an AI-native blockchain is the presence of autonomous AI agents operating on-chain. These could be trading bots, data curators, customer service AIs, or complex DAO governors – essentially software entities that sense, decide, and act in the network on behalf of users or even on their own. In the DeFi world, we’ve only had rudimentary “bots”; in the AI blockchain world, agents could become first-class economic actors. This creates the need for metrics around agent behavior, trustworthiness, and utility. We might see something like an “Agent Utility Score” or reputation system. Imagine each AI agent (potentially represented as an NFT or semi-fungible token (SFT) identity) accumulating a reputation based on its actions – completing tasks, cooperating with others, and so on. Such a score would be analogous to a credit score or a user rating, but for AI. It could be used by other contracts to decide whether to trust or utilize an agent’s services. LazAI’s vision introduces the concept of the iDAO (individual-centric DAO), where each agent or user entity has its own on-chain domain with AI assets. One can imagine these iDAOs or agents building a track record that can be measured.

Already, some platforms are tokenizing AI agents and assigning them on-chain metrics: for instance, the Rivalz “Rome” protocol creates NFT-based AI agents (rAgents) that carry up-to-date reputation metrics on-chain. Users can stake or lend these agents, and their rewards depend on the agent’s performance and impact in collective AI “swarms”. This is essentially DeFi for AI agents, and it exemplifies how agent utility metrics will be pivotal. We might soon talk about “active AI agents” similar to active addresses, or “agent economic impact” as a metric akin to transaction volume.

  • Attention Traces could be another primitive here – essentially logging what an agent pays attention to (which data, which signals) during its decision-making. This could make an otherwise black-box agent more transparent and auditable, and allow attribution of an agent’s successes or failures to specific inputs. In sum, metrics for agent behavior will ensure accountability and alignment: if an autonomous agent is to be trusted with value, we need to quantify its reliability. A high Agent Utility Score might become a prerequisite for an AI agent to manage large funds or critical tasks on-chain (just like a high credit score is needed for large loans in traditional finance).
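One plausible shape for an Agent Utility Score is sketched below. The update rule, parameters, and threshold are hypothetical, chosen only to illustrate the credit-score analogy:

```python
def update_agent_score(score: float, success: bool,
                       gain: float = 0.05, penalty: float = 0.20) -> float:
    """
    Hypothetical Agent Utility Score update in [0, 1]: successes nudge the
    score up with diminishing returns near 1.0; failures cut it sharply.
    The asymmetry mirrors credit ratings: trust is slow to earn, quick to lose.
    """
    if success:
        return min(1.0, score + gain * (1.0 - score))
    return max(0.0, score * (1.0 - penalty))

def can_manage_funds(score: float, threshold: float = 0.8) -> bool:
    """Gate high-value tasks behind a minimum reputation, like a credit check."""
    return score >= threshold

score = 0.5
for outcome in [True, True, True, False]:   # three successes, then one failure
    score = update_agent_score(score, outcome)
# One failure wipes out more than three successes gained: the agent ends
# below its 0.5 starting point and stays locked out of fund management.
```

Other contracts could read such a score before delegating capital or critical tasks, exactly as lenders consult credit scores before issuing large loans.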

  • Usage Incentives & AI Alignment Measures: Finally, an AI economy must consider how to incentivize beneficial usage and alignment of AI systems. In DeFi, usage is incentivized via liquidity mining, airdrops for early users, or fee rebates – all aimed at growth. In AI, mere growth in usage is not enough; we want usage that leads to better AI outcomes. This is where metrics tied to AI alignment come in. For example, human feedback loops (like users rating an AI’s responses or providing corrections through iDAOs - more on this later) could be recorded, and contributors of feedback might earn an “alignment yield.” One could envision a “Proof of Attention” or “Proof of Engagement” where users who spend time to improve the AI (by giving preference data, corrections, or novel use cases) are rewarded. The metric could be something like Attention Traces, capturing how much quality feedback or human attention has been invested into refining the AI. 

Just as DeFi needed block explorers and dashboards (e.g. DeFi Pulse, DefiLlama) to track TVL and yields, the AI economy will need new explorers to track these AI-centric metrics – imagine an “AI-llama” dashboard showing Total Aligned Data, Active AI Agents, Cumulative AI Utility Yield, etc. The parallels to DeFi are there, but the content is new.

Towards a DeFi-Like Flywheel for AI

We need to build an incentive flywheel for AI – one that treats data as a first-class economic asset. By doing so, we could turn AI development from a closed endeavor into an open, participatory economy – much as DeFi turned finance into an open playground of user-driven liquidity.

There are early glimmers of this approach. Projects like Vana have started rewarding user participation in data sharing. For example, Vana’s network lets users contribute personal or community data into DataDAOs (decentralized data pools) and earn dataset-specific tokens in return, which can be traded for the network’s native token. This is a significant step toward data monetization for contributors.

However, rewarding contribution behavior alone is not enough to recreate DeFi’s explosive flywheel. In DeFi, liquidity providers weren’t just rewarded for the act of depositing – the assets they provided had transparent market value and the yields reflected real usage (trading fees, borrowing interest, plus incentive tokens). Likewise, an AI data economy must go beyond generic rewards and directly price the value of data. Without pricing data as an economic asset (based on its quality, rarity, or the improvement it brings to models), we risk shallow incentives. Simply handing out tokens for participation can encourage quantity over quality, or plateau if the tokens lack a linkage to real AI utility. To truly unlock innovation, contributors need to see clear, market-driven signals of what their data is worth and be rewarded when that data is actually used in AI systems.

We need an infrastructure that directly values and rewards the data itself, creating a data-centric incentive loop: the more quality data people contribute, the better the models get, which attracts more usage and demand for data, driving up rewards for contributors. This would transform AI from a closed competition for big data into an open marketplace for trusted, high-quality data.

How do these ideas manifest in a real project? Enter LazAI, a project building the next-gen blockchain network and foundational primitives for a decentralized AI economy. 

Introducing LazAI - Aligning AI with Humanity

LazAI is a next-generation blockchain network and protocol designed to solve the AI data alignment problem by introducing new asset standards for AI data, model behavior, and agent interaction. 

LazAI offers one of the most visionary approaches to solving AI misalignment: making data verifiable, incentivized, and programmable on-chain. We’ll use LazAI’s framework to illustrate how an AI-native blockchain can put the above principles into practice.

The Core Problem – Data Misalignment & Lack of Fair Incentives: 

AI alignment often boils down to the quality of the data AI systems are trained on, and the future demands new data that is human-aligned, trustworthy, and governed. As the AI industry shifts from centralized, generalized models to a future of smaller, contextual, and aligned intelligences, the infrastructure must evolve in kind. The next era of AI won’t be defined by size or scale; it will be defined by alignment, precision, and provenance. LazAI zeroes in on this data alignment and incentive challenge and posits a fundamental solution: align the data itself at the source, and reward the data directly. In other words, make the training data verifiably representative of human perspectives, free of noise and bias, and reward it based on its quality, rarity, or the improvement it brings to models. This is a radical shift from patching models to curating the data.

LazAI is not just introducing primitives – it’s proposing a new paradigm for how this data is sourced, priced, and governed. It introduces two core concepts – the Data Anchoring Token (DAT) and the individual-centric DAO (iDAO) – which together enable the pricing, provenance, and programmable usage of data.

Verifiable & Programmable Data – The Data Anchoring Token (DAT): 

To enable this, LazAI introduces a new on-chain primitive called the Data Anchoring Token (DAT), a new token standard that assetizes AI data. Each DAT represents a piece of data anchored on-chain along with its lineage: who contributed it, how it has evolved over time, and where it has been used. This creates a verifiable history for each piece of data – much like version control (think git) for datasets, but secured by blockchain. Because DATs live on-chain, they are programmable: smart contracts can govern their usage. For example, a data contributor could specify that their DAT (say, a set of medical images) can only be used by certain approved AI models or only under certain conditions (enforcing privacy or ethical constraints via code). The incentivization aspect comes in because these DATs can be transacted or staked – if your data is valuable to a model, the model (or its owner) might pay to access the DAT. In essence, LazAI creates a marketplace where data is tokenized and has provenance. This directly addresses the “verifiable data” metric we discussed: one could inspect a DAT and see whether it has been validated, how many models have used it, and what improvement in model performance resulted. Such data gets a higher valuation. By anchoring data on-chain and attaching economic incentives to its quality, LazAI ensures that AI is trained on data we can trust and measure. It solves alignment by aligning incentives – good data is rewarded and rises to the top.
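To make the idea concrete, here is a minimal sketch of a DAT-like record with lineage tracking and programmable access control. The field names, policy logic, and identifiers are assumptions for illustration, not LazAI’s actual token standard:

```python
from dataclasses import dataclass, field

@dataclass
class DataAnchoringTokenSketch:
    # Hypothetical fields: the real DAT standard may differ entirely.
    data_hash: str                                       # anchor: hash of the off-chain data
    contributor: str
    lineage: list[str] = field(default_factory=list)     # prior versions / modifications
    allowed_models: set[str] = field(default_factory=set)
    usage_log: list[str] = field(default_factory=list)

    def authorize(self, model_id: str) -> bool:
        """Programmable usage: only contributor-approved models may consume the data."""
        if model_id not in self.allowed_models:
            return False
        self.usage_log.append(model_id)   # provenance: every use leaves a trail
        return True

# A contributor anchors medical images usable only by one approved diagnostic model:
dat = DataAnchoringTokenSketch(data_hash="f3a1c9", contributor="alice",
                               allowed_models={"med-dx-v1"})
dat.authorize("med-dx-v1")   # approved model: returns True and is logged
dat.authorize("ad-ranker")   # unapproved model: returns False, nothing logged
```

Auditing `usage_log` against recorded model improvements is what would let the market assign a higher valuation to data that demonstrably helps.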

The Individual-centric DAO (iDAO) Framework: 

The second key piece is LazAI’s iDAO concept, which redefines governance in the AI economy by placing individuals, rather than organizations, at the center of decision-making and data ownership. Traditional DAOs often prioritize collective organizational goals, inadvertently minimizing individual will. iDAOs flip that logic. They serve as personalized governance units that allow individuals, communities, or domain-specific entities to directly own, control, and validate the data and models they contribute to AI systems. iDAOs enable customized, aligned AI: they are governance frameworks that keep models on course with the values or intents of their contributors. From an economic perspective, iDAOs also make AI behavior programmable by the community – they can set rules for how the model uses certain data, who can access the model, and how rewards from the model’s outputs are shared. For instance, an iDAO could specify that whenever its AI model is used (say, an API call or a successful task completion), a portion of the revenue goes back to the DAT holders whose data contributed to that success. This creates a direct feedback loop between agent behavior and contributor reward – analogous to how liquidity provider rewards in DeFi are tied to platform usage. Furthermore, iDAOs can interact with each other composably: one AI agent (iDAO) can leverage data or models from another under agreed terms.
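A toy payout rule sketching that revenue feedback loop might look as follows. The revenue split, weights, and names are hypothetical illustrations, not LazAI’s actual mechanism:

```python
def distribute_revenue(revenue: float, contributor_share: float,
                       dat_weights: dict[str, float]) -> dict[str, float]:
    """
    Hypothetical iDAO payout rule: a fixed share of each model invocation's
    revenue flows back to DAT holders, pro rata to their data's contribution
    weight (e.g. a PoDV-style score). Returns holder -> payout.
    """
    pool = revenue * contributor_share
    total = sum(dat_weights.values())
    return {holder: pool * weight / total for holder, weight in dat_weights.items()}

# A $100 API call with 30% routed back to two data contributors,
# weighted 90/10 by their data's assessed contribution:
payouts = distribute_revenue(100.0, 0.30, {"alice": 90.0, "bob": 10.0})
# alice receives 27.0, bob receives 3.0
```

Because payouts track actual model usage rather than the mere act of depositing data, this mirrors how DeFi liquidity provider fees scale with trading volume.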

Building Trust in AI - Verified Computing Framework: 

A crucial part of this entire flow is LazAI’s Verified Computing Framework - the layer that brings trust to life. It ensures that every DAT generated, every iDAO decision, and every incentive distributed has a verifiable trail. It’s what makes data ownership enforceable, governance accountable, and agent behavior auditable. Verified Computing transforms iDAOs and DATs from powerful ideas into reliable, provable systems - anchoring trust not in assumptions, but in verification.

By establishing these primitives, LazAI’s framework turns the vision of a decentralized AI economy into something tangible. Data becomes an asset users can own and profit from, models become collaborative ventures rather than proprietary silos, and every participant – from the individual who curates a unique dataset to the developer who builds a small specialized model – can become a stakeholder in the AI value chain. This incentive alignment is poised to create the same kind of momentum we saw in DeFi: when people realize that participating in AI (by contributing data or expertise) directly translates into opportunity, they are more likely to do it enthusiastically. And as more do so, the network effect kicks in – more data leads to better AI models, which attract more users, which generate more data and demand, and so on.

Conclusion: Towards an Open AI Economy

The story of DeFi taught us that the right primitives can unlock unprecedented growth. In the coming AI-native economy, we are on the cusp of a similar breakthrough. By defining and implementing new primitives that value data and alignment, we can transform AI development from a centralized effort into a decentralized, community-driven endeavor. The journey will not be without challenges: we must get the economics right (so that quality trumps quantity), and navigate ethical pitfalls to ensure that incentivizing data doesn’t compromise privacy or fairness. But the direction is clear. Efforts like LazAI’s DAT and iDAO are pioneering this path, turning abstract ideas of “human aligned AI” into concrete mechanisms for ownership and governance.

Much as early DeFi was experimental in refining TVL, yield farming, and governance, the AI economy will iterate on its new primitives. We should expect debates and innovation around how to best measure data value, how to fairly distribute rewards, and how to keep AI agents aligned and beneficial. This article scratches the surface of an incentive model that could democratize AI. The hope is to spark an open discussion and further research into these concepts. How else might we design primitives for an AI-native economy? What unforeseen consequences or opportunities might arise? By engaging a broad community in answering these questions, we increase the chances of building an AI future that is not only technologically advanced, but also economically inclusive and aligned with human values.

Let’s build this future together – one verifiable dataset, one aligned AI agent, and one new primitive at a time.
