
The Two Layers of Decentralized AI: Agent Operations (8004) and Asset Economics (8028)
Why Two Standards? The Foundation of Decentralized AI
Ethereum Improvement Proposals 8004 and 8028 target different, complementary layers of the emerging decentralized AI (dAI) stack. This report does not argue which is better; rather, it examines each standard's objectives, strengths, and weaknesses, and how the two work together to enable a blockchain-based AI economy.
Part 1: EIP-8004 - Establishing Trust in Autonomous Agents
What Makes an AI Agent Trustworthy?
EIP-8004 defines a framework for trustless AI agents on Ethereum. Its goal is to provide a minimal on-chain trust and identity layer for autonomous agents such as AI services, bots, or programs. In practice, EIP-8004 establishes three registries on-chain:
Three Pillars of On-Chain Trust
Identity Registry: Creating Verifiable AI Agents
Every agent can register a unique on-chain identity, implemented as an ERC-721 token representing the agent. The token's metadata or associated URI describes the agent's details including its endpoint, capabilities, and public keys. This creates an open, unlimited registry of AI agents where each agent has a verifiable ID.
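The registration flow above can be sketched in a few lines. This is a minimal Python model of an EIP-8004-style identity registry, not the actual contract interface; the names (`AgentRegistry`, `register`, `AgentProfile`) and the sequential ID scheme are illustrative stand-ins for ERC-721 minting.

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    owner: str          # controlling address
    metadata_uri: str   # off-chain JSON describing endpoint, capabilities, public keys

class AgentRegistry:
    """Toy model of the identity registry: each agent gets a unique ID."""
    def __init__(self):
        self._next_id = 1
        self.agents: dict[int, AgentProfile] = {}

    def register(self, owner: str, metadata_uri: str) -> int:
        # Analogous to minting an ERC-721 token representing the agent.
        agent_id = self._next_id
        self._next_id += 1
        self.agents[agent_id] = AgentProfile(owner, metadata_uri)
        return agent_id

registry = AgentRegistry()
agent_id = registry.register("0xAgentOwner", "ipfs://Qm.../agent.json")
print(agent_id, registry.agents[agent_id].metadata_uri)
```

Anyone can then look up an agent's profile by ID, which is what makes the registry openly discoverable.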
Reputation Registry: Building Transparent Performance Records
After interactions, agents or their clients can record feedback scores and logs on-chain. EIP-8004 standardizes a feedback event containing a numeric rating, context tags, and a URI plus hash of a detailed off-chain report. To prevent spam or fake reviews, the protocol uses pre-authorized feedback: the agent signs a permit for the client to post feedback, so only genuine interactions produce on-chain ratings. These feedback entries serve as a reputation system visible to all.
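The pre-authorized feedback pattern can be sketched as follows. This is an illustrative Python model, assuming the agent issues a per-client permit that is checked before feedback is accepted; HMAC stands in for the on-chain ECDSA signature verification, and all names are hypothetical.

```python
import hashlib
import hmac

AGENT_KEY = b"agent-secret"  # stands in for the agent's signing key

def issue_permit(agent_id: int, client: str) -> str:
    """Agent signs a permit authorizing one client to post feedback."""
    msg = f"{agent_id}:{client}".encode()
    return hmac.new(AGENT_KEY, msg, hashlib.sha256).hexdigest()

feedback_log: list[dict] = []

def post_feedback(agent_id: int, client: str, permit: str,
                  score: int, report_uri: str) -> None:
    # Reject feedback that the agent never authorized (anti-spam check).
    expected = issue_permit(agent_id, client)
    if not hmac.compare_digest(permit, expected):
        raise PermissionError("feedback not pre-authorized by agent")
    feedback_log.append({"agent": agent_id, "client": client,
                         "score": score, "uri": report_uri})

permit = issue_permit(1, "0xClient")
post_feedback(1, "0xClient", permit, 5, "ipfs://Qm.../report.json")
print(len(feedback_log))  # 1
```

A client without a valid permit cannot inject a rating, which is the property the standard relies on to keep the reputation trail genuine.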
Validation Registry: Independent Verification at Scale
Think of each validation request like a GitHub commit: one specific piece of work, linked to supporting details and pinned by a unique fingerprint. Independent validators can check that exact request and publish an approval or warning with evidence. An agent can ask multiple validators to review the work, creating a public trail of what passed and what got flagged.
Economic Flexibility: Leaving Room for Innovation
Notably, EIP-8004 does not prescribe the incentive or slashing mechanisms for validators. It leaves economic design open, so different networks can build atop this standard with their own reward and slash schemes.
The Complete Lifecycle: From Registration to Trust
These three registries work in tandem as a composable trust layer. A typical lifecycle under EIP-8004 might be: an AI agent registers and gains an ID and discoverable profile; a user finds the agent via its on-chain profile and requests a service; after performing the service off-chain, the user posts feedback on-chain, perhaps a 5-star rating with a log URI; and optionally a validator re-runs the task or checks a proof and posts a validation result. All these signals—identity info, reputation scores, validation outcomes—are then queryable on-chain, allowing future users or even smart contracts to assess an agent's trustworthiness before engaging.
Why does EIP-8004 matter?
Decentralized Identity: Freedom from Central Control
Provides a unified identity standard for AI agents. Any AI service can mint an identity token, making it easy to index and discover agents without a central registry.
Immutable Trust Records: Transparency You Can Verify
Key interaction metrics including performance feedback and validation are recorded as on-chain events. This creates an immutable, transparent reputation trail. Participants can trust the data—for example, a high reputation score is verifiably backed by past events—and smart contracts can even automate decisions such as "only use agents with >90% success validation."
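A rule like "only use agents with >90% success validation" can be evaluated mechanically over the event history. The sketch below models this in Python with an illustrative event shape; on-chain the same check would read validation events from the registry.

```python
# Hypothetical validation events read from the chain (shape is illustrative).
validation_events = [
    {"agent": 1, "passed": True},
    {"agent": 1, "passed": True},
    {"agent": 1, "passed": False},
    {"agent": 2, "passed": True},
]

def success_rate(agent_id: int) -> float:
    """Fraction of validations this agent passed (0.0 if none recorded)."""
    results = [e["passed"] for e in validation_events if e["agent"] == agent_id]
    return sum(results) / len(results) if results else 0.0

def eligible(agent_id: int, threshold: float = 0.9) -> bool:
    # The automated gate: engage only agents above the validation threshold.
    return success_rate(agent_id) > threshold

print(eligible(1), eligible(2))  # False True
```

Because the underlying events are verifiable, any contract or client applying this filter reaches the same answer.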
Minimal Yet Powerful: Built for Interoperability
EIP-8004 is deliberately minimal in scope. It doesn't dictate how agents communicate or how validators are rewarded, focusing purely on identity and trust data. This makes it truly composable: it can work alongside any off-chain agent communication protocol like Agent-to-Agent or HTTP APIs, and any incentive layer. For example, it complements protocols like A2A or Model Commons by handling who the agent is and why to trust it, while those protocols handle how messages are sent.
Smart Data Storage: Efficiency Without Sacrificing Security
The standard smartly uses URIs with content hashes to keep bulky data off-chain but verifiable. Detailed logs or proofs reside in decentralized storage like IPFS, Filecoin, or Arweave, with only their hash recorded on Ethereum. This balances transparency with scalability—the chain isn't overloaded with large AI logs, yet they are securely linked.
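The URI-plus-hash pattern is simple to illustrate: only a digest lives on-chain, and any reader can re-hash the fetched document to confirm it matches. The sketch below uses SHA-256 and an invented log payload purely for demonstration.

```python
import hashlib

# A detailed off-chain report (would live on IPFS/Filecoin/Arweave).
offchain_log = b'{"task": "summarize", "result": "ok", "latency_ms": 412}'

# Only this digest (plus the URI) is stored on Ethereum.
onchain_hash = hashlib.sha256(offchain_log).hexdigest()

def verify(fetched: bytes, expected_hash: str) -> bool:
    """Re-hash the fetched document and compare to the on-chain anchor."""
    return hashlib.sha256(fetched).hexdigest() == expected_hash

print(verify(offchain_log, onchain_hash))  # True
print(verify(b"tampered log", onchain_hash))  # False
```

Any tampering with the stored log breaks the hash check, so the bulky data stays off-chain without weakening auditability.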
Flexibility Across Industries: One Standard, Many Models
By not enforcing a single reputation algorithm or validator system, it allows different domains to plug in their own trust models. For instance, a medical AI agent network might require validators to stake tokens and get slashed for bad reports, whereas a gaming AI network might use free community voting. Both can use EIP-8004's registry to post results in a standard format. This flexibility increases adoption potential across industries.
The Practical Challenges Ahead
While EIP-8004 provides powerful trust infrastructure, it relies on external incentive mechanisms to reward validators or weight feedback scores. Additionally, while agent IDs are on-chain, actual services and validations occur off-chain, creating risks of false claims if validators collude—though the standard exposes evidence for audit.
For developers, integrating with EIP-8004 requires handling NFTs, signing messages, and hosting metadata files, which is more complex than centralized APIs. This may slow adoption unless tooling abstracts these details.
Despite these considerations, EIP-8004 is foundational for decentralized AI. It addresses "How can we trust autonomous agents on-chain?" by creating a shared data layer for identity and performance. By anchoring trust signals on Ethereum, it transforms private data into a public good that any dApp or user can benefit from.
Part 2: EIP-8028 - Tokenizing AI Assets for a Fair Economy
From Data to Value: The Economics of AI Assets
EIP-8028 proposes the Data-Anchored Token (DAT) as a native asset standard for AI assets on Ethereum. In essence, DAT is designed to tokenize AI assets in a way that encodes usage rights and revenue sharing on-chain. It originates from the LazAI Network, which introduced DAT as a protocol mechanism and is now generalizing it through the EIP.
Three Dimensions of AI Asset Tokenization
A Data-Anchored Token can be thought of as a special SFT (semi-fungible token) with three key properties bound together:
Class: Defining and Verifying AI Assets
DAT introduces a notion of "class ID" which represents an abstract AI asset or resource. For example, a class could be a specific dataset, an ML model, an AI API endpoint, or even an AI agent profile. The class carries metadata about the asset and an integrity hash or proof to verify the asset's contents or provenance. All DAT tokens belonging to the same class relate to the same underlying asset. This is comparable to ERC-1155's idea of a token ID representing a class of item, but with added AI-specific metadata like a model's hash or a dataset's IPFS CID.
Quota: Measuring and Metering AI Usage
Each DAT token has an associated consumable quota value. This could be expressed in domain-specific units decided by the asset's class rules—for example, "number of API calls remaining," "compute hours remaining," or "number of inference runs allowed." The quota represents how much usage of the AI asset the token entitles the holder to. Consuming the asset—such as an AI model performing inference or a dataset being queried—will decrement this quota on-chain. This essentially treats usage as the core utility of the token, aligning the token's value with actual workload delivered.
Share: Automating Fair Revenue Distribution
Each DAT token also encodes a share ratio or percentage that determines how much of the revenue generated by the underlying asset accrues to that token holder. For example, if a model earns fees from users, a token's share might entitle its owner to 5% of those fees. All tokens of a class collectively define the distribution of revenue for that asset. This is a standardized way to do revenue sharing or royalty distribution at the asset level. Importantly, EIP-8028 includes methods and events for settling revenue in ETH or ERC-20 to token holders according to their share proportions.
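The interplay of class, quota, and share can be sketched in one small model. This is an illustrative Python sketch, not the EIP-8028 ABI: the field names (`quota`, `share_bps`), the basis-point convention, and the integer-division settlement are assumptions made for the example.

```python
class DATClass:
    """Toy model of one DAT class: an asset anchor plus per-holder quota and share."""
    def __init__(self, asset_hash: str):
        self.asset_hash = asset_hash          # integrity anchor for the underlying asset
        self.quota: dict[str, int] = {}       # holder -> remaining usage units
        self.share_bps: dict[str, int] = {}   # holder -> revenue share in basis points

    def mint(self, holder: str, quota: int, share_bps: int) -> None:
        self.quota[holder] = self.quota.get(holder, 0) + quota
        self.share_bps[holder] = self.share_bps.get(holder, 0) + share_bps

    def consume(self, holder: str, units: int) -> None:
        # Meter usage: each inference/query decrements the holder's quota.
        if self.quota.get(holder, 0) < units:
            raise ValueError("quota exhausted")
        self.quota[holder] -= units

    def settle(self, revenue_wei: int) -> dict[str, int]:
        # Split revenue pro-rata by share (10_000 bps = 100%).
        return {h: revenue_wei * bps // 10_000 for h, bps in self.share_bps.items()}

dat = DATClass(asset_hash="0xabc...")
dat.mint("data_owner", quota=0, share_bps=6_000)  # 60% of revenue
dat.mint("model_dev", quota=0, share_bps=4_000)   # 40% of revenue
dat.mint("consumer", quota=100, share_bps=0)      # pure usage rights

dat.consume("consumer", 3)
print(dat.quota["consumer"])  # 97
print(dat.settle(1_000_000))
```

Note how the same token format lets one holder carry pure usage rights while others carry pure revenue rights, which is what enables multi-party splits without a custom contract per project.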
Building on Solid Foundations: Standards That Work Together
DAT aims for backwards compatibility and composability by building on existing standards where possible. It is semi-fungible: within each class, tokens are fungible by value like ERC-20, but different classes are distinct like ERC-721. The implementation can utilize ERC-1155 or ERC-3525 under the hood, with classes analogous to IDs or slots. It also incorporates standards for metadata integrity to verify the asset's hash, and possibly interfaces for role management for access control. By referencing these existing standards, DAT avoids reinventing the wheel for concerns like permissions or subscription expiry, instead focusing on the AI-centric pieces such as quota usage and on-chain metering of service.
Why EIP-8028 Matters for AI Economics
Real Assets, Real Value: AI Resources as Economic Goods
DAT squarely focuses on AI assets as first-class tokens, rather than generic utility tokens. Traditional AI project tokens often just serve as payment or governance instruments detached from the actual AI outputs. In contrast, a DAT token inherently represents a stake or entitlement in a particular dataset or model, which is a more concrete link between token and asset value. This asset-centric design means if you hold a DAT, you hold usage power and profit rights to a defined AI resource, making it much more directly tied to AI activity than a general "AI protocol token."
Earning Through Usage: Aligning Incentives with Reality
By making usage quota the core variable, DAT ensures that the economics revolve around actual AI workload. Contributors like data providers or model trainers earn based on how much their asset is used, not just on speculative token trading. This is a strength for sustainability: it aligns incentives such that a valuable model is one that is heavily utilized and thus yields more revenue to share. It provides on-chain usage data via standardized events that can feed into analytics or governance. For example, one could prove how many queries a model served in a month by reading the quota consumption events.
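Proving usage from standardized events is a simple aggregation. The sketch below assumes a hypothetical event shape (`class_id`, `units`, `month`); in practice these would be quota-consumption events emitted by the DAT contract and indexed off-chain.

```python
# Hypothetical quota-consumption events emitted by a DAT contract.
usage_events = [
    {"class_id": 7, "units": 3, "month": "2025-01"},
    {"class_id": 7, "units": 5, "month": "2025-01"},
    {"class_id": 7, "units": 2, "month": "2025-02"},
    {"class_id": 9, "units": 8, "month": "2025-01"},
]

def monthly_usage(class_id: int, month: str) -> int:
    """Total units an asset served in a given month, provable from events."""
    return sum(e["units"] for e in usage_events
               if e["class_id"] == class_id and e["month"] == month)

print(monthly_usage(7, "2025-01"))  # 8
```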
Transparent Distribution: Automating Fair Payouts
DAT bakes in a universal revenue distribution mechanism. Instead of each AI project writing a custom profit-sharing contract, EIP-8028 tokens would have a standard way to specify and pay out shares of revenue. This increases transparency and composability: revenue streams can be audited on-chain and even integrated into DeFi. For instance, a DAT could be used as collateral if its revenue flow is predictable, or one could build an index fund of top-earning AI assets. It also simplifies multi-party collaboration: data owners, model developers, and compute providers could all hold DATs of a class and automatically receive splits when the service is used, without trusting a centralized accountant.
Verified Authenticity: Proving What You Own
The "anchoring" in Data-Anchored Token implies strong provenance tracking. Each class carries an integrity hash such as a hash of the training data or model weights. Additionally, DAT could incorporate links to verification proofs in synergy with standards like EIP-7007. This is crucial for AI, where verifying origin and authenticity of models and outputs is essential. For instance, a model DAT might include a TEE attestation that the model was computed on certain secure hardware, or a zkSNARK proving the accuracy of the model's training. By having a slot for such proofs in a standard way, EIP-8028 encourages verifiable AI artifacts, making it harder to maliciously swap out a model or tamper with data unnoticed.
Ecosystem Interoperability: One Token Format, Infinite Possibilities
LazAI's push to make DAT an open standard means the concept can be audited and reused ecosystem-wide, rather than being a proprietary mechanism. This open approach builds trust—the standard can be vetted by the Ethereum community—and encourages interoperability. For example, multiple AI marketplaces or compute platforms could all support DAT tokens. A model from Platform A could be consumed by an agent on Platform B if both understand DAT usage accounting. This interoperability is akin to how ERC-20 enabled exchanges and wallets to all interact with fungible tokens. Here, EIP-8028 could allow various AI services to interoperate using the same "AI asset token" format.
Navigating Implementation Challenges
DAT is a complex standard combining NFT, credit, and royalty features. Implementation introduces smart contract complexity, higher gas costs, and state management challenges for quotas and token holder shares. Some question whether DAT is necessary given existing standards could achieve similar functionality through combination.
While DAT meters usage and automates payments, it relies on off-chain enforcement to prevent unauthorized use beyond quotas. For example, open-sourced model weights could be queried off-chain despite on-chain quota depletion. DAT works best paired with access controls like paid APIs or secure enclaves. This is a practical limitation rather than a weakness of the standard itself—tokenization requires the asset to be enclosed or controllable.
Quotas and pay-per-use models may face user experience challenges. Unlike traditional software with invisible terms, DAT makes usage costs visible, potentially deterring casual users. Service providers might worry that transparent accounting limits pricing flexibility.
Despite these considerations, EIP-8028's vision is compelling: it treats data and models as quantifiable on-chain economic goods with built-in metering and revenue logic. This addresses the critique that most "AI tokens" are superficial add-ons. DAT enables new models like fractionalizing model ownership among token holders who earn when the model is used—similar to music royalties. It aims to be for AI assets what ERC-20 was for fungible assets: a shared standard for expressing ownership and rights.
Part 3: The Synergy - How Two Standards Create One Ecosystem
The Missing Pieces: What Each Standard Provides
EIP-8004 and EIP-8028 address two distinct pillars of a decentralized AI ecosystem:
EIP-8004 = Agents (Who and How to Trust)
Provides identity, reputation, and validation for AI service providers. In simple terms, it answers "Which AI agent should I choose or trust to perform this task?" by making their track record transparent.
EIP-8028 = Assets (What and How to Use and Monetize)
Provides a tokenized representation of the AI resources themselves including usage rights and profit-sharing. It answers "How is this AI asset accessed and who benefits?" by encoding who can use it and how value flows back.
These two standards are highly complementary rather than competitive. In fact, Ethereum's own decentralized AI initiative underlines the need for both: "Make Ethereum the preferred settlement and coordination layer for AI, which requires agent and payment standards and asset standards." Vitalik Buterin and others have emphasized that for Ethereum to support AI, it must handle verification, provenance, and value-sharing for AI systems. EIP-8004 tackles verification and trust through provenance of agent behavior, while EIP-8028 tackles value-sharing and provenance of data and models.
When Standards Work Together: Real-World Integration
An autonomous agent registered via EIP-8004 might consume an AI asset that is tokenized as a DAT via EIP-8028. For example, a generative art AI agent could use a dataset token for training data. The agent's on-chain identity could even hold DAT tokens in its own 6551 smart wallet. When it uses up quota, it would emit DAT usage events reducing the token's quota and perhaps owe payment. Thanks to DAT's standard, the payment can automatically route to the dataset's contributors. Meanwhile, the agent can log feedback about the dataset's quality or the model's output, which becomes part of its reputation. Thus, 8004 and 8028 enable an on-chain supply chain: data token → agent uses it → output validated → revenue split back to data token holders.
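The supply-chain flow above (data token → agent uses it → revenue split → feedback) can be condensed into one sketch. Everything here is illustrative: the holders, the 70/30 split, the fee, and the feedback score are invented numbers, and simple dicts stand in for the 8004 and 8028 contracts.

```python
quota = {"agent": 10}                # agent's remaining uses of the dataset DAT (8028)
shares = {"alice": 0.7, "bob": 0.3}  # dataset contributors' revenue split (8028)
reputation: list[int] = []           # agent's 8004 feedback scores

def run_task(fee: float) -> dict[str, float]:
    """One task: meter DAT usage, route revenue, and record feedback."""
    quota["agent"] -= 1                                 # 8028: consume one unit of quota
    payouts = {h: fee * s for h, s in shares.items()}   # 8028: pro-rata revenue routing
    reputation.append(5)                                # 8004: client posts feedback
    return payouts

print(run_task(fee=1.0))
print(quota["agent"], reputation)
```

Even in this toy form, the point is visible: usage, payment, and reputation update in the same transaction-like step, with no centralized accountant in the loop.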
EIP-8004 provides trust signals about agents; EIP-8028 provides financial signals about assets. Together, they allow a more holistic picture. For instance, if an AI model is available via DAT, a client might choose among multiple agents to query that model. Using 8004 data, the client picks the agent with the best reputation to get reliable results. That agent, by using the model, spends DAT quota—generating revenue that flows to the model's token holders, possibly including the model's creator and the agent if it also holds some share tokens. In this way, an agent with a good reputation could invest in DAT tokens of the best models, aligning it to use those models because it will earn a share back. The standards thus encourage a scenario where "good agents gravitate to good assets." Trustless agents ensure quality service on good data and models, and data tokens ensure those who improve or heavily use an asset have skin in the game.
With both standards, we can trace both process and product. EIP-8004 logs who did what and how well; EIP-8028 logs what was used and how value was exchanged. If a dispute arises—say an agent produced a faulty result—we can check the agent's 8004 validation records to see if a validator flagged the result as wrong, and also check the DAT records to see if the agent exceeded the usage it paid for or if the underlying model had known limitations. Together they contribute to a transparent, end-to-end audit trail for AI transactions, from inputs like data and models, to agent actions, to outputs and payments.
Projects building decentralized AI services can mix and match these standards. For example, a decentralized AI marketplace might use EIP-8004 to register all participating AI APIs and require them to accept DAT tokens for payment per call. The marketplace's smart contract could automatically update the agent's reputation via 8004 feedback events after each call and trigger DAT consumption for each asset used in the call. Because 8004 and 8028 are standardized, the marketplace doesn't need to write custom logic for each new AI model or agent. As long as they conform to these ERCs, the interactions are uniform. This reduces friction in composing complex workflows like an agent that chains multiple models, where each model could be a DAT and the chain's coordinator contract just follows the standard usage event format to charge quotas and route rewards accordingly.
Part 4: Seeing It in Action - The Question-Answering Service
A Real Scenario: How Trust and Tokenized Usage Work Together
Imagine a decentralized Q&A service where multiple AI agents compete to answer questions using a shared knowledge base.
Two standards work together:
1. EIP-8004 (“ID + track record”) enables agent registration and a public history of user feedback and third-party verification results.
2. EIP-8028 (DAT, “usage credits + revenue split”) packages the shared knowledge base as a tokenized asset with metered usage and programmable revenue sharing.
Setting the Stage
Agent A and Agent B each maintain a public profile and prior performance history. Agent A accumulates stronger feedback scores and more confirmed verification reports over time, so the application ranks Agent A higher. The knowledge base exists as a DAT: ownership is distributed among token holders, and each use consumes a small amount of usage credit, similar to drawing down a prepaid meter.
The Moment of Truth
A user posts a question with a bounty. Both agents generate answers, and the application selects Agent A’s answer because Agent A’s public record appears stronger. Payment splits automatically: one portion flows to DAT token holders because the knowledge base was used, and another portion flows to Agent A as a usage fee for completing the task. The system deducts the required usage credit for the request. After delivery, the user can submit feedback to Agent A’s profile, and an independent reviewer can publish a pass/fail verification result with supporting evidence, allowing later users to see whether the answer was confirmed or flagged.
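The bounty split in this scenario is simple arithmetic. The percentages below (20% to DAT holders for knowledge-base usage, 80% to Agent A as a service fee) are invented for illustration; the actual ratios would be encoded in the DAT class and the marketplace's terms.

```python
bounty = 10.0  # bounty amount in some payment token (illustrative)

dat_holder_cut = bounty * 0.20  # portion to DAT holders: knowledge base was used
agent_fee      = bounty * 0.80  # portion to Agent A: it completed the task

print(dat_holder_cut, agent_fee)  # 2.0 8.0
```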
The Result: Everyone Wins
DAT holders earned revenue from data usage. Agent A earned reputation and a reward. The user got a trustworthy answer. All details—agent, data, value distribution—are on-chain for auditing.
This example shows both standards working together: reputation data from 8004 guided the user's choice, while 8028 handled trustless value exchange. Agents compete on verifiable performance, and assets generate trackable revenue.
Without EIP-8004, users couldn't assess agent reliability. Without EIP-8028, revenue would require centralized escrow, blocking multi-party collaboration. Together, they form the "brain" (intelligent decision-making with trust) and "bloodstream" (value flow with incentives) of a decentralized AI ecosystem.
The Path Forward: A Complete AI Economy on Ethereum
EIP-8004 and EIP-8028 are complementary cornerstones of Ethereum's dAI strategy. EIP-8004 ensures reliable interactions through trust and verification of AI agents. EIP-8028 tokenizes AI assets with usage and profit mechanisms, rewarding contributions and accounting for usage.
Each standard covers what the other omits. Together they enable a secure, fair, and efficient AI economy: 8004 provides transparency and accountability for users to trust services, while 8028 aligns incentives and ensures fair compensation for creators. Their limitations complement each other—8004's lack of payment mechanisms is answered by DAT's revenue system, while DAT's need for off-chain enforcement is mitigated by 8004's reputation-based accountability.
Ethereum is becoming the settlement and coordination layer for AI. EIP-8004 provides coordination primitives (agent IDs, reputations, validations), while EIP-8028 provides settlement primitives (usage metering, payment distribution). Both are essential for a self-sustaining, decentralized AI ecosystem.
Rather than choosing between them, the community should view them as complementary standards addressing different layers of the stack, much as TCP and HTTP address different layers of the web. One handles agent trust, the other asset value. Together, they form a robust foundation for building and scaling truly decentralized AI services.
Curious how EIP-8028 turns data into on-chain, ownable AI assets? Dive into the EIP, the Magicians thread, and the LazAI documentation below:
EIP PR: https://github.com/ethereum/ERCs/pull/1219
Ethereum Magicians thread: https://ethereum-magicians.org/t/erc-8028-data-anchoring-token-dat/25512
LazAI Docs: https://docs.lazai.network/developer-docs
