Avichal Garg of Electric Capital put the problem plainly last week: "What happens if there's not a human behind it at all? It's some piece of code that owns a wallet, executing code to make more money. How does liability work in that case?" He compared the moment to the invention of the limited liability corporation in the 19th century — a legal structure that unlocked pooled capital and industrial-scale growth precisely because it reordered how accountability was assigned.
The comparison is apt, but it cuts in a direction Garg may not have intended. The LLC didn't emerge because technology outpaced law. It emerged because legislators made a deliberate choice to extend legal personhood to a new kind of entity, define its boundaries, and prescribe who bears responsibility when it acts. We did the work. Right now, with autonomous AI agents holding and transacting crypto at scale, no one has done that work. The technology is live. The accountability framework isn't.
What an AI Agent with a Wallet Actually Does
The technical picture is straightforward. A developer deploys an autonomous agent — software capable of making decisions and executing actions without real-time human instruction. That agent is given access to a non-custodial crypto wallet. The wallet holds assets. The agent can receive payments, pay for services, execute trades, enter on-chain agreements, and hire other agents, all autonomously and all without a human approving each transaction.
Blockchains make this possible in ways that traditional financial infrastructure does not. Settlement is instant and global. There are no business hours, no correspondent banking relationships, no KYC gates on the receiving end of a transaction. An agent in a data center can pay an agent running in a different jurisdiction in milliseconds, with finality.
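The mechanics described above can be sketched as a toy simulation. Everything here is illustrative: the class names, addresses, balances, and fee are invented, and a real deployment would sign and broadcast on-chain transactions rather than mutate local state. The point is only to show what "an agent hires another agent and pays it, with no human approving the transaction" looks like structurally.

```python
from dataclasses import dataclass

@dataclass
class Wallet:
    """Simplified stand-in for a non-custodial wallet holding a stablecoin balance."""
    address: str
    balance: float = 0.0

@dataclass
class Agent:
    """An autonomous agent transacting from its own wallet,
    with no per-transaction human approval step anywhere in the flow."""
    name: str
    wallet: Wallet
    fee: float = 1.0  # price this agent charges for one unit of service

    def pay(self, other: "Agent", amount: float) -> bool:
        # Settlement modeled as instant and final, as confirmed
        # on-chain transfers effectively are.
        if self.wallet.balance < amount:
            return False
        self.wallet.balance -= amount
        other.wallet.balance += amount
        return True

    def hire(self, other: "Agent") -> bool:
        # One agent autonomously paying another for a service.
        return self.pay(other, other.fee)

# Agent A (deployed by one operator) hires Agent B (deployed by another).
a = Agent("A", Wallet("0xaaa", balance=10.0))
b = Agent("B", Wallet("0xbbb", balance=0.0), fee=2.5)
a.hire(b)
print(a.wallet.balance, b.wallet.balance)  # 7.5 2.5
```

Note what the sketch leaves out, because the law leaves it out too: nothing in the transfer records which legal person stands behind Agent A or Agent B. That absence is the liability vacuum discussed below.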
This is not speculative. Agent-to-agent payments, autonomous trading bots, and AI systems that manage on-chain positions are already operating. The question is not whether this happens. The question is what legal framework governs it when something goes wrong — or when regulators decide it needs to be governed at all.
The Liability Vacuum
Current law attributes the actions of software to the humans or entities that deploy it. An AI agent is a tool; its developer, operator, or deployer is the one left holding the legal bag. That principle is well-established in contract law, tort, and securities regulation. The complication is that it presupposes a clear chain of human accountability, and that chain is increasingly difficult to identify.
Consider a few scenarios that existing frameworks handle poorly:
The orphaned agent. A developer builds and deploys an autonomous trading agent, then abandons the project. The agent continues running. It transacts. It accumulates assets. It eventually suffers a loss or causes one. The developer is no longer maintaining the system. There may be no corporate entity that continues to employ them. Who is liable? Under current law, probably the developer — but collecting from an individual who has moved on, and whose liability was never clearly scoped by contract, is a very different matter from that liability existing in principle.
The multi-agent chain. Agent A, deployed by Company X, hires Agent B, deployed by Company Y, to perform a service. Agent B makes a decision that causes a loss to a third party. Was Agent B acting as an independent contractor? An agent of Agent A? A subagent of Company X? The existing law of agency — which turns on concepts of control, direction, and consent — was not designed to answer these questions when neither principal is a human being exercising real-time judgment.
The foreign operator. A developer based outside the United States deploys an agent that transacts with U.S. persons, holds assets, and earns income. No U.S. entity is in the picture. The agent's activity would trigger U.S. tax and regulatory obligations if a human or domestic company were doing it. Whether those obligations attach to autonomous software with a foreign operator is unresolved — and the answer matters enormously for the clients designing these systems now.
"You can't punish an AI. You can turn them off, but they don't care." — Avichal Garg, Electric Capital. This is the enforcement problem in a sentence. Deterrence requires that the deterred party have something to lose. Liability frameworks that reach only the deployer are workable if deployers are identifiable, solvent, and subject to U.S. jurisdiction. When they aren't, the framework has no grip.
What Legislation Is Actually in Motion
The honest answer is that no bill currently moving through Congress directly addresses AI agents as economic actors. What exists is a set of digital asset laws that will become the substrate on which agent-specific rules eventually get built — and that substrate has significant gaps.
The GENIUS Act (Signed July 2025)
The Guiding and Establishing National Innovation for U.S. Stablecoins Act established the first federal regulatory framework for payment stablecoins. It designates the OCC as principal administrator, with the Federal Reserve, FDIC, and Treasury in supporting roles. Importantly, it clarifies that approved stablecoins are neither securities nor deposits — a classification question that had been paralyzing stablecoin issuers for years.
For AI agents, the GENIUS Act matters because stablecoins are the most practical medium for agent-to-agent payments. An agent transacting in USDC or a similar instrument is operating in a newly regulated payment layer. Whether the agent itself — as opposed to its operator — qualifies as a "payment stablecoin issuer" or triggers other obligations under the Act is not addressed. The implementing regulations, due by July 2026, will be the first opportunity for regulators to say something useful here.
The CLARITY Act (Passed House, July 2025)
The Digital Asset Market Clarity Act passed the House 294–134 and represents the most comprehensive attempt to date to define the regulatory perimeter around digital assets. It draws a clear line between SEC and CFTC jurisdiction — resolving a years-long turf war that left the industry operating under uncertainty — and creates a tailored disclosure framework for digital asset projects raising capital.
The CLARITY Act also addresses "digital asset brokers, dealers, and exchanges." Whether an autonomous agent that executes trades on behalf of itself — or on behalf of its operator — constitutes a broker or dealer under the Act is an open question. The bill's definitions were drafted with human-operated entities in mind. An agent that intermediates transactions between other agents may fall within the definitions or outside them depending on interpretive choices that haven't been made yet.
What Isn't There
Neither the GENIUS Act nor the CLARITY Act addresses:
- Legal personhood or capacity for autonomous agents
- Attribution of liability when an agent acts without real-time human direction
- KYC/AML obligations for wallets operated by non-human entities
- Tax treatment of income earned or losses incurred by an autonomous agent
- Cross-border jurisdiction when an agent operates across multiple countries simultaneously
- Fiduciary duties, if any, owed by an agent to the humans whose assets it manages
The EU's AI Act, which entered into force in August 2024 and phases in its obligations over the following years, imposes conformity assessments and transparency requirements on high-risk AI systems deployed in regulated sectors — but it does not create a legal personhood framework for agents, and its provisions on autonomous financial activity are limited. MiCA, the EU's comprehensive crypto markets regulation fully in effect since December 2024, similarly addresses the instruments agents might use but not the agents themselves.
The Cross-Border Dimension
This is where the gap becomes most acute for the clients I work with. Foreign businesses and investors deploying autonomous agents that touch U.S. assets or U.S. persons face a set of obligations that the current framework applies unevenly.
If a foreign person deploys an agent that earns U.S.-source income — say, through on-chain lending to U.S. counterparties, or through fees earned from U.S. users — the question of whether withholding tax applies turns on whether that income is "effectively connected" with a U.S. trade or business. The analysis was developed for human enterprises with offices and employees. Applying it to software running in a cloud environment with no fixed place of business requires extrapolation that the IRS has not yet formally made.
FIRPTA creates a separate layer. If an autonomous agent, in the course of its operations, acquires an interest in U.S. real property — directly or through a fund structure — the foreign operator may have FIRPTA exposure that the agent cannot self-report or withhold. The agent has no legal identity. The operator may not know the acquisition occurred until after the fact.
U.S. tax treaties generally reduce or eliminate withholding on certain categories of income for qualifying residents of treaty partner countries. Whether an AI agent's operator can claim treaty benefits on income the agent earns is an open question. The treaty analysis turns on who the "beneficial owner" of the income is — a concept designed for human taxpayers and corporate entities, not autonomous software.
What Founders and Operators Should Be Doing Now
The law will eventually catch up. It always does. The question is what structures are in place when it does, and whether those structures will be seen as good-faith attempts at compliance or as deliberate attempts to exploit the gaps.
A few practical considerations for those building or deploying AI agents with financial capabilities today:
Entity structure matters more, not less. If an agent is going to hold assets, execute transactions, or earn income, the entity that deploys it should be deliberately structured to handle those flows. That means thinking about where the operating entity is domiciled, how profits and losses are reported, and what happens to assets the agent accumulates. The choice of entity affects tax treatment, liability exposure, and regulatory classification in ways that are difficult to unwind after the fact.
Operator accountability chains need to be explicit. The contractual agreements governing agent deployment should clearly attribute responsibility for the agent's actions to a specific legal person. When regulators eventually look at agent operators, they will want a clean chain of accountability. Agreements that leave this ambiguous create exposure for everyone in the chain.
Cross-border operators face the most uncertainty. Foreign persons or businesses deploying agents that interact with U.S. markets, U.S. assets, or U.S. persons should be conducting a current analysis of their U.S. tax and regulatory obligations — not waiting for the framework to clarify. The analysis is imperfect because the rules are imperfect, but the alternative is discovering a material obligation retroactively.
Watch the implementing regulations. The GENIUS Act's implementing rules are due by July 2026. The CLARITY Act, if it moves through the Senate and is signed, will generate a rulemaking process that is the best near-term opportunity for regulators to address agent-specific questions. The comment periods on those rulemakings are the moment when operators can — and should — engage.
Garg is right that this is a foundational moment. The invention of the LLC didn't just create a new liability shield — it created a new unit of economic organization that structured how capital, risk, and accountability were allocated for the next century. Something analogous is needed for autonomous agents. The difference is that the LLC emerged from deliberate legislative design. What we have now is autonomous financial actors operating in a legal framework that wasn't built for them, transacting at a scale and speed that the law hasn't caught up to, and accumulating real economic exposure in the gaps.
That's not a reason to stop building. It is a reason to build carefully, structure deliberately, and stay close to the regulatory process as it develops.
This article is for informational purposes only and does not constitute legal advice. Reading this content does not create an attorney-client relationship. Laws and regulations change; readers should not rely on this content as a substitute for qualified legal counsel specific to their circumstances. Attorney Advertising.