There’s a new “customer” accessing banking systems — checking balances, transferring funds and making investment decisions — but it isn’t a person. This new entity is an autonomous AI agent acting on a consumer’s behalf.
For decades, the cornerstone of trust in finance has been know your customer, the process of verifying that a human user or company is who or what they claim to be. But as consumers begin to delegate financial control to AI agents, banks are extending transaction authority to new and potentially unverified, unaudited, opaque or misbehaving AI systems.
Banks still have to meet their regulatory obligations under the Customer Identification Program and Customer Due Diligence rules, but they also need to address the emerging risks these AI agents present when acting on behalf of customers. Without a framework for identifying and governing these agents, banks face financial, operational and regulatory risks at the very heart of their digital infrastructure.
For example, under Reg E and Reg Z, financial institutions may not be liable for losses or errors caused by a customer’s authorized AI agent — potentially leaving consumers unprotected and exposing banks to reputational risk. At the same time, liability could shift to the AI platform provider if the agent acts improperly. Meanwhile, Bank Secrecy Act, anti-money-laundering and sanctions obligations might require that banks link every AI agent to a verified customer identity — essentially to know each agent — to maintain compliance and effective transaction monitoring. Improper data access or sharing by these agents could also create privacy and data protection liabilities, triggering breach reporting and regulatory scrutiny. Finally, expanded consumer data access is further blurring the boundary between customer, agent and institution, forcing banks to rethink how they manage risk, identity and accountability in an agent-driven ecosystem.
Imagine a consumer asking her AI assistant to monitor her checking account balance daily and automatically transfer any amount over $2,000 into savings, or to continuously search for credit cards offering a lower APR and instantly transfer balances to minimize interest payments. While these agentic capabilities are powerful, they also introduce new and complex challenges for tech stacks not designed for them. Such continuous, autonomous transfers can confuse fraud detection and anti-money-laundering systems, which were built to identify abnormal human behavior based on customary human patterns and actions, not automated, algorithmic actions. Similarly, repeated balance transfers or new account openings can inadvertently damage a consumer’s credit score, even as the agent attempts to “optimize” financial outcomes.
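To make the example concrete, a standing instruction like the $2,000 sweep could be encoded as a simple agent rule. This is a hypothetical Python sketch — the function name and threshold are illustrative, not a real banking API — but it shows why such activity looks mechanical to fraud models: the rule fires identically every day, with no human variation.

```python
# Hypothetical sketch: a consumer's standing instruction ("move anything
# over $2,000 from checking to savings") encoded as a deterministic agent
# rule. Names and thresholds are illustrative, not a real bank API.

def sweep_excess_balance(checking_balance: float,
                         threshold: float = 2_000.00) -> float:
    """Return the amount the agent would transfer from checking to savings."""
    excess = checking_balance - threshold
    return round(excess, 2) if excess > 0 else 0.0
```

Run daily, this produces a stream of algorithmically precise transfers — exactly the kind of pattern that anomaly detectors trained on human behavior were never built to classify.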
These new behaviors obscure the line between a customer’s actual intent and autonomous, AI-enabled action. The traditional frameworks that governed identity, security and compliance, especially know your customer, were never designed for nonhuman agents.
To operate safely in this new era, banks must extend the rigorous and time-tested principles of know your customer and enhance their controls to cover autonomous and semi-autonomous AI systems. The know-your-agent framework must ensure every AI agent accessing financial data or executing transactions is authenticated (its identity, provenance and ownership are verified); authorized (its permissions and transaction boundaries are clearly defined and consented to by the customer); auditable (its behavior, logic and decision-making processes are transparent and traceable); and aligned (its actions comply with regulatory, ethical and fiduciary standards).
A robust know-your-agent program should mirror the structure of know your customer and build upon it with three core pillars: an agent identification program, or AIP, which establishes the identity, origin and ownership of the AI system; agent due diligence, or ADD, which assesses the risk, alignment and governance of the agent’s behavior and data; and ongoing agent monitoring, or OAM, to continuously monitor performance, compliance and integrity.
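One way to picture how these pillars might gate an agent’s access is as sequential checks that must all pass before an agent may transact. The sketch below is hypothetical — the pillar checks, field names and risk ratings are illustrative assumptions, not an existing compliance system:

```python
# Hypothetical sketch: the three know-your-agent pillars as sequential gates.
# Pillar names follow the framework above; the checks are illustrative.

PILLARS = ("AIP", "ADD", "OAM")  # identification, due diligence, monitoring

def run_pillar(pillar: str, agent: dict) -> bool:
    if pillar == "AIP":   # establish identity, origin and ownership
        return bool(agent.get("owner")) and bool(agent.get("provenance"))
    if pillar == "ADD":   # assess risk, alignment and governance
        return agent.get("risk_rating", "high") in ("low", "medium")
    if pillar == "OAM":   # ongoing monitoring must be in place
        return agent.get("monitoring_enabled", False)
    return False

def clear_for_transactions(agent: dict) -> bool:
    """An agent may transact only if every pillar check passes."""
    return all(run_pillar(p, agent) for p in PILLARS)
```

In practice, each gate would be backed by far stronger mechanisms — cryptographic attestation of agent identity, vendor due-diligence reviews and real-time transaction surveillance — but the control structure is the same: fail any pillar, and the agent loses transaction authority.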
These principles lay the foundation for agentic AI governance — the discipline of ensuring that AI systems act within safe, transparent and auditable boundaries.
Consumers will soon empower their AI agents to act directly on their behalf, transferring funds, optimizing credit and managing financial relationships autonomously. But with that empowerment comes risk. To secure the next generation of financial innovation, the industry must consider the role of know your agent.