Enzo Tolentino is a forward-thinking audit and risk leader driving digital transformation at Banco de Crédito BCP. With expertise in machine learning, analytics and IT-based controls, he pioneers innovative strategies that redefine internal auditing. Tolentino is known for advancing efficiency, accuracy and organizational resilience through data-driven risk management.
Understanding Agentic AI in Finance
The core difference between traditional AI systems and agentic AI systems lies in their ability to perform autonomous data interpretation, real-time adaptation and action initiation. The financial sector utilizes agentic AI through applications that include:
• Automated fraud detection: By enabling autonomous, real-time analysis and response, agentic AI offers a more advanced approach to fraud detection than traditional rule-based systems.
• Dynamic portfolio management: Agentic AI enables greater personalization by dynamically aligning strategies with individual investor goals and market conditions. It can rebalance portfolios, simulate market scenarios and coordinate actions across advisory, compliance and analytics workflows.
• Scalable customer service and advice: Enhancing the customer experience, agentic AI autonomously manages inquiries, anticipates client needs and initiates proactive outreach. It can synthesize account data and match it with product offerings, delivering tailored financial advice without human escalation and offering continuity and responsiveness across digital service channels.
While these systems offer efficiency and precision, their decision-making independence introduces new governance concerns, particularly accountability, transparency, regulatory compliance and ethical alignment.
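The autonomous detect-and-act pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the `Transaction` fields, signal weights and action thresholds are invented for the example, not drawn from any real fraud system.

```python
from dataclasses import dataclass

# Hypothetical transaction record; field names are illustrative only.
@dataclass
class Transaction:
    account_id: str
    amount: float
    country: str
    home_country: str

def risk_score(txn: Transaction, avg_amount: float) -> float:
    """Combine simple signals into a 0..1 risk score (illustrative weights)."""
    score = 0.0
    if txn.amount > 3 * avg_amount:       # unusually large transaction
        score += 0.5
    if txn.country != txn.home_country:   # cross-border activity
        score += 0.3
    return min(score, 1.0)

def act(txn: Transaction, avg_amount: float) -> str:
    """The agentic step: the system initiates an action, not just a flag."""
    s = risk_score(txn, avg_amount)
    if s >= 0.7:
        return "block_and_escalate"   # autonomous hold pending human review
    if s >= 0.4:
        return "step_up_auth"         # request additional verification
    return "approve"
```

A rule-based system would stop at the score; the agentic version chooses and initiates the response itself, which is precisely what raises the governance questions that follow.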
Governance Challenges
1. Accountability and Liability
With agentic AI executing decisions once reserved for humans, such as approving loans or reallocating assets, clarity over responsibility is essential. If an autonomous action leads to financial loss or noncompliance, determining who is accountable becomes complex. To keep accountability traceable, organizations should adopt structured frameworks that include:
• Role assignment tools like RACI (Responsible, Accountable, Consulted, Informed) charts to help clarify oversight responsibilities.
• Comprehensive documentation of model logic, training data and decision outputs.
• Escalation protocols to flag high-risk decisions for human review.
Together, these controls support the broader solution of implementing "human accountability frameworks," which ensure that ultimate decision ownership remains with human agents.
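The RACI mapping and escalation protocol just described might be wired together as follows. This is a sketch under stated assumptions: the role names, decision types, confidence threshold and exposure limit are all hypothetical placeholders, not a prescribed standard.

```python
# Hypothetical RACI-style ownership map; roles and decision types are illustrative.
RACI = {
    "loan_approval":   {"responsible": "credit_model_team",
                        "accountable": "chief_credit_officer"},
    "asset_rebalance": {"responsible": "quant_team",
                        "accountable": "head_of_portfolio_risk"},
}

def route_decision(decision_type: str, model_confidence: float,
                   exposure: float, exposure_limit: float = 100_000.0) -> dict:
    """Escalate to the accountable human when a risk condition trips."""
    owners = RACI[decision_type]
    needs_review = model_confidence < 0.9 or exposure > exposure_limit
    return {
        "decision_type": decision_type,
        "auto_execute": not needs_review,
        "escalate_to": owners["accountable"] if needs_review else None,
        "documented_by": owners["responsible"],  # audit-trail owner
    }
```

The key design point is that every routing outcome names a human owner, so ultimate decision ownership never leaves the accountability chart even when the AI executes autonomously.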
2. Explainability and Transparency
Often powered by deep or reinforcement learning, agentic AI systems tend to function as opaque decision-makers. Their inner workings can be difficult to interpret, complicating regulatory compliance and error remediation.
To address this challenge, firms should adopt a combination of technical and procedural controls. Tools such as LIME and SHAP can generate interpretable outputs, while tiered transparency (simplified insights for customers, technical details for auditors and regulatory-focused justifications for compliance teams) makes explanations accessible to each audience. Explainability should also be integrated into existing Model Risk Management (MRM) programs. The overarching solution involves building AI models with interpretability as a foundational design principle and using monitoring tools to continuously validate the accuracy of the explanations provided over time.
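The tiered-transparency idea can be sketched as a single renderer that serves all three audiences from one explanation. The feature names, attribution weights and message formats below are illustrative assumptions; the attributions stand in for what a SHAP- or LIME-style tool would produce.

```python
def tiered_explanation(feature_weights: dict, decision: str) -> dict:
    """Render one model explanation at three audience tiers.

    feature_weights: per-feature contribution to the decision, e.g. as
    produced by a SHAP- or LIME-style attribution tool (values here are
    illustrative, not from a real model).
    """
    ranked = sorted(feature_weights.items(), key=lambda kv: abs(kv[1]), reverse=True)
    main_factor, main_weight = ranked[0]
    return {
        # Customer tier: plain-language summary, no internals exposed.
        "customer": f"Your application was {decision}; "
                    f"the main factor was {main_factor}.",
        # Auditor tier: full attribution detail for technical review.
        "auditor": {"decision": decision, "attributions": dict(ranked)},
        # Compliance tier: regulatory framing of the dominant driver.
        "compliance": f"Decision '{decision}' driven primarily by "
                      f"'{main_factor}' (weight {main_weight:+.2f}); "
                      f"full attribution retained for review.",
    }
```

Keeping the three views derived from one underlying attribution record avoids the drift that occurs when customer-facing and regulator-facing explanations are maintained separately.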
3. Regulatory Oversight and Model Drift
Because agentic AI systems can adapt after deployment, their behavior may drift away from the standards under which they were approved. Compliance teams must, therefore, maintain ongoing validation to ensure decisions remain aligned with legal standards. Regulatory bodies are taking notice: the EU AI Act and financial regulators are moving toward risk-based frameworks that emphasize real-time oversight and auditability of high-risk AI systems.
Organizations must adopt validation loops through automated systems that regularly test AI performance against compliance benchmarks to mitigate those risks. Implementing drift detection mechanisms through statistical checks and predefined thresholds facilitates the identification of model behavior deviations from expected norms. Additionally, version control and sandbox testing provide a controlled environment to evaluate updates before they are deployed in production.
Overall, institutions must create a regulatory change management process that integrates real-time updates into the AI system's compliance logic and mandates revalidation as part of each model update cycle.
4. Scalability of Human Oversight
Human-in-the-loop (HITL) governance models require human reviewers to verify and approve essential AI decisions before they take effect. This becomes impractical when agentic systems execute thousands of fraud checks each second. Organizations should instead implement "human-on-the-loop" architectures, in which human supervisors monitor system behavior through dashboards and retain override capabilities but do not validate every decision; humans step in only for exceptions and review periodic aggregated decision logs. Real-time observability of AI decisions must be paired with escalation protocols for anomalous behavior and thorough training so oversight staff can interpret and act on AI outputs.
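The human-on-the-loop pattern can be sketched as a router that executes routine decisions autonomously, queues exceptions for a human, and aggregates everything for periodic review. The anomaly threshold and class interface are hypothetical choices for illustration.

```python
from collections import deque

class HumanOnTheLoop:
    """Humans monitor via dashboards and handle exceptions; the AI
    executes routine decisions without per-decision approval.
    Threshold is an illustrative assumption."""

    def __init__(self, anomaly_threshold: float = 0.95):
        self.anomaly_threshold = anomaly_threshold
        self.exception_queue = deque()   # decisions awaiting a human
        self.decision_log = []           # aggregated log for periodic review

    def handle(self, decision_id: str, anomaly_score: float) -> str:
        self.decision_log.append((decision_id, anomaly_score))
        if anomaly_score >= self.anomaly_threshold:
            self.exception_queue.append(decision_id)  # human steps in
            return "escalated"
        return "executed"   # autonomous path: no human approval needed

    def dashboard(self) -> dict:
        """Real-time observability for the supervising humans."""
        n = len(self.decision_log)
        return {
            "decisions": n,
            "pending_review": len(self.exception_queue),
            "escalation_rate": len(self.exception_queue) / n if n else 0.0,
        }
```

Contrast with HITL: here throughput scales with the machine, and the human cost scales only with the exception rate shown on the dashboard.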
5. Ethical Governance and Value Alignment
Financial institutions must ensure their AI systems uphold institutional values as they interact with customers and influence financial decisions. The main ethical concerns are minimizing bias and discriminatory outcomes, protecting privacy and confidentiality, and upholding ethical standards in automated systems. Ethical oversight should be integrated at every stage of AI development, from creation to deployment. An ethics-by-design framework works well because it builds fairness checks, bias testing and diversity metrics into model development pipelines. Cross-functional ethics review boards can continuously evaluate high-impact AI initiatives, and regular AI Ethics Impact Assessments (AI-EIAs) allow organizations to discover and resolve potential ethical problems before they become serious. Organizations should encode their ethical principles into model design and operational constraints to meet regulatory standards while maintaining institution-wide ethical commitments.
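One concrete fairness check that fits into such a pipeline is a demographic-parity gap: the spread in approval rates across groups. The group labels and any action threshold are placeholders; what counts as an acceptable gap would be set by the ethics review board, not by the code.

```python
def demographic_parity_gap(outcomes: list) -> float:
    """Maximum difference in approval rate across groups.

    outcomes: (group_label, approved) pairs; group labels here are
    illustrative. A gap near 0 suggests parity between groups.
    """
    totals, approvals = {}, {}
    for group, approved in outcomes:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + (1 if approved else 0)
    rates = [approvals[g] / totals[g] for g in totals]
    return max(rates) - min(rates)
```

Wired into the model development pipeline as a gating test, a check like this turns the ethics-by-design principle into an automated, repeatable control rather than a one-time review.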
Incorporating agentic AI into financial operations demands new governance frameworks; current oversight methods do not provide sufficient protection. Combining technical controls with regulatory engagement and ethical foresight gives organizations the proactive governance needed to capture the benefits of autonomous systems while managing their risks responsibly.