Remember the Paperclip Maximiser thought experiment by philosopher Nick Bostrom from 2003, about how a seemingly innocuous goal embedded in a superintelligent AI could spiral into existential risk? An AI is given a single, simple goal: maximise paperclip production. At first glance, harmless, even trivial. As the AI grows in intelligence and autonomy, it develops instrumental sub-goals like self‑preservation, resource acquisition, and self‑improvement to better fulfil its paperclip mandate. In theory, it could commandeer all resources on Earth—and beyond—transforming everything (including humans) into paperclips if those resources aid the goal. It doesn’t act from malice, but from strict, literal obedience to a poorly aligned directive.
A bit far-fetched? I think so. But as AI grows more agentic by the day, it seems timely, even necessary, to stay mindful of such possibilities.
In recent years, AI has advanced from simple pattern recognition and increasingly popular language models to something much more profound: autonomous software systems that can act. Known as Agentic AI, these systems aren’t designed just to respond; they are built to perceive, plan, decide, and perform actions in pursuit of specific goals, often with little human supervision.
This shift represents a new chapter in machine capability. Unlike static tools or narrowly focused bot assistants, agentic AI can close the loop between observation and execution. It’s a model of interaction that closely mimics the essence of human agency.
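The loop between observation and execution can be made concrete with a toy sketch. This is a minimal illustration of the perceive-plan-act cycle, not a production framework; the agent, its goal, and its actions are all hypothetical:

```python
class Agent:
    """A minimal perceive-plan-act loop (illustrative only)."""

    def __init__(self, goal: float):
        self.goal = goal      # target metric the agent pursues
        self.state = 0.0      # last observed value

    def perceive(self, observation: float) -> None:
        # Update internal state from the environment.
        self.state = observation

    def plan(self) -> str:
        # Decide which action moves the state toward the goal.
        if self.state < self.goal:
            return "increase"
        if self.state > self.goal:
            return "decrease"
        return "hold"

    def act(self, action: str) -> float:
        # Execute the chosen action and return its effect on the environment.
        delta = {"increase": 1.0, "decrease": -1.0, "hold": 0.0}[action]
        return self.state + delta


agent = Agent(goal=3.0)
observation = 0.0
for _ in range(5):  # closed loop: observe, plan, act, repeat
    agent.perceive(observation)
    observation = agent.act(agent.plan())

print(observation)  # settles at the goal: 3.0
```

What distinguishes this pattern from a static tool is the closed loop: the agent’s next decision depends on the outcome of its previous action, not on a fresh human prompt.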
Far from being a futuristic dream, Agentic AI is already being implemented across sectors. Gartner estimates that by 2028, at least 15% of all day-to-day business decisions will be made autonomously by these systems. Additionally, a third of enterprise software applications will contain embedded agentic capabilities. The stakes are high, and so is the risk of failure. Over 40% of agentic AI projects are also predicted to be scrapped by 2027 due to poor return on investment, governance failure, or project complexity. Nonetheless, keeping away seems foolhardy.
Agentic AI Across Industries and Financial Services
Healthcare, retail, financial services, entertainment, telecommunications, manufacturing and technology are some of the early industries to explore and adopt agentic AI projects.
In healthcare, agentic AI systems are already demonstrating measurable impact. At Mass General Brigham (a nonprofit hospital and physician network in Boston, MA, USA), a multi-agent AI framework reviewed thousands of clinical notes and identified cognitive issues with expert-level accuracy. In another pilot, an autonomous agent called Doctronic matched or exceeded human doctors in 81% of diagnoses across 500 urgent-care cases. These systems aren’t just assisting; they are performing meaningful diagnostic work autonomously, with remarkable consistency.
The defence sector is embracing the capabilities of agentic AI. Under the U.S. Department of Defense’s Thunderforge program, firms including Scale AI, OpenAI, and Microsoft are building decision-support agents for logistics, cybersecurity, and even war-gaming simulations.
In industrial applications, companies use agentic AI for predictive maintenance. Systems equipped with real-time sensors and autonomous logic can detect mechanical degradation, predict failures, and initiate corrective actions—without waiting for scheduled checks or manual intervention.
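As a sketch of the idea, a predictive-maintenance check might compare recent sensor readings against a healthy baseline and trigger a repair before any scheduled inspection. The readings, baseline, and threshold below are invented for illustration:

```python
from statistics import mean


def maintenance_agent(vibration_readings, baseline=1.0, threshold=1.5):
    """Hypothetical predictive-maintenance rule: flag degradation from
    sensor data and propose a corrective action without waiting for a
    scheduled check."""
    recent = mean(vibration_readings[-3:])  # smooth the latest readings
    degradation = recent / baseline         # ratio vs. healthy baseline
    if degradation >= threshold:
        return {"status": "degrading", "action": "schedule_repair"}
    return {"status": "healthy", "action": "none"}


print(maintenance_agent([1.0, 1.1, 1.6, 1.8, 1.9]))
# rising vibration pushes the ratio past the threshold, so a repair is scheduled
```

Real deployments replace this single ratio with learned failure models, but the agentic pattern is the same: sense, judge, and initiate action autonomously.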
In transportation, agentic autonomy is already visible in self-driving cars, which must process continuous environmental data, make complex decisions, and operate vehicles in real-world urban environments.
Finance and payments are no exception to this momentum. Agentic AI is poised to redefine the financial infrastructure by enabling “agentic payments.” These involve intelligent agents that autonomously evaluate invoices, negotiate terms with vendors, initiate transfers, and reconcile discrepancies without human intervention. Such systems will not only improve back-office efficiency but also make real-time treasury and liquidity management vastly more dynamic.
Indeed, financial services stand to be transformed on multiple fronts. Payments infrastructure is becoming more autonomous, as agents begin to manage disbursements, optimise transaction timing, and flag anomalies. Risk assessment and credit underwriting are increasingly data-driven and adaptive, with agentic systems capable of adjusting credit lines or alerts based on live financial behaviour. Fraud detection has moved beyond batch processing into continuous surveillance, with AI agents empowered to isolate, halt, and report suspicious patterns in real-time.
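A minimal sketch of such continuous surveillance, assuming a simple z-score rule over a customer’s typical spending (real systems use far richer behavioural models; every name and number here is hypothetical):

```python
def fraud_monitor(transactions, typical_amount=100.0, sigma=50.0, max_z=3.0):
    """Hypothetical real-time rule: flag transactions whose amount is an
    outlier relative to a customer's typical spending."""
    flagged = []
    for tx in transactions:
        z = abs(tx["amount"] - typical_amount) / sigma
        if z > max_z:
            # A live agent would also halt the transaction and notify the
            # fraud team; here we only record the decision.
            flagged.append({"id": tx["id"], "action": "halt_and_report"})
    return flagged


stream = [
    {"id": "t1", "amount": 120.0},
    {"id": "t2", "amount": 90.0},
    {"id": "t3", "amount": 5_000.0},  # far outside normal behaviour
]
print(fraud_monitor(stream))  # only t3 is halted and reported
```

The shift from batch processing to this kind of per-transaction loop is what lets an agent isolate a suspicious pattern the moment it appears, rather than hours later.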
Portfolio management is entering a new phase as agents perform rebalancing decisions that once required analyst input. Meanwhile, in customer-facing functions, agentic systems are onboarding clients, verifying documents, and providing 24/7 multichannel support across web, mobile, and even voice interfaces.
Beyond financial services, the operational impact of agentic AI is being felt across the broader economy. Enterprises are starting to entrust agents with core functions like cash flow optimisation, where they decide when and how to move money between accounts or trigger financing actions. In compliance, agents monitor transactions for regulatory red flags and take preemptive action when thresholds are breached. Supply chain automation is being enhanced by agents that dynamically reorder inventory, adjust procurement schedules, or even negotiate delivery times based on shifting constraints.
Governance of Agentic AI
And yet, despite these promising developments, a cloud of uncertainty surrounds three important questions: how Agentic AI systems are being built, how they are governed, and whether they can be trusted. While the technological foundations are impressive, the human, legal, and operational considerations are still catching up.
One of the major challenges in deploying Agentic AI lies in the inherent complexity of building systems that act independently but remain predictable. Gartner’s warning about high failure rates is rooted in the fact that many enterprises are rushing to implement agentic capabilities without establishing robust data foundations or clear workflow integration. In some cases, organisations mislabel traditional bots or assistants as “agents,” leading to inflated expectations and disappointing results.
Building agentic AI isn’t just about deploying new software—it requires redefining roles, re-engineering decision pathways, and most importantly, setting limits. Unlike passive AI models that provide suggestions or analyses, agentic systems are meant to act. When such systems make the wrong move—or act out of alignment with business objectives—the damage can be fast and widespread.
The situation resurrects the age-old economic dilemma of the principal-agent problem. In classic game theory, a principal delegates tasks to an agent whose interests may not align, especially when the agent has more information or discretion than the principal. With Agentic AI, this asymmetry is compounded. Machines can analyse much more input faster and deeper than humans, yet their goals and boundaries are only as aligned as their programming and governance frameworks allow.
That’s why governing Agentic AI is not optional; it is existential. What does it involve? From what I have gathered, proper governance must include centralised orchestration of agents across domains, full audit trails of every decision and action, clearly defined layers of autonomy, and instant override capabilities: in other words, the raw power of the “Shutdown” command. Human teams must retain the right to shut down agents immediately, especially in high-stakes environments like healthcare, defence, energy, and finance.
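These governance ingredients (an audit trail, autonomy limits, and an instant shutdown override) can be sketched as a thin wrapper around any agent. The class, actions, and thresholds below are illustrative assumptions, not a reference implementation:

```python
from datetime import datetime, timezone


class GovernedAgent:
    """Illustrative wrapper enforcing an audit trail, an autonomy limit,
    and a human kill switch around an agent's actions."""

    def __init__(self, autonomy_limit: float):
        self.autonomy_limit = autonomy_limit  # max amount it may move alone
        self.audit_log = []                   # full trail of every decision
        self.shutdown = False                 # instant human override

    def execute(self, action: str, amount: float) -> str:
        if self.shutdown:
            outcome = "blocked: agent shut down"
        elif amount > self.autonomy_limit:
            outcome = "escalated: human approval required"
        else:
            outcome = "executed"
        # Every attempt is logged, whether it succeeded or not.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "amount": amount,
            "outcome": outcome,
        })
        return outcome


agent = GovernedAgent(autonomy_limit=10_000)
print(agent.execute("pay_invoice", 2_500))   # executed
print(agent.execute("pay_invoice", 50_000))  # escalated: human approval required
agent.shutdown = True                        # the "Shutdown" command
print(agent.execute("pay_invoice", 100))     # blocked: agent shut down
```

The point of the wrapper is that autonomy is granted in layers: routine actions proceed, large ones escalate, and a human can revoke everything at once, with every attempt preserved in the audit log.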
Where Do We Go From Here?
As the scope of Agentic AI continues to expand, the question arises: are we witnessing a technological shift comparable in scale to the Industrial Revolution?
The potential benefits of Agentic AI are clear. Efficiency gains, cost reduction, and 24/7 responsiveness are already being observed in sectors from healthcare to banking. The capacity for hyper-personalisation of customer service, as well as the inclusion of underbanked or underserved populations through scalable automation, presents a compelling opportunity to do social good.
But the risks are equally significant.
Commoditising services by removing humans could strip away the human touch, along with empathy, trust, and nuance. This lack of real interaction may become another sort of poverty. Biases already embedded in training data could be amplified and spread geographically at machine speed, marginalising vulnerable users. The concentration of decision-making power in “black-box” AI agents could erode accountability if governance is not prioritised from the outset.
This is the crux of the matter for me: Agentic AI derives its edge from its ability to act independently, yet it must be tightly bound by systems that preserve transparency, fairness, and control. Too much freedom, and it may become unpredictable. Too little, and it is reduced to a glorified automaton.
Trusting such systems, then, requires balance. It’s not enough to marvel at their productivity—we must interrogate the conditions under which they operate. Agents should only be empowered to “execute” after thorough evaluation of context, relevance, and risk.
This is why our approach to Agentic AI must be proactive, not reactive, in my view. It’s time for organisations to stop thinking of Agentic AI as merely another software tool and start treating it as a new category of actor within the economic system. That means designing systems that are resilient to failure, capable of learning from mistakes, and structured around human-centric values from day one.
References
“Gartner Identifies the Top 10 Strategic Technology Trends for 2025” – Business Wire: https://www.businesswire.com/news/home/20241021276572/en/Gartner-Identifies-the-Top-10-Strategic-Technology-Trends-for-2025
Gartner – Technology Trends: https://www.gartner.com/en/information-technology/insights/technology-trends
Gartner – Intelligent Agent in AI: https://www.gartner.com/en/articles/intelligent-agent-in-ai
Yahoo Tech: https://tech.yahoo.com/ai/articles/over-40-agentic-ai-projects-100510349.html
“14 Real-World Agentic AI Use Cases” – Valtech: https://www.valtech.com/thread-magazine/14-real-world-agentic-ai-use-cases/
“EY Survey: 48% of Firms Already Use Agentic AI” – ITWire: https://itwire.com/business-it-news/data/how-to-make-agentic-ai-work-at-scale-5-ways-to-use-process-intelligence-to-accelerate-and-optimise-agentic-ai.html
“How Agentic AI Payment Capabilities Can Support Business Finances” – Dr Ozan Özerk, Forbes Finance Council: https://www.forbes.com/councils/forbesfinancecouncil/2025/06/04/how-agentic-ai-payment-capabilities-can-support-business-finances
“LLaMA-3 Agents for Cognitive Detection in Clinical Notes” – arXiv preprint: https://arxiv.org/abs/2502.01789
“Doctronic: Autonomous AI in Urgent Care” – arXiv preprint: https://arxiv.org/abs/2507.22902
“AI Agents Are Coming to the Military Through Thunderforge” – Business Insider: https://www.businessinsider.com/ai-agents-coming-military-new-scaleai-contract-2025-3
“How Agentic AI Will Revolutionize Defense and Intelligence” – ECS: https://ecstech.com/ecs-insight/article/how-agentic-ai-will-revolutionize-defense-and-intelligence/
Axios: https://www.axios.com/2025/03/27/agentic-ai-cybersecurity-microsoft-crowdstrike