Agentic AI: Autonomy, Liability, and the Next Governance Frontier

Agentic AI represents a fundamental shift in what AI systems do. Unlike earlier tools that process data or respond to prompts, agentic systems autonomously plan, decide, and act: they move money, place orders, delete records, identify vulnerabilities, and deploy code, all without requiring human approval at each step. As AI agents are integrated into critical workflows across finance, healthcare, legal services, and supply chain management, pressing questions emerge. Which ex-ante and ex-post safeguards are in place? And when autonomous systems cause harm, who is liable?
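The ex-ante half of that question is, at bottom, an engineering question. One commonly discussed safeguard is an approval gate that intercepts high-risk actions before they execute. The sketch below illustrates the pattern under simplified assumptions; the names (AgentAction, execute_with_gate) and the two-tier risk model are hypothetical, not drawn from any particular framework.

```python
from dataclasses import dataclass
from enum import Enum


class Risk(Enum):
    """Hypothetical risk tiers; a real deployment would derive these from policy."""
    LOW = "low"
    HIGH = "high"


@dataclass
class AgentAction:
    """A proposed action, described by the agent before it is executed."""
    kind: str       # e.g. "transfer_funds", "delete_record"
    payload: dict   # action parameters
    risk: Risk      # assessed risk tier


def require_human_approval(action: AgentAction) -> bool:
    """Stand-in for an out-of-band review step (ticket queue, approval UI)."""
    answer = input(f"Approve {action.kind} with {action.payload}? [y/N] ")
    return answer.strip().lower() == "y"


def execute_with_gate(action: AgentAction, execute) -> None:
    """Ex-ante safeguard: high-risk actions block until a human approves."""
    if action.risk is Risk.HIGH and not require_human_approval(action):
        raise PermissionError(f"{action.kind!r} rejected by human reviewer")
    execute(action)  # only reached for low-risk or approved actions
```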
A significant liability gap is taking shape. Most agentic systems are currently deployed under legacy technology contracts designed for passive, deterministic software firmly under human control. Suppliers routinely disclaim responsibility for agent behaviour, leaving deploying organisations to absorb compliance and legal consequences for actions they neither fully directed nor could reasonably foresee. Existing AI regulatory frameworks, including the EU AI Act, were largely drafted before the agentic paradigm matured, and struggle to allocate responsibility across the layered ecosystems of model providers, orchestration platforms, tool vendors, and end-users that characterise modern multi-agent systems.
Developing international red lines and a dedicated agentic AI liability framework is therefore becoming urgent. Foundational building blocks are beginning to emerge: ex-ante safety checks, auditable autonomy, chain-of-thought logging, clearly designated human oversight roles, mandatory impact assessments for high-stakes agent deployments, and enforceable accountability chains that trace decisions across agent handoffs. As AI agents move from assistants to autonomous actors in the real world, governance frameworks must evolve at the same pace or risk leaving individuals, organisations, and democratic institutions exposed to harms for which no one is legally responsible.
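To make "auditable autonomy" and "enforceable accountability chains" concrete, here is a minimal sketch of one possible mechanism: an append-only decision log in which every step records which agent acted, on whose delegated instruction, and with what rationale, so that a harmful outcome can be traced back through each handoff to its originating instruction. The class and field names (DecisionRecord, AccountabilityChain) are illustrative, not an established standard.

```python
import json
import time
import uuid
from dataclasses import dataclass, field, asdict


@dataclass
class DecisionRecord:
    """One auditable step: which agent acted, on whose instruction, and why."""
    agent_id: str
    action: str
    rationale: str          # e.g. a summary of the agent's reasoning trace
    parent_id: str | None   # record that delegated this step, if any
    record_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: float = field(default_factory=time.time)


class AccountabilityChain:
    """Append-only log that preserves the delegation path across handoffs."""

    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def append(self, record: DecisionRecord) -> str:
        self._records.append(record)
        return record.record_id

    def trace(self, record_id: str) -> list[DecisionRecord]:
        """Walk parent links back to the originating instruction."""
        by_id = {r.record_id: r for r in self._records}
        path, current = [], by_id.get(record_id)
        while current is not None:
            path.append(current)
            current = by_id.get(current.parent_id) if current.parent_id else None
        return list(reversed(path))

    def export(self) -> str:
        """Serialise the full log for auditors or regulators."""
        return json.dumps([asdict(r) for r in self._records], indent=2)


if __name__ == "__main__":
    chain = AccountabilityChain()
    root = chain.append(DecisionRecord(
        "planner-agent", "plan_payment", "user asked to settle invoice 42", None))
    leaf = chain.append(DecisionRecord(
        "payments-agent", "transfer_funds", "executing step 2 of the plan", root))
    for step in chain.trace(leaf):  # prints planner-agent, then payments-agent
        print(step.agent_id, "->", step.action)
```

In practice such a log would need tamper-evident storage and standardised schemas before it could carry legal weight, but even this simple parent-link structure shows how responsibility for a given action could be reconstructed across agent handoffs after the fact.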