Insights | ape factory

Agentic AI Governance: How to Avoid a Digital Riot

Written by Team | Mar 17, 2026 10:05:47 AM

“Captain’s log, stardate 2026.2. Earth’s citizens have moved beyond GPT-as-a-sidekick to ecosystems of autonomous agents. But will they master this technology without causing a digital riot?”

Sticking with our Star Trek theme, Captain Kirk and the crew are still monitoring our technical development. After all, hope remains that we’ll become warp capable once we’ve finished messing about with large language models (LLMs) and get back to serious science.

However, as a society, we’re currently busying ourselves with Agentic AI, leveraging the capabilities of LLMs to execute complex, multi-step tasks autonomously. These systems take on tasks in a human-like way, with different agents handling specific aspects of a task and interacting with data and other platforms through available APIs.
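That orchestration pattern can be sketched in a few lines. Everything below is an illustrative invention, not a real framework: an orchestrator routes each step of a plan to a specialised agent, and each agent wraps one external API (here, toy stand-ins for a CRM and an invoicing service).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    tool: Callable[[str], str]  # the external API this agent may call

    def run(self, step: str) -> str:
        # A real agent would use an LLM to decide how to call its tool;
        # here the tool is invoked directly to keep the sketch short.
        return self.tool(step)

class Orchestrator:
    """Routes each step of a plan to the agent responsible for it."""

    def __init__(self, agents: dict[str, Agent]):
        self.agents = agents

    def execute(self, plan: list[tuple[str, str]]) -> list[str]:
        # plan: (agent_name, step) pairs, e.g. produced by a planner LLM
        return [self.agents[name].run(step) for name, step in plan]

# Toy tools standing in for real APIs (a CRM lookup, an invoicing service)
crm = Agent("crm", lambda s: f"CRM looked up: {s}")
billing = Agent("billing", lambda s: f"Invoice created for: {s}")

orchestrator = Orchestrator({"crm": crm, "billing": billing})
results = orchestrator.execute([("crm", "Acme Corp"), ("billing", "Acme Corp")])
print(results)
```

The point of the pattern is the separation: the planner decides *what* to do, while each agent is limited to *one* capability, which is also what makes governance (covered next) tractable.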

Next challenge: Governance

While the jury is still out on how much money can be saved by integrating AI into our business processes, naturally, no one wants to miss the boat. That means it’s full steam ahead in developing agentic systems, possibly with some employees using them to develop their own task-related tools.

But this causes a whole heap of headaches. Those who have introduced platforms like Monday into their organizations will know what’s coming. Without appropriate training and corporate guidelines on how it will be used, you end up with as many versions and formats of tables and business processes as there are employees, 95% of which will be sitting dormant two months from now.

As with any tool deployment, a conductor is required to ensure everyone is playing the same piece of music.

As highlighted in an article from SAP [1], enterprises will need governance frameworks if they are to reap the benefits of agentic AI. These should cover aspects such as:

  • Lifecycle management
  • Observability and auditability
  • Policy enforcement
  • Human-agent collaboration models
  • Performance monitoring

This will ensure that the tools developed are sufficiently tested, retired when no longer useful, follow business rules and autonomy boundaries, and perform their tasks accurately. It mustn’t be forgotten that we’re giving Agentic AI access to our most sensitive data to act on our behalf in the critical path of business interactions.
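Two of the listed aspects, policy enforcement and auditability, can be illustrated together: every action an agent takes passes through a gate that enforces an autonomy boundary and records an audit entry either way. This is a minimal sketch under assumed names (`governed_call`, `AUDIT_LOG`, a flat spending limit), not a real governance product.

```python
import datetime

# Append-only audit trail: one record per attempted action, allowed or not.
AUDIT_LOG: list[dict] = []

class PolicyViolation(Exception):
    """Raised when an agent tries to act outside its autonomy boundary."""

def governed_call(agent: str, action: str, amount: float, limit: float = 1000.0) -> str:
    """Allow the action only within the agent's limit; log the attempt either way."""
    allowed = amount <= limit
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "amount": amount,
        "allowed": allowed,
    })
    if not allowed:
        # Out-of-bounds actions escalate to a human rather than executing.
        raise PolicyViolation(f"{agent}: {action} ({amount}) exceeds limit {limit}")
    return f"{action} executed"

print(governed_call("billing-agent", "refund", 250.0))
try:
    governed_call("billing-agent", "refund", 5000.0)
except PolicyViolation as e:
    print("escalated:", e)
```

Because denied attempts are logged too, the audit trail can answer the consumer’s “why was I refused?” question raised below, rather than only recording what the agent did.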

At stake is not only financial impact but also the risk of irreparable brand damage if things go wrong. If AI has responsibility for approving loans or recommending health therapies, how do consumers get answers about why they were or weren’t approved, as they would in a human interaction?

Does AI actually make business sense?

Some will point to Klarna’s AI-driven financial success as a clear sign that AI is changing how businesses work. They claim that, thanks to their OpenAI-powered customer service chatbot [2], they’ve not recruited anyone other than engineers since September 2023 [3]. But few are seeing comparable AI-driven financial benefit.

In a 2025 research paper [4], MIT Sloan professor Kate Kellogg followed the deployment of AI agents to detect adverse events in cancer patients. Of all the challenges in implementing this system, it wasn’t the fine-tuning of models or prompt engineering that caused the major issues—it was aspects such as stakeholder alignment and data engineering.

The format of available data remains a key issue. Without clearly defined structures, teams first need to reformat key data sources so that they can be used at all. Then there are the typical issues that any change management effort demands: aligning stakeholders, driving the project forward, and clarifying governance concerns. Getting everyone on board and ensuring that existing rules are followed applies universally, whether AI is involved or not. Lastly comes workflow integration, where humans and machines must collaborate. This typically determines whether staff will actually use the tool or whether it simply fades into obscurity.

Kellogg notes that: “Just because an agentic AI model reclaims 20% of someone’s time, that doesn’t mean it’s a 20% labor-cost saving.”

Other studies have also highlighted this dilemma. Cui et al. reported a 26% increase in completed tasks in a study of software developers across a range of companies [5]. However, the benefits came disproportionately to less experienced developers, suggesting the floor was raised rather than the ceiling. Furthermore, coding is not the only task software developers face, so reducing headcount by a quarter is not on the cards.
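A back-of-envelope calculation makes both findings concrete. The 40% coding share below is an illustrative assumption, not a figure from the study; the 26% is the reported increase in completed tasks.

```python
# Assumed, for illustration: coding is 40% of a developer's working time.
coding_share = 0.40
# Reported: 26% more coding tasks completed in the same time [5].
task_speedup = 0.26

# 26% more tasks per unit time means each task takes 1/1.26 of the time,
# i.e. roughly a 21% time saving on coding work alone.
coding_time_saved = 1 - 1 / (1 + task_speedup)

# Spread over the whole job, that shrinks to well under 10%.
overall_saving = coding_share * coding_time_saved
print(f"Overall time saved: {overall_saving:.1%}")  # roughly 8%
```

Which is exactly Kellogg’s point: reclaimed time on one task type does not translate one-for-one into labor-cost savings, let alone headcount.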

The next spanner in the works: Deglobalization

Although many would like to ignore world politics, it is increasingly becoming a concern that needs to be addressed. Everyone is taking a risk by relying solely on US SaaS providers, regardless of whether their data is hosted in their region of operation or not [6]. And regions such as Europe are setting regulatory barriers differently from others.

Sovereign cloud is only the first issue on the table, possibly prompting businesses to seek local partners with local data centers that operate under local regulations.

On top of that come the LLMs themselves, upon which agentic systems are built. It is conceivable that restrictions also apply here, such as a requirement to use models trained in accordance with European rules. Suddenly, your agentic AI project developed in the US cannot be rolled out globally to all of your teams.

What we’ll also see is a move away from the current approach—one-off projects where an LLM is stitched together with data and APIs—towards full integration into day-to-day business practices, much like any IT tool or platform.

Summary

While engineering teams prefer to keep out of politics, seeing technology as a leveller that can benefit humanity equally, reality could look starkly different. Firstly, we may need the hardware that agentic AI runs on, and the people who operate it, to be local, rather than relying on the big three US providers.

Next up, the models we’ve quickly become accustomed to may no longer be available, with regional differences pushing us towards LLMs that conform with local regulations.

Finally, as businesses take agentic AI seriously and understand its pros and cons, they will force it to conform to the standard governance and business practices expected of other software systems. If they don’t, we’re going to end up with a digital riot on our hands.

-----

[1] https://news.sap.com/2026/01/ai-in-2026-five-defining-themes/
[2] https://fortune.com/2026/02/17/klarnas-ceo-dario-amodei-ai-white-collar-workforce-shrink-2030/
[3] https://www.reuters.com/technology/artificial-intelligence/swedens-klarna-says-ai-chatbots-help-shrink-headcount-2024-08-27
[4] https://mitsloan.mit.edu/shared/ods/documents?PublicationDocumentID=10789
[5] https://pubsonline.informs.org/doi/10.1287/mnsc.2025.00535
[6] https://www.apefactory.com/en/insights/geopatriation-cloud-sovereignty