
Introducing Orgplexity: The Organisational Discipline for the Agentic Era

There’s a question that nobody in the AI space is asking properly, and it is the one that matters most. It is not "which AI tools should we adopt?" or "how do we build an AI strategy?" or even "how do we govern AI responsibly?" Those are all legitimate questions, and there are good frameworks addressing each of them. The question I am interested in is different: as AI agents embed deeper into how we operate, how does the organisation itself need to change?

Not the technology stack. Not the use cases. The organisation. The reporting lines, the accountability structures, the management practices, the way we think about who, or what, does work around here. That’s organisational complexity. And it is the question that led me to develop Orgplexity: a maturity model for agentic adoption that moves organisations through five defined stages of organisational complexity.

Why This, and Why Now

I have spent years working at the intersection of organisational psychology, technology adoption, and business operations, first as a HubSpot partner helping businesses structure how they work, and more recently through AI governance and our ISO 42001 certification. That journey has given me a front-row seat to how organisations actually adopt AI. Not in theory, but in practice. What I see most often is not a lack of technology. It is the opposite. Businesses have access to more AI tools than they know what to do with, but no structured approach to adoption. The technology is available. A structured way to adopt it is not.

The reason is straightforward. Every major AI adoption framework on the market focuses on the same axis: technology readiness. What tools to deploy, how to build capability, how to scale from pilot to production. Some of the better ones address governance and ethics. A few touch on workforce implications. But virtually none of them address the organisational question: how does the structure, management, and coordination of your business need to evolve as agents become embedded participants in how work gets done?

That gap is about to become urgent. In 2025, most enterprises were prototyping. Through 2026, production-scale agentic AI deployment is accelerating rapidly. McKinsey is integrating AI agents as virtual employees. Anthropic has articulated a vision of agents with corporate identities, complete with email addresses, access credentials, and defined roles. Salesforce, Microsoft, and others are building agent platforms designed not just to automate tasks but to participate in business processes alongside humans.

This is no longer speculative. Agents are moving from tools to teammates. And the challenge that creates is not technological. It is organisational.

The Missing Discipline

In 2019, I wrote something that seemed idealistic at the time: "The antidote to complexity isn't simplicity, it's organisation." It was rooted in the work of Stafford Beer, the father of management cybernetics, who argued that every organisation is fundamentally a system for managing complexity. The more complexity your environment generates, the more organisational capacity you need to process it. Not by reducing the complexity, but by building structures sophisticated enough to handle it.

That principle has become unexpectedly urgent. The introduction of AI agents into business operations represents a step-change in organisational complexity. We are not just adding new tools. We are adding new kinds of participants: entities that can initiate, decide, escalate, and in some configurations, manage other entities. That is not a technology challenge you solve with an implementation roadmap. It is an organisational challenge that requires a new way of thinking about how businesses are structured.

Orgplexity is my attempt to provide that. It is the practice of structuring organisational complexity as AI embeds into the business. It is rooted in cybernetics. Not the science fiction version, but the original discipline that Beer developed: the science of effective organisation. And it is built around a maturity model that maps how organisations progress from basic AI tool usage through to fully integrated human-agentic systems.

I am introducing it publicly for the first time here because the timing has shifted from interesting to necessary. The organisations I work with are hitting these challenges now, not in some theoretical future.

The Orgplexity Maturity Model

The model has five levels. Each one describes a fundamentally different relationship between the organisation and its AI agents. Not in terms of technology capability, but in terms of organisational structure, feedback loops, governance requirements, and what appears on the org chart.

The unit of analysis throughout is the feedback loop. That comes directly from cybernetics. Beer understood that organisations do not function through hierarchy alone. They function through feedback. Information flows up, decisions flow down, and the quality of those loops determines whether the organisation can navigate complexity or gets overwhelmed by it. When you introduce AI agents, you are introducing new nodes in those feedback loops. The question is how those nodes integrate with the existing human ones.

Level 1: Agent as Tool

This is where most organisations sit today. AI does tasks. A team member uses ChatGPT to draft an email, a chatbot handles basic customer enquiries, an agent summarises meeting notes. The productivity gains are real, but nothing changes organisationally. Same people, same reporting lines, same decision-making structures.

The feedback loop at this level is simple and single-direction: a human initiates a task, the agent executes it, the human evaluates the output. The agent does not appear on any org chart because it is not an organisational entity. It is software, like a spreadsheet or a search engine. You do not restructure your business around Excel, and at this level, you do not restructure it around AI either.

Governance at Level 1 is basic but important: usage policies, acceptable use guidelines, data handling protocols. It is about establishing the ground rules for responsible tool use before anything more complex follows.

The key question for Level 1 is: "Are we using AI tools effectively?" It is a productivity question, not an organisational one. And for many businesses, this is the right place to be for now. The mistake is assuming it is the only level that exists.

Level 2: Agents in Workflow

At Level 2, AI moves from individual tool use to systematic deployment across business processes. Multiple agents are now embedded at defined points in operations: marketing automation, customer service routing, data processing, lead qualification. The organisation begins to change because processes are being redesigned to incorporate AI, and the people who previously did that work are shifting from doing it to overseeing the agent that does it.

The feedback loops multiply. Instead of one person using one agent, you now have multiple human-agent loops operating in parallel across different functions. Each loop is still discrete, with a human managing a specific agent for a specific process, but the combined effect starts to create operational complexity that did not exist before. Which agent handles which use case? What happens when two agents' outputs conflict? Who monitors performance across all of them?

This is where systematic governance becomes essential. Each agent needs defined use cases, risk assessment, escalation protocols, and monitoring. It is no longer enough to have a general usage policy. You need per-agent oversight that accounts for the specific risks and requirements of each deployment.
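The model prescribes no particular tooling, but as a concrete illustration, per-agent oversight at Level 2 could be captured in something as simple as a governance register. Everything below (the record fields, the agent names, the gap check) is a hypothetical sketch, not part of the framework itself:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    """One entry in a hypothetical per-agent governance register."""
    name: str
    use_case: str             # the defined, approved use case
    risk_level: str           # e.g. "low", "medium", "high"
    accountable_owner: str    # the human accountable for this deployment
    escalation_protocol: str  # where the agent routes decisions it cannot make
    monitored: bool = False   # is ongoing performance monitoring in place?

def oversight_gaps(register):
    """Flag agents missing the monitoring that Level 2 requires."""
    return [a.name for a in register if not a.monitored]

register = [
    AgentRecord("lead-qualifier", "score inbound leads", "low",
                "ops-team", "route to sales manager", monitored=True),
    AgentRecord("ticket-router", "route support tickets", "medium",
                "cs-lead", "route to duty manager"),  # monitoring not yet live
]
print(oversight_gaps(register))  # ['ticket-router']
```

The useful property is that governance becomes queryable: you can ask, at any moment, which deployments lack monitoring, rather than relying on a general usage policy.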

Agents at Level 2 still do not appear on the org chart. They show up in process maps and workflow documentation, but not in reporting structures. They are embedded in how work flows, not in how the organisation is structured.

The key question for Level 2 is: "Are our agents governed and performing across the business?" It is an operational question. And it is where many organisations will find themselves through 2026 and into 2027.

Level 3: Agents as Colleagues

Level 3 is the boundary. It is the single most important transition in the entire model, and it is the one that no existing AI adoption framework adequately addresses.

At Level 3, agents appear on the org chart for the first time. Not metaphorically. Literally. An agent has a defined role alongside human team members. Someone manages it. It has accountability, or at least, someone is accountable for its decisions and outputs. It is not a tool embedded in a process. It is a participant in a team.

The feedback loops shift fundamentally. For the first time, agents initiate rather than just respond. They flag issues, escalate proactively, make recommendations without being asked. The loop becomes genuinely bidirectional: humans direct agents, and agents inform humans. This is a qualitative change, not just a quantitative one. It means the organisation has to think about agents differently. Not as software that executes instructions, but as entities that participate in the flow of information and decision-making.

This creates management questions that most organisations have never had to consider. How do you onboard an agent? Not technically, because that is the easy part, but organisationally. Who is accountable when an agent makes a bad decision? What do you disclose to customers, partners, or stakeholders about which interactions involve agents? How do you manage a team where some members are human and some are not?

These are fundamentally organisational questions. They cannot be answered by a technology implementation plan. They require new thinking about management, accountability, disclosure ethics, and team design.

The key question for Level 3 is: "How do we manage hybrid human-agent teams?" And the honest answer, for most organisations, is that nobody has fully figured this out yet. That is exactly why this model exists: to give organisations a way to think about these challenges before they are forced to improvise through them.

Level 4: Agentic Hierarchies

At Level 4, agents begin managing other agents. An orchestrator agent delegates tasks to specialist agents, evaluates their outputs, and decides what to escalate to humans. The organisation now contains structures that are entirely agentic: agent-to-agent feedback loops that operate with human oversight at the system level rather than the individual interaction level.

This is where governance becomes genuinely complex. When Agent A instructs Agent B, and Agent B produces an outcome that causes a problem, where does accountability sit? The human who designed the system? The team that deployed the orchestrator? The governance framework that approved the agent-to-agent delegation? These are not hypothetical questions. They are the practical challenges that emerge when you allow agents to manage agents.

The human role at Level 4 shifts from managing agents to governing the architecture within which agents operate. You are no longer overseeing individual interactions. You are designing and monitoring the system itself. That requires a different kind of organisational capability, one closer to systems engineering than traditional management.

The organisation at this level is processing more complexity than any purely human structure could handle. That is the point. But it means the stakes of getting the governance wrong are proportionally higher, because the speed and scale of agent-to-agent operations can compound errors faster than human oversight can catch them.

The key question for Level 4 is: "How do we govern systems of agents, not just individual agents?" It is a system-level question, and it requires system-level thinking.

Level 5: Cybernetic Integration

Level 5 is where the model meets its cybernetic foundations most directly. At this level, AI participates in the steering function of the organisation. The system does not just process operational complexity. It navigates strategic complexity. The distinction between human and agent contributions to organisational direction becomes, in practical terms, organisationally irrelevant. The system steers.

This is Beer's vision fully realised in a modern context. Beer argued that every viable organisation needs a function that asks two questions: "What are we?" and "What should we become?" At Level 5, AI is part of how the organisation answers those questions. Not by replacing human judgment, but by processing environmental complexity at a speed and scale that enables better collective steering.

The traditional org chart is replaced at this level by something closer to a cybernetic map: a representation of feedback loops, variety management, and information flows rather than reporting lines and job titles. Governance is not something applied to the organisation from the outside. It is embedded in how the organisation operates. The system's architecture is its governance.

This might sound abstract, but the direction is already visible. Organisations that are deploying AI for strategic scenario modelling, real-time market sensing, and autonomous portfolio management are moving toward this territory. The question is not whether Level 5 will happen. It is whether organisations will get there through deliberate structural evolution or through ad hoc improvisation that leaves critical governance gaps.

The key question for Level 5 is: "Are we a coherent system that can navigate complexity?" Not "do we use AI well?", because that is a Level 1 question. The real question is whether your organisation has evolved to the point where human and agentic capabilities form an integrated system that is greater than either alone.

The Throughline

If you step back and look at the progression, there is a single thread running through all five levels: as agents move from tools to colleagues to systems, the organisational challenge shifts from managing technology to designing organisations.

At Level 1, AI is a capability question. By Level 3, it is a management question. By Level 5, it is an identity question. What kind of organisation are we, and how do we steer ourselves?

The boundary between Level 2 and Level 3 is the most consequential. That is where organisations move from "agents that coexist with the business" to "agents that participate in the business." Every level after that is a deepening of what it means for agents to participate: first as team members, then as managers, then as part of the steering function itself.

Most organisations will spend the next two to three years navigating Levels 1 through 3. Some will progress faster. A few are already at Level 4. Almost none have reached Level 5 in any meaningful sense. But the trajectory is clear, and the organisations that think about this progression deliberately, rather than discovering each level's challenges reactively, will navigate it far more effectively.

Why Cybernetics, Not Technology

I have deliberately grounded Orgplexity in cybernetics rather than technology strategy.

Most AI frameworks start from the technology and ask: how do we implement this effectively? That is a valid question, but it leads to a technology-centric view of what is fundamentally an organisational transformation. You end up with implementation roadmaps that tell you everything about the AI and nothing about the organisation that has to absorb it.

Cybernetics starts from the other direction. It asks: what does this organisation need to be in order to navigate the complexity it faces? The technology is part of the answer, but it is not the starting point. The starting point is the organisation's capacity to process variety, which is Beer's term for the complexity of its environment, and the structures it needs to maintain coherence while doing so.

That reframing matters because it changes what you optimise for. A technology-first approach optimises for agent capability. A cybernetic approach optimises for organisational viability: the capacity to survive and adapt as your environment changes. Those are different objectives, and they lead to different decisions about how you structure your business as AI embeds into it.

Stafford Beer developed these ideas in the 1960s and 1970s, long before AI agents were conceivable. But the principles he articulated (that organisation is the tool for managing complexity, that viable systems require specific feedback structures, that the steering function must match the complexity of what it steers) are more relevant now than they have ever been. The introduction of AI agents into business operations is exactly the kind of step-change in environmental complexity that Beer's work was designed to address.

What Comes Next

Orgplexity is a new concept, and this is the first time I have written about it publicly. The maturity model, the cybernetic foundations, the organisational framing: all of it is being introduced here.

I do not claim to have all the answers. Nobody does. We are in frontier territory where organisations are encountering challenges that have no established playbook. What I do believe is that the organisational lens is the missing piece in how businesses think about AI adoption, and that cybernetics provides the intellectual foundation for addressing it properly.

Over the coming months, I will be developing this further: deeper dives into each maturity level, practical guidance for organisations navigating specific transitions, and the governance frameworks that enable safe progression through the model. If you work in this space, whether you are leading AI transformation internally or advising organisations through it, I would welcome the conversation.

The antidote to complexity is not simplicity. It is organisation. That has always been true. It has just never mattered this much.


Martin Shervington is the founder of Plus Your Business, a HubSpot partner agency and the first ISO 42001 certified HubSpot Partner. His work sits at the intersection of organisational psychology, AI governance, and business operations. Orgplexity is his framework for addressing the organisational challenges of the agentic era.