Introduction: The Sailor and the Wind
A sailor is always in dialogue — with the wind, the water, and the vessel.
Unlike driving on a fixed road, sailing is a dynamic exchange with forces that can’t be controlled — only sensed, interpreted, and responded to.
The wind is invisible—yet its presence is made visible through its effect on the environment. The surface of the water ripples, darkens, or shimmers in patterns that reveal approaching gusts or calming lulls. The experienced sailor learns to read the water—watching how the wind moves across it, predicting how it will arrive at their sails.
When the wind shifts, the sailor doesn't panic. They adjust the sails, trim the sheets, or change course slightly. These are small acts of real-time steering—a continuous loop of sensing, deciding, acting, and sensing again.
This, at its core, is what all great steering requires—in business, in leadership, in the age of intelligent machines.
Cybernetics: The Hidden DNA of Modern AI
The word cybernetics comes from the Greek kybernētēs, meaning "steersman"—the one who holds the rudder. Coined in the 1940s by Norbert Wiener, cybernetics emerged as a science of control and communication in systems, whether mechanical, biological, or social. Its central insight was deceptively simple: systems can self-regulate by responding to feedback.
This made it a science of steering—not just of objects, but of processes, behaviours, and organisations.
Consider the thermostat. A thermostat doesn't "know" anything in the human sense, but it forms a closed-loop system. It compares the current temperature (input) to a set goal (target temperature). If the room is too cold, it signals the heater to turn on. If too warm, it turns the heater off. This continuous loop—sense, compare, act, adjust—is the core of feedback-based control.
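The sense-compare-act-adjust loop is simple enough to sketch in a few lines of code. This is a toy illustration, not a real controller: the room model, the hysteresis band, and the heating rates are all invented for the example.

```python
# Minimal thermostat-style closed loop: sense, compare, act, adjust.

def thermostat_step(current_temp: float, target_temp: float,
                    heater_on: bool, hysteresis: float = 0.5) -> bool:
    """Return the heater's next state given the sensed temperature."""
    if current_temp < target_temp - hysteresis:
        return True          # too cold: turn the heater on
    if current_temp > target_temp + hysteresis:
        return False         # too warm: turn it off
    return heater_on         # within the band: keep doing what we're doing

def simulate(hours: int = 8, target: float = 21.0) -> list[float]:
    """Run the loop over a toy room that warms 0.8° per hour when heated."""
    temp, heater, history = 17.0, False, []
    for _ in range(hours):
        heater = thermostat_step(temp, target, heater)   # sense, compare, decide
        temp += 0.8 if heater else -0.4                  # act: room warms or cools
        history.append(round(temp, 1))                   # then the loop senses again
    return history
```

The point is not the code but the shape: no knowledge, no model of "warmth", just a goal, a signal, and a rule for closing the gap between them.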
Now scale this up. A human body regulating blood pressure. A flock of birds responding to each other's flight direction. A supply chain adjusting inventory based on demand. A leadership team navigating market shifts. In each case, a cybernetic loop is at work: signals are sensed, compared to expectations or goals, and used to guide the next action.
What made early cyberneticians like Wiener, Ashby, and McCulloch so influential was their recognition that these patterns repeated across wildly different domains. The system didn't need a brain to be intelligent—it needed feedback and the capacity to respond.
When we talk about AI today, we're seeing the evolved manifestation of these early cybernetic principles. What was once a thermostat has become GPT-4. What was once a simple feedback loop has become a complex neural network. But beneath the sophistication, the fundamental principles of steering remain.
From Simple Feedback to Intelligent Systems
This shift—from managing outputs to managing systems—is essential to understanding modern steering. A thermostat responds to heat. A well-run company responds to market shifts. A co-pilot responds to prompts. In each case, the system's performance depends on how well it senses its environment, how clearly it defines its goal, and how effectively it adjusts its behaviour.
But unlike thermostats, human systems introduce a new layer of complexity: interpretation. What counts as a "goal"? What is the "right" feedback to follow? Who decides what a deviation means? This is where cognitive tools—like the concepts in this series—become necessary. The human steerer must interpret, not just react. Steering isn't just about correction—it's about construction. We don't just hold the rudder. We shape the map.
As Stafford Beer, who applied cybernetics to management and organisational design, put it:
"The purpose of a system is what it does. There is, after all, no point in claiming that the purpose of a system is to do what it constantly fails to do." — Stafford Beer, Brain of the Firm (1972)
Beer's famous axiom reminds us: intent does not equal outcome. If your organisation claims to be customer-centric but rewards internal politicking and short-term wins, the true purpose of the system is not what you say—it's what you reward.
In AI-enhanced systems, this principle becomes even more critical. The systems you're building don't just execute stated intentions—they learn from implicit patterns. They don't just do what you say—they do what you reinforce.
The Five Nodes: A Cybernetic Framework for the AI Age
In our first exploration of the steering model, we introduced the five interconnected nodes that form a complete framework for effective human-AI collaboration. Now, let's explore these nodes through the lens of cybernetic principles:
1. The Vehicle
This is the system you are steering—the organisation, team, or process under your influence. It contains the structures, roles, tools, and flows that determine how action is taken.
Just as a sailboat has a hull, rudder, sails, and crew, a business has technology stacks, team structures, workflows, and channels of operation. To steer well, you must understand your vehicle's design and capabilities.
In cybernetic terms, the vehicle represents what Beer called the "viable system"—the entity that must maintain integrity while adapting to changing conditions. With AI integration, this vehicle now includes both human and machine components, creating a hybrid system with new properties and potentials.
2. The Environment
Every vehicle exists within an environment that it must respond to—conditions outside its control but not outside its awareness. Markets shift, weather patterns change, competitors emerge, customer expectations evolve.
A steerer must sense the environment, interpret its signals, and update their orientation accordingly. In sailing, the wind is not your enemy—it is your information source.
Cybernetician Ross Ashby formulated this as his Law of Requisite Variety, often summarised as "only variety can absorb variety." In other words, your system must be as complex as the environment it navigates. AI dramatically expands this capacity, allowing organisations to process more environmental signals than ever before—but only if those signals are properly integrated into decision processes.
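Requisite variety can be made concrete with a toy regulator: it keeps the outcome stable only when it holds at least one distinct counter-move per disturbance. The disturbance and response names below are invented for illustration.

```python
# Toy illustration of Ashby's Law of Requisite Variety: a regulator can
# hold the outcome steady only if its repertoire of responses matches
# the repertoire of disturbances the environment can throw at it.

DISTURBANCES = ["demand_spike", "supply_delay", "price_drop"]

def regulate(disturbance: str, playbook: dict[str, str]) -> str:
    """Return 'stable' if the playbook has a matching counter-move."""
    return "stable" if disturbance in playbook else "unstable"

rich_playbook = {                 # variety matches the environment
    "demand_spike": "scale_up",
    "supply_delay": "reroute",
    "price_drop": "reposition",
}
poor_playbook = {"demand_spike": "scale_up"}   # insufficient variety

outcomes_rich = [regulate(d, rich_playbook) for d in DISTURBANCES]
outcomes_poor = [regulate(d, poor_playbook) for d in DISTURBANCES]
```

The under-equipped playbook stays stable only against the one disturbance it anticipated; everything else passes through unregulated. That is the gap AI can help close, by widening the playbook.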
3. The Goal
Steering only makes sense in relation to a goal. Whether explicit or implicit, every system is being guided toward a desired outcome. That goal may be fixed (like a destination) or dynamic (like optimising flow).
Clarity of goal is essential—but so is the ability to renegotiate it as conditions evolve. One of the most dangerous errors in steering is overcommitting to a goal that no longer fits the environment or vehicle capabilities.
In cybernetic systems, goals create what engineers call a "reference signal"—the standard against which current conditions are compared. With AI systems, goal-setting becomes more nuanced. The explicit goals you state may be overshadowed by the implicit goals embedded in how you train and reinforce the system.
4. The Feedback
In effective steering, feedback is everything. It is the signal that tells you whether your current action is leading you closer to or further from your goal. And crucially, not all feedback is equal. Some signals are noisy, delayed, or distorted. Others are subtle but accurate.
High-performance systems cultivate feedback literacy—the ability to distinguish signal from noise, build sensing mechanisms into every layer, and respond quickly without overreacting.
Cybernetics revolutionised our understanding of feedback, moving beyond simple error correction to recognise multiple layers of adaptation. First-order feedback adjusts actions. Second-order feedback adjusts goals. Third-order feedback adjusts the system itself. In AI-enhanced organisations, all three levels become critical to effective steering.
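The three orders can be sketched as three distinct adjustment points in one system. This is a schematic with invented numbers: the goal, output, and gain values stand in for whatever a real organisation measures.

```python
# Sketch of three feedback orders. First order tweaks the action;
# second order revises the goal; third order changes how the system
# itself responds (here, its correction rule).

class Steerable:
    def __init__(self) -> None:
        self.goal = 100.0        # the reference signal
        self.output = 80.0       # current state of the system
        self.gain = 0.5          # part of "the system itself"

    def first_order(self) -> None:
        """Adjust the action: close part of the gap to the goal."""
        self.output += self.gain * (self.goal - self.output)

    def second_order(self, observed_ceiling: float) -> None:
        """Adjust the goal when the environment shows it is unreachable."""
        self.goal = min(self.goal, observed_ceiling)

    def third_order(self, new_gain: float) -> None:
        """Adjust the system: change how aggressively it corrects."""
        self.gain = new_gain

s = Steerable()
s.first_order()                        # output moves from 80 toward 100
s.second_order(observed_ceiling=95.0)  # goal renegotiated downward to 95
s.third_order(new_gain=0.8)            # the correction rule itself changes
s.first_order()                        # now closes 80% of the gap to 95
```

Most organisations run the first loop constantly, the second rarely, and the third almost never; steering well means knowing when each is due.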
5. The Steerer
The final node—and the one that links back into the vehicle—is the human being who is perceiving, deciding, and acting. The steerer brings judgment, attention, habit, bias, and clarity to the system.
Steering is never just technical; it is always also cognitive. Two people at the helm of the same system can get very different results—not because the vehicle changed, but because perception, interpretation, and intention differ.
In second-order cybernetics, this became a crucial insight: the observer is part of the system. The steerer doesn't stand apart from what is being steered—they are entangled with it, shaping and being shaped by the dynamics they participate in.
Perception Determines Response
At the heart of this framework is a single, foundational principle that links cybernetic theory to practical leadership:
Perception determines response.
Every action you take—every adjustment, decision, or strategic move—begins with a way of seeing. If that perception is clear, context-aware, and grounded in meaningful concepts, the likelihood of making an optimal response increases dramatically.
This is why the concepts matter. Each one is a refinement tool—a way to bring more resolution to how a system is understood. The more refined your perception, the more effective your steering.
You don't just need better data. You need better distinction-making—the cognitive ability to tell the difference that makes a difference. That's what these concepts provide.
As anthropologist and cybernetician Gregory Bateson noted, information is fundamentally "a difference that makes a difference." The steerer's job is to recognise which differences matter and respond accordingly.
Creating a Conceptual Toolkit
Just as a skilled sailor learns to read the wind, trim the sails, and adjust course with precision, so too must the modern decision-maker develop fluency with a core set of tools.
These tools are what we refer to as concepts—mental models that offer ways to perceive, frame, or act. Think of them not as static definitions, but as dynamic levers—terms that sharpen attention, shape inquiry, and help surface deeper insights across contexts.
When internalised, these concepts form a shared language across teams, enabling more consistent diagnosis, better alignment, and clearer communication between humans—and increasingly, between humans and intelligent systems.
When mapped into tools like CRMs, these concepts form the structure for:
- Framing co-pilot prompts
- Structuring system feedback
- Organising playbooks and procedures
- Aligning cross-functional dialogue
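A minimal sketch of what that mapping could look like in practice: a shared concept vocabulary carried into the structure of a co-pilot prompt. The field names, the example concepts, and the prompt format are all hypothetical, invented for illustration.

```python
# Hypothetical sketch: a concept toolkit as a data structure that can
# frame co-pilot prompts and organise feedback with a shared vocabulary.

from dataclasses import dataclass

@dataclass
class Concept:
    name: str
    definition: str
    diagnostic_question: str   # what this concept directs attention to

TOOLKIT = [
    Concept("feedback_latency",
            "Delay between an action and the signal it produces.",
            "How long before we know whether this worked?"),
    Concept("requisite_variety",
            "Whether our responses match the range of disturbances.",
            "Which situations do we currently have no move for?"),
]

def frame_prompt(task: str, toolkit: list[Concept]) -> str:
    """Compose a co-pilot prompt that carries the shared vocabulary."""
    lines = [f"Task: {task}", "Apply these lenses:"]
    lines += [f"- {c.name}: {c.diagnostic_question}" for c in toolkit]
    return "\n".join(lines)

prompt = frame_prompt("Review Q3 churn figures", TOOLKIT)
```

The value is not the code but the discipline: the same named lenses appear in prompts, playbooks, and feedback forms, so human and machine attention are directed at the same distinctions.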
This approach transforms cybernetic principles from abstract theory into practical application. The concepts become what Beer called "variety attenuators"—tools that help manage complexity by directing attention to what matters most.
Steering as the Essential Skill
The real advantage in the AI age isn't in matching the machine's speed or scale—it's in learning how to guide it effectively. This means framing direction, interpreting signals, and adjusting with clarity as conditions evolve. It means understanding what matters and how to move toward it, even when the landscape is shifting.
AI can generate, recommend, and automate. But it still looks to us for purpose, priorities, and sense.
Steering is how intent becomes motion. And in the age of intelligent machines, it may be the most essential skill we develop.
To steer well is to develop anticipatory awareness. To see not just where the system is, but where it's tending—and how a small adjustment now may prevent a large correction later.
The most critical takeaway from cybernetics isn't technical—it's philosophical. As Heinz von Foerster put it: "Act always so as to increase the number of choices." Good steering isn't about control in the sense of limitation—it's about control in the sense of capability. It expands what's possible.
The art of steering—rooted in cybernetic science but expressed through human judgment—isn't just about technology. It's about developing the cognitive discipline to navigate complexity with confidence, clarity, and purpose. It's about learning to read the wind.
Want to explore how these principles could be applied in your organisation? Contact me to discuss your specific business context.