GUARD: A Methodology for Building Your Own AI Management System

Most organisations don't need ISO 42001 certification. But most organisations do need a sensible way to manage their AI adoption.

The gap between 'we should probably have some governance around AI' and 'here's our certified management system' is enormous. And for many businesses, the formal certification route is overkill for where they are right now.

But the principles behind ISO 42001? Those are sensible for almost everyone. Knowing what AI tools you're using. Knowing who's responsible for them. Understanding the risks. Having a way to learn when things don't go as expected. That's just good practice.

GUARD is a methodology for building your own AI management system using these principles. Not a compliance framework you implement. Not a certification you pursue. A practical approach you apply to make better decisions about AI adoption.

The Real Goal: Faster, More Confident Adoption

Here's what we've found: organisations with clear AI governance actually adopt faster, not slower.

Without a system, every new AI tool triggers the same uncertain conversations. Should we use this? Who decides? What if something goes wrong? Is someone already using it? These questions slow everything down because there's no framework for answering them.

With a system, you have a methodology. A new tool appears, you apply GUARD, you make a decision, you move on. Low-risk tools get approved quickly. High-risk tools get appropriate scrutiny. Nothing sits in limbo because nobody knows who's supposed to evaluate it.

The goal isn't to protect yourself from AI. It's to engage with AI confidently: use tools every day, as we do; produce higher-quality work than you would alone; and continuously improve how you work with AI systems.

The GUARD Framework

GUARD gives you five lenses for thinking about any AI tool, policy, or process. Whether you're assessing a specific agent, writing an adoption policy, or reviewing an incident, these same five questions apply.

 

[Figure: the GUARD framework]

 

G: Governance

Governance covers the structures and accountabilities around AI in your organisation. Who sets AI policy? Who approves new tools? Who reviews performance? How are decisions documented?

This isn't about creating bureaucracy. It's about clarity. When a new AI capability appears, people should know the process for evaluating it. When something goes wrong, there should be a clear path for reporting and resolving it.

The governance question: What structures do we have for making and reviewing AI decisions?

U: Use Case

Before evaluating any AI tool, you need to understand what problem it solves. What's the intended outcome? What does success look like? Is AI actually the right approach for this?

Use case thinking prevents both over-adoption (using AI because it's novel) and under-adoption (avoiding AI because it's unfamiliar). It keeps the focus on business value.

The use case question: What specific outcome are we trying to achieve, and why is AI the right approach?

A: Agent

Agent refers to the specific AI tool, system, or capability you're working with. What exactly does it do? What data does it access? What actions can it take? What are its limitations?

This might be a HubSpot Breeze agent, Claude connected to your systems, an AI feature in existing software, or a standalone tool. Understanding the agent means knowing its capabilities, boundaries, and how it fits into your workflows.

The agent question: What specifically is this tool, what can it do, and what does it touch in our environment?

R: Risks

Risk assessment considers both what could go wrong and what could go right. What's the downside if the AI makes a mistake? What's the upside if it works well? What's the risk of not adopting it?

Not all AI tools carry the same risk profile. A content suggestion tool has different implications than an agent that automatically contacts customers. The risk level should determine the rigour of your evaluation and the controls you put in place.

The risks question: What could go wrong, what could go right, and what level of scrutiny does this warrant?

D: Deployment

Deployment tracks the status and lifecycle of AI tools in your organisation. Is this under evaluation? Approved and in use? Rejected? Approved but not yet implemented? Being phased out?

Good deployment tracking gives you visibility across your AI footprint. You can see what's in production, what's in the pipeline, and what's been considered and rejected. This becomes your institutional memory for AI decisions.

The deployment question: Where is this tool in its lifecycle, and what's the next step?
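To make the five lenses concrete, here is a hypothetical sketch of a GUARD assessment as a simple record, with the deployment lifecycle states from the D lens as an enum. The field names and example values are illustrative assumptions, not part of GuardHub itself:

```python
from dataclasses import dataclass
from enum import Enum

class DeploymentStatus(Enum):
    # Lifecycle states described under the D lens (illustrative names)
    UNDER_EVALUATION = "under evaluation"
    APPROVED = "approved"
    IN_USE = "in use"
    REJECTED = "rejected"
    PHASING_OUT = "phasing out"

@dataclass
class GuardAssessment:
    governance: str               # G: who decides, who reviews, how it's documented
    use_case: str                 # U: the specific outcome AI should achieve
    agent: str                    # A: what the tool is, what it can do and touch
    risks: str                    # R: downside, upside, and risk of not adopting
    deployment: DeploymentStatus  # D: where the tool is in its lifecycle

# Illustrative example of assessing a single tool
assessment = GuardAssessment(
    governance="Marketing ops approves; quarterly review",
    use_case="Draft first-pass blog outlines",
    agent="HubSpot Breeze content agent",
    risks="Low: outputs reviewed by a human before publishing",
    deployment=DeploymentStatus.UNDER_EVALUATION,
)
print(assessment.deployment.value)  # under evaluation
```

Writing assessments down in one consistent shape like this is what makes the later pieces (inventory, policy, reporting) line up.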

The Learning Loop: Non-Conformance and Continual Improvement

An AI management system isn't something you build once and forget. It's a living system that improves as you use it.

When something doesn't work as expected - an agent produces poor outputs, a process breaks down, someone uses a tool outside its intended scope - that's not a failure of the system. That's the system working. It's how you learn.

Non-conformance reporting captures these moments. What happened? Why did it happen? What are we changing as a result? This feeds back into your GUARD assessments: maybe the risk rating needs adjusting, maybe governance wasn't clear enough, maybe the use case was poorly defined.

Over time, this creates a cycle of continual improvement. You adopt AI tools, you learn from experience, you refine your approach, you adopt more confidently. The system gets smarter as you use it.
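One hedged way to picture that loop in code: a non-conformance record (what happened, why, what changes) feeding back into an assessment's risk rating. Every name here is an illustrative assumption, not a GuardHub API:

```python
from dataclasses import dataclass

@dataclass
class NonConformance:
    what_happened: str     # the observed deviation
    why_it_happened: str   # root cause, as best understood
    what_we_change: str    # the corrective action

@dataclass
class Assessment:
    tool: str
    risk_rating: str       # e.g. "low", "medium", "high"

def apply_learning(assessment, report, new_rating=None):
    """Feed a non-conformance back into the GUARD assessment:
    optionally adjust the risk rating based on what was learned."""
    if new_rating:
        assessment.risk_rating = new_rating
    return assessment

a = Assessment(tool="Email-drafting agent", risk_rating="low")
r = NonConformance(
    what_happened="Agent emailed a customer outside its intended scope",
    why_it_happened="Scope boundary wasn't enforced in configuration",
    what_we_change="Restrict agent to internal drafts; a human sends",
)
apply_learning(a, r, new_rating="medium")
print(a.risk_rating)  # medium
```

The point of the sketch is the direction of flow: the incident record is the input, and a revised assessment is the output.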

GuardHub: The Practical Toolkit

GUARD is the methodology. GuardHub is the practical toolkit that helps you apply it.

The toolkit includes templates and tools that each apply the GUARD methodology to specific needs:

  • AI Tool Inventory: What AI tools are being used across your organisation? This captures the current state, often revealing 'shadow AI' that teams have adopted without central visibility.
  • Internal Adoption Policy Template: What's approved for use? What's the process for requesting new tools? What are the boundaries? These documents turn GUARD principles into organisational policy.
  • Non-Conformance Report Template: When things don't go as expected, capture what happened and what you learned. This closes the loop and drives continual improvement.
  • Customer Agent Testing Protocol: A structured approach to validating AI agent performance, including generation of 200 test questions and systematic evaluation of responses against your knowledge base and brand guidelines.
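The testing protocol in the last bullet could be sketched, very roughly, as a scoring loop over test questions. `ask_agent` and `matches_knowledge_base` below are hypothetical stand-ins for your agent call and your evaluation check, not real functions:

```python
def ask_agent(question):
    # Hypothetical stand-in for calling the agent under test
    return "Our support hours are 9 to 5 GMT."

def matches_knowledge_base(question, answer):
    # Hypothetical check against your knowledge base and brand guidelines
    return "support" in answer.lower()

def run_protocol(questions):
    """Ask each test question and tally passes against the knowledge base."""
    results = [(q, matches_knowledge_base(q, ask_agent(q))) for q in questions]
    passed = sum(1 for _, ok in results if ok)
    return passed, len(results)

# In the real protocol this list would hold the full set of generated
# test questions; one question keeps the sketch readable.
passed, total = run_protocol(["What are your support hours?"])
print(f"{passed}/{total} passed")
```

In practice the evaluation step is the hard part; the loop itself is trivial, which is exactly why a structured protocol around it adds the value.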

Each piece works together because they're all built on the same GUARD methodology. Your inventory informs your assessments. Your assessments inform your policies. Your non-conformance reports refine everything. Over time, you accumulate the artefacts that constitute your AI management system. Not because you're ticking compliance boxes, but because you're making good decisions and documenting them.

Built on ISO 42001 Principles

GUARD is built on the principles of ISO 42001, the international standard for AI management systems. This matters even if you never pursue certification.

ISO 42001 represents the global consensus on what responsible AI management looks like. By aligning with these principles, you're building on established thinking rather than inventing your own approach. If regulations tighten, you're already aligned. If enterprise clients ask about your AI governance, you can point to a structured methodology based on international standards.

But you don't need to read the standard or hire consultants to interpret it. GUARD translates these principles into practical questions and tools you can apply immediately.

Plus Your Business is ISO 42001 certified. We've implemented the full standard ourselves and been independently audited against it. That experience informs how we've designed GUARD to be practical without losing rigour.

Getting Started

You don't need to build a complete AI management system before you start getting value from GUARD. Start with whatever's most pressing:

  • If you don't know what AI tools people are using: Start with an inventory. Just discovering what's in use often surfaces immediate priorities.
  • If you're evaluating a specific tool: Apply the five GUARD questions to structure your assessment. Document your decision and reasoning.
  • If something's gone wrong: Use a non-conformance report to capture what happened and what you're learning. This is valuable even before you have formal governance in place.
  • If you need to create policy: Use GUARD as the structure for your AI adoption policy. It gives you a framework that's comprehensive but not overwhelming.

Each piece you create becomes part of your AI management system. Over time, these accumulate into a coherent approach. Built from your actual decisions and experience, not imposed from a generic template.

Moving Forward with Confidence

AI capabilities are expanding rapidly. New tools appear constantly. The organisations that thrive won't be the ones who avoid AI or the ones who adopt everything uncritically. They'll be the ones with a clear methodology for making good decisions quickly.

GUARD gives you that methodology. GuardHub gives you the practical tools to apply it. Together, they help you build an AI management system that fits your organisation. One that enables faster adoption, clearer accountability, and continuous improvement.

If you'd like to explore how GUARD could work for your organisation, or if you'd like access to the GuardHub toolkit, get in touch. We're always happy to talk through AI adoption challenges, wherever you are in the journey.

Plus Your Business is a HubSpot Elite Partner and ISO 42001 certified consultancy. We help organisations adopt AI confidently through practical governance frameworks built on international standards.