
Strivenn Thinking
Culture-First AI Governance: The Key to Sustainable Trust
By Matt Wilkinson
Key Account Managers have always sat at the nexus of revenue, risk and trust. Today, artificial intelligence magnifies that role. Recommendation engines drive cross-sell offers; large language models draft proposals; predictive analytics guide inventory for strategic customers. When these systems misfire, or violate regulation, the blowback lands squarely on the supplier–client relationship.
That makes compliance a front-line KAM behaviour, not a back-office checklist. A manager who can explain how their company’s AI meets Europe’s stringent rules, or why an AI product selector is ISO 42001 compliant, turns a potential objection into proof of reliability. Conversely, a KAM who shrugs at governance can sink multimillion-euro deals with a single unanswered question about bias or data provenance.
The EU Artificial Intelligence Act is a landmark legal framework passed in 2024 that promises to set the tone for global AI governance. But it’s not alone. Governments from the U.S. to China are introducing new standards that will reshape how businesses design, deploy, and scale AI.
To not only survive but thrive in this climate, organisations must embed governance as a design principle, not a compliance afterthought.
The Global Patchwork of AI Regulation
The EU AI Act: A Model with Teeth
The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence. Its central idea: regulate based on risk. Key categories include:
- Unacceptable Risk: Prohibited outright, covering uses such as social scoring and real-time biometric surveillance in public spaces.
- High Risk: Subject to strict obligations, covering uses in hiring, healthcare, policing, and education.
- Limited Risk: Systems like chatbots must disclose AI interaction.
- Minimal Risk: Most everyday applications face no additional regulation.
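The tiers above can be pictured as a simple triage function. This is a hypothetical sketch, not legal advice: the keyword-to-tier mapping below is an illustrative simplification of the Act's categories, and a real classification requires legal review of the full use-case context.

```python
# Hypothetical sketch: triaging AI use cases into EU AI Act risk tiers.
# The tier assignments are illustrative simplifications, not legal advice.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency duties apply
    MINIMAL = "minimal"            # no additional regulation


# Illustrative keyword map (assumed for this sketch); a real
# assessment weighs context, deployer role, and exemptions.
TIER_BY_USE_CASE = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time public biometric surveillance": RiskTier.UNACCEPTABLE,
    "hiring": RiskTier.HIGH,
    "healthcare": RiskTier.HIGH,
    "policing": RiskTier.HIGH,
    "education": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
}


def triage(use_case: str) -> RiskTier:
    """Default to MINIMAL when a use case matches no higher tier."""
    return TIER_BY_USE_CASE.get(use_case.lower(), RiskTier.MINIMAL)
```

Even a toy mapping like this makes the Act's core logic visible to a commercial team: the question is never "is AI allowed?" but "which tier does this use case fall into, and what obligations follow?"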
The Act also introduces obligations for General-Purpose AI (GPAI) systems like large language models, requiring transparency on training data and safety measures.
Penalties for non-compliance can reach €35 million or 7% of global annual turnover, whichever is higher. The Act entered into force in August 2024, with obligations phasing in through 2027.
Beyond the EU: A Growing Global Movement
- United States: There is not yet a comprehensive federal AI law, though legislative efforts continue. Meanwhile, the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) offers voluntary guidance and is quickly gaining traction, especially in federal procurement.
- China: Enforces sector-specific AI rules, focusing heavily on content moderation, national security, and algorithmic transparency.
- UK: Takes a sector-led approach, empowering existing regulators to shape AI standards in context.
- Canada & Brazil: Drafting their own AI-specific bills, heavily inspired by the EU model.
Implication: Even if you're not in the EU, your AI may be used there, and these frameworks are influencing legislation worldwide.
ISO/IEC 42001: Operationalising AI Governance
While the EU AI Act tells us what to regulate, ISO/IEC 42001 answers the how. Published in 2023, it’s the first global standard to guide organisations in building a comprehensive AI Management System (AIMS).
The framework emphasises a structured but flexible approach rooted in six key principles:
- Transparency: Make AI decisions explainable.
- Accountability: Document who is responsible for what.
- Fairness: Detect and prevent bias.
- Security and Safety: Protect systems and people from harm.
- Data Quality and Privacy: Ensure reliable, lawful data usage.
- Reliability: Guarantee consistent, intended AI performance.
Central to ISO 42001 is the Plan-Do-Check-Act (PDCA) model, a continuous improvement loop that keeps AI governance responsive and dynamic.
- Plan: Set goals and identify risks.
- Do: Implement processes and safeguards.
- Check: Audit outcomes, measure performance.
- Act: Refine based on lessons learned.
This turns governance into a living practice, designed to scale with innovation.
How ISO 42001 and NIST Differ
The NIST AI Risk Management Framework (AI RMF), while not a formal standard like ISO 42001, provides a US-centric roadmap for assessing and mitigating AI risks. Here's how they compare:
| Feature | ISO/IEC 42001 | NIST AI RMF |
|---|---|---|
| Type | International standard (certifiable) | Voluntary framework |
| Audience | Global businesses, certifiers | U.S. government, industry |
| Focus | Building an AI Management System | Managing risks throughout AI lifecycle |
| Core Structure | PDCA cycle + principles-based | Four functions: Map, Measure, Manage, Govern |
| Emphasis | Organisational integration and accountability | Flexibility and context-specific application |
| Use Case | Suitable for certification and audits | Suitable for operational risk awareness |
Bottom Line: ISO 42001 is best for structured implementation and certification; NIST is best for context-sensitive risk exploration. Used together, they complement each other, helping organisations bridge operational rigour with adaptive flexibility.
Why Compliance-Driven AI Is a KAM Superpower
- Contractual Assurance: AI clauses now appear in master service agreements: data lineage, explainability, audit rights. A KAM fluent in EU AI Act tiers or ISO controls accelerates legal review instead of stalling it.
- Reputation Shield: An AI mishap that harms a client’s customers boomerangs on the supplier. Demonstrating certified governance creates a reputational moat.
- License to Innovate: When clients trust the guardrails, they approve pilots faster, turning compliance from brake pedal into accelerator.
The Cross-Functional KAM Playbook
Responsible AI is a team sport. High-value accounts expect orchestrated answers.
| Internal Partner | KAM Question | Governance Touchpoint |
|---|---|---|
| Analytics | “Can we train on client usage data?” | Verify GDPR lawful basis, risk tier. |
| Legal | “Is our recommender High Risk in the EU?” | Schedule conformity assessment, draft CE declaration. |
| Product | “What transparency artefacts can we share?” | Provide model cards, bias test results, decision logs. |
| Customer Success | “How do we monitor live performance?” | Align service-level metrics, drift alerts, incident response. |
Co-Creating Responsible AI With Customers
Strategic accounts often have their own AI programmes. Savvy KAMs turn compliance into collaboration:
- Joint Risk Workshops: Map each use case to EU tiers or NIST functions; align mitigation plans.
- Shared Transparency Portals: Dashboards showing accuracy drift, retrain dates, and audit trails turn statutory disclosures into relationship assets.
- Co-Innovation Sprints: Use ISO 42001’s PDCA loop as a shared methodology, ensuring both sides iterate under the same governance rhythm.
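The drift alerts behind a shared transparency portal can be as simple as comparing live accuracy against an audited baseline. A minimal sketch, assuming a 5-point tolerance and a rolling window of live accuracy scores (both thresholds are illustrative assumptions, not prescribed by any framework):

```python
# Hypothetical drift alert for a shared transparency portal:
# flag when mean live accuracy falls more than a tolerance
# below the accuracy recorded at the last audit.

def accuracy_drift_alert(baseline: float, live_window: list[float],
                         tolerance: float = 0.05) -> bool:
    """Return True when mean live accuracy drops below baseline - tolerance."""
    if not live_window:
        return False  # no live data yet; nothing to flag
    live_mean = sum(live_window) / len(live_window)
    return (baseline - live_mean) > tolerance
```

In practice this check would feed the portal's dashboard and trigger the retrain and incident-response touchpoints agreed with Customer Success, so both supplier and client see the same signal at the same time.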
Upskilling for the AI Era
To deliver, KAMs require three new skill sets, or at least access to them:
- Regulatory Literacy: Timelines and thresholds of the EU AI Act, China’s algorithm rules, U.S. sector directives.
- Standards Fluency: Ability to discuss ISO 42001 clauses or NIST functions with risk and tech leads.
- Facilitation Mastery: Orchestrate cross-functional teams and translate governance into commercial advantage.
Forward-looking organisations already weave AI governance into KAM onboarding and quarterly enablement.
Governance Isn’t a Brake, It’s a Compass
Too often, AI governance is framed as a barrier. But the real risk lies in not governing responsibly. Bias in training data. Black-box models with no audit trail. Systems making life-altering decisions without oversight.
Governance frameworks like ISO 42001 and NIST AI RMF help prevent these outcomes. More importantly, they:
- Build stakeholder trust by making AI systems accountable and explainable.
- Enable faster adoption by clarifying risks and roles.
- Align AI deployment with brand values and legal expectations.
Govern to Scale, Not Just Comply
The future of AI won’t be shaped by who builds the most powerful models; it will be shaped by who builds the most trusted ones.
As the EU AI Act and ISO 42001 make clear, governance is not an afterthought. It is infrastructure. It is culture. It is strategy.
If you're not already embedding these frameworks into your organisation, now is the time to start. The regulatory clock is ticking, and so is the opportunity to lead with integrity.
This article was originally published in the Q3 2025 AKAM Bulletin