The EU AI Act Explained: Why AI Governance Is Becoming a Board-Level Responsibility

Artificial intelligence is moving from experimentation to business-critical use.

Organizations are using AI in customer service, marketing, recruitment, document processing, analytics, software development, fraud detection, compliance, operations and decision support. In many cases, AI is already embedded in the tools companies use every day, from CRM and ERP systems to Microsoft 365, Salesforce, HR platforms and cloud services.

AI creates opportunity. But it also creates risk.

The question for leadership teams is no longer simply:

“Can we use AI?”

The more important question is becoming:

“Can we demonstrate that we use AI responsibly, safely and in control?”

That is the context in which the EU AI Act becomes relevant.

The EU AI Act is the European Union’s legal framework for artificial intelligence. It entered into force on 1 August 2024 and is being applied in phases, with most of the framework becoming applicable from 2 August 2026, and further obligations following later for specific categories of AI systems.

For many organizations, this means that AI governance can no longer remain an informal topic owned by IT, data teams or innovation departments alone. It becomes a matter of risk management, compliance, procurement, information security, operational resilience and executive accountability.

What Is the EU AI Act?

The EU AI Act is a risk-based regulation for artificial intelligence.

Its purpose is to ensure that AI systems used in the European Union are safe, transparent, traceable, non-discriminatory and subject to appropriate human oversight.

The regulation does not treat all AI in the same way. Instead, it classifies AI systems based on risk. The higher the potential risk to people, safety, fundamental rights or society, the stronger the obligations.

In practical terms, the AI Act asks organizations to understand:

  • which AI systems they use
  • what those systems are used for
  • what risks they create
  • who is responsible for them
  • what controls are in place
  • whether users are properly informed
  • whether human oversight is possible
  • whether decisions and outcomes can be explained, logged and reviewed

This makes the AI Act much more than a technology regulation. It is a governance challenge.
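
The points on this list translate naturally into a simple AI register. As a minimal sketch, the Python record below captures each point as a field; the field names, the RiskLevel categories and the example entry are illustrative assumptions, not terminology defined by the Act itself.

```python
# A minimal sketch of an AI register entry. Field names and example values
# are illustrative assumptions, not terms defined by the EU AI Act.
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"    # transparency obligations apply
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str                                          # which AI system we use
    purpose: str                                       # what it is used for
    owner: str                                         # who is responsible for it
    risk_level: RiskLevel                              # what risks it creates
    controls: list[str] = field(default_factory=list)  # what controls are in place
    users_informed: bool = False                       # are users properly informed?
    human_oversight: bool = False                      # is human oversight possible?
    logging_enabled: bool = False                      # can outcomes be logged and reviewed?

example = AISystemRecord(
    name="CV screening assistant",
    purpose="Rank incoming job applications",
    owner="Head of HR",
    risk_level=RiskLevel.HIGH,
    controls=["human review of every rejection", "quarterly bias testing"],
    users_informed=True,
    human_oversight=True,
    logging_enabled=True,
)
```

Even a spreadsheet with these columns is a large step up from having no overview at all.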

The Risk-Based Approach of the EU AI Act

The EU AI Act defines four broad risk categories.

1. Prohibited AI Practices

Some AI practices are considered unacceptable and are banned. These include certain uses of manipulative AI, social scoring and other practices that create unacceptable risks to people’s rights and safety.

The prohibitions and AI literacy obligations have applied since 2 February 2025.

For organizations, this means that AI use should not be left unmanaged. Even before the full framework applies, companies should already know whether any AI-enabled tools or processes create unacceptable risk.

2. High-Risk AI Systems

High-risk AI systems are the most important category for many organizations.

These are AI systems that may have a significant impact on people’s lives, rights, access to services, employment, education, safety or essential processes.

Examples may include AI used for:

  • recruitment or candidate screening
  • employee evaluation or workforce management
  • access to education or training
  • creditworthiness assessments
  • access to essential private or public services
  • certain safety components
  • critical infrastructure
  • law enforcement, migration or justice-related contexts

High-risk AI systems are subject to stricter requirements. These may include risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy, robustness and cybersecurity.

For businesses, this means that AI cannot simply be implemented because it is efficient or commercially attractive. If an AI system affects people or critical decisions, it must be controlled.

3. Limited-Risk AI Systems

Some AI systems are allowed but require transparency.

For example, users may need to be informed when they are interacting with a chatbot. In certain cases, AI-generated or manipulated content may need to be labeled.

This category is highly relevant for customer service, marketing, content generation and digital communication.

The practical question is simple:

Would a customer, employee or citizen reasonably expect to know that AI is being used?

If the answer is yes, transparency obligations may apply.

4. Minimal-Risk AI Systems

Many AI applications will fall into a lower-risk category. Examples may include spam filters, simple recommendation systems or internal productivity tools.

But minimal risk under the AI Act does not mean no governance at all.

Other laws and obligations may still apply, including GDPR, cybersecurity requirements, contractual obligations, sector-specific regulations and internal risk policies.

Why the EU AI Act Matters for Business Leaders

Many organizations underestimate the extent to which AI is already present in their business.

AI may be embedded in:

  • HR systems
  • CRM platforms
  • marketing automation tools
  • customer service chatbots
  • fraud detection systems
  • cloud platforms
  • analytics tools
  • productivity software
  • document management systems
  • procurement platforms
  • cybersecurity tooling
  • ERP systems
  • low-code and automation platforms

This means the AI Act is not only relevant for companies that build AI products.

It is also relevant for organizations that use AI.

The distinction matters because the AI Act creates obligations for different roles, including providers and deployers of AI systems. A company that buys and uses an AI-enabled solution may still have responsibilities, especially when the AI is used in sensitive or high-impact business processes.

For executive teams, the core issue is control.

  • Do we know which AI systems we use?
  • Do we know which business processes are affected?
  • Do we understand the risks?
  • Have we assessed our suppliers?
  • Have we assigned ownership?
  • Can we explain how the system is used?
  • Can we provide evidence?

If the answer to these questions is unclear, the organization has an AI governance gap.
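
Building on the register sketch above, these questions can be turned into a routine check. The rules below are illustrative examples of what a first-pass gap scan might flag; they are not criteria prescribed by the Act.

```python
# An illustrative governance-gap scan over register entries (AISystemRecord
# and RiskLevel as sketched earlier). The rules are examples, not legal tests.
def governance_gaps(records: list[AISystemRecord]) -> dict[str, list[str]]:
    gaps: dict[str, list[str]] = {}
    for r in records:
        issues = []
        if not r.owner:
            issues.append("no accountable owner assigned")
        if not r.controls:
            issues.append("no controls documented")
        if r.risk_level is RiskLevel.HIGH and not r.human_oversight:
            issues.append("high-risk system without human oversight")
        if r.risk_level in (RiskLevel.HIGH, RiskLevel.LIMITED) and not r.users_informed:
            issues.append("users are not informed that AI is used")
        if issues:
            gaps[r.name] = issues
    return gaps
```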

The Link Between the EU AI Act, NIS2 and ISO 27001

The EU AI Act should not be treated as an isolated compliance project.

It connects naturally with other governance and resilience topics, especially NIS2 and ISO 27001.

NIS2 focuses on cybersecurity and digital resilience. It requires organizations in essential and important sectors to manage cyber risks, protect critical services, handle incidents and improve supply chain security.

ISO 27001 provides a management system for information security. It helps organizations structure risk management, controls, ownership, evidence, audits and continuous improvement.

The EU AI Act adds another layer: responsible AI governance.

These topics overlap in several practical areas:

  • risk assessment
  • supplier management
  • access control
  • cybersecurity
  • incident handling
  • logging and monitoring
  • business continuity
  • data governance
  • management accountability
  • documentation and evidence
  • continuous improvement

That is why AI Act readiness should not become a separate paper exercise.

For most organizations, the better approach is to integrate AI governance into the existing management system for risk, security, compliance and transformation.

Why ISO 27001 Is a Strong Foundation for AI Governance

ISO 27001 does not make an organization automatically compliant with the EU AI Act.

However, it provides a very useful foundation.

AI governance requires many of the same management disciplines that ISO 27001 already promotes:

  • asset inventory
  • risk assessment
  • control selection
  • ownership
  • supplier assessment
  • access management
  • incident management
  • evidence collection
  • management review
  • continuous improvement

The same logic can be extended to AI.

Instead of only asking, “Which information assets do we have?”, organizations should also ask:

“Which AI systems do we use, where are they used, and what risks do they create?”

Instead of only managing information security controls, organizations should also manage AI controls.

Instead of only reviewing suppliers from a security perspective, organizations should also review AI-related supplier risks.

Instead of only preparing for cyber incidents, organizations should also prepare for AI-related failures, misuse, bias, incorrect outputs or lack of explainability.

This is where ISO 27001 becomes valuable beyond certification. It creates a practical operating model for control, evidence and accountability.

Practical Steps Toward AI Act Readiness

Organizations do not need to start with a complex legal program. They need to start with visibility and control.

A practical first step is to create an AI inventory.

1. Identify AI Use Cases

Map where AI is currently used in the organization. Include both internally developed AI and AI embedded in third-party tools.

Look at areas such as HR, sales, marketing, customer service, operations, finance, legal, procurement, IT and cybersecurity.

The goal is to create a clear overview of actual AI usage, not just official AI initiatives.
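
If tooling data already exists in procurement or IT asset management, a first inventory can be bootstrapped from an export. The sketch below assumes a CSV with tool_name, department and ai_enabled columns; those column names and the file format are assumptions about your own export, not a standard.

```python
# Sketch: build a first AI inventory from a tooling export. The file and
# column names ("tool_name", "department", "ai_enabled") are assumptions.
import csv
from collections import defaultdict

def load_ai_inventory(path: str = "tooling_export.csv") -> dict[str, list[str]]:
    """Group AI-enabled tools by business area."""
    inventory: dict[str, list[str]] = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # The flag may come from supplier questionnaires or vendor documentation.
            if row.get("ai_enabled", "").strip().lower() == "yes":
                inventory[row["department"]].append(row["tool_name"])
    return dict(inventory)
```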

2. Classify AI Risk

Once AI systems are identified, classify them by risk.

Ask:

  • Is the use case prohibited?
  • Could it be high-risk?
  • Does it affect people’s rights, employment, access to services or critical processes?
  • Is transparency required?
  • Is it only a low-risk productivity tool?

This classification determines the level of governance required.
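
As a sketch, the questions can be ordered into a first-pass triage. The function below is illustrative only: it mirrors the list above, reuses the RiskLevel categories from the register sketch, and is no substitute for a legal classification.

```python
# Illustrative first-pass triage using the RiskLevel enum sketched earlier.
# The order matters: prohibited overrides high-risk, which overrides the rest.
def triage_risk(prohibited: bool,
                affects_rights_or_access: bool,
                interacts_with_people: bool) -> RiskLevel:
    if prohibited:
        return RiskLevel.PROHIBITED
    if affects_rights_or_access:   # employment, credit, essential services, ...
        return RiskLevel.HIGH
    if interacts_with_people:      # chatbots, generated or manipulated content
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL       # e.g. a low-risk productivity tool
```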

3. Determine Your Role

Organizations need to understand their role under the AI Act.

  • Are they developing AI systems?
  • Are they deploying AI systems from a supplier?
  • Are they integrating AI into a product or service?
  • Are they using general-purpose AI tools in business processes?

Different roles create different responsibilities.
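
One simple way to keep this visible is to record the role alongside each register entry. The enum below is a simplified sketch: only "provider" and "deployer" are formal AI Act terms, the other two labels informally mirror the questions above, and one organization can hold several roles at once.

```python
# Simplified sketch of roles under the AI Act. Descriptions are informal
# summaries; the last two labels are illustrative, not terms from the Act.
from enum import Enum

class AIActRole(Enum):
    PROVIDER = "develops an AI system or places it on the market"
    DEPLOYER = "uses an AI system under its own authority"
    INTEGRATOR = "embeds AI into its own product or service"           # informal label
    GPAI_USER = "uses general-purpose AI tools in business processes"  # informal label
```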

4. Assess Suppliers

Many AI risks enter the organization through suppliers. That makes procurement and vendor management critical. Organizations should understand whether suppliers use AI, how AI is governed, what documentation is available, where data is processed, how outputs are controlled and what contractual safeguards exist.

This is especially important when AI is embedded in HR, customer, financial, operational or compliance processes.
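
In practice this often starts as a standard set of questions in procurement. The list below is an illustrative starting point that mirrors the points above, not a checklist prescribed by the AI Act.

```python
# Illustrative supplier questions for AI-enabled tools; adapt to your own
# procurement process. The wording is an example, not prescribed by the Act.
SUPPLIER_AI_QUESTIONS = [
    "Which parts of the product use AI, and for what purpose?",
    "How is that AI governed, tested and documented on your side?",
    "Where is our data processed, and is it used for model training?",
    "How are outputs controlled, logged and corrected?",
    "What contractual safeguards apply (liability, audit rights, exit)?",
]
```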

5. Define Ownership and Controls

Every relevant AI system should have an owner. That owner should understand the purpose of the system, its risk classification, the applicable controls, supplier dependencies, required documentation and the review cycle.

Controls may include human oversight, access restrictions, logging, approval workflows, user communication, output review, bias testing, data quality checks and incident procedures.
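
Ownership only works if it is exercised on a cycle. As a small sketch, the helper below flags register entries whose last review is overdue; the 180-day cycle and the shape of the input are assumptions to adapt to your own policy.

```python
# Sketch: flag AI systems whose periodic review is overdue. The 180-day
# default cycle is an assumption; set it per your own review policy.
from datetime import date, timedelta

def overdue_reviews(last_reviewed: dict[str, date],
                    cycle_days: int = 180) -> list[str]:
    """Return the names of systems reviewed longer ago than the cycle."""
    cutoff = date.today() - timedelta(days=cycle_days)
    return [name for name, last in last_reviewed.items() if last < cutoff]
```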

6. Build Evidence

Like NIS2 and ISO 27001, AI governance depends on evidence. It is not enough to say that AI is used responsibly. Organizations must be able to demonstrate what has been assessed, decided, implemented and reviewed.

This means documenting AI systems, risks, owners, controls, supplier assessments, incidents, decisions and improvement actions.
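
Evidence is easiest to produce when it is captured as it happens. The sketch below appends decisions and reviews to a simple JSON Lines log; the format and field names are illustrative, and any system of record (ticketing, GRC tooling, a shared register) can serve the same purpose.

```python
# Sketch: an append-only evidence log in JSON Lines format. The fields are
# illustrative; the point is a timestamped, reviewable trail of decisions.
import json
from datetime import datetime, timezone

def log_evidence(path: str, system: str, event: str, detail: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,   # which AI system this concerns
        "event": event,     # e.g. "risk assessment", "supplier review"
        "detail": detail,   # what was assessed, decided or implemented
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```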

AI Governance Is Not Only About Compliance

Compliance is important, but it is not the only reason to take AI governance seriously.

Poorly governed AI creates business risk.

It can lead to:

  • incorrect decisions
  • reputational damage
  • regulatory exposure
  • biased outcomes
  • privacy violations
  • security weaknesses
  • supplier dependency
  • operational disruption
  • loss of customer trust
  • unclear accountability

Good AI governance does not block innovation. It makes innovation safer and more scalable.

Organizations that build governance early will be better positioned to use AI confidently. They will know where AI adds value, where it creates unacceptable risk and where additional controls are needed.

That is a competitive advantage.

From AI Experimentation to AI Control

Many companies are currently in the experimentation phase. Teams are testing AI tools. Employees are using copilots. Suppliers are adding AI features. Business units are exploring automation. Data teams are building models.

This is normal.

But experimentation without governance creates risk. The next phase is not to stop AI. The next phase is to professionalize it.

That means moving from scattered AI usage to controlled AI adoption:

  • from informal use to registered use cases
  • from unclear ownership to accountable owners
  • from supplier promises to supplier assessments
  • from generic policies to practical controls
  • from isolated experiments to portfolio-level visibility
  • from uncertainty to evidence

This is where organizations need structure.

How Oosterwal Consultancy Can Help

Oosterwal Consultancy helps organizations bring structure, control and momentum to complex initiatives around digital transformation, ISO readiness, NIS2 readiness and AI governance.

We support organizations with:

  • EU AI Act readiness assessments
  • AI inventory and risk classification
  • AI governance operating models
  • ISO 27001 preparation and implementation
  • NIS2 gap analysis and improvement planning
  • supplier and technology dependency analysis
  • risk and control frameworks
  • portfolio and program governance
  • practical management systems for audit readiness

Our approach is pragmatic.

No unnecessary complexity. No paper-based compliance theatre. No isolated governance documents that nobody uses.

We help organizations translate regulatory pressure into a practical system of risks, controls, owners, actions, evidence and measurable progress.

For organizations already working on ISO 27001 or NIS2, AI governance can be added as a logical next layer. The underlying management discipline is the same: understand the risks, assign ownership, implement controls, collect evidence and improve continuously.

Conclusion

The EU AI Act marks an important shift.

  • AI is no longer only an innovation topic. It is becoming a governance topic.
  • Organizations need to know where AI is used, what risks it creates, who owns it, which controls are in place and whether responsible use can be demonstrated.

For boards, CIOs, CFOs, legal teams, risk teams and transformation leaders, this creates a clear challenge:

Can we show that our organization uses AI responsibly and in control?

The organizations that answer this question early will be better prepared for regulation, more resilient in execution and more confident in scaling AI responsibly.

The time to start is now.


FAQ

What is the EU AI Act?

The EU AI Act is the European Union’s legal framework for artificial intelligence. It introduces risk-based rules for the development, deployment and use of AI systems in the EU.

When does the EU AI Act apply?

The AI Act entered into force on 1 August 2024 and is being applied in phases. Prohibited AI practices and AI literacy obligations applied from 2 February 2025. Governance rules and obligations for general-purpose AI models applied from 2 August 2025. Most of the framework becomes applicable from 2 August 2026, with some specific obligations applying later.

Does the EU AI Act only apply to companies that build AI?

No. The AI Act is also relevant for organizations that use AI systems, especially when AI is used in high-impact processes such as recruitment, employee evaluation, credit assessment, access to services, safety, security or critical operations.

What is a high-risk AI system?

A high-risk AI system is an AI system that may significantly affect people’s rights, safety, opportunities or access to essential services. Examples may include AI used in recruitment, education, employment, credit assessment, critical infrastructure or certain regulated products.

How does the AI Act relate to ISO 27001?

ISO 27001 does not automatically create AI Act compliance, but it provides a strong management foundation. It helps organizations structure risk management, controls, ownership, supplier management, incident handling, evidence and continuous improvement. These are also important for responsible AI governance.

How does the AI Act relate to NIS2?

NIS2 focuses on cybersecurity and digital resilience. The AI Act focuses on responsible AI. They overlap in areas such as risk management, cybersecurity, supplier control, incident handling, governance and evidence. Organizations should manage them together where possible.

What should organizations do first?

Start with an AI inventory. Identify where AI is used, which suppliers are involved, which business processes are affected and whether the use cases may be high-risk or require transparency. From there, define ownership, controls and improvement actions.