AI Regulations and Frameworks: Preparing for Compliance and Resilience
September 22, 2025
TL;DR
- AI governance and oversight have become a business imperative as regulators and standards bodies establish expectations worldwide.
- Laws such as the EU AI Act and national initiatives in the US, UK, Canada, Australia, and Japan are shaping a global regulatory patchwork.
- Frameworks like NIST’s AI RMF and ISO/IEC 42001 provide organizations with voluntary yet structured methods to operationalize governance.
- GRC leaders are under pressure to demonstrate defensible oversight and ensure risk practices keep pace with GenAI adoption.
- AI risk assessments grounded in recognized frameworks create visibility into GenAI and AI system usage across the organization, benchmark safeguard maturity, and show overall preparedness.
- AI quantification models build on assessments by forecasting incidents and financial impact, giving leaders objective data for risk management and resource allocation.
AI Governance and Oversight Is Now a Global Market Concern
Artificial intelligence (AI) has moved out of the realm of science fiction and become a regular part of daily life, increasing efficiency across a wide range of everyday activities. In the marketplace especially, where process optimization translates directly into time and money, generative AI (GenAI) and other AI systems have rapidly taken on a central role. Enterprise leaders are employing AI for everything from information processing to data analysis, making visibility into where and how these systems operate critical for informed decisions.
The Growing Demand for AI Governance
However, as this technology takes on a more integral role within the market, its potential to cause massive disruption grows in parallel, a fact that has not gone unnoticed by legislators and standards bodies. Worldwide, governments and standards organizations are recognizing the urgency of establishing visibility into AI usage and safeguards through acceptable use rules and structured oversight. What is materializing is a patchwork of binding regulations and voluntary frameworks.
The EU, for example, has published the first comprehensive law, while other countries like Canada and Australia are preparing their own mandates. In the US, the absence of a sweeping federal statute has been offset by agency enforcement actions and the adoption of widely referenced standards such as the NIST AI Risk Management Framework (RMF). The UK, Japan, and others are likewise issuing guidance documents or sector-specific oversight measures, demonstrating that this is a global movement rather than a regional initiative.
For organizations, the AI revolution means that governance and risk management can no longer remain abstract or relegated to the sidelines. Regulatory requirements and standards have already begun to define what responsible adoption looks like in practice, and sooner or later GRC leaders, and eventually board members, will be expected to demonstrate alignment. Stakeholders that begin preparing now will find themselves not only better equipped to meet evolving regulations but also better positioned to succeed in this new, AI-fueled risk landscape.
Why AI Regulation Matters
Every new technology is a double-edged sword; the risks scale proportionally with the rewards, and AI is no exception. Just as nearly every organization today enforces acceptable use policies for computing resources, it's only natural that similar guardrails would be introduced to ensure that GenAI is leveraged responsibly, especially considering that early studies, such as IBM's 2025 Cost of a Data Breach Report, have already linked AI-related vulnerabilities to tangible financial consequences.
At the same time, the only surefire way to eliminate AI-related risk would be to avoid adoption entirely, an option that’s neither viable nor strategically sound. Early GenAI tools have already proven themselves extremely valuable in accelerating innovation and helping enterprises meet growth targets. As capabilities continue to evolve, these merits will only multiply, providing organizations with even greater operational and competitive advantages.
The conundrum, then, as it so often does, lies in striking the right balance: preserving AI's benefits while adequately managing its inherent risks. Regulations, not coincidentally, serve as a critical lever in that balancing act, creating the structure required to safely harness the power of AI systems at scale. While regulatory constraints, whether internal or external, may feel burdensome at times, they establish the necessary scaffolding for sustained and ethical adoption.
Governance as a Safeguard Against Systemic Disruption
Still, it's important to note that legislation and frameworks will not remain static. AI risk is evolving, and its long-term effects are not yet fully understood, let alone mapped. Nonetheless, it remains evident that unchecked systems can cause damage that extends far beyond a single enterprise. Effective governance thus carries far more weight than mere legal compliance. It is the first line of defense against systemic disruption, providing the structure and visibility stakeholders need to ensure GenAI strengthens rather than destabilizes the economy.
The Global Regulatory Landscape
No longer confined to academic debate and advisory papers, AI oversight has entered a new phase of strategic and tactical importance. Governments across the world have begun defining expectations, some through binding legislation and others via high-level guidance. The specifics may vary by jurisdiction, but there is a growing consensus that AI governance is a core component of maintaining both market stability and public trust.
European Union: The Artificial Intelligence (AI) Act
The European Union's (EU) AI Act entered into force in August 2024, making the EU the first major regulator to enact a comprehensive AI law and giving organizations a roughly two-year window before full enforcement takes effect. The Act begins by stating that the legislation exists to improve and promote the safe usage of AI systems. It then distinguishes between the various forms of the technology based on the risk they pose to society, defining categories of prohibited (banned outright), high-risk, limited-risk, and minimal-risk systems.
High-risk AI systems, unsurprisingly, are subject to the most stringent oversight, with ten dedicated articles detailing the obligations of both providers and users. These provisions cover a wide range of requirements, including AI risk management, data governance, transparency, human oversight, and post-market monitoring. GenAI and other foundation models are addressed in a standalone chapter, which details obligations such as disclosing training data sources and documenting model design inputs.
Notably, the Act elevates governance responsibilities to the boardroom level. Under Article 66, management boards are assigned specific tasks to ensure compliance, embedding AI accountability into the highest tier of the corporate agenda. Organizations that deploy any of the prohibited AI practices face penalties of up to €35 million or 7% of global annual revenue, whichever is higher. In comparison, non-compliance with most other provisions of the AI Act can result in fines of up to €15 million or 3% of global annual revenue, whichever is higher.
United States: Fragmented But Intensifying Oversight
Unlike the EU, the US has not yet enacted a comprehensive federal law dedicated to AI governance and risk management. Regulators are instead leveraging existing statutes and agency powers to police market usage. The Federal Trade Commission (FTC), for instance, has warned companies that deceptive AI practices, such as generating fake reviews, fall under consumer protection law. Similarly, the Equal Employment Opportunity Commission (EEOC) has issued guidance on how the use of AI in certain hiring practices can violate anti-discrimination law.
In 2024, members of the US Congress introduced the Federal Artificial Intelligence Risk Management Act. Although not enacted as of 2025, this bipartisan, bicameral bill would require federal agencies and their vendors to incorporate NIST's AI RMF into their operations. Meanwhile, certain states and cities are advancing their own legislative agendas. New York City launched its AI Action Plan in 2023, and in 2024 Colorado passed the Colorado Artificial Intelligence Act (CAIA), the first comprehensive state law addressing high-risk AI systems.
United Kingdom: A Pro-Innovation Approach
The United Kingdom (UK) has opted against a single, binding piece of AI legislation, instead establishing oversight through a pro-innovation regulatory approach. The National AI Strategy, released in 2021 and refreshed a year later, set out a ten-year vision to position the UK as a global AI superpower and highlighted the importance of investing in the AI ecosystem, ensuring that AI delivers benefits across sectors and is governed responsibly.
The government subsequently established the Office for Artificial Intelligence, a dedicated unit housed within the Department for Science, Innovation and Technology. Rather than impose broad AI usage restrictions, the UK relies on sector-specific regulators such as the Information Commissioner's Office (ICO) and the Financial Conduct Authority (FCA) to apply the five guiding principles of safety, transparency, fairness, accountability, and contestability published in the AI Regulation White Paper.
The UK has also placed itself at the center of the international conversation, hosting the AI Safety Summit in November 2023. The Summit brought together global governmental representatives and culminated in the signing of the Bletchley Declaration, the world's first international agreement acknowledging the catastrophic risks AI could pose through misuse or loss of control, particularly in areas such as cybersecurity, biotechnology, and disinformation.
Canada: High-Impact AI Under Scrutiny
Canada is advancing its AI oversight with the proposed Artificial Intelligence and Data Act (AIDA), which, if passed, would become the country's first national AI law, focused primarily on systems deemed "high-impact." The legislation would require Canadian organizations to identify the harmful scenarios their use of GenAI could cause and then implement mitigation measures. Throughout the lifecycle of these high-impact systems, stakeholders would be expected to maintain ongoing monitoring and upkeep.
AIDA also emphasizes transparency, stating that providers must keep detailed documentation on training data and system design while offering mechanisms for individuals to contest harmful outcomes. Enforcement is another key component, with powers granted to the Minister of Innovation, Science, and Industry, who could issue penalties of up to C$25 million or 5% of global annual revenue, whichever is higher. AIDA is among the stricter AI governance proposals, despite its narrower focus on high-impact use cases.
Australia: Moving Toward Mandatory Guardrails
Like Canada, Australia is focusing on high-impact use cases, building a risk-based approach that seeks to establish "mandatory guardrails" against AI risk. Following a 2023 public consultation on safe and responsible AI, the government's January 2024 interim response concluded that voluntary commitments were insufficient. In September 2024, the Department of Industry, Science and Resources released a proposal paper outlining preventative obligations across the AI lifecycle for developers and deployers of high-risk systems.
While legislation is still being drafted, Australia continues to draw on existing regulators and guidance to shape the trajectory of its AI governance model. The nation's privacy regulator, the OAIC, for example, has issued guidance on training and deploying generative models, and the eSafety Commissioner has published a position statement on generative AI harms, both of which are expected to inform the future national law. The 2024–25 federal budget earmarked funding to support responsible AI, reinforcing the policy push even as a national statute remains pending.
Japan: Soft-Law and Human-Centric Principles
Japan’s regulatory model diverges sharply from the EU’s, relying on voluntary standards and existing laws rather than binding requirements. In 2025, parliament approved the AI Promotion Act, its first national framework focused on encouraging development while embedding human-centric concepts such as fairness and accountability. The Act builds on Japan’s earlier AI principles launched in 2019 and its National AI Strategy, and it operationalizes oversight through voluntary AI Guidelines for Business.
Within the international arena, Japan has acted as a bridge between Western and Asian governance models, launching the G7 Hiroshima AI Process in 2023. The forum produced a voluntary code of conduct for advanced AI systems, with the aim of harmonizing approaches to GenAI deployment across borders. Extending its reach and cementing its position as an AI governance architect, Japan then established the Hiroshima AI Process Friends Group, bringing in dozens of non-G7 countries to promote wider AI adoption and usage alignment.
AI Risk Management and Governance Frameworks
Even as national and regional authorities advance their regulatory approaches to AI governance and oversight, coverage remains uneven, with many requirements still in flux. This uncertainty, however, has not deterred organizations from acting. On the contrary, many stakeholders recognize that establishing governance and risk management mechanisms for GenAI is not only prudent preparation for upcoming regulatory changes but also a strategic necessity. Enterprises that address AI proactively will secure their competitive advantage more firmly than those that delay.
But because AI risk is still so new, determining which safeguards to apply, and how to embed them effectively, is a significant challenge. As a result, GRC and security leaders are increasingly turning to management frameworks that translate high-level principles into operational practice. Standards such as NIST's AI RMF and ISO/IEC 42001 help organizations build visibility into risks, safeguards, and maturity gaps. By leveraging such frameworks, enterprises can begin building resilience while simultaneously demonstrating alignment with future regulations.
The NIST AI Risk Management Framework
Developed by the US National Institute of Standards and Technology (NIST), widely known for its Cybersecurity Framework (CSF), the AI Risk Management Framework (RMF) has quickly become one of the most referenced guides for responsible AI adoption. After a period of extensive public consultation, the framework was officially released in January 2023 and, today, remains a voluntary resource designed for cross-industry use, helping organizations to identify, assess, and manage AI-related risks.
The NIST AI RMF, much like the CSF, is structured around a set of core functions. Unlike the CSF, however, which revolves around six pillars, the AI RMF is built on four: Govern, Map, Measure, and Manage, each of which comes with categories and subcategories that translate broad risk concepts into actionable steps. NIST's AI standard was designed to be adaptable to different contexts, offering a blueprint that delivers consistent visibility into risk and control maturity as the technology evolves.
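To make that structure more concrete, here is a minimal sketch of how a team might record safeguard maturity against the four core functions. The practice names and the 0–5 scoring scale are illustrative assumptions, not the RMF's official categories or subcategories.

```python
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class FunctionAssessment:
    """Maturity snapshot for one AI RMF core function (illustrative only)."""
    function: str                                  # Govern, Map, Measure, or Manage
    practices: dict = field(default_factory=dict)  # practice name -> maturity score (0-5)

    def maturity(self) -> float:
        # Average maturity across recorded practices; 0 if nothing assessed yet
        return mean(self.practices.values()) if self.practices else 0.0


# Hypothetical assessment data: practice names and scores are placeholders
assessment = [
    FunctionAssessment("Govern", {"AI policy approved": 4, "Accountable owners assigned": 3}),
    FunctionAssessment("Map", {"AI use cases inventoried": 2, "Deployment contexts documented": 2}),
    FunctionAssessment("Measure", {"Risk metrics defined": 1, "Model testing in place": 2}),
    FunctionAssessment("Manage", {"Risk treatment plans": 2, "AI incident response": 1}),
]

for fa in assessment:
    print(f"{fa.function:<8} maturity: {fa.maturity():.1f} / 5")
```

Even a simple roll-up like this makes it easier to see which functions lag and to track progress between assessment cycles.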
ISO/IEC 42001: Global AI Management Systems Standard
In late 2023, the International Organization for Standardization (ISO), along with its long-time collaborator, the International Electrotechnical Commission (IEC), published ISO/IEC 42001, laying out requirements for "establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS)." It is the world's first certifiable management system standard focused exclusively on AI and applies to organizations of any size or sector that develop or use GenAI or AI-based products and services.
Like the NIST AI RMF, ISO/IEC 42001 is entirely voluntary. Still, it offers stakeholders a solid benchmark on which to build responsible AI practices and demonstrate adherence to upcoming laws. Its structure mirrors that of other widely adopted ISO standards such as ISO/IEC 27001, which makes integration into existing governance programs more practical. The framework also covers leadership commitment and ongoing performance evaluation, while Annex A enumerates specific controls that can reduce AI exposure.
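As a loose illustration of how a management system translates into day-to-day artifacts, the sketch below maps the harmonized clause areas that ISO management system standards share (the same high-level skeleton used by ISO/IEC 27001) to example evidence items. The artifact names are hypothetical, not the standard's requirements text.

```python
# Clause areas follow the harmonized high-level structure common to ISO
# management system standards; the evidence artifacts are hypothetical
# examples of what an organization might collect, not prescribed items.
aims_evidence = {
    "Context of the organization": ["AI use-case inventory", "stakeholder analysis"],
    "Leadership": ["board-approved AI policy", "named AI risk owners"],
    "Planning": ["AI risk assessment", "risk treatment plan"],
    "Support": ["staff AI training records", "documentation standards"],
    "Operation": ["model development and deployment procedures"],
    "Performance evaluation": ["internal audit reports", "management review minutes"],
    "Improvement": [],  # no corrective-action log collected yet
}

# Flag clause areas with no supporting evidence collected so far
for clause, artifacts in aims_evidence.items():
    status = "OK " if artifacts else "GAP"
    print(f"[{status}] {clause}: {len(artifacts)} artifact(s)")
```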
The Expanding Mandate for GRC Leaders
AI governance has quickly become an operational responsibility for GRC leaders, whether as a consequence of direct legislation or as a strategic response to the growing need for structured governance and risk management. In practice, this requires teams, first and foremost, to map AI use across the enterprise; effective oversight is not possible without visibility into where and how systems are deployed. From there, teams need to assess the level of risk each AI system introduces and establish relevant controls to mitigate that exposure.
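As a rough sketch of what that mapping step can produce, the snippet below models a minimal AI system inventory with a risk tier and applied controls per system. The field names, tiers, and control labels are assumptions chosen for illustration rather than a prescribed schema.

```python
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    """One entry in a hypothetical enterprise AI inventory (illustrative schema)."""
    name: str
    owner: str           # accountable business owner
    purpose: str         # what the system is used for
    risk_tier: str       # e.g. "high", "limited", "minimal"
    controls: list       # safeguards currently applied


inventory = [
    AISystemRecord("Support chatbot", "Customer Ops", "Customer self-service",
                   "limited", ["human escalation path", "output logging"]),
    AISystemRecord("Resume screener", "HR", "Candidate shortlisting",
                   "high", ["bias testing"]),
]

# Surface high-risk systems that lack a documented human-oversight control
for rec in inventory:
    if rec.risk_tier == "high" and not any("human" in c for c in rec.controls):
        print(f"Review needed: {rec.name} has no human-oversight control")
```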
As with other forms of business-level risk management, defensibility is paramount. Leaders must be able to plainly demonstrate not only that AI systems are inventoried and controlled but also that risk-related decisions are anchored in recognized standards and implemented through repeatable processes. Ultimately, as AI systems become further entrenched in everyday operations, GRC functions will be tasked with ensuring governance and risk practices scale accordingly.
AI Risk Assessments and Quantification as Foundations for Readiness
This expansion of responsibilities inevitably raises the question of how organizations can demonstrate that their governance and oversight are substantive. An assessment grounded in the NIST or ISO frameworks addresses this need by giving stakeholders a systematic means to map AI and GenAI activity across the enterprise and measure the extent to which it is currently managed. The end product serves as a record that can satisfy external demands while also offering a snapshot of how prepared the organization is to withstand AI incidents.
AI risk quantification then represents the next stage in the process of building resilience, taking into account the findings of the assessment and forecasting which loss scenarios are most likely to occur and what their financial consequences could be. With these insights, GRC leaders have the data necessary for building cost-effective, targeted risk management programs and can easily compare tradeoffs, set priorities, and communicate exposure in terms that resonate with executives across the enterprise.
In this way, quantification transforms assessment outputs into a tangible decision-making asset, further strengthening the organization’s ability to prepare for and effectively cope with AI-related disruptions.
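To illustrate what "forecasting incidents and financial impact" can look like mechanically, here is a minimal Monte Carlo sketch in the spirit of loss-exceedance modeling: a frequency distribution for AI-related incidents combined with a loss-magnitude distribution per incident. The frequency and loss parameters are invented for illustration; in practice they would be derived from assessment findings and available loss data.

```python
import math
import random

random.seed(7)

# Hypothetical inputs -- illustrative assumptions, not benchmarks
ANNUAL_FREQUENCY = 1.8            # expected AI-related incidents per year
LOSS_MU, LOSS_SIGMA = 12.0, 1.0   # lognormal loss-per-incident parameters (USD)
SIMULATIONS = 100_000


def sample_poisson(lam: float) -> int:
    """Draw an incident count for one simulated year (Knuth's algorithm)."""
    threshold = math.exp(-lam)
    count, product = 0, 1.0
    while True:
        product *= random.random()
        if product <= threshold:
            return count
        count += 1


annual_losses = []
for _ in range(SIMULATIONS):
    incidents = sample_poisson(ANNUAL_FREQUENCY)
    annual_losses.append(sum(random.lognormvariate(LOSS_MU, LOSS_SIGMA)
                             for _ in range(incidents)))

annual_losses.sort()
expected_loss = sum(annual_losses) / SIMULATIONS
p95_loss = annual_losses[int(0.95 * SIMULATIONS)]

print(f"Expected annual loss:        ${expected_loss:,.0f}")
print(f"95th-percentile annual loss: ${p95_loss:,.0f}")
```

Outputs like these let GRC teams compare safeguards by expected loss reduction and communicate exposure in financial terms rather than by intuition alone.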
Building Readiness for the AI Era
AI regulations are developing rapidly at both the local and national level, and organizations would do well to stop treating oversight as a peripheral concern. The expectation from regulators, investors, and customers will soon be, if it is not already, that AI risk is managed with the same rigor applied to other enterprise risks. Compliance will be necessary to avoid fines, but it is resilience that will determine which organizations succeed as GenAI and other AI systems become more deeply embedded in operations.
An AI management framework helps establish the foundation of what best practices should look like, while assessments convert those standards into practice. AI risk quantification then elevates assessment results into actionable intelligence. Combined, these GRC elements create a defensible, data-driven approach that advances AI governance and oversight from static reporting to strategic risk management. Enterprises that start preparing now will not only be ready to comply but also be positioned to capitalize on the advantages AI can bring without compromising critical resources.


