
19 AI Risk Leaders Driving Enterprise Transformation

December 9, 2025


The Other Side of AI Adoption: Governance at Scale

AI has moved from experimentation to everyday infrastructure, shaping decisions and workflows across nearly every industry. However, in the rush to harness its speed and efficiency, many enterprises adopted GenAI and other AI systems faster than they built the structures necessary to govern them. The result is an all-too-familiar pattern of powerful technology being deployed widely before its risks are fully understood, let alone managed.

Yet AI governance is the inherent counterpart of AI adoption, and every advancement carries implications that cannot be addressed after the fact. As the technology becomes more embedded and more agentic, the associated risk scales with it. Governance remains the mechanism that ensures AI enhances performance without eroding trust, increasing exposure, or creating obligations the organization is unprepared to meet.

A growing community of leaders has been working to define that discipline and highlight the structures and operating practices that make AI both effective and accountable. They translate abstract principles into concrete action and help executives understand how governance strengthens both resilience and impact. To highlight their contributions, Kovrr compiled a set of influential voices shaping the future of responsible AI.

1. Kris Kimmerle, VP, AI Risk and Governance, RealPage, Inc.

Kris Kimmerle brings a grounded, practitioner-level view to managing AI risk, shaped by years spent balancing security, governance, and operational demands. Discussing agentic AI systems, Kimmerle warns that “the more agency you give a system, the more useful it becomes. But also the more governance you need,” and later adds that “immature technology deployed at scale is the governance challenge.” He positions governance as the only way to prevent high-velocity automation from cascading into avoidable failures, and he consistently argues for controls that are built into the infrastructure rather than added as afterthoughts.

Read Kris’s Making Sense of Agentic AI Governance here. 

2. Rock (Kyriakos) Lambros, CEO & Founder, RockCyber, LLC

Rock Lambros brings three decades of cybersecurity and risk leadership to the challenge of governing AI in environments where board accountability is rising fast. Speaking directly to directors, he opens with a blunt reminder that “AI agent risk is now a board problem. You don’t get to punt this,” and later warns that “AI systems evolve rapidly, and your oversight cadence must keep pace.” He argues that boards need steady, repeatable mechanisms that surface the right signals before an incident forces their hand. His approach ties strategy, controls, and cadence together so oversight becomes a disciplined practice rather than a box-checking exercise.

Read Rock’s AI agent risk for boards: A 90-day oversight plan here.

3. Aaron Turner, Faculty, IANS Research

Aaron Turner brings more than thirty years of experience helping organizations understand and address complex security exposures, with a career spanning early penetration testing work, leadership roles at Microsoft during the formation of its security programs, and hands-on research into infrastructure vulnerabilities across high-risk environments. His perspective is grounded in seeing how real adversaries adapt, and he applies that same discipline to emerging AI-driven ecosystems, where identity, automation, and infrastructure converge. 

Reflecting on his work with global enterprises, Turner notes that many organizations “dramatically underestimate both the cyber risk surface and the total cost of ownership of their AI systems, often by factors of 5× to 10×,” adding that financially quantified insights are essential for achieving true AI ROI. He remains a trusted advisor for leaders navigating fast-moving technology risks and the governance decisions that accompany them.

Connect with Aaron on LinkedIn. 

4. Elisabeth Thaller, Senior AI & Compliance Strategy Consultant, Beyond Conformity

Elisabeth Thaller has spent decades working at the center of global conformity assessment, shaping how organizations design accountable and auditable AI programs. As a technical expert across multiple ISO committees, she helps define the standards that guide trustworthy AI, including the requirements organizations must meet under ISO/IEC 42001. Her recent work includes training teams on ISO/IEC 42005 and the discipline behind AI impact assessments, with Clause 6 outlining the structured evaluation steps needed to demonstrate responsible use.

Watch Elisabeth’s presentation, Mastering AI Impact Assessments, here.

5. Raz Kotler, CISO, Valkore

Raz Kotler’s work sits at the intersection of cybersecurity leadership and AI-driven transformation, shaped by roles across startups, advisory boards, and enterprise security strategy. In his reflection on AI adoption, he emphasizes cutting through the hype, noting that “this is not a bubble, this is production.” He argues that “AI security is not part of the bubble, it is part of survival.” His guidance focuses on building pragmatic readiness through visibility, guardrails, identity controls, and disciplined operations that keep pace with rapid deployment.

Read Raz’s full commentary here.

6. Dr. Jodie Lobana, CEO & Founder, AIGE Global Advisors

Dr. Jodie Lobana is widely recognized for advancing the governance discourse at the board level, earning the distinction of holding the world’s first PhD focused exclusively on AI governance for corporate directors. She encourages leaders to “govern AI with courage, clarity, and speed,” a principle reflected in the holistic framework she developed to help boards embed accountability, ethics, and strategic oversight into AI adoption. Through AIGE Global Advisors and her work with global institutions, she equips directors and executives with structures that support trustworthy, purpose-aligned deployment as AI reshapes organizational risk.

Subscribe to Jodie’s Holistic AI Governance Brief newsletter here.

7. Chuck Rickwald, AI Governance Lead Consultant, Agentic Reactor

Chuck Rickwald draws on more than fifteen years of experience across cybersecurity, compliance, and enterprise GRC to help organizations operationalize responsible AI frameworks. His commentary on emerging regulation cuts through speculation and focuses leaders on readiness, urging them to “build a governance structure that can flex regardless of which path wins,” and reminding them that “the real risk isn’t regulation — it’s being unprepared for the speed of change.” He advises teams on aligning AI practices with NIST RMF, ISO 42001, and the evolving US policy landscape.

Read Chuck’s full commentary here.

8. Suzanne DiCicco, Principal, AI Governance Advisors

Suzanne DiCicco guides enterprises through the operational realities of AI governance, drawing on more than a decade of experience building auditable, defensible information governance and risk management programs. Her work spans data stewardship, regulatory alignment, and enterprise-scale intake processes that unify Legal, Engineering, Security, and Privacy teams. She specializes in translating ambiguous mandates into structured, measurable governance practices that strengthen compliance and support responsible AI adoption.

9. Antony Hibbert, Consulting Partner, Trusted AI, and AI Governance Expert, NXP Semiconductors

Antony Hibbert works at the intersection of AI governance, cybersecurity, and digital trust, shaped by nearly two decades spent advising financial institutions and technology teams on responsible AI adoption. Reflecting on the growing reliance on AI-generated code, he notes that leaders are increasingly focused on “trust in generated code.” He adds that the mission across his work remains consistent: “in AI governance, prevent harm before it happens.” His consulting and product efforts both center on strengthening assurance, improving reliability, and embedding safeguards that scale with real-world use.

Read Antony’s full commentary here.

10. Caryn Lusinchi, AI Strategy Lead, Nemko Digital

Caryn Lusinchi consults across some of the most complex AI environments, from high-risk federal agencies to global technology companies and emerging-tech consultancies. She has led enterprise AI/ML governance efforts grounded in NIST frameworks, helped design compliance strategies under evolving US federal directives, and built guardrails for generative AI platforms spanning OpenAI, Amazon Bedrock, Copilot, and Apple Intelligence. Her work also draws on deep experience with the EU AI Act, GDPR, and ISO 42001, giving leaders an integrated way to manage fast-moving, safety-critical AI lifecycles with structure and accountability.

Listen to Caryn on Who Controls AI? here.

11. Sarah Hoffman, Director of AI Thought Leadership, AlphaSense

Sarah Hoffman analyzes how emerging AI capabilities reshape industries, markets, and executive decision-making, drawing on two decades in machine learning research and enterprise innovation roles. In her work on agentic systems, she notes that “trust is still a weak spot,” and adds that “accuracy, governance, and reliability will remain top concerns, especially for sensitive or high-risk tasks.” Her perspective helps leaders cut through hype and focus on the structural foundations that make advanced AI usable and dependable at scale.

Read Sarah’s The Year of AI Agents? What Really Happened in 2025 here.

12. Lee Dittmar, Co-Founder, Infinity Data AI

Lee Dittmar has spent decades helping global organizations align technology, governance, and strategy, with experience that spans nuclear engineering, enterprise consulting, and board-level advisory work. In a recent post, he noted that “AI continues to improve exponentially while enterprise scale adoption has been stalled,” calling out data fragmentation, governance gaps, and leadership misalignment as the real obstacles. His perspective pushes executives to address the foundations that determine whether AI delivers value.

Read Lee’s full commentary here.

13. Dhara Shah, AI Legal Counsel, Uber

Dhara Shah works at the intersection of law, engineering, and policy, drawing on her programming background and leadership roles across the IAPP, NIST, and EU AI Act working groups. In her writing on lifecycle oversight, she stresses that “governance should not be a one-time box-tick” and that “it should turn like a wheel: constantly adapting as your systems, risks, and contexts evolve.” Her perspective helps organizations treat governance as an active practice that guides design, deployment, and long-term monitoring of AI systems.

Read Dhara’s full commentary here.

14. Tomal K. Ganguly, Senior Project Lead & AI Governance Strategist, Marenas Consulting

Tomal K. Ganguly anchors the conversation around operational AI governance with a rare mix of regulatory depth and real-world transformation experience across Europe and LATAM. His widely shared breakdown of governance failures highlights a hard truth, positing that “if governance fails quietly, risk grows loudly.” In recent commentary, he warns that “AI is accelerating faster than the systems designed to control it,” urging leaders to treat governance as an always-on function rather than an end-of-cycle review.

Read Tomal’s AI Governance Breakdown report here.

15. Alexandra C., Chief AI and Sustainability Officer, BI Group Australia

As the creator of the Agentic AI Responsible AI Blueprint and Chief AI & Sustainability Officer at BI Group Australia, Alexandra C. focuses on turning governance from policy into actual infrastructure. Her work centers on building lifecycle controls, oversight systems, and evaluation workflows that hold up under real-world scale and regulatory pressure. She recently captured the core challenge perfectly, saying that “AI governance is not a framework problem. It is a clarity and control problem… The world keeps producing principles, but organisations need something very different, they need operational clarity.”

Read Alexandra’s full commentary here.

16. Ian Fletcher, AI Thought Leader & Strategic Advisor, Optimize

Ian Fletcher zeroes in on how AI is rewiring transport and logistics, grounding his perspective in decades of work across transformation, optimization, and C-suite advisory. In his recent analysis of industry shifts, he offers a sharp warning that “those who harness AI culturally will operate faster, cleaner, and more profitably; those who don’t will struggle to keep pace with an industry evolving at accelerating speed.” His work pushes leaders to match technical adoption with organizational readiness.

Read Ian’s full commentary here.

17. Luiza Jarovsky, Co-Founder, AI, Tech & Privacy Academy

Luiza Jarovsky, PhD, co-founder of the AI, Tech & Privacy Academy and author of a newsletter read by more than 87,000 people, is one of the most widely followed voices in global AI governance. Her work consistently highlights the gaps between rapid AI deployment and real oversight, and her reminder that “we are still in the AI chatbot Wild West” underscores how urgently companies must strengthen accountability and guardrails before the next wave of regulation arrives.

Subscribe to Luiza’s Newsletter here.

18. Katharina Koerner, Senior Principal Consultant - AI Governance, Trace3

Katharina Koerner helps companies translate AI governance into practical execution, drawing on deep experience in privacy, security, and shift-left product governance at Trace3. In a recent conversation on the BetterTech podcast, she highlights a challenge many teams face: AI principles are strong, but operationalization lags behind. Koerner focuses on closing that maturity gap through risk-based reviews, discoverability, enforceable controls, and clear safeguards for both internal models and third-party AI.

Listen to Katharina on Bridging the Gap in AI Governance here.

19. Michael L. Woodson, Chief Information Officer and Chief Cybersecurity Strategist, Nomad Cyber Concepts

Michael L. Woodson draws on deep experience leading security, privacy, and compliance programs across critical infrastructure, hospitality, and enterprise environments. His perspective on AI governance centers on the importance of visibility. As he notes, most organizations don’t have an AI problem. Rather, “they have an AI visibility problem.” He emphasizes that employees adopt AI to move faster, not to introduce risk, yet “you can’t govern what you can’t see” and “risk begins with the AI you didn’t know existed.” His work focuses on helping enterprises surface shadow AI and build safeguards that match how AI is actually used across the business.

Read Michael’s full commentary here.

Strengthening Enterprise Readiness for the Next Phase of AI

AI is becoming a core operating layer across modern enterprises, but its value depends on whether organizations can govern it with strategic intent. The leaders featured here demonstrate what that requires in practice, including governance frameworks embedded directly into product development, risk management structures that scale with capability, and oversight mechanisms that allow executives to make informed decisions at the speed AI now demands.

Their collective work reflects a broader shift unfolding across industries. Responsible AI should not be treated as a compliance exercise but as an operational discipline that protects organizational integrity. As AI moves deeper into autonomy, enterprises that approach governance as a strategic capability will be better positioned to balance innovation with risk management and remain resilient in an environment defined by rapid change.

Hannah Yacknin-Dawson

Cybersecurity Marketing Writer
