AI Governance – The Ultimate Human-in-the-Loop

Ken Mendelson, AIGP, CISSP, CIPP, CISA – June 25, 2024

“AI, which is going to be the most powerful technology and most powerful weapon of our time, must be built with security and safety in mind.” 

Jen Easterly, Director, Cybersecurity and Infrastructure Security Agency (CISA)

As the world grapples with the rapid advancement of artificial intelligence (AI) technologies, concerns about the potential risks and unintended consequences have understandably taken center stage. While some advocate for outright bans or stringent regulations on AI development, such approaches risk stifling innovation and hampering progress in a field that holds immense promise for humanity. The truth is, we cannot effectively regulate the technology itself. Unlike sensitive military technologies that have limited civilian use and can be controlled, millions of people are already freely using generative AI products – the proverbial toothpaste is out of the tube. Instead, the most effective strategy lies in focusing on AI governance, which places the responsibility squarely on the shoulders of the humans who develop and deploy these powerful tools.

The NIST Cybersecurity Framework 2.0: Governance Takes Center Stage

Recognizing the critical importance of governance in managing emerging technologies, the National Institute of Standards and Technology (NIST) has added “Govern” as a core function in its recently updated Cybersecurity Framework (CSF) 2.0.[1] This move underscores the recognized need for organizations to establish robust governance structures and processes to ensure the responsible and secure operation of their information technology (IT) environments. Because IT is at the heart of all AI systems, it is logical to extend this paradigm to AI.

A Risk-Based Approach: The NIST AI Risk Management Framework

Companies that produce or deploy AI systems can voluntarily adopt an established framework, such as the NIST AI Risk Management Framework (AI RMF).[2] In fact, doing so is currently considered a best practice.

The AI RMF provides a structured approach to identifying, assessing, and mitigating the risks associated with AI systems throughout their lifecycle. The AI RMF’s “Govern, Map, Measure, Manage” approach offers a comprehensive roadmap for companies to follow:

  1. Govern: Establish organizational policies, processes, and governance structures to ensure AI systems are developed and deployed responsibly.
  2. Map: Identify and document the AI system’s components, data sources, and potential impacts.
  3. Measure: Assess the risks associated with the AI system, considering factors such as bias, security, and privacy.
  4. Manage: Implement risk management strategies, including monitoring, testing, and continuous improvement.
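
As a rough illustration only – the AI RMF is a process framework, not a software specification – the four functions can be pictured as stages in a risk register that an organization maintains for each AI system. The sketch below is hypothetical: every class, field, and value is invented for illustration and does not come from NIST.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical severity scale; NIST does not prescribe one.
class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Risk:
    description: str    # e.g., "training data may encode demographic bias"
    severity: Severity  # assessed during the Measure function
    mitigation: str = ""  # assigned during the Manage function

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI risk register, loosely mirroring
    the AI RMF's Govern / Map / Measure / Manage functions."""
    name: str
    owner: str  # Govern: a named, accountable human
    components: list[str] = field(default_factory=list)  # Map: models, data sources
    risks: list[Risk] = field(default_factory=list)      # Measure: assessed risks

    def open_risks(self) -> list[Risk]:
        """Manage: risks still awaiting a mitigation plan."""
        return [r for r in self.risks if not r.mitigation]

# Usage: register a system, map its parts, measure a risk, then manage it.
record = AISystemRecord(
    name="resume-screening-assistant",
    owner="Jane Doe, VP of HR Technology",
    components=["fine-tuned LLM", "historical hiring data"],
)
record.risks.append(Risk("historical data may encode hiring bias", Severity.HIGH))
assert record.open_risks()  # the risk is visible until a human closes it
record.risks[0].mitigation = "bias audit before each model release"
```

The point of the structure is the accountability trail: each system carries a named human owner, and unmitigated risks remain visible until a human addresses them.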

By adhering to this framework, companies can more readily demonstrate reasonable due diligence in ensuring their AI tools are developed and deployed safely and securely. But this is a rapidly changing landscape. Over time, companies may also choose to implement guidance emerging from the AI Safety Institute Consortium, created following President Biden’s AI Executive Order, as a further means of demonstrating a commitment to best practices. But will voluntary adoption of best practices be enough?

Accountability: The Ultimate Human-in-the-Loop

Virtually all the emerging best practices related to AI development and deployment encourage the idea of ensuring that there is always a “human-in-the-loop” – that is, maintaining human oversight in AI decision-making pipelines.[3] This concept, intended to ensure that humans – not machines – remain in control as we deploy AI in increasingly risky use cases, is at the core of how society can, and must, work to prevent AI from becoming the catastrophic nightmare many believe it could be.
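
To make the pattern concrete, here is a minimal, hypothetical sketch of a human-in-the-loop gate: a recommendation whose risk score crosses a threshold is routed to a human reviewer before any action is taken. The threshold, scores, and function names are all invented for illustration; real systems would set these as matters of policy.

```python
from dataclasses import dataclass

RISK_THRESHOLD = 0.7  # hypothetical cutoff; real thresholds are a policy decision

@dataclass
class Recommendation:
    action: str        # what the AI system proposes to do
    risk_score: float  # 0.0 (benign) to 1.0 (high stakes), from an upstream model

def human_approves(rec: Recommendation) -> bool:
    """Stand-in for a real review queue: a named person inspects the
    recommendation and explicitly signs off. The sign-off, not the
    model output, is the accountable decision."""
    answer = input(f"Approve '{rec.action}' (risk {rec.risk_score:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def execute(rec: Recommendation) -> None:
    print(f"Executing: {rec.action}")

def decide(rec: Recommendation) -> None:
    # Low-stakes actions may proceed automatically; high-stakes ones
    # always stop at a human -- the "loop" in human-in-the-loop.
    if rec.risk_score >= RISK_THRESHOLD and not human_approves(rec):
        print(f"Blocked by human reviewer: {rec.action}")
        return
    execute(rec)

decide(Recommendation(action="flag transaction for review", risk_score=0.3))
decide(Recommendation(action="freeze customer account", risk_score=0.9))
```

The design choice worth noticing is that the high-stakes branch cannot be bypassed: the model can recommend, but only a human sign-off can execute.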

Ultimately, the key to preventing disastrous outcomes lies in holding humans accountable for their actions. While AI systems may exhibit autonomous behavior, they are still the product of human design, development, and deployment decisions. By implementing robust AI governance frameworks, companies can establish clear lines of responsibility and accountability. This approach ensures that the humans involved in the AI lifecycle – from developers to decision-makers – are held responsible for the ethical, safe, and secure deployment of these technologies.

A Flexible and Future-Proof Approach

A significant percentage of Americans balk at the idea of imposing new regulations on businesses. Yet even the most regulation-averse among us realize that the stakes are incredibly high with respect to AI and agree that some reasonable guardrails should be put in place to prevent cataclysmic outcomes.

Should the current voluntary regimes be made mandatory? Is there another, perhaps less intrusive, approach that will allow humans to remain accountable for the actions of a machine? This is a global issue, and the debates should occur not just in the United States but worldwide. Many would agree that it is not hyperbole to say that the future of our species is at stake. If so, isn’t it worth a discussion?

Adopting a risk-based governance requirement for AI systems offers one way to achieve a reasonable, flexible, and future-proof approach, and there may be others. As AI technologies continue to rapidly evolve, a principles-based governance framework can adapt and remain relevant, ensuring that companies remain vigilant and accountable in their AI endeavors. By embracing AI governance as the ultimate human-in-the-loop, we can harness the transformative potential of AI while mitigating its risks and upholding the highest standards of ethical and responsible development and deployment.

[1] https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.29.pdf

[2] https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf

[3] https://www.sogolytics.com/blog/human-in-the-loop-ai/


Ken Mendelson, AIGP, CISSP, CIPP, CISA

Senior Managing Director

Ken Mendelson has spent more than 30 years at the intersection of law, information technology and public policy. As a member of the National Security Practice, Ken manages governance, risk and compliance projects and investigations, and conducts monitorships and third-party audits in connection with mitigation agreements enforced by the Committee on Foreign Investment in the United States (CFIUS). In addition, he assists established and emerging companies with implementing and maintaining cybersecurity and privacy programs by developing cybersecurity policies, procedures and guidelines, and conducting risk-based cybersecurity assessments.
