Artificial Intelligence Governance – First, Build On What You Have

Ken Mendelson AIGP, CISSP, CIPP, CISA June 12, 2024

As artificial intelligence (AI) continues to advance rapidly, organizations of all types are seeking to deploy this powerful tool to increase the effectiveness and efficiency of their operations, improve service to their customers, and strengthen their bottom lines. Some companies are going “all-in” with AI tools, while others are just “testing the waters” with limited deployments. Even companies that are only dipping their toes into the AI pond must prioritize AI governance to ensure responsible and compliant use of these powerful technologies.

What Does “Compliant Use” Mean?

Governments worldwide are taking action to regulate AI. Within the United States, the charge is being led at the state level. As of May 2024, 13 states[1] have passed laws regulating the use of AI in political advertising, aiming to combat the spread of misinformation and protect the integrity of elections. During the past 12 months, over 200 new laws have been proposed across most US states to regulate AI technology.[2] The recently passed Colorado AI Act takes a comprehensive, risk-based approach (like the EU AI Act) but does not take effect until February 1, 2026.

This period of rapid technological and legislative change can be confusing. Some organizations are taking a “wait-and-see” approach, avoiding AI governance considerations until things are more settled. This is a mistake. Addressing AI governance now will significantly reduce the risk of regulatory entanglements and other potential liability down the road. Given the current absence of specific federal regulatory AI requirements, there are steps that should be taken to demonstrate your intent to follow best practices and to encourage a culture of compliance within your organization. In addition, taking these steps now will future-proof your compliance efforts. By establishing a robust program, you’ll be able to seamlessly incorporate new regulatory requirements as they arise, rather than having to overhaul everything from the ground up. This proactive approach saves time, effort, and resources down the line.

Don’t Reinvent the Wheel – Improve the Wheel You Have

While most AI legislation currently focuses on consumer protection, businesses, particularly those in regulated industries like financial services, telecommunications, healthcare, and those subject to SEC regulations, must also ensure their AI systems operate in a manner that is compliant with existing regulations – even if those regulations don’t specifically mention AI. Rather than creating entirely new governance structures, companies should integrate AI considerations into their existing governance, risk-management, and compliance programs, adding AI-specific controls when it makes sense to do so.[3]  This reasonable approach ensures a consistent and comprehensive framework for managing AI risks and aligns with regulatory expectations, both now and in the future. This can be accomplished using the following approach:

I. Incorporate AI Considerations into Existing Governance and Risk Management Programs – Adding as Needed

To ensure compliance with current and forthcoming AI regulations, businesses must adopt a proactive approach to AI governance. The optimal strategy is to seamlessly integrate AI considerations into existing governance and risk management frameworks. This involves:

A. Conducting Comprehensive AI Risk Assessments

Incorporate AI tools into your existing risk assessment process, meticulously evaluating assets, vulnerabilities, potential impacts (both internal and external), and likelihood of occurrence. This holistic assessment will enable you to identify and mitigate AI-related risks proactively.

B. Updating Existing Policies and Procedures

Revise and update existing IT policies and procedures to encompass the entire AI lifecycle, including development, deployment, and ongoing monitoring. Clearly defined guidelines will ensure consistent and compliant AI implementation across your organization.

C. Ensuring AI Use Cases Comply with Existing Data Privacy and Security Regulations

When using personal or sensitive data for AI, companies must comply with relevant data protection regulations (e.g., GDPR, CCPA, etc.). This includes:

  1. Ensuring robust data security measures are applied to AI systems consistent with a recognized data security framework (e.g., NIST CSF, CIS, ISO, etc.);
  2. Requiring that proper consent is obtained before processing data into AI model training sets; and
  3. Using techniques like data anonymization and other privacy enhancing technologies to protect data used in AI models.
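To make the third item concrete, pseudonymization can be as simple as replacing direct identifiers with salted one-way hashes before a record enters an AI training set. The sketch below is purely illustrative (the field names, salt handling, and choice of SHA-256 are assumptions for the example, not a recommendation of any particular toolset); production systems should use vetted privacy-enhancing libraries and treat the salt as a managed secret.

```python
import hashlib

def pseudonymize(record, id_fields, salt):
    """Return a copy of record with direct identifiers replaced by
    salted SHA-256 digests, so the training set carries no raw
    personal identifiers. Illustrative sketch only."""
    out = dict(record)
    for field in id_fields:
        if field in out:
            digest = hashlib.sha256(
                (salt + str(out[field])).encode("utf-8")
            ).hexdigest()
            out[field] = digest
    return out

# Example: strip name and email before a record joins a training set.
raw = {"name": "Jane Doe", "email": "jane@example.com", "age_band": "30-39"}
clean = pseudonymize(raw, id_fields=["name", "email"], salt="org-secret-salt")
```

Note that salted hashing alone is pseudonymization, not anonymization: re-identification remains possible for whoever holds the salt, which is why regulations such as the GDPR still treat pseudonymized data as personal data.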

D. Monitoring Regulatory Developments

All companies should monitor regulatory developments that affect their businesses, and adding AI regulation monitoring to that process is a best practice. This is a rapidly evolving discipline, with new laws and guidelines emerging at the global, federal, and state levels.[4] Companies should:

  1. Stay informed about regulatory changes impacting their AI use cases.
  2. Participate in industry initiatives and collaborate with stakeholders to align with emerging standards.
  3. Consult with legal counsel to ensure the AI compliance program adheres to applicable laws and regulations.

II. The Add-Ons…

In addition to updating existing governance structures and tools to include AI considerations, companies should begin adding certain AI-focused controls to their governance programs. This should include establishing a diverse AI Steering Committee (“AISC”) to oversee AI governance efforts. Ideally, the AISC would have cross-functional representation that includes experts from legal, compliance, risk management, data science, and relevant business units, ensuring a holistic approach to AI governance.

The AISC should be tasked with using the NIST AI Risk Management Framework[5] and supporting documentation[6] to address and remediate AI risks of particular relevance to the organization and its AI use cases. In addition, the AISC’s oversight function should include, but not be limited to:

  1. Ensuring Transparency and Explainability – Regulations increasingly require AI systems to be transparent and explainable, meaning their decision-making processes can be understood and audited. This will require the use of interpretable AI models or explanation techniques to make AI decisions explainable.
  2. Addressing Ethical Considerations – The AISC should ensure that deployed AI systems align with ethical principles and organizational values. To do so, the AISC should implement ethical frameworks and guidelines for the organization’s development and use of AI technologies, assess AI systems for potential biases, privacy violations, or other ethical risks, and involve human experts to validate AI decisions and ensure ethical soundness.

III. And Of Course, Document Everything

Documenting all AI governance and procedural efforts is crucial, as companies may need to demonstrate their due diligence and responsible AI practices to regulators, auditors, or other stakeholders. Such documentation should cover all data sources, algorithms, and decision processes for AI systems. Maintaining documentation in a centralized repository streamlines updates and maintenance, and it safeguards operational continuity during personnel transitions: when key employees depart, their replacements can access and leverage the centralized knowledge base with minimal disruption.

While internal expertise is essential, companies may want to consider leveraging external AI governance experts to supplement their efforts. These experts can provide valuable insights, best practices, and guidance on navigating the complex landscape of emerging AI regulations and ethical considerations.

By integrating AI governance into existing compliance frameworks, ensuring transparency and ethical use, protecting data privacy, and continuously monitoring regulatory developments, businesses can mitigate risks, build trust with stakeholders and regulators, and responsibly harness the power of AI technologies.  By taking these proactive measures to stay ahead of the curve, businesses can establish a robust AI governance framework that not only ensures continuous regulatory compliance but also positions the organization as a responsible and forward-thinking leader in the AI landscape.

[1] These states include California, New York, Texas, Florida, Illinois, Massachusetts, New Jersey, Ohio, Pennsylvania, Virginia, Washington, Colorado, and Oregon.

[2] https://www.dglaw.com/utah-colorado-and-other-states-lead-groundbreaking-ai-legislation-in-u-s/#:~:text=Since%20the%20beginning%20of%202023,with%20many%20others%20close%20behind.

[3] https://www.morganlewis.com/pubs/2024/04/existing-and-proposed-federal-ai-regulation-in-the-united-states

[4] https://iapp.org/resources/article/global-ai-legislation-tracker/

[5] https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf

[6] Such as https://airc.nist.gov/docs/NIST.AI.600-1.GenAI-Profile.ipd.pdf


Ken Mendelson AIGP, CISSP, CIPP, CISA

Senior Managing Director

Ken Mendelson has spent more than 30 years at the intersection of law, information technology and public policy. As a member of the National Security Practice, Ken manages governance, risk and compliance projects and investigations, and conducts monitorships and third-party audits in connection with mitigation agreements enforced by the Committee on Foreign Investment in the United States (CFIUS). In addition, he assists established and emerging companies with implementing and maintaining cybersecurity and privacy programs by developing cybersecurity policies, procedures and guidelines, and conducting risk-based cybersecurity assessments.
