Why Corporate AI Policies Fail: Neglecting Cross-Functional AI Governance

Allison Spagnolo CIPP / Jacqueline Pluta ACAMS / Morgan Maiorino November 14, 2024

When AI tools first saw widespread use, their ability to produce quick and seemingly trustworthy outputs distracted many companies and individuals from the dangers of treating them as a "new toy." Once the novelty wore off, users were left with the reality that AI models are simply a tool to assist in finding a solution, not the solution itself. Focusing too much on the AI component of a solution while neglecting its management and monitoring can lead to serious issues, including biased or inaccurate results, data breaches, and legal liability.

For example, consider a company that rapidly integrates AI into its hiring process to expedite candidate screening. Initially, the AI tool appears to perform exceptionally well, quickly sifting through thousands of applications and identifying top candidates. Over time, however, a troubling pattern emerges: the system begins to exhibit significant bias, disproportionately favoring applicants from certain backgrounds and inadvertently discriminating against others. The bias results in a public relations disaster, accusations of unfair hiring practices, and legal challenges. The company's reputation takes a severe hit, and the financial cost of rectifying the situation is substantial.

This example underscores the critical importance of implementing a robust cross-functional AI risk management framework to identify, monitor, and mitigate potential issues before they escalate into major problems. AI is evolving so quickly that many companies are developing and deploying the technology before fully building out the related risk management framework. It is never too late, however, to work with all affected departments to implement a robust corporate AI policy that addresses AI risk management and monitoring. This includes setting up clear AI governance structures, defining roles and responsibilities, and establishing protocols for regular audits and reviews.

Integrating cross-functional human oversight into the AI lifecycle allows leadership to manage proactively and keep a pulse on potential issues. Human reviewers can help catch errors that automated systems might miss and provide valuable context that data-driven models might overlook.

A key element of data governance is establishing clear AI policies and procedures on how data is collected, stored, and used. Monitoring of these policies should be risk-based and continuous, encompassing metrics such as security, accuracy, and fairness. By doing so, organizations can proactively identify and address new issues, update controls on existing ones, and ensure the AI system remains reliable and effective.
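As a concrete illustration of fairness monitoring, a review team can periodically compare selection rates across applicant groups and flag large gaps for human review. The Python sketch below assumes hypothetical screening records and applies the widely used "four-fifths" heuristic from U.S. employment-selection guidance; the record format, group labels, and threshold are illustrative only, not a prescribed implementation.

```python
# A minimal sketch of a fairness check for an AI screening tool.
# Assumes hypothetical (group, selected) records; names and the 0.8
# threshold are illustrative, not from any specific framework.
from collections import defaultdict

def selection_rates(records):
    """Return the per-group selection rate from (group, selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(records):
    """Ratio of the lowest to the highest group selection rate.

    A common heuristic (the "four-fifths rule") treats ratios below 0.8
    as a signal of potential adverse impact that warrants human review.
    """
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical screening outcomes for two applicant groups.
    records = [("A", True), ("A", True), ("A", False), ("A", True),
               ("B", True), ("B", False), ("B", False), ("B", False)]
    ratio = disparate_impact_ratio(records)
    print(f"Selection rates: {selection_rates(records)}")
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Flag for human review: possible adverse impact")
```

A check like this catches only one narrow kind of bias; in practice it would be one metric among several (accuracy, drift, security events) tracked on a risk-based cadence and reviewed by people, not acted on automatically.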

Whether your organization is just starting to explore AI or seeking to refine existing AI policies, our experts are ready to help ensure your organization implements a responsible and secure AI strategy. Our team is dedicated to partnering with businesses to navigate the complexities of AI governance, privacy, and compliance.

Allison Spagnolo CIPP

Chief Privacy Officer, Senior Managing Director

As Chief Privacy Officer and Senior Managing Director, Allison Spagnolo leads the Artificial Intelligence (AI) practice, overseeing governance and compliance for clients' AI usage, as well as compliance engagements across sectors including financial institutions, healthcare, and government contractors. Her work includes reviewing anti-money laundering (AML) and sanctions (OFAC) issues for global banks and multinational companies, and advising on financial crime compliance issues specific to cryptocurrency exchanges and fintech companies. She has traveled extensively in Europe and Asia to lead and conduct on-site inspections and reviews related to NYDFS and Federal Reserve monitorships, BSA/AML audits, and other compliance matters.

Jacqueline Pluta ACAMS

Analyst

Jacqueline Pluta is an analyst with Guidepost Solutions working on a variety of issues across multiple industries. Most recently, Ms. Pluta completed a compliance review for a community bank with 13 locations, where she developed several review procedures to test the bank’s compliance with regulatory requirements including the Equal Credit Opportunity Act and Truth in Savings Act.

Morgan Maiorino

Senior Analyst

Morgan Maiorino is a compliance and investigations analyst at Guidepost Solutions. She supports the Investigations + Business Intelligence, Corporate Risk + Compliance, Institutional Integrity, and Immigration + Border Services practice groups.
