For compliance professionals, the rise of generative AI (GenAI) feels like déjà vu. We’ve been here before—with ERP rollouts, e-discovery software, and data analytics tools. Each new technology comes with the same pitch: faster, smarter, cheaper. And each time, compliance officers are tasked with answering a more difficult question: At what cost?
Mark Mortensen’s recent piece in Harvard Business Review, “Calculating the Costs and Benefits of GenAI,” provides a framework for thinking about this balancing act. While AI undeniably creates efficiency, Mortensen cautions that organizations risk losing knowledge, engagement, and trust if they fail to carefully evaluate adoption. For compliance leaders, the implications are profound.
Today we consider five key takeaways from the article for compliance professionals, each one an area where AI’s promise and peril intersect.
1. Efficiency Gains Must Be Weighed Against Knowledge Loss
One of AI’s greatest selling points is speed. It can review contracts in minutes, summarize regulatory changes instantly, and generate risk assessments that previously took weeks. For compliance departments that are perpetually under-resourced, this is a tantalizing offer.
Yet here lies the first hidden cost: learning. Mortensen reminds us that knowledge is cemented and institutional expertise deepened by the process of struggling with a problem: the back-and-forth revisions of a policy draft, the iterative risk-mapping discussions, even the time spent combing through dense regulations. If compliance teams begin to outsource too much of that process to AI, the organization risks eroding the very expertise it relies on to interpret nuance.
Think about it this way: an AI might draft your anti-bribery training materials, but without human engagement in the process, your team loses the chance to sharpen its understanding of new FCPA enforcement trends. Over time, this erodes your compliance program’s intellectual resilience.
The lesson for compliance leaders is clear: use AI to accelerate, not replace, your team’s learning. Make sure staff remain actively engaged in the interpretive process. AI should tee up information, not become the final arbiter of compliance knowledge.
2. Short-Term Problem Solving Can Inhibit Long-Term Skill Development
“Practice makes perfect” is not just a proverb; it is a professional truth. Drafting compliance reports builds writing skills, testing control frameworks sharpens analytical ability, and grappling with regulatory ambiguity builds judgment.
But if compliance teams lean too heavily on AI to generate audit memos or to identify anomalies in financial data, they risk undermining their own development. Mortensen points out that when we hand tasks to AI, we sacrifice the chance to strengthen the very skills we will need tomorrow.
Consider a scenario where AI consistently handles first drafts of risk assessments. Compliance officers may grow accustomed to editing AI output rather than developing their own structured thinking. Over time, the skill gap widens. This leaves organizations dependent on tools that cannot be held accountable when regulators ask tough questions.
From a compliance standpoint, this connects directly to program sustainability. DOJ guidance emphasizes the need for continuous program improvement and the development of compliance capabilities. A department that loses skills to AI outsourcing may look efficient on paper but becomes brittle in practice.
Compliance leaders should strike a balance by reserving certain core tasks, like drafting root cause analyses or preparing investigation reports, for human-led execution, even if AI could technically do them faster. These are the muscle-building exercises of compliance, and like any workout, skipping them leads to long-term weakness.
3. AI Risks Weakening Relationships and Organizational Trust
Compliance does not happen in a vacuum. It thrives or fails based on relationships. Internal trust with business units, credibility with senior leadership, and even informal rapport built during brainstorming sessions all matter.
AI, however, threatens to reduce these interactions. Mortensen notes that the computational power of AI allows individuals to solve problems alone that previously required teams. While efficient, this independence comes at a cost: fewer interpersonal touchpoints, weaker social ties, and ultimately, reduced trust.
For compliance, this risk is especially acute. Much of our effectiveness hinges on being seen as collaborative partners, not bureaucratic enforcers. If AI reduces the frequency of conversations around risk assessments, policy updates, or investigations, compliance officers may lose opportunities to build influence. Worse, an “AI does it all” approach may reinforce perceptions that compliance is transactional rather than relational.
The takeaway here is that AI should never replace human dialogue in compliance. Use it to free up time so compliance officers can spend more energy, not less, building relationships with line managers, auditors, and employees. The culture of compliance is rooted in trust, and no algorithm can generate that.
4. Engagement and Ownership Can Decline with Over-Automation
Engagement matters. Mortensen defines it as being psychologically present in the work. For compliance professionals, engagement translates into vigilance: spotting red flags, questioning anomalies, and challenging assumptions.
But AI introduces a risk of disengagement. When it summarizes investigation interviews or drafts compliance dashboards, humans can become passive consumers rather than active participants. Over time, “good enough” replaces “deep enough.”
This erosion of ownership is dangerous for compliance. Regulators increasingly expect companies to demonstrate not only robust processes but also genuine cultural buy-in. If compliance staff are disengaged because AI has taken over too many cognitive functions, the program risks becoming a paper tiger: form without substance.
To counter this, compliance leaders should intentionally design workflows where humans must interpret and add value to AI outputs. For example, AI can generate a first-pass risk heat map, but compliance officers should validate and adjust it based on local context and business realities. That layer of judgment keeps engagement alive and maintains a sense of accountability.
Ultimately, compliance is about judgment, not just information. AI can support but never substitute for human ownership of ethical decision-making.
5. Homogenization Threatens Compliance Program Uniqueness
Every compliance program reflects its company’s unique culture, risks, and leadership voice. Mortensen warns that because large language models are convergent technologies, they produce standardized answers. Leaders who rely on AI for memos, presentations, or policies risk erasing their distinctive tone and voice.
For compliance professionals, this risk translates into a loss of authenticity. Regulators, employees, and stakeholders can quickly tell the difference between a policy that reflects real company values and one that reads like a generic AI template. Over time, over-reliance on AI can strip a compliance program of its personality and, with it, its credibility.
The danger goes deeper. If multiple companies rely on AI to draft similar codes of conduct, policies may start to look indistinguishable. That creates industry-wide convergence at a time when regulators are looking for tailored programs that reflect specific risks. In effect, AI could make compliance programs less defensible, not more.
The path forward is to use AI as a scaffolding tool, not as a finished product. Compliance officers should inject their organization’s unique voice, industry-specific risks, and leadership tone into every AI-assisted document. Authenticity is non-negotiable in compliance. AI can never be allowed to flatten it.
AI Audits for Compliance Leaders
Mortensen’s framework for an “AI value audit” is particularly relevant for compliance. He suggests three steps: (1) determine the types of value a task creates, (2) prioritize and optimize them, and (3) continually reassess with a “milk test” to ensure the value hasn’t expired.
For compliance, this means asking: Does AI enhance our program without undermining knowledge, skills, trust, engagement, or authenticity? If not, the short-term benefits may not be worth the long-term costs.
AI is here to stay, and compliance officers must learn to harness it. But like every tool before it, AI is not a replacement for judgment, culture, and leadership. It is an assistant.




