Every compliance program looks good on paper. Policies are written. Org charts show a compliance function. Training is listed in the employee handbook. From a distance, all the boxes appear checked.
But examiners evaluate what actually happens. And the gap between what a program says it does and what it actually does is one of the fastest ways to lose credibility during an exam.
As my friend Bert Friedman, a former CFPB examiner, put it: “I can defend a stale date. What I can’t defend is a control that only exists on paper. Once practice diverges from policy, you’ve lost credibility and invited additional scrutiny.”
Here are five signs that your compliance program may be more form than substance, and what to do about each one.
Compliance Program Gaps Examiners Spot Immediately
1. Your Policies Don’t Match What People Actually Do
This is the most common and most consequential gap. A policy says one thing. The team does another. Maybe the policy was written years ago and never updated. Maybe a process changed but nobody revised the documentation. Maybe the policy was aspirational from the start.
Whatever the reason, the result is the same: when an examiner compares your policy to your actual operations, the mismatch is immediately visible.
Why it matters:
An outdated policy is a governance gap. A policy that isn’t followed is an operational failure, and it may involve consumer harm. Examiners will always prioritize what actually happens over what’s written on paper.
What to do:
- Conduct a policy-to-practice audit. For each major policy, ask: Is this what we actually do? If the answer is no, either update the policy or change the practice.
- Build policy reviews into your annual calendar. Don’t wait for an exam to discover the drift.
- Use version control with effective dates and approval dates so the history of changes is clear.
- Reference job titles in policies, not individual names. When someone leaves, the policy shouldn’t break.
2. You Have No Monitoring Reports
You say you monitor. Your policy describes a monitoring function. But when asked to produce a monitoring report, there’s nothing to show.
This is a credibility problem. An examiner who asks for documentation and gets a verbal explanation instead has every reason to dig deeper. As Bert says, “If it’s not written, it hasn’t happened.”
Why it matters:
Monitoring is one of the core pillars of a Compliance Management System. If you can’t produce documented outputs, the examiner has no way to verify that monitoring is actually occurring. And once credibility is lost, the scope of scrutiny expands.
What to do:
- Create a monitoring plan that defines what you’re reviewing, how often, and who’s responsible.
- Produce written monitoring reports at your defined cadence, even if they’re simple. A quarterly summary of testing results, exception trends, and management responses is a strong starting point.
- Store reports in a centralized, accessible location so they can be produced quickly during an exam.
- Make sure monitoring reports capture not just what was reviewed, but what was found and what action was taken.
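If it helps to make the expectation concrete, the elements above can be sketched as a minimal record structure. This is an illustrative Python sketch only; the field names are assumptions, not a regulatory standard.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class MonitoringReport:
    """Minimal quarterly monitoring report record (illustrative fields)."""
    period: str            # e.g. "2024-Q3"
    reviewed: list         # what was reviewed
    findings: list         # what was found
    actions: list          # what action was taken in response
    owner: str             # who is responsible for the review
    report_date: date = field(default_factory=date.today)

    def is_complete(self) -> bool:
        # A report with no named owner or nothing reviewed is the
        # "verbal explanation" problem in document form.
        return bool(self.reviewed and self.owner)
```

The point of a structure like this is not the tooling; it is that every report answers the same questions (what, when, who, found what, did what) so reports can be produced and compared quickly during an exam.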
3. Root Cause Analysis Is Missing or Superficial
You track issues. You track complaints. But the root cause field says “human error” on every entry, or it’s left blank entirely.
Root cause analysis is how examiners assess whether your program is self-correcting. If every issue is attributed to “human error” without further explanation, or if root cause is skipped altogether, it signals that you’re treating symptoms without diagnosing the underlying problem.
Why it matters:
Without meaningful root cause analysis, the same issues will recur. Examiners know this. They’ll look at your issue and complaint logs and ask whether the same types of problems are showing up repeatedly. If they are, and root cause was never properly identified, that’s a pattern of unaddressed risk.
What to do:
- For every issue and complaint, document the root cause using a consistent framework. At minimum, determine whether the cause was a policy failure, a procedural or execution failure, or a training gap.
- Make corrective actions specific to the root cause. A policy failure requires a policy update. A procedural failure requires a process change. A training gap requires targeted retraining.
- Review root cause trends quarterly. Are the same categories of failure recurring? If so, your corrective actions aren’t working, and that needs to be addressed.
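The quarterly trend review above is simple enough to automate against an issue log. Here is a minimal sketch, assuming issues are logged with a `root_cause` field using the three categories named above; the threshold and field names are illustrative choices, not a prescribed standard.

```python
from collections import Counter

# Illustrative root-cause categories from the framework above
CATEGORIES = {"policy failure", "procedural failure", "training gap"}


def recurring_causes(issues, threshold=3):
    """Return root-cause categories appearing at or above `threshold`
    in a review period -- a signal that corrective actions for that
    category are not working."""
    counts = Counter(
        i["root_cause"] for i in issues if i.get("root_cause") in CATEGORIES
    )
    return sorted(cat for cat, n in counts.items() if n >= threshold)
```

Run against one quarter's issues, any category this returns deserves a closer look at whether the corrective actions taken actually matched the root cause.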
4. Your Board Doesn’t See Compliance Reporting
Board oversight is the first pillar of a CMS for a reason. If compliance risk isn’t reaching the board, or if board meeting minutes show no evidence of compliance discussion, examiners will flag a tone-at-the-top problem before they review anything else.
This doesn’t mean the board needs to review every policy or approve every procedure. It means the board should receive regular, documented reporting on the state of the compliance program, key risks, open issues, and material developments.
Why it matters:
Tone at the top is not a soft concept in regulatory exams. It’s a structural expectation. If the board isn’t engaged, examiners will question whether compliance has the support, budget, and authority it needs. And if a compliance officer has been escalating resource concerns that aren’t documented in board materials, the organization has a bigger problem.
What to do:
- Establish a regular cadence for compliance reporting to the board (quarterly is typical).
- Document board discussions in meeting minutes, including questions asked, decisions made, and follow-up items.
- Include key metrics: open issues, complaint trends, testing results, risk assessment updates, and any material regulatory changes.
- If the board has approved policies, document the approval with a resolution and date.
5. Your Training Program Is Off-the-Shelf and Unmeasured
You purchased a compliance training platform. Everyone completed the annual module. The completion rate is 98%. On paper, training is covered.
But examiners look deeper. Was the training customized to your products and your risk profile? Did it address the specific regulations that apply to your business? Do you measure whether it actually changed behavior, or just whether people clicked through?
Off-the-shelf training, even from a reputable provider, is rarely sufficient on its own. It covers general concepts but doesn’t address the specific controls, products, and risks that are unique to your organization.
Why it matters:
Training is often used to close control gaps. When a policy failure or human error is identified, the corrective action frequently includes retraining. But if your training program isn’t customized and isn’t measured for effectiveness, that corrective action doesn’t actually close the gap. Examiners will probe this.
What to do:
- Supplement off-the-shelf training with modules customized to your products, policies, and regulatory obligations.
- Define roles and responsibilities for training: who needs what, and when.
- Measure effectiveness beyond completion rates. This could include post-training assessments, observed behavior changes, or reduction in related errors.
- Document retraining protocols for when issues or complaints point to a training gap.
- Keep records that show not just that training occurred, but what it covered and how its impact was assessed.
The Common Thread
All five of these signs point to the same underlying problem: a program that was designed to look compliant rather than to operate compliantly.
Examiners are trained to see the difference. A program that’s strong on paper but weak in practice will generate findings, erode credibility, and invite deeper scrutiny. A program that’s honest about its maturity, documents its work, and demonstrates continuous improvement will earn trust, even if it isn’t perfect.
Substance always beats form. That’s not just a best practice; it’s how regulators evaluate your program.
Where to Start
If you recognized your program in one or more of these signs, don’t try to fix everything at once. Pick the one that carries the most risk for your organization today, assign an owner, and start closing the gap.
Compliance maturity is built incrementally. The teams that perform best in exams aren’t the ones that overhauled everything overnight. They’re the ones that started small, documented their progress, and built a repeatable system over time.