Testing vs. Monitoring: What Examiners Actually Expect

By Amber de Volk

Compliance teams talk about “testing and monitoring” as if it’s one activity. It’s not. Examiners know the difference, and they expect you to know it too.

When a regulator asks to see your testing plan and your monitoring plan, they’re asking for two separate documents, with two different cadences, two different ownership structures, and two different sets of outputs. If you hand over a single plan that blends the two, or if you can’t clearly articulate where one ends and the other begins, you’ve already raised a flag.

This post breaks down the distinction, explains what examiners are looking for, and offers a practical starting point for building both functions into your Compliance Management System (CMS).


The Core Distinction

At the highest level:

  • Testing is periodic, hands-on review of transactions, processes, or controls. It’s where you discover issues for the first time.
  • Monitoring is the ongoing, higher-level review of testing outputs, trends, exceptions, and patterns over time.

Both involve looking at data. But the cadence, depth, and purpose are different.

Testing answers the question: Is this control working right now?

Monitoring answers the question: Are we seeing patterns that suggest something is breaking, or has broken?


Testing: First Line Discovery

Testing is typically conducted by the first line of defense: the people closest to the work. It’s the initial check on whether processes and controls are functioning as designed.

What testing looks like in practice:

  • Pulling transaction samples and reviewing them against policy requirements
  • Checking that disclosures, adverse action notices, or communications are accurate and timely
  • Validating that complaint intake, escalation, and resolution follow documented procedures
  • Reviewing account opening or onboarding files for completeness and regulatory adherence

What makes testing defensible:

  • A documented testing plan that defines scope, methodology, and sample selection rationale
  • Clear periodicity (weekly, monthly, quarterly) tied to the risk profile of the activity being tested
  • Written results that capture what was reviewed, what was found, and what action was taken
  • A defensible sample methodology: here’s how I chose my samples, and here’s why
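A sample methodology is easiest to defend when the selection itself is scripted and reproducible. Here is a minimal sketch in Python; the population, sample size, and seed are illustrative assumptions, not values prescribed by any regulator:

```python
import random

def select_sample(transaction_ids, sample_size, seed):
    """Draw a reproducible random sample for periodic testing.

    Recording the seed alongside the results lets you show an examiner
    exactly how the sample was drawn, and re-draw it on demand.
    """
    rng = random.Random(seed)  # fixed seed -> same sample on every re-run
    return sorted(rng.sample(transaction_ids, sample_size))

# Hypothetical population: one quarter's transaction IDs
population = [f"TXN-{i:04d}" for i in range(1, 501)]

# Document the choices: 25 of 500 files, seed tied to the testing cycle
sample = select_sample(population, sample_size=25, seed=2024)
print(len(sample))  # 25 files selected, reproducible from the recorded seed
```

The point is not the tooling; it’s that “here’s how I chose my samples, and here’s why” becomes an artifact you can hand over, not a recollection.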

An examiner doesn’t need to see perfection. They need to see a repeatable process with documented outputs. If you say you test, and the examiner asks for your report, and it doesn’t exist, that’s a credibility problem. As Bert Friedman, formerly of the CFPB, puts it: “If it’s not written, it hasn’t happened.”


Monitoring: Second Line Oversight

Monitoring sits above testing. It’s less frequent, more analytical, and typically owned by the second line of defense or a compliance advisory function.

Where testing looks at individual transactions or controls, monitoring looks at the outputs of testing over time. It asks: What are the trends? Where are the exceptions? Are issues recurring? Is the control environment improving or degrading?

What monitoring looks like in practice:

  • Reviewing aggregated testing results on a monthly or quarterly basis
  • Tracking exception rates and identifying patterns across products, teams, or time periods
  • Assessing whether corrective actions from prior testing cycles were effective
  • Producing monitoring reports that summarize findings, trends, and recommendations
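Trend tracking of the kind described above can start very simply: compute per-period exception rates from aggregated testing results and flag when the rate moves the wrong way. A hedged sketch, where the period labels, counts, and the escalation threshold are illustrative assumptions:

```python
def exception_rates(results):
    """results: {period: (exceptions_found, items_tested)} -> {period: rate}."""
    return {period: found / tested for period, (found, tested) in results.items()}

def flag_degrading(rates, periods, threshold=0.02):
    """Flag consecutive periods where the exception rate rose by more
    than `threshold` -- a cue for second-line escalation."""
    flags = []
    for prev, cur in zip(periods, periods[1:]):
        if rates[cur] - rates[prev] > threshold:
            flags.append((prev, cur))
    return flags

# Hypothetical quarterly testing outputs: (exceptions, items tested)
testing_results = {
    "2024-Q1": (2, 100),   # 2.0% exception rate
    "2024-Q2": (3, 100),   # 3.0%
    "2024-Q3": (7, 100),   # 7.0% -- a jump worth escalating
}
quarters = ["2024-Q1", "2024-Q2", "2024-Q3"]
rates = exception_rates(testing_results)
print(flag_degrading(rates, quarters))  # [('2024-Q2', '2024-Q3')]
```

Individual tests in Q3 would each surface a handful of exceptions; only the trend view shows the control degrading quarter over quarter.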

What makes monitoring defensible:

  • A documented monitoring plan, separate from the testing plan
  • Defined cadence (typically less frequent than testing)
  • Reports that capture trends, escalations, and management responses
  • Evidence that monitoring findings feed back into risk assessments, training, or policy updates

Why the Distinction Matters to Examiners

Examiners and auditors are trained to evaluate the three lines of defense model. Testing and monitoring map to different lines, and conflating them signals that your program may not have the structural independence regulators expect.

Here’s what can go wrong when the two are blurred:

  • No independent check. If the same team tests and monitors, there’s no separation of duties. The second line isn’t actually providing oversight.
  • Gaps in documentation. A combined “testing and monitoring” plan often means one function gets documented while the other is assumed. Assumptions don’t hold up in exams.
  • Missed trends. Testing catches individual issues. Without a separate monitoring layer reviewing those results over time, systemic problems can go undetected.
  • Credibility risk. When an examiner asks for your monitoring report and you point them to your testing results, it raises questions about whether you understand your own program.

A Side-by-Side Comparison

                | Testing                                      | Monitoring
  Purpose       | Discover whether controls are working        | Identify trends and patterns over time
  Cadence       | More frequent (weekly, monthly, quarterly)   | Less frequent (monthly, quarterly, biannually)
  Ownership     | Typically first line (business/operations)   | Typically second line (compliance/advisory)
  Scope         | Individual transactions, files, or controls  | Aggregated testing outputs and exception trends
  Key output    | Testing results with findings and actions    | Monitoring reports with trends and recommendations
  Documentation | Testing plan + sample methodology + results  | Monitoring plan + reports + escalation records

Where QA/QC Fits In

Quality assurance (QA) and quality control (QC) often sit within the second line and can feel like they overlap with both testing and monitoring. In practice, QA/QC functions can support either, but they should be clearly mapped to one or both in your CMS documentation.

The key is to articulate the role. If QA reviews a sample of completed work for accuracy, that’s closer to testing. If QA tracks error rates over time and reports on trends, that’s monitoring. Define it, document it, and make sure the ownership is clear.


How to Get Started

If your organization currently has a single “testing and monitoring” plan, here’s a practical path forward:

  1. Separate your plans. Create a distinct testing plan and a distinct monitoring plan. They can reference each other, but they should be standalone documents with their own scope, cadence, and ownership.
  2. Define your samples. For testing, document your sample selection methodology. Why these transactions? Why this sample size? An examiner will ask, and you need a defensible answer.
  3. Build your monitoring reports. Even if you start with a simple quarterly summary of testing results and exception trends, having a documented monitoring output is better than having none.
  4. Assign ownership. Testing and monitoring should not be owned by the same person or team if you can avoid it. Structural independence matters.
  5. Connect findings to action. Both testing exceptions and monitoring trends should feed into your issue management process, your risk assessment, and, where relevant, your training program.
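Step 5 can be as lightweight as a single routing function that turns testing exceptions and monitoring trends into logged, tracked issues. The sketch below uses assumed severity labels and destination names; none of them come from a specific issue-management system:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    source: str    # "testing" or "monitoring"
    severity: str  # assumed labels: "low", "medium", "high"
    summary: str

@dataclass
class IssueLog:
    """Minimal issue-management intake: every finding is recorded;
    higher-severity findings are also routed for follow-up."""
    entries: list = field(default_factory=list)

    def route(self, finding):
        destinations = ["issue_log"]  # everything gets logged
        if finding.severity in ("medium", "high"):
            destinations.append("risk_assessment")
        if finding.severity == "high":
            destinations.append("training_review")
        self.entries.append((finding, destinations))
        return destinations

log = IssueLog()
print(log.route(Finding("testing", "high", "Late adverse action notice")))
# ['issue_log', 'risk_assessment', 'training_review']
```

Even a routing table this simple gives you evidence that findings feed into risk assessments and training, which is exactly what the monitoring documentation standards above call for.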

You don’t need a sophisticated system on day one. You need documented, repeatable processes with clear outputs. Start there, and build over time.


The Bottom Line

Testing and monitoring are two of the 12 pillars of a strong Compliance Management System as documented in the Equinox CMS framework. They serve different purposes, operate at different cadences, and are owned by different lines of defense.

Examiners know the difference. If your program treats them as one activity, it’s worth the time to separate them now, before your next exam or audit.
