
Hi {{first_name | there}},
AI is everywhere right now — and I mean everywhere. It’s crazy, right?
My inbox is full of it. LinkedIn is full of it. Every vendor selling to firms has an "AI-powered" something. Too much.
And the accounting profession? We are genuinely at the start of a massive transformation that is moving faster than most of us expected.
But we are not even close to ready.
Here’s the thing that nags me the most. Most of the conversation around AI is about:
Speed and efficiency,
Layoffs, and
Certain professions becoming irrelevant.
And that is not wrong; there is real truth there. Some of these are eventualities. I can already see it happening, and more is coming. So, brace for impact.
However, aren’t we supposed to be the skeptics?
So, why not do this?
Take a step back and apply professional judgment before stepping in.
Because very few people are talking about what AI means for your firm’s safety:
Where is the AI law (macro)?
Where is your firm’s AI policy (micro)?
And that gap SERIOUSLY worries me.
This newsletter issue is about that gap — specifically, the quality risk that sits at the intersection of AI use and your firm’s System of Quality Management.
If you don’t tackle this risk right now, you are not just behind the curve. You have a massive, undocumented risk sitting in plain sight.
Let's dive in,… first with a question for you:
Where and how is AI already being used in your firm today — and who decides which AI tools are acceptable to use on client work?
QUALITY RISK OR TOPIC
Quality fails (and your risk explodes) the moment the first unapproved prompt is sent.
Ask yourself this:
Right now, today — do your staff use AI tools when working on client files?
If the answer is yes, or probably, then:
You have a quality risk you didn’t even know about.
One more question:
Do you have any of these:
A written policy,
Training on responsible use of AI,
Procedures to review AI outputs before they enter the file, or
A SOC 2 Type 2 report from the AI vendors you use?
If you don’t (and most firms don’t), you already have a HUGE quality risk (and real business exposure that you must worry about).
Not a theoretical risk. A real one that the quality management standards require you to identify and document.
Quality Risk
Use this quality risk definition as a starting point for your firm’s SoQM, and customize it as needed.
TECH-QR-001:
Firm personnel use AI tools without defined policies, approved platforms, or validation procedures, resulting in confidential client information being compromised and/or unreliable outputs being used in engagement work, leading to potential breaches of confidentiality, unsupported conclusions, non-compliance with professional standards, and increased risk of regulatory sanctions and reputational damage.
Under the quality management standard, your firm must identify and assess quality risks — the conditions or circumstances that could result in the firm not achieving quality.
An unmanaged AI environment is exactly that:
A condition that could lead to unreliable outputs, confidentiality breaches, or unsupported professional conclusions.
The standard does not give you a list of risks to identify. That work is yours.
But the inspector absolutely will ask.
And if your answer is that AI is on everyone's laptop and you have no policy, that is a finding — full stop.

INSIGHTS, INNOVATION AND SYSTEMS
The Quality Management (QM) Standards are unambiguous on this point, and they use nearly identical language. Let’s look at what three of the standards say.
| International Firms — International Standard on Quality Management (ISQM) 1 |
|---|
| Para A101: “The firm may specifically prohibit the use of IT applications or features of IT applications until such time that it has been determined that they operate appropriately and have been approved for use by the firm. Alternatively, the firm may establish policies or procedures to address circumstances when the engagement team uses an IT application that is not approved by the firm. Such policies or procedures may require the engagement team to determine that the IT application is appropriate for use prior to using it on the engagement, through…” |
| Canadian Firms — Canadian Standard on Quality Management (CSQM) 1 |
|---|
| The requirement is identical. Para A101: “The firm may specifically prohibit the use of IT applications or features of IT applications until such time that it has been determined that they operate appropriately and have been approved for use by the firm. Alternatively, the firm may establish policies or procedures to address circumstances when the engagement team uses an IT application that is not approved by the firm…” |
| US Firms — Statements on Quality Management Standards (SQMS) — Section 10 |
|---|
| The US Standard requires the firm to establish a culture throughout the firm that recognizes and reinforces quality. Para A106: “The firm may specifically prohibit the use of IT applications or features of IT applications until such time that it has been determined that they operate appropriately and have been approved for use by the firm. Alternatively, the firm may establish policies or procedures to address circumstances in which the engagement team uses an IT application that is not approved by the firm. Such policies or procedures may require the engagement team to determine that the IT application is appropriate for use prior to using it on the engagement, through…” |
The message from all three standards is clear and consistent.
The Risk Comes Alive at the Prompt.
Here is something most practitioners don't think through: the confidentiality risk does not begin when something goes wrong. It begins much, much earlier (see below).
Some AI tools — including popular ones — retain conversation data for training or analytics purposes.
Some have third-party data-sharing arrangements buried in the terms of service.
Without a defined list of approved platforms, your team has no way to know the difference.
This is why the quality management standards require a firm to establish processes that support managing the confidentiality of client information.
Using AI platforms without data governance is not just a privacy concern. It is a big SoQM issue.
What this means in practice:
When a staff member pastes a client's trial balance into an unapproved AI tool to "speed up" the analysis — that is a confidentiality event.
When a senior uses ChatGPT to draft a management letter comment and pastes the output directly into the file without reviewing it — that is an unsupported conclusion.
When a manager asks AI to summarize audit evidence and treats the summary as the evidence — that is a documentation failure.
When a partner approves an AI-assisted file without knowing which parts were AI-generated — that is a sign-off without oversight.
None of these require bad intent. They only require the absence of a policy, and that absence is the risk.
There Is No Such Thing as a Fully Automated Audit or Assurance Engagement
I cringe when I read LinkedIn posts from CPAs claiming that they are automating entire audits.
If entire audits are automated, then what must CPAs do — just collect fees?
This second gap is subtler, and I think it is the one that will cause massive problems in the next few years:
The Assumption That AI Outputs Are Fully Reliable Without Any Human Intervention.
AI outputs can look authoritative. They are grammatically clean, logically structured, and fast.
But, they can also be wrong — confidently and completely wrong.
In the absence of a validation step, there is a real risk that inexperienced staff (and sometimes experienced ones) rely fully on AI-generated conclusions without applying independent judgment.
The perception of reliability in AI outputs, combined with time pressure in engagements: that is what bothers me the most.
That is not an AI problem. That is a quality management problem.
The QM standard is explicit on this: professional judgment requires critical thinking, and AI does not apply judgment (it mostly applies math and statistics, at least as of now).
Humans cannot (or must not) bypass the “professional judgment” step.
If your firm does not have a defined validation requirement (a step that says AI outputs must be reviewed before they enter the engagement file), then you have a gap in how professional judgment is exercised.
And if that gap ever surfaces in a claim or inspection, things can escalate very quickly.
Illustrative scenario:
A small regional firm adopted three AI tools across different teams — one for drafting, one for data extraction, one for client communications. No one had approved them. No one had prohibited them either. They had just appeared, one by one, because they worked and someone recommended “a cool AI tool that we can use”.
During a quality review, the reviewer asked the engagement partner a straightforward question: "What happens inside these tools, and what policies govern their use?"
The partner could explain the outputs. But he was not aware of how the data was controlled or exposed.
The review finding was not about a data breach or a bad output. It was about a firm operating without awareness of its own risk environment: a firm that had not identified, assessed, or responded to an AI-related quality risk as required under the SoQM.
The fix was not complicated. But it cost them a finding that could have been avoided with one internal survey and a one-page policy.
FAQ
We use AI tools internally, but we never put client data into them. Does the policy gap still apply to us?
First, a counter-question: can you prove you never put client data into AI tools?
The honest answer depends on whether you can.
Most firm leaders think this way until they get questioned. A quick internal survey will often reveal that staff have been using AI in ways that leadership wasn't aware of — not out of bad intent, but because no one told them the boundaries.
More importantly, the quality risk exists whether or not a breach has occurred. An undefined AI environment is itself that risk condition. The absence of a historical incident does not mean the risk is absent. It means you haven’t detected it, or it hasn’t surfaced yet.
If you genuinely have zero AI usage on client work (unlikely), document that.
A short written directive confirming prohibited use is still a policy — and it is still a quality risk response. Right now, most firms have neither.
RESOURCE OR ACTION ITEM
Start NOW.
The single biggest mistake I see firms make here is waiting for the perfect time: for AI laws to come into effect, for example, or for a risk to escalate.
Or maybe they want a comprehensive AI policy — something reviewed by legal, approved by partners, formatted and filed.
And while they are waiting, the big AI companies are getting one step closer to agentic AI, and firm staff are using AI on live client files with zero guardrails.
The standard does not require perfection.
It requires action — that quality risks are identified, assessed, and responded to — and that the response be communicated and monitored. That is it.
Not sure where to begin?
I have put together an AI policy that walks you through the minimum elements your firm must cover to satisfy the SoQM quality risk response requirements.
Download this free, customizable AI policy here:

Actions you can take this week.
Keep it simple but intentional:
Survey your team — Find out what AI tools your staff are already using. You cannot manage what you don’t know.
Issue this AI policy — Define approved platforms, explicitly prohibit unapproved ones, and set a basic validation requirement. One page. This week.
Host a 15-minute team conversation — not a training session, just a conversation — about what THIS quality risk means in the context of each person's day-to-day role in your firm.
Add an AI use declaration to engagement planning — A simple checkbox ("AI tools and outputs used in this engagement were subject to review: Yes / No / Specify") creates accountability at the file level.
Remember, if it’s happening repeatedly, it’s no longer an exception. It’s a system failure.
Final thought
Your AI policy does not need to be perfect.
It needs to exist, be communicated, and be monitored. That is the standard. That is what protects your firm.
Do this, and let me know how it goes — I’ll be waiting for your email.
The firms that get this wrong won't know it — until a regulator asks or a claim arises.
If you like this newsletter, consider joining The ARQ — Assurance, Risk and Quality Network for SMP CPA and CA Firms.
It’s free.
You will get:
1 hour of learning credit,
1 digital certificate,
Peer insights and practical resources, and
Networking opportunities.
Click the image below to register for the next session of The ARQ.
If you have any feedback or questions, write to me at [email protected], and I will personally respond to your email.
Until next time,
Athreya

Join us for the full experience.
Thanks for reading this issue of The AQRM Compass.
If you would like to try our affordable, yet powerful AQRM technology and consulting solutions, get started here.
Professional Disclaimer:
This email is intended as a professional outreach and knowledge sharing initiative that is consistent with applicable CPA Codes of Professional Conduct. If you prefer not to receive any further communications, please "unsubscribe" using the link below or email us, and we will immediately remove you from our list. Audimatiq Consulting Inc. is an independent consulting services, learning, thought leadership and technology solutions provider to CPA firms. We do not offer audit, review, or any type of assurance services and are not a registered CPA firm. Read our full disclosure here.

