CTF: Your SME Is Using AI — Are You Governed or Gambling?

Difficulty: Intermediate | Time: 20–30 min | Linked product: AI Governance Pack ($97)


The Setup

You own a 35-person accounting firm in Parramatta. Over the past 18 months, your team has quietly adopted a set of AI tools:

  • Bookkeepers are using ChatGPT Plus to summarise client financials and draft advisory emails
  • Your admin team runs an AI scheduling assistant with read-only access to the full client calendar
  • One senior accountant built a custom GPT and feeds it raw client P&L statements for initial audit commentary
  • Your receptionist runs client phone calls through Otter.ai, an AI transcription tool, without asking clients first
  • Your marketing person uses a generative AI image tool trained on scraped data to create social media graphics

None of this was formally approved. There's no AI use policy. Nobody has done a data privacy impact assessment. Your clients have no idea their financials are passing through US-based LLM inference servers.

You've just been pulled into a supply chain security audit by a large prospective corporate client. One of their questions: "Does your firm have a documented AI governance framework?"

You have two weeks to get this right. Where do you start?


The Challenge


Question 1 — The data residency problem

Your senior accountant is feeding raw client P&L statements to a custom GPT. OpenAI's infrastructure is US-based. The client data includes names, ABNs, financial figures and, in some cases, details of trust structures and related-party transactions.

  • Under Australia's Privacy Act, what obligations does your firm have before sending personal information offshore?
  • OpenAI's Enterprise plan offers a "no training" data retention policy — does using the consumer ChatGPT Plus plan provide the same protections?
  • Draft the two-sentence client disclosure you'd need to add to your engagement letters.

Question 2 — The transcription tool

Your receptionist is transcribing client calls without client consent. In Queensland (where your Brisbane office also operates), recording a phone call without at least one-party consent is a criminal offence under the Invasion of Privacy Act 1971 (Qld). At the federal level, the Telecommunications (Interception and Access) Act 1979 also applies.

  • What is the minimum consent mechanism you need to implement?
  • The transcripts are stored in Otter.ai's cloud. Otter.ai is a US company. What privacy obligations does this trigger?
  • If a client demands you delete their transcript, what are your obligations under the Privacy Act's "right to erasure" equivalent?

Question 3 — AI in an advisory context

One of your bookkeepers sends a client an AI-drafted email with specific tax advice. The advice is wrong — it misinterprets a section of the ITAA 1997. The client acts on it and underpays their BAS by $12,000, attracting ATO penalties.

  • What is your professional liability exposure?
  • Does using AI to draft advice change your PI insurance position?
  • What is the minimum "human in the loop" governance control that would have prevented this?

Question 4 — The supply chain audit question

Your prospective corporate client has sent you their AI governance questionnaire. Key questions include:

  1. Do you have a documented AI Acceptable Use Policy?
  2. Have you conducted a Data Privacy Impact Assessment (DPIA) for your AI tools?
  3. Do you have a process for assessing new AI tools before deployment?
  4. Are staff trained on AI use risks?

You currently answer "No" to all four. The client has said they require at least a "Yes" to questions 1 and 3 to proceed.

What's the minimum viable AI governance framework you can implement in two weeks that is credible, not just performative?


Question 5 — The generated image copyright question

Your AI image tool (trained on scraped internet data) created a graphic that is stylistically very close to a specific artist's work. The artist has identified it and contacted you via email. They allege copyright infringement.

  • What is the current state of AI-generated content and copyright under Australian law?
  • What should your firm's policy be on AI-generated content for commercial use?
  • What's the risk mitigation step you should take immediately?

Hints

Hint 1 (Q1): APP 8 (Australian Privacy Principles) governs cross-border disclosure of personal information. You remain accountable for what happens to data offshore, even if you use a third-party processor. The key distinction: "processing" on your behalf (data processor relationship) vs the vendor using data for their own purposes (e.g., model training). OpenAI's consumer terms permit using your data for training. Enterprise terms don't. That distinction is everything.

Hint 2 (Q2): Queensland's recording laws are stricter than most states'. The "one-party consent" rule means a participant in the call can consent to the recording, and your receptionist is a participant. The issue is not the legality of recording per se, but whether the AI transcription service constitutes a "third party" receiving the communication. Good practice (and the safest legal position) is explicit disclosure at the start of every call: "This call may be recorded and transcribed using AI tools."

Hint 3 (Q3): Professional indemnity insurance for accounting firms typically requires that AI-assisted advice be reviewed by a qualified professional before delivery. If your PI policy predates widespread AI tool use (i.e., it was written before 2023), the policy wording may not contemplate AI-generated advice at all. Call your broker.

Hint 4 (Q4): A "minimum viable" AI governance framework has three components: a policy document (what tools are approved and why), a process (how new tools get approved), and an acknowledgement (staff have read and signed the policy). You can build this in a weekend with the right template.

Hint 5 (Q5): Australian copyright law as at April 2026 does not provide clear protection for AI-generated works (the human authorship requirement is the sticking point). But using a tool that trained on copyrighted work and produces output substantially similar to that work creates a derivative works risk. The risk mitigation step is not a legal argument; it is replacing the graphic immediately and revising your policy.


Reveal: Full Answer to Question 4

Minimum viable AI governance framework for a 35-person accounting firm, buildable in two weeks:

Week 1: Policy and inventory

Start with an AI tool inventory. Have every staff member list every AI tool they're currently using — personal accounts, firm accounts, browser extensions, all of it. You'll be surprised what you find. For each tool, record: tool name, vendor, what data it touches, what it's used for, and whether there's a firm account or just personal accounts.
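If you want the inventory in a machine-readable form from day one, it reduces to a handful of fields per tool. A minimal sketch (the field names and example entries are illustrative, not from any standard):

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row of the AI tool inventory described above."""
    name: str
    vendor: str
    data_touched: str   # e.g. "client financials", "internal drafts only"
    used_for: str
    firm_account: bool  # False = staff are on personal accounts

inventory = [
    AIToolRecord("ChatGPT Plus", "OpenAI", "client financials",
                 "summaries, draft advisory emails", firm_account=False),
    AIToolRecord("Grammarly", "Grammarly Inc", "internal drafts only",
                 "copy editing", firm_account=True),
]

# Triage first: anything touching client data on a personal account.
urgent = [t.name for t in inventory
          if "client" in t.data_touched and not t.firm_account]
```

A spreadsheet does the same job; the point is that every tool gets the same five fields, so the gaps are visible at a glance.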

From this inventory, create a three-tier classification:

  • Approved for use: Tools that don't touch client data, or where you've reviewed the data processing terms and they're acceptable (e.g., Grammarly for internal comms drafts)
  • Conditionally approved: Tools that touch client data only if specific controls are in place (e.g., dedicated Enterprise accounts with data processing agreements, no training, Australian data residency where available)
  • Not approved: Tools where vendor terms permit training on your data, or where you cannot establish data residency

This classification is your AI Acceptable Use Policy. It doesn't need to be a 40-page document. Two pages, clearly written, covering: what's approved, what's not, what the process is for requesting a new tool, and what happens when someone breaches the policy.
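The three tiers lend themselves to a simple decision rule. A hypothetical sketch (the gating conditions paraphrase the policy above; this is a triage aid, not a legal test):

```python
def classify_tool(touches_client_data: bool,
                  vendor_trains_on_data: bool,
                  residency_established: bool) -> str:
    """Assign a policy tier using the criteria sketched above."""
    if vendor_trains_on_data:
        return "not approved"            # vendor terms permit training on your data
    if not touches_client_data:
        return "approved"                # no client data at stake
    if residency_established:
        return "conditionally approved"  # needs Enterprise account + DPA
    return "not approved"                # client data, no residency assurance
```

Notice that "vendor trains on your data" is the hard stop: no amount of other controls rescues a tool whose terms permit training.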

Week 2: Process and acknowledgement

Document your tool approval process: when someone wants to use a new AI tool, they complete a one-page assessment form (tool name, data it touches, vendor privacy terms reviewed Y/N, data processing agreement in place Y/N, approved by manager Y/N). This is your DPIA-lite. It won't satisfy ISO 42001, but it satisfies "do you have a process for assessing new AI tools before deployment?" — which is what your client asked.
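The one-page assessment form reduces to a handful of yes/no gates. A minimal sketch (field names are invented for illustration; each maps to a line on the paper form):

```python
from dataclasses import dataclass

@dataclass
class ToolAssessment:
    """The DPIA-lite form described above, one record per requested tool."""
    tool_name: str
    data_touched: str
    privacy_terms_reviewed: bool  # vendor privacy terms reviewed Y/N
    dpa_in_place: bool            # data processing agreement in place Y/N
    manager_approved: bool        # approved by manager Y/N

    def cleared_for_use(self) -> bool:
        # All three gates must be "Y" before the tool is deployed.
        return (self.privacy_terms_reviewed
                and self.dpa_in_place
                and self.manager_approved)
```

Keeping the completed forms is what turns the process from a verbal habit into auditable evidence.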

Have every staff member sign an acknowledgement that they've read the policy. Keep records. This is your answer to question 4 on their questionnaire.

What this achieves: You can now answer "Yes" to questions 1 and 3 on the supply chain audit questionnaire. You have a documented policy and a documented process. That's the threshold your client set.

What this doesn't achieve: A full DPIA for each tool, staff training on AI risks, or compliance with ISO 42001. Those are the next phase. But two weeks from discovery to "we have a framework" is credible. Two weeks from discovery to "we have a 60-page framework" is not.


Get the Full Answer Key

You've seen one answer in detail. The remaining questions — on data residency disclosure language, Qld recording laws and AI transcription, PI insurance implications, and AI copyright under Australian law — are covered in the AI Governance Policy Pack for SMBs.

The pack includes:

  • AI Acceptable Use Policy template (editable Word + PDF)
  • AI tool risk assessment form (the one-page DPIA-lite)
  • Staff acknowledgement template
  • Data processing agreement checklist for AI vendors
  • Client disclosure language for offshore data processing
  • Incident response addendum for AI-related breaches

Built for Australian SMBs by a practising security consultant. No ISO certification required to use it.

Get the AI Governance Pack for $97 → lil.business/products/ai-governance-pack

Or buy via Polar: https://buy.polar.sh/polar_cl_8KEjRB7rL8QidCD5EAXNOJavkYIVqdLdazVqE4SaII2


Scenario is fictionalised. Legal references are to Australian law as at April 2026. This is educational content, not legal advice.

Ready to strengthen your security?

Talk to lilMONSTER. We assess your risks, build the tools, and stay with you after the engagement ends. No clipboard-and-leave consulting.

Get a Free Consultation