CTF: Rate the Risk — AI Tool Decisions That Can Sink Your Business

Difficulty: Intermediate | Time: 20–25 min | Linked product: AI Governance Pack ($97)


The Setup

You've just been appointed as the part-time "AI Governance lead" for a 60-person professional services firm in Adelaide. The CEO's brief: "Figure out what we're doing with AI, whether it's safe, and what policies we need." Budget: minimal. Timeline: one month.

You spend the first week doing a walk-around and asking staff what AI tools they're using. Here's what you find — five scenarios, each with a different risk profile. Your job is to rate each one (Low / Medium / High / Critical) and recommend the appropriate governance response.

There's a catch: three of the five scenarios are rated incorrectly by the staff member who reported them. Your job is to spot which three and explain why.


The Challenge

For each scenario, assign a risk rating (Low / Medium / High / Critical) and a governance recommendation. Then identify which scenarios have been misrated by the staff member.
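Before you rate anything, it helps to be consistent about what drives a rating. One way is to score each tool along a few dimensions and map the total to a band. A minimal sketch in Python — the factor names, weights, and thresholds below are illustrative assumptions for this exercise, not an official framework:

```python
# Illustrative risk-banding sketch. The factors, caps, and thresholds
# are assumptions for this exercise, not an official framework.

FACTORS = {
    "data_sensitivity": 3,   # 0=none, 1=internal, 2=personal, 3=sensitive personal
    "decision_impact": 3,    # 0=drafting aid ... 3=affects people's rights (hiring, credit)
    "vendor_control": 2,     # 0=reviewed contract + DPA ... 2=unreviewed terms
    "data_residency": 2,     # 0=onshore/known ... 2=offshore/unknown
}

def rate_tool(scores: dict) -> str:
    """Map factor scores to a Low/Medium/High/Critical band."""
    total = sum(min(scores.get(name, 0), cap) for name, cap in FACTORS.items())
    if total <= 2:
        return "Low"
    if total <= 4:
        return "Medium"
    if total <= 7:
        return "High"
    return "Critical"

# Scenario 2 (AI resume screening), roughly scored:
print(rate_tool({
    "data_sensitivity": 2,   # applicants' personal information
    "decision_impact": 3,    # informs hiring decisions
    "vendor_control": 2,     # vendor marketing claim, no contract review
    "data_residency": 0,
}))  # High
```

The point is not the specific numbers; it's that a written-down rubric forces two staff members rating the same tool to disagree about inputs, not vocabulary.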


Scenario 1 — "It's just for drafting emails"

Staff member's rating: Low

The marketing coordinator uses Canva's built-in AI writing assistant to draft social media captions and internal newsletter copy. No client data is involved. She always reviews and edits the output before publishing. She's on Canva's free plan.

What's the actual risk rating? What, if anything, does your policy need to say about this use case?


Scenario 2 — "We checked and it's fine"

Staff member's rating: Low

The HR manager uses an AI resume screening tool to shortlist job applicants. The tool scores resumes against a job description and ranks candidates. The HR manager reviewed the vendor's website and it says "AI-powered, unbiased hiring." The tool has been in use for four months across 12 hiring decisions.

What's the actual risk rating? What legal framework is potentially in play that the HR manager hasn't considered?


Scenario 3 — "It saves us hours every week"

Staff member's rating: Medium

The operations team uses Microsoft Copilot (M365) to summarise internal meeting transcripts, draft project updates, and search across SharePoint. The firm is on a standard M365 Business Premium licence. The IT manager notes that Copilot is pulling from a SharePoint environment where permissions are "a bit messy" — some legacy folders are accessible to all staff.

What's the actual risk rating? What's the specific risk the IT manager is underplaying?


Scenario 4 — "Our vendor is ISO 27001 certified so we're covered"

Staff member's rating: Low

The finance team uses a third-party AI-powered invoice processing tool. Client invoices (including company names, ABNs, amounts, and banking details) are uploaded and processed by the tool, which extracts and categorises the data. The vendor is ISO 27001 certified and based in Singapore. The legal team reviewed the contract 18 months ago.

What's the actual risk rating? What does "ISO 27001 certified vendor" actually protect you from, and what does it not?


Scenario 5 — "It's a chatbot on our website, not our systems"

Staff member's rating: Low

Your firm has deployed an AI chatbot on your client-facing website. The chatbot handles initial enquiries and can access your firm's pricing and service information. Prospective clients sometimes share details about their legal/financial situation in the chat to get a quote. The chatbot vendor is a US startup. The chat logs are stored on the vendor's servers. You have not reviewed the vendor's privacy policy or data retention terms.

What's the actual risk rating? What are the three specific problems with this setup?


Hints

Hint (Scenario 1): Canva's free plan terms allow Canva to use user-generated content to improve its products and services. "No client data involved" is the key mitigating factor here, but consider whether AI-generated content for commercial use creates any IP or brand risk. The correct rating might be a notch higher than the staff member thinks.

Hint (Scenario 2): Automated decision-making systems used in employment contexts are subject to anti-discrimination law in Australia (Age Discrimination Act, Sex Discrimination Act, Racial Discrimination Act, Disability Discrimination Act). AI hiring tools have been found to encode historical biases — the EEOC in the US has taken action on similar tools. Australia doesn't have a dedicated AI liability framework yet (as of 2026), but existing discrimination law applies. Four months and 12 decisions is significant exposure if any of those decisions are challenged.

Hint (Scenario 3): Microsoft Copilot's risk profile is fundamentally tied to your M365 permission architecture. Copilot can surface any document that the user has access to — and in an environment with broken permissions, that means Copilot can help a junior staff member inadvertently access sensitive HR, legal, or financial documents they shouldn't see. The "messy permissions" problem is not a Copilot problem — it's a pre-existing data governance failure that Copilot makes catastrophically visible.

Hint (Scenario 4): ISO 27001 certification covers the vendor's information security management system. It does not cover data privacy, data residency, or your obligations under Australia's Privacy Act. An ISO 27001-certified vendor can still lawfully (by their own terms) process your client data in ways that breach your APP 8 obligations. The question is: what does the contract say about data use, training, residency, and sub-processors?

Hint (Scenario 5): Three problems: (1) prospective clients are disclosing potentially sensitive personal information to a system you don't control; (2) you haven't reviewed what the vendor does with that data; (3) your website likely has a privacy policy that doesn't disclose that AI chat logs are stored offshore by a third party — that's a Privacy Act compliance gap.


Reveal: Full Answer — Which Three Are Misrated?

The three misrated scenarios are 2, 3, and 4.

Scenario 2 (HR resume screening) — should be High, not Low

This is the most dangerous misrating in the set. The staff member rated it Low because "the vendor says it's unbiased." That claim is not auditable from your end, and it doesn't insulate your firm from liability. If a rejected candidate files a discrimination complaint, your firm is the respondent — not the AI vendor. You made the hiring decision, you used the tool to inform it, and under Australian anti-discrimination law, ignorance of the tool's bias is not a defence.

The governance response is not "stop using the tool." It's:

  1. Document your hiring criteria independently of the AI output
  2. Have a human reviewer who can articulate why a candidate was shortlisted or rejected in terms of specific, documented criteria
  3. Review the four months of historical decisions for any pattern that might indicate discrimination
  4. Ensure your vendor contract has a representation about bias testing methodology
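For step 3, one common first-pass screen is the "four-fifths rule": flag any group whose selection rate falls below 80% of the highest group's rate. Two caveats up front: this is a US EEOC heuristic, not an Australian legal test, and 12 decisions is a very small sample, so treat it as a prompt for closer review rather than a verdict. A minimal sketch with hypothetical data:

```python
# Four-fifths (adverse impact) screen over past hiring decisions.
# Heuristic only: a US EEOC rule of thumb, not an Australian legal
# test, and small samples (like 12 decisions) are statistically noisy.

def selection_rates(decisions):
    """decisions: list of (group, shortlisted: bool) tuples."""
    counts = {}
    for group, shortlisted in decisions:
        total, picked = counts.get(group, (0, 0))
        counts[group] = (total + 1, picked + int(shortlisted))
    return {g: picked / total for g, (total, picked) in counts.items()}

def four_fifths_flags(decisions, threshold=0.8):
    """Return groups whose selection rate is < threshold * best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if best > 0 and r < threshold * best)

# Hypothetical decision log (group labels are placeholders):
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", False), ("B", False), ("B", True), ("B", False)]
print(four_fifths_flags(log))  # ['B']: B's rate (0.25) < 0.8 * A's rate (0.75)
```

A flag here doesn't prove discrimination; it tells you which decisions need the documented, human-articulated reasoning from step 2.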

Scenario 3 (Copilot + broken permissions) — should be High, not Medium

Copilot in a broken-permissions environment is effectively an insider threat vector. Any staff member with Copilot access can now ask natural-language questions that surface documents they technically shouldn't see — salary information, legal correspondence, board minutes, client confidential files. The "Medium" rating treats this as a configuration risk. It's actually a data governance failure that requires remediation before Copilot is expanded.

The governance response: a SharePoint permissions audit before Copilot is rolled out to more users. Microsoft's own deployment guidance calls for addressing oversharing first — and it's a step many businesses skip.
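What the audit looks for can be expressed simply: any grant that exposes content to org-wide groups. A hedged sketch over an exported permissions report — the column names and group labels are assumptions, since real exports vary by tooling:

```python
# Flag SharePoint grants that expose content to everyone before
# enabling Copilot more widely. Row shape and group names are
# assumptions; adapt them to your actual permissions export.

import csv
import io

BROAD_GROUPS = {"Everyone", "Everyone except external users", "All Staff"}

def overexposed(report_rows):
    """Return (site, folder) pairs granted to org-wide groups."""
    return sorted(
        {(row["Site"], row["Folder"])
         for row in report_rows
         if row["GrantedTo"] in BROAD_GROUPS}
    )

# Hypothetical export:
sample = io.StringIO(
    "Site,Folder,GrantedTo\n"
    "HR,Salaries 2019,Everyone\n"
    "Legal,Board Minutes,Legal Team\n"
    "Finance,Archive,All Staff\n"
)
print(overexposed(csv.DictReader(sample)))
# [('Finance', 'Archive'), ('HR', 'Salaries 2019')]
```

Every pair this surfaces is a folder Copilot will happily summarise for any staff member who asks.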

Scenario 4 (ISO 27001 AI invoice tool) — should be Medium-High, not Low

ISO 27001 is a security certification, not a privacy or AI governance certification. Your firm's APP 8 obligations require you to take reasonable steps to ensure offshore recipients of personal information protect it in a manner consistent with the APPs. "They're ISO 27001 certified" does not satisfy this — you need a contract provision (a Data Processing Agreement or equivalent) that binds the vendor to specific privacy obligations.

Additionally: the contract was reviewed 18 months ago. AI vendor terms change. If the vendor updated their terms to permit using processed data for model training, your legal team's review is out of date.

Scenarios 1 and 5 fall outside the three clear misratings, though neither rating is airtight:

  • Scenario 1 is genuinely low-to-medium risk: Canva AI for internal and social copy with no client data is the lowest-stakes use case in the set, though the free plan's IP terms warrant a policy note.
  • Scenario 5 is the borderline case. "Low" understates the exposure: offshore chat-log storage plus the sensitive nature of prospective client disclosures puts it closer to Medium-High, and it should be re-rated once the vendor's privacy policy and retention terms have actually been reviewed.

Get the Full Scoring Guide

You've seen the answer to the misrating question. The full AI Governance Policy Pack for SMBs gives you the scoring rubric, risk assessment framework, and policy templates to turn these five scenarios into documented decisions your firm can stand behind.

The pack includes:

  • AI tool risk assessment matrix (rate any tool in 10 minutes)
  • Data processing agreement checklist for AI vendors
  • Automated decision-making policy template (covers hiring, lending, client scoring)
  • M365 Copilot readiness checklist (including the permissions audit)
  • Vendor re-review schedule template (so "reviewed 18 months ago" doesn't happen again)

Get the AI Governance Pack for $97 → lil.business/products/ai-governance-pack

Or buy via Polar: https://buy.polar.sh/polar_cl_8KEjRB7rL8QidCD5EAXNOJavkYIVqdLdazVqE4SaII2


Scenarios are fictionalised. Legal references are to Australian law as at April 2026. This post is educational and does not constitute legal or compliance advice.

Ready to strengthen your security?

Talk to lilMONSTER. We assess your risks, build the tools, and stay with you after the engagement ends. No clipboard-and-leave consulting.

Get a Free Consultation