CTF: Rate This AI Vendor — Would You Sign the Contract?
Difficulty: Hard | Time: 25–35 min | Linked product: AI Governance Pack ($97)
The Setup
You're the operations manager for a 40-person legal services firm in Sydney. Your firm handles family law, commercial litigation, and property conveyancing. The partners have asked you to evaluate an AI contract drafting tool called DraftAI (fictional) that a few of the lawyers have been trialling using personal accounts.
DraftAI promises to reduce contract drafting time by 60%. It's built on GPT-4-class models, hosted on AWS US-East, and markets itself to law firms specifically.
You've been sent:
- Their privacy policy (a 14-page PDF)
- Their terms of service
- A draft enterprise agreement their sales rep emailed over
You have one week to decide: approve for firm-wide deployment, reject, or negotiate modifications.
Below are five excerpts from their documents. For each, decide: (A) acceptable, (B) acceptable with modifications, or (C) dealbreaker. Justify your rating.
The Challenge
Excerpt 1 — From the Privacy Policy, Section 4.2:
"We may use Customer Data to improve, develop, and train our AI models. By using the Service, you grant DraftAI a non-exclusive, worldwide, royalty-free licence to use Customer Data for these purposes. You may opt out of model training by submitting a written request to [email protected] within 30 days of account creation."
Rate: A / B / C. Justify. What specific modification would you request?
Excerpt 2 — From the Terms of Service, Section 8.1 (Limitation of Liability):
"To the maximum extent permitted by law, DraftAI's total aggregate liability to you for any and all claims arising out of or related to this Agreement shall not exceed the amount paid by you in the three (3) months preceding the claim."
Rate: A / B / C. Justify. For a firm paying $600/month, what does this cap mean in dollar terms if DraftAI causes a data breach affecting your clients' privileged legal communications?
Excerpt 3 — From the Enterprise Agreement, Section 12 (Subprocessors):
"DraftAI may engage third-party subprocessors to assist in delivering the Service. A current list of subprocessors is available at draftai.com/subprocessors. DraftAI may add or change subprocessors at its discretion and will provide notice via email to the primary account holder at least 7 days before any material change."
Rate: A / B / C. Justify. What's missing from this clause that matters for your Privacy Act obligations?
Excerpt 4 — From the Privacy Policy, Section 7.1:
"DraftAI stores all Customer Data on AWS infrastructure located in the United States. We do not currently offer data residency options for Australian customers. DraftAI has self-certified under the EU-U.S. Data Privacy Framework."
Rate: A / B / C. Justify. The vendor mentions EU-U.S. DPF compliance. Does this help your Australian Privacy Act position?
Excerpt 5 — From the Enterprise Agreement, Section 15.3 (Security):
"DraftAI maintains SOC 2 Type II certification, renewed annually. DraftAI will notify Customer of any security incident affecting Customer Data within 72 hours of DraftAI becoming aware of the incident. Upon request, DraftAI will provide Customer with a copy of its most recent SOC 2 report under NDA."
Rate: A / B / C. Justify. Is 72 hours sufficient for your Privacy Act obligations? What additional security clause would you want?
Bonus Question — The Privilege Problem
Your lawyers will be uploading draft contracts, legal advice memos, and litigation strategy documents to DraftAI. This is legally privileged material. If DraftAI's model training uses this data, or if DraftAI is subpoenaed in a US legal proceeding, privilege may be at risk.
- Does uploading privileged legal material to a US-hosted AI platform waive privilege in Australia?
- What clause would you add to the enterprise agreement to protect privilege?
Hints
Hint 1 (Excerpt 1): A 30-day opt-out window from account creation is a deliberate friction design. By the time you've deployed the tool firm-wide and a lawyer notices the clause, you're likely outside the opt-out window. The correct modification: make opt-out the default (opt-in to training, not opt-out), and make it available at any time. For a legal firm, model training on your client documents is a per se dealbreaker regardless of opt-out availability.
Hint 2 (Excerpt 2): Three months of fees = $1,800 liability cap for a firm paying $600/month. A data breach affecting privileged legal communications could result in client complaints to the Law Society, regulatory action, professional indemnity claims, and reputational damage worth multiples of that. Limitation of liability clauses in vendor contracts are negotiable for enterprise deals. Push for: (1) carve-out for negligence and wilful misconduct, (2) carve-out for data breaches affecting personal information, (3) a higher cap tied to annual contract value.
Hint 3 (Excerpt 3): The missing element is your right to object to new subprocessors, not just receive notice. Under APP 8, you are accountable for what your subprocessors do with Australian personal information. "We'll tell you 7 days before" is not the same as "you can say no." Also missing: what specific information will you receive about new subprocessors? Country of operation, security certifications, purpose?
Hint 4 (Excerpt 4): The EU-U.S. DPF is an EU-to-US data transfer mechanism. It is entirely irrelevant to Australian Privacy Act compliance. Australia has its own cross-border transfer obligations under APP 8, and there is no equivalent self-certification framework. The vendor's mention of the EU-U.S. DPF is a compliance credential that sounds impressive but does nothing for your Australian legal obligations. You need a contract clause that binds DraftAI to protect the data in a manner consistent with the APPs.
Hint 5 (Excerpt 5): The 72-hour clock runs from when DraftAI becomes aware, so you may learn of a breach days after it happened. Your own 30-day NDB assessment window starts when you have "reasonable grounds" to suspect a breach (in practice, the moment DraftAI notifies you), so every hour of vendor delay is time your clients' data sits exposed before anyone can act. You want notification as fast as practicable, and you want it to include enough detail to begin your own assessment. Additional clauses to ask for: a penetration testing schedule, and the right to request an updated SOC 2 report mid-cycle if you have concerns.
Reveal: Full Answer to Excerpt 1
Rating: C (Dealbreaker) for a legal firm — B (Acceptable with modification) for most other SMBs
Why it's a dealbreaker for a legal firm:
Uploading client contracts, legal advice, and litigation strategy documents to a platform that trains on your data is a fundamental conflict with your professional obligations. The Law Society of NSW (and equivalent state bodies) have issued guidance that lawyers must exercise care before uploading client information to AI platforms that may use it for purposes beyond the specific service. If DraftAI trains on your uploaded contracts, that client data is, in effect, being shared with every future user whose prompts trigger model recall.
The 30-day opt-out window is designed so that by the time you've deployed the tool and a lawyer reads the fine print, you've missed it. This is not good faith drafting.
The modification you'd request:
Replace Section 4.2 entirely with:
"DraftAI will not use Customer Data (including any inputs, outputs, or metadata associated with Customer's use of the Service) for the purpose of training, fine-tuning, or improving any AI model, without explicit written consent from Customer on a case-by-case basis. Customer's use of the Service does not constitute consent to model training. DraftAI shall implement technical controls to enforce this restriction and shall make those controls available for Customer audit upon request."
This is standard language in enterprise AI agreements for professional services firms. If DraftAI won't accept it, that tells you something about their commitment to data governance.
For non-legal SMBs:
If you're not a law firm and your data doesn't include legally privileged material, model training can be acceptable — but only if:
- No training is the default: you must explicitly opt in, rather than having to opt out
- The opt-out can be exercised at any time
- You understand what "training" means for your data specifically (is it fine-tuning? RLHF? Retrieval augmentation?)
The clause as written is a B (Acceptable with modification) for most SMBs, with the modification being: opt-out default, any-time opt-out, and a written commitment that previously uploaded data will be deleted from training sets upon opt-out.
Get the Full Answer Key
You've seen the full analysis for Excerpt 1. The remaining excerpts — covering liability cap negotiation, subprocessor objection rights, APP 8 compliance for US-hosted AI, SOC 2 sufficiency, and the legal privilege problem — are covered in the AI Governance Policy Pack for SMBs.
The pack includes:
- AI vendor contract review checklist (30 questions to ask before signing)
- Data processing agreement template for AI vendors
- Model training opt-out clause template
- Liability carve-out negotiation guide
- Privilege protection clause for professional services firms
Get the AI Governance Pack for $97 → lil.business/products/ai-governance-pack
Or buy via Polar: https://buy.polar.sh/polar_cl_8KEjRB7rL8QidCD5EAXNOJavkYIVqdLdazVqE4SaII2
DraftAI is a fictional vendor created for this post. Contract excerpts are illustrative. Legal references are to Australian law as at April 2026. This post is educational and not legal advice.
The Short Version
Imagine you own a café. You've got great locks on every door. Your alarm system is top-notch. But then the company that handles your online orders gets hacked — and suddenly every customer's address and payment info is out in the open. You didn't do anything wrong. Your café was fine. But the people you trusted with your customers' details weren't.
That's what a third-party vendor breach is. And right now, 1 in every 4 data breaches happens this way [1].
Why Your Software Tools Are Now the Target
Your business probably uses dozens of software tools: a payroll system, an email platform, an accounting app, a booking system. Each one of those companies has access to some piece of your data.
When hackers want to hit a big haul — lots of businesses' data in one go — they don't try to hack every business individually. That's slow. Instead, they target one of the shared tools that thousands of businesses all use. Hit one vendor, and you've hit everyone who uses that vendor at once.
This week, a company called Betterment found this out the hard way. Hackers tricked someone at a company Betterment used for sending emails into giving them access. Then they downloaded the financial details, names, phone numbers, and retirement plan information of 1.4 million customers [2]. Betterment's own systems were fine. The problem was one of their suppliers.
A few days earlier, the same thing happened to a fintech company called Figure — 1 million customers exposed through a social engineering attack on a vendor account [3].
What "Social Engineering" Means (It's Just Fancy Trickery)
Social engineering sounds complicated. It isn't. It means convincing a human to do something by pretending to be someone they trust.
Think of it like a con artist calling your receptionist, pretending to be the IT department, and asking for a password. Your receptionist wasn't hacked. The building wasn't hacked. But someone convinced a human to open the door anyway.
Hackers use this technique because it's often easier than breaking through technical security. And once they have access to a vendor's system, they can reach your data too.
How Fast Is This Happening?
Faster than most businesses can keep up with. Here's a number that matters: 96% of vendor software vulnerabilities are turned into active attacks within the same year they are discovered [1].
That means when a flaw is found in a tool you use, there's a very good chance someone tries to exploit it quickly — often before the tool is even patched.
Security researchers are also predicting that 2026 will see over 50,000 new software vulnerabilities disclosed — a record [4]. That's a lot of doors for attackers to try.
What You Can Actually Do About It
You don't need a team of security experts. You just need a few habits:
1. Know who has your data. Write a list of every tool your business uses and what customer or business data it touches. If you don't know, you can't act fast when something goes wrong.
2. Ask vendors hard questions. Before signing up with a new tool: Do they have security certification (like SOC 2 or ISO 27001)? Do they have a breach notification policy? If they can't answer, that's a red flag.
3. Turn on two-factor authentication everywhere. Including on your vendor accounts. It doesn't stop all attacks, but it makes the con artist's job much harder.
4. Keep your vendor list small. Every new tool you add is a new door into your business. The fewer tools, the less exposure.
5. Put it in the contract. Require any vendor handling your customer data to notify you within 48 hours if something goes wrong. Many SMBs skip this — don't.
The Upside: Security as a Business Edge
Here's the thing most people miss: if you handle your vendor relationships responsibly, it becomes a selling point.
When a potential client asks "how do you protect our data?" — and you have a real answer — you win business that your competitors don't. Especially in regulated industries like healthcare, legal, or finance, where data protection is a procurement requirement.
Security isn't just defensive. Done right, it's a competitive advantage — and it saves you from having to explain to your customers why their information ended up on the internet.
Need help figuring out which vendors are your biggest exposure? That's exactly what lilMONSTER does. Book a 30-minute vendor security review →
TL;DR
- 1 in 4 data breaches now happens through a third-party vendor, not the victim's own systems.
- Attackers target shared tools (payroll, email, booking platforms) because compromising one vendor exposes every business that uses it.
- Five habits cut your exposure: inventory who holds your data, ask vendors hard security questions before signing, turn on two-factor authentication everywhere, keep your vendor list small, and put a breach notification deadline in the contract.
References
[1] Dataminr, "2026 Cyber Threat Landscape Report," Dataminr, Feb. 2026. [Online]. Available: https://resources.dataminr.com/dataminr-for-cyber-defense/dataminr-2026-cyber-threat-landscape-report
[2] P. Arntz, "Betterment data breach might be worse than we thought," Malwarebytes, Feb. 19, 2026. [Online]. Available: https://www.malwarebytes.com/blog/news/2026/02/betterment-data-breach-might-be-worse-than-we-thought
[3] "Data breach hits 1 million Figure customers," American Banker, Feb. 19, 2026. [Online]. Available: https://www.americanbanker.com/news/data-breach-hits-1-million-figure-customers
[4] FIRST, "2026 Vulnerability Forecast," Forum of Incident Response and Security Teams, Feb. 11, 2026. [Online]. Available: https://www.first.org/blog/20260211-vulnerability-forecast-2026