CTF: Your S3 Bucket Is Public — How Bad Is It?
Difficulty: Intermediate | Time: 20–25 min | Linked product: IRP Template ($47)
The Setup
It's Thursday morning. You manage IT for a 50-person property development company in Perth. An email lands in your inbox from a name you don't recognise:
"Hi — I was doing some research on Australian property companies and came across an open S3 bucket that appears to belong to your organisation. The bucket is publicly accessible and contains what looks like client contracts, financial statements, and internal emails. I'm not affiliated with any competitor. I'm a security researcher. I'm giving you 72 hours before I publish my findings. The bucket name is: [your-company-name]-documents-prod. Regards, [name]"
You open the AWS console. The bucket exists. The public access block is off. The bucket policy grants "Principal": "*" on "Action": "s3:GetObject" — anonymous read for every object. The bucket contains 14,847 objects. It has been publicly accessible for at least eight months, based on when the bucket policy was last changed.
You feel ill.
This is a cloud misconfiguration incident. The breach may already be over, or it may still be ongoing. What do you do?
The Challenge
Question 1 — Immediate remediation vs evidence first
Your first instinct is to fix the bucket policy right now. Before you do:
- What evidence will you lose if you immediately flip the bucket to private?
- What AWS-native tools should you check before any remediation to understand whether the data was accessed?
- What's the correct sequence: fix first, or investigate first?
Question 2 — Scope the blast radius
The bucket has 14,847 objects. You need to understand what's actually in there before you can assess the severity.
- What command or query gives you a fast inventory of object types and sizes?
- The bucket contains a folder called contracts/ with 847 objects. Describe a methodology to sample these quickly and determine whether they contain personal information.
- You find 12 objects with names containing "passport" and 3 with "tax-file". What does this do to your NDB assessment?
Question 3 — Has anyone actually accessed it?
S3 access logs can tell you whether the bucket was accessed — but only if logging was enabled.
- How do you check whether S3 server access logging was enabled for this bucket?
- If logging was not enabled, what alternative data sources can give you partial access history?
- Your CloudTrail logs show 43 API calls to this bucket in the last 30 days from IPs you don't recognise. What's the difference between a CloudTrail GetObject event and an S3 server access log REST.GET.OBJECT event — and why does that difference matter for your impact assessment?
Question 4 — The researcher: threat or asset?
The security researcher who emailed you is giving you 72 hours. You need to decide how to handle this relationship.
- Is there any legal risk in responding to the researcher and acknowledging the issue?
- The researcher has offered to share their "full list of accessed files" for $500. How do you assess this offer?
- Draft a two-paragraph response to the researcher that protects your legal position while maintaining a functional relationship.
Question 5 — Post-incident: The config that caused this
The bucket was made public eight months ago. A junior developer changed the bucket policy to test an integration with a public-facing API and forgot to revert it. There's no record of the change being reviewed or approved.
Design a cloud configuration control that prevents this from happening again, using only AWS-native tooling (no third-party spend). The control must detect a public S3 bucket within 15 minutes of it being created.
Hints
Hint 1 (Q1): AWS S3 doesn't automatically log who accessed your objects unless you've enabled server access logging or are relying on CloudTrail data events. If you flip the bucket to private immediately, you don't lose any logs that already exist — but you do stop the ongoing exposure. The question is whether you need to understand current access patterns before locking down. The correct answer involves checking logs first (5 minutes) and then immediately remediating.
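The "check logs first, then lock down" sequence can be sketched with the AWS CLI. The bucket name is the scenario's placeholder, and the DRY_RUN guard is an addition for safe review (not an AWS feature) — it prints each command instead of running it:

```shell
# Sketch of the evidence-first sequence. BUCKET is the scenario's placeholder
# name; DRY_RUN=1 (the default here) echoes commands instead of executing them.
BUCKET="your-company-name-documents-prod"
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# 1. Preserve evidence: capture the current (public) configuration verbatim.
run aws s3api get-bucket-policy --bucket "$BUCKET"
run aws s3api get-public-access-block --bucket "$BUCKET"
run aws s3api get-bucket-logging --bucket "$BUCKET"

# 2. Then remediate: block all public access at the bucket level.
run aws s3api put-public-access-block --bucket "$BUCKET" \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```

Redirect the three get-* outputs to timestamped files before remediating — they are your record of what the exposed configuration actually was.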
Hint 2 (Q2): aws s3 ls s3://[bucket] --recursive | head -100 gives you a sample. For the NDB question: passport scans and documents with "tax-file" in the name almost certainly contain passport numbers and Tax File Numbers. TFNs are among the most sensitive personal identifiers in Australia — the Privacy (Tax File Number) Rule 2015 and the Privacy Act treat them as requiring special protection. TFNs sitting in a public bucket put this at the most serious end of the NDB scale: almost certainly an eligible data breach requiring notification.
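A minimal sketch of that inventory pass, using awk over listing output. The here-doc below is fabricated sample data standing in for the live aws s3 ls --recursive output (pipe the real listing in instead); the filenames are invented for illustration:

```shell
# Sketch: summarise a bucket listing by file extension and flag filenames
# that suggest identity documents. Sample data stands in for real output.
listing=$(cat <<'EOF'
2025-08-01 10:22:13    1048576 contracts/acme-lease-2024.pdf
2025-08-01 10:22:14     524288 contracts/jones-passport-scan.pdf
2025-09-14 09:01:02      20480 finance/tax-file-declaration-smith.pdf
2025-09-14 09:01:03       4096 internal/minutes.docx
EOF
)
# Count objects per extension (4th column is the object key).
by_ext=$(printf '%s\n' "$listing" \
  | awk '{n=split($4,p,"."); c[p[n]]++} END{for (e in c) print e, c[e]}' | sort)
# Flag high-risk filenames for the NDB assessment.
flagged=$(printf '%s\n' "$listing" | grep -Ei 'passport|tax-file' | awk '{print $4}')
printf '%s\n' "$by_ext"
printf 'Flagged: %s\n' $flagged
```

On the sample data this prints one count per extension and two flagged keys; against 14,847 real objects it gives you a triage list in seconds.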
Hint 3 (Q3): CloudTrail logs management-plane API calls (bucket policy changes, etc.) by default. CloudTrail data events (actual GetObject calls) must be explicitly enabled and cost extra. Most small AWS accounts don't have data events enabled. S3 server access logging is separate and also not enabled by default. The gap: you may have no way to prove definitively whether anyone downloaded the files. That uncertainty itself has to be disclosed in your NDB assessment — it doesn't let you off the hook.
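If server access logging turns out to have been enabled, the delivered logs can be mined for reads. A sketch, using two fabricated lines in the S3 server access log format (the operation is the 8th whitespace-separated field, the remote IP the 5th):

```shell
# Sketch: extract the unique remote IPs that issued REST.GET.OBJECT requests.
# The here-doc is fabricated sample data; in practice, cat the delivered
# access-log files from the logging target bucket instead.
logs=$(cat <<'EOF'
abc1 my-bucket [06/Feb/2025:00:00:38 +0000] 198.51.100.7 - REQID1 REST.GET.OBJECT contracts/acme-lease-2024.pdf "GET /contracts/acme-lease-2024.pdf HTTP/1.1" 200 - 1048576 1048576 40 30 "-" "curl/8.0" - - - - - -
abc1 my-bucket [06/Feb/2025:00:01:02 +0000] 203.0.113.9 - REQID2 REST.GET.BUCKET - "GET /?list-type=2 HTTP/1.1" 200 - 512 - 10 9 "-" "aws-cli" - - - - - -
EOF
)
# Keep only object reads, print the remote IP, de-duplicate.
reader_ips=$(printf '%s\n' "$logs" | awk '$8 == "REST.GET.OBJECT" {print $5}' | sort -u)
printf '%s\n' "$reader_ips"
```

Note the second sample line (a bucket listing) is correctly excluded — listing the bucket and downloading objects have different impact for your NDB assessment.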
Hint 4 (Q4): The researcher's $500 offer for their list of accessed files is a yellow flag. Legitimate security researchers operating under responsible disclosure typically don't charge for sharing their findings — that crosses from research toward extortion territory. Do not pay, and do not engage with the payment offer. Keep your response focused on thanking them for the disclosure and confirming you've remediated.
Hint 5 (Q5): AWS Config has a managed rule called s3-bucket-public-read-prohibited. AWS Security Hub enables this as a default control. An EventBridge rule can trigger a Lambda function to alert (or auto-remediate) when this Config rule fires. This entire stack is free-tier eligible for most SMB workloads.
Reveal: Full Answer to Question 5
AWS-native control to detect and alert on public S3 buckets within 15 minutes:
Option A — AWS Config + SNS (detect and alert, no auto-remediation)
- Enable AWS Config in your account (if not already enabled). Cost: ~$2/month for a small account.
- Enable the managed rule s3-bucket-public-read-prohibited in Config. This evaluates all S3 buckets on a continuous basis and flags any with public read access.
- Create an SNS topic called s3-public-bucket-alerts with your email as a subscriber.
- Create a compliance-change notification: when the s3-bucket-public-read-prohibited rule moves to NON_COMPLIANT, publish to the SNS topic (an EventBridge rule on Config compliance-state changes does this).
- Detection time: AWS Config evaluates within approximately 10–15 minutes of a configuration change.
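Option A can be stood up with a handful of CLI calls. A sketch: the DRY_RUN guard (an addition for safe review, not an AWS feature) prints each command instead of running it, and the topic ARN and email address are placeholders:

```shell
# Sketch: enable the managed Config rule and an SNS alert topic.
# DRY_RUN=1 (the default) echoes commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Managed rule: flags any bucket with public read access.
run aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "s3-bucket-public-read-prohibited",
  "Source": { "Owner": "AWS", "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED" }
}'
# Alert channel: SNS topic with an email subscriber (placeholder ARN/address).
run aws sns create-topic --name s3-public-bucket-alerts
run aws sns subscribe \
  --topic-arn "arn:aws:sns:ap-southeast-2:111111111111:s3-public-bucket-alerts" \
  --protocol email --notification-endpoint "you@example.com"
```

The remaining wiring — publishing to the topic when the rule goes NON_COMPLIANT — is an EventBridge rule on Config compliance-change events.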
Option B — AWS Security Hub (detect + dashboard, broader coverage)
If you're not already using Security Hub, enable it. Security Hub's AWS Foundational Security Best Practices standard includes S3 checks (S3.1, S3.2) that flag public buckets. Security Hub aggregates findings into a central dashboard with severity ratings. This is the better long-term option if you have multiple S3 buckets or other AWS services to monitor.
Option C — EventBridge + Lambda (detect and auto-remediate, advanced)
- Create an EventBridge rule that matches aws.s3 events where eventName is PutBucketAcl or PutBucketPolicy.
- Trigger a Lambda function that checks the resulting bucket configuration and, if public, either alerts or automatically reverts the change.
- This is the fastest response (near-real-time vs 15 minutes for Config) and can auto-remediate without human intervention.
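Option C hinges on the EventBridge event pattern. A sketch of one, assuming a CloudTrail trail is delivering S3 management events (without a trail, these events never reach EventBridge). DeletePublicAccessBlock is included here as an addition, since removing the block is a third way a bucket goes public:

```json
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutBucketPolicy", "PutBucketAcl", "DeletePublicAccessBlock"]
  }
}
```

The rule's target would be the Lambda that re-applies the public access block, or an SNS topic if you only want an alert.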
The preventive control the junior dev needed:
Beyond detection, the root cause is a lack of change management process for cloud configuration. The preventive control is:
- All S3 bucket policy changes require a pull request reviewed by a senior engineer before they go to production
- Infrastructure as Code (IaC) — if your bucket configs are managed in Terraform or CloudFormation, you can't make bucket-level changes via the console without going through a review process
Detecting a public bucket within 15 minutes is good. Not being able to make a bucket public without a review is better.
Get the Full Answer Key
You've seen the cloud misconfiguration control answer in full. The remaining questions — on evidence preservation before remediation, scoping TFN exposure for NDB, CloudTrail data events gaps, and handling a security researcher disclosure — are covered in the Incident Response Plan Template for SMBs.
The template includes:
- Cloud misconfiguration IR playbook (S3, Azure Blob, GCS)
- Evidence preservation checklist for cloud environments
- NDB assessment decision tree for cloud breach scenarios
- Responsible disclosure response templates
- AWS-native quick-win security checklist
Get the IRP Template for $47 → lil.business/products/incident-response-plan-template
Or buy via Polar: https://buy.polar.sh/polar_cl_G95ZMX6xnZpa7JuXj1AROgffKr1aL0JDmJ2KU1rHJ84
AWS console output, command syntax, and S3 access log behaviour are accurate as at April 2026. Scenario is fictionalised.
Work With Us
Ready to strengthen your security posture?
lilMONSTER assesses your risks, builds the tools, and stays with you after the engagement ends. No clipboard-and-leave consulting.
Book a Free Consultation →