TL;DR

Cloud misconfigurations remain the leading cause of data breaches for Australian SMBs, with IAM over-permissioning and exposed storage buckets topping the list. This guide covers the five most dangerous misconfigurations across AWS, Azure, and GCP, with hardened policy examples and native monitoring tools to close the gaps before an attacker finds them.

1. IAM Over-Permissioning: The Wildcard Problem

Too many SMBs grant AdministratorAccess or blanket * permissions to avoid friction. Long-lived access keys embedded in code or CI/CD pipelines compound the risk—once leaked, they grant persistent access until manually rotated.

BAD — Wildcard policy:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*"
  }]
}

GOOD — Least-privilege for S3 read-only:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": [
      "arn:aws:s3:::company-reports",
      "arn:aws:s3:::company-reports/*"
    ],
    "Condition": {
      "Bool": {"aws:MultiFactorAuthPresent": "true"}
    }
  }]
}

Remediation: Use IAM Access Analyzer on AWS or Azure Advisor to flag overly permissive roles. Rotate access keys at least every 90 days, and prefer IAM roles with temporary credentials for workloads.
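
To make the 90-day rule stick, it helps to check key age automatically. Below is a minimal sketch in Python with boto3, assuming credentials that can call iam:ListUsers and iam:ListAccessKeys; the 90-day threshold is illustrative, not prescriptive.

import boto3
from datetime import datetime, timezone

MAX_AGE_DAYS = 90  # assumed rotation window; adjust to match your policy

iam = boto3.client('iam')

def stale_access_keys():
    # Walk every IAM user and yield active keys older than the threshold.
    for page in iam.get_paginator('list_users').paginate():
        for user in page['Users']:
            keys = iam.list_access_keys(UserName=user['UserName'])['AccessKeyMetadata']
            for key in keys:
                age_days = (datetime.now(timezone.utc) - key['CreateDate']).days
                if key['Status'] == 'Active' and age_days > MAX_AGE_DAYS:
                    yield user['UserName'], key['AccessKeyId'], age_days

for user_name, key_id, age_days in stale_access_keys():
    print(f"{user_name}: {key_id} is {age_days} days old - rotate it")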

2. Public Storage Buckets and Blob Containers

An S3 bucket, Azure Blob container, or GCS bucket set to public is a data breach waiting to happen. The ACSC consistently warns that exposed object storage is one of the most commonly reported incident vectors for Australian organisations.

BAD — Public-read S3 bucket policy:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::customer-data/*"
  }]
}

GOOD — Private, TLS-enforced, with access scoped to a trusted account:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": ["arn:aws:s3:::customer-data", "arn:aws:s3:::customer-data/*"],
    "Condition": {"Bool": {"aws:SecureTransport": "false"}}
  }, {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::customer-data/*"
  }]
}

Remediation: Enable S3 Block Public Access at the account level. Use the AWS Config rule s3-bucket-public-read-prohibited, the Azure Policy "Storage accounts should restrict network access", and GCP Security Command Center public-bucket findings.
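
On AWS, the account-level guardrail is a single API call. A minimal sketch, assuming the caller is allowed to manage the account's public access block settings:

import boto3

# Look up the current account ID, then switch on all four public-access blocks.
account_id = boto3.client('sts').get_caller_identity()['Account']

boto3.client('s3control').put_public_access_block(
    AccountId=account_id,
    PublicAccessBlockConfiguration={
        'BlockPublicAcls': True,
        'IgnorePublicAcls': True,
        'BlockPublicPolicy': True,
        'RestrictPublicBuckets': True,
    },
)
print(f"Account-level S3 Block Public Access enabled for {account_id}")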

3. Serverless Secret Leakage in Environment Variables

Developers routinely stuff API keys and database passwords into Lambda environment variables. These are visible in plain text to anyone with lambda:GetFunction or functions:read permissions and often appear in logs.
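
To see how thin that protection is, note that one read-only API call returns every variable in plain text. A quick sketch with boto3, using the invoice-processor function from the examples that follow:

import boto3

# Anyone allowed to read this function's configuration sees the variables verbatim.
config = boto3.client('lambda').get_function_configuration(
    FunctionName='invoice-processor'
)
print(config.get('Environment', {}).get('Variables', {}))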

BAD — Terraform with hardcoded secret:

resource "aws_lambda_function" "processor" {
  function_name = "invoice-processor"
  environment {
    variables = {
      DB_PASSWORD = "SuperSecret123!"
      API_KEY     = "ak_live_abcdef"
    }
  }
}

GOOD — Secrets Manager integration:

resource "aws_lambda_function" "processor" {
  function_name = "invoice-processor"
  environment {
    variables = {
      DB_SECRET_ARN = aws_secretsmanager_secret.db.arn
    }
  }
}

resource "aws_iam_role_policy" "lambda_secrets" {
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["secretsmanager:GetSecretValue"]
      Resource = aws_secretsmanager_secret.db.arn
    }]
  })
}

Remediation: Migrate secrets to AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager. Never commit .env files. Scan IaC with Checkov or Trivy pre-deployment.

4. Unmonitored CloudTrail and Activity Log Gaps

If you are not logging, you are flying blind. Australian SMBs frequently disable CloudTrail in non-production accounts or fail to ship Azure Activity Logs to a central SIEM, missing the telemetry needed to detect lateral movement.

Remediation: Enable a CloudTrail organisation trail so every AWS account is covered, with S3 bucket versioning and log file validation turned on. In Azure, export Activity Logs to a Log Analytics workspace with at least 90 days' retention. In GCP, enable Admin Activity and Data Access audit logs at the project level. Set up alerts for CreateAccessKey, PutBucketPolicy, and UpdateFunctionConfiguration events.
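
On AWS, the alerting piece can be wired up with an EventBridge rule that matches those event names and forwards them to a notification topic. A sketch with boto3; the rule name and SNS topic ARN are placeholders, and exact CloudTrail eventName values can differ by service API version, so confirm them against your own trail:

import boto3
import json

events = boto3.client('events')

# Match high-risk management events recorded by CloudTrail.
# Note: IAM is a global service, so its events are delivered in us-east-1.
pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventName": ["CreateAccessKey", "PutBucketPolicy", "UpdateFunctionConfiguration"]
    }
}

events.put_rule(
    Name='high-risk-api-calls',
    EventPattern=json.dumps(pattern),
    State='ENABLED',
)

events.put_targets(
    Rule='high-risk-api-calls',
    Targets=[{
        'Id': 'security-alerts',
        'Arn': 'arn:aws:sns:ap-southeast-2:111122223333:security-alerts',  # placeholder topic
    }],
)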

5. Serverless Cold-Start Secret Loading Anti-Patterns

Fetching secrets from a vault inside the Lambda handler on every invocation adds latency and cost. Caching them in a global variable outside the handler is better, but developers often skip encryption at rest or fail to handle rotation.

GOOD — Python cold-start cache pattern:

import boto3
import os

# Module-level setup runs once per execution environment (cold start),
# so the client and the cached secret survive across warm invocations.
secrets_client = boto3.client('secretsmanager')
SECRET_ARN = os.environ['DB_SECRET_ARN']
_cached_secret = None

def get_secret():
    # Fetch the secret on first use, then serve it from the in-memory cache.
    global _cached_secret
    if _cached_secret is None:
        _cached_secret = secrets_client.get_secret_value(SecretId=SECRET_ARN)['SecretString']
    return _cached_secret

def handler(event, context):
    creds = get_secret()
    # use creds to open the database connection, call the downstream API, etc.

Remediation: Cache secrets during init, not per invocation. Subscribe Lambda functions to Secrets Manager rotation events via Amazon EventBridge so credentials refresh without redeployment.
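
If wiring up rotation events is not practical yet, a time-based refresh is a reasonable interim step. A minimal variant of the pattern above, with an assumed 15-minute cache lifetime:

import boto3
import os
import time

secrets_client = boto3.client('secretsmanager')
SECRET_ARN = os.environ['DB_SECRET_ARN']
CACHE_TTL_SECONDS = 900  # assumed 15-minute lifetime; tune to your rotation schedule

_cached_secret = None
_fetched_at = 0.0

def get_secret():
    # Re-fetch from Secrets Manager once the cached copy is older than the TTL.
    global _cached_secret, _fetched_at
    if _cached_secret is None or time.monotonic() - _fetched_at > CACHE_TTL_SECONDS:
        _cached_secret = secrets_client.get_secret_value(SecretId=SECRET_ARN)['SecretString']
        _fetched_at = time.monotonic()
    return _cached_secret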

Continuous Monitoring with Native Tools

| Platform | Tool | What It Catches |
|---|---|---|
| AWS | AWS Config + Security Hub | Public S3 buckets, unencrypted resources, unrestricted security groups |
| Azure | Microsoft Defender for Cloud | Storage account exposure, overly permissive RBAC, missing MFA |
| GCP | Security Command Center | Public buckets, IAM anomalies, unencrypted disks |

Run these continuously, not quarterly. For Australian compliance, align findings with the ACSC Essential Eight maturity model.
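
On AWS, pulling the current critical findings into a ticket queue or chat channel takes only a few lines. A sketch against the Security Hub findings API, assuming Security Hub is already enabled in the account and region:

import boto3

securityhub = boto3.client('securityhub')

# Fetch findings that are still active and rated critical.
response = securityhub.get_findings(
    Filters={
        'SeverityLabel': [{'Value': 'CRITICAL', 'Comparison': 'EQUALS'}],
        'RecordState': [{'Value': 'ACTIVE', 'Comparison': 'EQUALS'}],
    },
    MaxResults=50,
)

for finding in response['Findings']:
    print(f"{finding['Title']} - {finding['Resources'][0]['Id']}")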

FAQ

Q: Do these misconfigurations really affect small businesses? Yes. The Australian Cyber Security Centre reports that SMBs are increasingly targeted because they often lack dedicated cloud security staff while holding valuable customer data.

Q: Is using environment variables always bad? Not for non-sensitive configuration. For secrets, use a managed vault. Environment variables are logged by many monitoring tools and visible in the console.

Q: How often should we audit IAM policies? At minimum quarterly. Automate this with AWS IAM Access Analyzer, Azure Advisor, or custom Forseti rules in GCP. Remove unused users and roles monthly.
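
For the AWS part of that audit, a short script against IAM Access Analyzer can turn the quarterly review into a scheduled job. A sketch assuming an analyzer has already been created in the account:

import boto3

access_analyzer = boto3.client('accessanalyzer')

# Use the first analyzer in the account (assumes one already exists).
analyzer_arn = access_analyzer.list_analyzers()['analyzers'][0]['arn']

# Active findings are resources that Access Analyzer believes are shared
# outside your zone of trust, e.g. publicly or with another account.
findings = access_analyzer.list_findings(
    analyzerArn=analyzer_arn,
    filter={'status': {'eq': ['ACTIVE']}},
)['findings']

for finding in findings:
    print(f"{finding['resourceType']}: {finding.get('resource', 'unknown')}")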

Q: Which cloud has the worst default security posture? All three have improved their defaults, but GCP and Azure still allow overly broad IAM bindings out of the box. AWS has blocked public access on new S3 buckets by default since April 2023, which helps.

Conclusion

Misconfigurations are not complex zero-days—they are configuration mistakes that attackers scan for at scale. Fix your IAM policies, lock down storage, get secrets out of environment variables, and turn on logging today. The cost of remediation is minutes; the cost of a breach is your business.

Next step: Visit consult.lil.business for a free cybersecurity assessment tailored to Australian SMBs.

References

  1. Australian Cyber Security Centre — Essential Eight
  2. NIST SP 800-53 Rev 5 — Access Control and Audit Policies
  3. SANS Cloud Security Curriculum — Cloud Misconfigurations

TL;DR

  • Some bad people use AI to pretend to be computer workers and get hired by companies
  • They use robot voices, fake photos, and computer-generated resumes
  • They don't actually do the work—they steal secrets
  • Companies need new ways to check if people are who they say they are

What's Happening?

Imagine this: Someone sends a job application to a company. They have a nice photo, a good resume, and they do great in the interview. The company hires them.

But there's a problem: That person doesn't really exist.

A group of bad people used AI (artificial intelligence) to create a fake person, trick the company, and get hired. Then they use their job to steal secrets and money.

This is happening RIGHT NOW with computer programming jobs.


Who's Doing This?

Microsoft (a really big computer company) found out that some people from North Korea are doing this [1]. They use special names:

  • Jasper Sleet
  • Coral Sleet (used to be called Storm-1877)

They're like teams of tricksters using computers to fake being workers.


How Do They Trick Companies?

Step 1: Creating a Fake Person

They use AI to make everything up:

  • Fake names - The computer suggests names that sound real
  • Fake photos - Computer-generated pictures that look like real people
  • Fake resumes - Computer-written work history that looks perfect for the job
  • Fake emails - Email addresses that match the fake name

It's like playing dress-up, but with computers instead of clothes.

Step 2: Tricking the Interview

When it's time for a video call, they use special tricks:

  • Robot voices - Computers that change their voice to sound like someone else
  • Chat helper - AI that helps them answer questions during the interview
  • Maybe pre-recorded videos - Sometimes they just play a video instead of talking live

The company thinks they're talking to a real person. But they're actually talking to a trickster using computer tools.

Step 3: Getting Hired (and Stealing)

Once they're "hired":

  • They get paid salary money (which goes to the bad people)
  • They get access to company computers and secrets
  • They steal important information
  • They sell passwords or secrets to other bad people

They might do a little work—using AI to help them write computer code so they don't get caught. But the real goal is stealing, not working. [1]


Why Can't Companies Tell They're Fake?

Good question! Here's why regular background checks don't work:

  • Background check passes - Fake people have no criminal history because they don't exist!
  • References check - Fake references from computer-made people
  • Skills test passes - AI helps them answer technical questions
  • Looks normal on video - Computer voices and fake photos look real

It's like a really, really good costume.


Signs Someone Might Be Fake

Microsoft found some clues that can give away fake workers [1]:

Weird Things in Their Computer Code

  • Using emojis as checkmarks inside code
  • Writing comments that sound like they're explaining themselves too much
  • Using way too many complicated words for simple things
  • Code that's more complicated than it needs to be

Weird Things About Their "Life"

  • Hardly any photos or posts on social media before a certain date
  • The same face shows up with slightly different names
  • Jobs or schools that are hard to prove really exist
  • Generic stories that could be about anyone

Weird Things When Working

  • Working at strange hours
  • Asking for access to things they don't really need
  • Moving files around for no clear reason
  • Doing very little real work

How Companies Can Stay Safe

Good companies are fighting back with new rules:

Better Checking

  • Multiple video calls - Not just one interview, but lots of talking
  • Real work tests - Watch them actually do work, not just answer questions
  • Meeting in person - Sometimes you just have to see someone face-to-face
  • Checking their whole internet life - Seeing if they exist in more than one place online

Watching for Weird Stuff

  • Strange computer access - Looking at files they shouldn't need
  • Weird hours - Working at 3am when nobody else is awake
  • Moving data around - Sending files to places they shouldn't go

Being Extra Careful

  • Not giving too much power - Only giving access to what they really need
  • Checking on contractors too - Not just full-time workers, but anyone with access
  • Using computers to watch computers - AI helpers that look for fake workers

What Does This Mean for Us?

This might sound scary, but here's the good news:

  • Smart people are figuring this out - Companies like Microsoft are finding these tricks
  • Better rules are being made - New ways to check if people are real
  • Good AI is fighting bad AI - Using computer helpers to catch the tricksters

And for us regular people:

  • Learn about internet safety - Knowing tricks helps you avoid them
  • Build real relationships - Fake people can't do friendship or teamwork well
  • Ask questions - If something seems weird, it's okay to ask why

FAQ for Curious Kids

Q: Why can't companies just spot the fakes? They try! But the fake people are really good at tricking. It's like when someone wears a really good Halloween costume—you can't tell who's underneath until they take it off.

Q: Has anyone actually caught these tricksters? Yes! Microsoft found thousands of fake accounts and stopped them [1]. But the bad people keep trying new tricks.

Q: Could a fake worker be at a company near me? Maybe. That's why companies are being extra careful now. It's like locking doors—not because you expect burglars, but because you want to be safe.

Q: Does this mean AI is bad? No, AI is just a tool. Think of it like a hammer. You can use a hammer to build a birdhouse OR break a window. AI can help bad people do bad things, but it also helps good people catch them!

Q: What should I do if I find something weird online? TELL A GROWNUP. Don't try to figure it out yourself. If someone online seems weird or too good to be true, that's a grownup problem to solve.


Remember

The internet has good people and bad people, just like the real world. The difference is:

  • Real world - You can see people's faces
  • Online world - People can hide who they really are

That's why we need to be extra careful and use smart rules to stay safe.


Want to learn more about staying safe online? Ask your parents or teachers about internet safety, or check out resources from CISA—they're the experts on keeping computers safe!


Sources

  1. Microsoft Security Blog. "AI as tradecraft: How threat actors operationalize AI." https://www.microsoft.com/en-us/security/blog/2026/03/06/ai-as-tradecraft-how-threat-actors-operationalize-ai/

  2. Microsoft Security Blog. "Jasper Sleet: North Korean remote IT workers' evolving tactics to infiltrate organizations." https://www.microsoft.com/security/blog/2025/06/30/jasper-sleet-north-korean-remote-it-workers-evolving-tactics-to-infiltrate-organizations/

  3. CISA. "Cybersecurity for Kids." https://www.cisa.gov/news-events/news/cisa-launches-cybersecurity-awareness-month-kids

  4. FBI. "North Korean IT Workers Warning." https://www.fbi.gov/ic3/alertr/north-korean

Ready to strengthen your security?

Talk to lilMONSTER. We assess your risks, build the tools, and stay with you after the engagement ends. No clipboard-and-leave consulting.

Get a Free Consultation