TL;DR

  • North Korean state-sponsored hackers are using AI to create fake IT worker personas and infiltrating companies globally
  • Jasper Sleet and Coral Sleet use AI across the entire attack lifecycle: identity fabrication, social engineering, and long-term operational persistence
  • AI-generated resumes, voice modulation, and deepfake photos are standard tools
  • These operations are low-cost, high-scale revenue generators for the North Korean regime
  • Standard background checks are insufficient—AI-enabled tradecraft requires new hiring security protocols

The New Face of Insider Threats

Microsoft's latest threat intelligence report reveals a threat that should alarm every business that hires remote IT workers: North Korean state-sponsored hacking groups are operationalizing AI to create fake identities and infiltrate organizations as remote IT workers [1].

These groups—tracked by Microsoft as Jasper Sleet and Coral Sleet (formerly Storm-1877)—represent a fundamental shift in how insider threats manifest. They're not disgruntled employees or careless contractors. They're state-sponsored actors using AI to blend into legitimate environments for months or years.

This isn't theoretical. Microsoft has identified and disrupted thousands of accounts associated with fraudulent IT worker activity [1]. The scale and sophistication of these operations demand a rethinking of hiring security for every business that employs technical talent.


How AI Powers the Fake Worker Lifecycle

What makes these operations different from traditional employment fraud is the systematic use of AI across every stage of the attack lifecycle. Microsoft's report details how Jasper Sleet leverages AI to get hired, stay hired, and misuse access at scale [1].

Phase 1: Identity Fabrication with AI

Threat actors use generative AI to shortcut the reconnaissance process that informs convincing digital personas:

  • Name and identity generation: Prompting AI platforms to generate culturally appropriate name lists and email address formats matching specific identity profiles [1]. For example: "Create a list of 100 Greek names" or "Create a list of email address formats using the name Jane Doe."

  • Resume and cover letter generation: AI-assisted resumes and cover letters tailored to specific job descriptions [1]. Jasper Sleet uses AI to review job postings for software development and IT roles, extracting and summarizing required skills, then tailoring fake identities to match.

  • Fake developer portfolios: Creating fake portfolios using AI-generated content to establish credibility [1].

  • AI-generated photos: Using AI applications like Faceswap to insert the faces of North Korean IT workers into stolen identity documents and generate polished headshots for resumes [1]. In some cases, the same AI-generated photo was reused across multiple personas with slight variations.

Phase 2: Social Engineering and Interviews

AI is used to overcome the most significant barrier to infiltration: the interview process.

  • Voice-changing software: Jasper Sleet has been observed using voice modulation during interviews to mask accents, enabling them to pass as Western candidates in remote hiring processes [1].

  • Real-time conversation assistance: AI helps sustain long-term employment by reducing language barriers and improving responsiveness [1].

Phase 3: Operational Persistence and Data Theft

Once hired, the real work begins. These aren't employees in any meaningful sense—they're persistent access agents embedded within your organization.

  • Day-to-day camouflage: AI enables communications that fit role expectations and maintain consistent behavior across multiple fraudulent identities [1]. This includes translating messages, crafting contextually appropriate professional responses, answering technical questions, and maintaining consistent tone across emails and chat platforms.

  • Meeting performance expectations: AI helps these actors "meet performance expectations even in unfamiliar domains" by generating code snippets, answering technical questions, and maintaining productivity illusions [1].

  • Data exfiltration: Once trusted access is established, data theft begins—not through sophisticated exploits, but through the legitimate credentials and permissions granted to an employee.


The Economics of State-Sponsored Fraud

Understanding the motivation behind these operations helps explain their scale. Microsoft's report notes that these campaigns are "likely focused on revenue generation, where efficiency directly translates to scale and persistence" [1].

For the North Korean regime, fake IT worker operations represent:

  • Low cost: AI reduces the technical barrier and operational expense
  • High scale: A single operative can manage multiple fraudulent identities simultaneously
  • Long-term payout: Salaries, access to proprietary data, and future exploitation opportunities

This isn't just about stealing data—it's about generating revenue for the regime through stolen wages, cryptocurrency theft, and intellectual property acquisition.


Why Traditional Background Checks Fail

The critical insight from Microsoft's report is that these actors are passing standard background checks. They're not using stolen identities from data breaches. They're creating entirely new, AI-generated personas that have no criminal history because they don't exist outside the fraud.

Traditional hiring security controls are insufficient:

  • ✅ Background check passes (no criminal history)
  • ✅ References check (AI-generated or fabricated)
  • ✅ Technical assessment passes (AI-assisted)
  • ✅ Video interview passes (voice modulation, AI-generated photos)
  • ❌ The "candidate" doesn't actually exist

The Shift to Insider Risk Management

Microsoft explicitly recommends treating fraudulent employment and access misuse as an insider-risk scenario, focusing on detecting misuse of legitimate credentials, abnormal access patterns, and sustained low-and-slow activity [1].

This is a fundamental shift in threat modeling. The danger isn't a sophisticated external exploit—it's a trusted insider who is neither an employee nor a stranger, but something in between.


Real-World Indicators of AI-Generated Personas

Microsoft has identified characteristics within code and communications consistent with AI-assisted creation [1]. These red flags should inform hiring and security monitoring:

In Code and Technical Work

  • Emoji usage as visual markers: Green check marks (✅) for successful requests, red crosses (❌) for errors [1]
  • Conversational in-line comments: Comments that describe execution states and developer reasoning in natural language [1]
  • Overly descriptive or redundant naming: Functions, variables, and modules using long, generic names that restate obvious behavior
  • Over-engineered modular structure: Code broken into highly abstracted, reusable components with unnecessary layers
  • Inconsistent naming conventions: Related objects referenced with varying terms across the codebase
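The code-level indicators above can be approximated with a lightweight repository scan. The sketch below is an illustrative heuristic, not a rule set from Microsoft's report; the regex patterns and length threshold are assumptions you would tune for your own codebase:

```python
import re
from pathlib import Path

# Heuristic scan for the code-level red flags above. The patterns and
# thresholds are illustrative assumptions, not rules from Microsoft's report.
EMOJI_MARKERS = re.compile(r"[\u2705\u274C]")  # ✅ and ❌ used as status markers
GENERIC_NAME = re.compile(
    r"\b(?:handle|process|manage|perform)_[a-z_]{20,}\b"  # long, generic identifiers
)

def scan_file(path: Path) -> list[str]:
    """Return human-readable flags for one source file."""
    flags = []
    text = path.read_text(encoding="utf-8", errors="ignore")
    for lineno, line in enumerate(text.splitlines(), 1):
        if EMOJI_MARKERS.search(line):
            flags.append(f"{path.name}:{lineno}: emoji status marker in code")
        if GENERIC_NAME.search(line):
            flags.append(f"{path.name}:{lineno}: overly long generic identifier")
    return flags

def scan_repo(root: str) -> list[str]:
    """Scan every Python file under root and collect the flags."""
    flags: list[str] = []
    for path in Path(root).rglob("*.py"):
        flags.extend(scan_file(path))
    return flags
```

A hit is a prompt for human review, not proof of fraud: plenty of legitimate developers use emoji in comments, so these signals only matter in combination with the hiring and behavioral indicators discussed elsewhere in this article.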

In Employment History and Digital Footprint

  • Limited digital footprint: AI-generated personas may have minimal social media presence or employment history before a certain date
  • Inconsistent details: Slight variations in the same photo across profiles, timelines that don't align, or employment gaps
  • Generic or templated language: Resumes, cover letters, or communications that lack specific, verifiable details
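Catching "slight variations in the same photo across profiles" is essentially a perceptual-hashing problem. The sketch below implements a simple average hash over an 8x8 grayscale grid; a real pipeline would first decode and downscale the images with an imaging library such as Pillow, and the 8-bit distance threshold is an assumption, not an established cutoff:

```python
# Average-hash sketch for spotting a reused headshot. The "image" here is
# assumed to already be an 8x8 grayscale grid (list of rows of 0-255 ints);
# decoding real image files is out of scope for this illustration.

def average_hash(grid: list[list[int]]) -> int:
    """64-bit hash: each bit is 1 if that pixel is above the mean brightness."""
    pixels = [p for row in grid for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def likely_same_photo(a: list[list[int]], b: list[list[int]], threshold: int = 8) -> bool:
    """Hashes within a few bits usually indicate the same underlying image."""
    return hamming(average_hash(a), average_hash(b)) <= threshold
```

Because the hash reflects coarse brightness structure rather than exact bytes, small edits (crops, filters, re-encodes) leave it nearly unchanged, which is exactly what makes it useful against persona photos with "slight variations."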

Protecting Your Business: New Hiring Security Protocols

The response to AI-enabled hiring fraud requires layered controls that go beyond traditional screening:

1. Identity Verification Beyond Documents

  • Video calls with real-time interaction: Require multiple video interactions, not just a single interview. Look for signs of pre-recorded content or voice modulation.

  • In-person verification when possible: For critical roles, require physical presence for onboarding or use verified third-party identity verification services.

  • Cross-platform verification: Verify that the candidate's digital footprint exists consistently across multiple platforms over time.

2. Enhanced Technical Screening

  • Live coding sessions: Require real-time technical assessments that are difficult to AI-assist without detection.

  • Collaborative work trials: Paid trial periods with close supervision and collaboration make sustained impersonation more difficult.

  • Behavioral interviews focused on specific experiences: AI can generate generic technical knowledge, but specific, verifiable career anecdotes are harder to fabricate.

3. Monitoring for Abnormal Patterns

Microsoft recommends treating this as an insider threat scenario [1]. Monitor for:

  • Unusual data access patterns: Especially outside working hours or from unexpected locations
  • Credentials used in unexpected ways: Administrative access at unusual times, excessive permissions requests
  • Low-and-slow activity: Minimal legitimate work combined with anomalous data movement
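These signals can be roughed out from access logs in a few lines. The log schema (`user`, `timestamp`, `bytes_read`), the working-hours window, and the ratio threshold below are all assumptions to be tuned against your own environment, not a product feature:

```python
from collections import defaultdict
from datetime import datetime

# Sketch of two of the monitoring signals above: off-hours access and
# low-and-slow data movement. All field names and thresholds are assumptions.
WORK_START, WORK_END = 8, 18  # assumed local working hours

def off_hours(events):
    """Yield access events whose timestamp falls outside working hours."""
    for e in events:
        if not (WORK_START <= e["timestamp"].hour < WORK_END):
            yield e

def low_and_slow(events, work_items, min_ratio=1_000_000):
    """Flag users whose data reads dwarf their visible output (commits, tickets)."""
    bytes_by_user = defaultdict(int)
    for e in events:
        bytes_by_user[e["user"]] += e["bytes_read"]
    flagged = []
    for user, total in bytes_by_user.items():
        items = max(work_items.get(user, 0), 1)
        if total / items > min_ratio:  # > ~1 MB read per unit of real work
            flagged.append(user)
    return flagged
```

In practice these checks live inside a SIEM or UEBA product rather than a script, but the underlying logic is the same: baseline normal behavior, then flag the accounts that quietly diverge from it.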

4. Least Privilege and Segmentation

  • Principle of least privilege: Grant the minimum access necessary for the role. Review and prune permissions regularly.

  • Segment access: Critical systems and data should require additional verification beyond standard employee credentials.

  • Separate development and production environments: Ensure that development access cannot easily pivot to production systems.
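A periodic least-privilege review can be as simple as diffing what each account was granted against what it actually exercised during the review window. This is a minimal sketch that assumes you can export both sets from your IAM system; the permission names are illustrative:

```python
# Minimal least-privilege review: for each user, report granted permissions
# that were never used in the review window, as candidates to revoke.
# Permission names and the export format are illustrative assumptions.

def prune_candidates(
    granted: dict[str, set[str]], used: dict[str, set[str]]
) -> dict[str, set[str]]:
    """For each user, return granted permissions never exercised in the window."""
    unused = {user: perms - used.get(user, set()) for user, perms in granted.items()}
    return {user: perms for user, perms in unused.items() if perms}
```

Run against a quarterly access export, the output becomes the agenda for a permissions-pruning review rather than a list of automatic revocations.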

5. Vendor and Contractor Risk Management

These attacks aren't limited to direct employment. Contractors, freelancers, and third-party developers are equally viable infiltration vectors.

  • Treat external access with the same scrutiny as internal employees
  • Contractually require security notifications for personnel changes
  • Monitor third-party access to your systems and data

The Role of AI-Powered Defenses

If attackers are using AI, defenders must as well. Microsoft has introduced the Security Dashboard for AI, now in public preview, which provides a unified view of AI security posture by aggregating security, identity, and data risk across Microsoft Defender, Microsoft Entra, and Microsoft Purview [1].

For businesses without enterprise Microsoft licenses, alternatives include:

  • User and Entity Behavior Analytics (UEBA): Tools that detect anomalous behavior patterns
  • Data Loss Prevention (DLP): Monitoring unusual data movement
  • Managed Detection and Response (MDR): Services that provide 24/7 security monitoring

The Bottom Line

North Korean fake IT worker operations represent the maturation of AI as a tool for cyberthreats. The barrier to creating convincing personas has collapsed, and the economic incentive for state-sponsored actors is enormous.

For businesses, the implications are clear:

  1. Remote IT hiring requires new security protocols—standard background checks are insufficient
  2. Insider threat monitoring must expand to include "trusted" access misuse
  3. AI-powered defenses are becoming necessary to detect AI-enabled attacks
  4. The cost of prevention is far lower than the cost of remediation after a breach

The question isn't whether your business will hire remote IT workers. It's whether your hiring and security processes can detect AI-generated personas before they become trusted insiders.


FAQ

Do these fake IT workers actually do the job?

Yes and no. AI helps them "meet performance expectations" by generating code, answering technical questions, and maintaining the appearance of productivity [1]. But the real goal is sustained access for data theft and future exploitation.

How widespread is this?

Microsoft has disrupted thousands of accounts associated with fraudulent IT worker activity [1]. The scale suggests this is an organized, state-sponsored campaign, not isolated incidents.

Are small businesses at risk too?

Yes. While larger enterprises are juicier targets, small businesses are equally vulnerable—especially if they work with larger clients or handle sensitive data. Access to a small business can be a stepping stone to larger targets in their supply chain.

What should I do if I suspect a current worker is fraudulent?

Immediately:

  1. Restrict their access to sensitive systems
  2. Preserve logs and evidence
  3. Consult legal and security experts
  4. Do not confront the individual directly until you have a plan

Prevention:

  • Implement the hiring security controls outlined above
  • Treat all remote IT access with appropriate skepticism
  • Monitor for the behavioral red flags described

Is this only a North Korean problem?

No. While Microsoft's report focuses on North Korean groups (Jasper Sleet and Coral Sleet), the techniques—AI-generated personas, voice modulation, deepfake photos—are available to any threat actor. This is the new normal for hiring fraud.


Sources

  1. Microsoft Security Blog. "AI as tradecraft: How threat actors operationalize AI." https://www.microsoft.com/en-us/security/blog/2026/03/06/ai-as-tradecraft-how-threat-actors-operationalize-ai/

  2. Microsoft Security Blog. "Jasper Sleet: North Korean remote IT workers' evolving tactics to infiltrate organizations." https://www.microsoft.com/security/blog/2025/06/30/jasper-sleet-north-korean-remote-it-workers-evolving-tactics-to-infiltrate-organizations/

  3. CISA. "Insider Threat Mitigation." https://www.cisa.gov/news-events/news/understanding-and-responding-insider-threats

  4. NIST. "Workforce Management." https://www.nist.gov/cyberframework/category/workforce

  5. Reuters. "North Korean hackers use AI to target businesses." (Related reporting on Jasper Sleet operations)

  6. Microsoft Purview. "Insider Risk Management." https://learn.microsoft.com/purview/insider-risk-management

  7. Microsoft Entra. "Identity Protection." https://www.microsoft.com/security/business/identity-access/microsoft-entra-id

  8. FBI. "North Korean IT Workers and Democratic People's Republic of Korea (DPRK) Employment Fraud." https://www.fbi.gov/ic3/alertr/north-korean


Your next hire could be a state-sponsored actor using AI to blend into your team. lilMONSTER helps businesses implement hiring security protocols that detect AI-generated personas before they become trusted insiders. Get a hiring security consultation.

TL;DR

  • Some bad people use AI to pretend to be computer workers and get hired by companies
  • They use robot voices, fake photos, and computer-generated resumes
  • They don't actually do the work—they steal secrets
  • Companies need new ways to check if people are who they say they are

What's Happening?

Imagine this: Someone sends a job application to a company. They have a nice photo, a good resume, and they do great in the interview. The company hires them.

But there's a problem: That person doesn't really exist.

A group of bad people used AI (artificial intelligence) to create a fake person, trick the company, and get hired. Then they use their job to steal secrets and money.

This is happening RIGHT NOW with computer programming jobs. 🤯


Who's Doing This?

Microsoft (a really big computer company) found out that some people from North Korea are doing this [1]. They use special names:

  • Jasper Sleet
  • Coral Sleet (used to be called Storm-1877)

They're like teams of tricksters using computers to fake being workers.


How Do They Trick Companies?

Step 1: Creating a Fake Person

They use AI to make everything up:

  • 📛 Fake names - The computer suggests names that sound real
  • 📸 Fake photos - Computer-generated pictures that look like real people
  • 📄 Fake resumes - Computer-written work history that looks perfect for the job
  • 📧 Fake emails - Email addresses that match the fake name

It's like playing dress-up, but with computers instead of clothes.

Step 2: Tricking the Interview

When it's time for a video call, they use special tricks:

  • 🎤 Robot voices - Computers that change their voice to sound like someone else
  • 💬 Chat helper - AI that helps them answer questions during the interview
  • 🎥 Maybe pre-recorded videos - Sometimes they just play a video instead of talking live

The company thinks they're talking to a real person. But they're actually talking to a trickster using computer tools.

Step 3: Getting Hired (and Stealing)

Once they're "hired":

  • 💰 They get paid salary money (which goes to the bad people)
  • 🖥️ They get access to company computers and secrets
  • 📁 They steal important information
  • 🔑 They sell passwords or secrets to other bad people

They might do a little work—using AI to help them write computer code so they don't get caught. But the real goal is stealing, not working. [1]


Why Can't Companies Tell They're Fake?

Good question! Here's why regular background checks don't work:

  • Background check passes - Fake people have no criminal history because they don't exist!
  • References check - Fake references from computer-made people
  • Skills test passes - AI helps them answer technical questions
  • Looks normal on video - Computer voices and fake photos look real

It's like a really, really good costume. 👻


Signs Someone Might Be Fake

Microsoft found some clues that can give away fake workers [1]:

Weird Things in Their Computer Code

  • Using emojis as checkmarks (✅❌) inside code
  • Writing comments that sound like they're explaining themselves too much
  • Using way too many complicated words for simple things
  • Code that's more complicated than it needs to be

Weird Things About Their "Life"

  • Hardly any photos or posts on social media before a certain date
  • The same face shows up with slightly different names
  • Jobs or schools whose existence is hard to verify
  • Generic stories that could be about anyone

Weird Things When Working

  • Working at strange hours
  • Asking for access to things they don't really need
  • Moving files around for no clear reason
  • Doing very little real work

How Companies Can Stay Safe

Good companies are fighting back with new rules:

🔍 Better Checking

  • Multiple video calls - Not just one interview, but lots of talking
  • Real work tests - Watch them actually do work, not just answer questions
  • Meeting in person - Sometimes you just have to see someone face-to-face
  • Checking their whole internet life - Seeing if they exist in more than one place online

👀 Watching for Weird Stuff

  • Strange computer access - Looking at files they shouldn't need
  • Weird hours - Working at 3am when nobody else is awake
  • Moving data around - Sending files to places they shouldn't go

🔐 Being Extra Careful

  • Not giving too much power - Only giving access to what they really need
  • Checking on contractors too - Not just full-time workers, but anyone with access
  • Using computers to watch computers - AI helpers that look for fake workers

What Does This Mean for Us?

This might sound scary, but here's the good news:

  • ✅ Smart people are figuring this out - Companies like Microsoft are finding these tricks
  • ✅ Better rules are being made - New ways to check if people are real
  • ✅ Good AI is fighting bad AI - Using computer helpers to catch the tricksters

And for us regular people:

  • 📚 Learn about internet safety - Knowing tricks helps you avoid them
  • 🤝 Build real relationships - Fake people can't do friendship or teamwork well
  • 💡 Ask questions - If something seems weird, it's okay to ask why

FAQ for Curious Kids

Can't companies just spot the fakes?

They try! But the fake people are really good at tricking. It's like when someone wears a really good Halloween costume—you can't tell who's underneath until they take it off.

Is anyone stopping the bad people?

Yes! Microsoft found thousands of fake accounts and stopped them [1]. But the bad people keep trying new tricks.

Could this happen to a company near me?

Maybe. That's why companies are being extra careful now. It's like locking doors—not because you expect burglars, but because you want to be safe.

Does this mean AI is bad?

No, AI is just a tool. Think of it like a hammer. You can use a hammer to build a birdhouse OR break a window. AI can help bad people do bad things, but it also helps good people catch them!

What should I do if someone online seems fake?

TELL A GROWNUP. Don't try to figure it out yourself. If someone online seems weird or too good to be true, that's a grownup problem to solve.


Remember

The internet has good people and bad people, just like the real world. The difference is:

  • 🌎 Real world - You can see people's faces
  • 💻 Online world - People can hide who they really are

That's why we need to be extra careful and use smart rules to stay safe. 🛡️


Want to learn more about staying safe online? Ask your parents or teachers about internet safety, or check out resources from CISA—they're the experts on keeping computers safe!


Sources

  1. Microsoft Security Blog. "AI as tradecraft: How threat actors operationalize AI." https://www.microsoft.com/en-us/security/blog/2026/03/06/ai-as-tradecraft-how-threat-actors-operationalize-ai/

  2. Microsoft Security Blog. "Jasper Sleet: North Korean remote IT workers' evolving tactics to infiltrate organizations." https://www.microsoft.com/security/blog/2025/06/30/jasper-sleet-north-korean-remote-it-workers-evolving-tactics-to-infiltrate-organizations/

  3. CISA. "Cybersecurity for Kids." https://www.cisa.gov/news-events/news/cisa-launches-cybersecurity-awareness-month-kids

  4. FBI. "North Korean IT Workers Warning." https://www.fbi.gov/ic3/alertr/north-korean

Ready to strengthen your security?

Talk to lilMONSTER. We assess your risks, build the tools, and stay with you after the engagement ends. No clipboard-and-leave consulting.

Get a Free Consultation