Amazon Blocks 1,800 Job Applications from Suspected North Korean Operatives

In a striking revelation about emerging security threats in corporate hiring pipelines, Amazon has disclosed that it blocked more than 1,800 job applications from individuals suspected of being linked to North Korean state actors. These applications were identified as fraudulent attempts by operatives affiliated with the Democratic People’s Republic of Korea (DPRK) to secure remote information technology roles at the U.S. tech giant, Amazon’s Chief Security Officer Stephen Schmidt said in a public post and during security briefings.

According to Schmidt, the surge in fake or manipulated job applications has been ongoing since April 2024, with attempts rising by roughly 27% quarter over quarter throughout 2025. The vast majority of these suspicious applications targeted positions in IT, including roles tied to software development, artificial intelligence, and systems engineering: fields that command high salaries and demand deep technical expertise.
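
For scale, growth at that rate compounds quickly. A back-of-the-envelope calculation (assuming the roughly 27% figure held across four consecutive quarters, which the disclosure does not state explicitly) puts the annual increase at about 2.6x:

```python
# Compounding 27% quarter-over-quarter growth over four quarters.
# Illustrative arithmetic only; the 27% figure is the one reported.
quarterly_growth = 1.27
annual_multiplier = quarterly_growth ** 4
print(round(annual_multiplier, 2))  # ~2.6x attempt volume over a year
```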

The suspected North Korean applicants employed a range of deceptive methods to disguise their true identities and geographic origins. In many cases, fraudsters built credible résumé histories from stolen identities, hijacking the real profiles or LinkedIn accounts of legitimate software engineers. These stolen credentials allowed the applicants to pass initial screening stages designed for remote workers.

Other applicants used fabricated details enhanced by AI tools, including polished résumés, professional online personas, and falsified academic credentials. Security teams also encountered hybrid strategies in which operators collaborated with U.S.-based accomplices who hosted company-issued laptops in so-called “laptop farms,” creating the superficial appearance that the workers were local and concealing the true remote origin of the activity.

Amazon’s security staff relied on advanced detection measures that extended beyond traditional résumé checks. One notable case involved keystroke latency anomalies: unusually long delays between input events that suggested the worker was relaying keystrokes from a distant location rather than typing at a local home office. In this instance, response latencies exceeded 110 milliseconds, far above what would be expected from a legitimate U.S.-based remote employee. This subtle but telling indicator triggered a deeper investigation and the eventual removal of the impostor from Amazon’s systems.
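
Amazon has not published its detection tooling, but the idea behind such a check is straightforward. The sketch below is a minimal illustration, assuming per-keystroke latency telemetry in milliseconds and borrowing the 110 ms figure from the reported case as a threshold; the function name and session data are hypothetical.

```python
from statistics import median

# Hypothetical threshold (ms): the reported case cited latencies above
# ~110 ms as far beyond what a local U.S.-based worker would exhibit.
LATENCY_THRESHOLD_MS = 110

def flag_suspicious_session(latencies_ms: list[float]) -> bool:
    """Flag a session whose typical input latency suggests keystrokes
    are being relayed from a distant machine.

    latencies_ms: per-keystroke delays between the input event and the
    host's acknowledgement, in milliseconds (assumed telemetry).
    """
    if not latencies_ms:
        return False
    # Use the median so a few network hiccups don't trigger a false alarm.
    return median(latencies_ms) > LATENCY_THRESHOLD_MS

# Example: a local worker vs. a relayed session (synthetic data)
local_session = [18, 22, 25, 19, 30, 21]
relayed_session = [142, 158, 135, 171, 149, 160]
print(flag_suspicious_session(local_session))    # False
print(flag_suspicious_session(relayed_session))  # True
```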

Security teams also flagged inconsistencies such as improperly formatted phone numbers, implausible educational histories, and language usage patterns that did not align with American norms. These indicators, combined with AI-driven analytics and human vetting, helped reduce the risk of a fraudulent hire slipping through the recruitment process.
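
None of these screening rules are public, but the phone-format check is easy to illustrate. The sketch below validates a claimed U.S. number against the North American Numbering Plan (NANP) pattern; the regex and function are illustrative assumptions, not Amazon’s actual logic.

```python
import re

# NANP format: a 3-digit area code and 3-digit exchange (neither may
# start with 0 or 1) followed by a 4-digit subscriber number.
NANP_PATTERN = re.compile(
    r"^\+?1?[-. ]?\(?([2-9]\d{2})\)?[-. ]?([2-9]\d{2})[-. ]?(\d{4})$"
)

def phone_format_flags(phone: str) -> list[str]:
    """Return red flags for a phone number claimed to be U.S.-based."""
    flags = []
    if not NANP_PATTERN.match(phone.strip()):
        flags.append("does not match U.S. (NANP) number format")
    return flags

print(phone_format_flags("(206) 555-0143"))   # [] -> plausible U.S. number
print(phone_format_flags("+850 2 381 2345"))  # flagged: non-U.S. format
```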

Experts believe these attempts are part of a wider strategy by Pyongyang to exploit the proliferation of remote work to secure foreign income and potentially gain access to sensitive corporate systems. The salaries earned by fake employees are believed to be funneled back to the North Korean regime, possibly supporting state programs, including weapons development. 

Amazon’s experience highlights a growing challenge for businesses worldwide as they fortify their hiring pipelines against sophisticated identity fraud and nation-state intrusion. Security professionals emphasize that AI-assisted deception and remote hiring pose risks that extend beyond simple fraudulent applications, potentially exposing proprietary data and strategic assets to stealthy adversaries.

As the landscape of remote work continues to evolve, companies are increasingly urged to adopt multi-layered verification measures, combining AI analytics, manual reviews, and behavioral monitoring to safeguard their operations against state-linked threats and other high-risk applicants.
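
As a closing illustration of what “multi-layered” can mean in practice, the sketch below combines several red-flag signals into a single score that decides whether an application is escalated to human review. The signals, weights, and threshold are all assumptions for illustration, not a disclosed methodology.

```python
# Illustrative weights for combining verification signals.
SIGNAL_WEIGHTS = {
    "latency_anomaly": 0.4,       # behavioral monitoring (e.g., keystroke latency)
    "phone_format_invalid": 0.2,  # automated application checks
    "education_implausible": 0.2,
    "language_mismatch": 0.2,     # AI-driven analysis of writing style
}

REVIEW_THRESHOLD = 0.5  # hypothetical cutoff for escalation to human vetting

def risk_score(signals: dict[str, bool]) -> float:
    """Weighted sum of triggered red flags, in the range [0, 1]."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

applicant = {"latency_anomaly": True, "phone_format_invalid": True}
score = risk_score(applicant)
print(score, "escalate" if score >= REVIEW_THRESHOLD else "pass")  # 0.6 escalate
```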
