Alarming rise in AI-powered scams: Microsoft reveals $4 billion in thwarted fraud
AI-driven scams are advancing quickly as cybercriminals harness new technologies to target victims, according to Microsoft’s latest Cyber Signals report.
Over the past year, the company says it has stopped $4 billion worth of fraud attempts, blocking roughly 1.6 million bot sign-up attempts every hour — highlighting just how large this threat has become.
The ninth edition of Microsoft's Cyber Signals, titled “AI-powered deception: Emerging fraud threats and countermeasures,” outlines how artificial intelligence has lowered the barrier to entry for cybercrime, allowing even inexperienced actors to create sophisticated scams with ease.
Tasks that once took scammers days or weeks can now be completed in just minutes.
The broadening access to fraud tools marks a major shift in the criminal ecosystem, impacting both consumers and businesses around the globe.
The rise of AI-enhanced cyber scams

Microsoft’s report details how AI can now scrape the web for company data, enabling cybercriminals to assemble detailed profiles of potential targets and craft convincing social engineering attacks.
Scammers can now lure victims into complex fraud operations using AI-boosted fake product reviews and AI-generated storefronts, often complete with fabricated business records and customer feedback.
Kelly Bissell, Corporate Vice President of Anti-Fraud and Product Abuse at Microsoft Security, notes the scale of the threat. “Cybercrime is a trillion-dollar issue, and it’s been growing every year for the past three decades,” he says in the report.
“I believe we have a real opportunity today to embrace AI faster, so we can close exposure gaps quickly. AI now has the ability to make an impact at scale, helping us build security and fraud defenses into products far more efficiently.”
Microsoft’s anti-fraud team reports that AI-driven fraud is a global issue, with heavy activity out of China and Europe — especially Germany, given its position as one of the European Union’s biggest e-commerce hubs.
The report emphasizes that the larger a digital marketplace grows, the higher the volume of attempted fraud it attracts.
E-commerce and job scams topping the list

Two areas seeing particularly troubling AI-enhanced scams are e-commerce and job recruitment. In the e-commerce world, fraudulent websites are now spun up within minutes, even by people with little technical expertise.
These sites often impersonate legitimate businesses, using AI-generated descriptions, images, and reviews to trick consumers into thinking they’re buying from trusted sellers.
Adding to the deception, AI-powered chatbots can now handle customer inquiries, using scripted excuses to stall chargebacks and crafting convincing responses to complaints, helping scam sites appear professional and trustworthy.
Job seekers are also prime targets. According to the report, generative AI has made it easier than ever for scammers to post fake job listings across recruitment platforms. Criminals can create fake recruiter profiles with stolen credentials, draft fake job postings using AI, and send out phishing emails at scale.
AI-driven interviews and automated email replies further boost the scams’ believability. “Fraudsters commonly request sensitive personal details, like resumes or even bank account numbers, under the pretense of verifying information,” the report says.
Warning signs include unexpected job offers, requests for upfront payments, and communication through casual platforms like SMS or WhatsApp.
Microsoft’s fight against AI fraud

In response to the growing threat, Microsoft says it’s taking a multi-layered approach across its products and services. Microsoft Defender for Cloud offers threat protection for Azure environments, while Microsoft Edge now features typo protection and domain impersonation detection. The report notes that Edge uses deep learning models to help steer users away from scam websites.
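Microsoft has not published the details of Edge’s deep learning approach, but the basic idea behind domain impersonation detection can be illustrated with a much simpler heuristic. The Python sketch below is an illustrative assumption, not Microsoft’s actual method: it flags URLs whose domain sits within a small edit distance of a known brand domain, the classic typosquatting pattern that typo protection targets.

```python
from urllib.parse import urlparse

# Hypothetical watchlist for illustration; a real system would use a
# far larger curated list, or a trained model as Edge reportedly does.
KNOWN_BRANDS = {"microsoft.com", "paypal.com", "amazon.com"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def looks_like_typosquat(url: str, max_distance: int = 2) -> bool:
    """Flag domains that are close to, but not exactly, a known brand."""
    host = urlparse(url).hostname or ""
    domain = ".".join(host.split(".")[-2:])  # crude registrable-domain guess
    for brand in KNOWN_BRANDS:
        d = edit_distance(domain, brand)
        if 0 < d <= max_distance:
            return True
    return False

print(looks_like_typosquat("https://micros0ft.com/login"))  # True
print(looks_like_typosquat("https://www.microsoft.com"))    # False
```

A lexical check like this is only one weak signal; production-grade protection combines it with others, such as page content, certificate data, and domain age, which is where machine-learned models earn their keep.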
Windows Quick Assist has also been upgraded with new warning messages to alert users to possible tech support scams before they grant remote access. Microsoft says it now blocks an average of 4,415 suspicious Quick Assist connections every day.
As part of its Secure Future Initiative (SFI), Microsoft has introduced a new fraud prevention policy. Beginning in January 2025, all Microsoft product teams must conduct fraud risk assessments and build fraud prevention measures into their designs, aiming to make products “fraud-resistant by design.”
With AI-powered scams evolving rapidly, Microsoft stresses that consumer vigilance remains key. The company advises users to watch out for urgent pressure tactics, verify the authenticity of websites before making purchases, and never share sensitive information with unknown sources.