AI Browsers Bring New Scam Frontier, Experts Warn of Scamlexity
As AI-assisted browsers automate everyday tasks, researchers warn they can be exploited by scammers, creating a new era of digital deception and the risk of widespread abuse.

AI-powered browsers are moving from concept to everyday tools, with Microsoft embedding Copilot into Edge, OpenAI testing a sandboxed browser in agent mode, and Perplexity's Comet embracing the concept of browsing for you. The shift marks a move toward agentic AI that handles searching, reading, shopping and clicking. As these assistants assume more daily tasks, security experts warn the same frictionless performance that users expect could also accelerate online deception, creating what some researchers call Scamlexity—a risk landscape in which an AI agent can be tricked and a user pays the price.
Guardio Labs researchers demonstrated a troubling scenario: when told to buy an Apple Watch on a fake Walmart storefront set up in minutes, the AI browser autofilled personal and payment details and completed the purchase, bypassing typical red flags that would alert a human shopper. In another test, researchers sent a fake Wells Fargo phishing email to the AI browser; it clicked the malicious link and even helped the user fill in login credentials on the phishing page, removing critical human checks from the loop.
Researchers say the most dangerous attacks are designed specifically for AI. In Guardio Labs' tests, a scam dubbed PromptFix appeared as a CAPTCHA page; the AI interpreted hidden instructions in the code, clicked a button and could have downloaded malware. Once an AI agent is compromised, it can send emails, share files or perform harmful tasks without the user knowing.
The growing risk is not a bug in a single app but a structural problem: attackers need only compromise one AI model to reach millions, creating a new class of threats that security experts call Scamlexity. That term captures how automation can outpace human vigilance when trust is outsourced to code that can be hijacked to do harm at scale.
To reduce risk, experts propose practical steps. First, stay in control of your AI by double-checking sensitive actions and keeping final approval in a human hand. Second, use a personal data removal service to scrub information from broker sites, limiting what the AI can leak or reuse. Third, install and keep strong antivirus software updated on all devices to catch threats the AI might miss. Fourth, consider using a password manager to generate and store strong credentials and to alert if a reused or compromised password is detected. Fifth, monitor bank and credit card statements regularly and cross-check receipts and login records if your AI assists with shopping or account management. Sixth, beware of hidden AI instructions in code; if something feels wrong, stop the task and handle it manually.
As the technology matures, industry observers say the emphasis should be on guardrails and transparency, with users retaining control over critical actions. Experts caution that the convenience of AI-powered browsing must be balanced against a growing risk of deception, and urge individuals to stay aware of how automation might be exploited.

