Google is turning Chrome into a frontline defense against online fraud, using on-device artificial intelligence to spot tech support scams in real time before users hand over money or control of their machines. The company is pairing this new protection with an unusually explicit promise of control, letting people remove the local AI models that power the feature if they are not comfortable with them living on their laptops and phones. That mix of aggressive security and opt-out flexibility is emerging as a template for how browsers might use AI without deepening the privacy backlash already surrounding the technology.
How Chrome’s on-device AI scam shield actually works
The new protection is built around a lightweight large language model that runs directly on a user’s device and watches for the telltale patterns of tech support fraud. When the system decides a page looks suspicious, based on triggers such as how the content behaves and what it tries to make the user do, Chrome interrupts the session with a full-page warning, called an interstitial, that explains the risk and offers a way back to safety. The goal is to break the psychological spell that scammers rely on, where a fake Windows alert or a bogus antivirus pop-up pressures someone into calling a number or installing remote access software. By inserting a calm, system-level message at that moment, the browser can remind people that legitimate companies do not demand gift cards or instant bank transfers to fix imaginary infections.
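To make that detect-then-interrupt flow concrete, here is a minimal sketch in TypeScript. It is not Chrome's actual code; every name (`PageSignals`, `scoreScamSignals`, `showInterstitial`) and the threshold value are invented for illustration, and a real system would rely on the language model's judgment rather than a hand-written weight table.

```typescript
// Hypothetical sketch of the detect-then-interrupt flow described above.
// None of these names or values correspond to Chrome's real internals.

interface PageSignals {
  hasCountdownTimer: boolean;   // fake "your files will be deleted in 59s" timers
  mimicsSystemDialog: boolean;  // content styled to look like an OS alert
  demandsPhoneCall: boolean;    // "call this toll-free number now"
  pushesRemoteAccess: boolean;  // instructs installing remote desktop tools
}

// Assign each social-engineering trick a weight and sum them into a score.
function scoreScamSignals(signals: PageSignals): number {
  let score = 0;
  if (signals.hasCountdownTimer) score += 0.3;
  if (signals.mimicsSystemDialog) score += 0.3;
  if (signals.demandsPhoneCall) score += 0.25;
  if (signals.pushesRemoteAccess) score += 0.25;
  return score;
}

// If the combined score crosses a threshold, interrupt with a full-page
// interstitial instead of letting the scam keep the user's attention.
function maybeInterrupt(signals: PageSignals): void {
  const SCAM_THRESHOLD = 0.5; // illustrative cutoff, not a real Chrome value
  if (scoreScamSignals(signals) >= SCAM_THRESHOLD) {
    showInterstitial(
      "This site may be a tech support scam. Legitimate companies do not " +
        "demand gift cards or instant transfers to fix your device."
    );
  }
}

function showInterstitial(message: string): void {
  // Stand-in for the browser-level warning page; here we just log it.
  console.warn(`[interstitial] ${message}`);
}
```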
Under the hood, the model is tuned to recognize the social engineering tricks that have become standard in these schemes, such as countdown timers, fake system dialogs and instructions to disable real security tools. Earlier work on this feature described how the on-device Gemini Nano system analyzes web content locally to detect known scam indicators, then, if a threat is suspected, Chrome sends only a minimal signal to its servers to fetch the right warning template. That design keeps the heavy analysis on the user’s machine while still letting Google update the guidance people see as scammers change tactics, a balance that tries to combine privacy with up-to-date intelligence.
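The privacy-relevant detail is what crosses the network boundary. The sketch below, again with invented names and an invented endpoint, shows the split the design describes: the page content is classified entirely on the device, and only a small category token is sent to the server so it can return current warning copy.

```typescript
// Minimal sketch of the local-analysis / minimal-signal split described above.
// The endpoint, types, and classifier are assumptions for illustration only;
// in Chrome, the local classification is done by Gemini Nano, not a regex.

type ScamCategory = "tech_support" | "fake_antivirus" | "none";

// Stand-in for the on-device model: page text goes in, a category comes out.
function classifyLocally(pageText: string): ScamCategory {
  if (/call .* toll-free|remote access|your computer is infected/i.test(pageText)) {
    return "tech_support";
  }
  return "none";
}

async function checkPage(pageText: string): Promise<void> {
  const category = classifyLocally(pageText); // raw content never leaves the device

  if (category !== "none") {
    // Only the category token is sent; the server replies with up-to-date
    // warning copy, letting guidance evolve as scam tactics change.
    const res = await fetch("https://example.invalid/warning-template", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ category }),
    });
    const template: { message: string } = await res.json();
    console.warn(template.message);
  }
}
```

The trade-off in this shape of design is that Google never sees the page, but it still controls the wording of the warning, which is what lets the guidance stay current without turning the feature into a page-logging service.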
Gemini Nano and the rise of on-device browser AI
Chrome’s scam detector is part of a broader shift inside Google toward running more AI directly on phones and PCs instead of in distant data centers. With the launch of Chrome version 137, the browser began using an on-device Gemini Nano model to provide an additional layer of protection against tech support scams, effectively turning the browser into a constantly running security scanner that understands language and context, not just URLs and file hashes. Because the model lives on the device, it can react instantly when a page starts spawning pop-ups or demanding remote access, without waiting for a cloud service to respond or logging every page visit to a central server.
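Why locality buys speed is easiest to see in code. This illustrative snippet uses a standard DOM `MutationObserver` to react the instant a suspicious element appears; Chrome's real hook into page content is internal and almost certainly different, and the `looksLikeFakeAlert` heuristic is a crude stand-in for the model's judgment.

```typescript
// Illustration of the latency argument: a local check runs alongside the
// page and fires the moment suspicious elements appear, with no network
// round trip. The selectors and heuristic are assumptions, not Chrome's hooks.

function looksLikeFakeAlert(node: Element): boolean {
  // Crude stand-in for the on-device model's judgment.
  return /your (pc|computer) is infected|call now/i.test(node.textContent ?? "");
}

const observer = new MutationObserver((mutations) => {
  for (const mutation of mutations) {
    for (const added of mutation.addedNodes) {
      if (added instanceof Element && looksLikeFakeAlert(added)) {
        // A cloud-based check would insert a round trip here; a local model
        // can react before the scam page finishes loading its overlays.
        console.warn("Suspicious overlay detected on-device, no server call made.");
      }
    }
  }
});

observer.observe(document.body, { childList: true, subtree: true });
```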
Google has been explicit that this is part of a larger strategy to use AI to fight online scams across its products, including Search, Chrome and Android, where AI already helps block millions of scammy search results and app installs before they reach users. In that context, putting Gemini Nano inside the browser is less a one-off experiment and more a sign that Google sees local models as a standard security component, similar to how built-in password managers and phishing filters became default features over the past decade. The difference now is that the same kind of model that can summarize an email or draft a document is also being trained to recognize the language of fraud.
New Chrome controls: powerful AI, explicit opt-outs
What makes this rollout unusual is that Google is not just adding AI to Chrome, it is also giving people a clear way to turn it off. Reporting on the latest update describes a new Chrome feature that lets users remove the on-device AI models that power scam detection, with Google signaling that similar local systems could be put to other uses in the future. In practice, that means the same settings area that controls security features now includes an option to delete the local model files entirely, not just toggle a checkbox, which is a stronger form of opt-out than many AI tools currently offer.
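The distinction between the two strengths of opt-out can be sketched directly. Everything below is hypothetical: the settings shape, the `modelPath` field, and the idea that a path-based deletion mirrors what Chrome does internally are all assumptions used to make the contrast visible.

```typescript
// Hypothetical sketch contrasting the two opt-out strengths the article
// describes: flipping a feature flag versus deleting the model from disk.
// The paths and settings shape are invented for illustration.

import { rm } from "node:fs/promises";

interface ScamDetectionSettings {
  enabled: boolean;
  modelPath: string; // where the on-device model weights live
}

// Weak opt-out: the flag flips, but the model files stay on disk.
function disableFeature(settings: ScamDetectionSettings): void {
  settings.enabled = false;
}

// Strong opt-out: the local model files themselves are removed, which is
// the kind of control the new Chrome setting is reported to expose.
async function deleteLocalModel(settings: ScamDetectionSettings): Promise<void> {
  settings.enabled = false;
  await rm(settings.modelPath, { recursive: true, force: true });
}
```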
Other coverage confirms that Chrome now lets you delete the local AI models that power the Enhanced Protection feature, which was upgraded with AI capabilities to spot scams more effectively. A separate report notes that Chrome users will soon be able to opt out of an AI model added as part of the enhanced protection mode, confirming that the model is hosted on the user’s device. Taken together, these changes show Google trying to preempt criticism by making the AI visible and removable, rather than burying it as an invisible background process.
Privacy backlash and why local AI still worries people
Even with on-device processing, the idea of a browser-level AI model quietly analyzing every page can feel intrusive, and Google is clearly aware of that tension. One analysis flags the word “like” in Google’s own description of how on-device AI can be used for features like scam detection, because that wording leaves the door open to a wide range of other uses in the future. While scam detection itself is relatively uncontroversial, the same report stresses that on-device AI can be turned to a wide range of tasks, and that users may want to keep security features while disabling more experimental assistants, which is why it matters that each capability can be toggled on its own.
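A settings model with per-capability switches, rather than one master AI toggle, might look something like the sketch below. The field names are invented; the point is only that each on-device use gets its own independent flag.

```typescript
// Sketch of the per-capability control the analysis argues for. Every
// field name here is hypothetical, not a real Chrome setting.

interface OnDeviceAiSettings {
  scamDetection: boolean; // security feature many users will want to keep
  pageSummaries: boolean; // more experimental assistant-style feature
  writingHelp: boolean;   // likewise independent of the security features
}

const settings: OnDeviceAiSettings = {
  scamDetection: true,  // keep the protection...
  pageSummaries: false, // ...while opting out of everything else
  writingHelp: false,
};
```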
There is also a broader context of skepticism around how Google handles data for AI training and analytics. Bleeping Computer, for instance, has highlighted that Google will discontinue its dark web report feature in January 2026, which added to concerns about how the company balances user privacy amid growing training datasets like Common Crawl. Against that backdrop, even a local model that never uploads raw page content can raise questions about what telemetry is sent back, how long it is stored and whether it might eventually be used to refine future models. The new Chrome controls do not answer every one of those questions, but they at least give users a visible switch to flip if they decide the trade-off is not worth it.
What this means for everyday users and the future of browser security
For most people, the practical impact of Chrome’s AI scam detector will show up in a few specific scenarios that have become depressingly common. Someone might be browsing a recipe site when a malicious ad redirects them to a full-screen warning claiming their Windows 11 laptop is infected and instructing them to call a toll-free number, or a retiree might see a fake Apple support page that urges them to install a remote desktop tool to “fix” their MacBook. In those moments, the on-device LLM can recognize the pattern and trigger an interstitial explaining that these kinds of pop-ups and extensions are tech support scams, intervening the moment the user lands on a suspicious page. That kind of timely interruption can be the difference between a close call and a drained bank account.