As scammers ramp up their activity during the holiday season, Gmail's artificial intelligence (AI) protections have helped reduce phishing attempts reaching users' inboxes by 35% compared to the same period last year.
Millions of unwanted and potentially dangerous messages are blocked before they ever reach inboxes.
Over the past year, Google has developed several AI models to improve Gmail's protections, including a large language model (LLM) trained specifically to strengthen Gmail's defenses against phishing.
Additionally, just ahead of Black Friday in late November, Gmail rolled out a new AI model that acts as a “watchdog” over its existing AI defenses: when a dangerous message is flagged, it instantly evaluates hundreds of threat signals and deploys the appropriate protections.
Gmail's AI improvements are not only a step forward in protecting users from threats but also a reflection of Google's long-term commitment to creating a safer online environment.
As scammers grow more sophisticated and evolve their tactics, Gmail's AI-based defenses will continue to improve to stay ahead of emerging threats.
Google also recommends that users stay vigilant, carefully check the origin of emails, and report suspicious messages to help build a safer online space.