Cybersecurity experts have issued a warning to Gmail’s 1.8 billion users as hackers increasingly use artificial intelligence (AI) to bypass security measures, including two-factor authentication (2FA). The latest wave of scams involves AI-generated phishing attacks that trick users into revealing sensitive information.
Reports indicate that cybercriminals are leveraging AI-powered tools to create highly convincing emails, phone calls, and fake login pages. Because these scams closely mimic genuine communications, users struggle to tell them apart from the real thing. Even traditional security advice, such as checking for grammatical errors or suspicious links, is becoming less effective against these AI-enhanced attacks.
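One reason "check the link" advice still has some value: fake login pages often live on lookalike domains that differ from the real one by a character or two. The sketch below is a minimal, illustrative heuristic (not a product or Google feature) that flags a sender domain within a small edit distance of a trusted domain; the trusted-domain list and distance threshold are hypothetical examples.

```python
# Illustrative sketch: flag sender domains deceptively close to trusted ones.
# The TRUSTED list and the threshold of 2 are hypothetical example values.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via standard dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

TRUSTED = ["google.com", "microsoft.com"]  # example allow-list

def looks_like_spoof(sender_domain: str) -> bool:
    """True if the domain is close to, but not exactly, a trusted domain."""
    d = sender_domain.lower()
    return any(0 < edit_distance(d, t) <= 2 for t in TRUSTED)

print(looks_like_spoof("gooogle.com"))   # near-miss of google.com -> True
print(looks_like_spoof("google.com"))    # exact match -> False
print(looks_like_spoof("example.org"))   # unrelated domain -> False
```

Heuristics like this catch only crude typosquats; they say nothing about deepfake calls or AI-written email bodies, which is why the experts quoted here emphasize layered defenses rather than link inspection alone.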
A major concern is the ability of hackers to bypass 2FA, a security measure widely used to protect online accounts. According to cybersecurity researchers, attackers use AI to manipulate victims into providing authentication codes, often by posing as trusted entities such as Google or Microsoft. Some scams involve deepfake audio, where victims receive realistic-sounding phone calls from supposed customer service representatives asking for verification details.
Google and the FBI have acknowledged the growing threat, advising users to remain vigilant against unexpected security alerts, emails requesting login credentials, and phone calls urging immediate action. Security experts recommend enabling hardware-based security keys, regularly updating passwords, and being cautious of unsolicited messages.
As AI-driven cyberattacks become more advanced, tech companies are working to strengthen security measures. However, experts warn that user awareness remains the first line of defense against these evolving threats.