Introduction
Artificial intelligence has made scams more sophisticated and harder to detect than ever before. While scammers have always targeted older adults, AI tools now allow them to create highly convincing impersonations, personalized messages, and urgent scenarios that can fool even cautious individuals.
This page will help you understand the new AI-powered scam techniques and, more importantly, give you practical strategies to protect yourself and your loved ones. The good news is that while the technology has changed, the fundamental principles of staying safe remain the same: pause, verify, and trust your instincts when something feels wrong.
What You Need to Know
AI has given scammers powerful new tools, and older adults are frequently targeted. Understanding these new capabilities is your best defense.
Voice cloning is perhaps the most alarming development. With just a few seconds of audio—grabbed from a voicemail, video, or social media—AI can clone someone's voice convincingly. Scammers use this to impersonate family members in distress, calling to say they've been in an accident or arrested and need money immediately. These calls can be terrifyingly convincing.
AI-generated phishing has become more sophisticated. Scam emails and texts used to be easy to spot through poor grammar and generic language. Now AI can generate polished, personalized messages that are much harder to identify as fraudulent.
Deepfake video is still relatively rare in personal scams but is improving rapidly. Video calls from someone who appears to be a trusted person but isn't are no longer science fiction.
AI-powered chatbots can maintain convincing conversations, making romance scams and customer service impersonation more effective.
The core scam tactics remain the same: urgency, fear, authority, and appeals to emotion. But the packaging is much more convincing.
What You Need to Do
Establish a family code word. Agree on a secret word or phrase with close family members to verify identity during an emergency call. If someone claiming to be your grandchild can't provide the code word, it's not them.
Always verify through a separate channel. If you receive an alarming call, text, or email, hang up and contact the person or organization directly using a number you know to be legitimate—not one provided in the suspicious message.
Be deeply skeptical of urgency. Scammers create panic because panic prevents clear thinking. Any legitimate emergency can wait five minutes for you to verify what's happening.
Don't trust caller ID. Widely available technology makes it easy to "spoof" phone numbers so calls appear to come from trusted sources. Caller ID is not verification.
Limit your voice and video online. The less audio and video of you that exists publicly, the harder your voice and likeness are to clone. Consider tightening privacy settings on social media.
Talk about scams openly. Shame keeps people silent after being scammed, which helps scammers. Discuss these risks with friends and family. If you or someone you know is targeted, reporting it helps everyone.
When in doubt, pause. No legitimate organization will be angry if you take time to verify. Anyone pressuring you to act immediately is likely a scammer.
Articles on AI Scams
Videos on AI Scams
Infographic from NotebookLM