You Have Been Made

Dear Impossible Readers,

Do you know what I find fascinating about scammers? They are incredibly creative: from the weirdest porn fetishes you did not even know existed to the most useless products, they make you wonder whether they have ever worked in pornography, telemarketing, or infomercials. It is the most beautiful love-hate relationship, ranging from the ridiculous email that makes you laugh so hard you fall out of your chair to the heartbreaking story you read online that makes you wish you were an octopus so you could strangle eight scammers at once.

Language was once a key sign of fraud: poor grammar, awkward phrasing, and spelling mistakes gave scams away. That defence is weakening, especially in English. Most large language models are trained predominantly on English data, so they produce fluent, idiomatic, and contextually suitable text in English better than in any other language, which makes English-language scams harder to detect. Writing that once required a skilled scammer can now be produced quickly, at scale, and at low cost. As a result, the linguistic mistakes that once helped identify scams are becoming less common, removing an important safeguard.

This shift matters because English dominates global business, technology, finance, and online platforms, making it the most adaptable language for fraud and the one that reaches the broadest audience. AI removes the language barrier, so polished English no longer signals legitimacy, only access to better tools. This is already evident in corporate scams, where phishing emails now mimic professional tone, hierarchy, and timing, sounding calm and courteous and blending into routine communication.

This development may make younger users especially vulnerable. They tend to trust polished, correct language online, and AI now supplies exactly that, so the fluency that once served as a cue of legitimacy boosts trust rather than suspicion.

Tone introduces a second risk. AI does not just correct grammar; it also enhances politeness, confidence, and emotional steadiness. Scams no longer need to rely on urgency or aggression; they can be patient and helpful. In English, politeness conveys authority, lowering suspicion even when a request is unusual and creating a false sense of security. For years, users were advised to watch for spelling and grammar errors as signs of fraud, but AI has rendered that advice obsolete. Clean language now signals automation, not legitimacy, and the real risk is that scams become harder to distinguish from genuine communication.

Since language no longer reliably reveals scammers, detection must depend on behaviour and context: not how a message sounds, but what it requests, the pressure it applies, and the norms it breaches. Future scam awareness should focus on recognising patterns of manipulation rather than just poor English, and it should be rolled out soon in all official languages.

AI is rapidly closing the gap that once kept users safe, with English at the centre of the change. When scams appear more professional and less suspicious, the danger of fraud rises and misplaced confidence in false sources grows.

In our comfortable modern world, scammers keep our fight-or-flight response active. I doubt they will ever stop, so we should use these experiences to strengthen our awareness.

The devil is no longer in the spelling and grammar. It is in your gut.

The octopus hiding in plain sight,
Yours Possibly

Join Impossibly Possible!

Subscribe or follow Impossibly Possible on LinkedIn or Medium.
