Gmail’s autocomplete legitimizing the Indian-English ‘revert back’ and a parallel to fake news

The Indianization of Google is now complete. Take a look at what Gmail autocomplete suggested!

I intended to write ‘revert before EoD’ and as soon as I typed ‘b’, Gmail added ‘ack to you’, most probably because millions of Indians (and perhaps people outside India too) have been using that combination! 🙂

I also use the Grammarly plugin, and it tells me this usage is wrong!

Incidentally, the very word ‘revert’ as a replacement for ‘reply’ is wrong, but Google has perhaps already internalized that usage, much like how dictionaries have warmed up to ‘prepone’.

See Merriam-Webster’s ‘Words We’re Watching’ note on ‘prepone’. And a broader note on Indian English usages, including the fact that ‘prepone’ has been added to the Oxford English Dictionary.

Language evolves as people use it and find newer ways of expressing the same things, shaped by regional nuances. So the fact that millions of other Indians use ‘revert back’ led Gmail to assume I wanted to complete it that way too. Someday, I reckon, Gmail may suggest ‘be rest assured’ as well.

But this process, where a large enough group of people uses a particular way of saying something in a language until it becomes the norm, enough to get into the dictionary, brought to my mind another parallel: fake news and urban legends.

Consider how ‘fake news’ works. Someone with an agenda creates a piece of news that is patently false (imaginative, but untrue) and lets a LOT of other like-minded people (who have bought into the same agenda) spread it far and wide. AI-based social media algorithms may pick it up (a lot of people are saying it, so it may be right, goes the logic) and beam it further to even more people. Now, something that was fake gains credibility beyond the agenda-driven cliques, thanks to AI and algorithms.

AI learns. Unless the AI has been taught to question everything and use filters to separate fake from real, it may simply take one signal (a lot of people saying it) and assume it to be true, until a human intervenes by adding a note that the news is fake. Alternatively, the AI needs to approach every piece of news with multiple contexts, like the source: it needs to look at every single source spreading it, each source’s antecedents and past shares, and correlate these with the likelihood of the new piece of news being fake. If some mainstream media publications have fallen prey to the same fake news, however, that makes things all the murkier.
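To make that concrete, here is a minimal sketch, in Python, of what such multi-signal scoring might look like. Everything in it, the signal names, the weights, the popularity cap, is my own illustrative assumption; it is not how Gmail, Facebook, or any real platform actually scores news.

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    reputation: float             # 0.0 (unknown) to 1.0 (well-established); assumed scale
    past_false_share_rate: float  # fraction of this source's past shares later flagged as fake

def credibility_score(sources: list[Source], share_count: int) -> float:
    """Naive multi-signal score: popularity alone is NOT enough.

    Combines (a) how reputable the spreading sources are and
    (b) how often those sources have shared fake news before.
    The weights are illustrative assumptions, not a real algorithm.
    """
    if not sources:
        return 0.0
    avg_reputation = sum(s.reputation for s in sources) / len(sources)
    avg_false_rate = sum(s.past_false_share_rate for s in sources) / len(sources)
    # Popularity contributes only weakly; source history dominates.
    popularity = min(share_count / 1_000_000, 1.0)
    return 0.2 * popularity + 0.5 * avg_reputation + 0.3 * (1.0 - avg_false_rate)

# A story shared a lot, but only by a source with a poor track record,
# still scores low: the lone "lots of people are saying it" signal is overridden.
viral_but_dubious = credibility_score(
    [Source("agenda-blog", reputation=0.1, past_false_share_rate=0.8)],
    share_count=2_000_000,
)
print(f"{viral_but_dubious:.2f}")  # 0.31 -- low despite two million shares
```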

Ditto for urban legends. Till someone categorically counters one as false, it can spread comfortably, thanks to both gullible people and clueless AI/algorithms.

Doesn’t that sound a lot like Indian, or any regional, variations of a language, where what started out as ignorant or wrong grammar/usage eventually became right and acceptable? I fully understand that one is a linguistic use case and the other is a malicious bending of truth for a purpose, but the process of legitimizing both seems the same. And in the Gmail autocomplete example, AI literally supports the legitimization.

In an ideal scenario, Gmail should perhaps offer suggestions instead of leading one to the default ‘wrong’ option (I use ‘wrong’ contextually here, since ‘revert’ or ‘revert back’ was wrong at one point in time, before we Indians made it acceptable). So, for instance, in this example of ‘revert back’, Gmail should ideally offer two branches of suggestions –
1. Accepted in Indian English
2. Try using ‘reply’ instead of ‘revert’, as per <cite a dictionary>

And Gmail can still learn from how users choose. That may be a stronger feedback mechanism for the AI to understand usage in a region than offering just one default suggestion and exacerbating the problem. If Facebook and Twitter are using AI to offer counter-suggestions to fake news, this approach may seem legitimate too.
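As a thought experiment, that two-branch flow with a feedback loop could be sketched like this; the phrase table, the labels, and the counting mechanism are entirely hypothetical and not Gmail’s actual behavior or API.

```python
from collections import Counter

# Hypothetical two-branch suggestion table: regional variant vs. dictionary-preferred form.
SUGGESTIONS = {
    "revert back": [
        ("revert back", "Accepted in Indian English"),
        ("reply", "Preferred per standard dictionaries"),
    ],
}

choice_counts: Counter[tuple[str, str]] = Counter()

def suggest(phrase: str) -> list[tuple[str, str]]:
    """Offer both branches instead of silently completing one default."""
    return SUGGESTIONS.get(phrase, [(phrase, "No alternative known")])

def record_choice(phrase: str, chosen: str) -> None:
    """Feedback loop: which branch users actually pick is a regional usage signal."""
    choice_counts[(phrase, chosen)] += 1

for option, label in suggest("revert back"):
    print(f"{option!r} ({label})")
record_choice("revert back", "revert back")  # simulate an Indian-English user's pick
print(choice_counts)
```

The design point is that the chosen branch, not the offered default, becomes the training signal, so the AI learns what a region actually prefers rather than reinforcing its own first guess.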

The crux of this comparison, between wrong grammar becoming the norm and fake news, is one of intent. The general assumption is that bad or wrong grammar comes out of ignorance and is inadvertent, while fake news is intentional.

The evolution of languages to include local variants in grammar and spelling (colour vs. color, for instance) is usually inadvertent and benign. Such variants start out, ignorantly, as mistakes (mistakes only when pointed out; in the writers’ heads they are right, because to them it sounds right). But when others see the usage, assume that it is the right way, and follow the same mistake, it becomes a norm.

In the case of fake news, the earlier thought was that the people who spread it (not those who create it; they do so with malicious intent) were doing it ignorantly, because they truly believed in that piece of news and ‘didn’t know better’. But a more recent school of thought is that most carriers of fake news do know, to some extent, that the news they are reading is imaginary and created for a purpose, with an agenda; but since it aligns with their own thought process, they actually feel good sharing it onward. In other words, they like the damage it may cause (the malintent with which it was created in the first place) and they think the subject of the fake news ‘deserves it’.

So, using that logic, what if a group of ethical researchers ‘games’ the Gmail AI by getting a sufficiently large number of people, over a period of time, to use ‘be rest assured’ in their Gmail communication? Or, if not ethical researchers, how about a large group of rabid nationalists from any one country picking a ‘wrong’ usage of a phrase prevalent in that country? Would the AI adopt it as a new usage (considering ‘be rest assured’ is already in vogue in India, a LOT)?
