How should we react to the use of AI?

So much has been said and written about how we could use AI in assorted tasks. But let me turn that focus away from using and towards reacting. How should we react to AI when we come across its usage?

Allow me to explain with a few use cases. Let me start with broadcast communication, or what can be termed as one-to-many communication.

Advertising

Next, consider the use of AI to generate images of people and other things in advertisements.

Take, for example, the ads by Tata Cliq that I have been tracking in newspapers for quite some time. It looks like they have fully moved to using AI to generate images.

Is there a note or a disclaimer that they (either Tata Cliq or its ad agency) have used AI to generate the models? No, but is it required? I don’t think so – it’s their ad and they can use anything they deem fit: photos of real models, hand-drawn sketches, cartoons, or AI-generated visuals of people. There is no rule that they need a disclosure only when they use AI-generated visuals.

Is it obvious that Tata Cliq ads’ ‘models’ have been AI generated? I could easily differentiate them. Many others may too. Many others may not care enough or be observant enough to notice. If people do realize that they have been AI generated (either by themselves or by seeing someone else mention it), they may also wonder if the clothing on those AI-generated models is AI generated too, and if such AI-generated clothing is actually available on Tata Cliq. But that’s an incidental problem not directly relevant to this post.

I have observed quite a few other brands use AI to generate models too. For instance:

In online advertising too (this is from Facebook):

AI’s use has also been sporadic in social media posts by brands. BMW’s use of AI last December (2023) is an infamous example!

TV advertising

Toys ‘R’ Us recently released a fully AI-generated ad teaser, and it was universally panned!

How did you react to it?

I can understand why it was panned given the largely poor quality of the production/graphics, but these are early days for AI-generated video. I expect it to get so much better in the next few years that we may not be able to tell the difference at all. At that point, it wouldn’t matter if the ad was made with AI or with a real video shoot – the only thing that would matter is whether the ad was persuasive or not.

On a related note, remember the recent Google ad that ran during the Olympics? It was not made with AI, but it pitched AI – Google Gemini. The ad features a father seeking AI’s help to write a letter on behalf of his daughter to USA’s track star Sydney McLaughlin.

This one:

The father is not even shown asking what his daughter wants to tell Sydney. He assumes that he is not equipped to articulate the communication and hence seeks Gemini’s help. That seems to remove the personal connect in the email. We have seen a lot of examples of kids writing to famous people, and the replies they get become famous. This flips that script on its head by introducing AI into the mix.

But just consider this scenario: say, the father did get a draft from AI. What if he asks his daughter to hand-write that draft content on a piece of paper and send it to Sydney? Wouldn’t the form (hand-written letter) mask the AI infusion into the content?

Music video

The song ‘The Hardest Part’, by Washed Out, claims to be the first official music video made with OpenAI’s Sora! Take a look!

How did you react to it?

There’s an interesting ‘movement effect’ that keeps things interesting beyond the dodgy faces and skin textures that have become a hallmark of AI-generated humans in videos. A sign of things to come?

Magazine covers

I happened to notice one Tamil magazine depending a lot on AI-generated visuals (of people) for their cover! Take a look:

I’m sure you can easily differentiate between real humans and AI-generated ones. Do the AI-generated humans on the cover change something for you, in terms of your preference for the magazine? They don’t for me, since the content is more important than the cover, real or AI generated.

What are the broader implications here from a brand marketing point of view?

For one, there was a reason why large brands lavished money on production budgets and expensive shoots for print and TV advertising. There was a reason why expensive magazines splashed expensive photo shoots on cover photographs. The basic idea is that the bigger the production budget, the bigger and more premium the product probably is.

But production budget is only one part of the story. The other part is media/distribution. If every single brand can, eventually, produce impressive, big-production-like output using AI, they would spend far less on production and splurge the big money only on media buying and distribution. So we could be impressed not only by Coca-Cola’s mega-budget TV ad campaign but also by a tiny beverage brand’s equally big-looking ad film. The only thing both would need is an impressive script that persuades the viewers.

If every brand can produce big-looking ads, print or TV, money/budget won’t be a signal for ‘premium’ anymore. And smaller brands producing big-looking ads may need to back up their premium positioning in other areas too like customer service, post-sale touchpoints, etc.

With money out of the equation, what would truly matter for marketing is the ideas that AI can execute. Can AI generate persuasive ideas too? That would democratize things to an even greater degree.

But there is one hold-out – celebrities! AI can fake celebrities, but that can be legally challenged. So, expensive celebrities and brand ambassadors, and their real time and effort can become even more expensive, and exclusive.


Now, let me move to one-to-one communication use cases.

Customer service chatbots

The simplest use case, and one most people would be very familiar with already – customer service chat. Consider a fairly famous chatbot – Zomato!

Do people know that they are chatting with an AI bot? Oh yes, Zomato calls it out clearly with the first message to make it clear that a virtual assistant is chatting (with a human name ‘Zia’, to make it seem more relatable).

Of course, there are issues, and a lot of people do complain that the responses are not relevant, that it doesn’t address the issue raised, or that it refuses to transfer the chat to a human (among others). Take a look (a live search on Twitter)!

On the other hand, I have found the chatbot employed by Airtel on their app to be absolutely atrocious! In fact, they even offer a response option, ‘The bot did not understand my query’, but if I choose that, the chat ends with ‘We have registered your feedback’ and ‘Was this chat helpful?’ 🙂 So utterly pointless!

But broadly, as long as the bot offers coherent responses to the queries raised, most people do not seem to mind it at all. If things go wrong, then people are upset with the kinds of resolutions offered.

So, overall, not very different from a human chatting with customers. It doesn’t matter that a bot is chatting – the result is the same: if issues are solved, no complaints; if unsolved, we customers blame the other side, whether it is a bot or a human.

College/High school essays

Next, how about AI-written essays submitted by students in a college or school?

I recently came across a note that said “Using AI for your essay will result in a zero grade” in a college essay instruction!

The question is, how would the professor know that the essay she is reading has gone through an AI tool? Simple! There are many, many online AI checkers that colleges already use to run writeups submitted by students for a preliminary check. This is not very different from the plagiarism detectors that colleges are already using.

This AI use case is a bit different from the previous two instances. How?

In this case, more than the end result, the writing process itself is the benefit. It may be the end result that gets students the grade, but what they learn is to sift through multiple sources, identify what is relevant to their topic, form an opinion, and articulate their perspective in a coherent and readable manner. This is the actual learning part of education in colleges.

From the professor’s point of view, the college (or high school) essay is a personal note – a one-to-one communication. So, if she finds out that the essay she is reading for assessment is written (in part or whole) through an AI tool, how would she react?

More than the professor’s ‘feeling’, if she looks at it practically (given that she’d have hundreds of papers to assess), she would be disappointed that she is not reading the student’s own thoughts but the result of a prompt fed by the student into an AI tool.

The same sentiment may not spring up when we are witnessing an AI bot in a chat on Zomato (for instance) or when we notice the AI generated models on a Tata Cliq ad. The reason? We are in a transactional relationship with Zomato, and as long as that relationship’s broad contours are met, we don’t care whether it is a human or an AI that is chatting with us. The Tata Cliq ad is far easier – we couldn’t care less any which way… it’s just an ad, after all.

But chatbots are supposedly getting better at mimicking human-like conversations.

And AI visual generators are supposedly getting so alarmingly good that we may not be able to tell the difference between AI-generated humans and actual photographs very soon. Consider these:

Would such AI evolutions mar the college essay segment too? That is, would there be a point where professors/colleges may not be able to find out if an essay was written with the help of AI or not?

After all, colleges now allow the use of calculators in exams. So, if some complex calculations are a result of a calculator, the assessment takes that into account as a given. Similarly, would colleges account for AI’s usage in writing?

Yes, they have! Consider this:

This removes the surprise element of a student using AI and normalizes the usage, much like an open-book test or the use of a calculator. But this is not a recommended route for every kind of test paper.

Business email

How about a business email, written by a salesperson using AI, to a potential client? After all, Gmail and Microsoft Outlook have been going to town talking about AI-assistance across their products, including email!

Should the receiver be made aware that the email was drafted with AI help? Not really. How does it matter to her? All that matters to her should be the content of the email.

But if the receiver comes to know that the email was drafted with AI’s help (perhaps someone else in the organization got a mail with similar wording, or some of the sentence structure seemed familiar from something else she had read), would it change how she perceives the content of the email?

I’m reasonably sure that there’d be some change in her sentiment towards the sender, but whether it materially matters (to the sales process) or not is difficult to say. If the email had a footer that added a disclosure that AI’s help was used to draft the email, that change in sentiment may not occur at all.

Social media comments and replies

Finally, my recent pet peeve with the use of AI – LinkedIn replies. There are a few people I notice on LinkedIn who consistently leave replies and comments that are so very obviously generated via some AI tool. LinkedIn itself has AI tools these days, but a simple Google search would point you to many online tools where you can run the original post (to which you want to reply) and get assorted content that you can post as your reply.

How do I identify an AI-generated reply? Usually, such replies regurgitate my own post in different words and sentences while adding nothing new at all.

But the other day, I noticed one particular comment, assumed it to be a real comment by the user who posted it and responded to it in good faith. A few hours later, another user left a comment with suspiciously similar wording and intent! It was then that I realized that both comments were AI-generated.

Now, the question is: am I supposed to engage with this comment in good faith by imagining that this is a genuine point of view by the user who posted it?

If I had not seen the second reply, I would not have realized that the first was AI-generated. But what is the point of engaging with perspectives on an online forum when the perspective is not even yours in the first place?

I agree that not everyone is confident of articulating their perspectives, and much like other talents – singing, playing a sport, etc. – the power of articulation is a talent too, practiced and honed. But who is asking for a polished piece of writing in a mere LinkedIn comment/reply? Just a few words that showcase the point of view are more than enough. For instance, I overlook spelling and grammatical errors in LinkedIn replies and focus only on the content. I’m fully aware that people may be typing on their phones, typing while traveling and so on. As long as I can understand the point of view, typos and grammar do not matter at all.

But when a perfectly formatted comment comes in, generated with the help of AI, what are we supposed to do with it? Isn’t it a bit like bringing a printed picture to an art class where you are supposed to draw by hand? Or bringing a recorded voice (of someone else) to a music class where you are supposed to sing?

Like the implications in marketing, what would be the broader implications of using AI in one-to-one communication?

Broadly, everyone would be equally adept at writing very well. With improvements in AI-based video, everyone can showcase their impressive skills in singing and dancing too, perhaps, at least on video.

But there would be a difference when the reel meets real. Imagine a salesperson who had earlier sent an impressive-sounding business email to a prospect. Now she has to meet the prospect and sound as impressive as she did in the email. Or imagine someone who consistently posts highly polished comments on LinkedIn. You meet that person in real life during an event. That person had better live up to the online persona; if not, things will come crashing down. This is not very different from the famous case of how playback singing is handled in Indian movies – the lead stars we see merely lip-sync, while actual talented singers do the singing in the studios.

Given this background, personal brands that can stand offline/real world scrutiny would be even more powerful and impactful.


We are merely scratching the surface when it comes to AI-generated content, across every format (the ones mentioned above, plus a lot more). Imagine how widespread it can get.

From one AI-generated reply to 95% of all your replies on LinkedIn, Twitter, or Instagram, perhaps?

We could have personal bots that chat on our behalf with any incoming chat, across platforms like WhatsApp. So, in effect, two bots would be chatting with each other on our behalf, while we continue watching a movie!

Email auto-responses via AI? Meaning: two AI bots emailing each other?

All this makes me think that the price of human attention would rise exponentially. To make a real human take note and offer a point of view in her own words would be so valuable and expensive. To make her care in real time, even more so!

But what if it becomes impossible to tell the difference between a human-led conversation/human image/video and an AI-led conversation (talking on behalf of a human)/AI-generated image/video? In that case, what would eventually matter is whether we can somehow know the difference, either through disclosures or by finding it out ourselves. And this would matter only in one-to-one communication. In broadcast and one-to-many communication, AI would cease to matter from an audience-reaction point of view very soon.

PS 1: The cover picture above is generated via AI, with my prompt being ‘a professional looking Indian businessman hiding behind a AI robot’s head’ on Microsoft’s AI image generator!

PS 2: An interesting realization: Almost all AI tools depict ‘AI’ as a humanoid metallic robot. This is the opposite of AI’s promise – that is, with AI, a human can become a more polished version of themselves.
