Google’s latest chatbot patent eventually leads to Android Kunjappan!

On December 24, 2019, Google was granted a patent to make its chatbots show more empathy in their interactions with humans. Patent number 10515635 is titled “Forming chatbot output based on user state”.

The idea is for chatbots to “achieve greater social grace by tracking users’ states and providing the corresponding dialog,” according to the granted patent’s description. In simple terms, it works like this: the chatbot remembers how the user is feeling based on their responses, and uses that memory to talk to them with more empathy.
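To make the mechanism concrete, here is a minimal sketch of the idea in Python. This is my illustration, not Google’s implementation: the patent describes the behaviour, not the code. The classify_sentiment helper here is a hypothetical stand-in for whatever model a real system would use to infer the user’s state.

from dataclasses import dataclass, field

# Toy lexicon; a real system would infer state with a trained sentiment model.
NEGATIVE_WORDS = {"sad", "sick", "tired", "awful", "stressed"}

def classify_sentiment(text: str) -> str:
    """Hypothetical placeholder for a real user-state classifier."""
    return "negative" if NEGATIVE_WORDS & set(text.lower().split()) else "neutral"

@dataclass
class EmpatheticChatbot:
    user_state: str = "neutral"                    # remembered across turns
    history: list = field(default_factory=list)    # (message, inferred state) log

    def respond(self, user_message: str) -> str:
        # Track the user's state, then shape the output accordingly.
        self.user_state = classify_sentiment(user_message)
        self.history.append((user_message, self.user_state))
        base_reply = self.answer(user_message)
        if self.user_state == "negative":
            return "I'm sorry to hear that. Hope you feel better. " + base_reply
        return base_reply

    def answer(self, user_message: str) -> str:
        # Stand-in for the bot's normal task-oriented reply logic.
        return "Here's what I found for you."

bot = EmpatheticChatbot()
print(bot.respond("I'm feeling sick and tired today"))
# -> "I'm sorry to hear that. Hope you feel better. Here's what I found for you."

The point of the sketch is simply that the empathetic framing is conditioned on a remembered state rather than on the literal content of the current query alone.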

This seems like a no-brainer for us humans, but for a bot, it is a big step forward in appearing more human-like by demonstrating empathy.

Does the bot really feel anything when it says, ‘Hope you feel better today’, the way we humans do? Or is it merely executing a task it is programmed for? This is not an easy question to answer, since humans can simulate such things too: in the service industries, people are trained along similar lines, using flowcharts, to ask questions laden with empathy so that the conversation becomes more fruitful. But when a human says, ‘I hope…’, you ascribe that intent to the person’s feelings, whereas when a chatbot says it, you don’t know whom to ascribe that ‘I’ to!

This is a tiny step ahead in the journey towards AI.

We have 3 Alexa interfaces in our home: one on the TV (through a FireStick) and 2 stand-alone Alexa devices, one each for the kids. While we adults and our 16-year-old son treat the device like a machine, our 9-year-old daughter’s interactions with it are more interesting. She says ‘Thank you’, ‘Good night’, and other such things to Alexa. She also asks Alexa how she’s feeling, only to get an ‘I don’t understand that question’ response. Yet she continues to treat Alexa like a human in the house, despite not getting adequately human-like responses from the bot.

The next generation may grow up with chatbots much earlier in their lives and hence may come to treat them as human-like, unlike our current and older generations, who view them with the cynicism they deserve: that they are simply machines programmed to enact a series of conversations.

Not just the next generation, though.

The recent Malayalam film Android Kunjappan Ver.5.25 is a remarkable piece of Black Mirror-ish cinema (available on Prime Video).

It features an old man (played extraordinarily by Suraj Venjaramoodu) in a small town in Kerala. His son (played by the always and hugely watchable Soubin Shahir) is 34 and is tied to the town to tend to his father, who insists that he stay with him and take care of him instead of taking up employment elsewhere. When the son, a mechanical engineer, gets a good job in Russia with a Japanese robot manufacturer, he arranges for a home attender/nurse to tend to his father and leaves the country.

Given the father’s whimsical ways, the arrangement doesn’t last, and the son, having seen and worked on a home-attender robot prototype at his company, brings one back to Kerala and assembles it at home. The father starts by distrusting and making fun of the robot but goes on to love it like his own son. In his old age, he gradually comes to appreciate everything the robot does: it is human-like, but it never questions or disagrees with him. That subservience warms the old man’s heart, the way an obedient child would. He covers the robot with a mundu, gets the robot’s astrological report drawn up, and even argues with the local priest when the priest refuses the robot entry into the temple because he’s not sure whether it is a Hindu!

The film’s humanoid robot is a phenomenal stretch of imagination: it speaks Malayalam fluently, converses smoothly on any and every topic, and even offers points of view on things like love, the future, and a casteless society, among others. The robot also seems fully waterproof (it helps the old man wash clothes the old-fashioned way) and is able to cook like any human chef in a kitchen without electronic gadgets like a mixer, fridge, or grinder! This is a movie, after all, and I loved the leap of imagination that aided this wonderful plot.

But inside this plot there is a remarkable dynamic: how humans may take to AI in the future, when AI is common enough for most everyday tasks around us, in our homes and elsewhere. It isn’t right now, but in the near future it is bound to get a lot more common. We are merely scratching the surface of AI, in the form of bots responding to us in chats and voice calls (for service-related queries). But they will learn and evolve.

That is when we humans are expected to evolve too: to start living with non-human intelligence around us, interacting with us. We already do that with the flora and fauna of this planet, which are also forms of non-human intelligence. But the big difference is that we humans firmly believe that the intelligence of flora and fauna is inferior to our own. The coming AI-based non-human intelligence will be very different because it can learn and may eventually become more intelligent than us. We’d start by depending on it for the most menial tasks, and then evolve to depend on it for more advanced ones. This is not very different from a visually impaired person depending on a guide dog for everyday movement, but imagine the guide dog learning every day and becoming more intelligent than the visually impaired person at some point!

Fascinating times ahead!
