Five curious things that can happen simply by giving your chatbot a face

A fan of human psychology? Then you might find these five findings about what happens when you give a chatbot a face and personality fascinating.

Published February 23, 2021

When you’re greeted by someone in a shop, you’re not just hearing them say hello; you’re seeing their facial expressions, hearing their tone of voice and creating a proper connection. Think of a conversation with your barber or hairdresser, or a shop assistant whose singular purpose is to help you find exactly what you want. Service with a smile!

Now imagine a faceless pop-up appearing in the bottom corner of a website inviting you to chat. Chat with whom, exactly? Chatbots excel at providing convenience and speed, but they can’t replicate what’s on offer through the human touch.

No wonder, then, that organizations using chatbots are prioritizing their development toward something more natural. As part of the research for our eBook ‘Building a digital workforce’, we found that 42% of brands see adding a more human experience to their chatbot strategies as their next imperative.

It’s a smart tactic. Research from various proofs of concept we’ve engaged with shows that digital humans garner NPS scores five times higher than standard chatbots. What’s more, 89% of users say they prefer interacting with a digital human channel over a chatbot one.

But other things happen, too – more subtle changes in consumer behavior simply by interacting with a face over a text box. Here are five things that can happen when you upgrade your chatbot to a digital human.


1. People may open up to your chatbot more

You’d think when it comes to opening up about subjects that are difficult to talk about, speaking with a real person would be the most effective medium. But it might not necessarily be so.

An article published by Wired reported on the psychological support surveys that members of the United States armed forces receive when they return from tours of duty. The surveys are designed to identify symptoms of post-traumatic stress disorder (PTSD). However, concerns were raised that veterans might actively try to hide their signs of PTSD in case it disqualified them from further service.

Anonymizing these surveys was somewhat effective, but the fear remained that veterans might not get the care they need because of a lack of disclosure.

A study by the University of Southern California Institute for Creative Technologies found a middle ground in a conversational AI called Ellie. Acting as a virtual therapist, Ellie opened appointments with conversational questions to ease soldiers into the discussion, helping them to connect and open up.

Conversations would flow from Ellie asking “how are you?” to more probing discussions like “how often do you get a good night’s sleep?”

Ultimately, reported Wired, conversations with Ellie resulted in veterans showing more signs of PTSD than they did even on the anonymized forms.

“Again and again,” the research’s author Dr Gale Lucas concluded, “I’m seeing the power of the virtual agents to tease out information that traditional methods just don’t.”

Creating a humanized chatbot is a tactic that could work in any environment where people fear judgment or embarrassment in front of others – for instance, in financial services, where low financial literacy is a barrier to seeking much-needed help.

A chatbot’s lack of judgment can help people open up. Giving it a human face, with human qualities, then creates a stronger sense that they’re being cared for, as with Ellie.

2. People may become more forgiving of your chatbot’s mistakes

How often do you get frustrated with automated technologies, closing tabs or shutting down programs as soon as they become an annoyance?

“Unexpected item in the bagging area.” We know! We just put it there for half a second!

Here’s the thing: mistakes will happen. It’s how forgivable they are that will determine how they impact customer experience.

A SINTEF study titled “Why People Use Chatbots” found productivity was the main reason people use these services. If a faulty chatbot disrupts a potential customer’s browsing of an online store, or fails to give them the information they need, they quickly move on.

However, the same study also found that the more human a chatbot appears to the user, the more forgiving a person is if and when it gets things wrong.

“To err is human” as the old saying goes. “To forgive, divine.” People accept that other people make mistakes, so bringing more human elements into a chatbot gives people a higher tolerance and makes them more likely to stick around if the chatbot doesn’t give them exactly what they want on their first go.

The researchers said it’s specifically when a chatbot feels overly automated and lacks any human qualities that people lose their connection and treat it like a computer program, rather than a digital assistant trying its best to help.

3. People may cooperate with your chatbot more readily

People sometimes bring preconceived notions when engaging with chatbots and digital humans, and these can influence what they expect chatbots to be able to do.

A study from Stanford University found that it’s better to let the system itself do the talking, in a way: it’s almost better to undersell these capabilities than to overpromise and leave users frustrated when they have to contribute more to the process.

Similarly, the study shows, humanizing a conversational AI brings a certain “warmth” that helps people feel more connected and more satisfied with the experience.

Interestingly, participants in the Stanford study were more willing to cooperate with chatbots that were described with “low-competence metaphors”. If you talk about your chatbot with descriptors like “professional” then people expect perfection and are far less likely to cooperate if there’s a hiccup.

4. People may swear less at your chatbot

Are you tired of reading chatbot transcripts where people are testing the boundaries of what they’ll respond to? It can be a little depressing seeing what people will say to a chatbot when they know there are no consequences.

A study co-authored by researchers at the University of Oslo found that the “implied anthropomorphism of chatbots” often triggers negative responses from users. In particular, conversations with a chatbot are more likely to feature “shorter messages, less complicated vocabulary and more profanity”.

It’s this “implied” anthropomorphism that’s the important part. The authors say that people are left to perceive a chatbot’s personality for themselves; they seemingly fail to do so, and that leads to more curse words being used.

Why does this matter? Perhaps it doesn’t to you personally; or perhaps it signifies a discrepancy in how you want people to interact with your brand and how they actually do via a chatbot channel.

A more human experience may help users feel connected during these conversations. They don’t have to try to perceive a personality when talking to a digital human. There’s no implied anthropomorphism – it’s all in front of them, speaking, reacting and showing human-like responses.

In fact, we’ve found across the board that interactions with a digital human tend to follow the opposite path. One of the most common questions users ask our digital humans is if they’re single. Make of that what you will!

5. You may begin to help customers with their emotional needs

Harvard Business Review found that emotionally connected customers are more than twice as valuable as highly satisfied customers.

“These emotionally connected customers,” their analysis showed, “buy more of your products and services, visit you more often, exhibit less price sensitivity, pay more attention to your communications, follow your advice and recommend you more.”

What an opportunity, then, that about 40% of user requests to customer service are emotional rather than pure requests for information.

Chatbots have become an important part of the customer experience, so it’s important to understand how users are feeling during these interactions. A chatbot that can’t emotionally engage with customers risks failure.

Unfortunately, the proportion of people who say chatbots are “friendly and approachable” is as low as 29%.

Giving your chatbot a face, a voice and a personality makes it easier for people to connect with it. It’s the reason why Samantha, the voice assistant in the movie Her, is voiced by a real person (Scarlett Johansson) and not a robot. While we’re on AI in the movies, it’s also why we bond with WALL-E the robot: he visually shows very human empathy, motivations and awareness. If WALL-E were a chatbot, you’d likely ask for your money back before the second act.

Making your chatbot more human in minutes

In truth, there’s more to humanizing than simply putting a human face onto your AI chatbot.

It’s more than putting a few jokes into your Dialogflow intents. Humanizing your chatbot involves building a personality that relates to your brand, and bringing that to light throughout the conversations your AI has.

It involves relying not just on text, but on human speech, tone of voice, facial expressions and body language – the biggest parts of human communication. With that, you can show empathy and warmth in difficult conversations, or excitement in good ones. You can emotionally engage with your customer, patient or any other user.
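As a purely illustrative sketch of that idea – none of these function names, fields or emotion tags come from a real Dialogflow or UneeQ API; they’re hypothetical – a text-only chatbot reply might be enriched with tone and expression cues that a digital human front end could render as voice and facial expression:

```python
# Hypothetical sketch: attaching tone and expression cues to a plain text
# reply, so a digital human layer can render it with matching voice and
# facial expression. All names here are invented for illustration.

def enrich_reply(text, sentiment):
    """Wrap a text-only reply with hypothetical tone/expression metadata."""
    cues = {
        "negative": {"tone": "warm", "expression": "concerned"},   # show empathy
        "positive": {"tone": "upbeat", "expression": "smile"},     # share excitement
        "neutral":  {"tone": "friendly", "expression": "neutral"},
    }
    # Fall back to a neutral persona if the sentiment isn't recognized
    cue = cues.get(sentiment, cues["neutral"])
    return {"text": text, "tone": cue["tone"], "expression": cue["expression"]}

reply = enrich_reply("I'm sorry to hear that. Let's sort it out together.", "negative")
print(reply["tone"], reply["expression"])  # warm concerned
```

The point of the sketch is simply that the same words can land very differently depending on the delivery cues that accompany them – which is exactly what a face and voice add over a text box.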

Fortunately, having a chatbot is the perfect first step to launching your first digital human. To find out more, take a look at our free digital human buyer's guide here.