Why give artificial intelligence (AI) a face?

UneeQ's Piers Smith shares some insights on what adding a face can do to improve artificial intelligence.

Published September 2, 2019

When working on the ground-breaking Nadia project for the NDIS, UneeQ’s Solutions Architect, Piers Smith, sketched the prototype on the back of a napkin. From this first digital human project, embodied conversational AI has begun to grow into a commercial solution. Piers shares why we should all be excited about giving a face to AI.

When we think of artificially intelligent machines, we often think of those we’re familiar with from the movies: supercomputers such as HAL from 2001: A Space Odyssey, robot Number Five in Short Circuit and, more recently, TARS from Interstellar.

But while Hollywood has been including AI characters in its stories for decades, AI has often played the role of the “bad guy” or been the catalyst for a dramatic turn of events, as in what has become a bit of a cliché in science fiction movies: the robot goes rogue.  

There is one movie where the rogue robot plot twist particularly shocked audiences – the 1979 blockbuster, Alien. More than forty years on, it’s probably safe to disclose a major twist in the film (but spoiler alert nonetheless!). It turns out Ash, the science officer on the fictitious spaceship Nostromo, is an android – and an android with a hidden, awful agenda.

The very definition of a surprise plot twist is one we never saw coming. But the reveal of Ash as an android was especially impactful because not only did we think he was a human being, we also believed he was a doctor; we instinctively trust doctors to be competent, benevolent and dependable people who will take care of us.

Ridley Scott and the science fiction writers of Alien knew the devastating power of fooling audiences into trusting and believing that an android was a human being.

But in the real world, we are starting to see what can happen when human beings willingly extend that trust to an artificially intelligent machine, not because they believe it’s real, but because they know it isn’t. And this is a much more positive kind of plot twist.

Let me give you an example.

Ellie – the groundbreaker in mental healthcare

Ellie was created by the University of Southern California for the US Defence Forces. She was designed to study differences in rates of disclosure of the symptoms of mental illness (PTSD in particular) among soldiers returning from deployments. This was a response to the high number of people who passed through the existing face-to-face screening process by selectively representing their condition to avoid stigma, but then went on to experience episodes of PTSD.

The researchers wanted to see if there would be any difference in rates of disclosure between returning soldiers who spoke with a human counsellor and those who spoke with an AI counsellor.

In the first phase of the study, veterans were given the option of interacting with a faceless system instead of a person. Results indicated that they were more willing to open up about how they were feeling, and were less inclined to hide things out of fear of being judged. That result wasn’t particularly surprising, as it’s been repeatedly established through research that an absence of social cues in an interaction promotes a greater sense of privacy, and that effect is even more pronounced when the information is sensitive.

In the second phase of the study, the chatbot was given a name and a computer-generated face and body; she became Ellie, the digital human.

Ellie’s researchers wanted to see if putting a face on a system would make a difference to how willingly people would open up about their mental health, and if they would disclose more to a system with a face than one without.

Now, since it was clear to the veterans that they were still dealing with a computer system, we might assume that giving it a face would make no difference. And if you believe digital humans are a gimmick then you might believe that too.

What they found, in fact, was that veterans were more than twice as likely to report PTSD symptoms to Ellie as to the anonymous computer system. And when you read the anecdotal quotes from the participants, it’s clear that they felt a rapport with the digital human, even though they cognitively knew it wasn’t real.

Giving a system a face changes people’s behaviour

What that suggests is that people were forming an emotional connection, and this is where the case study gets really interesting (and its findings are by no means unique, by the way).

When you lay this alongside what emerges when you ask groups of people whether they trust AI right now – there is a lot of evidence suggesting they don’t, and that they’re fearful of it – a clear difference appears between what people say they think and how they actually behave.

That is a powerful idea. It’s also where the branch of AI ethics I’m interested in sits, distinct from the systems view that concerns itself with how systems are trained, how bias is eliminated, and how to offer transparency. We must consider carefully how deploying AI in anthropomorphised forms (that is, digital humans) can change how people behave, over and above what they might say.

The positive impact of AI faces – the Nadia Project for NDIS

The work on Ellie by the University of Southern California takes me back to the beginning of my journey with UneeQ and the Nadia project.

In 2016, the Australian government introduced the National Disability Insurance Scheme (NDIS) for people living with disability.

The online portal to access the scheme was difficult or impossible for some members of the disability community to use, as it had not been designed with their needs in mind. Nadia was an AI digital human created to help people living with disability to navigate those systems and, most importantly, the design process was led by representatives from the disability community.

The idea itself was neither overly complicated nor difficult to explain. In fact, the architecture for Nadia’s first prototype was one I sketched on a napkin during a working lunch in late 2015, with the prototype up and running four months later.

Could Nadia have been represented by a faceless voice or cartoonish avatar? Sure, but all the rapport and emotional connection people build with a lifelike face would have been lost in the process. And it was exactly that type of connection that people living with disability (and their families and carers) wanted – one that had been missing from so many of their previous interactions with government bureaucracy.

The benefits of digital humans in humanitarian applications are going to be profound. The idea that we can help people break down the barriers to disclosure, provide them with easy access to the services they need and – crucially – help them stay connected with their treatment for as long as they need it is going to revolutionise the way we provide help wherever and whenever it’s required.

Imagine a world where, anytime you need to access any kind of care or treatment, information or advice, you can do so accompanied by a digital human who helps you to navigate the systems and coaches you through the processes.  

Digital human mental health coaches, like the kind UneeQ is working on with Sir John Kirwan’s ‘Mentemia’, help people by talking through their issues with complete anonymity, connecting them to treatment and then keeping them connected by being constantly available.

Cardiac health coaches extend post-treatment healthcare into people’s homes to keep them connected and engaged with their recovery, and help overcome barriers like health literacy. Imagine having your cardiac coach available 24/7 to answer any questions you have about your treatment, and who speaks to you in simple, reassuring language that doesn’t leave you feeling confused or worried.

Financial literacy is a problem for two-thirds of the US population. In the banking and finance sector, a digital human can provide customers with ready access to personalised support through an easy-to-understand and non-judgemental conversational format. This can make it much less intimidating for customers to interact with credit card companies or mortgage brokers, and more inclined to do business with the sector.

We should be open about the ethical questions

As I have already noted, this idea that embodiment, and the parts of AI that drive embodiment, can change people’s behavioural responses brings with it a number of ethical considerations.

An obvious question relates to trust in AI – specifically, the consequences of over-trust, where a system’s capabilities don’t match people’s perceptions of it, yet people don’t modify their behaviour accordingly.

In a 2016 study by Georgia Tech, 42 participants were introduced to a robot that presented itself as autonomous and driven by AI. It had cute waving arms and seemed friendly enough, but when it asked the participants to follow it into a meeting room, it got lost in the building and took a few wrong turns.

Then, researchers simulated a fire emergency. The robot instructed the participants to evacuate and to follow it out of the building. Even though the robot had misled them through the corridors earlier, 37 people followed it. Two people stood still and didn’t follow, while only three found their own way back to the front door and asked the researchers what the heck was going on.

This otherwise humorous story points to an obvious ethical concern with embodying systems: the gap between cognition and behaviour can cause people to react in ways you don’t expect until you observe them.

That is a big issue for our industry at the moment, because our ability to produce digital humans that trigger these effects in people (and you are probably aware of the level of realism UneeQ digital humans operate at now) is running ahead of the capability of the systems that enable them.

There are other questions, like whether the sense of privacy created in an interaction with a digital human matches the reality, or how a digital human should respond if someone experiencing mental illness discloses an intention to harm themselves or others. There are also questions about the digital human itself, such as its gender, and whether digital humans risk reflecting and reinforcing stereotypical gender roles.

But does that mean we should do nothing?

I’ve seen writing and opinions that position these ethical questions as a reason not to act.

That’s a view I disagree with, because as long as we’re talking about these questions with the people whom they affect, we can and should aspire to answer them.

One of the reasons I return to Nadia as a case study is that it’s a signpost towards how I think systems like it should come about:

  • They should be built alongside the people who will use, be affected by or rely on them;
  • There should be diversity in the teams building them, not just in representation but in decision-making;
  • We should be open and intentional about the ethical questions; and
  • Their capabilities and their nature should be clear.

Then, I believe the evidence about the benefits of digital humans actually obliges us to act.

What’s next then?

So, what’s next for digital humans?

For UneeQ, it means a focus on the business and brand benefits of digital humans in a range of settings: online and, increasingly, in physical locations, as we did for Vodafone.

Vodafone’s digital human is an example of making business processes more efficient by taking some of the load off human staff members, while still doing so in a physical storefront – a setting many customers still prefer, whether for the safety net of a nearby staff member or because they don’t have access to online services.

She’s also an example, alongside other UneeQ digital humans like Josie, of how a digital human can embody a brand. Josie is already positioned very visibly by ASB as a “face” for their brand.

Entering the AI era

From Hollywood to hospitals, AI is having an impact.

While developers work on the back-end technologies that will allow digital humans to recognise people, objects and their environment; to recognise speech and understand natural language; and to learn, the design of the front-end interface – the digitally rendered face – is proving critical in determining how people interact with all that technology and, specifically, whether or not they feel they can trust it.

When it comes to AI and digital humans, people have shown they are willing to suspend their disbelief in order to have a more human connection.

Putting a face on conversational AI invites people to put humanity into their interactions with machines and systems, and even to expect to find it. After decades of dealing with companies through online application forms and hours spent on hold to a customer service helpline, people just want to have a conversation, and they have shown that they are ready to have that conversation with a digital human who can offer them the certainty of an intelligent system together with the interpersonal connection with a friendly face.

This is no longer science fiction; it’s happening now, and we have the opportunity to write so many good stories, with so many wonderful plot twists.