Fans of the original Matrix trilogy received an early Christmas present in 2021, as The Matrix Resurrections – the fourth in the series – enjoyed its world premiere.

More than 18 years after the last installment, the new movie sees a few familiar faces return, including Keanu Reeves as Neo and Carrie-Anne Moss as Trinity. Even Morpheus is back, albeit with Yahya Abdul-Mateen II in the role, replacing the legendary Laurence Fishburne.

One character who’s conspicuously absent from the long-awaited sequel, however, is Agent Smith, the main antagonist of the first three films. Well, absent in the form we’re all used to seeing him in, at least.

Instead, the rogue AI program is played by the excellent Jonathan Groff.

The reason? Scheduling conflicts. Hugo Weaving, who originated the role, was tied up on other projects – a crushing disappointment for Matrix fans (including us!) who still get a pang of dread when they hear the words “Mr Anderson” in that deadpan voice.

Which got us thinking: why was Agent Smith such a brilliant bad guy without, arguably, being inherently ‘bad’ – or technically even a ‘guy’?

The answer lies in him appearing human, despite being an AI program. This topic scratches the surface of how we emotionally connect with human-like things – from cheese graters and manhole covers to interfaces like chatbots and digital humans. It also shows how the same psychology that makes a cinematic impact (albeit through negative emotions) can be used to positively impact customer experiences, too.

So, let us geek out over the new Matrix movie and explain.

Agent provocateur: Understanding AI in the movies

How AI is portrayed in movies is something we’ve talked about before in detail.

Hollywood has a habit of emphasizing the dangers of artificial intelligence, rather than highlighting its value. That’s hardly a surprise; dramatic effect is going to trump realism on the silver screen every time. Robots gone rogue admittedly make for compelling viewing. 

And in many ways, the Matrix films depict AI in a similar fashion. The movies show a post-singularity world, where humans are at war with intelligent machines that are hell-bent on subjugation. 

Like the other agents in the Matrix, Agent Smith is a sentient security program that’s designed to eradicate anyone or anything that could reveal the truth about ‘the Matrix’. 

But why is Agent Smith depicted as a human? With human looks, voice, emotions, personality and flaws? After all, the character despises mankind.

“Human beings are a disease,” he tells Morpheus in one scene. “You are a plague, and we are the cure.”

Choosing to inhabit a human form would seem to be an odd choice for Agent Smith, then.

One obvious reason to portray AI as human is practicality. It’s easier and cheaper to use an actor than to render a main cast member entirely in CGI or special effects – particularly in 1999. Even George Lucas, with deep pockets and an even deeper love for special effects, relied on human actors for most roles in Star Wars: Episode I, released the same year.

But it goes deeper than that with Agent Smith. There’s also some important real-world psychology at play here. And it comes down to how we (as humans) emotionally connect with people and personalities – both on screen and in real life. 

Heroes and villains

AI protagonists (aka the goodies) are almost always portrayed as humans. 

Whether it’s Rachael in Blade Runner, Bishop in Aliens or Data in Star Trek, benevolent AI characters often have unique personalities. They may be robots, but they usually display empathy, warmth and affection. 

Even those without a human body, such as WALL-E or Samantha in Her, typically show a sense of humor, or possess charisma and other distinctly human traits. Why? Because as humans, we instinctively form emotional connections with things that look, act and sound like us – it’s a matter of trust.

We’re wary of intelligent machines that don’t have a recognizable face or relatable personality.

Research backs this up. For example, studies suggest self-driving cars could be significantly safer than human drivers, and yet nearly half of Americans say they would never get in an autonomous taxi or ride-sharing vehicle.

It’s not just driverless cars. One study analyzed over 750,000 judicial cases in New York City. Around a third of these cases were used to train an algorithm to make better decisions on whether a suspect should be released, detained or offered bail. 

The algorithm was able to reduce the number of defendants kept in jail by 40% without any corresponding increase in crime. The amount of crime committed by those on release was also lowered by 25%, simply by choosing more suitable defendants to detain. (1)

But neither judges nor criminals are fans of AI in the courtroom. 

“Even knowing that the human judge might make more errors, offenders still prefer a human to an algorithm. They want that human touch,” says Mandeep Dhami, a Professor of Decision Psychology at Middlesex University.

However, this kind of mistrust is not what you need in films like The Matrix if you want to elicit an emotional response.

The man in the machine

‘Humanized’ AI is often used on the silver screen to encourage the audience to emotionally invest in characters. So how does Agent Smith (an obvious villain) fit into all this?

Great storytelling means emotionally investing in not only the movie’s protagonists but also its antagonists. 

By making Agent Smith more ‘human’, the screenwriters and director give the actor the freedom to deliver a more complex, nuanced performance. A performance with personality.

Let’s not forget that Agent Smith in the original trilogy wasn’t a cold, unfeeling AI; he showed VERY human traits, including pride in his work, disdain for humanity and a sense of injustice.

Producer and film critic Christopher Borrelli describes Agent Smith as having “a refreshingly nihilistic wit”. We would even argue that he has more personality than, say, Morpheus or Neo, both of whom often show a detached (dare we say machine-like?) stoicism throughout the series. 

Indeed, Agent Smith even shows visible anger and frustration at being in the Matrix: 

“I hate this place. This zoo, this prison. This reality, whatever you want to call it. I can’t stand it any longer,” he says in the first movie. “I must get out of here. I must get free.” 

As the audience, we can perhaps relate to Agent Smith’s outburst. Despite being the villain, Smith has much in common with our protagonists: he is just another prisoner looking to escape the Matrix.

Could a typical AI program have delivered the same experience if it wasn’t portrayed as a human? We seriously doubt it. There’s a reason the machines in the Matrix aren’t in the same antagonist Hall of Fame as Agent Smith, after all.

How to use personality for better brand experiences

So, if you’re looking for an excuse to watch the original Matrix trilogy and call it business research, we think there’s a pretty compelling case. Brands can learn a lot from how personality is used in the movies – particularly to offer emotionally investable experiences.

If companies want their audience to have memorable, meaningful interactions with automated technologies, then providing the human touch is just as essential. 

For instance, a recent study found that patients deemed healthcare chatbots most helpful when their abilities, integrity and benevolence were on par with a human assistant.

In a marketing context, a distinct personality helped in our Digital Einstein campaign, moving the marketing needle throughout the funnel.

And speaking of marketing, Agent Smith may be a bad guy, but that didn’t stop GE from featuring him in an advertising campaign in 2013, with his unique mannerisms and recognizably dry wit put to good use to engage viewers.

Which just goes to show, whether good or bad, any personality is better than no personality when you’re trying to emotionally connect with an audience. It’s something for all marketers to consider.

Brands today compete on experience. Three-quarters of people around the world say brands have lost touch with the human experience. They may not hire Agent Smith, but they will have to consider how they can put more human touch, interaction and personality into their channels.

Because if your customers are given the choice of a red pill in the form of a brand they can engage with, or a blue pill offering robotic, lifeless service, it’s no choice at all. They’ll choose the red pill every time, and they’ll see how deep the rabbit hole of great customer and brand experiences goes.

  1. Tim Harford, ‘How to Make the World Add Up’, pp. 180–181