The Wizard of Oz has everything: magic, humor, suspense and unforgettable, endlessly quotable characters. The original novel, written by L. Frank Baum, was published way back in 1900, nearly 40 years before the film most of us know and love.

When the book hit the shelves, the first use of the word ‘robot’ in English was more than two decades away. The term artificial intelligence (AI) wouldn’t be coined for another 56 years.

But The Wizard of Oz and the history of AI have a surprising amount in common, so we’ve written a brief timeline that pays homage to the book as it celebrates its 120th anniversary. Let’s follow AI’s yellow brick road from then to now.

Who invented AI?

If you guessed a wizard, you wouldn’t be far wrong. Arguably, it was multiple wizards! Rather than dabbling in magic, they were wizards of mathematics, computer science, philosophy and other fields. Remember, (spoiler alert) even the Wizard of Oz wasn’t a real wizard.

British polymath Alan Turing is often credited as one of the first people to argue that intelligent machines were a real-world possibility rather than just a science fiction fantasy. His 1950 paper, ‘Computing Machinery and Intelligence’, got the cogs turning, so to speak.

In 1956, a historic conference was held at Dartmouth College, organized by computer scientists John McCarthy and Marvin Minsky. This gathering of geniuses is where the term ‘artificial intelligence’ is thought to have been born.

The 1900s to 1960s: If only (A)I had a brain

On her journey to the Emerald City, Dorothy met three friends who decided to join her: the Scarecrow, the Tin Man and the Cowardly Lion. Each of them was lacking a special something and believed only the Wizard could help (a brain, a heart and courage, respectively, for anyone who needs a reminder).

What does this have to do with AI? Well, the trio’s wishlist also illustrates some of the key challenges that AI has faced over the decades. Let us elaborate, starting with the Scarecrow. The Dartmouth conference kickstarted decades of AI research and innovation, but the results often failed to live up to the hype in the early days.

Machines simply didn’t have the computational power to store enough information or process it at a speed that was needed to deliver on these promises. Like the Scarecrow, robots could crudely imitate human movement, but these pre-AI machines didn’t have brains. At least, not a big enough brain. Yet.

The 1960s to 1990s: AI without a heart

Between the ’60s and ’90s, the field began to flourish, spurred by government investment and several exciting technological developments in machine learning algorithms. ELIZA, a computer program that could hold a typed conversation in English, was a major breakthrough in the mid-1960s.
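ELIZA’s trick is easy to see in a few lines of code: it matched keywords in what the user typed and reflected their own words back as a question. Here’s a minimal, hypothetical sketch of that pattern-matching idea (the rules below are invented for illustration, not taken from Joseph Weizenbaum’s actual script):

```python
import re

# Hypothetical ELIZA-style rules: each maps a regex to a response
# template. Captured text is reflected back at the user, creating
# an illusion of understanding without any real comprehension.
RULES = [
    (re.compile(r"I need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"I am (.*)", re.IGNORECASE), "How long have you been {0}?"),
]

def respond(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # Strip trailing punctuation before echoing the phrase back
            return template.format(match.group(1).rstrip(".!?"))
    return "Please tell me more."  # default deflection when nothing matches

print(respond("I need a vacation"))  # Why do you need a vacation?
```

A handful of rules like these can feel surprisingly lifelike in short exchanges, which is exactly why ELIZA made such an impression at the time.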

In 1972, one of the first full-scale humanoid robots was completed. Waseda University’s WABOT-1 could move, talk and see. A dozen years later, WABOT-2 was unveiled with a few nifty upgrades, including better conversational skills and the ability to read musical scores and play the electric organ.

By the mid-1990s, there was ALICE: a chatbot that used natural language processing for more realistic conversations. And a couple of years later, Deep Blue became the first computer system to win a chess match against a reigning world champion, Garry Kasparov.

Machines finally had the brains to interact and compete with humans. However, they were still easily distinguishable from real people. The technology was failing the Turing test. This was AI without a heart, just like the Tin Man.

The world realized that AI couldn’t just be transactional and robotic. It needed to inspire feelings. Whether the AI is tasked with handling a distressed customer worried about their mortgage repayments, or a patient with serious questions about their health, people need to feel cared for and understood – the cold stoicism of AI isn’t the way to build great brand connections.

So, how could machines seem more human? The turn of the millennium brought a new focus on empathy and emotional intelligence in AI, both scientifically and culturally. Eventually, this led to embodied AI in the form of digital humans, but that’s jumping the gun a little…

The 2000s: Facing AI fears

In 2000, researchers in the US unveiled Kismet, a robot that could recognize and imitate facial expressions. The same year, Honda’s ASIMO debuted, able to recognize voices, faces and objects, as well as carry out many other tasks.

This marked an important milestone for human-to-machine interactions: machines were starting to look and engage in a more human way. But ethical dilemmas surrounding AI also began to resurface. Science fiction has always stoked fears of AI, but its portrayals in novels and films had seemed too outlandish, too far in the future. Now, technology was beginning to catch up.

The early 2000s saw concern over AI rise in the press and among the public, possibly spurred, in part, by dystopian depictions of robots in blockbuster films like A.I. Artificial Intelligence and I, Robot. The Cowardly Lion is perhaps an unfair comparison, but public fear and uncertainty about AI seemed to peak during this time.

According to one study, New York Times articles with negative mentions of ‘cyborg’, ‘singularity’ and ‘loss of control’ in relation to AI hit new heights around the mid-noughties. Fortunately, this appears to have been a short-lived blip.

“We find that discussion of AI has increased sharply since 2009 and has been consistently more optimistic than pessimistic,” the study’s authors note.

What is AI used for today?

Over the last 10 years, AI has become an everyday part of our lives. Advances in machine learning techniques, such as deep learning and natural language processing, have helped create sophisticated chatbots and virtual assistants like Siri, Alexa and Google Assistant.

We rely on AI to answer our questions, organize our schedules, personalize our shopping experiences and operate our workplaces. AI can even save lives. Earlier this year, researchers announced they had developed automated technologies that could spot breast cancer in mammograms and potentially predict heart attacks and strokes. This isn’t the first time AI has been used in medicine and healthcare, and it certainly won’t be the last.

As another example, you can check out a video of our Cardiac Coach solution below.


We now live in a time where AI has the necessary processing power (brains). Public perception of these technologies is at an all-time high (courage). And we’re well on the way towards giving AI a heart and the empathy to form deeper connections with users. With no wicked witch seemingly on the horizon, what’s next for AI?   

Looking behind the curtain: digital humans in 2020

As our brief history shows, AI has evolved a lot over the last 120 years – we’re certainly not in Kansas anymore. 

But we believe digital humans are the next step in that evolution. They can speak, see, smile and laugh in a way that brings a distinctly human touch to our interactions with machines.

Just like we decided to give machines IQ, it’s becoming undeniable that they need EQ, too. And the way that’s happening today is by embodying AI and creating digital humans. 

Brains. Heart. And an ability to build trust with consumers and make them comfortable (if not courageous) that AI is there to help them. 

It’s this convergence of information, empathy and positive experience that consumers expect from brands today. Not just AI that emulates human movement or thought patterns, but AI that emulates the human connections we form when we interact with one another.

Ultimately, The Wizard of Oz is a story about enlightenment. Dorothy wants to learn how to get back home. This is perhaps where our metaphor breaks down, because when it comes to AI and digital humans, we don’t think now is the time to look back, click our heels together and wish we were back where we once were. It’s the time to move forward.