The Rome Call. You would be forgiven for thinking it’s the kind of name that Dan Brown might come up with when he’s running low on ideas for his next novel. But no, the Rome Call isn’t a lazy follow-up to The Da Vinci Code. For a start, it’s non-fiction.
In February, Pope Francis announced the Vatican’s endorsement of an artificial intelligence (AI) ethics framework. ‘The Rome Call for AI ethics’ is a pledge to promote the appropriate use of AI through six key principles. And it already has major backers in the tech world, with both IBM and Microsoft involved.
Pontif-icating about AI ethics
The Pope probably wouldn’t be the first person to spring to mind if you were asked to name someone with their finger on the pulse of AI. However, the supreme pontiff and various research societies under his control have been exploring the impact of AI, robotics and other emerging technologies on issues such as faith and humanity for some time.
As a culmination point, the Holy See has defined six key principles of AI ethics:
- Transparency
- Inclusion
- Responsibility
- Impartiality
- Reliability
- Security and privacy
These are fundamentals we can get behind here at UneeQ, not least because they align very closely with our own Five Laws of Ethical Digital Human Design: honesty and transparency; AI for good; privacy; respect; and co-design (to avoid bias and promote diversity).
Is AI ethical?
The ethical ramifications of AI have always been a hot topic for debate. But rapid advances in deepfake technology, the rise of bots distributing fake news and other malicious AI use cases have arguably brought these issues into even sharper focus recently.
Like any tool, conversational AI (and AI in general) is only as ethical as its creators and adopters. And all creators, no matter how honourable their intentions, are susceptible to personal bias and errors of judgement. Algorithms are designed to learn, but with tainted or incomplete data, they’re likely to learn the wrong lessons.
Mapping out a comprehensive, principle-based framework helps secure the brightest possible future for AI.
Knowing where your partners and third parties sit on such subjects will also make it easier to work with those that align with your own values.
That’s why we launched our Five Laws of Ethical Digital Human Design last year. It’s also why other organisations have worked hard to develop their own guidelines for ethical AI, including the EU, the IEEE and Google.
These are all separate frameworks, yet, encouragingly, they largely offer a shared vision and similar perspectives on how to deliver ethical AI. At the very least, they open a dialogue, and it’s important for everyone to get involved.
AI ethics and digital humans: the benefits
The Rome Call is clear about the Catholic Church’s vision for ethical AI:
“AI systems must be conceived, designed and implemented to serve and protect human beings and the environment in which they live.”
There’s more: “AI-based technology must never be used to exploit people in any way, especially those who are most vulnerable. Instead, it must be used to help people develop their abilities (empowerment/enablement).”
Serve. Protect. Empower. Enable. These are admirable goals, but what do they mean in practice? The good news is there are plenty of examples of how AI, in the form of digital humans, can deliver these benefits.
SERVE: Digital humans are available 24/7 throughout the year, and they’re scalable to the Nth degree. This means consumers have virtually limitless scope to engage with digital humans whenever they need. Take UBS as an example: the company has created a digital double of its Chief Economist, Daniel Kalt. The real Daniel only has so many hours in the day, but his digital counterpart is always available to deliver key insights, answer questions and offer advice to UBS’ clients. Service with a smile, and a voice, and an incredible IQ – just like the real Daniel.
PROTECT: Safeguarding mental and physical health is just one way that digital humans can offer protection. They are cost-effective, non-judgemental and able to form strong emotional connections with users. That makes digital humans an excellent first point of contact for any questions that patients may feel are too trivial or embarrassing to discuss with healthcare professionals. The end goal for many of our digital humans in the healthcare sector is to enable patients recovering from health issues to have natural conversations with a digital human about their diet, medication, rehabilitation and more.
EMPOWER: Research shows people develop faster through active rather than passive learning. In other words, participation and engagement often trump textbooks and lectures. Digital humans can communicate through voice, tone, text and body language, opening up engaging user experiences that are fully immersive and empowering. The VARK model identifies four learning styles – visual, aural, read/write and kinaesthetic – and we’re proud to say digital humans can engage with people in all four ways, making them empowering tools for the vast majority of the population.
ENABLE: Many of the features of digital humans we’ve discussed are especially beneficial for vulnerable consumers. People with disabilities, for example, face many challenges when accessing and shopping in physical stores. E-commerce can help lift some of those barriers, but at a significant cost: the loss of face-to-face interaction. All is for All is a company that aims to tackle these problems, and digital humans are part of the solution. They can talk with consumers online about their garment choices and explain how clothes might fit people with disabilities, such as wheelchair users. This is just the beginning; the possibilities to make retail more accessible and engaging are endless.
The future of AI ethics
All that is just a bird’s-eye view of how digital humans can fit into a principle-based AI ethics framework like the Pope’s Rome Call.
Digital humans have the potential to transform human-to-machine interactions for the better. But we must always be vigilant to the ways in which AI can be used to cause harm or emotional distress, so that we can protect against misuse.
AI ethics isn’t a journey with a final destination; society’s morals, standards and values are always evolving. We encourage the community to come together and contribute their thoughts and ideas – feel free to do so on our dedicated LinkedIn Group on digital human design, and share your stories.
Successful AI innovation will always be underpinned by a good “why” – a strong, principles-based ethical foundation. You can read our guidelines by downloading the 5 Laws of Ethical Digital Human Design below.