
Why ethics are the key to responsible artificial intelligence

The rise of AI systems doesn’t mean robots are taking over, but it does raise important ethical considerations

When it comes to artificial intelligence (AI), many of our preconceptions come from movies, TV, and the internet, and they often exaggerate what AI actually is and what it’s used for. AI is not The Terminator. It’s not Rosie from The Jetsons. The reality is much more mundane: think credit card statements, Spotify’s suggested playlists, chess-playing computers, or website chatbots.

The Hub recently chatted with Katrina Ingram, an AI ethicist and CEO of Ethically Aligned AI, an Alberta company that helps organizations make better ethical choices when designing and deploying artificial intelligence. Ingram is also a subject-matter expert for PowerED™ by Athabasca University’s new AI Ethics micro-credential.

Ingram shares her perspective on what AI is and isn’t, and on how it’s being used by a growing number of organizations and businesses. She also explains how PowerED’s micro-credential is built both for those who want to understand how technology shapes our lives and for those who build AI systems but need a better understanding of their responsible use.

The Hub: What is artificial intelligence? Is our understanding of it based on fact or fiction?

Katrina Ingram: Many people don’t know very much about artificial intelligence. And what they do know is really informed by a lot of hype, or a lot of Hollywood misinformation.

When I have these conversations with people about what AI is, we usually end up talking about The Terminator or one of these Hollywood movies that people have seen, usually where robots are taking over the world and trying to exterminate humanity. And the reality is, that is not what artificial intelligence is today, and hopefully not what it will be tomorrow, either.

Can you explain what you mean by AI ethics?

The term AI ethics has come to mean, very broadly, this set of issues or harms: the things that can go wrong with AI technology.

Can you give some examples of this?

One example would be Google. They’re such a massive company and they’re kind of into everything. But you remember those cars they used to drive around neighbourhoods to film your home, the street, everything? That’s how they created Google Street View. Just think about that for a minute. Did you say it was okay to film your home? Probably not. I don’t remember signing a consent form for that. But that’s the kind of stuff that happens.

Who would benefit from taking PowerED™’s Artificial Intelligence Ethics micro-credential?

It’s for the general public: people who are interested in learning more about the ways technology is shaping society, who want to have more informed discussions about things like the future of work, automation, and the AI systems powering the everyday technology they use, and who just want to be better informed.

It’s also for the people who are building this technology, because so many technologists, data scientists, and computer scientists have never really considered ethics and have never had any educational training in it.

Published: April 21, 2022