Episode 12 – Talking Ethical AI with Olivia Gambelin

Ethics in artificial intelligence (AI) and technology continues to be a hot topic everywhere. While technologies like AI have been revolutionizing our world, there are two sides to every coin: an algorithm is only as good as the data you feed it, and it is the team’s responsibility to keep it in check. 

On the twelfth episode of the Decisions Now podcast, we are joined by AI ethicist Olivia Gambelin, founder of Ethical Intelligence, who brings us a thought-provoking conversation centered on ethics in AI.  

Tune in as co-hosts Rigvinath Chevala, Evalueserve’s chief technology officer, and Erin Pearson, our vice president of marketing, chat with Gambelin about ethics in AI, the challenges that come with it, building an ethical strategy, and what the future looks like!  

What is Ethical AI? 

Remember when Facebook’s privacy issues were creating all the buzz, or when Microsoft released a chatbot that was later shut down for posting offensive tweets on Twitter? These are all instances that remind us that while technology can be beneficial, it can’t be left unsupervised. This is where ethics in AI comes in. 

“Ethics, really if you’re boiling it down, it’s the study of right and wrong and what constitutes a good versus a bad action. What we’re talking about when it comes to ethics and technology is just the decisions around our technology on what makes a better product, what makes better technology versus technology that’s kind of eh or hitting headlines on scandals,” Gambelin said. “Ethics is that tool that’s helping you differentiate and determine those decisions.” 


Ethical AI vs Explainable AI 

“Do we need algorithms that are transparent, where you can explain it and then move on to making them ethical? Or do you do it the other way?” Chevala asked. 

It’s essential to be able to explain the AI before launching an ethical AI plan, he added. 

Gambelin agreed that explainability comes first, as it allows an ethicist to come in, work with the system, and pinpoint what needs tweaking.  
Updating technologies that aren’t explainable becomes harder and more time-consuming for ethicists: it involves more digging and asking whether you caught everything, since you can’t see into the model, she said. 

“If AI is a black box, you can still control the inputs, like the training data, the test data, and still make it somewhat ethical to your point, but then you don’t know if the output is still integral or with integrity, I guess,” Chevala said. “So, that’s what we also arrived at and it’s an important distinction because most of AI that people understand today is really not transparent.” 

AI is still a black box, making it hard to trust, he added.