Episode 13 – Understanding the AI Bill of Rights with Christopher Sanchez


The rising interest in ethical AI has pushed organizations to be more responsible with their technology. On the latest episode of the Decisions Now podcast, we are joined by Christopher Sanchez, CEO and founder of Emergent Line, who shares the story behind the Global AI Bill of Rights, which he wrote. 

In this episode, co-hosts Rigvinath Chevala, Evalueserve’s chief technology officer, and Erin Pearson, our vice president of marketing, talk with Sanchez about the bill, the need for responsible tech and its effects, privacy, and why explainability matters. 

 

Global AI Bill of Rights 

The bill grew out of conversations Sanchez had with his clients and teams, with the goal of building technology that is human-centric and ethical, and of designing products that empower not only end users but also workers. 

He founded dataoath.org and wrote The Absurd Dinner Bill, both of which sparked conversations and buzz about the need for clearly defined user rights.  

“AI is inherently global. And then what would that algorithm look like that applies to those rights. And then what are the data practices? Because the data practices feed the algorithms that would ensure those rights,” Sanchez said. “So, if there’s one thing that I’ll touch on, I firmly believe everybody wants to do the right thing. Everybody inherently is a good person, but you also have to make it simple for them to do the right thing.” 

The bill touches upon topics like facial recognition rights, biometrics and other identifying data rights, ESG (environmental, social, and governance), emotional and mental state identification, users’ data rights, system bias, and equality of outcomes. 

Making Rights Accessible 

Given the abundance of information and the number of competing AI frameworks, businesses may find it challenging to apply ethics to their operations, which makes it essential to simplify those frameworks, Sanchez mentioned.  

A key aspect of algorithmic due process is explainability.  

Explainability matters most for AI with direct human impact, he said. It is not always needed for knowing when a package will be delivered, but it is essential in more consequential situations, such as decisions that determine whether someone gets a job interview or what their insurance premiums will be.  
 
“I believe as AI practitioners, we have a responsibility, not only to create amazing products that improve people’s lives, but to think about everybody who will be impacted, even if we never see them, even if they never use our products,” Sanchez mentioned.  

When making explainability work, here are a few questions teams can ask themselves:  

  • How are you thinking when creating these products?
  • How are you collecting people’s data?  
  • Who does the data represent?  
  • Do you have a diverse team?
  • Are you testing the outputs, and are people being treated fairly? 
     

Make a list of people that exist in society and ask if the products will work for everyone.  

“Remember the problem with AI is problems happen at scale, it doesn’t impact one person. It impacts tens of thousands of people at once and very fast,” he added. 

 

Education 

Issues with unethical technology affect everyday workers, so what can people do to understand this and educate themselves? Pearson asked Sanchez. 

“Because even if businesses make it available to them, it doesn’t necessarily mean that people are going to seek out that information. So, do you think that’s an element of it? There has to be a shift in culture for them to understand it better or understand how it’s actually impacting them,” Pearson asked.  

Sanchez said this is a three-part issue. First, companies must be clear about the AI their consumers are interacting with. Second, there needs to be some personal responsibility as well. Lastly, this education must be included in school curriculums.  

In 2016, his team launched a site called Wandering Alpha, with a mission to educate folks on AI, NFTs, blockchain and more.  

The whole point was to make it simple and concise for someone to understand, Sanchez said.  

Many people still believe AI is limited to robots. Education will help bridge the disconnect between that outdated view and reality, and help people understand the numerous AI interactions they have every day on their phones and computers. 

“There is a bigger chance that you will be mistreated by poorly designed AI systems than you will be by super intelligence,” Sanchez said. “Where they start developing populations that understand this from a very young age, they’re more sophisticated about what they’re dealing with and the data that they’re leaving and also the technologies they have to master in order to thrive in the world that we’re all going to be going into.” 

 

Data Privacy 

 
With literacy comes knowing that much of what you see on your social feeds has AI behind it, determining what you see based on your past behaviors, preferences, and network, Chevala said.  
 
Providers should give end users the power to choose what they do and don’t want to see. When you see a job posting in one city but not in others, it’s helpful to know why, and that’s where explainability first comes in, he added. 
 
Customers must know the why before they consent or agree to anything concerning their privacy and their data. 

Diving into an example of this issue, he pointed out how companies used to collect GPS points from phones without users realizing it, but with greater literacy, people are now cautious and concerned about why their location is tracked.  

“But you had consented. You just scrolled through it and clicked accept, but you just didn’t know what you agreed to,” Chevala said. “Now it’s more evident. I think that same evolution needs to happen on the AI, especially the algorithms that you and I use on a day-to-day basis.” 

 

Sanchez mentioned two rules when it came to data collection:  

  1. Data has value, so organizations shouldn’t scrape data from people without their consent or without compensating them.
  2. There must be limits on how data is collected, and how much you know about a person. 

 

“The key thing that I want you to think of with the Global AI Bill of Rights is we are thinking of how we are creating the world that not only we will live in, but future generations will live in,” Sanchez said. “If algorithms don’t have explainability, they don’t have transparency.” 

The trio agreed that people should have the right to understand why they aren’t getting certain opportunities. 

To learn more about what these rights look like or to contribute to this conversation, visit: https://www.billofrights.ai/.

 

Listen to this episode of the Decisions Now podcast featuring Christopher Sanchez to learn more about the Global AI Bill of Rights. Subscribe today! 


The podcast

Decisions Now is a bi-weekly podcast presented by Evalueserve that discusses how to generate decision-ready insights from artificial intelligence and data. In each episode, co-hosts Rigvi Chevala and Erin Pearson talk with experts, analysts, and business leaders across industries to bring you insights on diverse AI subjects.  
 
Subscribe today and don’t miss an episode. 
