On Friday, February 17, 2023, Duke University hosted its 2023 Provost’s Forum. This year’s topic was Big Problems in Big Tech. The morning session was a panel on the topic moderated by David Hoffman and featuring the following panelists:
- Casey Fiesler, associate professor in the Department of Information Science and the Department of Computer Science at the University of Colorado Boulder
- Matthew Kenney, senior machine learning engineer at Alethea and former machine learning engineer at Apple
- David Page, professor and chair of biostatistics and bioinformatics and professor of computer science at Duke University
- Aarthi Vadde, associate professor of English at Duke University
Some of the main topics of the panel’s discussion were:
1) The integration of generative AI into search engines, such as Microsoft's new integration of OpenAI's technology into Bing.
David Page said search has looked pretty much the same for the last 30 years, and this will be a major change.
The panelists discussed how messy the rollout of the new Bing/OpenAI demo was. Casey Fiesler mentioned that a common complaint in the news about the updated Bing is that it's a bit creepy. However, Fiesler said it's not that surprising that Bing would echo people's fears about AI: it was trained on human conversations, so it's mimicking human conversation.
Aarthi Vadde said she's surprised how quickly people forget that new technology is often met with this kind of unsettled, uncanny-valley reaction.
“You could go back to early film and remember that audiences would run out of the theater at the specter of a train coming at them,” she said.
Because Big Tech is known for its pixel-perfect product launches, Matthew Kenney said he was very surprised by the issues with the Bing/OpenAI demo. He thinks the hype around ChatGPT and the generative AI race we’re seeing fueled the premature Google announcement and Bing release.
2) The potential to generate personalized disinformation campaigns
Several panelists (Fiesler, Kenney, and Page) said they believe generative AI has immense potential to create personalized disinformation campaigns.
“One of the things that I asked it (ChatGPT) to do first when I was playing with it was I gave it a description of a family member, here are some things about them, here are some things that they believe politically, here’s the person they voted for in the last election, can you give me an argument to convince them of this thing? Very good argument,” Fiesler said.
3) Generative AI in education
Many educators reacted to early ChatGPT news and discussions with a fair degree of alarm. Vadde and Fiesler said this "moral panic" was not the right response.
“I’m not saying if you’re in a hurry and you’re reading tons of applications, it couldn’t get by you – I’m sure it could get by me in some ways – but I’m not that interested in whether AI can get by me, whether it can play gotcha with me, whether I look silly,” said Vadde. “I’m really interested in the idea that students don’t want to turn to it as a time or labor-saving device because they’re interested enough in the materials, they’ve had a classroom experience that allowed them to develop their writing over time, that it doesn’t look like the best option.”
Vadde and Fiesler also agreed that they are less interested in policing students’ usage of AI than they are in understanding what would drive a student to want to “cheat” with generative AI. Vadde believes AI literacy is something educators will need to begin teaching students and that generative AI brings with it learning opportunities.
The panel also raised fears about whether generative AI will remain accessible at a free or low-cost price point moving forward. The panelists are concerned that generative AI platforms moving to paid and subscription models could exacerbate existing inequalities.