New artificial intelligence tools such as DALL-E and ChatGPT have made headlines recently for their uncanny ability to create realistic art and writing from simple prompts. As the hype grows around these technologies and their impact on society and the economy, many people wonder about their potential, challenges, and dangers.

On January 26th, Professors Rebecca Willett and Ben Zhao of the University of Chicago joined Wisconsin Public Radio to answer some of these questions from host Kate Archer Kent and listeners. The two discussed whether these models could replace human artists and writers, their possible value for health care and scientific research, the bias and ethics of AI, and security measures that might prevent their misuse.

Willett, Professor of Statistics and Computer Science and Faculty Director of AI at the Data Science Institute, commented on the complex economic and educational effects of artificial intelligence as it becomes capable of performing tasks previously considered exclusively human.

“I think that there are some jobs that will not require as much human effort as they did previously. There is going to be a change to the economy,” Willett said. “But I think that we are going to see a need for new jobs, that there will be new roles that will emerge as we develop these various AI tools. So that is going to mean that we’ll have to rethink the way that we train students or workers, both at the university level and perhaps even the K through 12 level, but also as we think about training programs for mid-career professionals…There’s going to be a need for learning how to best work with and utilize these tools, how to use them ethically and responsibly and ensure that we’re not doing anything that hurts the underrepresented.”

Zhao, Neubauer Professor of Computer Science and a researcher on machine learning security, discussed the “cat and mouse” game of watermarking the output of AI models for imagery and text, as well as how society will need to think about the bias of these technologies as they are used for critical decision-making.

“When you deal with other humans, you understand the existence of bias, you account for it in your mental calculus when you deal with them, and so it is easier to expect certain types of bias with certain people,” Zhao said. “For machine learning, one of the troublesome things about it is that it does have bias, there’s embedded bias inside that’s almost impossible to get rid of, only minimize. But at the same time, it is not obvious that it is there. So part of the issue with dealing with bias may be just getting people more aware that machine learning itself is, in many ways, like people. It has its own bias, because a lot of its training data comes from data generated by people. It is a product of society and culture and what we do, so it has that carried-in bias. And if we can understand that, that will help us deal with and accustom ourselves to some of that bias and its impact.”

Listen to the full segment at Wisconsin Public Radio.
