Fact or AI fiction?

Discussion surrounding artificial intelligence has grown tremendously over the past year, with many questioning how to integrate its generative capabilities into everyday life and others striving to keep it from becoming more prevalent given its potential for nefarious uses.

As mentioned in a previous Republic-Times article, AI-created content is particularly concerning for some given its potential use for misinformation, which could become even more prominent as the U.S. moves further into the 2024 presidential election cycle.

Sheldon H. Jacobson of the University of Illinois spoke about the implications of AI and the role it might have in the near future when it comes to creating and spreading false information online.

Jacobson holds a Ph.D. in Operations Research from Cornell University, is a Founder Professor in Engineering at the U of I, and currently serves in a variety of other positions at the university, including appointments in Industrial and Enterprise Systems Engineering and Electrical and Computer Engineering.

Describing AI’s existing role in misinformation, Jacobson pointed out how just last week Republican presidential candidate Ron DeSantis’ campaign shared apparently AI-generated images of fellow candidate Donald Trump embracing former National Institute of Allergy and Infectious Diseases Director Anthony Fauci.

He noted the capability of AI-generated content to fool many individuals, also pointing out how the fake images of Trump and Fauci were shared alongside real images of the two in order to make DeSantis’ story – that Trump was supportive of Fauci and his response to the COVID-19 pandemic – more convincing.

While this is just one example of AI being used to develop fake news pertaining to the 2024 election, Jacobson said the quantity of such content could grow substantially in the future.

“For a human being to create a misrepresented image takes time,” Jacobson said. “For generative AI to do it, it’s very rapid, which means the sheer volume of it can be overwhelming.”

As discussed in the previous Republic-Times article, generative AI generally consists of neural networks – systems fed very large amounts of information such as images, text or even video – that learn to recognize patterns and produce content similar to what they were trained on.
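To make that “learn patterns, then generate similar content” idea concrete, here is a deliberately simplified sketch. It uses a character-level Markov chain rather than an actual neural network, and every name in it is invented for illustration; real generative AI systems are vastly larger and more sophisticated.

```python
# Toy illustration only: a character-level Markov chain standing in
# for the "learn patterns, then generate similar content" idea.
# This is not how ChatGPT works internally; it is a simplified analogy.
import random
from collections import defaultdict

def train(text, order=3):
    # Record which character tends to follow each `order`-length chunk.
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, seed, order=3, length=80):
    # Extend the seed by repeatedly sampling a character that was
    # observed to follow the most recent chunk during training.
    out = seed
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:
            break
        out += random.choice(followers)
    return out

corpus = "the quick brown fox jumps over the lazy dog. " * 20
model = train(corpus)
print(generate(model, "the"))
```

Fed enough text, even this toy produces passable fragments resembling its source material; the same underlying principle, scaled up to billions of learned parameters, is what powers modern generative systems.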

While the prospective wave of such content could well have major influence on the next election, Jacobson said its impact could have been worse had it not been, at least somewhat, expected.

“The element of surprise plays a critical role,” Jacobson said. “We saw this with COVID-19 and the pandemic. We didn’t see that coming… It’s the element of surprise that really results in the greatest impact, and in this particular case, the element of surprise isn’t really there anymore.”

He further said that, though AI introduces new avenues to produce misinformation, fake news is nothing new. Social media, he pointed out, has long been fraught with articles and posts lacking credibility or plainly spreading lies.

Since AI-generated content first grew in popularity a little over a year ago, certain “tells” have existed to help determine what is and isn’t AI – such as extra fingers and misaligned teeth on people in images or stilted speech in text or audio.

Jacobson similarly pointed out a common issue with AI video: shaking and flickering when objects and figures move, such as when someone uses a virtual background during a Zoom call.

While Jacobson said people will have to develop a keen eye to notice such details in the things they see online, he also said that such tells are likely to become less reliable as time goes on and AI’s capabilities develop.

A somewhat more reliable method for determining whether an image, text or video is fake is pretty much the same as it’s always been: considering what is being expressed and where the message is coming from.

“Ultimately, you have to look at the gestalt of the message, whatever the message is, and then make an assessment whether it’s possible, or it’s implausible or it’s just downright a fake,” Jacobson said. “There’s no silver bullet that will help people spot it.”

He also mentioned how this sort of increased skepticism is likely to have people dismissing real information and images they don’t believe as merely AI-generated.

Jacobson described the current moment as a transient phase when it comes to AI, specifically referring to it as a “Wild West” that people will hopefully become acclimated to and learn to live in.

He further spoke to the nature of generative AI, noting how it isn’t the sci-fi idea of a computer perfectly mimicking human behavior.

Instead, these systems fundamentally rely on the information that is fed into them, and as this information chiefly comes from the internet, AI content can have mixed results.

Jacobson specifically mentioned how he has played around with ChatGPT – an AI text generator – asking it questions he knows the answer to, only to have it spit out false information.

“Underlying ChatGPT is machine learning algorithms which have learned a tremendous amount of information from the internet, and the internet, as we know, has very good information, but it also has not so good,” Jacobson said.

Despite this shortcoming, Jacobson noted a variety of benefits AI could have in the world, comparable to major technological changes such as the switch from scribes to the printing press.

Diagnostics, he said, could be one area where AI serves as a boon. While it wouldn’t boot doctors out of a job, it could act as a way for them to more efficiently provide diagnoses.

He ultimately reiterated the fundamental difference between how AI generation and the human brain operate.

“It can learn better than a human being can,” Jacobson said. “What it can’t do as well as a human being is reason, but it can certainly learn.”

In related news, Illinois Attorney General Kwame Raoul joined 23 other attorneys general this week in urging the National Telecommunications and Information Administration to push for AI governance policies prioritizing transparency.

“Consumers should be informed if companies are using AI in their products and services, and the potential impacts on people should be considered in shaping regulations,” Raoul said. “I am proud to join my fellow attorneys general in urging federal regulators to adopt standards that support the responsible development, use and deployment of AI systems.”

Andrew Unverferth
