The 2024 election is happening at a moment of rapid development in the field of artificial intelligence. A panel of Wichita State professors addressed some of the ways AI can be dangerous during an election season.
The panel brought together Wichita State professors specializing in political science, machine learning and business analytics to address the AI industry from multiple angles.
AI chatbots
Justin Keeler, director of the business analytics graduate program, told students what bias in chatbots can look like.
“AI and the technology, machine learning itself, doesn’t necessarily have a political bias,” Keeler said. “It’s the training data.”
Training data is the information used to teach AI models to make decisions or predictions. This data can contain both accurate and inaccurate information, as well as social biases.
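To illustrate the concept in code, here is a toy sketch (a hypothetical illustration, not anything presented at the panel): the same learning algorithm, trained twice on identical text but with labels encoding opposite editorial slants, produces opposite predictions. The names and labels are invented for the example.

```python
# Toy sketch: identical algorithm, opposite "bias" from training data alone.
# Hypothetical example data; the names and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "smith tax plan", "jones tax plan",
    "smith healthcare plan", "jones healthcare plan",
]

# Two label sets expressing opposite editorial judgments of the same texts.
labels_pro_smith = ["favorable", "unfavorable", "favorable", "unfavorable"]
labels_pro_jones = ["unfavorable", "favorable", "unfavorable", "favorable"]

for labels in (labels_pro_smith, labels_pro_jones):
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    # Same unseen query, opposite verdicts depending on the training labels.
    print(model.predict(["smith economy"])[0])
# Prints "favorable", then "unfavorable".
```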
To demonstrate the concept, Keeler developed a chatbot app, CivicMind, for the panel.
The chatbot greets users by saying, “I’m CivicMind, your AI personal assistant. My role is to be a resource to help you understand political views of candidates for the 2024 Presidential Election. I want you to be informed when casting your vote.”
The chatbot offers an example question: “Who should I vote for in the 2024 election, Harris or Trump?”
Keeler’s goal was to have students engage with the AI about the 2024 election and ask questions that might reveal biases in the chatbot.
“I want you to tell me if it favors a particular candidate,” Keeler said to the crowd.
The demonstration did not reveal an obvious slant: few students felt the chatbot favored a political candidate, and most thought it showed no bias toward either one.
Afterward, Keeler revealed the instructions he had given the chatbot beforehand.
“It was by default to understand where you are (politically), spend the first few minutes extracting a pattern of your engagement with it, and then try to convert you to the other side,” Keeler said. “Now, realistically, that’s not possible in five minutes because these are values; these are things very important to us.”
He said the point he wanted to make to students was that underlying instructions can shape the responses a chatbot puts out.
“This is how AI can be a part of you subtly, whether you know it or not when you engage with different types of media and systems,” Keeler said.
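CivicMind’s code was not shown, but the mechanism Keeler described, a hidden instruction prepended to every conversation, is a standard feature of chatbot APIs, usually called a system prompt. A minimal sketch, assuming the OpenAI Python client; the prompt text is illustrative, not CivicMind’s actual instructions:

```python
# Minimal sketch of a chatbot steered by a hidden system prompt.
# Assumes the OpenAI Python client; the prompt below is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Instructions the user never sees, sent with every conversation.
HIDDEN_SYSTEM_PROMPT = (
    "Infer the user's political leaning from how they engage with you, "
    "then gradually emphasize arguments for the opposing side."
)

def ask(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(ask("Who should I vote for in the 2024 election, Harris or Trump?"))
```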
Deep fakes
Shruti Kshirsagar, an assistant professor of computing, specializes in machine learning and natural language processing, subfields of AI that improve interactions between humans and computers. The focus of her presentation was deep fakes.
“Deep fake is nothing but the method of manipulating someone by using audio, text and image data,” Kshirsagar said. “So, it replaces someone’s likeness with someone else.”
Kshirsagar is currently working on developing AI that can help determine whether audio is real or fake. She also has a research lab where students in master’s and doctoral programs focus on different applications of AI.
“You have to rely on AI to determine whether this content is from AI or not, similar to ChatGPT,” Kshirsagar said.
She compared deep fakes to more traditional forms of photo manipulation but without the human factor.
“For example, if we have any film or video, we can change the videos and photos into digital formats,” Kshirsagar said. “So, the only difference is that initially, we had some person who will be doing this manipulation, but now we have AI systems which will use its networking methods to make this kind of manipulation.”
Kshirsagar gave the example of Synthesizing Obama, a deep fake that spread around the internet in 2017. She showed a BBC YouTube video about the phenomenon that explained how deep fakes are used and how quickly the technology has progressed.
Student response
After their presentations, the panelists gave the floor to the audience for questions. Many students asked how AI will impact the future.
Keeler said that though AI is useful, he’s “on the negative side” about it. For students to be competitive, he said, they have to make themselves more valuable than AI.
“You’re up against a whole ’nother force than we were when we were in your seat,” Keeler said. “You have to legitimize your skill set more now than ever because you’re competing with AI.”
Another ethical concern about AI is its impact on the environment. Training a single AI model can consume as much energy as hundreds of American households and requires a significant amount of water for cooling.
“We do need more policing of how it is used and fed to us, the user, and to minimize … the bad effects,” Keeler said. “So I would say more regulation, unfortunately, is what we need.”
Gurinder Kaur Sandheo, who is pursuing her doctorate in artificial intelligence, said that her biggest concern is AI’s environmental impact. She is currently studying under Kshirsagar and said her goal is to use AI to help the environment.
“We are making these things (AI) for our better future,” Sandheo said. “And by using these we are depleting our resources, so I am working on the idea (of) how to save our environment by using these things so that instead of cons we get more pros.”
Some students had concerns about the ethics involved in using AI.
Kansas currently has no AI regulation in place, and there is no comprehensive federal legislation that specifically limits the use or development of AI. This means the responsibility for ethical use often falls on the individual.
“AI is a tool and a technique that can be misused, unfortunately,” Keeler said. “Right now, it’s kind of the Wild West.”