AI, and generative AI in particular, feels like an incredibly powerful force right now. However you approach the technology, you can’t deny that its presence in our digital ecosystem is utterly inescapable.
Nine times out of ten, the first search result on Google is an AI summary. Pictures, songs and paragraphs can be created from simple text prompts. College students brag on social media about passing classes with ChatGPT. AI-generated Italian brainrot is the hot new trend.
There are concerns about AI replacing human jobs in creative and technical industries. The environmental impact is well-documented at this point. It’s a wild west out there, and Wichita State has recently embraced it, with caution, by incorporating Copilot into certain lesson plans and administrative workflows.
The implementation is by no means a unified and complete technological overhaul, however. Depending on what classes you take or what department you work in, AI is used quite differently throughout campus.
“We are using AI tools for some coding projects and stuff like that that we do here in the MRC (Media Resources Center) and that’s a great use case for us,” MRC executive director John Jones said. “We’re doing a lot of this stuff so we’re using it a lot but academic advising may not use it at all or may use it only in small ways.”
Jones said the reasoning behind this implementation comes down to preparing students for the workforce.
“If our role as an institution is to prepare students for the workforce they’re going to enter, that workforce is going to be one that has AI in it,” Jones said. “And if you walk into that room, that space, without skills that let you operate those tools, then you’re going to be at a disadvantage compared to other people. So, it’s not necessarily that we love everything about it. It’s that it’s a reality that we have to face.”
One important distinction in the university’s use of Copilot concerns data collection. More specifically, under the contract between the university and Microsoft, Copilot’s owner, Microsoft claims no ownership of the “information stream,” as Jones puts it.
“In the case of our enterprise contract with Copilot, the information stream doesn’t flow back to the AI to continue to retrain the AI,” Jones said. “And so there is no risk of that information going back into the system, and they claim no ownership of it. So it’s through legality and the severing of that feedback loop that provides us some security, some safety there.”
The university taking steps to curb data collection is a move in the right direction, but what does the implementation of Copilot entail for students and faculty? Personally, I’m not a big fan of generative AI; I think these tools are being commercialized and accepted far too quickly, without regard for their potential negative impacts.
But those same tools can be undeniably useful in certain contexts.
I actually use AI programs to speed up the transcription of my interviews. I guess that makes me a hypocrite, but I think there’s more to it than that.
Much of this implementation seems to hinge on transparency and on AI being used in the “right” context, at least according to Jones.
“I think that a big part of that ethical use is about transparency,” Jones said. “So if I use AI to generate something, I’ll usually, at one point or another, disclose that AI was used to create this, or at least in the process.”
The thing is, though, I used that program to speed up the transcription process, but I still went through the interviews manually to make sure I got everything down correctly.
To me, that’s AI being used in the “right” context, but that right context could look different to somebody else, and that’s where things start to get problematic. It’s a slippery slope, and once you slide far enough down it, it’s no wonder these programs can make us stupider.
It doesn’t help that many of WSU’s official AI guidelines and use cases are vague enough to apply to just about anything. If we’re to prevent misuse going forward, a more specific set of guidelines is paramount. To a certain degree, WSU has been cautious in its encouragement of AI, but cheating is still an issue, and continuing to encourage these tools the way we do now will only exacerbate it. After seeing the prevalence of AI in the English department, associate professor and chair Francis Connor has a similar perspective.
“There are so many cases that instructors just don’t know what to do with all of these students,” Connor said. “Do we report everyone to academic affairs and get everyone kicked out of school? How do we assess papers? Do we just grade them as if it was a normal paper?”
At this point, according to Connor, it’s easy for the English department to distinguish between what’s written by AI and what isn’t.
“I will say, we can tell when something was written with AI,” Connor said. “It does not synthesize information well.”
Ultimately, I think the way WSU is approaching the AI boom with Copilot is worthy of some praise, but the vagueness of its policy is counterproductive and leaves too much room for personal interpretation. Still, I’m hopeful that we’ll get past this AI craziness and learn to approach these tools with more nuance as they become more efficient, much in the same way academia eventually made peace with the internet.
“I think we came to a kind of equilibrium with online resources where, you know, over time we can instruct students what good and bad uses of that are,” Connor said. “And, you know, I rarely in the past couple of years saw people just steal Google pages and put them in a paper as I did in, like, 2004 and 2005.”
“I’m hoping we get to that sort of thing by AI. I’m hoping students get tired of having their essays evaluated as, you know, ‘This sounds like AI,’ and want to do them for themselves.”