What is Artificial Intelligence?
- Laura Jones
- Feb 3
- 3 min read
Prior to returning to GSU in August 2024, my answer would have projected disapproval: I don't know, but I don't like it. In the news stories I paid attention to, AI was dangerous.
Then, as AI composition technologies began to creep, then flood, their way into discussions of academic integrity, they became, at least to my mind, almost like a performance-enhancing drug.
I asked ChatGPT to create a meme for me:

Me: (well crap.) "Why is the runner a black male?"
AI: "I understand your concern. That was not intentional and I appreciate you pointing it out. The AI-generated image doesn't allow me to select specific appearances for characters unless explicitly requested. If you'd prefer a different portrayal for the runner, I can adjust the prompt to ensure it's more inclusive or neutral. Would you like me to do that?"
And now, I realize a few things are happening to my understanding of AI. ChatGPT is learning about me, from my reactions to the content it creates for me. I am learning about ChatGPT, and I am also learning about myself.
I know why I am pissed off that the meme features a black male as the runner in the lead. Why was I not pissed about the white male student using AI to write his paper?
And - what might ChatGPT be suggesting about class disparities in access to AI technologies that I should be noticing?
Still, yuck.
I am still working on an answer to this. And part of me is still stuck in the '80s. If you've never seen WarGames, you probably should. It can definitely provide some insight into the things I imagine a bunch of us Gen Xers are worried about.
I was excited to see that Purdue has a blog post about the film! That's worth the read as well.

I anticipate coming back to the class disparity issue in a future post. I use the free, open-access version of ChatGPT, so I can't spend all day on it. But the more I tinker with it, the more I am coming to understand one of the aspects of AI that I personally appreciate: when I ask AI to create something for me, I have to give it as much information as I possibly can to get as close as possible to the product I want to see.
The process of describing what I need affirms the stuff that I know, and it helps me identify the stuff I still need to learn. Last night, I asked ChatGPT to help me design a shirt for my HS Varsity Girls Softball team. We are being inducted into our HS Athletic Hall of Fame later this year, and I want to make a commemorative jersey.

The jersey kind of confirms what I already know: my mind is pretty cluttered. But having this AI tool is helping me see that there is a spot for the random stuff in my brain, and when I can clear a little space by emptying out the clutter so that it is right in front of me, I have room to generate new ideas too.
Comment:
Very cool read. Your hesitations and discomfort are warranted. Here's a weird question: since ChatGPT and the other models perpetuate the bias encoded in their training data, but your use of ChatGPT (as when you pointed out the bias in the meme) can actually train the models to behave differently, do you (or I) have some kind of responsibility to use them in this fashion to help counteract the bias?