
What is Artificial Intelligence?

Prior to returning to GSU in August 2024, my answer would have projected disapproval: I don't know, but I don't like it. In the news stories I paid attention to, AI was dangerous.





Then, as AI composition technologies began to creep, then flood, into discussions of academic integrity, it began to seem, at least to my mind, almost like a performance-enhancing drug.


I asked ChatGPT to create a meme for me:



See any problems here?

Me: (well crap.) "Why is the runner a black male?"


AI: "I understand your concern. That was not intentional and I appreciate you pointing it out. The AI-generated image doesn't allow me to select specific appearances for characters unless explicitly requested. If you'd prefer a different portrayal for the runner, I can adjust the prompt to ensure it's more inclusive or neutral. Would you like me to do that?"


And now, I realize a few things are happening to my understanding of AI. ChatGPT is learning about me, from my reactions to the content it creates for me. I am learning about ChatGPT, and I am also learning about myself.


I know why I am pissed off that the meme features a black male as the runner in the lead. Why was I not pissed about the white male student using AI to write his paper?


And - what might ChatGPT be suggesting about class disparities in access to AI technologies that I should be noticing?


Still, yuck.


I am still working on an answer to this, and part of me is still stuck in the 80s. If you've never seen WarGames, you probably should. It can definitely provide some insight into the things I imagine a bunch of us Gen Xers are worried about.


I was excited to see that Purdue has a blog post about the film! That's worth the read as well.




I anticipate coming back to the class disparity issue in a future post. I use the free version of ChatGPT, so I can't spend all day on it. But the more I tinker with it, the more I am coming to understand one of the aspects of AI that I personally appreciate: when I ask AI to create something for me, I have to give it as much information as I possibly can to get as close as possible to the product I want to see.


The process of describing what I need affirms the stuff that I know, and it helps me identify the stuff I still need to learn. Last night, I asked ChatGPT to help me design a shirt for my HS Varsity Girls Softball team. We are being inducted into our HS Athletic Hall of Fame later this year, and I want to make a commemorative jersey.



I couldn't have done it better myself....

The jersey kind of confirms what I already know: my mind is pretty cluttered. But having this AI tool is helping me see that there is a spot for the random stuff in my brain, and when I can clear a little space by emptying out the clutter so that it is right in front of me, I have room to generate new ideas too.


 
 
 

2 Comments

Jason P.
Feb 08

Very cool read. Your hesitations and discomfort are warranted. Here's a weird question: since ChatGPT and the other models perpetuate the bias encoded in their training data, but your use of ChatGPT (as in the case of you pointing out the bias in the meme) actually can train the models to perform/behave differently, then do you (or I) have some kind of responsibility to use them in this fashion to help eliminate/counteract the bias?

Laura
Feb 19
Replying to Jason P.

Gosh. On some level yes, but perhaps there should also be a little more added to the disclaimer acknowledging (at this point on the AI timeline) both the likelihood of responses reflecting biases of users as well as the vital role that users play in reducing biased responses.


I guess the more I think about this, the more I am inclined to say yes - it is our responsibility to participate in training the models in the same ways as it is our responsibility to point out biases, inconsistencies, misinformation, and even typos to the publishers of the texts we use as teachers.


Given the opportunity, I would love to do some AI DEI training.


Thanks for asking that!!
