Teaching AI as bullshit
As a religious studies scholar teaching interdisciplinary writing, I decided to include a unit on AI this semester. It taught me a lot about student perspectives on AI.
Teaching writing has been a new experience. This past semester has consistently reminded me of the high school English teachers who wrote me off as a “math and science guy.” While I have been teaching religion courses for nearly eight years, teaching writing is a different beast. And teaching writing in the age of generative AI feels like the Wild West.
When I tell people that I teach interdisciplinary writing courses, it is not uncommon to get follow-up questions about how AI is changing my work. People ask things like “How do you catch AI cheating?” or “How widely do college students use the technology?” Having taught at a variety of institutions over the past few years, and given how new generative AI still is, I find it difficult to give a concrete answer.
While I know plenty of faculty who claim with overwhelming confidence that they can always tell when a paper has been AI-generated, in my experience it is often more difficult than that. Yes, there are times when papers are written in such general and superficial ways that you can tell they came from AI. Other times, AI papers just seem a bit off the mark.
Diagnosing why a paper seems a bit off can take time. Sometimes papers that at first appear to be AI-generated turn out, on closer reading, to be underdeveloped and superficial ideas in rather polished prose. The technology that claims to ‘catch’ AI is also still quite unreliable.
While cases of AI cheating have been rising across colleges and universities, I think many students are afraid of getting caught and unsure of how the technology works. Faculty and institutions more broadly are also still figuring out how to police and utilize this technology. I will be the first to admit that I am still figuring out how it works, how to catch and report misuse effectively, and how to address generative AI in the classroom.
Teaching AI
This semester, I made the choice to incorporate a unit on generative AI in my interdisciplinary writing course. While I recognize that there are serious ethical concerns surrounding platforms like ChatGPT, including intellectual property and copyright infringement as well as climate concerns, among others, I decided that, since students were already using these platforms, it was important for them to learn how they work.
We first read “Challenges and Opportunities of Generative AI for Higher Education as Explained by ChatGPT,” an article that attempts to provide a balanced perspective on the uses of AI in university settings. Drawing upon new materialist and object-oriented ontological ideas, the article uses the method of “thing ethnography” to interview ChatGPT about its own conceptions of the challenges and opportunities of generative AI for higher education.
The article offered the general overview you might expect an AI platform to give. On the opportunity side, AI platforms could provide personalized learning and 24/7 student support. On the challenge side, the very novelty of these platforms makes it difficult to police their abuse and leaves faculty and administrators with little understanding of how they work.
Students were skeptical of asking ChatGPT these questions in the first place, given the platform’s bias. They were torn as to how such a technology should be policed. Some questioned whether using generative AI could even be considered plagiarism. Others thought schools were not being strict enough in their punishments. Nearly all of my students felt that AI abuse was not beneficial to their learning.
Many shared stories from high school of a friend who had been accused of using AI when they had not. The stories conveyed a “damned if you do, damned if you don’t” feeling. Just as faculty are becoming paranoid about catching AI abuse, students are worried they might be falsely accused.
ChatGPT as bullshit
I additionally had my students read a recently published article in the journal Ethics and Information Technology entitled “ChatGPT is bullshit” to better understand how large language models (LLMs) work. The article draws on Harry Frankfurt’s understanding of “bullshit” as speech that has no regard for the truth. Unlike lying, which requires an intent to conceal the truth, bullshitting is simply unconcerned with truth.
LLMs, according to authors Hicks, Humphries, and Slater, are not concerned with truth but rather with the replication and production of “human-like text.” The power of LLMs is their ability to predict text. Because they merely predict human-like textual responses, the inaccuracies ChatGPT produces, the authors claim, should not be considered ‘hallucinations,’ as many have characterized them.
Calling their mistakes ‘hallucinations’ isn’t harmless: it lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived. This, as we’ve argued, is the wrong metaphor. The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination…they are not trying to convey information at all. They are bullshitting.
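To make this point concrete for students (and, frankly, for myself), it helps to see how little machinery “predicting plausible text” actually requires. The sketch below is a deliberately tiny next-word predictor in Python. It is nothing like the neural networks behind real LLMs, which predict subword tokens using billions of learned parameters, but it shares their basic objective: continue the text plausibly. Notice that nothing in the code represents truth; if the training text contains a false sentence, the model reproduces it as fluently as a true one.

    import random
    from collections import defaultdict

    # A toy training text. One sentence is false -- the model neither
    # knows nor cares, because it only tracks which word follows which.
    corpus = (
        "the moon orbits the earth . "
        "the earth orbits the sun . "
        "the sun orbits the earth ."
    ).split()

    # Record every word observed to follow each word.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start, length=8):
        """Extend `start` by repeatedly sampling an observed next word."""
        words = [start]
        for _ in range(length):
            options = follows.get(words[-1])
            if not options:
                break
            words.append(random.choice(options))
        return " ".join(words)

    print(generate("the"))
    # Might print: "the sun orbits the earth . the moon orbits"
    # Fluent, confident, and partly false. The model is not lying or
    # hallucinating; truth simply never enters the computation.

Scaling this idea up, with neural networks instead of counting and vast web corpora instead of three sentences, is (very loosely) what an LLM does, and the indifference to truth is the same.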
Students seemed to enjoy the class discussion, if for no other reason than that they had the opportunity to say “bullshit” in class. The analogy was also relatable for students, many of whom claimed that they themselves were good bullshitters.
Students on AI
Among the students who wrote final papers on the ethical concerns and challenges of AI for colleges and universities, I was struck by the diversity of opinions.
There were students who wrote passionate pleas for all-out bans on platforms like ChatGPT because they inhibit students from learning the fundamentals of particular disciplines and fields of study. Others argued for a balanced approach that would slowly introduce AI into college courses in ways that aid the learning of fundamental skills like critical thinking and communication. There were even students proposing the rapid integration of AI so that graduates could compete in a fraught global economy.
One student made a compelling case for AI as a tool to help level the playing field for students with certain disabilities.
Many students acknowledged that they use AI for certain things: checking spelling and grammar, writing more formal emails, planning study schedules, and generating practice exam questions, all of which they found to be ethical uses of these platforms.
Students became more hesitant and leery about the ethics of using an AI platform to generate an outline or ideas for a paper. While some saw such usage as permissible, many others saw it as crossing, or at least approaching, a line.
I was struck by the level of disagreement among students about AI usage. In my experience, college students are often hesitant to voice controversial or unpopular opinions out loud in discussion, so hearing such a range of views was striking.
I am interested to see how students respond to this unit on generative AI next semester. As a humanist, this is not my area of expertise, so I find talking with students about the technology productive, if only to better understand how it works and how students think about it. The conversation seemed to give students some agency and ownership in discussions where they are often the passive subjects of university and classroom policies. I hope it gives them some language to better understand what generative AI can and can’t, and should and shouldn’t, be used for.
Sources
Hicks, M. T., J. Humphries, and J. Slater. 2024. “ChatGPT is bullshit.” Ethics and Information Technology 26: 38. https://doi.org/10.1007/s10676-024-09775-5
Michel-Villarreal, Rosario, Eliseo Vilalta-Perdomo, David Ernesto Salinas-Navarro, Ricardo Thierry-Aguilera, and Flor Silvestre Gerardou. 2023. “Challenges and Opportunities of Generative AI for Higher Education as Explained by ChatGPT.” Education Sciences 13 (9): 856. https://doi.org/10.3390/educsci13090856
Please like, comment and share as you are inclined. If there is a topic that you would like me to write about or if you would like to collaborate, contact me at abgardner2@gmail.com