Do you know what a mountain chicken is? What would you imagine it to look like? If you’re thinking of Foghorn Leghorn, then you’re probably old enough to be my parent, or you took a shot in the dark because this little bundle of joy is what a mountain chicken is:
I say, I say, I say, I swear to a nondenominational God that this frog is called a mountain chicken. You can Google it if you don’t believe me.
This is an example of a misnomer: a name that misrepresents the thing it describes. Calling a frog a chicken is misleading, even if the reasoning is that the mountain chicken’s meat tastes like chicken.
Artificial intelligence is the trendiest misnomer sweeping the world. The term brings to mind The Terminator, HAL 9000, and robots taking over the Earth. In actuality, artificial intelligence is software that generates customized responses by storing information and parsing an answer out of that stored memory. Siri, Alexa, and Cortana are among the most useful versions of artificial intelligence, and they have severe limitations. For instance, an iPhone user can’t ask Siri to walk their dog. They either have to do that themselves or go to the Rover app and hire someone to do it. The AI can only answer from the responses it has been programmed to draw on.
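That “answer only from what it was programmed with” behavior can be sketched in a few lines of code. This is a deliberately toy illustration (the phrases and replies are made up, and real assistants are far more sophisticated), but it captures the core limitation: if a request isn’t in the playbook, the assistant has nothing to say.

```python
# Toy sketch of an assistant that can only answer from canned responses.
# The phrases and replies here are hypothetical, for illustration only.

CANNED_RESPONSES = {
    "what time is it": "It is 3:00 PM.",
    "set a timer": "Timer set for 10 minutes.",
    "play music": "Now playing your playlist.",
}

def assistant_reply(request: str) -> str:
    """Return a programmed answer, or admit defeat for anything unscripted."""
    key = request.lower().strip("?! .")  # normalize the request
    return CANNED_RESPONSES.get(key, "Sorry, I can't help with that.")

print(assistant_reply("What time is it?"))  # It is 3:00 PM.
print(assistant_reply("Walk my dog"))       # Sorry, I can't help with that.
```

Ask it anything outside its lookup table, like walking your dog, and it can only shrug.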
Can artificial intelligence replace college writing?
So, this brings up the question: can artificial intelligence replace college writing, or even education in general? Remember doing multiplication tables in the age of smartphones? It felt ridiculous to memorize that a number multiplied by nine gives a product whose digits sum to nine while we all walked around with calculators that could make phone calls in our pockets. Should writing be considered just as ridiculous? Did a person even write this, or did we tell our Zinkerz AI to write about itself in a way that wouldn’t scare you? What even is real anymore? They did surgery on a grape, but would you trust a robot to repair your torn ACL?
Recently, Drew Gooden (the YouTuber and creator of pictureofhotdog.com, not the former Cleveland Cavaliers player) published a video in which he used AI software to write his YouTube video. The premise of the program he used was simple: the user types a summary of what they want written. The AI’s developers trained it on a vast amount of text from the internet, and it uses that training to generate an article from the summary. So, the user could type in “What if the United States government moved the capital to Cleveland in 1849 so that they could control the gold rush while remaining in contact with the east coast?” and have the AI produce an essay of their desired word count.
Gooden ran into the issue that the AI is essentially useless unless given an immense amount of information in the prompt. So, for the program to write his video, he almost needed to write the video himself. Otherwise, the AI would repeat itself, word for word, from what it had written before. It is unaware of what it has already written and cannot connect ideas deliberately. All it can do is extend and expand upon what it has been given in the prompt, based on what it has read before. So, the software would, theoretically, be able to write a college paper in the sense that it’s capable of generating the number of words or pages required. But the paper would be repetitive, off-topic, and garbage. It would be lucky to receive an F as a grade.
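Gooden’s experience matches how simple statistical text generators behave. Here is a minimal sketch of the idea, a toy Markov chain that “reads” some text and then extends a prompt one word at a time. To be clear, this is not the software Gooden used, and GPT-3 is vastly more capable; the sketch only illustrates the mechanism of extending a prompt from previously seen text, and why a model with a tiny memory loops back over the same phrases.

```python
import random
from collections import defaultdict

def build_model(text: str) -> dict:
    """Map each word to the list of words that followed it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def extend_prompt(model: dict, prompt: str, length: int = 20, seed: int = 0) -> str:
    """Continue a prompt by repeatedly sampling a word that followed the last one."""
    rng = random.Random(seed)  # seeded so the output is reproducible
    words = prompt.split()
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:  # the model never saw this word: it has nothing to add
            break
        words.append(rng.choice(followers))
    return " ".join(words)

# A tiny hypothetical "internet" for the model to read.
corpus = "the capital moved to cleveland and the gold rush moved to cleveland"
model = build_model(corpus)
print(extend_prompt(model, "the capital", length=8, seed=1))
```

With so little training text, the continuation quickly circles back through the same handful of phrases, which is the small-scale version of the repetition Gooden ran into.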
The Chronicle of Higher Education recently published an article on an AI called GPT-3 (produced by OpenAI, the lab Elon Musk co-founded) that can write “decent papers on the cheap,” and it questions the point of attending a four-year university if anyone can achieve college writing through artificial intelligence.
Yes, AI could replace college writing.
However, the ramifications of this development would be disastrous for the world. For starters, what happens when the software malfunctions? The user wouldn’t know. Their goal is to hit a word count. They might not even read the essay before turning it in, so they could miss the inaccuracies and grammatical hodgepodge. And how would the computer scientists and engineers of the future create a solution? They spent their college careers having software write their answers for them. Their learning stopped at reading the question they were meant to answer. They never pondered a response because the AI would create one in seconds. So, now that they have the job of maintaining the AI software for future generations of students, they don’t know how to fix the code, and the education system is forced backward.
We can replicate this for any job. If you rely on an AI to answer everything in your college education, you’ll be a disaster at the job you were supposed to be trained for. You won’t be able to understand technical complexities or make rational, well-thought-out decisions on your own. Professionally, you’d be reduced to a body for the software to live through vicariously, much like a dog walker who, for that half-hour of walking, is reduced to being a poop carrier. The computer (like the dog) has been placed in charge, and the human has to put their preferences on hold. An English graduate wouldn’t know how to write a story. A chemist wouldn’t understand a Grignard reaction. A psychologist would struggle to differentiate between a client’s mind-reading and catastrophizing cognitive distortions and would incorrectly prescribe medication.
Having a computer write a paper gives a student more time to attend basketball games or play Xbox with their roommates. However, it takes away their ability to engage with the future they want. They say that cheaters never prosper, and, in this case, it’s true: life is hard, and students must work to get something out of their education.
I used the GPT-3 beta.
I wrote two scenarios, and these are their results:
We can see in this one that the AI could perform its job: it told my boss I could not work on Sunday because the Patriots were playing. The green text is what the AI generated; the un-highlighted text is the prompt I gave it. We can also see that it cannot do much more than what I gave it. This is a straightforward scenario, so let’s try a more complex idea.
The AI, again, did what it was supposed to do: it answered what would have happened. A historian may prove it wrong, but as someone unfamiliar with the history of 1840s Cleveland, Ohio, I am not in a position to argue. Still, the AI achieved little in its answer. All it could really do was rewrite the prompt from the viewpoint that moving the capital to Ohio would be a bad idea (which we already knew). By the way, all I had to do to access this was go to beta.openai.com/playground and sign up with an email address and phone number, so try it out yourself.
Artificial intelligence is an interesting case.
It can increase efficiency in some sectors of our world. It can easily compile information and relay that data throughout an organization or group of people. However, it cannot create constructive ideas the way a student can. Sorry to disappoint, but that ten-page paper you have due on Friday will have to be typed by you if you want to pass the assignment.
If you struggle to write papers or would like to improve your academic writing skills, especially in a course like AP English Language and Composition, consider signing up for classes with Zinkerz. You can customize your schedule with us and can even take classes on the weekends. The best part is that our students consistently improve their AP scores and become better prepared for their advancement into higher education.