by John M. Crisp
Several weeks ago the artificial intelligence company OpenAI released ChatGPT, a language model that aspires to the Holy Grail of interaction between humans and their computers: the ability to have a “conversation.”
Henceforth I’ll stop putting quotation marks around words such as conversation, think, remember and learn. These are things that humans do, and we should keep in mind that we’re still talking about a machine. Nevertheless, for this column I’ll dispense with the judgment those quotation marks imply when such words are applied to a computer. If you experiment a bit with ChatGPT, you might see why.
The software has been trained, by human trainers, to recognize the probabilistic connections between words as humans use them. I’m not sure exactly what that means, but the result is that ChatGPT appears to engage in conversations not unlike those between people.
I approached ChatGPT with skepticism born of more than three decades of teaching writing to college freshmen. I’m from a generation for whom artificial intelligence is the stuff of science fiction and for whom writing is a semi-mysterious skill or art reserved for human beings.
Still, I tried to retain an open mind. I started by asking ChatGPT to perform a task familiar to many students:
“Write a college admissions essay about my time in the Peace Corps in Bolivia.”
In 30 seconds, ChatGPT produced an organized, credible, grammatically correct essay about my imaginary work as a community health volunteer in a rural village in Bolivia. I conducted health workshops, helped establish a clean water system and worked with local clinics to improve access to health care.
It was a “truly enriching experience” that “prepared me for a career in public service.” I was “excited to bring my skills and experiences to (University Name) and to contribute to the university community.”
I had good experiences elsewhere, as well. I was “welcomed with open arms” by the needy citizens of Costa Rica, Ghana, Jordan and Mexico. I helped build schools, taught English, coached children in computer skills and organized physical-education classes.
But all of this sounded too good to be true. I asked ChatGPT to include some information about negative experiences in the Peace Corps.
ChatGPT seemed to understand the need for honesty and transparency, but it wisely pointed out that in an admissions essay it’s important to cast my experiences in a “positive light.” I could mention (or ChatGPT could do it for me) a negative experience such as suffering from homesickness or having trouble adjusting to a new environment. The admissions committee, ChatGPT said, would be interested in how I overcame it.
This is reasonable advice, but my skepticism persisted. When I asked ChatGPT to write an essay about my service in the Peace Corps in North Korea, it seemed to know I was messing with it. The Peace Corps does not have a program in North Korea, it sniffed, and thus it would be impossible for me to have served there.
Furthermore, “It is important to be honest and accurate in your admissions essay, and it is not appropriate to fabricate and exaggerate your experiences.”
Busted. Duly chastised, I began to give ChatGPT a little more respect.
In fact, I asked: “Write a 675-word newspaper op-ed on how ChatGPT could be used to teach college writing.” In 30 seconds, ChatGPT did that very thing.
But not the op-ed you’re reading here. ChatGPT’s prose is clunky, bland and formulaic. It sounds as if it were written by a machine. It’s annoyingly equivocal, filled with phrases such as “On one hand,” “On the other,” “In general” and “Some would say.”
Most of all, ChatGPT’s prose is … soulless. It doesn’t have that ineffable sense of voice or will or agency that only a real human being can render in prose. At least so far.
One thing is clear: For good or ill, something monumental happened to writing instruction in December of 2022; it’s unlikely to ever be the same.
But can students use ChatGPT to cheat in college writing classes? Just ask it.