Vanderbilt University had to apologize after an email it sent to students in the wake of last month's mass shooting at Michigan State University revealed it had been written using the ChatGPT chatbot.
It's not that the email was particularly inappropriate or coldhearted; the university simply seemed not to have considered the optics of using a chatbot to write such sensitive communication.
The message itself read like a fairly standard boilerplate response to a tragedy:
"In the wake of the Michigan shootings, let us come together as a community to reaffirm our commitment to caring for one another and promoting a culture of inclusivity on our campus."
"By doing so, we can honor the victims of this tragedy and work towards a safer, more compassionate future for all."
While this seems like a pretty standard and acceptable message to send out in the wake of a tragedy, the last line of the email undermined it entirely:
"Paraphrase from OpenAI’s ChatGPT language model, personal communication, February 15, 2023."
People couldn't believe the school had used an AI chatbot to write such important communication.
"@scalzi Basic emails (like the ones that don't require critical thinking and are just routine) are the only things I think current ChatGPT stuff should be used for BUT NOT *THIS* TYPE OF EMAIL 😭😭😭😭"
"@washingtonpost That's shameful of Vanderbilt. Wow"
"@scalzi I appreciate well-placed laziness and automation, but this is just.... I'm not even sure I have the right word for this."
"@scalzi If you thought the whole "thoughts and prayers" routine felt … rote before? Vanderbilt just lowered the bar even further. Amazing."
"ChatGPT can be a useful writing tool, but it should not replace PR experts. Public Relations is an incredibly important field that involves a lot of time, work, and nuance. PR isn't a luxury; it's a necessity for businesses. https://t.co/OXWL3vZeNU" — Sabrina Ram
Others just couldn't believe nobody thought to remove the line saying it was paraphrased from ChatGPT.
"@kerri_tobin @scalzi I was like, "Well, how did people know it was ChatGPT?" I was thinking it had some weird error, but no, they literally included that citation 😭"
"@Steve_Mang @scalzi That's what they were counting on, and it highlights some of the problems with using AI."
"@scalzi The cynic in me thinks 2 things: they're only sad they didn't catch the citation to remove it before sending the email and such statements are rote so there's no significant difference in person/machine for the average person to notice/care."
The university quickly responded to the criticism with an apology that called the decision to use ChatGPT for the message "poor judgement."
Nicole Joseph, Assistant Dean for Equity, Diversity, and Inclusion at Vanderbilt, sent a follow-up email to explain how such an obvious mistake was made in the first place.
Laith Kayat, a Vanderbilt student in his senior year whose sister attends Michigan State, called the EDI department's use of the chatbot for such important and sensitive communication "disgusting."
Fellow student Bethany Stauffer agreed, telling Vanderbilt's student newspaper the Vanderbilt Hustler:
"There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself."
Kayat challenged the school's administration to be better.
"Deans, provosts, and the chancellor: Do more. Do anything. And lead us into a better future with genuine, human empathy, not a robot. [Administrators] only care about perception and their institutional politics of saving face."
With ChatGPT and similar machine-learning text-prediction tools growing more sophisticated and more widespread, incidents like this are likely to become more common.
It is probably time for a conversation about when it is and isn't appropriate to use machine-generated text in communications.