Disclosing the use of ChatGPT in your writing – should you?

The office of Equity, Diversity and Inclusion (EDI) at Vanderbilt’s Peabody College of Education last week issued a statement addressing the recent mass shooting at Michigan State University, where a 43-year-old gunman shot students at random on February 13, 2023, killing three students and injuring five more.

Here is that statement, shared on February 15, 2023.

The school was eventually forced to apologize because of this disclosure, added at the end of the communication:

(Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023).

The official student newspaper of Vanderbilt University pointed to a factual error in the statement: it refers to the “recent Michigan shootings” in the plural even though only one shooting had occurred. It also pointed out that the statement “mentions ‘Peabody’ once and does not use any other Vanderbilt-specific terms”.

The newspaper also reported that several students categorically denounced the use of an AI text generator to craft the statement.

While I understand the sentiment behind the disgust, there is a far deeper implication here.

To explain that, allow me to consider two different scenarios.

Scenario 1: The disclosure was not added to the statement.

Imagine the office had not disclosed that it used ChatGPT to construct the sentences (based on a prompt, no doubt).

What would have happened?

The student newspaper, or anyone else for that matter, would not have known that ChatGPT was used. They could still have pointed out the factual error and the near-absence of Vanderbilt-specific terms, but not the disgust, which arrived only upon learning that ChatGPT was involved.

This is not very different from an everyday scenario most of us know: you wish someone a happy birthday. They feel really touched that you remembered. You don’t tell them that Facebook alerted you and that you did not actually remember their birthday.

If you disclose the Facebook alert, they would take back their appreciation and perhaps think poorly of you too. But they could also understand that this is very, very common, and perhaps the default way of remembering birthdays these days.

Another scenario: you are engaged in a Twitter conversation on some topic. You make some cogent, solid points. What would happen if you disclosed to the world that these points came from deep Google research? Either people may think you just passed off information from the internet as your own, or they would understand that mere information is available to everyone at their fingertips these days; how and where it is applied is the point.

For both these examples, we have become accustomed to such external help (Facebook alerting us to birthdays, and the internet being a bottomless treasure trove of all kinds of content). So it really doesn’t matter whether we disclose our sources or not; it depends entirely on the person. To a large extent, such external help is normalized, so one could argue you don’t need to disclose it anymore, much like using a calculator in a math test.

Scenario 2: There is a disclosure.

So, since a disclosure was added, shouldn’t the office of EDI be praised for its candor? Yes, against the backdrop of the disgust over the act of using ChatGPT, praising them seems awkward, but they did disclose the help they sought when they need not have.

A few years from now, when generative AI is fully accepted and normalized, it may not matter at all whether you disclose or not.

Yes, if you add a disclosure to a personal note of love or bereavement, for instance, you’d still be judged, just as the office of EDI is being judged here. But increasingly, you may be tempted, or even forced, to use help from generative AI simply because it is available.

It’s like having a choice between looking out the window and checking an app to know the weather. Looking out the window tells you the basics: sunny, cloudy and likely to rain, and so on. But the app would perhaps be slightly more accurate and cover a wider area.

Similarly, generative AI’s help would kickstart your thinking and get the task done faster, without your having to rack your brain from the very beginning. Even if you feel guilty in your heart for seeking such help with an intensely personal note (of any kind), that may just be an initial phase, while generative AI still seems very new. With time, it may become completely normal and normalized.

There are circumstances where disclosure would be mandatory, and not disclosing would carry some kind of penalty.

Exams, for instance.

If you are being tested on your memory, you obviously cannot depend on the internet. If you do, you would need to disclose how much help you sought.

If you are being tested on your articulation skills, you may need to explicitly disclose how you used generative AI’s support, so that the examiner can judge your skill either in articulation or in prompting to get the best draft to work from.


With regard to the statement from Peabody’s office of EDI, I wonder who made the call to add the disclosure. What were they thinking? Did they follow some recent instruction from the college on the use of generative AI indicating that disclosure is mandatory if it is used in exams, like the AI policy by Ethan Mollick, professor in the Entrepreneurship group of the Management Department at The Wharton School?

As Ethan wrote in a post: “But being a bad writer is a problem. Often, people who write badly are penalized, including in academia. Many people get around this in elaborate and risky ways. ChatGPT levels the playing field. Everyone can now produce credible writing. While this is a problem for writing classes (which will likely have to return to blue books and longhand essays), it isn’t in my particular context, so I am suggesting students use AI to ensure their writing is of high quality. In fact, I now expect to see only high-quality written work, since there is no longer an excuse.”

Even then, the office of EDI should have applied common sense and considered whether the disclosure was absolutely essential in this case. Simple common sense suggests it was not. They were not being tested on anything. It’s a statement. The thought and intent are what matter. And if they disclose that they outsourced part of that thought, it is bound to backfire, because we humans are intensely emotional, unlike ChatGPT, which can, at best, simulate emotion without really feeling it.

To put it even more eloquently, let me quote from a recent article in New York Magazine:

“large language models, or LLMs—the technology behind chatbots like ChatGPT—are built on statistics. They work by looking for patterns in huge troves of text and then using those patterns to guess what the next word in a string of words should be. They’re great at mimicry and bad at facts.

Why?

LLMs have no access to real-world, embodied referents. This makes LLMs beguiling, amoral, and the Platonic ideal of the bullshitter, as philosopher Harry Frankfurt, author of On Bullshit, defined the term.

Bullshitters, Frankfurt argued, are worse than liars. They don’t care whether something is true or false. They care only about rhetorical power — if a listener or reader is persuaded.”

Full disclosure: No AI language model or generative AI tool was used in any way to write even a single word of this post.
