Is it cheating to use AI tools such as Bard, ChatGPT, and Claude.ai to help write your PA School personal statement?
Over the past year, Artificial Intelligence (AI) has gone from being a subject for science fiction movies to a tool that just about anyone can access.
Today, the AI tools that are of critical importance to aspiring PAs are “generative AI” chatbots; the most commonly known and used is OpenAI’s ChatGPT.
ChatGPT is officially defined as a “large language model-based chatbot developed by OpenAI … that enables users to refine and steer a conversation toward a desired length, format, style, level of detail, and language” (Wikipedia).
What this means in practical terms is that you can ask the chatbot to do things to help you with writing tasks, such as research topics, correct surface errors, or even make the tone of an email more polite.
With so much riding on written documents like personal statements and supplemental essays, aspiring PAs want to use all of their resources to make sure that their applications are as strong as possible. What some may not know, however, is that CASPA has added the following academic integrity affirmation component to its release statement:
CASPA’s statement on generative AI: practical and ethical considerations
"I certify that all written passages within my CASPA application, including but not limited to, personal statements, essays, and descriptions of work and education activities and events, are my own work, and have not been written, in whole or part, by any other person or any generative artificial intelligence platform, technology, system, or process, including but not limited to ChatGPT (collectively, "Generative AI"). I am strictly prohibited from using Generative AI to create, write and/or modify any content, in whole or part, submitted in CASPA and/or provided to PA programs on behalf through any means of communication. PAEA and PA Programs reserve the right to use platforms, technology, systems, and processes that detect content submitted in CASPA and/or processes that detect content submitted in CASPA and/or provided to PA programs that were created, written, and/or modified, in whole or part, through the use of Generative AI." - Source CASPA Release Statement
Anyone who has used the US Common Application to apply to colleges and universities has seen an affirmation statement like this. It asks applicants to affirm that what they’re submitting is their own work and that the content is true.
Following some recent high-profile cases of wealthy parents paying consultants to get their otherwise unqualified kids into prestigious universities, institutions of higher education are taking stronger positions on dishonesty. CASPA is clearly stating that the use of generative AI is currently considered academic dishonesty.
Here at The PA Life, our editors have received a number of questions from clients on the topic, including everything from “Is it okay to use Grammarly (affiliate link) to check for spelling errors?” to “Can readers tell if I’ve used ChatGPT?”
As a resource for aspiring PAs and practicing PAs, we at The PA Life take questions of professional integrity very seriously. Let’s start the conversation with some of our most frequently asked questions (FAQs).
Is it cheating to use an editor to help with my personal statement?
This is a topic of ongoing discussion among The PA Life’s leadership and our editors. We draw a clear line between editing and revising personal statements (and other documents) for clarity and “ghostwriting.” Our editors can help authors optimize organization, sentence structure, word choice, and style, but we do not “generate” content. If we feel that an essay is missing important information, we will share that observation with its author and make recommendations. The key to a strong personal statement is the word “personal.” If someone or something else writes your personal statement for you, it will be not only dishonest but also inauthentic, and readers have a good sense for authenticity in writing.
What about using The PA Life’s one-on-one service?
Our one-on-one service gives writers a chance to talk to the editors in real-time as they collaborate on personal statements. Editors might ask questions or suggest topics, but, again, the editors do not “generate” content.
Can I use tools like spellcheck or Grammarly to check my essay for errors?
At this time, most institutions of higher education consider these tools acceptable because they do not create content. Here, we are differentiating between content (what you mean) and expression (how you say it). Grammarly may recommend word choices that are close but not exactly correct (we see it from time to time), so proceed with caution and double-check any suggestions before you accept them.
Can readers tell if I have used ChatGPT or other generative AI for my personal statement?
The short answer is yes. Think of it this way: if you can ask a chatbot to write something for you, check it for errors, or make adjustments to it, you can also ask a chatbot to check whether an essay was generated by AI. There are also dedicated AI-detection tools built for exactly this purpose. Because we see a lot of personal statements, we noticed very quickly that AI-generated essays share some distinctive characteristics. Admissions committee readers read a lot of personal statements, too, so we can assume they make the same observations.
Can ChatGPT make mistakes?
The short answer to this is also yes. You may have heard the term “machine learning.” AI only knows what we, as humans, teach it. It learns from us, and, critically, it learns from the information floating around on the internet: the good, the bad, and the ugly. AI can reflect our biases and our mistakes. It has been known to generate text that is hateful and exclusionary, and text that outright plagiarizes the material used to train it. It also can’t “hear” the sound of language, so it misses some of the subtleties we hear in our heads when we read. Theoretically, these issues will improve over time, but for now, don’t assume that any chatbot is smarter than you or a better writer.
Is all generative AI like ChatGPT, Bard, or Claude.ai bad?
No, it’s not. We’ve seen the very valid concerns of writers and other artists who worry that they’ll be replaced by AI, and AI can be used to do some pretty bad stuff. However, with careful leadership and governance, generative AI, especially the language- and text-based applications, can be democratizing. For under-resourced and traditionally excluded communities, these tools can provide opportunities for inclusion. Non-native English speakers, for example, can use generative AI to express their concerns to wider audiences, especially regarding issues that disproportionately affect economically disadvantaged communities.
We can look at AI in the historical context of technological innovations. In the early days of X-ray technology, near the beginning of the twentieth century, people were fascinated by seeing people’s skeletons in real-time. These machines were used to do everything from measuring feet for shoe stores to entertaining partygoers, and there was a general assumption that X-rays were no more harmful than light. It didn’t take long, however, for researchers and users to start reporting skin burns and even cancers. X-ray technology was transformative; we can’t imagine modern medicine without it. But it wasn’t entirely harmless, and it took us a while to learn how to use it responsibly.
It’s not a bad idea to keep in mind the fact that, throughout human history, our ability to create new technologies has often outpaced our ability to reckon with their outcomes. We are witnessing the early days of AI tools, a technology with tremendous potential to change our world and to change us.
As healthcare providers, PAs should consider the complexities of AI and stay up to date on ethical concerns as we learn more about the topic. We at The PA Life welcome your questions and participation in this ongoing conversation.
CASPA has taken a firm stance prohibiting the use of AI tools like ChatGPT to generate any part of application materials, including personal statements. Doing so would be considered academic dishonesty. While editing tools are currently allowed, admissions committees can likely identify AI-written essays. Aspiring PAs should write authentic personal statements in their own words.
- CASPA prohibits using generative AI to create application content and considers it academic dishonesty.
- AI-driven editing tools like Grammarly are acceptable for checking errors but use suggestions cautiously.
- Admissions readers can likely identify AI-generated essays, and aspiring PAs should write personal statements authentically in their own words.
- Professional editors can help with clarity and style, but not writing content.
- AI can be powerful, but it's crucial to use it responsibly and ethically.
Note: Some of the links in this post are affiliate links. This means that if you click on a link and make a purchase, I may receive a commission. This helps me keep the lights on and continue creating content for you. I only recommend products that I personally use and love.