Teaching and Writing in the Age of Artificial Intelligence

The rollout of the chatbot ChatGPT has raised many questions about writing and ethics. ChatGPT can generate and revise text on a wide array of topics, as well as solve math problems and write code. NYT writer Kevin Roose calls it “ominously good” at producing the sort of work school often requires. At UVM, reactions have varied. Some colleagues worry that technology like ChatGPT will enable students to skip learning opportunities, while others see possibilities for using ChatGPT to level the playing field through the explanations it can offer. Some of us have started using AI technology in our work; others are developing policies around it. At WID, we are committed to continuing these conversations as we learn from and with each other.

Example Syllabi Statements

The Impact of AI on Teaching Writing

AI and writing have a long history. Grammar checkers, like Grammarly, and summarizers, like those built into Microsoft Word, already offer students automated assistance with their work. Sociologist Tressie McMillan Cottom reminds us that new technologies frequently go through a “hype cycle” in which new developments provoke both promise and fear. Eventually, we adapt and react, finding ways to adjust our human activities to a new technological context. Math and language instructors have a lot to teach writing teachers about how their pedagogy has adapted to calculators and translation tools: rather than letting fear creep into our teaching, we can invite this technological moment to clarify our pedagogical aims and our communication with students.

Academic Integrity, AI, and Writing at UVM

For instructors considering how ChatGPT might affect their courses, the most relevant piece of UVM’s Code of Academic Integrity is that instructors need to communicate their expectations to students clearly and intentionally. Indeed, the Code acknowledges that the implementation of its principles may vary from course to course, and instructors must be clear about their expectations for the use of AI, as well as for other components of assignment content and writing processes. Key ways to help students understand your expectations:

  • Talk about ChatGPT: ask students what they know, and think with them, as appropriate, about whether and how ChatGPT might affect your course, their work, or the world beyond your course.
  • Put ChatGPT in context: connect your views on AI and writing with other expectations about how students can collaborate with or talk to others about a given assignment, and whether and how they should engage with data or external sources.
  • Include a clear statement about AI in your syllabus and/or assignments: Illustrative language from UVM’s Center for Student Conduct and other institutions can help you craft your own. 
  • Link your expectations to UVM’s Code: the Center for Student Conduct has helpfully broken down the parts of the Code that are most connected to these issues:
    • Plagiarism Standard
      • “All ideas, arguments, and phrases, submitted without attribution to other sources must be the creative product of the student.”
    • Cheating Standard
      •  “Students must adhere to the guidelines provided by their instructors for completing academic work.”
      •  “Students may only use materials approved by their instructor when completing an assignment or exam.”
      •  “Students may not claim as their own work any portion of academic work that was not completed by the student.”

Adapting Your Teaching in Light of AI

Depending on the ways you see ChatGPT intersecting with your course and assignments, you will have decisions to make about changes or additions to your assignments and policies. Let your own basic philosophies guide you. In particular:

  • Create clarity around your learning outcomes. For your course, your assignments, and your assessments, what do you want students to learn? And why is that learning relevant for their lives? Clear outcomes shape the context for your interactions with students, and the more you can help students see why you have the assignments you do, the more you create a learning environment that encourages students’ self-efficacy and metacognition. This reduces incentives to cheat and emphasizes student growth.
  • Scaffold your writing assignments with opportunities for students to talk with you and/or each other about their evolving work. When you give students specific directions about how to talk about work in progress, they can take responsibility for managing the work in light of your assignment and course goals. AI can produce text, but it can’t have the metacognitive conversations your students will have within your scaffolding. It also can’t create authentic reflections on a student’s process, so assigning reflections will both help your students assess their own progress and communicate your interest in their learning.
  • Evaluate how your assignments are specifically connected to your course and/or particular communities. The more your assignments ask students to make connections to specific components of your class, the more they are rooted in a context that only your students are experiencing. We at WID, in collaboration with the Center for Student Conduct, suggest a few ways to do this:
    • Design assignments that require students to produce work that is not solely in text format (e.g., presentation, video, podcast, artwork, or visual components). Utilize non-text sources and materials in assignment prompts, as ChatGPT cannot respond to or interpret images or video.
    • Ask students to cite sources, as ChatGPT currently cannot generate accurate citations. Be prepared to review citations to check for fabricated sources.
    • Incorporate current events (after 2021), for which ChatGPT does not have data, into assignments. Ask students to include their own personal experiences, make a local connection to UVM or Vermont in their response, or reference specific examples from class.
    • Consider using a platform such as Perusall to have students annotate articles and notes.
    • Incorporate peer review into your assignment cycles.
  • Play with ChatGPT to see how it handles prompts in your discipline or course (although remember, in doing so you are helping to train it). You’ll be able to evaluate some of the patterns you see, which will help you articulate your expectations for students. In general, AI isn’t good at producing accurate, nuanced responses. Your assignment explanations can help students understand the particulars of your expectations.
  • Explore different types and uses of AI and see how they fit in with your course and pedagogical practices. ChatGPT is not the only form of AI, nor is all AI strictly generative. WID has compiled a list of possible uses of AI writing tools for faculty members.
  • Keep your focus positive. Teaching to police errors and wrongdoing simply doesn’t emphasize student growth and learning. You already have experience teaching students about the ethics of your class and your discipline. If you let fears or tech panic about the possibility of AI push you into a position where your responses to students are focused first on the question of whether or not ChatGPT was involved, you will likely find yourself endlessly frustrated by unanswered questions. If you keep your focus on conversations with students that uncover their processes, their evolving questions, and their evolving thoughts, you can guide their growth.
  • Don’t expect a technological solution to the challenges of ChatGPT. While AI detectors do exist (ChatGPT’s creators have made one), they are not uniformly reliable, and, like other AI tools, they will only get better over time. Writing performance is a human activity, and looking to human interactions in your course is the best way to promote learning.

Broader Concerns About AI Tools

No AI tool is perfect, and there are technological limits to what ChatGPT and similar tools can accomplish.  The Center for Student Conduct notes:

  • ChatGPT is sometimes inaccurate and will fill in gaps with wrong information. When a prompt is vague or unclear, ChatGPT guesses the user’s meaning, which can lead to incorrect answers.
  • The tool was trained on information publicly available through the end of 2021 (for the current GPT-3 version). ChatGPT is unable to respond to prompts referencing events that occurred after 2021, although newer versions of ChatGPT (such as the one connected to the search engine Bing) will likely overcome this limitation soon.
  • It is inherently biased due to the data sources used to train it. Sources may be explicitly or implicitly discriminatory and ChatGPT will mirror that information.
  • Though OpenAI, the company that created ChatGPT, has attempted to remove all discriminatory and violent data to prevent harmful output, and it filters responses with its content moderation tool, there are reports of work-arounds and abuses. There are also reports of the exploitation of Kenyan laborers who were hired to remove violent, racist, and discriminatory data sources.
  • Unlike search engines or even Wikipedia, it is impossible to know the original sources of data. Information could be from a peer-reviewed journal article, a Reddit post, an outdated textbook, or a poorly researched paper.
  • ChatGPT currently is not able to cite sources accurately and may include fake but realistic article titles, journals, authors, and page numbers.

In addition, some privacy issues of note:

  • ChatGPT requires a login, which means users must provide personal information such as a name and email address.
  • In addition to personal contact information, ChatGPT’s Privacy Policy acknowledges that it collects usage data such as IP address, time of use, and type of device, and it explicitly states that OpenAI could provide user data to third parties.

Additional Resources


Looking for a teaching-writing resource that isn't here?  Have a question and can't find what you need?  Contact wid@uvm.edu - we are happy to help!
