Notice: This page will be continually updated as AI policies and guidelines for researchers and research staff are adopted.
The Use of Generative AI in Research and Innovation
Introduction
Broad adoption of generative AI tools across many domains has had a swift and often unclear impact on research and innovation. UVM provides a roadmap of resources for researchers who want to ethically engage with generative AI tools in our research and innovation ecosystem.
A Framework for AI Ethics
Developing a framework for generative AI ethics requires an understanding of how the technology works, including its inherent risks and the implications of its use, as well as close attention to emerging standards in research across disciplines. Here we provide a starting point for researchers who wish to understand how to embrace generative AI technologies while working ethically in this space.
There are myriad issues that researchers must consider when using generative AI tools. Literature benchmarking institutional approaches to ethical AI use in research reveals common issues that emerge across disciplines. Topics that researchers should consider include explainability and validity, transparency, privacy, data protection, safety and security, informed consent, piracy and copyright, plagiarism, and stakeholder perception and engagement, among others.
For example, if using a proprietary generative AI tool (such as ChatGPT), researchers should consider whether their methods could be replicated, given the tool’s black-box and evolving nature. Given generative AI tools’ tendency to produce false content (termed “hallucinations”), researchers should have procedures in place for checking and validating AI-generated output – this might include checks for factually incorrect information as well as biased representations of groups or perspectives. Researchers should also be aware that uploading content to a generative AI tool gives that data to a private company, which may sell it to third parties or use it for model training.
More broadly, there is growing dialogue on societal ethics regarding generative AI, including energy consumption and health impacts from data centers, the labor conditions of data workers used to train and refine large language models, and the use of copyrighted works in the training of most widely available large language models. Researchers who use generative AI tools should be familiar with the landscape of project-specific ethical considerations as well as broader societal considerations.
There are also common misconceptions about data protection and privacy that all researchers should be mindful of – for example, even if a person pays for a personal license for a generative AI tool like ChatGPT, Claude, or Gemini, they can inadvertently disclose intellectual property while using the tool for scientific writing (to explore this further, read the IP and data ownership section below).
Recommended reading:
A helpful exploration of these topics is described by Columbia University’s IRB: Understanding Artificial Intelligence with the IRB: Ethical Considerations and Advice for Responsible Research in the AI Era
Bouhouita-Guermech, S., Gogognon, P., & Bélisle-Pipon, J.-C. (2023). Specific challenges posed by artificial intelligence in research ethics. Frontiers in Artificial Intelligence, 6, 1149082. https://doi.org/10.3389/frai.2023.1149082
Resnik, D. B., & Hosseini, M. (2025). The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool. AI and Ethics, 5, 1499–1521. https://doi.org/10.1007/s43681-024-00493-8
Federal Research Compliance Considerations for the use of AI in Research
Some federal agencies have issued policies on the use of generative AI for research submissions. For example, the NIH recently announced “the NIH will not consider applications that are either substantially developed by AI, or contain sections substantially developed by AI, to be original ideas of applicants. If the detection of AI is identified post award, NIH may refer the matter to the Office of Research Integrity to determine whether there is research misconduct while simultaneously taking enforcement actions including but not limited to disallowing costs, withholding future awards, wholly or in part suspending the grant, and possible termination.”
Supporting Fairness and Originality in NIH Research Applications
Please consult with UVM’s Research Development team and Sponsored Projects Administration when preparing grant applications to be sure that you are compliant with current federal policies on the use of generative AI.
The use of AI in Human Subjects Protection
The IRB has an active policy on the use of AI in human subjects research, which can be found here.
IP and data ownership questions resulting from the use of Large Language Models and other forms of generative AI
Under current law, inventions and creative works discovered or created using generative AI may not be patentable or copyrightable. You may, however, be able to protect your own non-generative-AI contribution to an invention or creative work. We are closely monitoring developments in this area and will update this guidance if the law changes.
Be aware that inputting non-public or confidential information into an open generative AI tool may be considered a public disclosure, which can prevent you from obtaining a patent. In addition, confidential information or data belonging to UVM or a third party should not be input into these tools without first confirming that UVM has the right to use the information or data in this way.
Current Approved AI Tools for Researchers
UVM has an enterprise license for Microsoft Copilot.
The Vermont Advanced Computing Center (VACC) hosts local LLMs for research use that provide data protection in a closed environment. Learn more here.
The offices of the CIO and CTO are exploring new opportunities for AI tooling, and these resources will evolve in the coming months, including a new call for AI innovation pilots.
Research Protections Office's Institutional Review Board
Artificial Intelligence in Human Participant Research
With the emergence of artificial intelligence (AI), researchers have a unique opportunity to use AI in their own research studies. However, the current regulatory framework for human participant research does not address the use of artificial intelligence. Under the 2018 Common Rule, the definitions of human participant, private versus public information, secondary use, and identifiability are all challenged when AI is introduced.
Best practices are not yet in place, nor is there consensus on appropriate use. Given this, our guidance is subject to change as this novel technology and best practices for its use develop. The IRB is working on building capacity and expanding our competencies in the review of AI research, creating tools and consent language appropriate to the use of AI, and developing educational opportunities.
At this time, we request that you continue to submit your protocols as usual; the IRB will rely upon UVM faculty experts to assist with the technical aspects of our reviews. We will notify you when we adjust procedures to help you and the Committee ensure the highest level of protection for research participants.
Visit the Research Protections Office and the Institutional Review Board to learn more.
RPO IRB Policies and Procedures
Artificial intelligence (AI) is an ever-evolving field that presents new opportunities for research while amplifying risks to data privacy and confidentiality. This guidance covers the use of AI, machine learning, deep learning, and related techniques in research activities.