
Best Practices for Copilot/LLM Prompting

Why It Matters 

The quality of your AI output depends heavily on how you craft your prompts. Think of prompting as "programming with words": clear, structured input leads to better results in fewer attempts.

UVM-Specific Tips

  • Respect Data Policies: Use your UVM account (look for the green shield icon to confirm Enterprise Data Protection is enabled). 
  • Verify Outputs: Always fact-check AI responses before sharing. 
  • Understand Limitations: LLMs may hallucinate; use them as assistants, not authorities. 
  • Prompt Gallery: Save and reuse effective prompts via Copilot’s Prompt Gallery feature for consistency. 

Core Principles

Provide Context 

  • Explain the background, audience, and purpose. 
  • Example: “Write a 200-word summary of this report for a non-technical audience.” 

Be Specific 

  • Include details like tone, format, and constraints. 
  • Example: “Generate five social media captions under 100 characters, using a friendly tone.” 

Use Step-by-Step Instructions 

  • Break complex tasks into smaller steps. 
  • Example: “First outline the key points, then draft a summary.” 

Iterate and Refine 

  • Start broad, then refine based on the AI’s response. 
  • Use follow-up prompts to clarify or adjust. 

Leverage Examples 

  • Provide sample outputs or templates. 
  • Example: “Here’s an example of the tone I want: [insert example].” 

Set Role or Perspective 

  • Assign the AI a role for better alignment. 
  • Example: “Act as a cybersecurity analyst and explain the risks of weak passwords.” 
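The core principles above can be combined into a single structured prompt. The helper below is a hypothetical illustration of that structure (it is not part of any Copilot feature or API); the function name and parameters are our own, chosen to mirror the principles.

```python
def build_prompt(role, context, task, constraints, example=None):
    """Assemble a structured prompt from the core principles.

    role        -- perspective for the AI ("Set Role or Perspective")
    context     -- background, audience, and purpose ("Provide Context")
    task        -- the specific request ("Be Specific")
    constraints -- tone, format, and length limits
    example     -- optional sample output ("Leverage Examples")
    """
    parts = [
        f"Act as {role}.",
        f"Context: {context}",
        f"Task: {task}",
        f"Constraints: {constraints}",
    ]
    if example:
        parts.append(f"Here is an example of the tone I want: {example}")
    return "\n".join(parts)

# Example using the prompts from this article:
prompt = build_prompt(
    role="a cybersecurity analyst",
    context="a report summary for a non-technical campus audience",
    task="explain the risks of weak passwords",
    constraints="about 200 words, friendly tone",
)
print(prompt)
```

Pasting the assembled text into Copilot applies all of the principles at once; you can then iterate and refine with follow-up prompts as described above.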

Common Pitfalls

  • Vague prompts such as "Write something about AI," which lead to generic results. 
  • Overloading with multiple unrelated tasks in one prompt. 
  • Ignoring iterative refinement. 

Updated on September 30, 2025
