Using Generative AI in Academic Study: Ethics, Risks and Drawbacks

A guide to best practices for using Generative AI in your university study

🚩 Risks, Drawbacks and Ethical Considerations for Researchers

AI systems often produce biased, inaccurate, or fabricated information that can undermine the quality of academic work. There are also data storage and privacy concerns to consider, as well as major ethical issues, which often stem from training data taken without permission or drawn from problematic sources.

Additionally, over-relying on AI may prevent students from developing essential skills like critical thinking and original writing.

Bias, Misinformation and Discrimination

This interactive map has six connected stages for you to view and complete. Journey through three major challenges of using AI-generated content: read the information, take the quizzes and watch the short videos.

Plagiarism

Understanding Privacy Risks in AI Tools

Using Copilot at Waikato

Microsoft Copilot with GPT-5 provides a higher level of security for University data. When you sign in with your Waikato user account, it protects both user and institutional data; a green ‘Protected’ symbol in the top-right corner of your screen indicates that your data is secure.

Key security features include:

No data storage: Prompts and responses are not saved.

Privacy: Microsoft has no access to the content you generate.

No model training: Chat data is not used to train the underlying AI models.

No access to organisational data: Unlike Copilot for Microsoft 365, this version does not have access to any organisational data.

Taken from the University Tech Hub write-up on Microsoft Copilot, 2025