Skills for Learning – Guide to Using Generative AI

Please note: the University is currently preparing more specific policy regarding the use of generative AI. The following pages are intended as a general guide only; for more specific advice regarding your assignments, please speak with your module tutor.

This guide is designed as an introduction to generative AI within an academic context. Its purpose is to guide students through the following:

  • An overview of generative AI
  • A guide to appropriate use of generative AI in assignments
  • Skills for Learning’s (SfL) recommendations for using generative AI

Artificial Intelligence (AI) is “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings” (Copeland, 2023). AI can be used to perform a range of functions, with different AI programs increasingly being developed for specific purposes. In terms of academic work, the most frequent uses are to generate, structure or proofread text and to generate images.

Some of the most popular examples of generative AI tools are ChatGPT, Microsoft Copilot, Google Gemini, Grammarly and Canva. Most AI products come in different versions; if you are using a free version, you may be limited in what you can do. For example, the free version of ChatGPT does not allow image creation.

Though research is still emerging on exactly how students are using generative AI, current findings suggest the following:

- Students have some awareness of reliability issues with AI.

- In general, students do not critique the information/references generated by AI tools like ChatGPT (Kuvalja, 2023; Marris, 2023; The Learning Network, 2023).

Generative AI programs use Large Language Models (LLMs) to generate text. LLMs are trained on billions of words of ordinary language, from which they learn patterns, grammar and context. However, no one fully understands their inner workings, because they are not explicitly programmed in the way traditional software is. When generating text, an LLM simply predicts the next word in a sentence by considering the context, that is, the words that came before it (Copilot, 2024).

This means that AI has no understanding of the information it provides; it is simply predicting a response based on the data it has been trained on. The majority of that data comes from the internet, so any bias, prejudice or incorrect information in that data may be reproduced in the response.
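
To make the idea of next-word prediction concrete, the sketch below (in Python) shows the principle at its very simplest: it counts which words follow which in a small sample of text, then generates a sentence one word at a time by choosing each next word in proportion to those counts. This is only a toy illustration of the statistical idea described above, not how real LLMs are built; real models use neural networks trained on billions of words, but the key point is the same: each word is chosen from patterns in the data, with no understanding of what is being said.

    # Toy next-word predictor: an illustration of the principle only,
    # not how real Large Language Models are implemented.
    from collections import Counter, defaultdict
    import random

    # A tiny sample of "training" text (real models use billions of words).
    training_text = (
        "the cat sat on the mat the cat slept on the sofa "
        "the dog sat on the mat"
    ).split()

    # Count which word follows each word in the sample.
    following = defaultdict(Counter)
    for current_word, next_word in zip(training_text, training_text[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        # Choose the next word in proportion to how often it followed 'word'.
        counts = following[word]
        if not counts:
            return None
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # Generate a short "sentence" one predicted word at a time.
    sentence = ["the"]
    for _ in range(6):
        nxt = predict_next(sentence[-1])
        if nxt is None:
            break
        sentence.append(nxt)
    print(" ".join(sentence))

Given a much larger body of text, a program like this begins to produce fluent-sounding sentences, which is why AI output can appear confident and plausible even when the information in it is wrong.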

There have also been concerns raised regarding AI and intellectual property. When you use an AI chatbot, you are agreeing to its terms and conditions. These may allow the chatbot to use your work to further its own knowledge base, and could even lead to your work being shared with others. This may infringe your intellectual property rights.

When asking questions of AI, some AI programs will provide references to support their answers, while others may need prompting before references are given. For example, Copilot provides references as standard, but ChatGPT will not provide references unless asked. It should be noted that AI programs have been known to fabricate references, and their use of sources such as Wikipedia is common. Careful analysis of AI-generated answers is therefore very important. Use of information that is considered inappropriate or unacademic could result in you losing marks in your assignment.

It is very important not to rely on, or automatically accept, any information AI may provide. When researching for an assignment, whether AI is used or not, it is necessary to critically evaluate any information you wish to include. This is especially important when using AI.

When using generative AI for assignments, students must be cautious about using AI-generated text in their final submission. Because AI creates text on the basis of material it has been trained on, it has been argued that any text generated by AI is automatically plagiarism. It is therefore important that you do not include AI-generated text in your work. Content generated by AI may be detected as plagiarism by Turnitin (the Canvas submission tool): because AI draws on the sources in its training data, it sometimes copies material without giving a reference to the original source. If you copy AI output into your work, you could be seen as committing plagiarism.

Please note: In some cases, you may be permitted, or even required, to use AI in your assignment. Below, we have outlined a few examples of where using generative AI may be appropriate, as well as where it is not recommended. However, you must consult with your module tutor and assessment brief for confirmation of this.

 

May be permissible:
  • Explanation of ideas, theories, models etc., as an initial stage of information retrieval and examination
  • Generating a suggested structure for an assignment
  • Idea generation for an assignment

Check first:
  • Image creation
  • Proofreading
  • Translation

Should not be used:
  • Writing sections of your assignment
  • Finding sources of information without checking for validity
  • Providing references
  • Using AI to paraphrase texts without acknowledgement

If you would like to use generative AI to help write your assignment, here are our recommendations to ensure you maintain academic integrity:

- If AI suggests sources of information, you must check that all of these sources are both legitimate and reliable; apply critical analysis to every source of information.

- Be prepared to ask AI to refine its results for you. This is termed prompt engineering: asking follow-up questions to obtain more relevant results. To learn more about this, see this short video: https://www.linkedin.com/learning/introduction-to-prompt-engineering-for-generative-ai/joining-the-nlp-revolution

- If you have used AI to generate text that will form part of your assignment, you must paraphrase it and include references. Please note, we would not recommend using AI to generate text in this way. For guidance on paraphrasing, see this short video tutorial: https://wlv.cloud.panopto.eu/Panopto/Pages/Viewer.aspx?id=a6b7c881-ec92-49e0-ade9-abf101005c68.

- Build up your academic skills by signing up for our online workshops, or coming to one of our drop-ins for a chat. 

There are many different AI products available, some of which are designed for specific uses. Currently, the University provides all staff and students with access to Microsoft Copilot. If you sign in with your University of Wolverhampton username and password, you will have access to the protected version of Copilot. You should see the ‘Protected’ symbol in the top right-hand corner of the screen; this indicates that your data is protected.

Whilst not normally permitted, some assignments may allow the use of AI in generating content, and some assignments may even require it. If you have used AI for this purpose, you must acknowledge that use. To cite AI-generated content of any kind, use the name of the AI and the year it was used, for example (OpenAI ChatGPT, 2024).

If you are using Cite Them Right Harvard to reference your work, you would reference AI as follows:

Name of AI (Year of communication) Medium of communication Receiver of communication, Day/month of communication.

For example:

OpenAI ChatGPT (2024) ChatGPT response to John Smith, 4 May.

AI is referenced in a similar format to a conversation or personal communication. This is because most AI interactions are not replicable: when you ask AI a question, it generally gives an answer that is unique to that question, and if you asked the same question again, you would most likely receive a slightly different answer.

Because of this, it is important to save any AI generated answers you may be relying on, or may need to include as part of your work.

Referencing AI-generated images

If you have used AI to generate images for inclusion in your work, you must acknowledge this through correctly citing and referencing them.

When using Harvard to reference an AI-generated image, you would use the following format:

Creator (Year) Title of work [Medium]. Available at: URL (Accessed: date).

For example,

[AI-generated image: Dragonfly on a daffodil]

Co-pilot (2024) Dragonfly on a daffodil [Digital art]. Available at: https://www.bing.com/images/create/-242779103/2-66a4e7cbf25f4187adb1cbf999254c75?id=mY8a8FwgpyTuIB8eXVz4og%3d%3d&view=detailv2&idpp=genimg&idpclose=1&genimgbce=1&thId=OIGBCE4.DJOSig93ExO35wymD.i_&FORM=SYDBIC (Accessed: 27 July 2024).

References

Copeland, B.J. (2023) ‘Artificial intelligence’, Encyclopedia Britannica. Available at: https://www.britannica.com/technology/artificial-intelligence (Accessed: 14 August 2023).

Kuvalja, M. (2023) ‘The use of ChatGPT for content creation: A student perspective’, Cambridge Assessment, 29 November. Available at: https://www.cambridgeassessment.org.uk/blogs/the-use-of-chatgpt-for-content-creation/ (Accessed: 20 December 2023).

Marris, L. (2023) ‘How are students using ChatGPT in their studies and the graduate recruitment process?’, Cibyl, 25 May. Available at: https://www.cibyl.com/cibyl-insights/how-are-students-using-chatgpt-in-their-studies-and-the-graduate-recruitment-process (Accessed: 21 December 2023).

The Learning Network (2023) ‘What Students Are Saying About ChatGPT’, New York Times, 2 February. Available at: https://www.nytimes.com/2023/02/02/learning/students-chatgpt.html (Accessed: 21 December 2023).