Guidelines for use of generative AI

The following Guidelines concern the use of generative AI, and specifically Large Language Models (LLMs) such as ChatGPT or Google Bard. These principles were developed based on the state of the technology around mid-2023. Keep in mind that generative AI is a rapidly developing field and that these Guidelines will therefore evolve over time.

September 2023

You are responsible for your use of generative AI

The responsibility for the use of the output of any generative AI, including LLMs, always lies with you, the user. It is your responsibility to ensure that the AI’s output is generated and used in alignment with Dutch and European law. Thus, if you share output that you generated using an AI in the context of your work, you are accountable for it. If you are not willing to take on this responsibility, you should not use generative AI in your work.

Reflect on ethical issues related to AI use 

When using AI, reflect on the ethical implications of your use of the AI, in both the short and the long term. For instance, when writing a piece of text with the help of AI, inform yourself of any specific guidelines of the journal, educational institution or committee you are writing for. Using generative AI to improve a text that you have written yourself could be acceptable, while relying on AI to create the text from scratch is not. In this example, the difference lies in who generates the ideas in the text: the AI or the user. Also consider to what degree you want to become reliant on AI in your work in the long run.

Ethics also relates to AI’s societal and environmental impact. Hidden bias in generative AI models is a potential concern: such models generate text that reflects their training data, which can be skewed towards data from high-income, English-speaking parts of the world. Underrepresentation of perspectives from other regions of the world could perpetuate biases pertaining to, for instance, race, language, and culture. Popular AI systems like ChatGPT, and the companies providing these technologies, have been criticised on several fronts, including the exploitation of cheap labour during the training process and the environmental impact of the data centres powering the AI. We encourage you to inform yourself about such criticisms and to reflect on whether, and if so how, they should influence your use of AI.

Never share non-anonymized information 

You must never share non-anonymized information relating to your work with a generative AI model. This includes information relating to you, your work, your colleagues, and other third parties that you interact with in the context of your work (e.g., students, research participants). Similarly, you must never share research data with a generative AI unless the appropriate Institutional Review Board (IRB) has explicitly approved this. If you do have IRB approval to share (parts of) your data with a generative AI model, opt out, where possible, of your data being saved and/or used to train future AI models.
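
As an illustration, a simple pre-processing step can strip the most obvious identifiers from a text before you paste it into an AI tool. The following is a minimal, hypothetical Python sketch based on regular expressions; it only catches easy patterns such as e-mail addresses and phone numbers, and it is no substitute for proper anonymization.

    import re

    # Hypothetical example: crude redaction of obvious identifiers
    # before text is shared with a generative AI model.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

    def redact(text: str) -> str:
        """Replace e-mail addresses and phone numbers with placeholders."""
        text = EMAIL.sub("[EMAIL]", text)
        text = PHONE.sub("[PHONE]", text)
        return text

    # Only the redacted text would be shared with the AI model.
    print(redact("Contact j.devries@example.nl or +31 6 1234 5678."))

Pattern-based redaction of this kind misses names and indirect identifiers, so careful manual review of anything you share remains necessary.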

Check the factuality of generative AI output 

Be aware that generative AI is known for “hallucinating”: it occasionally produces plausible-sounding statements that are factually wrong. Always check AI-generated output against external sources for factual inaccuracies before you rely on it or share it with other people. In general, avoid relying on AI to research or produce facts.

Invest in learning how to use generative AI effectively 

The quality of the instructions you give to a generative AI model (known as prompting) determines the usefulness of the output you can expect. There is thus a trade-off between the effort you put into writing a good prompt and the benefit of the output to your work. We encourage you to experiment with AI in your work so that you learn to write effective prompts efficiently; a brief illustration follows below.
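
As a hypothetical illustration (not an official template), the Python sketch below contrasts a vague prompt with a structured one that states a role, the task, constraints, and the desired output format; structured prompts of this kind tend to yield more useful output for the same request.

    # Hypothetical example of prompt structure; the exact wording is illustrative.
    vague_prompt = "Improve this text."

    # A structured prompt makes the role, task, constraints and
    # expected output explicit, which generally improves the result.
    structured_prompt = "\n".join([
        "Role: You are an editor of academic English.",
        "Task: Improve the clarity and grammar of the text below.",
        "Constraints: Keep the author's meaning and terminology; do not add new claims.",
        "Output: Return only the revised text, without commentary.",
        "Text:",
        "<paste anonymized text here>",
    ])

For more details about how to write prompts, you can check out the following resources: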