Before you decide to take part in this study, it is important for you to understand why the research is being done and what it will involve. Please take time to read the following information carefully and discuss it with others if you wish. A member of the team can be contacted if there is anything that is not clear or if you would like more information. Take time to decide whether or not you wish to take part.
AI-generated deepfakes have been a major societal problem since the rise of deep learning technologies, and their threat has been further amplified in recent years by generative AI and large pretrained AI models. This research aims to investigate whether simple and quick training can effectively improve people's performance in detecting deepfake facial images. The study will be completed by summer 2026.
You are invited to participate in this study because you meet the inclusion criteria:
These criteria are necessary for full and effective participation in the activities designed for the study.
Taking part is entirely voluntary. You are free to decide whether or not to take part. If you do decide to take part, you will be asked to give your consent. You are still free to withdraw at any time after giving your consent and without giving a reason. A decision to withdraw, or a decision not to take part, will not affect you in any way. Refusal or withdrawal will involve no penalty or loss, now or in the future.
If you agree to take part, you will complete an online survey through a dedicated website, which will proceed as follows:
Survey (30–45 minutes):
There are no lifestyle restrictions as a result of participating.
The disadvantages and/or risks in taking part are considered low and comparable to those encountered in everyday life.
By taking part in the study, you will learn more about deepfake facial images and how to better distinguish them from real ones. Such skills can help you avoid becoming a victim of online harms generated by deepfake facial images.
You will be compensated for the time you spend on the study. In addition, if you are among the top performers, you will receive a cash prize.
Beyond the benefits for yourself, your participation will help the researchers understand the effectiveness of the training on deepfake detection, which can benefit a wider group of people and organisations. For instance, the findings from the research will help guide the creation of training programmes and public awareness campaigns on digital literacy, and help people more effectively detect deceptive online content.
You will be asked to give permission to allow restricted access to the information collected in the course of the study, which will be kept strictly confidential. All data will be identified only by a code, with personal details kept in a locked file or on a secure computer accessible only to the immediate research team.
To comply with data protection legislation, the University of Cambridge provides general information about how personal data is used: Research Participant Data Policy.
The study is designed to be anonymous and will not collect any personally identifiable or sensitive data. Anonymous data will be collected via the testing website, initially stored on its server based in the UK, and then transferred to an online data portal accessible only to the research team. Data will also be securely backed up on a secure server at the University of Cambridge.
The results of the research are expected to be published, presented and communicated in venues such as scientific journals and conferences, preprint servers, talks, and the websites of members of the research team. Results are normally presented in terms of groups of individuals. If any individual data are presented, they will be fully anonymised, with no means of identifying the individuals involved.
A copy of the published results and links to any future publications will be made available on the websites of members of the research team. As we are not collecting your contact information, we cannot inform you directly, so you are encouraged to check the websites in the future for updates.
The anonymous research data may be released as a public resource for researchers, educators, policymakers and other stakeholders to use. This is to ensure transparency and allow for further research.
The research is organised by a team of international researchers from the University of Cambridge (UK), Xinjiang University (China), Hunan University (China), the Max Planck Institute for Security and Privacy (Germany), the University of Luxembourg (Luxembourg) and the University of Kent (UK). The full research team includes the following researchers:
Each researcher is funded by their own institution to collaborate on the research, and the experiments are funded by Xinjiang University (China) and the Max Planck Institute for Security and Privacy (Germany).
The project has been reviewed by Cambridge Judge Business School’s ethics review group.
For questions about the research, please contact the researcher: Dr Luning Sun, University of Cambridge (l.sun@jbs.cam.ac.uk).
Thank you for taking the time to read this information sheet and for considering taking part in this research.