This event will be held on Zoom. To attend, please click here.
Giada Pistilli (Sorbonne Université & HuggingFace) will present her co-authored paper “Stronger Together: on the Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML”: https://doi.org/10.1145/3593013.3594002
David M. Berry (University of Sussex) will present his chapter "AI, ethics, and digital humanities": https://sussex.figshare.com/articles/chapter/AI_ethics_and_digital_humanities/23309129
Format: two 45-minute presentations (Q&A included), followed by 30 minutes of general discussion.
Convened by Éric Pardoux (ENS Lyon)
The digitalization of scholarship is affecting all areas of research in the humanities and beyond. Within this overall trend, Artificial Intelligence (AI) is probably one of the largest and most recent drivers of change. Although computational methods are now well established in research, novel AI techniques based on (un)supervised learning from datasets are calling into question the ways we conduct research in digitized worlds, both in terms of which and whose data we use and how we use it. The emergence of big data already raised several ethical questions regarding provenance and use; the computations and training of AI models performed on such datasets raise ethical questions of their own.
How do we report on and ensure the ethicality of such operations? How do we secure the reproducibility of research in a rapidly evolving context? What does it mean to work with tools not designed for research activities? Should we use tools whose exact design we do not know, such as models from private companies that do not disclose the data used to train them? Furthermore, what does it take for scholars to use AI or to collaborate with AI practitioners?
The publication of research is affected as well by the rise of generative AI systems able to produce texts or images, for instance. On the one hand, such tools can ease writing and editing; on the other hand, they make it possible to fabricate texts, data, and figures. Moreover, these tools are not socially or environmentally neutral, and we should weigh such long-term issues before adopting them at scale. Given all this, some have called for controlling the use of AI tools in research.
The main aim of this session is to highlight some of the issues of ethics, scientific integrity, and research governance raised by the introduction of AI systems, and to explore possible ways to address them. We intend to bring together scholars and practitioners who are taking action to mitigate these issues from different perspectives, through the evolution of tools, infrastructures, institutions, theories, and practices.