Thomas Souverain
Please tell us about your research project.
Currently in the third year of my PhD in Philosophy at the École Normale Supérieure de Paris – rue d'Ulm (ENS Ulm, Paris Sciences et Lettres University), I am working on the explanation and fairness of machine learning algorithms: "Can we Explain Artificial Intelligence (AI)? Technical Solutions, Ethical Issues in Financial Services".
To that end, my thesis is a three-year partnership between ENS Ulm and a company of data scientists that is training me (a CIFRE thesis with DreamQuark, Paris). Through my immersion at DreamQuark, I have acquired coding skills applied to the specific use cases of credit granting and wealth management.
My thesis draws on what AI really is in these use cases, in order to consider its ethical issues concretely. I pursue a two-pronged approach, thinking about explanation through the lenses of:
- Transparency: how could we get a more “human” way of reasoning?
Fortunately, designers themselves have raised the question of understanding the machine learning "black box", which, fast as it is, involves default settings and training processes that are hard for human minds to follow. Techniques of so-called "explainable AI" have been created to address this black-box issue.
One problem is that these techniques are often applied after AI models have been trained, e.g. reconstructing that the feature "age" accounted for 31% of the influence on a client's predicted loan decision. I therefore work on building algorithms that directly integrate business knowledge into models for financial services – e.g. specifying in advance that "age" must causally determine "years of work experience" in AI predictions, which is a technological challenge for current non-causal AI. Such direct insight into a "transparent" AI would tell us what the model actually does, not merely what it might have done.
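The post-hoc style of explanation mentioned above can be sketched with permutation importance: shuffling one feature at a time after the fact and measuring how much the predictions move. Everything here is illustrative, not DreamQuark's actual method: the loan scorer, its weights, and the feature names are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy loan-scoring "model": a fixed linear scorer over three features
# (age, income, experience). The weights are invented for illustration.
def loan_score(X):
    weights = np.array([0.31, 0.50, 0.19])
    return X @ weights

# Synthetic applicant data (rows: applicants, columns: features).
X = rng.normal(size=(1000, 3))

# Post-hoc permutation importance: shuffle one column at a time and
# measure the mean absolute change in the score. The explanation is
# computed AFTER training, purely from the model's behaviour.
def permutation_importance(model, X):
    importances = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])
        importances.append(np.mean(np.abs(model(X_perm) - model(X))))
    return np.array(importances)

imp = permutation_importance(loan_score, X)
shares = imp / imp.sum()  # normalise into percentage-style shares
print({name: round(s, 2) for name, s in zip(["age", "income", "experience"], shares)})
```

On this toy linear model the recovered shares track the hidden weights (so "age" comes out near 31%), but the point of the passage stands: the shares describe a trained model from the outside, rather than constraints built into it in advance.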
- Fairness: can transparent access to AI produce "fairer" outcomes?
My second research axis concerns the ethical stakes of financial-services use cases, which are less studied by ethicists. Indeed, to assess whether a model allocates loans "fairly" among individuals, we first have to build it to be intelligible. Knowing that W did not get access to her loan, that everything else being equal she would have been eligible as a man M, and that this gap can be counterbalanced, seems a prerequisite for aligning AI with users' values.
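The W-versus-M comparison above is a counterfactual test: hold every attribute fixed and flip only the protected one. A minimal sketch, with a hypothetical decision rule whose weight on the protected attribute deliberately encodes an unfair advantage (none of these numbers come from a real system):

```python
# Hypothetical loan model: approve when a linear score clears a
# threshold. The non-zero weight on `is_male` is the unfairness
# we want the counterfactual test to expose.
def approve(income, years_experience, is_male):
    score = 0.5 * income + 0.3 * years_experience + 0.4 * is_male
    return score >= 1.0

# Applicant W: identical profile, protected attribute flipped.
income, years_experience = 1.2, 1.0
as_woman = approve(income, years_experience, is_male=0)
as_man = approve(income, years_experience, is_male=1)

# The gap described in the text: W is refused, yet "everything else
# being equal as a man M" she would have been approved.
print(as_woman, as_man)
```

When the two outputs differ, the protected attribute alone changed the decision, which is exactly the kind of fact an intelligible model lets us state and then counterbalance.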
That is why I developed a package for unfairness detection and mitigation, with the intent both to delve into the data scientist's perspective and to introduce a fairness tool accessible to users without a technical background. A second part of my experiment is to test, as I did from the data scientist's point of view, whether an "intuitive" fairness tool increases the feeling of understanding and trust among lay users such as bank customers.
Could you please tell us a bit more about your stay at Oxford?
My visit to Oxford is motivated by my research on fairness in AI. I am staying at the Maison Française d'Oxford (MFO) until the end of Michaelmas Term 2022, and interactions with residents of the MFO and with leading researchers specialised in AI Ethics (Institute for Ethics in AI, Human Centred Computing, Oxford Internet Institute) are currently helping me consolidate my analysis of fairness assessment among non-expert users, starting with the loan-granting case.
First impressions of Oxford?
How charming is Oxford!
All the subtlety of Lord Henry, and the beauty of Dorian Gray, are present in its delicate Gothic Revival houses and in the excellence of its researchers.
Whether along the large, quiet avenues of the Maison Française's neighbourhood, dotted with manor houses, or across the Cherwell or the Thames, where I run in the wide meadows, nature is always present in this university town.
A passion for nature and a passion for the intellect can be joined here. What a surprise for a Parisian student like me! I really appreciate my stay, and the squirrels seem to enjoy it as much as I do…