Random Name


PhD student in Computer Science

INRIA Laboratory, LACODAM team
Aalborg University, Human-Centred Computing group


About Me

I am a sociable, meticulous student with an excellent ability to multi-task and a strong resistance to stress. I have skills in finance, public relations, and computer science.

I am 25 years old and currently in the third year of my Ph.D. in computer science at the University of Rennes 1 and the Inria laboratory. My thesis is about the Automatic Construction of Explanations for AI Models. My supervisors are Christine Largouët and Luis Galárraga from the LACODAM team.

Due to recent advances in AI, and in deep learning in particular, models are becoming more and more difficult to trust, since their success is often due to their high complexity. Relying on the answers of a black box can be an issue for technical, ethical, and legal reasons. The purpose of my Ph.D. is to extend current interpretability methods for machine learning in order to explain the inner mechanisms of complex models.

During the first part of my Ph.D., I studied the generation of post-hoc local explanations and proposed two frameworks to select a priori the best surrogate model for tabular and text data. I am now seeking to collaborate with international researchers from other laboratories or companies to study in more detail the impact of the surrogate model on human understandability. My projects involve integrating the user in the loop for interpretability (what I like to call Active Interpretability Learning) and conducting user studies to evaluate the impact of the chosen explanation model.

If you are interested in collaboration, don't hesitate to send an email!

Work experience

2022 -

  • PhD student visiting Niels van Berkel at Aalborg University in Denmark for a research collaboration
  • After developing methods such as APE to evaluate a priori whether a linear explanation is suitable to locally approximate a black-box model, I am currently visiting Associate Professor Niels van Berkel to conduct user studies assessing users' understanding depending on the explanation method.
    Our first step is to compare the understandability of linear, rule-based, and counterfactual explanations. In a second round, we will evaluate the impact of users' prior experience on their understanding.
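To illustrate the kind of linear surrogate this line of work reasons about, here is a minimal LIME-style sketch (the function name and parameters are my own illustration, not the APE implementation): perturb points around an instance, weight them by proximity, and fit a weighted linear model to the black box's answers.

```python
import numpy as np

def local_linear_surrogate(black_box, instance, n_samples=500, scale=0.5, seed=0):
    """Fit a weighted linear surrogate around `instance` (LIME-style sketch).

    black_box: callable taking an (n, d) array, returning n predictions.
    Returns (coefficients, intercept) of the local linear approximation.
    """
    rng = np.random.default_rng(seed)
    d = instance.size
    # sample perturbations in a neighbourhood of the instance
    X = instance + rng.normal(0.0, scale, size=(n_samples, d))
    y = black_box(X)
    # proximity weights: closer perturbations count more
    dists = np.linalg.norm(X - instance, axis=1)
    w = np.exp(-(dists ** 2) / (2 * scale ** 2))
    # weighted least squares via the sqrt-weight trick
    A = np.hstack([X, np.ones((n_samples, 1))])
    sw = np.sqrt(w)[:, None]
    theta, *_ = np.linalg.lstsq(A * sw, y[:, None] * sw, rcond=None)
    return theta[:-1, 0], theta[-1, 0]
```

On a black box that is itself linear near the instance, the surrogate recovers its coefficients; the interesting question, which methods like APE address, is when such a linear fit is faithful at all.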

2020 -

  • PhD student supervised by Christine Largouët and Luis Galárraga in the domain of interpretability
  • The recently approved GDPR contemplates the right of individuals to contest decisions made on their personal data, which encompasses decisions made by algorithms. Selecting the most suitable explanation for a use case requires taking into account criteria such as fidelity, complexity, scope, semantics, and the target user. This is far from trivial, because it requires knowledge of the guarantees of the different explanation methods, as well as an analysis of the context in which the explanation will be delivered.
    Such a task is time-consuming and unfeasible for non-technical users. We argue, however, that it can be automated.


  • Six-month internship in the LACODAM team, supervised by Luis Galárraga and Christine Largouët
  • I completed my research master's degree with a second internship in a research laboratory. This internship was part of the FABLE project (which led to my PhD thesis). The purpose was to address the question: "when are anchor-based explanations not a good idea?" I first studied Anchors, a local interpretability method based on decision rules introduced by Marco Tulio Ribeiro, the author of LIME. For tabular data, I discovered that the discretization method employed by LIME and Anchors impacts their fidelity to the black-box model, and I proposed a better discretization method to improve them.
    Furthermore, I extended the latent search space used by Anchors to generate textual explanations by incorporating pertinent negatives. This internship ended with the publication of our work at the international conference CIKM 2020.


  • In charge of communication for AGRUS, the residents' association of the University of Sherbrooke
  • I was in charge of communication for an association with more than 500 members. I created and promoted events, spreading the word on social networks and liaising with the people in charge. I also took part in organizing events such as a visit to a sugar shack and the Québec Winter Carnival.


  • Four-month internship in the LACODAM team, supervised by Luis Galárraga
  • I completed my university degree with an internship in a research laboratory. This was my first step into research, and I never looked back. During this internship, supervised by Luis Galárraga, I developed a program called REMI, and our work on the mining of referring expressions was published at the international conference EDBT 2020.
    A referring expression (RE) is a description that identifies a set of instances unambiguously. Mining REs from data finds applications in natural language generation, algorithmic journalism, and data maintenance. Since there may exist multiple REs for a given set of entities, it is common to focus on the most intuitive ones, i.e., the most concise and informative. REMI is a system that can mine intuitive REs on large RDF knowledge bases.
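To make the idea concrete, here is a toy brute-force sketch of RE mining over attribute-value facts (my own illustration, not REMI's algorithm, which is designed to scale to large RDF knowledge bases): find the shortest conjunction of facts that matches exactly the target entities.

```python
from itertools import combinations

def mine_re(facts, target):
    """Return a shortest conjunction of (attribute, value) pairs that
    describes exactly the entities in `target`, or None if none exists.

    facts: dict mapping entity -> {attribute: value}.
    """
    # candidate atomic descriptions, in a deterministic order
    pairs = sorted({(a, v) for props in facts.values() for a, v in props.items()})
    for size in range(1, len(pairs) + 1):
        for combo in combinations(pairs, size):
            # entities satisfying every atom of the conjunction
            matched = {e for e, props in facts.items()
                       if all(props.get(a) == v for a, v in combo)}
            if matched == set(target):
                return combo  # shortest found: unambiguous and concise
    return None
```

For example, with three people in a toy knowledge base, "role = prof" alone may already refer to one person unambiguously, while another needs the conjunction "city = Rennes and role = phd".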
  • Temporary worker (vacataire) at Les Champs Libres
  • For two years, I held a student job at the library of Rennes Métropole, which also houses the Museum of Brittany, the Espace des sciences, and a conference hall. I provided public service, organized and shelved the books, and was in charge of welcoming visitors.


Publications

    Julien Delaunay, Luis Galárraga, Christine Largouët. When Should We Use Linear Explanations? Full paper at the Conference on Information and Knowledge Management (CIKM 2022), Atlanta. [Code]

    Romaric Gaudel, Luis Galárraga, Julien Delaunay, Laurence Rozé, Vaishnavi Bhargava. s-LIME: Reconciling Locality and Fidelity in Linear Explanations. Intelligent Data Analysis (IDA 2022), Rennes. [Preprint]

    Julien Delaunay, Luis Galárraga, Christine Largouët. Improving Anchor-based Explanations. Poster at the Conference on Information and Knowledge Management (CIKM 2020), Galway. [Preprint] [Presentation] [Code]

    Luis Galárraga, Julien Delaunay, Jean-Louis Dessalles. REMI: Mining Intuitive Referring Expressions. International Conference on Extending Database Technology (EDBT/ICDT 2020), Copenhagen. [Technical report] [Full text] [Presentation] [Code]

    Rennes, France



Education

    PhD student in computer science,
    University of Rennes 1, France, 2020-

    Research master's degree in computer science,
    University of Rennes 1, France, 2019-2020

    Master's degree in computer science,
    University of Sherbrooke, Canada, 2018-2019

    MIAGE university degree (IT methods applied to business management),
    University of Rennes 1, France, 2015-2018

    Scientific and European baccalaureate,
    High School St Martin, Rennes, France, 2012-2015

    Technical skills

  • Python
  • Java
  • LaTeX
  • HTML, CSS
  • JavaScript, PHP
  • Android
  • SQL
    Languages

    Native speaker
    Professional level
    Basic skills
    Beginner level


Teaching

    Mentoring of Jacques Lacourt, a final-year trainee at Centrale Marseille.

    Organisation member

  • 2020 - 2022
  • Member of the team organizing the monthly seminars of the Data Knowledge Management department at Inria/Irisa Rennes.
  • 2020 - 2022
  • Member of the Centre Committee at Inria Rennes, where I represent College C.
  • 2018
  • In charge of communication for AGRUS, the residents' association of the University of Sherbrooke