Francesca Naretto
Ph.D. student in Data Science

Francesca Naretto is a third-year Ph.D. student in Data Science, under the supervision of Prof. Fosca Giannotti and Prof. Anna Monreale. She works on eXplainable AI within the ERC XAI group ("Science and technology for the eXplanation of AI decision making"), led by Prof. Fosca Giannotti. She is also part of the EU H2020 project SoBigData++ and of the TAILOR project.

She graduated in Computer Science (Bachelor's degree from the University of Turin, Master's degree from the University of Pisa). During her Master's, she won a scholarship to carry out her thesis work abroad at University College London. Her thesis proposed a framework for privacy risk prediction and explanation tailored to sequence data.

Her Master's thesis received the ETIC Award 2019-2020 (District Award 2031, Rotary International), in recognition of its promising results on ethical issues, including data privacy and the right to an explanation.

Her primary research interest is Ethical AI, with a particular focus on Data Privacy and Explainable AI. These two ethical values are essential: achieving both may enable the definition of a trustworthy, ethical AI. However, they impose different requirements, so the two goals exhibit both synergies and tensions. Her Ph.D. addresses exactly these problems.

In the context of her project, she published EXPERT, a framework for predicting the privacy risk of a user's data and pairing each prediction with a local explanation. The framework targets tabular and sequential data and exploits state-of-the-art machine learning models for privacy risk prediction, such as LSTMs, ROCKET, InceptionTime and gcForest. For the local explanations, it relies on LIME, LORE and SHAP.
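EXPERT itself is not reproduced here; the sketch below only illustrates the general predict-then-explain pattern the paragraph describes, pairing a privacy-risk classifier with a LIME local explanation. The synthetic data, feature names and choice of a random forest are illustrative assumptions, not details of the actual framework.

```python
# Minimal sketch of the predict-then-explain pattern: a classifier estimates
# a user's privacy risk, then a local explainer (LIME) attributes that single
# prediction to individual features. Data and model are illustrative only.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic per-user tabular features (e.g., aggregate mobility statistics);
# label 1 means the user was re-identified by a simulated privacy attack.
feature_names = ["n_visits", "n_locations", "radius", "entropy", "home_freq"]
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 3] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: privacy risk prediction.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
risk = model.predict_proba(X_test[:1])[0, 1]

# Step 2: local explanation of that single prediction.
explainer = LimeTabularExplainer(
    X_train, mode="classification", feature_names=feature_names
)
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(f"predicted privacy risk: {risk:.2f}")
print(explanation.as_list())  # per-feature contributions to this prediction
```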

In this context, she also defined and empirically tested a new privacy attack on text data, based on the psychometric profile extracted from the text. This new attack made it possible to further explore the behavior of EXPERT with different kinds of data and privacy attacks.

She then proposed HOLDA, a new hierarchical federated learning approach for cross-silo settings, whose goal is to maximize the generalization capabilities of the trained machine learning models. Lastly, she also worked on a survey of Explainable AI methods, including a benchmark of the most popular XAI methods available in Python.
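HOLDA's actual protocol is beyond a short snippet, but the hierarchical aggregation idea underlying cross-silo federated learning can be sketched briefly. Everything below (the two-level hierarchy, the size-weighted averaging rule, the names) is an illustrative assumption, not the algorithm from the paper.

```python
# Minimal sketch of two-level hierarchical federated averaging: client models
# are first aggregated within each silo (e.g., each institution), then the
# silo-level models are aggregated into a global one. Illustration only.
import numpy as np

def fedavg(weights, sizes):
    """Average parameter vectors, weighted by local training-set size."""
    return np.average(np.stack(weights), axis=0, weights=np.asarray(sizes, float))

rng = np.random.default_rng(0)

# Two silos, each with several clients holding a locally trained model
# (represented here as flat parameter vectors of length 4).
silo_a = [rng.normal(size=4) for _ in range(3)]
silo_b = [rng.normal(size=4) for _ in range(2)]
sizes_a, sizes_b = [100, 200, 50], [300, 150]

# Level 1: aggregate within each silo.
agg_a = fedavg(silo_a, sizes_a)
agg_b = fedavg(silo_b, sizes_b)

# Level 2: the top server aggregates the silo-level models.
global_model = fedavg([agg_a, agg_b], [sum(sizes_a), sum(sizes_b)])
print(global_model)
```

Raw data never leaves a client in this pattern; only model parameters travel up the hierarchy, which is what makes it attractive for privacy-sensitive cross-silo collaborations.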

During her Ph.D. she worked as a teaching assistant for a course in Data Mining (Master's Degree in Computer Science) and for an introductory Python programming course (Master in Big Data and Data Science).

This year, she also gave seminars at Scuola Normale Superiore on machine learning and Explainable AI, covering mainly clustering techniques, machine learning techniques and eXplainable AI methods from a practical point of view.

 
