
Privacy Audit Platform for Machine Learning Models in Medical Data Scenarios

BA, IS
State: Open
Published: 2024-09-03

Artificial intelligence (AI) has become a cornerstone of medical research and clinical practice, offering advanced capabilities for diagnosing diseases, predicting patient outcomes, and personalizing treatment plans. Federated learning (FL), a decentralized machine learning approach, has gained prominence in the medical domain because it lets multiple institutions collaboratively train models without centralizing sensitive patient data. This property is crucial for preserving patient privacy and complying with stringent regulations such as the Health Insurance Portability and Accountability Act (HIPAA). Despite these inherent privacy advantages, however, FL remains vulnerable to sophisticated privacy attacks such as membership inference attacks, which aim to infer whether a particular data point was included in the training set, thereby compromising patient confidentiality. The growing adoption of AI and FL in healthcare underscores the urgent need to address these concerns and to develop robust methods for safeguarding sensitive medical data.
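
To make the threat concrete, below is a minimal sketch of one common membership inference baseline, the confidence-threshold attack, which exploits the tendency of models to be more confident on their training data. The model, data loaders, and threshold value are illustrative placeholders, not components of any existing framework.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def confidence_scores(model, loader, device="cpu"):
    """Collect the model's top-class softmax confidence for every sample."""
    model.eval()
    scores = []
    for inputs, _ in loader:
        probs = F.softmax(model(inputs.to(device)), dim=1)
        scores.append(probs.max(dim=1).values.cpu())
    return torch.cat(scores)

def threshold_attack(model, member_loader, nonmember_loader, threshold=0.9):
    """Flag a sample as a training-set member when confidence >= threshold.

    Trained models tend to be more confident on data they have seen, so
    high top-class confidence is (weak) evidence of membership.
    """
    member = confidence_scores(model, member_loader)
    nonmember = confidence_scores(model, nonmember_loader)
    tpr = (member >= threshold).float().mean().item()     # members caught
    fpr = (nonmember >= threshold).float().mean().item()  # false alarms
    return tpr, fpr
```

Stronger attacks (e.g., shadow-model attacks) follow the same interface: assign each sample a membership score, then decide membership from that score.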

Goals for Developing a Privacy Attack Framework in Federated Learning:

1. Design and implement a comprehensive framework capable of simulating a variety of privacy attacks, including membership inference attacks, across both traditional and federated learning models.

2. Ensure the framework can handle heterogeneous medical datasets, incorporating both image and text data, to reflect real-world clinical scenarios (a multimodal data sketch follows this list).

3. Develop methods to evaluate the effectiveness and robustness of privacy attacks on models trained with varying degrees of data heterogeneity and model complexity (an evaluation sketch follows this list).

4. Integrate mechanisms within the framework to assess the impact of different federated learning configurations, such as client distribution and data sharing strategies, on the susceptibility to privacy attacks (a client-partitioning sketch follows this list).

5. Provide detailed documentation and analysis tools to support researchers in understanding the vulnerabilities of AI models in medical contexts and to aid in the development of more secure federated learning systems.
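
For goal 2, one plausible way to represent mixed image and text records is a paired PyTorch Dataset; the tokenizer interface, padding scheme, and field names below are assumptions for illustration only.

```python
import torch
from torch.utils.data import Dataset

class MultimodalRecords(Dataset):
    """Pairs an image tensor with tokenized clinical text for each record."""

    def __init__(self, images, reports, tokenizer, max_len=128):
        self.images = images        # list of CHW float tensors
        self.reports = reports      # list of strings (e.g., radiology notes)
        self.tokenizer = tokenizer  # any callable: str -> list[int]
        self.max_len = max_len

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        tokens = self.tokenizer(self.reports[idx])[: self.max_len]
        tokens = tokens + [0] * (self.max_len - len(tokens))  # pad to fixed length
        return self.images[idx], torch.tensor(tokens, dtype=torch.long)
```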
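
For goal 3, attack effectiveness is commonly summarized by treating the attack's membership scores as a binary classifier and reporting ROC AUC together with the true positive rate at a low false positive rate. A sketch using scikit-learn; the score arrays are assumed to come from an attack such as the threshold attack above.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def evaluate_attack(member_scores, nonmember_scores, target_fpr=0.01):
    """Treat membership scores as a binary classifier: member=1, non-member=0."""
    member_scores = np.asarray(member_scores, dtype=float)
    nonmember_scores = np.asarray(nonmember_scores, dtype=float)
    y_true = np.concatenate([np.ones(len(member_scores)),
                             np.zeros(len(nonmember_scores))])
    y_score = np.concatenate([member_scores, nonmember_scores])
    auc = roc_auc_score(y_true, y_score)
    fpr, tpr, _ = roc_curve(y_true, y_score)
    # TPR at a fixed low FPR shows whether the attack identifies *some*
    # members reliably, which is more privacy-relevant than average AUC.
    tpr_at_target = tpr[fpr <= target_fpr].max()
    return {"auc": auc, "tpr_at_low_fpr": tpr_at_target}
```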
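
For goal 4, client distribution is often simulated by partitioning a single dataset across clients with a Dirichlet prior over class labels; the helper below is a generic sketch rather than part of any specific FL library, and `alpha` controls the degree of heterogeneity.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
    """Split sample indices across clients with Dirichlet(alpha) label skew;
    small alpha yields highly non-IID clients."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.where(labels == cls)[0])
        # Fraction of this class assigned to each client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        splits = (np.cumsum(proportions) * len(cls_idx)).astype(int)[:-1]
        for client, part in enumerate(np.split(cls_idx, splits)):
            client_indices[client].extend(part.tolist())
    return client_indices
```

Large alpha approaches an IID split; small alpha (e.g., 0.1) concentrates each class on a few clients, letting the framework test how skewed client data affects susceptibility to attacks.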

20% Design, 70% Implementation, 10% Documentation
Python, PyTorch

Supervisor: Dr Alberto Huertas
