Reference: Armasuisse S+T (CYD-C-2020003)
Source of funding: Armasuisse
Project Duration: 01.02.2024 – 30.11.2024
The main objective of the CyberMind project is to research, design, and implement cybersecurity frameworks that provide measures to protect AI-based systems and models against a range of emerging attacks. To achieve this goal, the following objectives are defined:
To advance the state of the art in adversarial attacks compromising the robustness and privacy of Decentralized Federated Learning (DFL). This will be achieved by thoroughly analyzing existing techniques and methodologies for adversarial attacks on DFL frameworks. Innovative poisoning and inference attacks on DFL will be developed, implemented, and evaluated by identifying and exploiting vulnerabilities in DFL.
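To illustrate the kind of poisoning attack studied here, the following is a minimal, hypothetical sketch (not the project's actual implementation): clients in a federated round share weight vectors that are averaged coordinate-wise, and a single malicious client amplifies its update to skew the aggregate. All function names and numeric values are illustrative assumptions.

```python
# Sketch of a model-poisoning attack against federated averaging.
# Each client contributes a weight vector; the aggregate is the
# coordinate-wise mean. One attacker boosts its update to drag the
# aggregate away from the honest consensus.

def fed_avg(updates):
    """Coordinate-wise mean of client weight vectors."""
    n = len(updates)
    return [sum(w[i] for w in updates) / n for i in range(len(updates[0]))]

def poison(update, boost=10.0):
    """Malicious client amplifies its local update (model poisoning)."""
    return [boost * w for w in update]

honest = [[0.10, 0.20], [0.12, 0.18], [0.11, 0.22]]
benign_avg = fed_avg(honest)                              # no attacker
attacked_avg = fed_avg(honest + [poison([0.5, -0.5])])    # one poisoned client

print("benign:", benign_avg)
print("attacked:", attacked_avg)
```

Even a single attacker shifts the mean arbitrarily far, which is why robust aggregation (e.g., trimming or median-based rules) is a standard countermeasure in this setting.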
To enhance the resilience of DFL systems against cyberattacks by designing and implementing a multilayer defense framework that leverages novel cybersecurity mechanisms. This requires a comprehensive threat analysis covering the entire lifecycle of a DFL system, identifying security threats at different layers, including the data, network, and model layers. To mitigate these vulnerabilities, defensive strategies such as Moving Target Defense (MTD) will be designed and implemented to counter cyberattacks targeting DFL systems across these layers.
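As a toy illustration of the MTD idea (a hypothetical sketch, not the project's mechanism): a defended node periodically re-randomizes an attack-surface parameter, here its listening port, so that an attacker's reconnaissance goes stale. A real DFL deployment would rotate richer parameters (endpoints, keys, model layout); the class and port range below are assumptions for illustration.

```python
# Minimal Moving Target Defense (MTD) sketch: periodically shuffle the
# node's listening port so previously scanned information becomes stale.
import random

class MtdPortShuffler:
    def __init__(self, port_range=(20000, 30000), seed=None):
        self.rng = random.Random(seed)
        self.low, self.high = port_range
        self.port = self.rng.randint(self.low, self.high)

    def rotate(self):
        """Pick a fresh port, guaranteed to differ from the current one."""
        new = self.port
        while new == self.port:
            new = self.rng.randint(self.low, self.high)
        self.port = new
        return new

node = MtdPortShuffler(seed=42)
old = node.port
new = node.rotate()  # attack surface moves; old scans are now stale
```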
To increase the trustworthiness of ML/DL/FL models by designing and implementing a framework that computes the reputation of participants training FL models and assesses the quality of the datasets used to train ML/DL models. This will build on an exhaustive analysis of existing approaches and techniques for reputation systems in distributed and AI-based scenarios. In addition, an analysis of prior work on data quality assessment will be critical for later proposing, designing, implementing, and validating novel solutions that improve the trustworthiness of AI.
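One common way to maintain such a participant reputation, shown here only as an assumed illustration and not as the project's design, is an exponentially weighted moving average (EWMA) over per-round behavior scores in [0, 1]; the smoothing factor and scores below are invented for the example.

```python
# Hypothetical reputation sketch: blend each round's behavior score
# (1.0 = fully honest, 0.0 = clearly malicious) into a running
# reputation via an exponentially weighted moving average.

def update_reputation(current, score, alpha=0.3):
    """Return the reputation after weighting in the latest score."""
    return (1 - alpha) * current + alpha * score

rep = 0.5  # neutral prior for a new participant
for score in [1.0, 1.0, 0.0, 1.0]:  # one suspicious round among honest ones
    rep = update_reputation(rep, score)

print("final reputation:", rep)
```

The choice of alpha trades responsiveness against stability: a larger alpha punishes a single bad round faster but is also easier for a strategic attacker to recover from.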
[Full Paper] Alberto Huertas Celdrán, Jan von der Assen, Chao Feng, Sandro Padovan, Gérôme Bovet, Burkhard Stiller: Next Generation of AI-based Ransomware, 2024 IEEE Global Communications Conference: Communication & Information Systems Security, Cape Town, South Africa, December, 2024 (To appear)
In this demonstration, ThreatFinderAI is used to model threats and identify countermeasures. In addition, residual risks are discussed based on business impact analysis and quantification. In the simplified scenario, a hypothetical company assesses the architecture of a digital customer care platform relying on Large Language Models (LLMs) and Retrieval Augmented Generation (RAG).
Inquiries may be directed to the local Swiss project management:
Prof. Dr. Burkhard Stiller,
Dr. Alberto Huertas Celdrán
University of Zürich, IFI
Binzmühlestrasse 14
CH-8050 Zürich
Switzerland
stiller@ifi.uzh.ch,
huertas@ifi.uzh.ch
Phone: +41 44 635 75 85
Fax: +41 44 635 68 09