LIVIVO - The Search Portal for Life Sciences

Search results

Results 1 - 10 of 139

  1. Book ; Online: Metamaterials and Metasurfaces

    Canet-Ferrer, Josep

    2019  

    Keywords Materials science ; graphene ; terahertz ; transmission ; liquid crystals ; antenna ; modal analysis
    Language English
    Size 1 electronic resource (202 pages)
    Publisher IntechOpen
    Document type Book ; Online
    Note English
    HBZ-ID HT030645858
    ISBN 9781838817350 ; 1838817352
    Database ZB MED Catalogue: Medicine, Health, Nutrition, Environment, Agriculture

  2. Article ; Online: The counterfactual framework in Jarmin et al. is not a measure of disclosure risk of respondents.

    Muralidhar, Krishnamurty / Ruggles, Steven / Domingo-Ferrer, Josep / Sánchez, David

    Proceedings of the National Academy of Sciences of the United States of America

    2024  Volume 121, Issue 11, Page(s) e2319484121

    Language English
    Publishing date 2024-03-05
    Publishing country United States
    Document type Letter
    ZDB-ID 209104-5
    ISSN (online) 1091-6490
    ISSN (print) 0027-8424
    DOI 10.1073/pnas.2319484121
    Database MEDical Literature Analysis and Retrieval System OnLINE

  3. Book ; Online: Database Reconstruction Is Not So Easy and Is Different from Reidentification

    Muralidhar, Krishnamurty / Domingo-Ferrer, Josep

    2023  

    Abstract In recent years, it has been claimed that releasing accurate statistical information on a database is likely to allow its complete reconstruction. Differential privacy has been suggested as the appropriate methodology to prevent these attacks. These claims have recently been taken very seriously by the U.S. Census Bureau and led it to adopt differential privacy for releasing U.S. Census data. This in turn has caused consternation among users of the Census data due to the lack of accuracy of the protected outputs, and has prompted legal action against the U.S. Department of Commerce. In this paper, we trace the origins of the claim that releasing information on a database automatically makes it vulnerable to exposure by reconstruction attacks, and we show that this claim is, in fact, incorrect. We also show that reconstruction can be averted by properly using traditional statistical disclosure control (SDC) techniques. We further show that the geographic level at which exact counts are released is even more relevant to protection than the actual SDC method employed. Finally, we caution against confusing reconstruction and reidentification: using the quality of reconstruction as a metric of reidentification results in exaggerated reidentification risk figures.

    Comment: Journal of Official Statistics (to appear)
    Keywords Computer Science - Cryptography and Security ; Computer Science - Databases ; 68P27 Privacy of data ; H.2 ; G.3
    Subject code 303
    Publishing date 2023-01-24
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

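    Record 3's central claim is that what enables reconstruction is the release of exact counts, not publication as such, and that traditional SDC averts it. A self-contained toy sketch of that point (the tiny database and brute-force solver are illustrative assumptions, not the paper's method):

    ```python
    # Toy illustration of record 3's point: exact released counts can pin down
    # a database uniquely, while a classical SDC step (rounding counts to base 3)
    # leaves many databases consistent with the release. The tiny data set and
    # brute-force search are illustrative, not the paper's actual experiments.
    from itertools import product

    TRUE_DB = [(0, 1), (1, 1), (1, 0), (1, 1)]  # 4 records, 2 binary attributes

    def counts(db):
        # Released statistics: how many records have attr0 = 1, attr1 = 1, both.
        return (sum(a for a, _ in db),
                sum(b for _, b in db),
                sum(a & b for a, b in db))

    def consistent(released, tolerance=0):
        # All databases (as multisets; record order is irrelevant) whose counts
        # lie within `tolerance` of the released figures.
        cands = set()
        for db in product([(0, 0), (0, 1), (1, 0), (1, 1)], repeat=len(TRUE_DB)):
            if all(abs(c - r) <= tolerance for c, r in zip(counts(db), released)):
                cands.add(tuple(sorted(db)))
        return cands

    exact = counts(TRUE_DB)
    rounded = tuple(3 * round(c / 3) for c in exact)  # moves each count by <= 1

    print(len(consistent(exact)))                 # 1: unique reconstruction
    print(len(consistent(rounded, tolerance=1)))  # 10: reconstruction is ambiguous
    ```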

  4. Article ; Online: Confidence-ranked reconstruction of census records from aggregate statistics fails to capture privacy risks and reidentifiability.

    Sánchez, David / Domingo-Ferrer, Josep / Muralidhar, Krishnamurty

    Proceedings of the National Academy of Sciences of the United States of America

    2023  Volume 120, Issue 18, Page(s) e2303890120

    Language English
    Publishing date 2023-04-24
    Publishing country United States
    Document type Letter
    ZDB-ID 209104-5
    ISSN (online) 1091-6490
    ISSN (print) 0027-8424
    DOI 10.1073/pnas.2303890120
    Database MEDical Literature Analysis and Retrieval System OnLINE

  5. Article ; Online: Enhanced Security and Privacy via Fragmented Federated Learning.

    Jebreel, Najeeb Moharram / Domingo-Ferrer, Josep / Blanco-Justicia, Alberto / Sanchez, David

    IEEE transactions on neural networks and learning systems

    2024  Volume 35, Issue 5, Page(s) 6703–6717

    Abstract In federated learning (FL), a set of participants share updates computed on their local data with an aggregator server that combines the updates into a global model. However, reconciling accuracy with privacy and security is a challenge for FL. On the one hand, good updates sent by honest participants may reveal their private local information, whereas poisoned updates sent by malicious participants may compromise the model's availability and/or integrity. On the other hand, enhancing privacy via update distortion damages accuracy, whereas doing so via update aggregation damages security because it does not allow the server to filter out individual poisoned updates. To tackle the accuracy-privacy-security conflict, we propose fragmented FL (FFL), in which participants randomly exchange and mix fragments of their updates before sending them to the server. To achieve privacy, we design a lightweight protocol that allows participants to privately exchange and mix encrypted fragments of their updates so that the server can neither obtain individual updates nor link them to their originators. To achieve security, we design a reputation-based defense tailored for FFL that builds trust in participants and their mixed updates based on the quality of the fragments they exchange and the mixed updates they send. Since the exchanged fragments' parameters keep their original coordinates and attackers can be neutralized, the server can correctly reconstruct a global model from the received mixed updates without accuracy loss. Experiments on four real data sets show that FFL can prevent semi-honest servers from mounting privacy attacks, can effectively counter poisoning attacks, and can preserve the accuracy of the global model.
    Language English
    Publishing date 2024-05-02
    Publishing country United States
    Document type Journal Article
    ISSN (online) 2162-2388
    DOI 10.1109/TNNLS.2022.3212627
    Database MEDical Literature Analysis and Retrieval System OnLINE

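    Record 5's key observation is that exchanged fragments keep their original coordinates, so averaging the mixed updates yields the same global model as averaging the original updates. A minimal numpy sketch of that invariant (plaintext fragments and simple pairwise swaps are simplifying assumptions; the paper exchanges encrypted fragments through a private protocol):

    ```python
    # Fragmented FL (FFL) invariant, per record 5: exchanging coordinate-aligned
    # fragments between participants only permutes values within each coordinate,
    # so the server-side average (the global model step) is unchanged.
    import numpy as np

    rng = np.random.default_rng(0)
    n_clients, dim = 6, 10
    updates = rng.normal(size=(n_clients, dim))    # one local update per client

    mixed = updates.copy()
    for _ in range(20):                            # repeated random fragment swaps
        i, j = rng.choice(n_clients, size=2, replace=False)
        frag = rng.random(dim) < 0.5               # fragment = random coordinate subset
        mixed[i, frag], mixed[j, frag] = mixed[j, frag].copy(), mixed[i, frag].copy()

    # Rows no longer correspond to individual clients, but the mean is intact:
    assert np.allclose(updates.mean(axis=0), mixed.mean(axis=0))
    print("global average preserved after fragment mixing")
    ```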

  6. Book ; Online: Bistochastic privacy

    Ruiz, Nicolas / Domingo-Ferrer, Josep

    2022  

    Abstract We introduce a new privacy model relying on bistochastic matrices, that is, matrices whose components are nonnegative and sum to 1 both row-wise and column-wise. This class of matrices is used both to define privacy guarantees and as a tool to apply protection to a data set. The bistochasticity assumption connects several fields of the privacy literature, including the two most popular models, k-anonymity and differential privacy. Moreover, it establishes a bridge with information theory, which simplifies the thorny issue of evaluating the utility of a protected data set. Bistochastic privacy also clarifies the trade-off between protection and utility by expressing both in bits, which can be viewed as a natural currency for comprehending and operationalizing this trade-off, in the same way that bits are used in information theory to capture uncertainty. A discussion of the suitable parameterization of bistochastic matrices to achieve the privacy guarantees of this new model is also provided.

    Comment: To be published in Lecture Notes in Artificial Intelligence, vol. 13408 (Modeling Decisions for Artificial Intelligence, 19th International Conference, MDAI 2022, Sant Cugat, Catalonia, August 30 - September 2, 2022)
    Keywords Computer Science - Cryptography and Security
    Subject code 303
    Publishing date 2022-07-08
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

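    Record 6 hinges on matrices that are row- and column-stochastic. A minimal sketch, assuming Sinkhorn-Knopp normalization (a standard construction, not necessarily the paper's parameterization) to build such a matrix and apply it as a randomized-response channel over categories:

    ```python
    # Build a bistochastic matrix with Sinkhorn-Knopp normalization, then use it
    # as a randomized-response channel: category i is reported as j with
    # probability P[i, j]. The construction is a standard one assumed here for
    # illustration; the paper discusses how to parameterize P so that its
    # privacy guarantees hold.
    import numpy as np

    rng = np.random.default_rng(1)

    def sinkhorn(m, iters=500):
        # Alternate row and column normalization; for a strictly positive start
        # this converges to a matrix whose rows and columns all sum to 1.
        for _ in range(iters):
            m = m / m.sum(axis=1, keepdims=True)
            m = m / m.sum(axis=0, keepdims=True)
        return m

    k = 4                                       # number of categories
    P = sinkhorn(rng.random((k, k)) + 0.1)
    P = P / P.sum(axis=1, keepdims=True)        # exact row sums for sampling
    assert np.allclose(P.sum(axis=0), 1, atol=1e-6)   # columns still sum to 1

    data = rng.integers(0, k, size=1000)        # original categorical attribute
    protected = np.array([rng.choice(k, p=P[x]) for x in data])

    # How far P is from the identity governs the protection/utility trade-off
    # that the paper proposes to measure in bits.
    print(np.bincount(data, minlength=k), np.bincount(protected, minlength=k))
    ```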

  7. Book ; Online: FL-Defender: Combating Targeted Attacks in Federated Learning

    Jebreel, Najeeb / Domingo-Ferrer, Josep

    2022  

    Abstract Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers. This makes it possible (i) to train more accurate models by learning from rich joint training data, and (ii) to improve privacy by not sharing the workers' local private data with others. However, the distributed nature of FL makes it vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model while, unfortunately, being difficult to detect. Existing defenses against those attacks are limited by assumptions on the workers' data distribution, may degrade the global model's performance on the main task, and/or are ill-suited to high-dimensional models. In this paper, we analyze targeted attacks against FL and find that the neurons in the last layer of a deep learning (DL) model that are related to the attacks exhibit a different behavior from the unrelated neurons, making the last-layer gradients valuable features for attack detection. Accordingly, we propose FL-Defender as a method to combat FL targeted attacks. It consists of (i) engineering more robust discriminative features by calculating the worker-wise angle similarity for the workers' last-layer gradients, (ii) compressing the resulting similarity vectors using PCA to reduce redundant information, and (iii) re-weighting the workers' updates based on their deviation from the centroid of the compressed similarity vectors. Experiments on three data sets with different DL model sizes and data distributions show the effectiveness of our method at defending against label-flipping and backdoor attacks. Compared to several state-of-the-art defenses, FL-Defender achieves the lowest attack success rates, maintains the performance of the global model on the main task, and causes minimal computational overhead on the server.
    Keywords Computer Science - Machine Learning ; Computer Science - Cryptography and Security
    Subject code 006
    Publishing date 2022-07-02
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

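    Record 7's abstract spells out a three-step pipeline: worker-wise angle similarity over last-layer gradients, PCA compression, and re-weighting by deviation from the centroid. A minimal numpy sketch of that pipeline; the gradient shapes, SVD-based PCA, and exponential weighting rule are illustrative assumptions:

    ```python
    # FL-Defender-style aggregation, sketched from record 7's abstract:
    # (i) angle (cosine) similarity of last-layer gradients, (ii) PCA compression
    # of the similarity vectors, (iii) re-weighting workers by deviation from the
    # centroid. Shapes and the weighting rule are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(2)
    n_workers, d = 10, 64
    base = rng.normal(size=d)                        # shared benign gradient direction
    grads = base + 0.4 * rng.normal(size=(n_workers, d))
    grads[0] = -base + 0.4 * rng.normal(size=d)      # worker 0 flips the direction

    # (i) each worker's feature = its vector of cosine similarities to all workers
    unit = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    sim = unit @ unit.T                              # (n_workers, n_workers)

    # (ii) PCA via SVD of the centered similarity vectors; keep 2 components
    centered = sim - sim.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    compressed = centered @ vt[:2].T                 # (n_workers, 2)

    # (iii) down-weight workers far from the centroid of the compressed vectors
    dist = np.linalg.norm(compressed - compressed.mean(axis=0), axis=1)
    weights = np.exp(-dist / dist.mean())
    weights /= weights.sum()

    global_update = weights @ grads                  # weighted aggregation
    print("per-worker weights:", np.round(weights, 3))  # worker 0 gets the least
    ```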

  8. Article: Fair detection of poisoning attacks in federated learning on non-i.i.d. data.

    Singh, Ashneet Khandpur / Blanco-Justicia, Alberto / Domingo-Ferrer, Josep

    Data mining and knowledge discovery

    2023  , Page(s) 1–26

    Abstract Reconciling machine learning with individual privacy is one of the main motivations behind federated learning (FL), a decentralized machine learning technique that aggregates partial models trained by clients on their own private data to obtain a global deep learning model. Even though FL provides stronger privacy guarantees to the participating clients than centralized learning, which collects the clients' data in a central server, FL is vulnerable to attacks whereby malicious clients submit bad updates in order to prevent the model from converging or, more subtly, to introduce artificial bias in the classification (poisoning). Poisoning detection techniques compute statistics on the updates to identify malicious clients. A downside of anti-poisoning techniques is that they might discriminate against minority groups whose data are significantly and legitimately different from those of the majority of clients. This would not only be unfair, but would also yield poorer models that fail to capture the knowledge in the training data, especially when data are not independent and identically distributed (non-i.i.d.). In this work, we strive to strike a balance between fighting poisoning and accommodating diversity, to help learn fairer and less discriminatory federated learning models. In this way, we forestall the exclusion of diverse clients while still ensuring detection of poisoning attacks. Empirical work on three data sets shows that employing our approach to tell legitimate from malicious updates produces models that are more accurate than those obtained with state-of-the-art poisoning detection techniques. Additionally, we explore the impact of our proposal on the performance of models trained on non-i.i.d. local training data.
    Language English
    Publishing date 2023-01-04
    Publishing country United States
    Document type Journal Article
    ZDB-ID 1479890-6
    ISSN (online) 1573-756X
    ISSN (print) 1384-5810
    DOI 10.1007/s10618-022-00912-6
    Database MEDical Literature Analysis and Retrieval System OnLINE

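    The tension record 8 addresses can be made concrete: on non-i.i.d. data, a naive outlier filter flags a legitimate minority group together with the attacker. The data, filter, and threshold below are illustrative assumptions; the abstract does not give the paper's actual detection criterion:

    ```python
    # Record 8's problem, made concrete: with non-i.i.d. clients, a naive
    # distance-based poisoning filter sweeps up a legitimate minority group
    # along with the malicious client. Everything here is illustrative.
    import numpy as np

    rng = np.random.default_rng(3)
    majority = rng.normal(loc=0.0, size=(8, 32))   # clients 0-7: common distribution
    minority = rng.normal(loc=3.0, size=(2, 32))   # clients 8-9: legitimate but different
    poisoned = rng.normal(loc=-6.0, size=(1, 32))  # client 10: malicious update
    updates = np.vstack([majority, minority, poisoned])

    dist = np.linalg.norm(updates - updates.mean(axis=0), axis=1)
    flagged = np.flatnonzero(dist > 2 * np.median(dist))
    print("flagged clients:", flagged)   # [8 9 10]: the minority is swept up too
    ```

    A fairer detector, as the abstract argues, must keep clients 8 and 9 while still rejecting client 10.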

  9. Book ; Online: Defending Against Backdoor Attacks by Layer-wise Feature Analysis

    Jebreel, Najeeb Moharram / Domingo-Ferrer, Josep / Li, Yiming

    2023  

    Abstract Training deep neural networks (DNNs) usually requires massive training data and computational resources. Users who cannot afford this may prefer to outsource training to a third party or resort to publicly available pre-trained models. Unfortunately, doing so facilitates a new training-time attack (i.e., a backdoor attack) against DNNs. This attack aims to induce misclassification of input samples containing adversary-specified trigger patterns. In this paper, we first conduct a layer-wise feature analysis of poisoned and benign samples from the target class. We find that the feature difference between benign and poisoned samples tends to be maximal at a critical layer, which is not always the one typically used in existing defenses, namely the layer before the fully-connected layers. We also demonstrate how to locate this critical layer based on the behaviors of benign samples. We then propose a simple yet effective method to filter poisoned samples by analyzing the feature differences between suspicious and benign samples at the critical layer. We conduct extensive experiments on two benchmark datasets, which confirm the effectiveness of our defense.

    Comment: This paper is accepted by PAKDD 2023
    Keywords Computer Science - Cryptography and Security ; Computer Science - Machine Learning
    Subject code 006
    Publishing date 2023-02-24
    Publishing country United States
    Document type Book ; Online
    Database BASE - Bielefeld Academic Search Engine (life sciences selection)

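    Record 9's defense reduces to two steps: locate the layer where benign and suspicious features differ most, then filter samples that sit far from the benign centroid at that layer. A minimal sketch, with synthetic activations standing in for real network features:

    ```python
    # Layer-wise feature analysis per record 9's abstract: compare benign and
    # suspicious features layer by layer, take the layer of maximal difference
    # as the critical layer, and filter samples far from the benign centroid
    # there. The random "features" below are purely illustrative.
    import numpy as np

    rng = np.random.default_rng(4)
    n_layers, n_benign, n_susp, d = 6, 100, 50, 16

    # Synthetic activations: poisoned samples drift from benign ones most
    # strongly at layer 3 (not the last layer, echoing the paper's finding).
    benign = [rng.normal(size=(n_benign, d)) for _ in range(n_layers)]
    susp = [rng.normal(loc=(2.5 if l == 3 else 0.3), size=(n_susp, d))
            for l in range(n_layers)]

    # Locate the critical layer: largest benign-vs-suspicious centroid gap.
    gaps = [np.linalg.norm(b.mean(axis=0) - s.mean(axis=0))
            for b, s in zip(benign, susp)]
    critical = int(np.argmax(gaps))
    print("critical layer:", critical)

    # Filter at the critical layer: flag samples outside the benign radius.
    center = benign[critical].mean(axis=0)
    radius = np.linalg.norm(benign[critical] - center, axis=1).max()
    flags = np.linalg.norm(susp[critical] - center, axis=1) > radius
    print("flagged:", flags.sum(), "of", n_susp)
    ```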

  10. Article: Local synthesis for disclosure limitation that satisfies probabilistic k-anonymity criterion

    Oganian, Anna / Domingo-Ferrer, Josep

    Transactions on data privacy

    2019  Volume 10, Issue 1, Page(s) 61–81

    Abstract Before releasing databases which contain sensitive information about individuals, data publishers must apply Statistical Disclosure Limitation (SDL) methods to them, in order to avoid disclosure of sensitive information on any identifiable data subject. SDL methods often consist of masking or synthesizing the original data records in such a way as to minimize the risk of disclosure of the sensitive information while providing data users with accurate information about the population of interest. In this paper we propose a new scheme for disclosure limitation, based on the idea of local synthesis …
    Language English
    Publishing date 2019-08-13
    Publishing country Spain
    Document type Journal Article
    ZDB-ID 2516254-8
    ISSN (online) 2013-1631
    ISSN (print) 1888-5063
    Database MEDical Literature Analysis and Retrieval System OnLINE

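    Record 10's abstract is truncated, but the general SDL recipe it describes (synthesize records so aggregates stay accurate while no released record maps to a real one) can be sketched generically; the per-group Gaussian model below is an illustrative assumption, not the paper's scheme:

    ```python
    # Generic local-synthesis sketch in the spirit of record 10: replace each
    # group's sensitive values with draws from a model fitted to that group, so
    # group-level statistics stay accurate while no released record corresponds
    # to a real individual. The Gaussian model is an illustrative assumption.
    import numpy as np

    rng = np.random.default_rng(5)
    groups = {g: rng.normal(loc=mu, scale=2.0, size=50)
              for g, mu in [("A", 30.0), ("B", 55.0)]}   # sensitive attribute

    synthetic = {g: rng.normal(loc=x.mean(), scale=x.std(ddof=1), size=x.size)
                 for g, x in groups.items()}

    for g in groups:   # group means are preserved up to sampling noise
        print(g, round(groups[g].mean(), 1), "->", round(synthetic[g].mean(), 1))
    ```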
