A Methodology for Controlling Bias and Fairness in Synthetic Data Generation

Enrico Barbierato, Marco Luigi Della Vedova, Daniele Tessera, Daniele Toti, Nicola Vanoli

Research output: Contribution to journal › Journal article › peer-reviewed

Abstract

The development of algorithms based on machine learning techniques that support (or even replace) human judgment must take into account concepts such as data bias and fairness. Although the scientific literature proposes numerous techniques to detect and evaluate these problems, less attention has been dedicated to methods that intentionally generate biased datasets, which data scientists could use to develop and validate unbiased and fair decision-making algorithms. To this end, this paper presents a novel method for generating a synthetic dataset in which bias can be modeled by means of a probabilistic network exploiting structural equation modeling. The proposed methodology has been validated on a simple dataset, to highlight the impact of the tuning parameters on bias and fairness, as well as on a more realistic example based on a loan-approval-status dataset. In particular, this methodology requires a limited number of parameters compared to other techniques for generating datasets with a controlled amount of bias and fairness.
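To illustrate the general idea described in the abstract, the sketch below generates a synthetic dataset from a small system of structural equations in which a single tuning parameter controls how strongly a protected attribute directly influences the outcome. This is a minimal, hypothetical model written for illustration; the variable names (`A`, `merit`, `bias`) and the specific equations are assumptions, not the paper's actual probabilistic network.

```python
import numpy as np

def generate_biased_dataset(n=10_000, bias=0.5, seed=0):
    """Illustrative sketch (not the paper's exact model): draw a protected
    attribute A, a latent 'merit' feature, and a binary outcome Y whose
    direct dependence on A is controlled by the `bias` parameter through
    a simple linear structural equation."""
    rng = np.random.default_rng(seed)
    A = rng.integers(0, 2, size=n)        # protected attribute (0/1)
    merit = rng.normal(0.0, 1.0, size=n)  # legitimate latent feature
    noise = rng.normal(0.0, 0.5, size=n)  # exogenous noise term
    # Structural equation: the decision score depends on merit and,
    # with strength `bias`, directly on the protected attribute.
    score = merit - bias * A + noise
    Y = (score > 0).astype(int)           # e.g. loan approved / rejected
    return A, merit, Y

def demographic_parity_gap(A, Y):
    """|P(Y=1 | A=0) - P(Y=1 | A=1)|: one common fairness metric."""
    return abs(Y[A == 0].mean() - Y[A == 1].mean())

A, merit, Y = generate_biased_dataset(bias=1.0)
print(demographic_parity_gap(A, Y))  # larger `bias` yields a larger gap
```

Sweeping the `bias` parameter from 0 upward produces a family of datasets with a controlled amount of unfairness, which is the kind of testbed the paper proposes for validating fair decision-making algorithms.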
Original language: English
Pages (from-to): N/A-N/A
Journal: APPLIED SCIENCES
Volume: 12
DOI
Publication status: Published - 2022

Keywords

  • bias
  • data generation
  • fairness
  • machine learning
  • structural equation modeling
