A Methodology for Controlling Bias and Fairness in Synthetic Data Generation

Enrico Barbierato, Marco Luigi Della Vedova, Daniele Tessera, Daniele Toti, Nicola Vanoli

Research output: Contribution to journal › Article › peer-review

Abstract

The development of machine-learning algorithms that support (or even replace) human judgment must take into account concepts such as data bias and fairness. Although the scientific literature proposes numerous techniques to detect and evaluate these problems, less attention has been devoted to methods for generating intentionally biased datasets, which data scientists could use to develop and validate unbiased and fair decision-making algorithms. To this end, this paper presents a novel method for generating a synthetic dataset in which bias can be modeled through a probabilistic network exploiting structural equation modeling. The proposed methodology has been validated on a simple dataset, to highlight the impact of the tuning parameters on bias and fairness, and on a more realistic example based on a loan approval status dataset. Notably, this methodology requires fewer parameters than other techniques for generating datasets with a controlled amount of bias and fairness.
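The abstract describes bias injection through structural equations with tunable parameters. The following is a minimal sketch of that general idea, not the paper's actual model: the names (`A`, `M`, `beta`) and the specific equations are illustrative assumptions, showing how a single parameter can control the amount of bias in a synthetic decision dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical structural equations (illustrative, not from the paper):
# a protected attribute A, a legitimate merit score M, and a decision Y
# whose direct dependence on A is controlled by the bias parameter beta.
beta = 1.5  # bias strength: beta = 0 removes the direct effect of A on Y

A = rng.binomial(1, 0.5, size=n)                       # protected attribute
M = rng.normal(0.0, 1.0, size=n)                       # merit score
score = M + beta * A + rng.normal(0.0, 0.5, size=n)    # structural equation
Y = (score > 0).astype(int)                            # approval decision

# Demographic-parity gap: |P(Y=1 | A=1) - P(Y=1 | A=0)|
gap = abs(Y[A == 1].mean() - Y[A == 0].mean())
print(f"approval-rate gap: {gap:.3f}")
```

Sweeping `beta` from 0 upward yields a family of datasets ranging from fair (near-zero gap) to strongly biased, which is the kind of controlled benchmark the methodology is meant to provide.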
Original language: English
Pages (from-to): N/A-N/A
Journal: Applied Sciences
Volume: 12
DOIs
Publication status: Published - 2022

Keywords

  • bias
  • data generation
  • fairness
  • machine learning
  • structural equation modeling

