Digital Compliance: The Case for Algorithmic Transparency

Research output: Chapter in Book/Report/Conference proceeding (Chapter)

Abstract

Together with their undeniable advantages, the new technologies of the Fintech Revolution bring new risks. Some of these risks are already known but have taken on a new form; others are entirely new. Among the latter, one of the most significant concerns the opacity of artificial intelligence (AI). This lack of transparency raises questions not only about measuring the correctness and efficiency of the choices made by an algorithm, but also about the impact of those choices on third parties. There is, therefore, an issue of the legitimacy of the decision thus made: its opacity renders it arbitrary and insensitive to the rights of the third parties affected by the choice. It is thus essential to understand what level of explanation is needed in order to permit the use of the algorithm. Focusing on the AI transparency issue, there are grounds for believing that, at least in the EU, the costs deriving from a lack of transparency cannot be passed on to third parties and must instead be managed within the enterprise. The task of the enterprise, its directors, and in particular its compliance function must therefore be dynamic, taking into account all foreseeable AI risks.
Original language: English
Title of host publication: Corporate Compliance on a Global Scale. Legitimacy and Effectiveness
Editors: F. Centonze, S. Manacorda
Pages: 259-284
Number of pages: 26
Publication status: Published - 2022

Keywords

  • Artificial Intelligence
  • Transparency
  • Compliance
  • Digital
  • Forward compliance
  • Innovation
