Sparse models for machine learning

Jianyi Lin*

*Corresponding author for this work

Research output: Contribution to book › Chapter

Abstract

Arguably one of the most notable formulations of the principle of parsimony is due to the philosopher and theologian William of Ockham in the 14th century; it later became well known as Ockham’s Razor, and can be phrased as: “Entities should not be multiplied without necessity.” This principle is undoubtedly one of the most fundamental ideas pervading many branches of knowledge, from philosophy to art and science, from ancient times to the modern age, and is summarized in the maxim “Make everything as simple as possible, but not simpler,” as likewise asserted by Albert Einstein. Sparse modeling is an evident manifestation of this parsimony principle, and sparse models are widespread in statistics, physics, information sciences, neuroscience, computational mathematics, and beyond. In statistics, applications of sparse modeling span regression, classification, graphical model selection, sparse M-estimators, and sparse dimensionality reduction. Sparse modeling is also particularly effective in statistical and machine learning settings where the primary goal is to discover predictive patterns in data that enhance our understanding and control of the underlying physical, biological, or other natural processes, beyond merely building accurate black-box predictors. Common examples include selecting biomarkers in biological procedures, finding brain activity locations that are predictive of brain states and processes based on fMRI data, and identifying the network bottlenecks that best explain end-to-end performance. Moreover, research on and applications of the efficient recovery of high-dimensional sparse signals from a relatively small number of observations, the main focus of compressed sensing (or compressive sensing), have grown rapidly into an extremely active area of study reaching well beyond classical signal processing.
Sparse modeling is likewise directly relevant to various artificial vision tasks, such as image denoising, segmentation, restoration, and super-resolution; object and face detection and recognition in visual scenes; and action recognition and behavior analysis. Sparsity has also been applied to information compression, text classification, and recommendation systems. In this chapter, we provide a brief introduction to the basic theory underlying sparse representation and compressive sensing, and then discuss some methods for effectively recovering sparse solutions to optimization problems, together with applications of sparse recovery to a machine learning problem known as sparse dictionary learning.
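As a concrete illustration of the sparse-recovery problem mentioned in the abstract, the sketch below recovers a sparse signal from far fewer random measurements than its ambient dimension. It is not taken from the chapter: the algorithm shown is Orthogonal Matching Pursuit (one standard greedy recovery method), and the problem sizes, sensing matrix, and sparsity level are arbitrary choices for the demonstration.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily build a k-sparse x with A @ x ≈ y."""
    residual = y.copy()
    support = []                      # indices of the selected columns (atoms)
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit of the coefficients on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(0)
m, n, k = 40, 100, 3                              # 40 measurements, 100-dim signal, 3 nonzeros
A = rng.standard_normal((m, n)) / np.sqrt(m)      # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                    # compressed (noiseless) measurements

x_hat = omp(A, y, k)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With a random Gaussian sensing matrix and only 40 measurements of a 100-dimensional 3-sparse signal, the greedy recovery is exact up to numerical precision in the noiseless setting, which is the basic phenomenon compressed sensing makes rigorous.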
Original language: English
Title of host publication: Engineering Mathematics and Artificial Intelligence: Foundations, Methods, and Applications
Editors: Herb Kunze, Davide La Torre, Adam Riccoboni, Manuel Ruiz Galán
Pages: 107-146
Number of pages: 40
Volume: 2023
DOI
Publication status: Published - 2023

Publication series

Name: MATHEMATICS AND ITS APPLICATIONS

Keywords

  • compressed sensing
  • sparse dictionary learning
  • machine learning
  • sparse models
