TY - CHAP
T1 - Sparse models for machine learning
AU - Lin, Jianyi
PY - 2023
Y1 - 2023
N2 - Arguably one of the most notable formulations of the principle of parsimony is due to the philosopher and theologian William of Ockham in the 14th century; it later became well known as Ockham’s Razor and can be phrased as: “Entities should not be multiplied without necessity.” This principle is undoubtedly one of the most fundamental ideas pervading many branches of knowledge, from philosophy to art and science, from ancient times to the modern age, and it is distilled in the maxim attributed to Albert Einstein: “Make everything as simple as possible, but not simpler.”
Sparse modeling is a clear manifestation of the parsimony principle just described, and sparse models are widespread in statistics, physics, the information sciences, neuroscience, computational mathematics, and beyond. In statistics, applications of sparse modeling span regression, classification, graphical model selection, sparse M-estimation, and sparse dimensionality reduction. Sparse modeling is particularly effective in statistical and machine learning settings where the primary goal is to discover predictive patterns in data that enhance our understanding and control of underlying physical, biological, and other natural processes, rather than merely to build accurate black-box predictors. Common examples include selecting biomarkers in biological studies, locating brain activity that is predictive of brain states and processes in fMRI data, and identifying the network bottlenecks that best explain end-to-end performance.
Moreover, research on the efficient recovery of high-dimensional sparse signals from a relatively small number of observations, the central problem of compressed (or compressive) sensing, has grown rapidly into an intensely studied area extending well beyond classical signal processing. Sparse modeling is likewise directly related to various computer vision tasks, such as image denoising, segmentation, restoration, and super-resolution; object and face detection and recognition in visual scenes; and action recognition and behavior analysis. Sparsity has also been applied to information compression, text classification, and recommendation systems. In this chapter, we provide a brief introduction to the basic theory underlying sparse representation and compressive sensing, then discuss effective methods for recovering sparse solutions to optimization problems, together with applications of sparse recovery to a machine learning problem known as sparse dictionary learning.
KW - compressed sensing
KW - sparse dictionary learning
KW - machine learning
KW - sparse models
UR - http://hdl.handle.net/10807/247134
U2 - 10.1201/9781003283980-5
DO - 10.1201/9781003283980-5
M3 - Chapter
SN - 9781003283980
T3 - Mathematics and its Applications
SP - 107
EP - 146
BT - Engineering Mathematics and Artificial Intelligence: Foundations, Methods, and Applications
A2 - Kunze, Herb
A2 - La Torre, Davide
A2 - Riccoboni, Adam
A2 - Ruiz Galán, Manuel
ER -