Analysis of Manifold and its Application

Gyanvendra Pratap Singh and Shristi Srivastav *

Department of Mathematics and Statistics, DDU Gorakhpur University, Gorakhpur (UP) India.
 
Review
International Journal of Science and Research Archive, 2024, 12(01), 394–404.
Article DOI: 10.30574/ijsra.2024.12.1.0812
Publication history: 
Received on 28 March 2024; revised on 04 May 2024; accepted on 07 May 2024
 
Abstract: 
Manifold learning, a field of study in machine learning and statistics closely associated with dimensionality-reduction techniques, is gaining popularity these days. Manifold learning approaches fall into two types: linear and nonlinear.
Principal component analysis (PCA) and multidimensional scaling (MDS) are two examples of linear techniques that have long been staples in the statistician's arsenal for evaluating multivariate data. Nonlinear manifold learning, which encompasses diffusion maps, Laplacian eigenmaps, Hessian eigenmaps, Isomap, and locally linear embedding, has seen a surge in research effort recently. A few of these methods are nonlinear extensions of linear approaches. The algorithmic process of most of these techniques consists of a nearest-neighbor search, the definition of distances or affinities between points (a crucial component of these methods' effectiveness), and an eigenproblem for embedding the high-dimensional points into a lower-dimensional space. The strengths and weaknesses of these newer methods are briefly reviewed in this article. In the field of computer graphics, we utilize a particular manifold learning method, first presented in statistics and machine learning, to create a global, spectral-based shape descriptor.
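As a minimal sketch of that three-step pipeline (not drawn from the article itself), the snippet below applies scikit-learn's Isomap to synthetic swiss-roll data; the dataset, neighborhood size, and target dimension are illustrative assumptions only.

# A minimal illustration of the three-step pipeline described above:
# (1) nearest-neighbor search, (2) distance/affinity construction,
# (3) an eigenproblem that embeds the points in a lower-dimensional space.
# scikit-learn's Isomap bundles all three steps; the swiss-roll data and
# parameter values below are illustrative assumptions, not the article's setup.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Sample a 3-D "swiss roll": points lying on a 2-D nonlinear manifold.
X, color = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# Steps 1-2: Isomap builds a k-nearest-neighbor graph and computes geodesic
# (graph shortest-path) distances between all pairs of points.
# Step 3: classical MDS (an eigendecomposition) embeds those distances in 2-D.
embedding = Isomap(n_neighbors=10, n_components=2)
X_2d = embedding.fit_transform(X)

print(X_2d.shape)  # (1000, 2): each 3-D point mapped to 2 coordinates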
 
Keywords: 
Manifold Learning; Isomap; Embedding; Principal Component Analysis (PCA); Multidimensional Scaling (MDS); Generative Topographic Mapping (GTM)
 