Side effects of learning from low-dimensional data embedded in a Euclidean space
DOI: 10.1007/s40687-023-00378-y
OpenAlex: W4321438151
MaRDI QID: Q2687305
Authors: Juncai He, Rachel Ward, Yen-Hsi Richard Tsai
Publication date: 2 March 2023
Published in: Research in the Mathematical Sciences
Full work available at URL: https://arxiv.org/abs/2203.00614
Cites Work
- An implicit interface boundary integral method for Poisson's equation on arbitrary domains
- Explicit constructions of RIP matrices and related problems
- Optimal rates of convergence for covariance matrix estimation
- Provable approximation properties for deep neural networks
- Optimal approximation rate of ReLU networks in terms of width and depth
- Meta-MgNet: meta multigrid networks for solving parameterized partial differential equations
- A weight initialization based on the linear product structure for neural networks
- Error bounds for approximations with deep ReLU networks
- MgNet: a unified framework of multigrid and convolutional neural network
- Volumetric variational principles for a class of partial differential equations defined on surfaces and curves
- Finding the homology of submanifolds with high confidence from random samples
- On early stopping in gradient descent learning
- Numerical wave propagation aided by deep learning
- New and Improved Johnson–Lindenstrauss Embeddings via the Restricted Isometry Property
- DOI: 10.1162/153244304322972667
- Extensions of Lipschitz mappings into a Hilbert space
- Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles
- Universal approximation bounds for superpositions of a sigmoidal function
- Canonical Correlation Analysis: An Overview with Application to Learning Methods
- Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
- Randomized Kaczmarz Converges Along Small Singular Vectors
- Learning deep linear neural networks: Riemannian gradient flows and convergence to global minimizers
- Geometry of Linear Convolutional Networks
- Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data
- Testing the manifold hypothesis
- A deep network construction that adapts to intrinsic dimensionality beyond the domain