From kernel methods to neural networks: a unifying variational formulation
DOI: 10.1007/s10208-023-09624-9
MaRDI QID: Q6659493
Publication date: 9 January 2025
Published in: Foundations of Computational Mathematics
Keywords: convex optimization; regularization; neural networks; Banach space; kernel methods; machine learning; representer theorem
MSC classifications: Artificial neural networks and deep learning (68T07); Radon transform (44A12); Applications of functional analysis in optimization, convex analysis, mathematical programming, economics (46N10); Linear operators and ill-posed problems, regularization (47A52)
Cites Work
- 16 further cited works (titles not available)
- Interpolation of scattered data: distance matrices and conditionally positive definite functions
- Kernel methods in machine learning
- Multivariate interpolation at arbitrary points made simple
- Denseness of the Lizorkin-type spaces \(\Phi_V\) in \(L_p(\mathbb{R}^n)\)
- Approximation by superposition of sigmoidal and radial basis functions
- Spline solutions to \(L^1\) extremal problems in one and several variables
- On 'best' interpolation
- Locally adaptive regression splines
- Multilayer feedforward networks are universal approximators
- Understanding neural networks with reproducing kernel Banach spaces
- A unifying representer theorem for inverse problems and machine learning
- Sparsity of solutions for variational inverse problems with finite-dimensional data
- Neural network with unbounded activation functions is universal approximator
- Convex optimization in sums of Banach spaces
- Kernels for Vector-Valued Functions: A Review
- Integral Geometry and Radon Transforms
- Regularization Algorithms for Learning That Are Equivalent to Multilayer Networks
- An Introduction to Sparse Stochastic Processes
- Classical Fourier Analysis
- Universal approximation bounds for superpositions of a sigmoidal function
- Radial Basis Functions
- Splines Are Universal Solutions of Linear Inverse Problems with Generalized TV Regularization
- On Different Facets of Regularization Theory
- Fractional Splines and Wavelets
- Multikernel Regression with Sparsity Constraint
- What Kinds of Functions Do Deep Neural Networks Learn? Insights from Variational Spline Theory
- On Representer Theorems and Convex Regularization
- Uniform asymptotic expansions at a caustic
- Breaking the Curse of Dimensionality with Convex Neural Networks
- The Ridgelet transform of distributions
- Theory of Reproducing Kernels
- Scattered Data Approximation
- Approximation by superpositions of a sigmoidal function
- Explicit representations for Banach subspaces of Lizorkin distributions