scientific article; zbMATH DE number 7626714
From MaRDI portal
Publication:5053184
Ding-Xuan Zhou, Jinshan Zeng, Yuan Yao, Shao-Bo Lin
Publication date: 6 December 2022
Full work available at URL: https://arxiv.org/abs/1902.02060
Title: zbMATH Open Web Interface contents unavailable due to conflicting licenses.
Related Items
- Block coordinate type methods for optimization and learning
- Moreau envelope augmented Lagrangian method for nonconvex optimization with linear constraints
Uses Software
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- On gradients of functions definable in o-minimal structures
- On semi- and subanalytic geometry
- Geometry of subanalytic and semialgebraic sets
- Why does deep and cheap learning work so well?
- Global convergence of ADMM in nonconvex nonsmooth optimization
- Provable approximation properties for deep neural networks
- Approximation properties of a multilayered feedforward artificial neural network
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Acceleration of primal-dual methods by preconditioning and simple subproblem procedures
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Error bounds for approximations with deep ReLU networks
- Universality of deep convolutional neural networks
- A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Completion
- Convergence Analysis of Alternating Direction Method of Multipliers for a Family of Nonconvex Problems
- Reducing the Dimensionality of Data with Neural Networks
- Neural Networks for Localized Approximation
- Deep distributed convolutional neural networks: Universality
- Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ
- ADMM for multiaffine constrained optimization
- Deep neural networks for rotation-invariance approximation and learning
- Learning representations by back-propagating errors
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Approximation by superpositions of a sigmoidal function