
Every Local Minimum Value Is the Global Minimum Value of Induced Model in Nonconvex Machine Learning

From MaRDI portal
Publication:5214402

DOI: 10.1162/neco_a_01234
zbMath: 1494.68217
arXiv: 1904.03673
OpenAlex: W3104627075
Wikidata: Q90718207 (Scholia: Q90718207)
MaRDI QID: Q5214402

Authors: Leslie Pack Kaelbling, Kenji Kawaguchi, Jiaoyang Huang

Publication date: 7 February 2020

Published in: Neural Computation

Full work available at URL: https://arxiv.org/abs/1904.03673



Mathematics Subject Classification ID

  • Artificial neural networks and deep learning (68T07)
  • Nonconvex programming, global optimization (90C26)
  • Learning and adaptive systems in artificial intelligence (68T05)




Cites Work

  • Stratification of real analytic mappings and images
  • Gradient descent optimizes over-parameterized deep ReLU networks
  • Stochastic subgradient method converges on tame functions
  • Introduction to Smooth Manifolds
  • Some NP-complete problems in quadratic and nonlinear programming
  • Variational Analysis
  • Gradient Descent with Identity Initialization Efficiently Learns Positive-Definite Linear Transformations by Deep Residual Networks
  • Effect of Depth and Width on Local Minima in Deep Learning


This page was built for publication: Every Local Minimum Value Is the Global Minimum Value of Induced Model in Nonconvex Machine Learning
