How Implicit Regularization of ReLU Neural Networks Characterizes the Learned Function -- Part I: the 1-D Case of Two Layers with Random First Layer (Q6328725)
From MaRDI portal
preprint article from arXiv
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | How Implicit Regularization of ReLU Neural Networks Characterizes the Learned Function -- Part I: the 1-D Case of Two Layers with Random First Layer | preprint article from arXiv | |
Statements

| Property | Value |
|---|---|
| publication date | 7 November 2019 |
| arXiv classification | cs.LG |
| arXiv classification | cs.NA |
| arXiv classification | math.NA |
| arXiv classification | stat.ML |
| author | Jakob Heiss |
| author | Josef Teichmann |
| author | Hanna Wutte |