Pages that link to "Item:Q2134105"
From MaRDI portal
The following pages link to Generalization error of random feature and kernel methods: hypercontractivity and kernel matrix concentration (Q2134105):
Displaying 14 items.
- The interpolation phase transition in neural networks: memorization and generalization under lazy training (Q2105197)
- Deep learning: a statistical viewpoint (Q5887827)
- HARFE: hard-ridge random feature expansion (Q6049834)
- A note on the prediction error of principal component regression in high dimensions (Q6050280)
- Adversarial examples in random neural networks with general activations (Q6062703)
- On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions (Q6070298)
- A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors (Q6090836)
- Dense Hebbian neural networks: a replica symmetric picture of supervised learning (Q6095677)
- Generalization error of random features and kernel methods: hypercontractivity and kernel matrix concentration (Q6359036)
- SRMD: sparse random mode decomposition (Q6575285)
- Operator learning using random features: a tool for scientific computing (Q6585281)
- Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks (Q6590448)
- Precise learning curves and higher-order scaling limits for dot-product kernel regression (Q6611439)
- New equivalences between interpolation and SVMs: kernels and structured features (Q6617269)