CLIP: cheap Lipschitz training of neural networks
From MaRDI portal
Publication: 826198
DOI: 10.1007/978-3-030-75549-2_25
zbMath: 1484.68204
arXiv: 2103.12531
OpenAlex: W3165333460
MaRDI QID: Q826198
Leon Bungert, René Raab, Daniel Tenbrinck, Tim Roith, Leo Schwinn
Publication date: 20 December 2021
Full work available at URL: https://arxiv.org/abs/2103.12531
Keywords: stability, Lipschitz constant, machine learning, variational regularization, deep neural network, adversarial attack
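The title and keywords describe training neural networks with a penalty on the Lipschitz constant. As a rough illustration only (not the paper's actual algorithm), one can lower-bound a network's Lipschitz constant by a Monte-Carlo maximization of the difference quotient over sampled input pairs; the toy linear map `f` and the estimator below are hypothetical stand-ins:

```python
# Hedged sketch: Monte-Carlo lower bound on the Lipschitz constant of a map f,
# of the kind that could be added to a training loss as a regularizer.
# The linear "network" f(x) = W x is a hypothetical stand-in, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy map: for a linear f, the true Lipschitz constant is the spectral norm of W.
W = np.array([[2.0, 0.0],
              [0.0, 0.5]])

def f(x):
    return W @ x

def lipschitz_lower_bound(f, dim, n_pairs=1000):
    """Maximize ||f(x) - f(y)|| / ||x - y|| over random input pairs.

    This yields a lower bound on the true Lipschitz constant; more pairs
    (or adversarial search over pairs) tightens the bound.
    """
    best = 0.0
    for _ in range(n_pairs):
        x = rng.standard_normal(dim)
        y = rng.standard_normal(dim)
        denom = np.linalg.norm(x - y)
        if denom > 1e-12:
            best = max(best, np.linalg.norm(f(x) - f(y)) / denom)
    return best

est = lipschitz_lower_bound(f, dim=2)
# For this W the true constant is 2.0, so est is at most 2.0 (up to rounding).
print(est)
```

In a training loop, a term proportional to such an estimate would be added to the data-fitting loss, trading accuracy against the smoothness (and hence adversarial robustness) of the learned map.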
Related Items (4)
- Designing rotationally invariant neural networks from PDEs and variational methods
- Approximation of Lipschitz Functions Using Deep Spline Neural Networks
- Invertible residual networks in the context of regularization theory for linear inverse problems
- Connections between numerical algorithms for PDEs and neural networks
Cites Work
- Regularisation of neural networks by enforcing Lipschitz continuity
- Lipschitz Certificates for Layered Network Structures Driven by Averaged Activation Operators
- Deep Neural Networks With Trainable Activations and Controlled Lipschitz Constant
- On Lipschitz Bounds of General Convolutional Neural Networks
- Variational regularisation for inverse problems with imperfect forward operators and general noise models
- A Guide to the TV Zoo
- Solution paths of variational regularization methods for inverse problems
- Morozov's discrepancy principle for Tikhonov-type functionals with nonlinear operators
- Understanding Machine Learning
- An Iterative Regularization Method for Total Variation-Based Image Restoration