scientific article; zbMATH DE number 6982318
From MaRDI portal
Publication:4558169
Dominik Csiba, Peter Richtárik
Publication date: 21 November 2018
Full work available at URL: https://arxiv.org/abs/1602.02283
Title: Importance sampling for minibatches
Keywords: convex optimization; importance sampling; empirical risk minimization; minibatching; variance-reduced methods
Related Items (6)
- SelectNet: self-paced learning for high-dimensional partial differential equations
- Adaptive coordinate sampling for stochastic primal–dual optimization
- Improving sampling accuracy of stochastic gradient MCMC methods via non-uniform subsampling of gradients
- Batched Stochastic Gradient Descent with Weighted Sampling
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
- On Adaptive Sketch-and-Project for Solving Linear Systems
Uses Software
Cites Work
- Parallel coordinate descent methods for big data optimization
- On optimal probabilities in stochastic coordinate descent methods
- A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints
- Erratum to: "Minimizing finite sums with the stochastic average gradient"
- Pegasos: primal estimated sub-gradient solver for SVM
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Distributed Coordinate Descent Method for Learning with Big Data
- Coordinate descent with arbitrary sampling I: algorithms and complexity
- Coordinate descent with arbitrary sampling II: expected separable overapproximation
- Hybrid Deterministic-Stochastic Methods for Data Fitting
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Asynchronous Stochastic Coordinate Descent: Parallelism and Convergence Properties
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Accelerated, Parallel, and Proximal Coordinate Descent
- An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
- Semi-stochastic coordinate descent
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- A Stochastic Approximation Method
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm