On the convergence rate issues of general Markov search for global minimum
DOI: 10.1007/s10898-017-0544-7 · zbMath: 1386.60247 · OpenAlex: W2733784916 · Wikidata: Q59609470 · Scholia: Q59609470 · MaRDI QID: Q1685583
Publication date: 14 December 2017
Published in: Journal of Global Optimization
Full work available at URL: https://doi.org/10.1007/s10898-017-0544-7
Keywords: global optimization; simulated annealing; convergence rate; self-adaptation; accelerated random search; Markov search
MSC classification: Discrete-time Markov processes on general state spaces (60J05); Lyapunov and other classical stabilities (Lagrange, Poisson, (L^p, l^p), etc.) in control theory (93D05); Asymptotic stability in control theory (93D20)
Cites Work
- The robust constant and its applications in random global search for unconstrained global optimization
- Nonautonomous stochastic search in global optimization
- Markov chains and stochastic stability
- Global convergence of discrete-time inhomogeneous Markov processes from dynamical systems perspective
- Random linkage: A family of acceptance/rejection algorithms for global optimisation
- Convergence and first hitting time of simulated annealing algorithms for continuous global optimization
- Convergence of the simulated annealing algorithm for continuous global optimization
- Evolution strategies. A comprehensive introduction
- Quantitative bounds on convergence of time-inhomogeneous Markov chains
- Global optimization in action. Continuous and Lipschitz optimization: algorithms, implementations and applications
- Nonautonomous stochastic search for global minimum in continuous optimization
- Derivative-free optimization: a review of algorithms and comparison of software implementations
- Improving simulated annealing through derandomization
- Monotonous random search on a torus: integral upper bounds for the complexity
- Stochastic global optimization.
- Optimization of stochastic systems. Topics in discrete-time systems
- Randomized Smoothing for Stochastic Optimization
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Convergence properties of simulated annealing for continuous global optimization
- Linear Convergence of Comparison-based Step-size Adaptive Randomized Search via Stability of Markov Chains
- Pure Random Search with exponential rate of convergency
- Convergence properties of stochastic optimization procedures
- Minimization by Random Search Techniques
- Convergence theorems for a class of simulated annealing algorithms on ℝ^d
- Numerical Optimization
- Convergence of simulated annealing using Foster-Lyapunov criteria
- On Accelerated Random Search
- Real Analysis and Probability
- On the convergence rate of the Markov homogeneous monotone optimization method
- Convergence of a simulated annealing algorithm for continuous global optimization.
- Theory of genetic algorithms