Complexity analysis for certain convex programming problems (Q1974569)
scientific article; zbMATH DE number 1439897
Statements
Complexity analysis for certain convex programming problems (English)
7 May 2000
For the linearly constrained convex optimization problem \[ f(x)\to \min,\quad\text{s.t. }\langle x,b_i\rangle\geq \beta_i\quad (i= 1,\dots, p) \] in finite-dimensional spaces, the author provides a barrier method using the auxiliary function \[ f_t(x):= f(x)- t \sum^p_{i=1} \ln(\langle x,b_i\rangle- \beta_i),\quad t>0. \] In the theoretical part it is shown that (under weak assumptions) this function admits, for every \(t>0\), a unique minimum point \(\xi_t\), which varies smoothly as \(t\to 0\) towards the minimum point \(\xi_0\) of the original problem. These minimum points \(\xi_t\) are zeros of \(\nabla f_t\), i.e., solutions of the nonlinear equation \(\nabla f_t(x)= 0\).

The presented algorithm is based on the classical Newton method. The author describes a suitable subdivision \(1= t_0> t_1>\cdots> t_M> 0\) of the interval \([0,1]\) such that, starting from a suitable point \(x_0\) (close to the zero \(\xi_{t_0}\)) and given a small positive tolerance \(\varepsilon\), one obtains after \(M\) Newton steps of the form \[ x_{i+ 1}:= x_i- [\nabla^2 f_{t_{i+ 1}}(x_i)]^{- 1}\nabla f_{t_{i+ 1}}(x_i) \] an approximate zero \(x_M\) of \(\nabla f_{t_M}\) whose associated exact zero \(\xi_{t_M}\) is an \(\varepsilon\)-minimizer of the original problem. The integer \(M\), which is proportional to the quantity \(\sqrt p(1+\delta)(|\ln \varepsilon|+ \ln p)\) (here \(\delta\) is a generalized norm of the functions \(f_t\), \(t\in [0,1]\), computed from various derivatives), provides a measure of the complexity of the algorithm. This quantity simplifies for linear or quadratic objective functions, since in that case \(\delta= 0\). The approach of the paper relies heavily on Smale's alpha-theory.
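The path-following scheme described above can be sketched in code. The following is a minimal illustration, not the author's algorithm: it assumes a quadratic objective \(f(x)=\tfrac12 x^\top Qx-c^\top x\) (so that gradient and Hessian of \(f_t\) are explicit), replaces the paper's carefully chosen subdivision \(t_0>t_1>\cdots>t_M\) with a simple geometric schedule, and adds a feasibility-preserving damping of the Newton step; all names and parameter choices are illustrative.

```python
import numpy as np

def barrier_newton(Q, c, B, beta, x0, t0=1.0, shrink=0.7, eps=1e-6, inner=5):
    """Log-barrier path following for: minimize 0.5 x'Qx - c'x  s.t.  B x >= beta.

    Minimizes f_t(x) = f(x) - t * sum_i log(b_i . x - beta_i) by a few damped
    Newton steps per barrier parameter t, shrinking t geometrically (a toy
    stand-in for the subdivision t_0 > t_1 > ... > t_M in the review).
    """
    x, t = x0.astype(float), t0
    while t > eps:
        for _ in range(inner):
            s = B @ x - beta                         # slacks, must stay > 0
            g = Q @ x - c - t * (B.T @ (1.0 / s))    # gradient of f_t
            H = Q + t * (B.T @ np.diag(1.0 / s**2) @ B)  # Hessian of f_t
            dx = np.linalg.solve(H, g)               # Newton direction
            step = 1.0
            while np.any(B @ (x - step * dx) - beta <= 0):
                step *= 0.5                          # damp to stay feasible
            x = x - step * dx
        t *= shrink
    return x

# Example: minimize 0.5||x||^2 - c.x subject to x >= 0 (B = I, beta = 0).
# Unconstrained minimizer is c = (1, -1); constrained minimizer is (1, 0).
Q = np.eye(2)
c = np.array([1.0, -1.0])
B = np.eye(2)
beta = np.zeros(2)
x = barrier_newton(Q, c, B, beta, x0=np.array([0.5, 0.5]))
```

On this example the iterates follow the central path \(\xi_t\) toward the constrained minimizer \((1,0)\) as \(t\) decreases; the damping loop is only a safeguard against Newton steps that would leave the feasible region.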
convex programming
Smale's alpha-theory
complexity
barrier method
0 references
0 references
0 references