Generalization performance of least-square regularized regression algorithm with Markov chain samples
Publication: 662073
DOI: 10.1016/j.jmaa.2011.11.032
zbMath: 1231.62167
OpenAlex: W2078432849
MaRDI QID: Q662073
Luoqing Li, Bin Zou, Zong Ben Xu
Publication date: 11 February 2012
Published in: Journal of Mathematical Analysis and Applications
Full work available at URL: https://doi.org/10.1016/j.jmaa.2011.11.032
Mathematics Subject Classification:
- Time series, auto-correlation, regression, etc. in statistics (GARCH) (62M10)
- Markov chains (discrete-time Markov processes on discrete state spaces) (60J10)
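The record itself does not reproduce the paper's analysis, but the algorithm named in the title, least-square regularized regression (kernel ridge regression) trained on samples drawn from a Markov chain rather than i.i.d. data, can be illustrated with a minimal sketch. The AR(1)-style chain, the Gaussian kernel, the regularization parameter lam, and the helper names gaussian_kernel and f_z below are illustrative assumptions, not taken from the cited work.

```python
# Minimal sketch (not the paper's formulation): least-square regularized
# regression in an RKHS, with training inputs generated by a Markov chain.
import numpy as np

rng = np.random.default_rng(0)

# --- Non-i.i.d. samples from a simple, rapidly mixing Markov chain (assumed) ---
m = 200
x = np.empty(m)
x[0] = rng.normal()
for t in range(1, m):
    x[t] = 0.5 * x[t - 1] + rng.normal(scale=0.5)   # AR(1) transition
y = np.sin(x) + rng.normal(scale=0.1, size=m)        # noisy regression targets

# --- Least-square regularized regression (kernel ridge) ---
def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian kernel matrix k(a_i, b_j); kernel choice is an assumption."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

lam = 1e-2                                            # regularization parameter (illustrative)
K = gaussian_kernel(x, x)
alpha = np.linalg.solve(K + m * lam * np.eye(m), y)   # solve (K + m*lam*I) alpha = y

def f_z(t):
    """Estimator f_z(t) = sum_i alpha_i k(x_i, t)."""
    return gaussian_kernel(np.atleast_1d(t), x) @ alpha

print(f_z(0.0))   # prediction near sin(0) = 0
```

The generalization analysis in the paper concerns how fast such an estimator approaches the regression function when the samples come from an (e.g. uniformly ergodic) Markov chain instead of an i.i.d. source; the sketch only shows the estimator itself.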
Related Items
- Learning from non-irreducible Markov chains
- Generalization performance of bipartite ranking algorithms with convex losses
- Learning performance of regularized regression with multiscale kernels based on Markov observations
- Generalization ability of online pairwise support vector machine
Cites Work
- Markov chains and stochastic stability
- Learning from dependent observations
- General state space Markov chains and MCMC algorithms
- Learning from uniformly ergodic Markov chains
- Rates of convergence for empirical processes of stationary mixing sequences
- Hoeffding's inequality for uniformly ergodic Markov chains
- Learning and generalisation. With applications to neural networks.
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- Concentration of measure inequalities for Markov chains and \(\Phi\)-mixing processes.
- The covering number in learning theory
- The generalization performance of ERM algorithm with strongly mixing observations
- Regularization networks and support vector machines
- The performance bounds of learning machines based on exponentially strongly mixing sequences
- Learning rates of least-square regularized regression
- Concentration inequalities for dependent random variables via the martingale method
- Weighted sums of certain dependent random variables
- On the mathematical foundations of learning
- Consistency of support vector machines and other regularized kernel classifiers
- Capacity of reproducing kernel spaces in learning theory
- Online learning with Markov sampling
- Minimum complexity regression estimation with weakly dependent observations
- Scale-sensitive dimensions, uniform convergence, and learnability
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- Extension of the PAC framework to finite and countable Markov chains