Learning and generalisation. With applications to neural networks.
From MaRDI portal
Publication:1856371
zbMath: 1008.68102 · MaRDI QID: Q1856371
Publication date: 3 February 2003
Published in: Communications and Control Engineering
Keywords: identification; empirical means; neural network; NP-hard; mixing property; statistical learning; finite-time estimates; synthesis algorithms
Related Items (34)
Stationary Density Estimation of Itô Diffusions Using Deep Learning
Generalization performance of Lagrangian support vector machine based on Markov sampling
Is the \(k\)-NN classifier in high dimensions affected by the curse of dimensionality?
Regularized least square regression with dependent samples
Error bounds for compressed sensing algorithms with group sparsity: A unified approach
Localization of VC classes: beyond local Rademacher complexities
Simulation-based optimization of Markov decision processes: an empirical process theory approach
On a clustering criterion for dependent observations
Generalization bounds of ERM algorithm with Markov chain samples
Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations
Lasso guarantees for \(\beta\)-mixing heavy-tailed time series
Learning rates of regularized regression for exponentially strongly mixing sequence
Consistency of maximum likelihood for continuous-space network models. I
Learning from non-irreducible Markov chains
Generalization bounds of ERM algorithm with \(V\)-geometrically ergodic Markov chains
Lower bounds on performance of metric tree indexing schemes for exact similarity search in high dimensions
Indexability, concentration, and VC theory
Computational complexity of randomized algorithms for solving parameter-dependent linear matrix inequalities.
Fast learning from \(\alpha\)-mixing observations
Price of anarchy in electric vehicle charging control games: when Nash equilibria achieve social welfare
Learning Theory Estimates with Observations from General Stationary Stochastic Processes
Classification with non-i.i.d. sampling
Generalization performance of least-square regularized regression algorithm with Markov chain samples
The performance bounds of learning machines based on exponentially strongly mixing sequences
Unsupervised slow subspace-learning from stationary processes
Probabilistic methods in cancer biology
Monte Carlo algorithms for optimal stopping and statistical learning
Learning rate of magnitude-preserving regularization ranking with dependent samples
Nonparametric learning approach to estimate conditional quantiles in the dependent functional data case
SVM-boosting based on Markov resampling: theory and algorithm
Probabilistic feasibility guarantees for solution sets to uncertain variational inequalities
A statistical learning perspective on switched linear system identification
Optimal rate for support vector machine regression with Markov chain samples
Generalization performance of Gaussian kernels SVMC based on Markov sampling