Statistical computational learning
DOI: 10.1007/978-3-030-06164-7_11 · zbMATH Open: 1547.6865 · MaRDI QID: Q6602226
Frédéric Koriche, Antoine Cornuéjols, Richard Nock
Publication date: 11 September 2024
MSC classification: Artificial neural networks and deep learning (68T07); Learning and adaptive systems in artificial intelligence (68T05)
Cites Work
- Unnamed items (17 unresolved citation entries)
- Bagging predictors
- On the complexity analysis of randomized block-coordinate descent methods
- Integer linear programming for the Bayesian network structure learning problem
- Learning intersections and thresholds of halfspaces
- The EM algorithm for graphical association models with missing data
- Exponentiated gradient versus gradient descent for linear predictors
- The robustness of the \(p\)-norm algorithms
- A coordinate gradient descent method for nonsmooth separable minimization
- A universal prior for integers and estimation by minimum description length
- Quantifying inductive bias: AI learning algorithms and Valiant's learning framework
- On the complexity of polyhedral separability
- Rank-\(r\) decision trees are a subclass of \(r\)-decision lists
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- The densest hemisphere problem
- Toward efficient agnostic learning
- The hardness of approximate optima in lattices, codes, and systems of linear equations
- A decision-theoretic generalization of on-line learning and an application to boosting
- On the difficulty of approximately maximizing agreements
- Introductory lectures on convex optimization. A basic course
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Maximum likelihood bounded tree-width Markov networks
- Learning DNF in time \(2^{\widetilde O(n^{1/3})}\)
- On structured output training: hard cases and an efficient alternative
- Improved boosting algorithms using confidence-rated predictions
- Coordinate descent algorithms
- Label ranking by learning pairwise preferences
- The complexity of properly learning simple concept classes
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- Cryptographic hardness for learning intersections of halfspaces
- On the density of families of sets
- Discrete Mathematics of Neural Networks
- Machine Learning
- Optimization with Sparsity-Inducing Penalties
- Decision Forests: A Unified Framework for Classification, Regression, Density Estimation, Manifold Learning and Semi-Supervised Learning
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Coresets, sparse greedy approximation, and the Frank-Wolfe algorithm
- Label Ranking Algorithms: A Survey
- A Survey and Empirical Comparison of Object Ranking Methods
- An Elementary Introduction to Statistical Learning Theory
- Probability for Statistics and Machine Learning
- Trading Accuracy for Sparsity in Optimization Problems with Sparsity Constraints
- Learning in the Presence of Malicious Errors
- Non-null ranking models. I
- Algebraic Geometry and Statistical Learning Theory
- Learnability and the Vapnik-Chervonenkis dimension
- Support Vector Machines
- On Agnostic Learning of Parities, Monomials, and Halfspaces
- Preference Learning
- Graphical Models, Exponential Families, and Variational Inference
- Modeling and Reasoning with Bayesian Networks
- A theory of the learnable
- Computational limitations on learning from examples
- Learning Integer Lattices
- Using the Perceptron Algorithm to Find Consistent Hypotheses
- Learning Boolean formulas
- Scale-sensitive dimensions, uniform convergence, and learnability
- DOI 10.1162/153244303322533188
- DOI 10.1162/153244302760200704
- DOI 10.1162/1532443041827916
- Sparse Approximate Solutions to Linear Systems
- Agnostic Learning of Monomials by Halfspaces Is Hard
- An Introduction to Statistical Learning
- Neural Network Learning
- Breaking the Curse of Dimensionality with Convex Neural Networks
- Greedy Sparsity-Constrained Optimization
- Ranking Tournaments
- Approximating discrete probability distributions with dependence trees
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
- A Stochastic Approximation Method
- Random forests
- New analysis and results for the Frank-Wolfe method
- The Elements of Statistical Learning
- Computing machinery and intelligence