Pages that link to "Item:Q1009294"
From MaRDI portal
The following pages link to Convex multi-task feature learning (Q1009294):
Displaying 50 items.
- Minimum \( n\)-rank approximation via iterative hard thresholding (Q299732)
- The bounds of restricted isometry constants for low rank matrices recovery (Q365859)
- Geometry preserving multi-task metric learning (Q374164)
- Learning with infinitely many features (Q374174)
- Estimating variable structure and dependence in multitask learning via gradients (Q415606)
- Multi-output learning via spectral filtering (Q439000)
- kLog: a language for logical and relational learning with kernels (Q464624)
- Feature selection and multi-kernel learning for sparse representation on a manifold (Q470203)
- Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions (Q526680)
- Approximation accuracy, gradient methods, and error bound for structured convex optimization (Q607498)
- Inferring multiple graphical structures (Q637986)
- Max-norm optimization for robust matrix recovery (Q681486)
- Regularized multidimensional scaling with radial basis functions (Q747026)
- Joint Laplacian feature weights learning (Q898384)
- Harnessing lab knowledge for real-world action recognition (Q903539)
- Flexible latent variable models for multi-task learning (Q1009298)
- A joint convex penalty for inverse covariance matrix estimation (Q1623469)
- A distributed Frank-Wolfe framework for learning low-rank matrices with the trace norm (Q1631800)
- Data shared Lasso: a novel tool to discover uplift (Q1659082)
- A novel multivariate performance optimization method based on sparse coding and hyper-predictor learning (Q1669174)
- A unified approach to error bounds for structured convex optimization problems (Q1675267)
- Multi-target regression via input space expansion: treating targets as inputs (Q1689552)
- Group online adaptive learning (Q1698874)
- Least absolute deviations learning of multiple tasks (Q1716982)
- Low-rank representation-based object tracking using multitask feature learning with joint sparsity (Q1722184)
- Simultaneous nonparametric regression in RADWT dictionaries (Q1727921)
- Linearized and kernelized sparse multitask learning for predicting cognitive outcomes in Alzheimer's disease (Q1731300)
- Rate-optimal perturbation bounds for singular subspaces with applications to high-dimensional statistics (Q1747733)
- Slice inverse regression with score functions (Q1753150)
- Regularized nonnegative shared subspace learning (Q1944970)
- Finite rank kernels for multi-task learning (Q1946523)
- Regularizers for structured sparsity (Q1949299)
- Composite kernel learning (Q1959567)
- Multi-domain learning by confidence-weighted parameter combination (Q1959570)
- Exact matrix completion based on low rank Hankel structure in the Fourier domain (Q1979919)
- Alternating direction multiplier method for matrix \(l_{2,1}\)-norm optimization in multitask feature learning problems (Q2007113)
- An accelerated IRNN-iteratively reweighted nuclear norm algorithm for nonconvex nonsmooth low-rank minimization problems (Q2029679)
- Low-rank matrix recovery via regularized nuclear norm minimization (Q2036488)
- Tackling ordinal regression problem for heterogeneous data: sparse and deep multi-task learning approaches (Q2036753)
- Inductive matrix completion with feature selection (Q2038483)
- Low-rank approximation algorithms for matrix completion with random sampling (Q2038493)
- \(\ell_{2,0}\)-norm based selection and estimation for multivariate generalized linear models (Q2048127)
- Discriminant analysis of regularized multidimensional scaling (Q2061339)
- Sensitivity of low-rank matrix recovery (Q2100520)
- Pointwise mutual information sparsely embedded feature selection (Q2105575)
- Proof methods for robust low-rank matrix recovery (Q2106469)
- Regularized high dimension low tubal-rank tensor regression (Q2137811)
- Which option is a better way to improve transfer learning performance? (Q2149856)
- Robust reduced rank regression in a distributed setting (Q2158850)
- Joint feature selection and classification for positive unlabelled multi-label data using weighted penalized empirical risk minimization (Q2162144)