Convergence of Gradient Descent for Minimum Error Entropy Principle in Linear Regression
From MaRDI portal
Publication: 4619681
DOI: 10.1109/TSP.2016.2612169
zbMath: 1414.94263
OpenAlex: W2522471716
MaRDI QID: Q4619681
Qiang Wu, Ding-Xuan Zhou, Ting Hu
Publication date: 7 February 2019
Published in: IEEE Transactions on Signal Processing
Full work available at URL: https://doi.org/10.1109/tsp.2016.2612169
Related Items (8):
- Learning theory of minimum error entropy under weak moment conditions
- Unnamed Item
- Fast rates of minimum error entropy with heavy-tailed noise
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Stability and optimization error of stochastic gradient descent for pairwise learning
- Online minimum error entropy algorithm with unbounded sampling
- Kernel gradient descent algorithm for information theoretic learning
- A Framework of Learning Through Empirical Gain Maximization