Asymptotics of $\ell_2$ Regularized Network Embeddings
Publication: 6387578
arXiv: 2201.01689
MaRDI QID: Q6387578
Author name not available
Publication date: 5 January 2022
Abstract: A common approach to solving prediction tasks on large networks, such as node classification or link prediction, begins by learning a Euclidean embedding of the nodes of the network, to which traditional machine learning methods can then be applied. This includes methods such as DeepWalk and node2vec, which learn embeddings by optimizing stochastic losses formed over subsamples of the graph at each iteration of stochastic gradient descent. In this paper, we study the effects of adding an $\ell_2$ penalty on the embedding vectors to the training loss of these types of methods. We prove that, under some exchangeability assumptions on the graph, this asymptotically leads to learning a graphon with a nuclear-norm-type penalty, and we give guarantees for the asymptotic distribution of the learned embedding vectors. In particular, the exact form of the penalty depends on the choice of subsampling method used as part of stochastic gradient descent. We also illustrate empirically that concatenating node covariates to $\ell_2$ regularized node2vec embeddings leads to comparable, if not superior, performance to methods which incorporate node covariates and the network structure in a nonlinear manner.
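To make the regularized objective concrete, the following is a minimal sketch (not the paper's companion code) of one SGD step on a skip-gram-style subsampled loss with the penalty $\lambda \sum_i \|\omega_i\|_2^2$ added; the toy pair subsampling, the learning rate, and names such as `sgd_step` are illustrative assumptions. The "nuclear-norm-type" description reflects the standard identity $\min_{M = UV^\top} \tfrac{1}{2}\big(\|U\|_F^2 + \|V\|_F^2\big) = \|M\|_*$, under which per-vector $\ell_2$ penalties on a factorization act like a nuclear-norm penalty on the reconstructed matrix.

```python
# Illustrative sketch only: an l2-regularized SGD step for a
# node2vec/DeepWalk-style embedding loss. Not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, embed_dim, lam = 100, 16, 0.1          # lam = l2 penalty weight (assumed)
omega = rng.normal(scale=0.1, size=(n_nodes, embed_dim))  # embedding vectors

def sgd_step(pos_pairs, neg_pairs, lr=0.01):
    """One SGD step on a subsampled logistic loss with an l2 penalty."""
    grad = np.zeros_like(omega)
    # Positive pairs get label 1 (co-occurring nodes), negatives get label 0.
    for (i, j), y in [(p, 1.0) for p in pos_pairs] + [(p, 0.0) for p in neg_pairs]:
        p = 1.0 / (1.0 + np.exp(-omega[i] @ omega[j]))  # sigmoid of inner product
        g = p - y                                       # d(loss)/d(inner product)
        grad[i] += g * omega[j]
        grad[j] += g * omega[i]
    # The penalty lam * sum_i ||omega_i||^2 contributes 2*lam*omega to the gradient.
    grad += 2.0 * lam * omega
    omega[:] -= lr * grad

# Usage with a toy subsample (pairs are illustrative, not from a real walk):
pos = [(0, 1), (1, 2)]   # co-occurring pairs from a random-walk subsample
neg = [(0, 5), (1, 7)]   # negative samples
for _ in range(200):
    sgd_step(pos, neg)
```

In this sketch the penalty is applied to every embedding vector at each step; under the abstract's framing, how the pairs are subsampled changes the effective weighting of the penalty, which is why the limiting penalty's exact form depends on the subsampling scheme.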
Has companion code repository: https://github.com/aday651/embed-reg