A distributed kernel summation framework for general‐dimension machine learning
From MaRDI portal
Publication:4969920
DOI: 10.1002/sam.11207
OpenAlex: W1908407117
MaRDI QID: Q4969920
Richard Vuduc, Piyush Sao, Alexander Gray, Dongryeol Lee
Publication date: 14 October 2020
Published in: Statistical Analysis and Data Mining: The ASA Data Science Journal
Full work available at URL: https://doi.org/10.1002/sam.11207
Keywords: kernel methods, nonparametric methods, CUDA, GPGPU, parallel machine learning, parallel multidimensional trees
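The keywords above refer to the kernel summation problem this publication addresses: evaluating, for every query point, a weighted sum of kernel values over a set of reference points. A minimal NumPy sketch of the direct computation that tree-based frameworks like this one accelerate (the function name, Gaussian kernel choice, and bandwidth are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def direct_kernel_sum(X, Y, weights, bandwidth=1.0):
    """Naive O(N*M) Gaussian kernel summation: for each query x in X,
    return sum_j weights[j] * exp(-||x - Y[j]||^2 / (2 h^2)).
    Hierarchical/tree-based methods approximate this sum much faster."""
    # Pairwise squared distances between queries X (N, d) and references Y (M, d)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return K @ weights

rng = np.random.default_rng(0)
Y = rng.normal(size=(100, 3))   # reference points
X = rng.normal(size=(10, 3))    # query points
w = np.ones(100)                # uniform weights
s = direct_kernel_sum(X, Y, w)
print(s.shape)  # one summed value per query point: (10,)
```

The quadratic cost of this direct evaluation in the number of points is what motivates distributed, tree-accelerated summation in general dimension.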
Related Items
A hybrid stochastic interpolation and compression method for kernel matrices ⋮ An \(O(N \log N)\) hierarchical random compression method for kernel matrices by sampling partial matrix entries ⋮ Fast and Stable Multivariate Kernel Density Estimation by Fast Sum Updating ⋮ ASKIT: An Efficient, Parallel Library for High-Dimensional Kernel Summations ⋮ ASKIT: Approximate Skeletonization Kernel-Independent Treecode in High Dimensions
Uses Software
Cites Work
- A kernel-independent adaptive fast multipole algorithm in two and three dimensions
- Load balancing and data locality in adaptive hierarchical \(N\)-body methods: Barnes-Hut, fast multipole, and radiosity
- PetFMM-A dynamically load-balancing parallel fast multipole library
- Multidimensional binary search trees used for associative searching
- A decomposition of multidimensional point sets with applications to k-nearest-neighbors and n-body potential fields
- On Estimation of a Probability Density Function and Mode
- The Fast Gauss Transform
- A fast algorithm for particle simulations