Mathematical Research Data Initiative
EGC: entropy-based gradient compression for distributed deep learning

From MaRDI portal
Publication:2056378

DOI: 10.1016/J.INS.2020.05.121
zbMath: 1475.68326
OpenAlex: W3092359860
MaRDI QID: Q2056378

Di Kuang, Yuan Mei, Weigang Wu, Danyang Xiao, Mengqiang Chen, Binbin Guo

Publication date: 2 December 2021

Published in: Information Sciences

Full work available at URL: https://doi.org/10.1016/j.ins.2020.05.121


zbMATH Keywords

  • entropy
  • neural networks
  • deep learning
  • distributed training
  • gradient compression


Mathematics Subject Classification ID

Artificial neural networks and deep learning (68T07)


Related Items (3)

  • A new algorithm for compression of partially commutative alphabets
  • Byzantine-robust variance-reduced federated learning over distributed non-i.i.d. data
  • RTGA: robust ternary gradients aggregation for federated learning


Uses Software

  • Penn Treebank
  • MNIST
  • CIFAR
  • ImageNet
  • Find
  • TernGrad
  • GNMT
  • HOGWILD
  • GPipe



Cites Work

  • Unnamed Item
  • Performance Optimization Using Partitioned SpMV on GPUs and Multicore CPUs
  • Scheduling Precedence Constrained Stochastic Tasks on Heterogeneous Cluster Systems
  • A Novel Security-Driven Scheduling Algorithm for Precedence-Constrained Tasks in Heterogeneous Distributed Systems




This page was built for publication: EGC: entropy-based gradient compression for distributed deep learning

Retrieved from "https://portal.mardi4nfdi.de/w/index.php?title=Publication:2056378&oldid=14539285"
This page was last edited on 1 February 2024, at 20:02.