Distributed Learning of Non-Convex Linear Models with One Round of Communication (2019)

by Mike Izbicki and Christian R. Shelton


Abstract: We present the optimal weighted average (OWA) distributed learning algorithm for linear models. OWA achieves statistically optimal learning rates, uses only one round of communication, works on non-convex problems, and supports a fast cross validation procedure. The OWA algorithm first trains local models on each of the compute nodes; then a master machine merges the models using a second round of optimization. This second optimization uses only a small fraction of the data, and so has negligible computational cost. Compared with similar distributed estimators that merge locally trained models, OWA either has stronger statistical guarantees, is applicable to more models, or has a more computationally efficient merging procedure.
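
The merge step described in the abstract admits a compact sketch. The code below is a minimal illustration under stated assumptions, not the paper's implementation: each worker fits a local linear model on its shard, the workers send only their parameter vectors to the master (the single round of communication), and the master learns a weighting over those vectors by running a second, cheap optimization on a small subsample of the data. The choice of scikit-learn logistic regression as the local learner, the synthetic data, and names such as merge_frac are assumptions made for the example.

# Hypothetical sketch of a one-round merge of locally trained linear
# models; the local learner and all names here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary classification data.
n, d = 10_000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(int)

n_machines = 8
merge_frac = 0.05  # the "small fraction of the data" used by the master

# Round 1: each worker trains a local linear model on its shard.
shards = np.array_split(rng.permutation(n), n_machines)
local_models = []
for idx in shards:
    clf = LogisticRegression(fit_intercept=False, max_iter=1000)
    clf.fit(X[idx], y[idx])
    local_models.append(clf.coef_.ravel())
W = np.column_stack(local_models)  # d x m matrix of local parameter vectors

# One round of communication: workers send their parameter vectors to
# the master. Round 2 (master only): learn a weighting v over the local
# models by fitting a second model on the projected features X W,
# using just a small subsample.
sub = rng.choice(n, size=int(merge_frac * n), replace=False)
merger = LogisticRegression(fit_intercept=False, max_iter=1000)
merger.fit(X[sub] @ W, y[sub])
v = merger.coef_.ravel()  # m weights, one per machine

w_merged = W @ v           # weighted-average linear model
w_naive = W.mean(axis=1)   # naive parameter averaging, for comparison

for name, w in [("weighted merge", w_merged), ("naive average", w_naive)]:
    acc = ((X @ w > 0).astype(int) == y).mean()
    print(f"{name}: accuracy = {acc:.3f}")

Because the second optimization is over m weights rather than d parameters and touches only merge_frac of the data, its cost is negligible next to the local training, which is the property the abstract highlights.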

Download Information

Mike Izbicki and Christian R. Shelton (2019). "Distributed Learning of Non-Convex Linear Models with One Round of Communication." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases. pdf

BibTeX citation

@inproceedings{IzbShe19,
   author = "Mike Izbicki and Christian R. Shelton",
   title = "Distributed Learning of Non-Convex Linear Models with One Round of Communication",
   booktitle = "European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases",
   booktitleabbr = "ECML/PKDD",
   year = 2019,
}