High-Dimensional Private Empirical Risk Minimization by Greedy Coordinate Descent (arXiv)

Author: Paul Mangold, Aurélien Bellet, Joseph Salmon, Marc Tommasi

Abstract: In this paper, we study differentially private empirical risk minimization (DP-ERM). It has been shown that the (worst-case) utility of DP-ERM reduces as the dimension increases. This is a major obstacle to privately learning large machine learning models. In high dimension, it is common for some model's parameters to carry more information than others. To exploit this, we propose a differentially private greedy coordinate descent (DP-GCD) algorithm. At each iteration, DP-GCD privately performs a coordinate-wise gradient step along the gradients' (approximately) greatest entry. We show theoretically that DP-GCD can improve utility by exploiting structural properties of the problem's solution (such as sparsity or quasi-sparsity), with very fast progress in early iterations. We then illustrate this numerically, both on synthetic and real datasets.
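The core iteration described in the abstract, privately selecting the coordinate with the (approximately) greatest gradient entry and taking a noisy step along it, can be sketched as follows. This is a simplified illustration, not the paper's exact algorithm: the function `dp_gcd`, the noisy-argmax selection via Laplace perturbation, and all parameter choices here are assumptions for demonstration purposes.

```python
import numpy as np

def dp_gcd(grad_fn, w0, n_iters, step_size, noise_scale, rng=None):
    """Sketch of a differentially private greedy coordinate descent step.

    Each iteration perturbs the gradient magnitudes with Laplace noise,
    picks the (approximately) largest entry, and takes a noisy
    coordinate-wise gradient step along that single coordinate.
    (Hypothetical simplification; noise calibration to a privacy budget
    is omitted.)
    """
    rng = np.random.default_rng(rng)
    w = w0.copy()
    for _ in range(n_iters):
        g = grad_fn(w)
        # Noisy argmax: perturb |gradient| entries before selecting.
        j = np.argmax(np.abs(g) + rng.laplace(scale=noise_scale, size=g.shape))
        # Noisy gradient step on the selected coordinate only.
        w[j] -= step_size * (g[j] + rng.laplace(scale=noise_scale))
    return w

# Toy quadratic f(w) = 0.5 * ||w - w_star||^2 with a sparse solution:
# only one coordinate of w_star is nonzero, mimicking the sparse
# settings where greedy coordinate selection makes fast early progress.
w_star = np.zeros(50)
w_star[3] = 5.0
w = dp_gcd(lambda w: w - w_star, np.zeros(50), n_iters=100,
           step_size=0.5, noise_scale=0.01, rng=0)
```

On this toy problem, the noisy argmax repeatedly selects the single informative coordinate in the early iterations, so the iterate approaches the sparse solution quickly while the remaining coordinates stay near zero.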