Zeno: Distributed Stochastic Gradient Descent with Suspicion-based Fault-tolerance
Cong Xie, Oluwasanmi Koyejo, Indranil Gupta
June 12, 2019
Poster 158
Security in Distributed ML

Zeno: distributed synchronous SGD that
• tolerates an arbitrary number of malicious workers
• provides convergence guarantees for non-convex problems

Goal: converge under attacks/failures, regardless of false negatives in the filtering (honest gradients mistakenly suspected and dropped).

                                            Prev.   Ours
Tolerates a majority of malicious workers    No     Yes
Considers the progress of optimization       No     Yes
Tolerates stealth adversary (empirically)    No     Yes
Byzantine-tolerant SGD

[Figure: parameter-server architecture with m workers; honest workers push correct gradients, a Byzantine worker pushes a Byzantine gradient.]

m workers, distributed SGD:
1: workers pull the model from the aggregation server
2: each worker computes a gradient
3: workers push their gradients (honest workers push correct gradients; Byzantine workers may push arbitrary ones)
4: the server aggregates the gradients
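The four-step round in the figure can be sketched as follows. This is a minimal toy simulation, not the poster's code: the quadratic loss, worker functions, and all parameter values are our own illustrative assumptions.

```python
import numpy as np

def honest_gradient(x, rng):
    # Stochastic gradient of a toy loss F(x) = ||x||^2 / 2 (assumed).
    return x + 0.1 * rng.standard_normal(x.shape)

def byzantine_gradient(x, rng):
    # A Byzantine worker may push an arbitrary vector, e.g. a large
    # negated gradient that drives the model uphill.
    return -10.0 * x

def sgd_round(x, workers, gamma, rng):
    # Steps 1-3: each worker pulls x, computes a gradient, pushes it.
    grads = [w(x, rng) for w in workers]
    # Step 4: the server aggregates -- here a naive (unprotected) mean.
    agg = np.mean(grads, axis=0)
    return x - gamma * agg

rng = np.random.default_rng(0)
x = np.ones(5)
workers = [honest_gradient, honest_gradient, byzantine_gradient]
x_next = sgd_round(x, workers, gamma=0.1, rng=rng)
```

With naive mean aggregation, a single Byzantine worker dominates the round: the mean of two correct gradients and one large malicious one points away from the optimum, so the update increases the loss. This is the failure mode that suspicion-based aggregation addresses.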
Main Idea & Results

⋆ Sort the candidate gradients g_i(x), i ∈ [m], by the stochastic descent score:

Definition (stochastic descent score of an update u):
    \mathrm{Score}_{\gamma,\rho}(u, x) = f_r(x) - f_r(x - \gamma u) - \rho \|u\|^2,
where f_r(x) is an unbiased estimator of the loss F(x), evaluated on a validation set.

⋆ Zeno: filter out the b lowest-scoring ("worst") gradients and average the remaining m − b:
    \frac{1}{m-b} \sum_{i=1}^{m-b} \tilde{v}_{(i)},   with b > q, where q is the number of Byzantine workers.

⋆ Convergence after T iterations:
    \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\|\nabla F(x_t)\|^2 \le O\!\left(\frac{1}{\sqrt{T}}\right) + O\!\left(\frac{(b-q+1)(m-q)}{(m-b)^2}\right).
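The scoring-and-filtering rule above can be sketched in a few lines. This is our own minimal implementation of the stated rule; the toy loss standing in for f_r and all parameter values are illustrative assumptions, not from the poster.

```python
import numpy as np

def score(u, x, f_r, gamma, rho):
    # Stochastic descent score: estimated loss decrease of the step
    # x -> x - gamma*u, minus a penalty on the update's magnitude.
    return f_r(x) - f_r(x - gamma * u) - rho * np.dot(u, u)

def zeno_aggregate(grads, x, f_r, gamma, rho, b):
    # Sort candidate gradients by descending score, drop the b worst,
    # and average the remaining m - b.
    scores = [score(g, x, f_r, gamma, rho) for g in grads]
    order = np.argsort(scores)[::-1]          # highest score first
    kept = [grads[i] for i in order[: len(grads) - b]]
    return np.mean(kept, axis=0)

# f_r should be an unbiased estimator of F on a validation batch;
# for illustration we use the toy loss F(x) = ||x||^2 / 2 directly.
f_r = lambda x: 0.5 * np.dot(x, x)

x = np.ones(4)
honest = [x + 0.01, x - 0.01]     # roughly correct gradients
byz = [-100.0 * x]                # malicious update, points uphill
agg = zeno_aggregate(honest + byz, x, f_r, gamma=0.1, rho=0.001, b=1)
```

The malicious update gets a large negative score (applying it would sharply increase the estimated loss, and its magnitude penalty is large), so with b = 1 it is filtered out and the aggregate is the mean of the honest gradients.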