Distributed stochastic constrained optimization with constant step-sizes via saddle-point dynamics

Yi Huang, Shisheng Cui*, Xianlin Zeng, Ziyang Meng

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper considers distributed stochastic optimization problems over a multi-agent network, in which the agents collaboratively minimize the sum of local expectation-valued cost functions subject to nonidentical set constraints. We first recast the distributed constrained optimization problem as a constrained saddle-point problem. Subsequently, two distributed stochastic algorithms with constant step sizes are developed, based on the optimistic gradient descent ascent (SOGDA) and extragradient (SEG) methods, in which a variable sample-size technique is incorporated to reduce the variance of the sampled gradients. We present explicit selection criteria for the constant step size, under which the developed algorithms achieve almost sure convergence to an optimal solution. Moreover, a convergence rate of O(1/k) is established for merely convex cost functions, matching the optimal rate of the deterministic counterpart. Finally, a numerical example is provided to illustrate the theoretical findings.
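The abstract outlines three algorithmic ingredients: a saddle-point reformulation, optimistic gradient (or extragradient) updates with a constant step size, and a variable (growing) sample size to damp gradient noise. Below is a minimal, centralized sketch of stochastic OGDA with a growing batch size on a toy bilinear saddle-point problem; the problem data, box projection, and batch schedule are illustrative assumptions, not the paper's distributed algorithm or its step-size criteria.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): stochastic optimistic
# gradient descent ascent with a constant step size and a growing
# ("variable") sample size, applied to the toy saddle-point problem
#   min_x max_y  E[ (x - theta)^T A y ]
# whose saddle point is x = theta, y = 0. All data are assumptions.

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
theta = rng.standard_normal(n)

def sampled_grads(x, y, batch):
    """Average `batch` noisy gradient samples to reduce variance."""
    noise_x = rng.standard_normal((batch, n)).mean(axis=0)
    noise_y = rng.standard_normal((batch, n)).mean(axis=0)
    gx = A @ y + noise_x              # gradient w.r.t. x
    gy = A.T @ (x - theta) + noise_y  # gradient w.r.t. y
    return gx, gy

def project(z, lo=-5.0, hi=5.0):
    """Projection onto a box, standing in for the set constraints."""
    return np.clip(z, lo, hi)

alpha = 0.05                          # constant step size
x = np.zeros(n); y = np.zeros(n)
gx_prev, gy_prev = sampled_grads(x, y, 1)

for k in range(1, 2001):
    batch = k + 1                     # variable sample size: grows with k
    gx, gy = sampled_grads(x, y, batch)
    # Optimistic update: extrapolate using the previous gradient.
    x = project(x - alpha * (2 * gx - gx_prev))
    y = project(y + alpha * (2 * gy - gy_prev))
    gx_prev, gy_prev = gx, gy

print("distance to saddle point:", np.linalg.norm(x - theta))
```

An extragradient (SEG-style) variant would instead take an explicit half-step: evaluate the gradients at a projected trial point and then update from the original iterate. In the paper's distributed setting, each agent would additionally mix its iterate with those of its neighbors over the network at every iteration.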

Original language: English
Article number: 112575
Journal: Automatica
Volume: 183
Publication status: Published - Jan 2026
Externally published: Yes

Keywords

  • Distributed stochastic optimization
  • Multi-agent network
  • Saddle-point dynamics
  • Set constraints
