Abstract
This paper considers distributed stochastic optimization over a multi-agent network, where the agents collaboratively minimize the sum of individual expectation-valued cost functions subject to nonidentical set constraints. We first recast the distributed constrained optimization problem as a constrained saddle-point problem. We then develop two distributed stochastic algorithms with constant step sizes, based on the optimistic gradient descent ascent (SOGDA) and extragradient (SEG) methods, into which a variable sample-size technique is incorporated to reduce the variance of the sampled gradients. We give explicit selection criteria for the constant step size under which both algorithms converge almost surely to an optimal solution. Moreover, for merely convex cost functions the convergence rate is O(1/k), matching the optimal rate of the deterministic counterpart. Finally, a numerical example illustrates the theoretical findings.
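As a hedged illustration of the SOGDA-style update mentioned in the abstract, the sketch below applies projected stochastic optimistic gradient descent ascent with a geometrically growing sample size to a toy saddle-point problem over box constraints. The toy objective, the matrix `A`, the noise model, and the parameters `alpha` and `rho` are illustrative assumptions, not the paper's formulation or its step-size selection criteria.

```python
# A minimal, illustrative sketch (not the paper's exact algorithm):
# projected stochastic optimistic gradient descent ascent (OGDA) with a
# geometrically growing sample size N_k, applied to the toy problem
#   min_{x in X} max_{y in Y}  0.5*||x||^2 + x^T A y - 0.5*||y||^2
# over box constraints. Everything below is an assumed setup.
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
alpha, rho = 0.05, 1.05          # constant step size; sample-size growth rate

def project_box(z, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^(2n) (the set constraint)."""
    return np.clip(z, lo, hi)

def sampled_operator(x, y, n_samples):
    """Average n_samples noisy evaluations of the saddle-point operator
    F(x, y) = (grad_x f, -grad_y f); the additive noise mimics sampling."""
    Fx = x + A @ y               # grad_x f
    Fy = y - A.T @ x             # -grad_y f
    noise = rng.standard_normal((n_samples, 2 * n)).mean(axis=0)
    return np.concatenate([Fx, Fy]) + noise

z = np.zeros(2 * n)              # stacked iterate (x, y)
g_prev = sampled_operator(z[:n], z[n:], 1)
for k in range(1, 200):
    N_k = int(np.ceil(rho ** k))                      # variable sample size
    g = sampled_operator(z[:n], z[n:], N_k)
    z = project_box(z - alpha * (2.0 * g - g_prev))   # optimistic step
    g_prev = g

print("approximate saddle point (x*, y*):", np.round(z, 3))
```

Averaging `N_k` samples per iteration is what the variable sample-size technique amounts to here: the variance of the sampled operator decays like 1/N_k while the step size stays constant, which is what allows a constant-step scheme to converge.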
| Field | Value |
|---|---|
| Original language | English |
| Article number | 112575 |
| Journal | Automatica |
| Volume | 183 |
| DOIs | |
| Publication status | Published - Jan 2026 |
| Externally published | Yes |
Keywords
- Distributed stochastic optimization
- Multi-agent network
- Saddle-point dynamics
- Set constraints