Journal of Industrial and Management Optimization (JIMO)
 

Incremental gradient-free method for nonsmooth distributed optimization
Pages: 1841-1857, Issue 4, October 2017

doi:10.3934/jimo.2017021

Jueyou Li - School of Mathematical Sciences, Chongqing Normal University, Chongqing, 400047, China
Guoquan Li - School of Mathematical Sciences, Chongqing Normal University, Chongqing, 400047, China
Zhiyou Wu - School of Mathematical Sciences, Chongqing Normal University, Chongqing, 400047, China
Changzhi Wu - Australasian Joint Research Center for Building Information Modelling, School of Built Environment, Curtin University, Bentley, WA, 6102, Australia
Xiangyu Wang - Australasian Joint Research Center for Building Information Modelling, School of Built Environment, Curtin University, Bentley, WA, 6102, Australia
Jae-Myung Lee - Department of Naval Architecture and Ocean Engineering, Pusan National University, Busan, South Korea
Kwang-Hyo Jung - Department of Naval Architecture and Ocean Engineering, Pusan National University, Busan, South Korea

1 A. M. Bagirov, M. Ghosh and D. Webb, A derivative-free method for linearly constrained nonsmooth optimization, J. Ind. Manag. Optim., 2 (2006), 319-338.       
2 D. P. Bertsekas, Stochastic optimization problems with nondifferentiable cost functionals, J. Optim. Theory Appl., 12 (1973), 218-231.
3 D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, Athena Scientific, Belmont, MA, 1989.
4 D. P. Bertsekas, A. Nedić and E. Ozdaglar, Convex Analysis and Optimization, Athena Scientific, Belmont, MA, 2003.       
5 D. P. Bertsekas, Incremental proximal methods for large scale convex optimization, Math. Program. B., 129 (2011), 163-195.       
6 A. R. Conn, K. Scheinberg and L. N. Vicente, Introduction to Derivative-Free Optimization, MPS-SIAM Series on Optimization, SIAM, Philadelphia, 2009.       
7 J. C. Duchi, A. Agarwal and M. J. Wainwright, Dual averaging for distributed optimization: Convergence analysis and network scaling, IEEE Trans. Autom. Control., 57 (2012), 592-606.       
8 J. C. Duchi, P. L. Bartlett and M. J. Wainwright, Randomized smoothing for stochastic optimization, SIAM J. Optim., 22 (2012), 674-701.
9 X. X. Huang, X. Q. Yang and K. L. Teo, A smoothing scheme for optimization problems with Max-Min constraints, J. Ind. Manag. Optim., 3 (2007), 209-222.       
10 J. Hiriart-Urruty and C. Lemaréchal, Convex Analysis and Minimization Algorithms I, Springer, Berlin, 1996.
11 X. Zhang, C. Wu, J. Li, X. Wang, Z. Yang, J. M. Lee and K. H. Jung, Binary artificial algae algorithm for multidimensional knapsack problems, Applied Soft Computing, 43 (2016), 583-595.
12 B. Johansson, M. Rabi and M. Johansson, A randomized incremental subgradient method for distributed optimization in networked systems, SIAM J. Optim., 20 (2009), 1157-1170.       
13 K. C. Kiwiel, Convergence of approximate and incremental subgradient methods for convex optimization, SIAM J. Optim., 14 (2004), 807-840.       
14 J. Y. Li, C. Z. Wu, Z. Y. Wu and Q. Long, Gradient-free method for nonsmooth distributed optimization, J. Glob. Optim., 61 (2015), 325-340.       
15 J. Y. Li, C. Z. Wu, Z. Y. Wu, Q. Long and X. Y. Wang, Distributed proximal-gradient method for convex optimization with inequality constraints, ANZIAM J., 56 (2014), 160-178.       
16 A. Nedić and D. P. Bertsekas, Convergence rate of incremental subgradient algorithm, in Stochastic Optimization: Algorithms and Applications (eds. S. Uryasev and P. M. Pardalos), Applied Optimization, 54, Springer, 2001, 223-264.
17 A. Nedić and D. P. Bertsekas, Incremental subgradient methods for nondifferentiable optimization, SIAM J. Optim., 12 (2001), 109-138.       
18 A. Nedić and A. Ozdaglar, Distributed subgradient methods for multi-agent optimization, IEEE Trans. Autom. Control., 54 (2009), 48-61.       
19 Y. Nesterov, Random Gradient-Free Minimization of Convex Functions, Technical report, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain, January 2011. Available from: http://www.ecore.be/DPs/dp_1297333890.pdf.
20 B. T. Polyak and J. Tsypkin, Robust identification, Automatica, 16 (1980), 53-63.       
21 M. G. Rabbat and R. D. Nowak, Quantized incremental algorithms for distributed optimization, IEEE J. Sel. Areas Commun., 23 (2005), 798-808.
22 S. S. Ram, A. Nedić and V. V. Veeravalli, Incremental stochastic subgradient algorithms for convex optimization, SIAM J. Optim., 20 (2009), 691-717.
23 Q. J. Shi, C. He and L. G. Jiang, Normalized incremental subgradient algorithm and its application, IEEE Trans. Signal Process., 57 (2009), 3759-3774.
24 R. L. Sheu, M. J. Ting and I. L. Wang, Maximum flow problem in the distribution network, J. Ind. Manag. Optim., 2 (2006), 237-254.
25 M. V. Solodov, Incremental gradient algorithms with stepsizes bounded away from zero, Comput. Optim. Appl., 11 (1998), 28-35.       
26 D. M. Yuan, S. Y. Xu and J. W. Lu, Gradient-free method for distributed multi-agent optimization via push-sum algorithms, Int. J. Robust Nonlinear Control, 25 (2015), 1569-1580.       
27 Q. Long and C. Wu, A hybrid method combining genetic algorithm and Hooke-Jeeves method for constrained global optimization, J. Ind. Manag. Optim., 10 (2014), 1279-1296.       
28 G. H. Yu, A derivative-free method for solving large-scale nonlinear systems of equations, J. Ind. Manag. Optim., 6 (2010), 149-160.       
29 C. J. Yu, K. L. Teo, L. S. Zhang and Y. Q. Bai, A new exact penalty function method for continuous inequality constrained optimization problems, J. Ind. Manag. Optim., 6 (2010), 895-910.       
30 F. Yousefian, A. Nedić and U. V. Shanbhag, On stochastic gradient and subgradient methods with adaptive steplength sequences, Automatica, 48 (2012), 56-67.       
31 J. Li, G. Chen, Z. Dong and Z. Wu, A fast dual proximal-gradient method for separable convex optimization with linear coupled constraints, Comp. Opt. Appl., 64 (2016), 671-697.       
