Thesis Information

Thesis Title (Chinese):

 最优化算法的优化效益及其应用研究

Author:

 陈咪咪

Student ID:

 18201009004

Confidentiality Level:

 Public

Thesis Language:

 Chinese (chi)

Discipline Code:

 070104

Discipline:

 Science - Mathematics - Applied Mathematics

Student Type:

 Master's

Degree:

 Master of Science

Degree Year:

 2021

Institution:

 西安科技大学 (Xi'an University of Science and Technology)

School:

 College of Science

Major:

 Applied Mathematics

Research Direction:

 Optimization Theory and Algorithms

First Supervisor:

 王雪峰

First Supervisor's Institution:

 西安科技大学 (Xi'an University of Science and Technology)

Second Supervisor:

 赵梦玲

Submission Date:

 2022-03-01

Defense Date:

 2021-12-11

Thesis Title (English):

 Optimization benefit of optimization algorithm and its application research

Keywords (Chinese):

 最优化算法 ; 优化效益 ; 算法评价 ; 双目标规划

Keywords (English):

 Optimization algorithm ; Optimization benefit ; Algorithm evaluation ; Bi-objective programming

Abstract (Chinese):

Optimization is a very widely applied discipline, and research on its theory and algorithms is relatively mature. In practical applications, a good algorithm is the key to solving problems effectively, so the evaluation and comparison of algorithms is particularly important. Previous comparisons of optimization algorithms mostly considered properties of the algorithm itself, such as convergence rate, number of iterations, and whether the method converges from a wide range of starting points. Such evaluations generally cannot take all aspects of the problem into account and are insufficient to reflect how well an algorithm solves a problem. This thesis therefore introduces, for optimization algorithms, the iterative benefit, mean benefit, benefit variance, and optimal benefit. The optimization benefit of optimization algorithms is a new research direction in which algorithms are compared from the perspective of their optimization benefit. This study not only provides an effective way to evaluate optimization algorithms, but also allows practical problems to be solved effectively from the viewpoint of optimization benefit, and it has broad application value.

First, the optimization theory and methods involved in this thesis are reviewed. Building on algorithm evaluation criteria, the thesis defines the optimization benefit of an algorithm's iterations and thereby proposes a new method for evaluating algorithms. For minimization problems, most optimization algorithms seek a search direction along which the objective function decreases by a substantial amount at each iteration. How well an algorithm solves a problem is closely related to the amount by which the function value decreases; the proposed optimization benefit therefore relates the decrease in the objective function value to the step length between iterates. The value of studying optimization benefit lies in establishing a link between the algorithm itself and the objective function.
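As a rough illustration only (the thesis fixes the exact definitions), one way to formalize these quantities, consistent with the description above, is to measure the decrease in the objective per unit step length at each of N iterations producing iterates x_0, ..., x_N, and then take the sample mean, variance, and maximum over the run:

```latex
% Illustrative formalization only; the thesis's own definitions may differ.
\[
  E_k = \frac{f(x_k) - f(x_{k+1})}{\| x_{k+1} - x_k \|}, \qquad k = 0, 1, \dots, N-1,
\]
\[
  \bar{E} = \frac{1}{N}\sum_{k=0}^{N-1} E_k, \qquad
  S_E^{2} = \frac{1}{N}\sum_{k=0}^{N-1}\bigl(E_k - \bar{E}\bigr)^{2}, \qquad
  E^{*} = \max_{0 \le k \le N-1} E_k .
\]
```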

Second, the optimization benefit is used to evaluate the steepest descent method, the modified Newton method, the conjugate gradient method, and quasi-Newton methods. By computing the benefit-related quantities from the iteration data of each algorithm, an optimization benefit analysis is given for each of the four algorithms on a single test problem. The quasi-Newton and conjugate gradient methods are then evaluated further: on several test problems, the two algorithms are compared on the basis of their benefit-related values. The resulting conclusions agree with practical experience, and this way of evaluating algorithms avoids the complexity and limitations of previous approaches.

Finally, two practical problems are solved from the viewpoint of optimization benefit: an investment risk problem and the problem of setting university tuition standards. In each case, the benefit-related quantities are computed during the solution of the optimization problem, and the optimal solution in the sense of optimal benefit is obtained. Solving practical problems from this viewpoint is not only simpler than previous approaches to optimization problems, but also yields results of greater practical significance.

Abstract (English):

Optimization is a widely applied discipline, and research on its theory and algorithms is relatively mature. In practical applications, a good algorithm is the key to solving problems effectively, so the evaluation and comparison of algorithms is particularly important. In the past, optimization algorithms were compared in terms of convergence speed, number of iterations, and whether they converge from a wide range of starting points. Such evaluations generally cannot take all aspects of the problem into account and are insufficient to reflect how well an algorithm solves a problem. This thesis therefore defines the optimization benefit of an algorithm, comprising the iterative benefit, mean benefit, benefit variance, and optimal benefit. The optimization benefit of optimization algorithms is a new research field in which algorithms are compared from the perspective of their benefit. This study not only provides an effective evaluation method for optimization algorithms, but also shows that practical problems can be solved effectively from the viewpoint of optimization benefit, which has broad application value.

First, the optimization theory and methods involved in this thesis are summarized. Based on algorithm evaluation criteria, the optimization benefit of an algorithm's iterations is defined, yielding a new method for evaluating algorithms. In minimization problems, most optimization algorithms seek a search direction along which the objective function decreases by a considerable amount at each iteration. How well an algorithm solves a problem is closely related to the amount by which the function value decreases. The proposed optimization benefit therefore relates the decrease in the objective function value to the step length between iterates, and is essentially an evaluation method for the algorithm. The research value of optimization benefit lies in establishing a link between the algorithm itself and the objective function.

Second, the optimization benefit is used to evaluate the steepest descent method, the modified Newton method, the conjugate gradient method, and the quasi-Newton method. The benefit-related quantities are computed from the iteration data of each algorithm, giving an optimization benefit analysis of the four algorithms on a single test problem. The quasi-Newton and conjugate gradient methods are then evaluated further: their benefits in solving multiple optimization problems are computed, and the two algorithms are compared on the basis of the benefit-related values. The conclusions are consistent with practical experience, and this evaluation method avoids the complexity and limitations of previous approaches to algorithm evaluation.
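A minimal sketch of this kind of comparison is given below: it computes benefit statistics, in the illustrative sense defined earlier, from the iterates of two classical methods on a small quadratic test problem. The benefit formula, the test problem, and the names `benefit_stats`, `steepest_descent`, and `newton` are assumptions made for illustration, not the thesis's actual experimental setup.

```python
import numpy as np

def benefit_stats(f, xs):
    """Illustrative benefit statistics for an iterate sequence xs, assuming
    E_k = (f(x_k) - f(x_{k+1})) / ||x_{k+1} - x_k|| as the iterative benefit."""
    E = []
    for xk, xk1 in zip(xs[:-1], xs[1:]):
        step = np.linalg.norm(xk1 - xk)
        if step > 1e-12:                   # skip stalled/converged iterations
            E.append((f(xk) - f(xk1)) / step)
    E = np.array(E)
    return E.mean(), E.var(), E.max()      # mean benefit, benefit variance, best benefit

# Small convex quadratic test problem: f(x) = 1/2 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

def steepest_descent(x0, iters=20):
    """Steepest descent with the exact line search for a quadratic objective."""
    xs, x = [x0], x0
    for _ in range(iters):
        g = grad(x)
        alpha = (g @ g) / (g @ A @ g)      # exact minimizing step length
        x = x - alpha * g
        xs.append(x)
    return xs

def newton(x0, iters=5):
    """Newton's method; on a quadratic it reaches the minimizer in one step."""
    xs, x = [x0], x0
    for _ in range(iters):
        x = x - np.linalg.solve(A, grad(x))
        xs.append(x)
    return xs

x0 = np.array([5.0, -3.0])
for name, xs in [("steepest descent", steepest_descent(x0)), ("Newton", newton(x0))]:
    mean_E, var_E, best_E = benefit_stats(f, xs)
    print(f"{name:16s}  mean benefit {mean_E:8.4f}  variance {var_E:8.4f}  best {best_E:8.4f}")
```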

Finally, two kinds of practical problems, an investment risk problem and a university tuition standard problem, are solved from the viewpoint of optimization benefit. In each case, the benefit-related quantities are computed during the solution of the optimization problem, and the optimal solution in the sense of optimal benefit is obtained. Solving practical problems from the viewpoint of optimization benefit is not only simpler than previous methods of solving optimization problems, but also yields results of greater practical significance.
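The thesis's own investment model and its benefit-based solution procedure are not reproduced here. The sketch below only illustrates, under assumed data, a classical bi-objective return versus worst-case-risk portfolio model of the kind suggested by the keywords, solved by a standard weighted-sum scalarization with `scipy.optimize.linprog`; the asset data and the scalarization are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data only (assumed, not the thesis's case study): four assets
# with expected return rates r_i and individual risk rates q_i.
r = np.array([0.05, 0.27, 0.19, 0.185])
q = np.array([0.00, 0.025, 0.015, 0.055])
n = len(r)

def solve_weighted(w):
    """Weighted-sum scalarization of the bi-objective portfolio model:
    maximize the return r @ x and minimize the worst-case risk t with
    q_i * x_i <= t, subject to sum(x) = 1 and x >= 0.
    Decision variables: [x_0, ..., x_{n-1}, t]."""
    c = np.concatenate([-w * r, [1.0 - w]])            # minimize -w*return + (1-w)*risk
    A_ub = np.hstack([np.diag(q), -np.ones((n, 1))])   # q_i * x_i - t <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    x, t = res.x[:n], res.x[n]
    return x, r @ x, t

# Sweep the weight to trace an approximation of the Pareto frontier.
for w in np.linspace(0.1, 0.9, 5):
    x, ret, risk = solve_weighted(w)
    print(f"w={w:.2f}  return={ret:.4f}  worst-case risk={risk:.4f}  x={np.round(x, 3)}")
```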


CLC Number:

 O224    

Open Access Date:

 2022-03-04    

