Thesis Information

Thesis Title (Chinese):

 一类求解大规模全局优化问题的协同进化算法研究

Name:

 包雪凡

Student ID:

 20201221056

Confidentiality Level:

 Public

Thesis Language:

 Chinese (chi)

Discipline Code:

 027000

Discipline Name:

 Economics - Statistics

Student Type:

 Master's

Degree Level:

 Master of Economics

Degree Year:

 2023

Institution:

 Xi'an University of Science and Technology

School/College:

 College of Science

Major:

 Applied Statistics

Research Direction:

 Large-scale optimization algorithms

First Supervisor:

 梁飞

First Supervisor's Institution:

 Xi'an University of Science and Technology

Submission Date:

 2023-06-14

Defense Date:

 2023-06-01

Thesis Title (English):

 A Class of Co-evolutionary Algorithms for Solving Large-Scale Global Optimization Problems

Keywords (Chinese):

 大规模全局优化; 局部优化; 全局搜索; 分组算法

Keywords (English):

 large-scale global optimization; local optimization; global search; grouping algorithm

Abstract (Chinese):

As data collection becomes more comprehensive, large-scale global optimization (LSGO) problems arise in an increasingly wide range of applications, so solving LSGO problems effectively is of great practical importance. Cooperative coevolution (CC) is a computational framework that can solve LSGO problems effectively: it decomposes the original problem into subproblems and allocates computational resources to optimize them. The main challenges of a CC algorithm are how to decompose the original problem accurately and how to allocate computational resources reasonably. This paper proposes the following solutions to these two problems:

First, to solve the grouping problem, this paper proposes the Double Checked Differential Grouping (DCDG) algorithm, built on the Differential Grouping (DG) algorithm. DCDG improves the discriminant rule of DG by using two discriminant rules to identify variable interactions; when the two rules conflict, a further check is performed. Experimental results show that, compared with DG, DCDG groups variables more accurately and more stably, and performs especially well on non-additively partially separable functions.

Second, to address DCDG's inability to identify indirectly interacting variables, this paper proposes the Associated Set Differential Grouping (ASDG) algorithm. ASDG maintains two sets: one stores the current variable and its interacting variables, and the other stores the ungrouped variables. The first set is traversed to check whether the current variable interacts with any ungrouped variable, and grouping ends once the first set no longer changes. Experimental results show that ASDG groups variables more accurately than other improved DG algorithms and can identify indirectly interacting variables.

Finally, to solve the computational resource allocation problem, this paper proposes a new Adaptive Computing Resource Allocation (ACRA) mechanism. ACRA selects the subproblem to be optimized based on each subproblem's contribution and a threshold. A subproblem's contribution is defined as the mean relative change rate of the original problem after optimization, a definition that makes effective use of the optimization information from every round. The threshold is a sigmoid function of the relative change rate of the original problem after optimization. At each selection a random number is generated: if it exceeds the threshold, the subproblem with the largest contribution is optimized; otherwise a subproblem is chosen at random. In the early stage of optimization this strategy favors the subproblem with the largest contribution, accelerating the descent of the objective function; in the later stage it tends toward random selection, enlarging the room for the objective function to descend. Experimental results show that the algorithm performs well compared with other algorithms.

Abstract (English):

As data collection becomes more comprehensive, large-scale global optimization (LSGO) problems arise in an increasingly wide range of applications. Solving LSGO problems effectively is therefore of great practical importance. Cooperative coevolution (CC) is a computational framework that can solve LSGO problems effectively: it decomposes the original problem into subproblems and allocates computational resources to optimize them. The main challenges of a CC algorithm are how to decompose the original problem accurately and how to allocate computational resources to the subproblems reasonably. This paper proposes solutions to these two problems.
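As a rough illustration of the CC framework just described (a minimal sketch, not the thesis's actual procedure), the following Python fragment decomposes the decision vector into fixed groups and optimizes each subproblem in turn against a shared context vector. The objective, the group sizes, and the random-perturbation "optimizer" are all placeholder assumptions; a real CC run would embed an evolutionary optimizer in the inner step.

    import numpy as np

    def sphere(x):
        # Placeholder separable objective; any LSGO benchmark fits here.
        return float(np.sum(x ** 2))

    def cc_round_robin(f, dim, groups, iters=200, rng=None):
        """Minimal cooperative-coevolution loop: optimize one variable
        group (subproblem) at a time against a shared context vector."""
        rng = rng or np.random.default_rng(0)
        context = rng.uniform(-5, 5, dim)      # best-so-far full solution
        for _ in range(iters):
            for group in groups:               # one subproblem per group
                trial = context.copy()
                # Stand-in subproblem optimizer: perturb only this group's
                # variables while all other variables stay fixed.
                trial[group] += rng.normal(0, 0.1, len(group))
                if f(trial) < f(context):
                    context = trial
        return context

    dim = 20
    groups = [list(range(i, i + 5)) for i in range(0, dim, 5)]
    best = cc_round_robin(sphere, dim, groups)
    print(sphere(best))   # objective value after the CC run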

Firstly, to solve the grouping problem, this paper proposes the Double Checked Differential Grouping (DCDG) algorithm, built on the Differential Grouping (DG) algorithm. DCDG improves the discriminant rule of DG by using two discriminant rules to identify variable interactions; when the two rules conflict, a further check is performed. Experimental results show that, compared with DG, DCDG achieves higher and more stable grouping accuracy, and performs especially well on non-additively partially separable functions.
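The abstract does not spell out DCDG's two discriminant rules, so the sketch below is only a hedged illustration: it implements the standard DG finite-difference interaction test, pairs it with a hypothetical second rule that rescales the tolerance by the objective's magnitude, and re-runs the test from a fresh base point when the two rules disagree. The test function f, the perturbation size d, and the tolerance eps are placeholder assumptions.

    import numpy as np

    def dg_delta(f, x, i, j, d=1.0):
        """DG quantity: how the response of f to a step in x_i changes
        when x_j is also shifted; nonzero means x_i and x_j interact."""
        x1 = x.copy(); x1[i] += d
        x2 = x.copy(); x2[j] += d
        x3 = x2.copy(); x3[i] += d
        d1 = f(x1) - f(x)     # effect of moving x_i at the base point
        d2 = f(x3) - f(x2)    # effect of moving x_i after shifting x_j
        return d1 - d2

    def interact_double_checked(f, x, i, j, eps=1e-3, rng=None):
        delta = dg_delta(f, x, i, j)
        rule_abs = abs(delta) > eps                        # classic DG rule
        # Hypothetical second rule: scale the tolerance by |f| at the base.
        rule_rel = abs(delta) > eps * max(1.0, abs(f(x)))
        if rule_abs == rule_rel:
            return rule_abs
        # The two rules conflict: run one more check from a new base point.
        rng = rng or np.random.default_rng(0)
        x_new = rng.uniform(-5, 5, x.size)
        return abs(dg_delta(f, x_new, i, j)) > eps

    # x0 and x1 interact through the cross term; x2 is fully separable.
    f = lambda x: x[0] ** 2 + x[0] * x[1] + x[2] ** 2
    x0 = np.zeros(3)
    print(interact_double_checked(f, x0, 0, 1))  # True
    print(interact_double_checked(f, x0, 0, 2))  # False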

Secondly, to address DCDG's inability to identify indirectly interacting variables, this paper proposes the Associated Set Differential Grouping (ASDG) algorithm. ASDG maintains two sets: one stores the current variable and its interacting variables, and the other stores the ungrouped variables. The first set is traversed to check whether the current variable interacts with any ungrouped variable, and grouping ends once the first set no longer changes. Experimental results show that ASDG groups variables more accurately than other improved DG algorithms and can identify indirectly interacting variables.
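Following the two-set description above, here is a minimal sketch of ASDG-style grouping. The pairwise test interacts(i, j) is assumed to be supplied from outside (for example, the DG check from the previous sketch), and the toy interaction structure at the bottom is invented for illustration.

    def asdg_grouping(n_vars, interacts):
        """Set-based grouping in the spirit of ASDG: grow each group by
        repeatedly pulling in ungrouped variables that interact with any
        current member, so indirect links (i~j, j~k) end up in one group."""
        ungrouped = set(range(n_vars))
        groups = []
        while ungrouped:
            related = {ungrouped.pop()}        # seed a new group
            changed = True
            while changed:                     # stop when the set is stable
                changed = False
                for v in list(ungrouped):
                    if any(interacts(u, v) for u in related):
                        related.add(v)
                        ungrouped.remove(v)
                        changed = True
            groups.append(sorted(related))
        return groups

    # Toy structure: 0-1 and 1-2 interact directly, 3 is separable, so
    # 0 and 2 are only indirectly related yet land in the same group.
    pairs = {(0, 1), (1, 0), (1, 2), (2, 1)}
    print(asdg_grouping(4, lambda i, j: (i, j) in pairs))
    # e.g. [[0, 1, 2], [3]]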

Finally, to solve the computational resource allocation problem, this paper proposes the Adaptive Computing Resource Allocation (ACRA) mechanism. ACRA selects the subproblem to be optimized based on each subproblem's contribution and a threshold. A subproblem's contribution is the mean relative change rate of the original problem after optimization, a definition that makes effective use of the optimization information from every round. The threshold is a sigmoid function of the relative change rate of the original problem after optimization. At each selection a random number is generated: if it exceeds the threshold, the subproblem with the largest contribution is optimized; otherwise a subproblem is chosen at random. In the early stage of optimization this strategy favors the subproblem with the largest contribution, accelerating the descent of the objective function; in the later stage it tends toward random selection, enlarging the room for the objective function to descend. Experimental results show that ACRA performs well compared with other algorithms.
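The abstract gives the selection logic but not the exact formulas, so the sketch below rests on two labeled assumptions: the contributions are taken as given per-subproblem means of relative improvement, and the threshold is the sigmoid of the signed relative change of the full objective (strongly negative while the objective is still dropping fast, near zero once progress stalls), which reproduces the greedy-early, random-late behavior described.

    import math, random

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def acra_select(contributions, rel_change, rng=random):
        """ACRA-style selection. `contributions[k]` is the mean relative
        improvement credited to subproblem k so far; `rel_change` is the
        signed relative change of the full objective in the last round
        (assumed formula). Early on, sigmoid(rel_change) is near 0, so
        the draw almost always exceeds the threshold and the best
        contributor is exploited; as progress stalls, rel_change -> 0,
        the threshold rises toward 0.5, and random exploration becomes
        more frequent."""
        threshold = sigmoid(rel_change)
        if rng.random() > threshold:
            # Greedy branch: index of the largest contribution.
            return max(range(len(contributions)), key=contributions.__getitem__)
        return rng.randrange(len(contributions))   # exploration branch

    random.seed(0)
    contribs = [0.30, 0.05, 0.12]                  # assumed per-subproblem means
    print(acra_select(contribs, rel_change=-5.0))  # early: almost surely greedy
    print(acra_select(contribs, rel_change=-0.01)) # late: roughly 50/50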

CLC Number:

 TP301.6    

Open Access Date:

 2023-06-14    
