A New Constructing Auxiliary Function Method for Global Optimization (Translated Text)


A new constructing auxiliary function method for global optimization

Original authors: Yong-Jun Wang, Jiang-She Zhang

Abstract: This paper proposes an auxiliary function method based on a two-stage deterministic search for global optimization. Specifically, a local minimum of the original function is first obtained; a stretching function technique is then used to modify the objective function with respect to the obtained local minimum. The transformed function lifts upward the function values that are higher than the obtained minimum, while the lower values remain unchanged. Next, an auxiliary function is constructed on the stretched function; it always descends in the region where the function values are higher than the obtained minimum, and it has a stationary point in the lower region. We optimize the auxiliary function, use the stationary point found as the starting point to return to the first step, and restart the search, repeating the procedure until termination. A theoretical analysis is also given. The main feature of the new method is that it significantly relaxes the requirements on the parameters. Numerical experiments on benchmark functions of different dimensions (up to 50) show that, compared with other existing similar algorithms, the new algorithm converges faster, has a higher success rate, and can find solutions of higher quality, which is consistent with the theoretical analysis.
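Since the abstract compresses the whole procedure into a few sentences, a minimal Python sketch of the two-stage loop may help make the control flow concrete. The helper name build_auxiliary, the perturbation x_star + 0.1, and the restart cap are illustrative assumptions, not details taken from the paper; a concrete auxiliary builder is sketched further below.

```python
import numpy as np
from scipy.optimize import minimize

def two_stage_search(f, x0, build_auxiliary, max_restarts=20):
    """Sketch of the two-stage loop described in the abstract.

    Stage 1 descends to a local minimum x_star of f; stage 2 builds an
    auxiliary function around x_star and minimizes it to locate a
    stationary point in a lower basin, which seeds the next stage 1.
    """
    x_star = minimize(f, x0).x                # stage 1: a local minimum of f
    for _ in range(max_restarts):
        aux = build_auxiliary(f, x_star)      # stage 2: stretched/auxiliary function
        y = minimize(aux, x_star + 0.1).x     # a stationary point of the auxiliary
        if f(y) < f(x_star) - 1e-12:          # found a strictly lower basin
            x_star = minimize(f, y).x         # restart stage 1 from the better point
        else:
            break                             # no lower basin located: terminate
    return x_star
```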

Keywords: global optimization; continuous variables; stretching function technique; auxiliary function method; filled function

Nonlinear function optimization problems that possess many local minimizers in their search spaces are widespread in applications such as engineering design, molecular biology, and neural network training. Although existing traditional methods such as the steepest descent method, the Newton method, quasi-Newton methods, the trust region method, and the conjugate gradient method converge rapidly and can find high-precision solutions for continuously differentiable functions, they rely heavily on the initial point, and the quality of the final global solution is hard to guarantee. The difficulties inherent in global optimization prevent many subjects from developing further. Therefore, global optimization generally becomes a challenging computational task for researchers.

Generally speaking, the difficulty in designing a global optimization algorithm stems from two problems: one is how to determine that the obtained minimum is a global one (when the global minimum is not known in advance), and the other is how to jump from the obtained minimum to a better one. For the first problem, a stopping rule known as the Bayesian termination condition has been reported. Many recently proposed algorithms aim at the second problem. In general, these methods can be classified into two main categories, namely (i) deterministic methods and (ii) stochastic methods. Stochastic methods are based on biology or statistical physics and escape from local minima using a probability-based approach; they include the genetic algorithm (GA), the simulated annealing method (SA), and particle swarm optimization (PSO). Although these methods have their uses, they often converge slowly, and finding a solution of higher precision is time consuming; on the other hand, they are easier to implement and to apply to combinatorial optimization problems. Deterministic methods such as the filled function method and the tunneling method, by contrast, converge more rapidly and can often find a solution with higher precision. These methods usually rely on modifying the objective function into a function with "fewer" or "lower" local minimizers than the original one, and then design algorithms that minimize the modified function to escape from the found local minimum to a better one.

Among the deterministic algorithms cited, the diffusion equation method, the effective energy method, and the integral transform scheme approximate the coarse structure of the original objective function by a set of smoothed functions with "fewer" minimizers. These methods modify the objective function via integration of the original objective function. Such integrations are too expensive to implement, and the final solution of the auxiliary function has to be traced back to a minimum of the original objective function, while the traced result may not be the true global minimum of the problem. The terminal repeller unconstrained sub-energy tunneling method and the hybridization of the gradient descent algorithm with the dynamic tunneling method modify the objective function based on the stability theory of dynamic systems. These methods have to integrate a dynamic system, and the corresponding computation is time consuming, especially as the dimension of the objective function increases, because their better point is found by searching along each coordinate until termination. The stretching function technique is a typical auxiliary function method: it uses the information obtained in previous searches to stretch the objective function and help the algorithm escape from the local minimum more effectively. This technique has been incorporated into PSO to improve its success rate in finding global minima. However, that hybrid algorithm is built on a stochastic method, converges slowly, and applies more easily to low-dimensional problems. The filled function method is another auxiliary function method: it modifies the objective function into a filled function and then finds better local minima gradually by optimizing the filled functions constructed on the minima obtained so far. The filled function method provides a good idea for using local optimization techniques to solve global optimization problems; if the difficulty of estimating its parameters can be resolved, the designed filled functions can be applied to high-dimensional functions, so the prospects of the filled function method in the literature are promising. The tunneling method modifies the objective function to ensure that the next starting point has the same function value as the obtained minimum while lying away from it, so the probability of finding the global minimum increases. A conversion method referred to as SCM transforms the objective function into one that has no local minima or stationary points in the regions where the function values are higher than the obtained one, except at a prefixed value. This method appears promising provided the unwanted stationary points caused by the prefixing are excluded.
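The stretching technique mentioned above has a standard published form due to Parsopoulos and Vrahatis; the sketch below follows that form with literature-default parameter values, and is not necessarily the exact variant adopted in this paper. It stretches values above the obtained minimum upward while leaving lower values unchanged, exactly the behaviour the text describes.

```python
import numpy as np

def stretch(f, x_bar, gamma1=1e4, gamma2=1.0, mu=1e-10):
    """Stretching transformation around an already-found minimizer x_bar.

    Points with f(x) > f(x_bar) are lifted in two stages (G, then H);
    points with f(x) <= f(x_bar) are left untouched, so any minima
    below the current one survive the transformation.
    """
    f_bar = f(x_bar)

    def G(x):
        up = 0.5 * (np.sign(f(x) - f_bar) + 1.0)   # 1 above f_bar, 0 otherwise
        return f(x) + gamma1 * np.linalg.norm(np.atleast_1d(x - x_bar)) * up

    def H(x):
        up = 0.5 * (np.sign(f(x) - f_bar) + 1.0)
        den = np.tanh(mu * (G(x) - G(x_bar)))
        if up == 0.0 or den == 0.0:                # unchanged region; guard 0/0 at x_bar
            return G(x)
        return G(x) + gamma2 * up / den

    return H
```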

No matter whether the stretching function method, the various filled function methods, or the tunneling algorithms are used, they often depend on several key parameters that are difficult to estimate in advance in applications, such as the length of the interval in which minimizers exist and the upper and lower bounds on the derivatives of the objective function. As a result, an auxiliary function method that is effective in theory can be difficult to implement in practice because of the uncertainty of the parameters, as an example with a one-dimensional function illustrates.

Clearly, Figs. 1 and 2 illustrate the "Mexican hat" effect that appears in auxiliary function methods (the filled function method and the stretching function method) at the local point x* = 4.60095. This unwanted effect, namely the introduction of new local minima, is caused by improper parameter settings. The newly introduced local minima increase the complexity of the original problem and hamper the algorithm's global search.

[Figs. 1 and 2: the "Mexican hat" effect on the function defined in (3), around the point x* = 4.60095.]
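One way to observe the "Mexican hat" effect concretely is to count grid-level local minima of a transformed function before and after changing a parameter. The helper below is a crude diagnostic written for this purpose; the test function and bounds are left to the reader, since the paper's Eq. (3) is not reproduced in this excerpt.

```python
import numpy as np

def count_local_minima(h, lo, hi, n=20001):
    """Count interior local minima of a 1-D function h on a uniform grid.

    Comparing the count for the original objective and for its stretched
    or filled counterpart reveals whether a parameter setting has
    introduced spurious "Mexican hat" minima around the current minimizer.
    """
    xs = np.linspace(lo, hi, n)
    ys = np.array([h(x) for x in xs])
    interior = (ys[1:-1] < ys[:-2]) & (ys[1:-1] < ys[2:])
    return int(np.sum(interior))
```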

Therefore, an effective auxiliary function method whose parameters are easy to tune is worth studying. Motivated by this, in this paper we give a simple two-stage function transformation method that converts the original objective function f(x) into an auxiliary function with rapid convergence and a strong capability of gradually finding better solutions over larger regions. The idea is quite similar to that of the filled function method. Specifically, we first find a local minimum of the original objective function. Then the stretching function technique and a simulated filled function technique are used to transform the objective function in two successive stages. The constructed function descends in the regions where the original objective function values are higher than the one obtained in the first step, while a stationary point must exist in the better region. Next, we minimize the auxiliary function to find one of its stationary points (a point at which f(x) is better than the previously obtained local minimum), which then serves as the starting point of the next local optimization. We repeat this process until termination. In the new method the parameters are easy to set: for example, two constants can be prefixed, because the properties of the auxiliary function do not depend on varying parameters, even though two parameters are introduced into it. Numerical experiments on a set of standard test problems with dimensions up to 50, together with comparisons with other methods, show that the new algorithm is more efficient.
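The paper's own auxiliary construction is not reproduced in this excerpt. As a stand-in that is compatible with the loop sketched earlier, here is Ge's classical filled function, which likewise turns the current minimizer into a repelling point; it is an illustration, not the paper's formula, and the constants r and rho are exactly the kind of parameters the new method claims to make easy to preset.

```python
import numpy as np

def build_auxiliary(f, x_star, r=1.0, rho=1.0):
    """Ge's classical filled function (illustrative, not the paper's form).

    x_star becomes a local maximizer of P, so minimizing P pushes the
    search out of the current basin toward points with lower f-values.
    Assumes r + f(x) > 0 over the region of interest.
    """
    def P(x):
        x = np.atleast_1d(x)
        dist2 = float(np.sum((x - x_star) ** 2))
        return np.exp(-dist2 / rho**2) / (r + f(x))
    return P
```

Plugged into the two_stage_search sketch above, this reproduces the escape-and-restart behaviour the paragraph describes, under the stated assumptions.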

To make a fair comparison, we performed 20 independent runs on the complicated functions to obtain the results. In Table 1 the termination condition is as follows: if the function value obtained by the algorithm agrees with the known minimal function value of the test problem to within the preset precision, the algorithm is regarded as "successful" and the computation is stopped. The compared algorithms are the TRUST method, the diffusion equation method, the filled function method (FF), the multi-level single-linkage method (MLSL), Levy's tunneling algorithm, and the SCM method. Detailed results of the new algorithm on test problems 10 and 11 are reported, listing the preset precision, the success rate over the 20 random searches (denoted Succ), and the computational cost (denoted CPU time, in seconds). If a number is missing from the corresponding column of a table, either the number of evaluations was too large or the method was not tested on that example in the corresponding reference. The algorithms were tested on a PC with a Pentium IV 3.0 GHz processor, using code written in Matlab.
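The success criterion above translates directly into a small experiment harness; in the sketch below, run_once (one full search from a random start, returning the best function value found) and the default eps are assumptions for illustration, since the paper presets its own precision per problem.

```python
import numpy as np

def success_rate(run_once, f_min, eps=1e-4, n_runs=20, seed=0):
    """Success rate over independent runs, following the stated protocol:
    a run counts as "successful" when |f_best - f_min| <= eps, where
    f_min is the known minimal value of the test problem."""
    rng = np.random.default_rng(seed)
    hits = sum(abs(run_once(rng) - f_min) <= eps for _ in range(n_runs))
    return hits / n_runs
```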

It should be pointed out that, although SCM evaluated problems 10 and 11 with dimensions up to 50, we do not list a comparison with SCM on the number of function evaluations in Table 2, because the termination precision and the average over independent runs, both of which are crucial for evaluating function-evaluation counts, could not be obtained. Likewise, the filled function method was tested by its authors only from a few deterministically chosen starting points in several dimensions, so its results are not listed in Table 2 either.

From the above it can be concluded that the new method found the global solutions with the preset precision in 100% of the runs, although this is not listed in Table 1. Table 1 also shows that the new method outperforms other similar methods in the average number of function evaluations. From Table 2 we can see that the new method performs better on test problems 10 and 11 in various dimensions up to 50; in most cases the success rate is 100%, with acceptable CPU time. Compared with the tunneling method, another auxiliary function method, computations in some cases show that the new method can cut the search process by about one third. This verifies that the new method is both effective and efficient, which is fully consistent with the theoretical analysis in Section 2.

Source: http://www.sciencedirect.com/science/article/pii/S0895717707002774

Appendix: original text



A new constructing auxiliary function method

for global optimization

Abstract

A new auxiliary function method based on the idea which executes a two-stage deterministic search for global optimization is proposed. Specifically, a local minimum of the original function is first obtained, and then a stretching function technique is used to modify the objective function with respect to the obtained local minimum. The transformed function stretches the function values higher than the obtained minimum upward while it keeps the ones with lower values unchanged. Next, an auxiliary function is constructed on the stretched function, which always descends in the region where the function values are higher than the obtained minimum, and it has a stationary point in the lower area. We optimize the auxiliary function and use the found stationary point as the starting point to turn to the first step to restart the search. Repeat the procedure until termination. A theoretical analysis is also made. The main feature of the new method is that it relaxes significantly the requirements for the parameters. Numerical experiments on benchmark functions with different dimensions (up to 50) demonstrate that the new algorithm has a more rapid convergence and a higher success rate, and can find the solutions with higher quality, compared with some other existing similar algorithms, which is consistent with the analysis in theory.

© 2007 Elsevier Ltd. All rights reserved.

Keywords: Global optimization; Continuous variables; Stretching function technique; Auxiliary function method; Filled function

Text

Nonlinear function optimization problems which possess many local minimizers in their search spaces are widespread in applications such as engineering design, molecular biology, and neural network training. Although the existing traditional methods such as the steepest descent method, Newton method, quasi-Newton methods, trust region method, and conjugate gradient method converge rapidly and can find the solutions with high precision for continuously differentiable functions, they rely heavily on the initial point and the quality of the final global solution is hard to guarantee. The existing difficulty in global optimization prevents many subjects from developing further. Therefore, global optimization generally becomes a challenging computational task for researchers.

Generally speaking, the difficulty in designing an algorithm on global optimization is due to two reasons: One is how to determine that the obtained minimum is a global one (when the global minimum is not known in advance), and the other is how to jump from the obtained minimum to a better one. In treating the first problem, a stopping rule named the Bayesian termination condition has been reported. Many recently proposed algorithms aim at dealing with the second problem. Generally, these methods can be classified into two main categories, namely: (i) deterministic methods, and (ii) stochastic methods. The stochastic methods are based on biology or statistical physics, and escape from local minima by using a probability-based approach. These methods include the genetic algorithm (GA), simulated annealing method (SA) and particle swarm optimization method (PSO). Although these methods have their uses, they often converge slowly and finding a solution with higher precision is time consuming. They are easier to implement and to apply to combinatorial optimization problems. However, deterministic methods such as the filled function method, tunneling method, etc., converge more rapidly, and can often find a solution with a higher precision. These methods often rely on modifying the objective function to a function with "fewer" or "lower" local minimizers than the original objective function, and then design algorithms to minimize the modified function to escape from the found local minimum to a better one.

Among the referenced deterministic algorithms, the diffusion equation method, the effective energy method, and the integral transform scheme approximate the coarse structure of the original objective function by a set of smoothed functions with "fewer" minimizers. These methods modify the objective function via integration of the original objective function. Such integrations are too expensive to implement, and the final solution of the auxiliary function has to be traced back to the minimum of the original objective function, whereas the traced result may not be the true global minimum of the problem. The terminal repeller unconstrained sub-energy tunneling method and the method of hybridization of the gradient descent algorithm with the dynamic tunneling method modify the objective function based on the dynamic systems' stability theory for global optimization. These methods have to integrate a dynamic system and the corresponding computation is time consuming, especially with the increase of the dimension of the objective function, since their better point is found through searching along each coordinate till termination. The stretching function technique is an auxiliary function method which uses the information obtained in previous searches to stretch the objective function and help the algorithm to escape from the local minimum more effectively. This technique has been incorporated into the PSO to improve its success rate of finding global minima. However, this hybrid algorithm is constructed on a stochastic method, which converges slowly and applies more easily to problems with a lower dimension. The filled function method is another auxiliary function method which modifies the objective function into a filled function, and then finds the better local minima gradually by optimizing the filled functions constructed on the obtained minima. The filled function method provides us with a good idea to use the local optimization techniques to solve global optimization problems. If the difficulty in estimating the …


