A combination of local and global optimization strategies seems promising, since it combines the fast convergence of gradient-based methods with the robustness of global algorithms with respect to the initial guess. One possible scenario is that a global optimizer provides the initial guess for a local optimizer: the global optimizer performs a small number of iterations, and some of the best individuals of this global optimization are then used as initial guesses for the local optimizer. The local optimization is terminated as soon as no significant reduction of the target function is achieved. Several alternations between the global and local algorithms are possible: the global optimizer could either restart with the individuals that served as initial guesses, or the result of the local optimizer could be used as a new starting point for a global refinement step.
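The hybrid scheme described above can be sketched as follows. This is a minimal illustration, not the method from the source: the multimodal target function (a Rastrigin-type function), the random-sampling global stage, and the gradient-descent local stage are all hypothetical stand-ins chosen for brevity.

```python
import math
import random

# Hypothetical 1-D multimodal target (Rastrigin-type); global minimum at x = 0.
def f(x):
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def df(x):
    return 2.0 * x + 20.0 * math.pi * math.sin(2.0 * math.pi * x)

def global_phase(n_samples=500, n_best=3, seed=0):
    """Crude global stage: random sampling, return the few best individuals."""
    rng = random.Random(seed)
    pop = [rng.uniform(-5.0, 5.0) for _ in range(n_samples)]
    pop.sort(key=f)
    return pop[:n_best]

def local_phase(x, lr=1e-3, tol=1e-12, max_iter=10000):
    """Gradient descent, terminated once the reduction per step is negligible."""
    prev = f(x)
    for _ in range(max_iter):
        x -= lr * df(x)
        cur = f(x)
        if prev - cur < tol:  # no significant reduction any more
            break
        prev = cur
    return x

# Each promising individual from the global stage seeds one local run;
# the overall best local result is kept.
candidates = global_phase()
best = min((local_phase(x0) for x0 in candidates), key=f)
```

In a further alternation step, `best` could in turn seed a restarted global stage, as suggested above.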
2003-03-27