Abstract

A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of “no free lunch” (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented. Other issues addressed include time-varying optimization problems and a priori “head-to-head” minimax distinctions between optimization algorithms, distinctions that result despite the NFL theorems’ enforcing of a type of uniformity over all algorithms.
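
The NFL result can be checked by brute force on a toy search space. The Python sketch below is illustrative only and is not taken from the paper: the three-point domain, the evaluation budget m, and the two search rules ("sweep" and "greedy") are assumptions chosen for the example. It enumerates every cost function f: X -> Y and shows that the two non-revisiting algorithms produce identical histograms of observed cost values once summed over all functions.

    # Minimal empirical sketch of the NFL idea (illustrative assumptions, not from the paper):
    # over ALL cost functions f: X -> Y on a tiny finite domain, any two search
    # algorithms that never revisit points yield the same distribution of results.

    from itertools import product
    from collections import Counter

    X = [0, 1, 2]          # search space (assumed tiny so we can enumerate all f)
    Y = [0, 1]             # possible cost values
    m = 2                  # number of distinct evaluations allowed

    def run(algorithm, f, m):
        """Return the multiset of cost values seen after m distinct queries."""
        visited, seen = [], []
        for _ in range(m):
            x = algorithm(visited, seen)        # next point, must be unvisited
            visited.append(x)
            seen.append(f[x])
        return tuple(sorted(seen))

    # Algorithm 1 (hypothetical): sweep the domain left to right.
    def sweep(visited, seen):
        return next(x for x in X if x not in visited)

    # Algorithm 2 (hypothetical): jump to the far end after seeing a bad value.
    def greedy(visited, seen):
        order = X[::-1] if (seen and seen[-1] == 1) else X
        return next(x for x in order if x not in visited)

    # Enumerate every function f: X -> Y and tally the observed value multisets.
    hist = {alg.__name__: Counter() for alg in (sweep, greedy)}
    for values in product(Y, repeat=len(X)):
        f = dict(zip(X, values))
        for alg in (sweep, greedy):
            hist[alg.__name__][run(alg, f, m)] += 1

    print(hist["sweep"])
    print(hist["greedy"])
    assert hist["sweep"] == hist["greedy"]      # identical histograms over all f

The histograms agree because the values at unvisited points remain unconstrained, so each observed value sequence is produced by the same number of functions regardless of the search rule; this is the uniformity over all algorithms that the theorems enforce.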

Keywords

Minimax, Optimization problem, Mathematical optimization, Class (philosophy), A priori and a posteriori, Mathematics, Interpretation (philosophy), Computer science, L-reduction, Algorithm, Continuous optimization, Artificial intelligence, Multi-swarm optimization


Publication Info

Year: 1997
Type: article
Volume: 1
Issue: 1
Pages: 67-82
Citations: 13304
Access: Closed

Citation Metrics

Citations: 13304 (OpenAlex)
Cite This

David H. Wolpert, William G. Macready (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67-82. https://doi.org/10.1109/4235.585893

Identifiers

DOI
10.1109/4235.585893