Global Optimization Test Result Presentation


You are encouraged to submit test results (LaTeX source or HTML format) for your own algorithms on any of the test sets in the Global Optimization page. Submitted test results will be made available here (the collection is so far nearly empty). The idea is that you can enter a competition for the fastest algorithm and look up the results others have obtained with their favorite methods.


I'd like to encourage a common format for reporting results and invite your ideas about what you would like to see in a result database. My current wishes are:

Report for each problem the type of information used (e.g., function values, gradients, Hessians, their interval enclosures or underestimating functions) and how often you used it, (a) until you found the global optimizer for the first time, and (b) until your stopping test decided that there is no better feasible point and quit. If the global optimum is not found, or if the best point is not known, indicate the lowest function value reached. If the best point found is not yet in the database, give its coordinates and function value, and indicate whether you believe it to be the global minimizer, and why.
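As an illustration only (none of this is a prescribed format), such bookkeeping can be done by wrapping the oracle routines with counters and taking a snapshot when the known global minimum is first reached. In the Python sketch below, the tolerance, the attribute names, and the assumption that the global minimum f_opt is known are all placeholders.

    # Sketch: count oracle calls (a) until the global minimizer is first found
    # and (b) until the stopping test quits.  All names are placeholders.
    class CountingOracle:
        def __init__(self, f, grad=None, f_opt=None, tol=1e-6):
            self.f, self.grad = f, grad
            self.f_opt, self.tol = f_opt, tol   # known global minimum, if available
            self.n_f = 0                        # objective evaluations
            self.n_g = 0                        # gradient evaluations
            self.best = float("inf")            # lowest function value reached
            self.first_hit = None               # counts for report item (a)

        def fun(self, x):
            self.n_f += 1
            v = self.f(x)
            self.best = min(self.best, v)
            if (self.first_hit is None and self.f_opt is not None
                    and v <= self.f_opt + self.tol):
                self.first_hit = (self.n_f, self.n_g)
            return v

        def gradient(self, x):
            self.n_g += 1
            return self.grad(x)

    # After the run: oracle.first_hit holds the counts for (a),
    # (oracle.n_f, oracle.n_g) the counts for (b), and oracle.best
    # the lowest function value reached.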

Report times used in units of 1 = time for 1000 objective function (and constraint) evaluations. (This seems more useful than the traditions of either reporting absolute times or reporting times relative to the Shekel 5 function, which may be completely unrelated to the kind of expressions used in the function at hand. There is some discussion about changing the unit to something that is independent of the programming style of the researcher, so this convention may well change soon. Before you do your final tests and submit timings, please check this page again to see whether anything has changed.)
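One way to obtain this unit in practice (only a sketch; the sample point and the use of wall-clock timing are my assumptions) is to time 1000 evaluations of the objective on the same machine and divide the solver's running time by that baseline:

    import time

    def time_unit(f, x_sample, n=1000):
        # Wall-clock time for n objective evaluations (the reporting unit).
        t0 = time.perf_counter()
        for _ in range(n):
            f(x_sample)
        return time.perf_counter() - t0

    # reported_time = solver_seconds / time_unit(f, x_sample)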

For probabilistic algorithms, run the method at least 10 times (for smaller problems, 100 times), and report the quotient q = -log(probability of missing the global minimizer) / (time used). Here the natural logarithm should be used, and to make the figures more machine independent, the time used should again be measured in units of 1 = time for 1000 function evaluations. The probability of success when the method is run repeatedly for the time equivalent to 1000*T function evaluations is then 1 - exp(-qT).
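A possible way to estimate q from such repeated runs is sketched below; the tolerance for deciding a miss, the use of the mean run time, and the correction applied when no run misses are my assumptions, not part of the recommendation.

    import math

    def estimate_q(run_results, f_opt, tol=1e-6):
        # run_results: list of (best_value, time_in_units) over the independent runs,
        # with time measured in units of 1 = time for 1000 function evaluations.
        runs = len(run_results)
        misses = sum(1 for best, _ in run_results if best > f_opt + tol)
        mean_time = sum(t for _, t in run_results) / runs
        if misses == 0:
            misses = 0.5   # all runs succeeded; crude correction, q is then only a lower bound
        p_miss = misses / runs
        return -math.log(p_miss) / mean_time   # natural logarithm, as required

    # Success probability when run for the equivalent of 1000*T evaluations: 1 - exp(-q*T)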

Be specific about the program name and version, the parameter settings used to tune your algorithm, the starting points used (if there is a choice in your algorithm), and the stopping rule used. In particular, for each problem collection, all problems should be handled with the same set of parameters and stopping rules. If you test only part of a collection, you should give your reasons. If you use test problems from several collections, please make separate tables.
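It may help to keep this information together in a small machine-readable header for each table; the layout and all values below are merely an example, not a required format.

    report_header = {
        "code":           "MySolver 2.1",                     # program name and version (example)
        "parameters":     {"popsize": 50, "maxiter": 10000},  # same settings for the whole collection
        "starting_point": "problem-supplied x0",              # if the algorithm needs one
        "stopping_rule":  "no improvement for 1000 evaluations",
        "collection":     "name of the test problem collection",
    }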

You don't need to disclose details or code of your method, but give at least an idea of the techniques used. And if you use one of the codes listed above for comparison, please include those results, too.

If you test one of the algorithms in my list of public domain global optimization codes, please don't be silent when the results are disappointingly bad, but submit them so that others have a more complete basis for assessing the kind of problems where particular algorithms fail. If you find bad results in the database for an algorithm you think is good, try to play with the tuning parameters, and report your experience. In particular, explain your rules of thumb for when to use which parameters.

Test results on other test problem collections are invited, too, provided you supply (or point to) complete online problem definitions in ASCII, HTML or NOP format.

If you'd like to discuss these conditions, please give me your comments.


Global Optimization
my home page (http://www.mat.univie.ac.at/~neum)

Arnold Neumaier (Arnold.Neumaier@univie.ac.at)