[slightly edited version of benchmark discussion from sci-op]
Date: 3 Nov 1995 02:43:17 GMT
From: simon1@bu.edu (Simcha Streltsov)
Subject: Global Optimization benchmark
[Ron van Iwaarden wrote:]
Probably, a real comparison should be based on
    (best value) - (cost) * (N samples)
instead of "maximum found after N iterations", and algorithms should be compared for different values of the cost. (In this case, by the way, no calibration is needed.)
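[Editor's sketch: a minimal Python illustration of this score, with hypothetical names; best_values[n] is assumed to be the best objective value found after n+1 samples of a maximization problem:]

    # Pick the sample count N that maximizes
    # (best value after N samples) - cost * N.
    def benchmark_score(best_values, cost):
        return max(v - cost * (n + 1) for n, v in enumerate(best_values))

    # Compare at several per-sample costs; no fixed iteration budget
    # (and hence no calibration) is needed.
    trace = [0.20, 0.55, 0.71, 0.74, 0.745]  # made-up best-so-far values
    for cost in (0.001, 0.01, 0.1):
        print(cost, benchmark_score(trace, cost))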
Do we have an understanding of "classes" of problems in global optimization? [Similar to local optimization, where, e.g., eigenvalues can tell you something.]
Simon Streltsov
simon1@bu.edu
home page:
http://conx.bu.edu/~simon1/go.html
Date: 3 Nov 1995 16:26:59 GMT
From: Fabio Schoen <schoen@ingfi1.ing.unifi.it>
Subject: Global Optimization benchmark
simon1@bu.edu (Simcha Streltsov) wrote:
My point of view is that we should carry out "honest" experiments. By this I mean that we should use different metrics for the same algorithm. Some possibilities: [...]
All three of the above neglect stopping rules. We can then select "smart" algorithms and experiment with smart stopping rules (which are extremely hard to find and implement) in a sensible way.
I also agree that many kinds of algorithms are based on different requirements (local searches, for example). But this is also the case in NLP: we have methods for continuous optimization, differentiable or not, constrained or unconstrained, using derivative information or not, and so on.
So we are wrong if we look for THE global optimization method: we need methods for unconstrained globopt (with and without local searches), for box-constrained problems, for concave minimization, for d.c. problems, for Lipschitz continuous functions, and so on.
Fabio Schoen
Date: 9 Nov 1995 20:50:21 GMT
From: simon1@bu.edu (Simcha Streltsov)
Subject: Global Optimization benchmark
Fabio Schoen (schoen@ingfi1.ing.unifi.it) writes:
Fabio proposes to use: [...]
It would be OK if your test functions are a good reflection of the real problem, but for a general global optimization algorithm it may create a difficulty.
> 2) track the progress towards the global optimum (when do we reach
> 50%, 75%, 90%, ... precision)
This is probably an improvement over 1).
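[Editor's sketch of metric 2 in Python, assuming the true optimum f_opt and a reference starting value f_start are known; all names are hypothetical:]

    # First iteration at which the gap to the known optimum has closed
    # by each precision level (minimization; 0.9 = 90% of the way there).
    def iterations_to_precision(best_values, f_opt, f_start,
                                levels=(0.5, 0.75, 0.9)):
        out = {}
        for p in levels:
            target = f_opt + (1.0 - p) * (f_start - f_opt)
            out[p] = next((n + 1 for n, v in enumerate(best_values)
                           if v <= target), None)  # None = never reached
        return out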
Local optimization algorithms usually differ in cost by a factor of O(dimensions), depending on how many derivatives they use. Global optimization algorithms may differ by a factor of O(iterations), since you may want to use information about all previous samples to update your global model.
Therefore, it may be interesting to use a performance measure
    J = (optimal_value) - (computational_time) * (time_unit_cost)
and to test algorithms for different values of time_unit_cost, each algorithm using its own stopping criterion.
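[Editor's sketch: ranking algorithms by J across a range of time_unit_cost values, in Python; each algorithm is assumed to run under its own stopping rule and report (optimal_value, computational_time), and the run data below are made up:]

    def J(optimal_value, computational_time, time_unit_cost):
        return optimal_value - computational_time * time_unit_cost

    runs = {"A": (0.95, 120.0), "B": (0.90, 15.0)}  # (value, CPU seconds)
    for c in (1e-4, 1e-3, 1e-2):
        best = max(runs, key=lambda k: J(runs[k][0], runs[k][1], c))
        print("time_unit_cost=%g -> %s" % (c, best))

    # Cheap time (small c) favors the sample-hungry method A; expensive
    # time favors the frugal method B.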
Simon Streltsov
simon1@bu.edu
http://conx.bu.edu/~simon1/go.html
Dept. of Manufacturing Engineering/ OR
Boston University