*It is God's privilege to conceal things,
but the kings' pride is to research them.
(Proverbs 25:2)*

A = Analysis, C = Combinatorics, F = Foundations,

L = Linear Algebra, N = Numerical Analysis, O = Optimization,

P = Physics/Chemistry, S = Statistics

* = book

For preprints of the papers listed below,
abstracts and ps and/or pdf files are available online;
for the published version see the references given.

For my remaining publications,
scanned copies of the published version are available in my
list of publications.

For manuscripts with an e-print number, you can also get the LaTeX
source (of some version of the paper)
from an e-print archive such as
http://xxx.uni-augsburg.de

I do **not** send out paper copies of my manuscripts.
If you have problems downloading, decoding, or printing a file, look at
*downloading/printing problems?*

O000.

We propose new algorithms for (i) the local optimization of bound constrained quadratic programs, (ii) the solution of general definite quadratic programs, and (iii) finding a point satisfying given linear equations and inequalities or a certificate of infeasibility. The algorithms are implemented in Matlab and tested against state-of-the-art quadratic programming software.

NO000.

The objective of optimal tearing is to maximize the number of variables eliminated by solving univariate equations. First, a simple algorithm is presented for automatically identifying feasible assignments: If an equation can be solved symbolically for one of its variables, and the solution is unique, explicit, and numerically stable, then, and only then, this equation-variable pair represents a feasible assignment. Tearing searches for the optimal elimination order over these numerically safe feasible assignments. We give a novel integer programming formulation of optimal tearing. Unfortunately, a large amount of permutation symmetry can cause performance problems; therefore, we developed a custom branch and bound algorithm. Based on performance tests on the COCONUT Benchmark, we consider the proposed branch and bound algorithm practical (a) for the purposes of global optimization, and (b) in cases where systems with the same sparsity pattern are solved repeatedly and the time spent on tearing pays off.
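The elimination over numerically safe assignments can be illustrated with a greedy sketch; this is not the paper's integer programming formulation or branch and bound algorithm, and the equation names, variable names, and `safe` list below are invented for illustration. Equations are abstracted as sets of variables, and an equation is solved for a variable as soon as all its other variables are already known.

```python
def greedy_tearing(equations, safe):
    """Greedy sketch of tearing: repeatedly eliminate a variable by a
    univariate solve. An equation becomes univariate once all of its
    other variables are known; only numerically safe pairs are used."""
    known, order = set(), []
    changed = True
    while changed:
        changed = False
        for eq, var in safe:
            if var not in known and equations[eq] - known == {var}:
                known.add(var)            # var is now eliminated
                order.append((eq, var))
                changed = True
    return order  # elimination order; any remaining variables must be torn

# Toy system: e1 involves x; e2 involves x, y; e3 involves y, z, w.
equations = {"e1": {"x"}, "e2": {"x", "y"}, "e3": {"y", "z", "w"}}
safe = [("e1", "x"), ("e2", "y"), ("e3", "z")]
print(greedy_tearing(equations, safe))  # e3 stays: two variables unknown
```

The greedy pass finds a feasible ordering quickly but, unlike the optimal formulation in the paper, offers no guarantee of maximizing the number of eliminations.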

NO000.

The goal of tearing is to reduce the computation time needed to solve a given system of equations by exploiting its sparsity pattern: A sequence of small subproblems is solved according to an ordering. The ultimate objective of tearing is to find the sequence that results in the most savings with respect to the solution time including the time to find the sequence. Since this objective would be too complicated to handle rigorously, compromises have to be made. The choice of the objective function is a matter of judgment and highly context dependent. In the context of nonlinear systems of equations, a popular objective is to maximize the number of variables eliminated by solving univariate equations. This is significantly different from the objective of the so-called fill-reducing orderings. The present paper is the first of two reports on tearing; here, we review the published exact and heuristic methods.

NO000.

Given a directed graph G, a feedback arc set of G is a subset of its edges containing at least one edge of every cycle in G. Finding a feedback arc set of minimum cardinality is the minimum feedback arc set problem. The present paper focuses on large and sparse graphs. The minimum set cover formulation of the minimum feedback arc set problem is practical as long as all the simple cycles in G can be enumerated. Unfortunately, even sparse graphs can have Omega(2^n) simple cycles, and such graphs appear in practice. An exact method is proposed that enumerates simple cycles in a lazy fashion, and extends an incomplete cycle matrix iteratively in the hope that only a tractable number of cycles has to be enumerated until a minimum feedback arc set is found. Numerical results are given on a test set containing large and sparse test graphs relevant for industrial applications.
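The lazy scheme described above can be sketched on a toy graph; the sketch uses a brute-force hitting set in place of a set cover solver and a plain DFS cycle search, so it is only a minimal illustration of the iteration "solve the incomplete cycle matrix, look for a missed cycle, repeat", not the paper's exact method.

```python
from itertools import combinations

def find_cycle(nodes, edges):
    """Return the edge list of one directed cycle, or None if acyclic (DFS)."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    color, parent = {u: 0 for u in nodes}, {}
    def dfs(u):
        color[u] = 1                           # gray: on the current path
        for v in adj[u]:
            if color[v] == 0:
                parent[v] = u
                cyc = dfs(v)
                if cyc:
                    return cyc
            elif color[v] == 1:                # back edge closes a cycle
                cyc, w = [(u, v)], u
                while w != v:                  # walk the DFS path back to v
                    cyc.append((parent[w], w))
                    w = parent[w]
                return cyc
        color[u] = 2                           # black: fully explored
        return None
    for u in nodes:
        if color[u] == 0:
            cyc = dfs(u)
            if cyc:
                return cyc
    return None

def min_feedback_arc_set(nodes, edges):
    cycles = []                                # incomplete cycle matrix
    while True:
        # smallest edge set hitting every enumerated cycle (brute force)
        hit = next(set(c) for k in range(len(edges) + 1)
                   for c in combinations(edges, k)
                   if all(set(c) & set(cyc) for cyc in cycles))
        remaining = [e for e in edges if e not in hit]
        cyc = find_cycle(nodes, remaining)
        if cyc is None:
            return hit                         # hits all cycles: optimal FAS
        cycles.append(cyc)                     # lazily add the missed cycle

edges = [(1, 2), (2, 3), (3, 1), (3, 4), (4, 2)]
print(min_feedback_arc_set([1, 2, 3, 4], edges))  # {(2, 3)}
```

Since every minimum hitting set of a subset of the cycles is a lower bound, the first hitting set that leaves the graph acyclic is provably optimal; the hope expressed in the abstract is that this happens long before all cycles are enumerated.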

O000.

This paper describes an algorithm for solving structured nonsmooth convex optimization problems using the optimal subgradient algorithm (OSGA), a first-order method with optimal complexity both for Lipschitz continuous nonsmooth problems and for smooth problems with Lipschitz continuous gradient. If the nonsmoothness of the problem is manifested in a structured way, we reformulate the problem in a form that can be solved efficiently by OSGA with the complexity for the smooth case. To solve the reformulated problem, we equip OSGA with an appropriate prox-function for which the OSGA subproblem can be solved either in a closed form or by a simple iterative scheme, which decreases the computational cost of applying the algorithm, especially for large-scale problems. We show that applying the new scheme is feasible for many problems arising in applications. Some numerical results are reported.

O000.

This paper presents an acceleration of the optimal subgradient algorithm OSGA for solving convex optimization problems, where the objective function involves costly affine and cheap nonlinear terms. We combine OSGA with a multidimensional subspace search technique, which leads to a low-dimensional problem that can be solved efficiently. Numerical results concerning some applications are reported. A software package implementing the new method is available.

O000.

This paper shows that the OSGA algorithm - which uses first-order information to solve convex optimization problems with optimal complexity - can be used to efficiently solve arbitrary bound-constrained convex optimization problems. This is done by constructing an explicit method as well as an inexact scheme for solving the bound-constrained rational subproblem required by OSGA. This leads to an efficient implementation of OSGA on large-scale problems in applications arising in signal and image processing, machine learning and statistics.

Numerical experiments demonstrate the promising performance of OSGA on such problems. A software package implementing OSGA for bound-constrained convex problems is available.

O000.

pdf file (172K)

This paper gives a concise, mathematically rigorous description of phenomenological equilibrium thermodynamics for single-phase systems in the absence of chemical reactions and external forces.

The present approach is similar to that of Callen, who introduces in his well-known thermodynamics book the basic concepts by means of a few postulates from which everything else follows. His setting is modified to match the more fundamental approach based on statistical mechanics. Thermodynamic stability is derived from kinematical properties of states outside equilibrium by rigorous mathematical arguments, superseding Callen's informal arguments that depend on a dynamical assumption close to equilibrium.

From the formulas provided, it is an easy step to go to various examples and applications discussed in standard textbooks such as Callen or Reichl. A full discussion of global equilibrium would also involve the equilibrium treatment of multiple phases and chemical reactions. Since their discussion offers no new aspects compared with traditional textbook treatments, they are not treated here.

P000.

arXiv:0810.1019

pdf file (3165K)

The goal of this book is to present classical mechanics, quantum mechanics, and statistical mechanics in an almost completely algebraic setting, thereby introducing mathematicians, physicists, and engineers to the ideas relating classical and quantum mechanics with Lie algebras and Lie groups. The book emphasizes the closeness of classical and quantum mechanics, and the material is selected in a way to make this closeness as apparent as possible.

Much of the material covered here is not part of standard textbook treatments of classical or quantum mechanics (or is only superficially treated there). For physics students who want to get a broader view of the subject, this book may therefore serve as a useful complement to standard treatments of quantum mechanics.

Almost without exception, this book is about precise concepts and exact results in classical mechanics, quantum mechanics, and statistical mechanics. The structural properties of mechanics are discussed independent of computational techniques for obtaining quantitatively correct numbers from the assumptions made. The standard approximation machinery for calculating from first principles explicit thermodynamic properties of materials, or explicit cross sections for high energy experiments can be found in many textbooks and is not repeated here.

O000.

This paper shows that the optimal subgradient algorithm, OSGA, can be used for solving structured large-scale convex constrained optimization problems. Only first-order information is required, and the optimal complexity bounds for both smooth and nonsmooth problems are attained. More specifically, we consider two classes of problems:

(i) a convex objective with a simple closed convex domain, where the orthogonal projection on this feasible domain is efficiently available;

(ii) a convex objective with a simple convex functional constraint.

If we equip OSGA with an appropriate prox-function, the OSGA subproblem can be solved either in a closed form or by a simple iterative scheme, which is especially important for large-scale problems. We report numerical results for some applications to show the efficiency of the proposed scheme. A software package implementing OSGA for the above domains is available.
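For case (i), "efficiently available orthogonal projection" means maps like the following two standard examples (projection onto a box and onto a Euclidean ball); the function names are ours and the snippet is only a sketch of what such a projection oracle looks like, not part of the OSGA package.

```python
def project_box(x, lo, hi):
    """Orthogonal projection onto the box {lo <= x <= hi}: componentwise clipping."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def project_ball(x, radius=1.0):
    """Orthogonal projection onto the Euclidean ball of given radius:
    rescale the point if it lies outside, keep it otherwise."""
    norm = sum(xi * xi for xi in x) ** 0.5
    if norm <= radius:
        return list(x)
    return [radius * xi / norm for xi in x]

print(project_box([2.0, -3.0, 0.5], [-1.0] * 3, [1.0] * 3))  # [1.0, -1.0, 0.5]
print(project_ball([3.0, 4.0], radius=1.0))                  # [0.6, 0.8]
```

Both projections cost only O(n) per call, which is what makes projection-based schemes attractive for the large-scale problems mentioned above.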

O179.

supplementary material, including source code

Tearing is a long-established decomposition technique, widely adopted across many engineering fields. It reduces the task of solving a large and sparse nonlinear system of equations to that of solving a sequence of low-dimensional ones. The most serious weakness of this approach is well-known: It may suffer from severe numerical instability. The present paper resolves this flaw for the first time.

The new approach requires reasonable bound constraints on the variables. The worst-case time complexity of the algorithm is exponential in the size of the largest subproblem of the decomposed system. Although there is no theoretical guarantee that all solutions will be found in the general case, increasing the so-called sample size parameter of the method improves robustness.

This is demonstrated on two particularly challenging problems. Our first example is the steady-state simulation of a challenging distillation column, belonging to an infamous class of problems where tearing often fails due to numerical instability. This column has three solutions, one of which is missed not only by tearing but even by problem-specific methods that are not based on tearing. The other example is the Stewart-Gough platform with 40 real solutions, an extensively studied benchmark in the field of numerical algebraic geometry. For both examples, all solutions are found with a fairly small amount of sampling.

O178.

An important aspect in the solution process of constraint satisfaction problems is to identify exclusion boxes which are boxes that do not contain feasible points. This paper presents a certificate of infeasibility for finding such boxes by solving a linearly constrained nonsmooth optimization problem. Furthermore, the constructed certificate can be used to enlarge an exclusion box by solving a nonlinearly constrained nonsmooth optimization problem.

P177.

pdf file (363K)

We describe an asymptotically fast variant of the LLL lattice reduction algorithm. It returns a reduced basis of the Euclidean lattice spanned by an input basis, whose first vector satisfies stronger bounds than previous polynomial algorithms. It terminates within O(n
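For orientation, here is a floating-point sketch of the classical LLL algorithm with the usual parameter delta = 3/4; the paper's asymptotically fast variant differs substantially, so this is only the textbook baseline, and the repeated Gram-Schmidt recomputation is deliberately naive.

```python
def gram_schmidt(b):
    """Orthogonalize basis b; return orthogonal vectors and mu coefficients."""
    n = len(b)
    ortho = [list(v) for v in b]
    mu = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            denom = sum(x * x for x in ortho[j])
            mu[i][j] = sum(x * y for x, y in zip(b[i], ortho[j])) / denom
            ortho[i] = [x - mu[i][j] * y for x, y in zip(ortho[i], ortho[j])]
    return ortho, mu

def lll(b, delta=0.75):
    """Textbook LLL reduction (floating point, for illustration only):
    size-reduce b[k] against earlier vectors, then either accept it
    (Lovasz condition) or swap and step back."""
    b = [list(v) for v in b]
    n = len(b)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):         # size reduction
            _, mu = gram_schmidt(b)
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        ortho, mu = gram_schmidt(b)
        sq = lambda v: sum(x * x for x in v)
        if sq(ortho[k]) >= (delta - mu[k][k - 1] ** 2) * sq(ortho[k - 1]):
            k += 1                             # Lovasz condition holds
        else:
            b[k - 1], b[k] = b[k], b[k - 1]    # swap and step back
            k = max(k - 1, 1)
    return b

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))  # a reduced basis, short first vector
```

Recomputing the full Gram-Schmidt data after each update makes the sketch quadratically slower than necessary; avoiding exactly this kind of overhead (and controlling the arithmetic precision) is the point of the fast variants.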

P176.

The paper describes improved analysis techniques for basis reduction that allow one to prove strong complexity bounds and reduced basis guarantees for traditional reduction algorithms and some of their variants. This is achieved by a careful exploitation of the linear equations and inequalities relating various bit sizes before and after one or more reduction steps.

P175.

pdf file (365K)

In 1960, E.H. Brown defined a set of characteristic curves (also known as ideal curves) of pure fluids, along which some thermodynamic properties match those of an ideal gas. These curves are used for testing the extrapolation behaviour of equations of state. This work is revisited, and an elegant representation of the first-order characteristic curves as level curves of a master function is proposed. It is shown that Brown's postulate - that these curves are unique and dome-shaped in a double-logarithmic p,T representation - may fail for fluids exhibiting a density anomaly. A careful study of the Amagat curve (Joule inversion curve) generated from the IAPWS-95 reference equation of state for water reveals the existence of an additional branch.

P174.

pdf file (313K)

Brown's characteristic curves (also known as ideal curves) describe states at which one thermodynamic property of a real pure fluid matches that of an ideal gas; these curves can be used for testing the extrapolation behaviour of equations of state. In this work, some characteristic curves are computed directly - without recourse to an equation of state - for some pair potentials by Monte Carlo computer simulation. The potentials used are an ab-initio potential for argon, the 1-center Lennard-Jones potential, and a softer pair potential whose short-range part is in accordance with quantum mechanical predictions. The influence of the short-distance repulsion on the characteristic curves is found to be significant even in the 10-100 MPa pressure range.
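The 1-center Lennard-Jones potential mentioned above has the closed form u(r) = 4 epsilon ((sigma/r)^12 - (sigma/r)^6); a minimal sketch in reduced units follows (the parameter defaults are ours, purely illustrative):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """1-center Lennard-Jones pair potential:
    u(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6)."""
    s6 = (sigma / r) ** 6
    return 4.0 * epsilon * (s6 * s6 - s6)

# The potential vanishes at r = sigma and attains its minimum -epsilon
# at r = 2**(1/6) * sigma.
print(lennard_jones(1.0))           # 0.0
print(lennard_jones(2 ** (1 / 6)))  # approximately -1.0
```

The r^-12 term is exactly the "short-distance repulsion" whose softening the paper investigates; the ab-initio and softened potentials replace it with quantum-mechanically motivated forms.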

N173.

pdf file (275K)

Given linear elliptic partial differential equations with mixed boundary conditions, with uncertain parameters constrained by inequalities, it is shown how to use finite element approximations to compute worst case a posteriori error bounds for linear response functionals determined by the solution. All discretization errors are taken into account.

The bounds are based on the dual weighted residual (DWR) method of Becker & Rannacher and treat the uncertainties with an optimization approach.

The method was implemented for Poisson-like equations with an uncertain mass distribution and mixed Dirichlet/Neumann boundary conditions on arbitrary polygonal domains. No knowledge of domain-dependent a priori constants is necessary.

To get the error bounds, a first order formulation is used whose solution with linear finite elements produces compatible piecewise linear approximations of the solution and its gradient. The bounds are then produced from the solutions of nine related boundary value problems.

O172.

pdf file (971K)

This paper presents new techniques for pruning boxes in the presence of an additional quadratic constraint, a problem relevant for branch and bound methods for global optimization and constraint satisfaction. This is done by generating powerful linear and parabolic relaxations from a quadratic constraint and bound constraints, which are then subject to standard constraint propagation techniques. The techniques are often applicable even if the original box is unbounded in some but not all variables.

As an auxiliary tool - needed to make our theoretical results implementable in floating-point arithmetic without sacrificing mathematical rigor - we extend the directed Cholesky factorization from Domes and Neumaier (SIAM J. Matrix Anal. Appl. 32 (2011), 262-285) to a partial directed Cholesky factorization with pivoting. If the quadratic constraint is convex and the initial bounds are sufficiently wide, the final relaxation and the enclosure are optimal up to rounding errors.

Numerical tests show the usefulness of the new factorization methods in the context of pruning.

O171.

pdf file (186K)

This paper presents an algorithm for approximately minimizing a convex function in simple, not necessarily bounded convex domains, assuming only that function values and subgradients are available. No global information about the objective function is needed apart from a strong convexity parameter (which can be put to zero if only convexity is known).

The worst case number of iterations needed to achieve a given accuracy is independent of the dimension and - apart from a constant factor - best possible under a variety of smoothness assumptions on the objective function.
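The setting can be illustrated with a classical (non-optimal) subgradient scheme that likewise uses only function values and subgradients; the sketch below is not the paper's algorithm, which attains the optimal complexity, and the diminishing step size and the example problem are our choices.

```python
def subgradient_method(f, subgrad, x0, steps=2000):
    """Classical subgradient method with diminishing steps t_k = 1/sqrt(k).
    The best point seen is tracked, since f need not decrease monotonically."""
    x, best = x0, x0
    for k in range(1, steps + 1):
        g = subgrad(x)
        x = x - g / k ** 0.5
        if f(x) < f(best):
            best = x
    return best

f = lambda x: abs(x - 3.0)   # nonsmooth convex, unique minimizer at 3
subgrad = lambda x: 1.0 if x > 3.0 else (-1.0 if x < 3.0 else 0.0)
print(subgradient_method(f, subgrad, 0.0))  # close to 3
```

With such step sizes the iterate oscillates around the minimizer with amplitude of the order of the current step; the point of optimal methods is to reach a prescribed accuracy in provably fewer iterations.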

O170.

pdf file (1198K)

In rigorous constrained global optimization, upper bounds on the objective function help to reduce the search space. Their construction requires finding a narrow box around an approximately feasible solution, verified to contain a feasible point. Approximations are easily found by local optimization, but the verification often fails.

In this paper we show that even if the verification of an approximate feasible point fails, the information extracted from the local optimization can still be used in many cases to reduce the search space. This is done by a rigorous filtering technique called constraint aggregation. It forms an aggregated redundant constraint, based on approximate Lagrange multipliers or on a vector valued measure of constraint violation. Using the optimality conditions, two sided linear relaxations, the Gauss-Jordan algorithm and a directed Cholesky factorization, the information in the redundant constraint is turned into powerful bounds on the feasible set. Constraint aggregation is especially useful since it also works in a tiny neighborhood of the global optimizer, thereby reducing the cluster effect.

A simple example demonstrates how our new method works. Extensive tests show the performance on a large benchmark.

O169.

pdf file (705K)

This paper considers the problem of finding and verifying feasible points for constraint satisfaction problems, including those with uncertain coefficients. The main part is devoted to the problem of finding a narrow box around an approximately feasible solution for which it can be rigorously and automatically proved that it contains a feasible solution. Some examples demonstrate difficulties when attempting verification. We review some existing methods and present a new method for verifying the existence of feasible points of constraint satisfaction problems in an automatically determined narrow box. Numerical tests within GloptLab, a solver developed by the authors, show how the different methods perform. Also discussed are the search for approximately feasible points and the use of approximately feasible points within a branch and bound scheme for constraint satisfaction.

N168.

pdf file (1012K)

The numerical computation of the exponential of a real matrix has been studied intensively. A good numerical method must cope with round-off errors at an acceptable computational cost. The situation is more complicated when dealing with interval matrix exponentiation: Indeed, the main problem is now the dependence loss in the interval evaluation due to multiple occurrences of some variables, which may lead to enclosures that are too wide to be useful. In this paper, the problem of computing a sharp enclosure of the interval matrix exponential is proved to be NP-hard. Then the scaling and squaring method is adapted to interval matrices and shown to drastically reduce the dependence loss compared with the interval evaluation of the Taylor series.

Although most of what is presented in this paper seems to be known to the experts, one can find nowhere a coherent source for the results. The present paper fills the gap, and adds numerical examples and new insights.

S167.

pdf file (2341K)

Microbial contamination of groundwater used for drinking water production is of major concern to public health, local water authorities and suppliers. A preventive approach to protect raw water sources emphasizes the identification of potential hazards. We propose a non-parametric data mining technique to explore the concentrations of total coliforms (TC) in a groundwater abstraction well and their relationships to continuous time series of readily available hydrometric monitoring parameters (seven year time record of precipitation, river water levels, groundwater heads). The initial monitoring parameters were used to create an extensive data set of explanatory variables by considering different accumulation or averaging periods, as well as temporal offsets of the predictors. The recursive partitioning algorithm Chi-Squared Automatic Interaction Detection (CHAID) revealed statistically significant relationships of heavy precipitation to the occurrence of TC in the production well and a close-by monitoring well. Additionally, different predictors for the two well systems were identified: elevated water levels of the river and short-term fluctuations were indicative of TC in the observation well, suggesting infiltration of contaminated river water into the aquifer. TC in the production well were related to elevated groundwater heads and fluctuations. The creation of generic variables proved to be useful in increasing the significance of the predictors. Also, the prediction of TC based on hydrometric variables worked fairly well with an overall correct prediction rate of 90% and about 50% of positive test results being correctly classified.

AN166.

pdf file (403K)

In a paper published in 1955, E.M. Wright proved that all solutions of the delay differential equation u'(t) = - alpha u(t-1)(1 + u(t)) converge to zero for alpha in (0, 1.5], and conjectured that this is even true for alpha in (0, pi/2).

The present paper provides a computer-assisted proof of the conjecture for alpha in [1.5, 1.5706] (compare with pi/2 = 1.570796...).
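For intuition only, the conjectured convergence can be observed with an ordinary forward-Euler integration by the method of steps; this is a plain simulation, not the computer-assisted proof (which requires rigorous enclosures), and the step size, constant initial history, and time horizon are our choices.

```python
def wright_amplitude(alpha, u0=0.5, h=0.001, t_end=100.0):
    """Forward-Euler, method-of-steps integration of Wright's equation
    u'(t) = -alpha * u(t-1) * (1 + u(t)), with u(t) = u0 for t <= 0.
    Returns max |u| over the final delay interval [t_end - 1, t_end]."""
    lag = round(1.0 / h)            # number of steps per delay unit
    us = [u0] * (lag + 1)           # constant history; us[-1] is the current u
    for _ in range(round(t_end / h)):
        u, u_delayed = us[-1], us[-lag - 1]
        us.append(u + h * (-alpha * u_delayed * (1.0 + u)))
    return max(abs(v) for v in us[-lag:])

# For alpha = 1.5 the solution oscillates with slowly decaying amplitude.
print(wright_amplitude(1.5))
```

Near alpha = pi/2 the decay rate of the linearization tends to zero, which is one reason the parameter range [1.5, 1.5706] treated in the paper is delicate.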

PN165.

pdf file (460K)

We propose a new form for equations of state (EOS) of thermodynamic systems in the Ising universality class. The new EOS guarantees the correct universality and scaling behavior close to critical points and is formulated in terms of the scaling fields only -- unlike the traditional Schofield representation, which uses a parametric form.

Close to a critical point, the new EOS expresses the square of the strong scaling field as an explicit function of the thermal scaling field and the dependent scaling field. A numerical expression is derived, valid close to critical points.

As a consequence of the construction it is shown that the dependent scaling field can be written as an explicit function of the relevant scaling fields without causing strongly singular behavior of the thermodynamic potential in the one-phase region.

Augmented by additional scaling correction fields, the new EOS also describes the state space further away from critical points. It is indicated how to use the new EOS to model multiphase fluid mixtures, in particular for vapor-liquid-liquid equilibrium (VLLE) where the traditional revised scaling approach fails.

O164.

pdf file (346K)

Branch and bound methods for finding all solutions of a global optimization problem in a box frequently have the difficulty that subboxes containing no solution cannot be easily eliminated if they are close to the global minimum. This has the effect that near each global minimum, and in the process of solving the problem also near the currently best found local minimum, many small boxes are created by repeated splitting, whose processing often dominates the total work spent on the global search.

This paper discusses the reasons for the occurrence of this so-called cluster effect, and how to reduce the cluster effect by defining exclusion regions around each local minimum found, that are guaranteed to contain no other local minimum and hence can safely be discarded. In addition, we will introduce a method for verifying the existence of a feasible point close to an approximate local minimum.

These exclusion regions are constructed using uniqueness tests based on the Krawczyk operator and make use of first, second and third order information on the objective and constraint functions.
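In one dimension the Krawczyk uniqueness test takes the following simple form; the floating-point sketch below ignores the directed rounding needed for full rigor, the function names are ours, and the derivative enclosure is supplied by the caller.

```python
def krawczyk_contains_unique_zero(f, df_point, df_interval, lo, hi):
    """1D Krawczyk test: K(X) = m - C*f(m) + (1 - C*F'(X)) * (X - m),
    with m the midpoint of X = [lo, hi] and C = 1/f'(m). If K(X) lies in
    the interior of X, then X contains exactly one zero of f.
    (Illustration only: a rigorous version needs outward rounding.)"""
    m = 0.5 * (lo + hi)
    C = 1.0 / df_point(m)
    dlo, dhi = df_interval(lo, hi)            # enclosure of f' over [lo, hi]
    a, b = sorted((1.0 - C * dlo, 1.0 - C * dhi))   # interval 1 - C*F'(X)
    r = 0.5 * (hi - lo)
    prods = [a * -r, a * r, b * -r, b * r]    # (1 - C*F'(X)) * (X - m)
    klo = m - C * f(m) + min(prods)
    khi = m - C * f(m) + max(prods)
    return lo < klo and khi < hi              # K(X) inside the interior of X

# f(x) = x^2 - 2 on [1, 2]: f' = 2x is enclosed by [2, 4] on [1, 2].
ok = krawczyk_contains_unique_zero(
    lambda x: x * x - 2.0,
    lambda x: 2.0 * x,
    lambda lo, hi: (2.0 * lo, 2.0 * hi),
    1.0, 2.0)
print(ok)  # True: [1, 2] contains exactly one zero (sqrt(2))
```

A box passing this test can serve as the core of an exclusion region: no second zero can hide inside it, so the branch and bound search may discard it after recording the verified zero.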

NO163.

pdf file (345K) supplementary material, including source code

A globally convergent method is proposed that either returns all solutions to steady-state models of distillation columns or proves their infeasibility. Initial estimates are not required. The method requires a specific but fairly general block-sparsity pattern; in return, the computational efforts grow linearly with the number of stages in the column. The well-known stage-by-stage (and the sequential modular) approach also reduces the task of solving high-dimensional steady-state models to that of solving a sequence of low-dimensional ones. Unfortunately, these low-dimensional systems are extremely sensitive to the initial estimates, so that solving them can be notoriously difficult or even impossible. The proposed algorithm overcomes these numerical difficulties by a new reparameterization technique. The successful solution of a numerically challenging reactive distillation column with seven steady-states shows the robustness of the method. No published software known to the authors could compute all solutions to this difficult model without expert tuning.

O162.

pdf file (331K)

The OPTIMIZATION TEST ENVIRONMENT is an interface to efficiently test different optimization solvers. It is designed as a tool for both developers of solver software and practitioners who just look for the best solver for their specific problem class. It enables users to:

The OPTIMIZATION TEST ENVIRONMENT is free to use for research purposes.

O161.

pdf file (540K)

This study discusses some algorithms for solving high-dimensional convex optimization problems appearing in applied sciences like signal and image processing, machine learning and statistics. We improve an optimal first-order approach for a class of objective functions including costly affine terms by employing a special multidimensional subspace search. We report some numerical results for some imaging problems including nonsmooth regularization terms.

S160.

pdf file (662K)

The eHome assistance system was developed for supporting older people living alone. The complete activity data resulting from 18 months in 11 flats was subjected in anonymised form to a nearly flat-independent statistical analysis. The data analysis resulted in action classes, from which typical behaviour patterns can be deduced. On their basis, the expert system of eHome will be complemented by a self-learning Bayesian network.

N159.

pdf file (662K)

We propose a new interpretation of the generalized overdetermined eigenvalue problem (A-lambda B)v=0 for two rectangular matrices A and B, its stability analysis, and an efficient algorithm for solving it. Usually, the matrix pencil does not have any rank deficient member. Therefore we aim at computing a lambda for which A-lambda B is as close as possible to rank deficient; i.e., we search for a lambda that locally minimizes the smallest singular value over the matrix pencil A-lambda B.

Practically, the proposed algorithm requires O(mn^2) operations for computing all the eigenpairs. We also describe a method to compute practical starting eigenpairs.

The effectiveness of the new approach is demonstrated with numerical experiments.

O158.

pdf file (282K)

Many mixed-integer constraint satisfaction problems and global optimization problems contain some variables with unbounded domains. Their solution by branch and bound methods to global optimality poses special challenges as the search region is infinitely extended. Many usually strong bounding methods lose their efficiency or fail altogether when infinite domains are involved.

Most implemented branch and bound solvers add artificial bounds to make the problem bounded, or require the user to add these. However, if these bounds are too small, they may exclude a solution, while if they are too large, the search in the resulting huge but bounded region may be very inefficient. Moreover, if the global solver must provide a rigorous guarantee (as for the use in computer-assisted proofs), such artificial bounds are not permitted without justification by proof.

We developed methods based on compactification and projective geometry as well as asymptotic analysis to cope with the unboundedness in a rigorous manner. Based on projective geometry we implemented two different versions of the basic idea, namely (i) projective constraint propagation, and (ii) projective transformation of the variables, in the rigorous global solvers COCONUT and GloptLab. Numerical tests demonstrate the capability of the new technique, combined with standard pruning methods, to rigorously solve unbounded global problems. In addition, we present a generalization of projective transformation based on asymptotic analysis.

Compactification and projective transformation, as well as asymptotic analysis, are useless in discrete situations. But they can very well be applied to compute bounded relaxations, and we present methods for doing that in an efficient manner.
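A one-variable sketch conveys the compactification idea: an unbounded variable x is represented by a bounded parameter t in (-1, 1). The particular transformation x = t/(1 - t^2) below is our illustrative choice, not necessarily the one used in the solvers mentioned above.

```python
def to_unbounded(t):
    """Map the bounded parameter t in (-1, 1) to x in (-inf, inf);
    t near +-1 corresponds to x near +-infinity."""
    return t / (1.0 - t * t)

def to_bounded(x):
    """Inverse map: solve x*t**2 + t - x = 0 for the root in (-1, 1)."""
    if x == 0.0:
        return 0.0
    return (-1.0 + (1.0 + 4.0 * x * x) ** 0.5) / (2.0 * x)

# Any finite x, however large, lands strictly inside the bounded box (-1, 1),
# so standard branch and bound can operate on t instead of x.
x = 1234.5
t = to_bounded(x)
print(-1.0 < t < 1.0, abs(to_unbounded(t) - x) < 1e-6)  # True True
```

After such a substitution the search box for t is compact, and only the boundary points t = +-1 (the "points at infinity") need separate asymptotic treatment, which is what the projective and asymptotic machinery of the paper formalizes in several variables.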

N157.

pdf file (295K)

We use preconditioned defect estimates and optimization techniques to compute error bounds for approximate solutions of initial value problems for ordinary differential equations with uncertain initial conditions. The bounds are based on new results about conditional differential inequalities, which are developed, too.

The scheme is implemented in MATLAB and AMPL, and the resulting enclosures are compared with the packages VALENCIA-IVP, VNODE-LP and VSPODE for bounding solutions of ODEs. The current prototype implementation uses heuristics to solve the global optimization subproblems, hence the bounds obtained in the numerical experiments are not fully rigorous. They could be made so by using rigorous global optimization and rounding error control, but the effect on the bounds is likely to be marginal only.

P156.

GLOBAL is a multistart type stochastic method for bound constrained global optimization problems. Its goal is to find the best local minima that are potentially global. For this reason it involves a combination of sampling, clustering, and local search. The role of clustering is to reduce the number of local searches by forming groups of points around the local minimizers from a uniformly sampled domain and to start a few local searches in each of those groups. We evaluate the performance of the GLOBAL algorithm on the BBOB~2009 noiseless testbed, containing problems which reflect the typical difficulties arising in real-world applications. The obtained results are also compared with those obtained from the simple multistart procedure in order to analyze the effects of the applied clustering rule. An improved parametrization is introduced in the GLOBAL method and the performance of the new procedure is compared with the performance of the MATLAB GlobalSearch solver by using the BBOB~2010 test environment.

AC155.

supplementary material, including source code

Chemical process models are highly structured. Information on how the hierarchical components are connected helps to solve the model efficiently. The structural information retrieved from the JModelica environment will play an important role in the development of our novel optimization methods.

Foundations of a Modelica library for general-purpose chemical process modeling have been built. Multiple steady-states in ideal two-product distillation were computed as a proof of concept. The Modelica source code is available at the project homepage. The issues encountered during modeling may be valuable to the Modelica language designers.

F154.

pdf file (580K)

software

This paper introduces a dynamic generalized parser aimed primarily at natural mathematical language. Our algorithm combines the efficiency of GLR parsing, the dynamic extensibility of tableless approaches and the expressiveness of extended context-free grammars such as parallel multiple context-free grammars (PMCFGs). In particular, it supports efficient dynamic rule additions to the grammar at any moment.

The algorithm is designed in a fully incremental way, allowing parsing to be resumed with additional tokens without restarting the parse process, and can predict possible next tokens. Additionally, we handle constraints on the token following a rule. This allows for grammatically correct English indefinite articles when working with word tokens. It can also represent typical operations for scannerless parsing such as maximal matches when working with character tokens. Our long-term goal is to computerize a large library of existing mathematical knowledge using the new parser. In this paper, we present the algorithmic ideas behind our approach, give a short overview of the implementation, and present some efficiency results.

A153.

We discuss properties of continuity notions for multi-valued mappings which allow disconnected images, but still have a useful zero property. The starting point is the notion of c-continuity introduced by the second author in the book Interval Methods for Systems of Equations to study enclosure properties of interval Newton operators. It was claimed in that book that c-continuity possesses a zero property. However, we provide a counterexample. Two other continuity notions are introduced and examined, and applied in a logical context.

FO152.

pdf file (135K)

We discuss a framework for the representation and processing of mathematics developed within and for the MoSMath project. The MoSMath project aims to create a software system that is able to translate optimization problems from an almost natural language to the algebraic modeling language AMPL.

As part of a greater vision (the FMathL project), this framework is designed both to serve the optimization-oriented MoSMath project, and to provide a basis for the much more general FMathL project.

We introduce the semantic memory, a data structure to represent semantic information, a type system to define and assign types to data, and the semantic virtual machine (SVM), a low-level, Turing-complete programming system that processes data represented in the semantic memory.

Two features that set our approach apart from other frameworks are the possibility to reflect every major part of the system within the system itself, and the emphasis on the context-awareness of mathematics.

Arguments are given why this framework appears to be well suited for the representation and processing of arbitrary mathematics. We also discuss which mathematical content the framework is currently able to represent and interface.

O151.

pdf file (368K)

This paper presents rigorous filtering methods for continuous constraint satisfaction problems based on linear relaxations. Filtering or pruning stands for reducing the search space of constraint satisfaction problems. We discuss existing methods, partial improvements of them, and new approaches for rigorously enclosing the solution set of linear systems of inequalities. We also discuss different methods for computing linear relaxations. This allows one to customize combinations of relaxation and filtering. Care is taken to ensure that all methods correctly account for rounding errors in the computations. Although most of the results apply more generally, strong emphasis is given to relaxing and filtering quadratic constraints, as implemented in the GloptLab environment, which internally exploits a quadratic structure. Demonstrative examples and tests comparing the different linear relaxation methods are also presented.

FO150.

pdf file (1469K)

While mathematicians already benefit from the computer as regards numerical problems, visualization, symbolic manipulation, typesetting, etc., there is no common facility to store and process information, and mathematicians usually have to communicate the same mathematical content multiple times to the computer.

We are in the process of creating and implementing a framework that is capable of representing and interfacing optimization problems, and we argue that this framework can be used to represent arbitrary mathematics and contribute towards a universal mathematical database.

N149.

This paper discusses the rigorous enclosure of an ellipsoid by a rectangular box, its interval hull, providing a convenient preprocessing step for constrained optimization problems.

A quadratic inequality constraint with a strictly convex Hessian matrix defines an ellipsoid. The Cholesky factorization can be used to transform a strictly convex quadratic constraint into a norm inequality, for which the interval hull is easy to compute analytically. In exact arithmetic, the Cholesky factorization of a nonsingular symmetric matrix exists iff the matrix is positive definite. However, to cope efficiently with rounding errors in inexact arithmetic is nontrivial.
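In floating point, ignoring the rounding-error issues that the directed Cholesky factorization is designed to handle, the reduction of a strictly convex quadratic constraint to a norm inequality and the analytic interval hull look as follows (an illustrative sketch; the function name is ours):

```python
import numpy as np

def interval_hull_of_ellipsoid(A, r=1.0):
    """Interval hull of the ellipsoid {x : x^T A x <= r^2} for
    symmetric positive definite A, via the Cholesky factorization
    A = R^T R.  Substituting x = R^{-1} y turns the constraint into
    the norm inequality ||y|| <= r, whose image has the analytic hull
    |x_i| <= r * ||row_i(R^{-1})|| = r * sqrt((A^{-1})_{ii}).
    Floating-point sketch only; the paper's directed Cholesky
    factorization additionally controls rounding errors."""
    R = np.linalg.cholesky(A).T              # upper triangular, A = R^T R
    Rinv = np.linalg.inv(R)
    radius = r * np.linalg.norm(Rinv, axis=1)
    return -radius, radius
```

For a diagonal matrix A = diag(4, 1) and r = 1, this returns the box [-0.5, 0.5] x [-1, 1], as expected from the analytic formula.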

To rigorously account for the rounding errors involved in the computation of the interval hull, and to handle quadratic inequality constraints having uncertain coefficients, we define the concept of a directed Cholesky factorization and give two algorithms for computing one. Numerical tests show that even nearly singular problems can be handled successfully by our techniques.

We also discuss how a directed Cholesky factorization can be used for testing positive definiteness.

Some numerical tests are given in order to explore the features and limitations of the directed Cholesky factorization methods.

O148.

This paper presents basic features of a new family of algorithms for unconstrained derivative-free optimization, based on line searches along directions generated from QR factorizations of past direction matrices. Emphasis is on fast descent with a low number of function values, so that the algorithm can be used for fairly expensive functions. The theoretical total time overhead needed per function evaluation is of order O(n^2), where n is the problem dimension, but the observed overhead is much smaller.

Numerical results are given for a particular algorithm VXQR1 from this family, implemented in Matlab, and evaluated on the scalability test set of Herrera et al. for problems in dimensions 50, 100, 200, 500, and 1000.

Performance depends a lot on the graph of the function along line segments. The algorithm is typically very fast on smooth problems with not too rugged graphs, and on problems with a roughly separable structure. It typically performs poorly on problems where the graph along many directions is highly multimodal without pronounced overall slope (e.g., for smooth functions with superimposed oscillations of significant size), where the graphs along many directions are piecewise constant (e.g., for problems minimizing a maximum norm), or where the function overflows on the major part of the search region and no starting point with finite function value is known.

O147.

pdf file (175K)

In mixed integer programming, branching methods are a powerful and frequently employed tool. This paper presents a branching strategy for the case that the integer constraints are associated with a finite set of points in a possibly multidimensional space. We use the knowledge about this discrete set represented by its minimum spanning tree and find a splitting based on convex relaxation. Typical applications include design optimization problems where design points specifying several discrete choices can be considered as such discrete sets.

O146.

This paper considers constraint propagation methods for continuous constraint satisfaction problems consisting of linear and quadratic constraints. All methods can be applied after suitable preprocessing to arbitrary algebraic constraints. The basic new techniques consist in eliminating bilinear entries from a quadratic constraint, and solving the resulting separable quadratic constraints by means of a sequence of univariate quadratic problems. Care is taken to ensure that all methods correctly account for rounding errors in the computations.
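The univariate building block of this scheme can be illustrated by pruning a variable's interval with a single separable constraint a*x^2 + b*x <= u. This sketch handles only the convex case a > 0 and uses plain floating point; the paper's methods also treat the other sign cases and account for rounding errors:

```python
import math

def prune_univariate_quadratic(a, b, u, x_lo, x_hi):
    """Tighten the interval [x_lo, x_hi] using the constraint
    a*x**2 + b*x <= u with a > 0.  The feasible set is the interval
    between the two roots of a*x**2 + b*x - u; intersect it with the
    current box.  Returns the tightened interval, or None if the
    constraint proves infeasibility.  (Illustrative only: no outward
    rounding, and a <= 0 is not handled.)"""
    disc = b * b + 4.0 * a * u
    if disc < 0.0:                       # whole constraint infeasible
        return None
    root = math.sqrt(disc)
    lo = (-b - root) / (2.0 * a)
    hi = (-b + root) / (2.0 * a)
    new_lo, new_hi = max(x_lo, lo), min(x_hi, hi)
    return (new_lo, new_hi) if new_lo <= new_hi else None
```

For example, the constraint x^2 <= 4 prunes the box [-10, 1] to [-2, 1].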

O145.

In this paper, we examine sets of symbolic tools associated with modeling systems for mathematical programming which can be used to automatically detect the presence or lack of convexity and concavity in the objective and constraint functions. As a consequence, convexity of the feasible set may be assessed to some extent. The COCONUT solver system focuses on nonlinear global continuous optimization and possesses its own modeling language and data structures. The Dr.AMPL meta-solver aims to analyze nonlinear differentiable optimization models and hooks into the AMPL Solver Library [Gay02]. The symbolic analysis may be supplemented with a numerical disproving phase when the former returns inconclusive results. We report numerical results using these tools on sets of test problems for both global and local optimization.

O144.

Robust design optimization methods applied to real-life problems face some major difficulties: how to deal with the estimation of probability densities when data are sparse, how to cope with high-dimensional problems, and how to use valuable information in the form of unformalized expert knowledge.

In this paper we introduce in detail the clouds formalism as a means to process available uncertainty information reliably, even if it is limited in amount and possibly lacking a formal description, and to provide a worst-case analysis with confidence regions of relevant scenarios, which can be incorporated into an optimization problem formulation for robust design.

O143.

The task of autonomous and robust design cannot be regarded as a single task, but consists of two tasks that have to be accomplished concurrently. First, the design should be found autonomously; this requires a method which is able to find the optimal design choice automatically. Second, the design should be robust; in other words, the design should be safeguarded against uncertain perturbations. Traditional modeling of uncertainties faces several problems. The lack of knowledge about distributions of uncertain variables or about correlations between uncertain data typically leads to underestimation of error probabilities. Moreover, in higher dimensions the numerical computation of the error probabilities is very expensive, if not impossible, even given knowledge of the multivariate probability distributions. Based on the clouds formalism we have developed new methodologies to gather all available uncertainty information from expert engineers, process it into a reliable worst-case analysis, and finally optimize the design, seeking the optimal robust design.

SO142.

In many real-life situations, we do not know the probability distribution of measurement errors but only upper bounds on these errors. In such situations, once we know the measurement results, we can only conclude that the actual (unknown) value of a quantity belongs to some interval. Based on this interval uncertainty, we want to find the range of possible values of a desired function of the uncertain quantities. In general, computing this range is an NP-hard problem, but in a linear approximation, valid for small uncertainties, there is a linear time algorithm for computing the range. In other situations, we know an ellipsoid that contains the actual values. In this case, we also have a linear time algorithm for computing the range of a linear function. In some cases, however, we have a combination of interval and ellipsoid uncertainty. In this case, the actual values belong to the intersection of a box and an ellipsoid. In general, estimating the range over the intersection enables us to get a narrower range for the function. In this paper, we provide two algorithms for estimating the range of a linear function over an intersection in linear time: a simpler O(n log(n)) algorithm and a (somewhat more complex) linear time algorithm. Both algorithms can be extended to the l^p-case, when instead of an ellipsoid we have a set defined by a p-norm.
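For the special case of a centered box intersected with a ball, the maximization underlying such range computations can be sketched by bisection on a Lagrange-multiplier scaling. This is only a simple illustration (not the paper's O(n log n) or linear-time algorithms, which also handle general ellipsoids); the function name is ours:

```python
import numpy as np

def max_linear_over_box_and_ball(c, delta, r, iters=80):
    """max c.x subject to |x_i| <= delta_i and ||x||_2 <= r.
    The maximizer has the form x_i = clip(t*c_i, -delta_i, delta_i)
    for some t >= 0, and ||x(t)|| grows with t, so we either hit the
    box corner or bisect for the t with ||x(t)|| = r."""
    c = np.asarray(c, float)
    delta = np.asarray(delta, float)
    x_of = lambda t: np.clip(t * c, -delta, delta)
    corner = np.where(c > 0, delta, np.where(c < 0, -delta, 0.0))
    if np.linalg.norm(corner) <= r:      # ball constraint inactive
        return corner
    t_lo, t_hi = 0.0, np.max(delta[c != 0] / np.abs(c[c != 0]))
    for _ in range(iters):
        t = 0.5 * (t_lo + t_hi)
        if np.linalg.norm(x_of(t)) < r:
            t_lo = t
        else:
            t_hi = t
    return x_of(0.5 * (t_lo + t_hi))
```

With c = (1, 1), a large box, and r = 1, the maximum is sqrt(2) at x = (1/sqrt(2), 1/sqrt(2)); with a tight box the corner is returned directly.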

O141.

Robust design optimization methods applied to real-life problems face some major difficulties: how to deal with the estimation of probability densities when data are sparse, how to cope with high-dimensional problems, and how to use valuable information in the form of unformalized expert knowledge.

We introduce the clouds formalism as a means to process available uncertainty information reliably, even if it is limited in amount and possibly lacking a formal description. We provide a worst-case analysis with confidence regions of relevant scenarios, which can be incorporated into an optimization problem formulation for robust design.

O140.

Over the last few years, much research has been dedicated to the creation of decision support systems for space system engineers or even for completely automated design methods capturing the reasoning of system experts. However, the problem of taking into account the uncertainties of variables and models defining an optimal and robust spacecraft design has not been tackled effectively yet. This chapter proposes a novel, simple approach based on the clouds formalism to elicit and process the uncertainty information provided by expert designers and to incorporate this information into the automated search for a robust, optimal design.

O139.

The task of autonomous and robust design cannot be regarded as a single task, but consists of two tasks that have to be accomplished concurrently.

First, the design should be found autonomously; this requires a method which is able to find the optimal design choice automatically.

Second, the design should be robust; in other words, the design should be safeguarded against uncertain perturbations. Traditional modeling of uncertainties faces several problems. The lack of knowledge about distributions of uncertain variables or about correlations between uncertain data typically leads to underestimation of error probabilities. Moreover, in higher dimensions the numerical computation of the error probabilities is very expensive, if not impossible, even given knowledge of the multivariate probability distributions.

Based on the clouds formalism we have developed new methodologies to gather all available uncertainty information from expert engineers, process it into a reliable worst-case analysis, and finally optimize the design, seeking the optimal robust design.

The new methods are applied to problems for autonomous optimization in robust spacecraft system design at the European Space Agency (ESA).

O138.

The software package Snobfit for bound constrained (and soft constrained) noisy optimization of an expensive objective function is described. It combines global and local search by branching and local quadratic fits. The program is made robust and flexible for practical use by allowing for hidden constraints, batch function evaluations, change of search regions, etc.

O137.

Good scaling is an essential requirement for the good behavior of many numerical algorithms. In particular, for problems involving multivariate polynomials, a change of scale in one or more variables may have drastic effects on the robustness of subsequent calculations. This paper surveys scaling algorithms for systems of polynomials from the literature, and discusses some new ones, applicable to arbitrary polynomial constraint satisfaction problems.

AN136.

In many applications, partial differential equations depend on parameters which are only approximately known.

Using tools from functional analysis and global optimization, methods are presented for obtaining certificates for rigorous and realistic error bounds on the solution of linear elliptic partial differential equations in arbitrary domains, either in an energy norm, or of key functionals of the solutions, given an approximate solution.

Uncertainty in the parameters specifying the partial differential equations can be taken into account, either in a worst case setting, or given limited probabilistic information in terms of clouds.

O135.

In the last few years, much research has been dedicated to the development of decision support systems for space system engineers or even of completely automated design methods capturing the reasoning of system experts. However, the problem of taking into account the uncertainties of the variables and models to determine an optimal and robust spacecraft design has not been tackled effectively yet. Based on the clouds formalism we propose a novel approach to process the uncertainty information provided by expert designers and incorporate it into the automated search for a robust optimal design.

AO134.

This paper discusses what makes a computer-assisted proof trustworthy, the quest for an algorithmic support system for computer-assisted proofs, relations to global optimization, an analysis of some recent proofs, and some current challenges which appear to be amenable to a computer-assisted treatment.

C133.

The structure of graphs whose largest eigenvalue is bounded by (3/2) sqrt(2) (approx. 2.1213) is investigated. In particular, such a graph can have at most one circuit, and has a natural quipu structure.

OP132.

This paper presents new, simple arguments improving the lower bounds for the total energy and the minimal inter-particle distance in minimal energy atom cluster problems with interactions given by a Morse potential, where the atom separation problem is difficult due to the finite energy at zero atom separation. Apart from being sharper than previously known bounds, they also apply for a wider range rho>4.967 of the parameter in the Morse potential. Most results also hold for more general pair potentials.

NO131.

Linear systems whose coefficients have large uncertainties arise routinely in finite element calculations for structures with uncertain geometry, material properties, or loads. However, a true worst case analysis of the influence of such uncertainties was previously possible only for very small systems and uncertainties, or in special cases where the coefficients do not exhibit dependence.

This paper presents a method for computing rigorous bounds on the solution of such systems, with a computable overestimation factor that is frequently quite small. The merits of the new approach are demonstrated by computing realistic bounds for some large, uncertain truss structures, some leading to linear systems with over 5000 variables and over 10000 interval parameters, with excellent bounds for up to about 10% input uncertainty.

Also discussed are some counterexamples for the performance of traditional approximate methods for worst case uncertainty analysis.

O130.

New theorems of the alternative for polynomial constraints (based on the Positivstellensatz from real algebraic geometry) and for linear constraints (generalizing the transposition theorems of Motzkin and Tucker) are proved. Based on these, two Karush-John optimality conditions - holding without any constraint qualification - are proved for single- or multi-objective constrained optimization problems. The first condition applies to polynomial optimization problems only, and gives for the first time necessary and sufficient global optimality conditions for polynomial problems. The second condition applies to smooth local optimization problems and strengthens known local conditions. If some linear or concave constraints are present, the new version reduces the number of constraints for which a constraint qualification is needed to get the Kuhn-Tucker conditions.

N129.

A standard for the notation of the most used quantities and operators in interval analysis is proposed.

NO128.

A directed acyclic graph (DAG) representation of optimization problems represents each variable, each operation, and each constraint in the problem formulation by a node of the DAG, with edges representing the flow of the computation.

Using bounds on ranges of intermediate results, represented as weights on the nodes and a suitable mix of forward and backward evaluation, it is possible to give efficient implementations of interval evaluation and automatic differentiation. It is shown how to combine this with constraint propagation techniques to produce narrower interval derivatives and slopes than those provided by using only interval automatic differentiation preceded by constraint propagation.
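The forward/backward interplay on a single DAG node can be illustrated with intervals represented as pairs. This is a toy sketch for one addition node only; real implementations traverse the whole DAG and use outward rounding:

```python
def iadd(a, b):          # interval sum
    return (a[0] + b[0], a[1] + b[1])

def isub(a, b):          # interval difference
    return (a[0] - b[1], a[1] - b[0])

def iintersect(a, b):    # interval intersection, None if empty
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def forward_backward_add(x, y, target):
    """One forward/backward pass on the node  t = x + y  with the
    constraint  t in target:  the forward step narrows the range of
    t, the backward steps narrow x and y from t's narrowed range."""
    t = iintersect(iadd(x, y), target)       # forward evaluation
    if t is None:
        return None                          # constraint infeasible
    x = iintersect(x, isub(t, y))            # backward propagation
    y = iintersect(y, isub(t, x))
    return x, y, t
```

For x, y in [0, 5] and the constraint x + y in [0, 1], one pass already narrows both operands to [0, 1].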

The implementation is based on earlier work by Kolev on optimal slopes and by Bliek on backward slope evaluation. Care is taken to ensure that rounding errors are treated correctly.

Interval techniques are presented for computing from the DAG useful redundant constraints, in particular linear underestimators for the objective function, a constraint, or a Lagrangian.

The linear underestimators can be found either by slope computations, or by recursive backward underestimation.

For sufficiently sparse problems the work is proportional to the number of operations in the calculation of the objective function (resp. the Lagrangian).

P208.

pdf file (116K)

O126.

Results are reported of testing a number of existing state-of-the-art solvers for global constrained optimization and constraint satisfaction on a set of over 1000 test problems in up to 1000 variables, collected from the literature.

The test problems are available online in AMPL and were translated into the input formats of the various solvers using routines from the COCONUT environment. These translators are available online, too.

N125.

In this paper, we demonstrate how methods based on interval arithmetic and interval analysis can be used to achieve numerical certification of the kinematic calibration of parallel robots. After describing the usual calibration methods and the motivations for a numerical certification, we briefly present the interval methods we used and the kinematic calibration problem. In the main part, we develop our certified approach to this problem in the case of a Gough platform, and we show with numerical examples how this approach avoids the wrong solutions produced by the classical approach. Details on implementation and performance are also given.

NO124.

This survey covers the state of the art of techniques for solving general purpose constrained global optimization problems and continuous constraint satisfaction problems, with emphasis on complete techniques that provably find all solutions (if there are finitely many). The core of the material is presented in sufficient detail that the survey may serve as a text for teaching constrained global optimization.

After giving motivations for and important examples of applications of global optimization, a precise problem definition is given, and a general form of the traditional first order necessary conditions for a solution. Then more than a dozen software packages for complete global search are described.

A quick review of incomplete methods for bound constrained problems and recipes for their use in the constrained case follows; then an explicit example is discussed, introducing the main techniques used within branch and bound techniques. Sections on interval arithmetic, constraint propagation and local optimization are followed by a discussion of how to avoid the cluster problem. Then a discussion of important problem transformations follows, in particular of linear, convex, and semilinear (= mixed integer linear) relaxations that are important for handling larger problems.

Next, reliability issues - centering around rounding error handling and testing methodology - are discussed, and the COCONUT framework for the integration of the different techniques is introduced. A list of challenges facing the field in the near future concludes the survey.

NO123.

Branch and bound methods for finding all zeros of a nonlinear system of equations in a box frequently have the difficulty that subboxes containing no solution cannot be easily eliminated if there is a nearby zero outside the box. This has the effect that near each zero, many small boxes are created by repeated splitting, whose processing may dominate the total work spent on the global search.

This paper discusses the reasons for the occurrence of this so-called cluster effect, and how to reduce the cluster effect by defining exclusion regions around each zero found, that are guaranteed to contain no other zero and hence can safely be discarded.

Such exclusion regions are traditionally constructed using uniqueness tests based on the Krawczyk operator or the Kantorovich theorem. These results are reviewed; moreover, refinements are proved that significantly enlarge the size of the exclusion region. Existence and uniqueness tests are also given.
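In one dimension, a single Krawczyk step can be written out directly. This sketch ignores rounding errors and asks the caller for an enclosure of f' over the interval; if the result lies strictly inside the input interval, that interval contains a unique zero (the paper's refinements then enlarge the exclusion region around it):

```python
def krawczyk_step(fm, dfm, dlo, dhi, x_lo, x_hi):
    """One 1-D Krawczyk step.  fm = f(m) and dfm = f'(m) at the
    midpoint m of [x_lo, x_hi]; [dlo, dhi] encloses f' over the whole
    interval.  Returns an enclosure K = m - C*f(m) + (1 - C*f'(X))*(X - m)
    with preconditioner C = 1/f'(m) (assumed positive here); K strictly
    inside [x_lo, x_hi] proves existence and uniqueness of a zero.
    Rounding errors are ignored in this sketch."""
    m = 0.5 * (x_lo + x_hi)
    C = 1.0 / dfm
    r = 0.5 * (x_hi - x_lo)                  # radius of X - m = [-r, r]
    slo, shi = 1.0 - C * dhi, 1.0 - C * dlo  # interval 1 - C*f'(X), C > 0
    w = max(abs(slo), abs(shi)) * r          # magnitude of the remainder
    center = m - C * fm
    return center - w, center + w
```

For f(x) = x^2 - 2 on [1.3, 1.5] (so f'(X) = 2X is enclosed by [2.6, 3.0]), the step returns an interval strictly inside [1.3, 1.5] that contains sqrt(2), certifying the zero.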

SO122.

Clouds are a concept for uncertainty mediating between the concept of a fuzzy set and that of a probability distribution. A cloud is to a random variable more or less what an interval is to a number. We discuss the basic theoretical and numerical properties of clouds, and relate them to histograms, cumulative distribution functions, and likelihood ratios.

We show how to compute nonlinear transformations of clouds, using global optimization and constraint satisfaction techniques. We also show how to compute rigorous enclosures for the expectation of arbitrary functions of random variables, and for probabilities of arbitrary statements involving random variables, even for problems involving more than a few variables.

Finally, we relate clouds to concepts from fuzzy set theory, in particular to the consistent possibility and necessity measures of Jamison and Lodwick.

C121.

A new inequality for the parameters of distance-regular graphs is proved. It implies that if there are infinitely many distance-regular graphs with fixed lambda, mu, a_i and c_i containing an induced quadrangle then necessarily c_{i+1} >= 1+(mu -1)c_i. As the dual polar graphs show, this inequality is best possible. Some related results are also discussed.

O120.

An algorithm is presented that produces an integer vector nearly parallel to a given vector. The algorithm can be used to discover exact rational solutions of homogeneous or inhomogeneous linear systems of equations, given a sufficiently accurate approximate solution.
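A naive version of the idea can be sketched with bounded-denominator rational rounding from the Python standard library. This is only a stand-in for the paper's algorithm, which is more robust (the function name and the choice of reference component are ours):

```python
from fractions import Fraction
from math import lcm

def nearly_parallel_integer_vector(v, max_den=10**6):
    """Round the ratios v_i / v_j (with j the index of the largest
    entry) to rationals with denominator at most max_den, then clear
    denominators to obtain an integer vector nearly parallel to v."""
    j = max(range(len(v)), key=lambda i: abs(v[i]))
    fracs = [Fraction(x / v[j]).limit_denominator(max_den) for x in v]
    d = lcm(*(f.denominator for f in fracs))
    return [int(f * d) for f in fracs]
```

For an approximate solution like (0.3333333333, 0.6666666667, 1.0) this recovers the exact rational direction (1, 2, 3).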

As an application, we show how to verify rigorously the feasibility of degenerate vertices of a linear program with integer coefficients, and how to recognize rigorously certain redundant linear constraints in a given system of linear equations and inequalities. This is a first step towards the handling of degeneracies and redundancies within rigorous global optimization codes.

NO119.

Current mixed-integer linear programming solvers are based on linear programming routines that use floating point arithmetic. Occasionally, this leads to wrong solutions, even for problems where all coefficients and all solution components are small integers. An example is given where many state-of-the-art MIP solvers fail. It is then shown how, using directed rounding and interval arithmetic, cheap pre- and postprocessing of the linear programs arising in a branch-and-cut framework can guarantee that no solution is lost, at least for mixed-integer programs in which all variables can be bounded rigorously by bounds of reasonable size.

P118.

NO117.

We present a short overview of the modeling language NOP-2 for specifying general optimization problems, including constrained local or global nonlinear programs and constrained single and multistage stochastic programs. The proposed language is specifically designed to represent the internal (separable and repetitive) structure of the problem.

N116.

Some notes on mathematical modeling, listing motivations, applications, a numerical toolkit, general modeling rules, modeling conflicts, useful attitudes, and structuring the modeling work into 16 related activities by means of a novel modeling diagram.

O115.

A benchmarking suite describing over 1000 optimization problems and constraint satisfaction problems covering problems from different traditions is described, annotated with best known solutions, and accompanied by recommended benchmarking protocols for comparing test results.

PN113.

PS112.

N111.

Lagrange interpolation and partial fraction expansion can be used to derive a Gerschgorin-type theorem that gives simple and powerful a posteriori error bounds for the zeros of a polynomial if approximations to all zeros are available. Compared to bounds from a corresponding eigenvalue problem, a factor of at least two is gained.

The accuracy of the bounds is analyzed, and special attention is given to ensure that the bounds work well not only for single zeros but also for multiple zeros and clusters of close zeros.

A Rouché-type theorem is also given, that in many cases reduces the bound even further.
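For a monic polynomial with approximations z_1, ..., z_n to its zeros, a classical a posteriori bound of this flavor (the Weierstrass-correction disks, weaker than the paper's bounds by the factor mentioned above) places every zero in the union of disks of radius n*|p(z_j)| / |prod_{k != j} (z_j - z_k)| around the z_j. A sketch, with names of our choosing:

```python
import numpy as np

def weierstrass_disks(coeffs, approx):
    """Radii of a posteriori error disks around the approximations
    `approx` to the zeros of the monic polynomial with coefficient
    list `coeffs` (highest degree first):  all zeros lie in the
    union of disks |z - z_j| <= n*|p(z_j)| / |prod_{k!=j}(z_j - z_k)|.
    Classical Weierstrass-type bound; the paper's bounds are sharper."""
    n = len(approx)
    radii = []
    for j, z in enumerate(approx):
        pz = np.polyval(coeffs, z)
        denom = np.prod([z - approx[k] for k in range(n) if k != j])
        radii.append(n * abs(pz) / abs(denom))
    return radii
```

For p(z) = z^2 - 2 with the approximations 1.414 and -1.414, the disk around 1.414 has radius about 4.3e-4 and indeed contains sqrt(2).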

O110.

For constrained smooth or nonsmooth optimization problems, new continuously differentiable penalty functions are derived. They are proved exact in the sense that under some nondegeneracy assumption, local optimizers of a nonlinear program are precisely the optimizers of the associated penalty function. This is achieved by augmenting the dimension of the program by a variable that controls both the weight of the penalty terms and the regularization of the nonsmooth terms.

NO207.

NOS109.

This paper presents a new approach to fuzzy modeling based on the concept of surprise. The new concept is related to the traditional membership function by an antitone transformation. Advantages of the surprise approach include:

1. It has a consistent semantic interpretation.

2. It allows the joint use of quantitative and qualitative knowledge, using simple rules of logic.

3. It is a direct extension of (and allows combination with) the least squares approach to reconciling conflicting approximate numerical data.

4. It is ideally suited to optimization under imprecise or conflicting goals, specified by a combination of soft and hard interval constraints.

5. It gives a straightforward approach to constructing families of functions consistent with fuzzy associative memories as used in fuzzy control, with tuning parameters (reflecting linguistic ambiguity) that can be adapted to available performance data.

O108.

A family of multivariate rational functions is constructed. It has strong local minimizers with prescribed function values at prescribed positions. While there might be additional local minima, such minima cannot be global. Another family of multivariate rational functions is given, having prescribed global minimizers and prescribed interpolating data.

NS107.

A method for incorporating uncertainty in terrain modelling by expressing elevations as fuzzy numbers is proposed. Given a finite set of fuzzy elevations representative of the topographic surface in a certain region, we develop methods to construct surfaces that incorporate the uncertainty. The smoothness and continuity conditions of the surface generating method are maintained. Using this approach, we generalize some classic interpolators and compare them qualitatively. Extensions to wider classes of interpolators follow naturally from our approach. A numerical example is presented to illustrate this idea.

OP106.

C.A. Meyer, C.A. Floudas, and A. Neumaier,
**Global optimization with non-factorable constraints**,
Industrial and Engineering Chemistry Research,
25 (2002), 6413-6424.

*ps.gz file (163K)*,
*pdf file (407K)*,
*downloading/printing problems?*

N105.

This review is a response to recent discussions on the reliable computing mailing list, and to continuing uncertainties about the properties and merits of Taylor forms, multivariate higher degree generalizations of centered forms. They were invented around 1980 by Lanford, documented in detail in 1984 by Eckmann, Koch and Wittwer, and independently studied and popularized since 1996 by Berz, Makino and Hoefkens. A highlight is their application to the verified integration of asteroid dynamics in the solar system in 2001.

Apart from summarizing what Taylor forms are and do, this review puts them into the perspective of more traditional methods, in particular centered forms, discusses the major applications, and analyzes some of their elementary properties. Particular emphasis is given to overestimation properties and the wrapping effect. A deliberate attempt has been made to offer value statements with appropriate justifications; but all opinions given are my own and might be controversial.

P104.

N103.

This paper contains a list of 'grand challenge' problems in interval analysis, together with some remarks on improved interaction with mainstream mathematics and on raising scientific standards in interval analysis.

NO102.

This research focuses on the methods and application of optimization under uncertainty to radiation therapy planning, where it is natural and useful to model the uncertainty of the problem directly. In particular, we present methods of optimization under uncertainty in radiation therapy of tumors and compare their results. Two themes are developed in this study: (1) the modeling of the inherent uncertainty of the problems and (2) the application of uncertainty optimization.

O101.

An enhanced version NOP-2 of the NOP language for specifying global optimization problems is described. Because of its additional features, NOP-2 is comparable to other modeling languages like AMPL and GAMS, and allows the user to define a wide range of problems arising in real-life applications, such as constrained (and even stochastic) global optimization programs.

NOP-2 permits named variables, parameters, indexing, loops, relational operators, extensive set operations, matrices and tensors, and parameter arithmetic.

The main advantage that makes NOP-2 look and feel considerably different from other modeling languages is the display of the internal analytic structure of the problem. It is fully flexible for interfacing with solvers requiring special features such as automatic differentiation or interval arithmetic.

NS99.

Dynamical characteristics of a complex system can often be inferred from analyses of a stochastic time series model fitted to observations of the system. Oscillations in geophysical systems, for example, are sometimes characterized by principal oscillation patterns, eigenmodes of estimated autoregressive (AR) models of first order. This paper describes the estimation of eigenmodes of AR models of arbitrary order. AR processes of any order can be decomposed into eigenmodes with characteristic oscillation periods, damping times, and excitations. Estimated eigenmodes and confidence intervals for the eigenmodes and their oscillation periods and damping times can be computed from estimated model parameters. As a computationally efficient method of estimating the parameters of AR models from high-dimensional data, a stepwise least squares algorithm is proposed. This algorithm computes model coefficients and evaluates criteria for the selection of the model order stepwise for AR models of successively decreasing order. Numerical simulations indicate that, with the least squares algorithm, the AR model coefficients and the eigenmodes derived from the coefficients are estimated reliably and that the approximate 95% confidence intervals for the coefficients and eigenmodes are rough approximations of the confidence intervals inferred from the simulations.
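
As a minimal illustration of the eigendecomposition for the first-order case, the sketch below fits an AR(1) model by plain least squares and converts the eigenvalues of the estimated coefficient matrix into oscillation periods and damping times. The coefficient matrix, noise level, and sample size are illustrative assumptions, not data from the paper, and this is ordinary least squares rather than the stepwise algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative AR(1) process x_{t+1} = A x_t + noise; the complex
# eigenvalues 0.7 +/- 0.4i of A correspond to a damped oscillation
A_true = np.array([[0.7, -0.4],
                   [0.4,  0.7]])
n = 5000
x = np.zeros((n, 2))
for t in range(n - 1):
    x[t + 1] = A_true @ x[t] + 0.1 * rng.standard_normal(2)

# least squares fit of the coefficient matrix: X1 ~ X0 A^T
X0, X1 = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X0, X1, rcond=None)[0].T

# eigenmodes: an eigenvalue lam gives damping time -1/log|lam| and
# oscillation period 2*pi/|arg(lam)| (in units of the sampling interval)
lam = np.linalg.eigvals(A_hat)
damping_times = -1.0 / np.log(np.abs(lam))
periods = 2 * np.pi / np.abs(np.angle(lam))
```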

NS98.

ARfit is a collection of Matlab modules for modeling and analyzing multivariate time series with autoregressive (AR) models. ARfit contains modules for fitting AR models to given time series data, for analyzing eigenmodes of a fitted model, and for simulating AR processes. ARfit estimates the parameters of AR models from given time series data with a stepwise least squares algorithm that is computationally efficient, in particular when the data are high-dimensional. ARfit modules construct approximate confidence intervals for the estimated parameters and compute statistics with which the adequacy of a fitted model can be assessed. Dynamical characteristics of the modeled time series can be examined by means of a decomposition of a fitted AR model into eigenmodes and associated oscillation periods, damping times, and excitations. The ARfit module that performs the eigendecomposition of a fitted model also constructs approximate confidence intervals for the eigenmodes and their oscillation periods and damping times.

PN97.

AN96.

The Lyapunov-Schmidt reduction is generalized to the case of imperfect singularities. The results presented need neither very precise information about the location of the (near) singularities nor precise knowledge of the (near) null spaces.

NS95.

A new approach for the analysis of cranial bone thickness is introduced. A semiautomatic algorithm detects a multitude of thicknesses from the CT data of the investigated bones. Every bone is characterized by its own distribution pattern of cranial thickness, which is then analyzed statistically.

LN94.

Computational fixed point theorems can be used to automatically verify existence and uniqueness of a solution to a nonlinear system of n equations in n variables ranging within a given region of n-space. But such computations succeed only when the Jacobi matrix is nonsingular everywhere in this region. However, in many practical problems, the Jacobi matrix is singular, or nearly so, at the solution. In such cases, tiny perturbations of the problem result in problems either with no solution in the region, or with more than one; thus no general computational technique can prove existence and uniqueness. However, for systems in n complex variables, the multiplicity of such a solution can be verified.

Such verification is possible by computation of the topological degree, but such computations previously required a global search on the (n-1)-dimensional boundary of an n-dimensional region. Here, it is observed that preconditioning leads to a system of equations whose topological degree can be computed with a much lower-dimensional search. Formulas are given for this computation, and the special case of rank-defect one is studied, both theoretically and empirically.

LN93.

A recent method by Shary for enclosing the solution set of a system of linear interval equations is derived in a new way. It is shown that the method converges to the fixed-point inverse, and that it has finite termination with probability 1.

O92.

Inspired by the DIRECT method by Jones, Perttunen and Stuckman, we present a global optimization algorithm based on multilevel coordinate search. It is guaranteed to converge if the function is continuous in the neighborhood of a global minimizer. By starting a local search from certain good points, an improved convergence result is obtained. We discuss implementation details and give some numerical results.

LN91.

Recently, Ning and Kearfott derived a formula for the interval enclosure of the solution set of linear systems of interval equations in the case when the coefficient matrix is an H-matrix. The enclosure is optimal when the midpoint matrix is diagonal, and when the midpoint is the identity, it reduces to the Hansen-Bliek-Rohn method for enclosing preconditioned systems. An elementary proof of this formula is given using only simple properties of H-matrices and Schur complements. The new proof gives additional insight into why the theorem is true. It is also shown how to preserve rigor in the enclosure when finite precision arithmetic is used.
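
The enclosure formula itself fits in a few lines. The sketch below is an illustration only: it works in plain floating point (a rigorous code would round the endpoint computations outward) and assumes the comparison matrix is an M-matrix with positive diagonal intervals; the test system is an illustrative assumption:

```python
import numpy as np

def nk_enclosure(Alo, Ahi, blo, bhi):
    """Ning-Kearfott (Hansen-Bliek-Rohn) enclosure of the solution set of
    the interval linear system [A]x = [b].  Assumes the comparison matrix
    <A> is an M-matrix with positive diagonal intervals.  Plain floating
    point throughout; a rigorous version needs directed rounding."""
    n = len(blo)
    mag = np.maximum(np.abs(Alo), np.abs(Ahi))     # |[a]| componentwise
    C = -mag                                       # comparison matrix <A>
    np.fill_diagonal(C, np.diag(Alo))              # diagonal mignitude (> 0)
    Cinv = np.linalg.inv(C)
    bmag = np.maximum(np.abs(blo), np.abs(bhi))
    u = Cinv @ bmag
    d = np.diag(Cinv)
    alpha = np.diag(C) - 1.0 / d                   # alpha, beta are >= 0
    beta = u / d - bmag
    xlo, xhi = np.empty(n), np.empty(n)
    for i in range(n):
        num = (blo[i] - beta[i], bhi[i] + beta[i])           # [b_i] + beta_i [-1,1]
        den = (Alo[i, i] - alpha[i], Ahi[i, i] + alpha[i])   # positive interval
        q = [p / r for p in num for r in den]
        xlo[i], xhi[i] = min(q), max(q)
    return xlo, xhi

# illustrative example (diagonally dominant, hence an H-matrix)
Alo = np.array([[4.0, -1.0], [0.0, 4.0]])
Ahi = np.array([[5.0,  1.0], [1.0, 6.0]])
blo, bhi = np.array([1.0, -1.0]), np.array([2.0, 1.0])
xlo, xhi = nk_enclosure(Alo, Ahi, blo, bhi)
```

A quick sanity check is that the solutions of all corner systems (every combination of entry endpoints) lie inside the computed box.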

OPS90.

O89.

C.S. Adjiman, I.P. Androulakis and C.A. Floudas, A global optimization method, alphaBB, for general twice-differentiable constrained NLPs - II. Implementation and computational results, Computers and Chemical Engineering 22 (1998), 1159-1179.

A deterministic global optimization method called alphaBB is presented. It finds the global minimizer of constrained nonlinear programs with twice-differentiable objective and constraint functions and finite bounds on the variables. Based on a branch and bound strategy and convex relaxed subproblems, the minimizer is found with a guarantee, apart from the influence of roundoff errors and the possibility of work space overflow if too many subproblems need to be considered.

The convex underestimators are generated optimally for bilinear and trilinear terms, whereas for general functions, convexity is achieved by adding a suitable separable quadratic function. The coefficients of this function are calculated using interval arithmetic; a number of different schemes for finding these coefficients are discussed and compared.

LNS88.

SIAM Review 40 (1998), 636-666.

It is shown that the basic regularization procedures for finding meaningful approximate solutions of ill-conditioned or singular linear systems can be phrased and analyzed in terms of classical linear algebra that can be taught in any numerical analysis course. Apart from rewriting many known results in a more elegant form, we also derive a new two-parameter family of merit functions for the determination of the regularization parameter. The traditional merit functions from generalized cross validation (GCV) and generalized maximum likelihood (GML) are recovered as special cases.
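
A toy sketch of this classical-linear-algebra view: Tikhonov regularization via the SVD, with the GCV merit function used to choose the regularization parameter. The test problem (Gaussian kernel matrix, noise level, parameter grid) is an illustrative assumption, and the paper's two-parameter merit family is not implemented here:

```python
import numpy as np

rng = np.random.default_rng(1)

# ill-conditioned test problem: smooth Gaussian kernel, noisy right-hand side
n = 40
t = np.linspace(0, 1, n)
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.01)
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)
c = U.T @ b

def tikhonov(lmb):
    # filtered SVD solution: x = V diag(s / (s^2 + lmb)) U^T b
    return Vt.T @ (s / (s ** 2 + lmb) * c)

def gcv(lmb):
    # generalized cross validation merit function
    h = s ** 2 / (s ** 2 + lmb)          # filter factors
    return np.sum(((1 - h) * c) ** 2) / (n - h.sum()) ** 2

lmbs = np.logspace(-12, 0, 61)
lmb_gcv = lmbs[np.argmin([gcv(l) for l in lmbs])]
x_reg = tikhonov(lmb_gcv)
```

With essentially no regularization the noise is amplified by the tiny singular values, while the GCV-chosen parameter keeps the reconstruction close to the true solution.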

O205.

NOS87.

shortened published version of the following manuscript:

This paper surveys the theoretical and computational development of the restricted maximum likelihood (REML) approach for the estimation of covariance matrices in linear stochastic models.

A new derivation of this approach is given, valid under very weak conditions on the noise.

Then the calculation of the gradient of restricted loglikelihood functions is discussed, with special emphasis on the case of large and sparse model equations with a large number of unknown covariance components and possibly incomplete data. It turns out that the gradient calculations require hardly any extra storage, and only a small multiple of the number of operations needed to calculate the function values alone.

The analytic gradient procedure was integrated into the VCE package for covariance component estimation in large animal breeding models. It resulted in dramatic improvements of performance over the previous implementation with finite difference gradients. An example with more than 250 000 normal equations and 55 covariance components took hours instead of days of CPU time, and this was not an untypical case.

NOP85.

LN84.

We introduce structural condition numbers and discuss their significance for the proper scaling of nonsymmetric and symmetric matrices.

O83.

GLOPT is a Fortran77 program for global minimization of a block-separable objective function subject to bound constraints and block-separable constraints. It finds a nearly globally optimal point that is near a true local minimizer. Unless there are several local minimizers that are nearly global, we thus find a good approximation to the global minimizer.

GLOPT uses a branch and bound technique to split the problem recursively into subproblems that are either eliminated or reduced in their size. This is done by an extensive use of the block separable structure of the optimization problem.

In this paper we discuss a new reduction technique for boxes and new ways for generating feasible points of constrained nonlinear programs. These are implemented as the first stage of our GLOPT project. The current implementation of GLOPT uses neither derivatives nor simultaneous information about several constraints. Numerical results are already encouraging. Work on an extension using curvature information and quadratic programming techniques is in progress.

O82.

This paper defines a compact format for specifying general constrained nonlinear optimization problems. The proposed format is a nonlinear analogue of an explicit representation of sparse matrices by means of index lists and values of the corresponding matrix entries. Thus the format abstracts from the meaning of the problem and hence does not allow names for variables or parameters, but it explicitly displays the internal structure of the problem. This is a very useful feature for global or large scale local optimization.

O80.

This paper presents a new approach to the sufficient conditions of nonlinear programming. The main result is a sufficient condition for the global optimality of a Kuhn-Tucker point. This condition can be verified constructively, using a novel convexity test based on interval analysis, and is guaranteed to prove global optimality of strong local minimizers for sufficiently narrow bounds. Hence it is expected to be a useful tool within branch and bound algorithms for global optimization.

NO79/86.

For publication, the paper was split into two parts:

- PET regularization by envelope guided conjugate gradients, IEEE Trans. Medical Imag. 15 (1996), 385-389.
- Regularization of ill-posed problems by envelope guided conjugate gradients, J. Comput. Graph. Stat. 6 (1997), 451-463.

LN78.

This paper discusses a method for determining a good pivoting sequence for Gaussian elimination, based on an algorithm for solving assignment problems. The worst case complexity is O(n^3), while in practice O(n^{2.25}) operations are sufficient.
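
The underlying assignment problem can be illustrated in a brute-force form: pick one pivot per row and column so that the product of pivot magnitudes is maximal. The matrix below is an illustrative assumption, and a real implementation would use an O(n^3) assignment algorithm rather than enumerating permutations:

```python
import itertools

def best_transversal(A):
    """Choose a pivot column for each row (a perfect matching) maximizing
    the product of pivot magnitudes.  Brute force over permutations, so
    only usable for tiny n; assignment algorithms do this in O(n^3)."""
    n = len(A)
    best, best_prod = None, -1.0
    for perm in itertools.permutations(range(n)):
        p = 1.0
        for i, j in enumerate(perm):
            p *= abs(A[i][j])
        if p > best_prod:
            best_prod, best = p, perm
    return best, best_prod

# illustrative matrix: the naive diagonal has tiny pivots,
# but columns (1, 0, 2) give large pivots 3, 2, 5
A = [[0.1, 3.0, 0.2],
     [2.0, 0.1, 0.1],
     [0.3, 0.2, 5.0]]
perm, prod = best_transversal(A)
```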

COS77.

Screening experiments aim at identifying the relevant variables in a process or experimental outcome that potentially depends on a large number of variables. In this paper we introduce a new class of experimental designs called edge designs. These designs are very useful for screening experiments since they allow a model-independent estimate of the set of relevant variables, thus providing more robustness than traditional designs.

We give a bound on the determinant of the information matrix of certain edge designs, and show that a large class of edge designs meeting this bound can be constructed from conference matrices. We also show that the resulting conference designs have an optimal space exploration property which is important as a guard against unexpected nonlinearities. We survey the existence of and constructions for conference matrices, and give, for n<50 variables, explicit such matrices when n is a prime, and references to explicit constructions otherwise.

O81.

The optimization of noisy or not exactly known functions is a common problem occurring in various applications, as for instance in experimental optimization. The traditional tool for the treatment of such problems is the method of Nelder-Mead (NM). In this paper an alternative method based on a trust region approach (TR) is offered and compared to Nelder-Mead. On the standard collection of test functions for unconstrained optimization by Moré et al., TR performs substantially more robustly than NM. If performance is measured by the number of function evaluations, TR is on average twice as fast as NM.

O76.

The optimization of noisy functions is a common problem occurring in various applications. In this paper, a new approach is presented for low-dimensional bound constrained problems, based on the use of quadratic models and a restriction of the evaluation points to successively refined grids. In the noiseless case, global convergence of the algorithm to a stationary point is proved; in the noisy case a bound for the limit accuracy is derived.

An extensive numerical comparison with two widely used methods - a quasi-Newton method and the simplex method of Nelder and Mead - performed on a standard collection of test problems, shows that the new algorithm is comparable with quasi-Newton in the noiseless case, and is much more robust than Nelder-Mead in the noisy case. If performance is measured solely by the number of function evaluations needed to achieve a prescribed reduction of the difference to the minimal function value (as for instance in the optimization of experiments), the new algorithm is also significantly faster than Nelder-Mead.

C75.

Main result: If the smallest eigenvalue of a graph H exceeds a fixed number larger than the smallest root (approx. -2.4812) of the polynomial x^3+2x^2-2x-2, and if every vertex of H has sufficiently large valency, then the smallest eigenvalue of H is at least -1-sqrt{2} and the structure of H is completely characterized through a new generalization of line graphs.

NP73.

N72.

It is shown how interval analysis can be used to calculate rigorously valid enclosures of solutions to initial value problems for ordinary differential equations. In contrast to previously known methods, the enclosures obtained are valid over larger time intervals, and for uniformly dissipative systems even globally.

This paper discusses the underlying theory; main tools are logarithmic norms and differential inequalities.
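
For the Euclidean norm, the logarithmic norm used in such enclosures is the largest eigenvalue of the symmetric part of the matrix, and it bounds solution growth of x' = Ax by ||x(t)|| <= exp(mu(A) t) ||x(0)||. A minimal sketch; the example matrix is an illustrative assumption, unrelated to the paper:

```python
import numpy as np

def lognorm2(A):
    """Logarithmic norm of A for the Euclidean norm:
    mu(A) = largest eigenvalue of (A + A^T) / 2."""
    return np.linalg.eigvalsh((A + A.T) / 2.0).max()

# example: all eigenvalues of A are -1, yet mu(A) = 1 > 0 --
# the logarithmic norm captures transient growth that eigenvalues miss
A = np.array([[-1.0, 4.0],
              [0.0, -1.0]])
mu = lognorm2(A)
```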

N70.

The wrapping effect is one of the main reasons that the application of interval arithmetic to the enclosure of dynamical systems is difficult. In this paper the source of wrapping is analyzed algebraically and geometrically. A new method for reducing the wrapping effect is proposed, based on an interval ellipsoid arithmetic.

Applications are given to the verification of stability regions for nonlinear discrete dynamical systems and to the computation of rigorous confidence regions for nonlinear functions of normally distributed random vectors.
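
The wrapping effect itself is easy to demonstrate: iterating a pure rotation on an axis-aligned box and re-wrapping the image into a box after every step inflates the enclosure by a factor sqrt(2) per iteration, although the exact image never grows. A minimal sketch (the rotation angle and iteration count are illustrative assumptions):

```python
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def wrap_step(rad):
    """Radii of the smallest axis-aligned box containing the image of
    the box [-rad, rad]^2 under the linear map R: new radii = |R| @ rad."""
    return np.abs(R) @ rad

rad = np.array([1.0, 1.0])
for _ in range(10):
    rad = wrap_step(rad)
# the exact image of the unit box under R^10 is again a unit box
# (R is a rotation), but the wrapped radii have grown to 2^5 = 32
```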

N54.

pdf file (scanned copy) (2909K)

P971.

Insight Article

This article explains how the widespread but misleading informal practice of treating virtual particles as real objects popping in and out of existence for a tiny time (when explaining subatomic particles to the general public) could arise. This is done using the example of Steve Carlip's page on Hawking radiation, where Carlip, a well-known theoretical physicist working on quantum gravity, gave a lucid but completely mythical narrative about how vacuum fluctuations create Hawking radiation.

P972.

Insight Article

This article goes through a number of Wikipedia pages and comments on their misleading statements about virtual particles and Feynman diagrams. In addition, the article discusses some textbooks on quantum field theory and the extent to which they contain problematic formulations about virtual particles.

P973.

Insight Article

In discussions on the internet (including a number of Wikipedia pages) and in books and articles for non-experts in particle physics, there is considerable confusion about various notions around the concept of particles of subatomic size, and in particular about the notion of a virtual particle. This is partly due to misunderstandings in the terminology used, and partly due to the fact that subatomic particles manifest themselves only indirectly, thus leaving a lot of leeway for the imagination to equip these invisible particles with properties, some of which sound very magical.

The aim of this Insight article is a definition of physical terms essential for an informed discussion of which of these properties have a solid basis in physics, and which of these are gross misconceptions or exaggerations that shouldn't be taken seriously.

P974.

Insight Article

Relativistic quantum field theory is notorious for the occurrence of divergent expressions that must be renormalized by recipes that on first sight sound very arbitrary and counterintuitive. This Insight article shows that it doesn't have to be this way!

P975.

pdf file (385K)

Renormalization is an indispensable tool for modern theoretical physics. At the same time, it is one of the least appealing techniques, especially in cases where naive formulations result in divergences that must be cured - a step that is often done in a mathematically dubious way.

In this paper, it is shown how the renormalization procedure works both in singular cases where it removes naive divergences and in regular cases where a naive approach is possible but renormalization improves the quality of perturbation theory. In fact, one can see immediately that the singular situation is simply a limiting case of the regular situation.

The paper introduces three families of toy examples, defined by special perturbations of an arbitrary Hamiltonian with a discrete spectrum. The examples show explicitly many of the renormalization effects arising in realistic quantum field theories such as quantum chromodynamics: logarithmic divergences, running couplings, asymptotic freedom, dimensional transmutation, the renormalization group, and renormalization scheme dependent results at any order of perturbation theory.

Unlike in more realistic theories, everything is derived rigorously and nonperturbatively in terms of simple explicit formulas. Thus one can understand in detail how the infinities arise (whenever they arise) - namely as an unphysical infinitely sensitive dependence on the bare coupling constants. One also sees that all spurious infinities are cured automatically by the same renormalization process that gives robust physical results in the case where no infinities occur.

PS976.

pdf file (193K)

Realistic equations of state valid in the whole state space of a multi-component mixture should satisfy at least three important constraints:

(i) The Gibbs phase rule holds.

(ii) At low densities, one can deduce a virial equation of state with the correct multicomponent structure.

(iii) Close to critical points, plait points, and consolute points, the correct universality and scaling behavior is guaranteed.

This paper discusses semiempirical equations of state for mixtures that express the pressure as an explicit function of temperature and the chemical potentials. In the first part, expressions are derived for the most important thermodynamic quantities. The main result of the second part is the construction of a large family of equations of state with the properties (i)--(iii).

N977.

pdf file (244K)

This paper serves as background information for the Vienna proposal for interval standardization, explaining what is needed in practice to make competent use of the interval arithmetic provided by an implementation of the standard to be.

Discussed are methods to improve the quality of interval enclosures of the range of a function over a box, considerations of possible hardware support facilitating the implementation of such methods, and the results of a simple interval challenge that I had posed to the reliable computing mailing list on November 26, 2008.

Also given is an example of a bound constrained global optimization problem in 4 variables that has a 2-dimensional continuum of global minimizers. This makes standard branch and bound codes extremely slow, and therefore may serve as a useful degenerate test problem.

N978.

pdf file (1193K)

In this paper we propose a fast and accurate method for computing a regularized low rank approximation of a weighted data set.

Weighted data sets are obtained whenever the data points are collected with different levels of accuracy, or from different sources, etc. For example, consider the case when each data point is obtained as a sample mean of a certain number of observations, where the number of observations per data point is unequal. Even data sets with missing data points are a special case of a weighted data set.

Unlike the non-weighted case, the optimization problem posed to obtain a low rank approximation for weighted data has local minima. To alleviate the problem with local minima, and consequently to obtain a meaningful solution, we use a priori information about the data set. Thereby we obtain a regularized solution. In particular, we use the a priori information that the low rank approximants are smooth. We use an iterative method to estimate left and right approximants. Within each iteration we estimate the right (left) approximants by posing a constrained weighted least squares problem. Unfortunately, the variables of the quadratic optimization problem are not separable. We exploit the sparsity of the associated matrices to design a tractable algorithm.

The proposed method has potential uses in various applied problems, e.g., 3D image reconstruction, background correction in 2D chromatography, and background correction of astronomical images. We illustrate the proposed method by reconstructing the background of an astronomical image observed in the optical wavelength range. The algorithm is fast, i.e., the number of flops per iteration is of the order of the number of data points, and the memory requirement is negligible compared with the input data storage. We provide extensive numerical results to highlight different features of the method.
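
The alternating weighted least squares idea can be sketched for the rank-1 case: with one factor fixed, each entry of the other factor solves a scalar weighted least squares problem, so the update is separable. This omits the paper's smoothness regularization and sparsity handling; the data, weights, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# rank-1 data observed with positive per-entry weights (inverse variances)
m, n = 30, 20
u_true = rng.standard_normal(m)
v_true = rng.standard_normal(n)
W = rng.uniform(0.2, 1.0, (m, n))
X = np.outer(u_true, v_true) + 0.01 * rng.standard_normal((m, n)) / np.sqrt(W)

# alternating weighted least squares for X ~ u v^T:
# with u fixed, v_j = sum_i W_ij X_ij u_i / sum_i W_ij u_i^2, and vice versa
u = rng.standard_normal(m)
for _ in range(50):
    v = (W * X).T @ u / (W.T @ u ** 2)
    u = (W * X) @ v / (W @ v ** 2)
approx = np.outer(u, v)
```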

F979.

pdf file (396K)

A semantic Turing machine (STM) is a variant of a programmable register machine that combines the transparency and simplicity of the action of a Turing machine with a clearly arranged assembler-style programming language and a user-friendly representation of semantic information.

This paper describes the concept of the STM, its memory management and flow control, and shows how a semantic Turing machine can simulate any ordinary Turing machine. Analogous to a universal Turing machine, we give a universal semantic Turing machine (USTM), which is a special STM that can simulate every STM. The USTM serves both as a self-contained semantic explanation of many aspects of the STM, and as a check that an STM implementation works correctly.

Three appendices give the grammar of the STM programming language, tables for the character codes, and the essential parts of a MATLAB implementation of the STM.

S980.

pdf file (201K)

In latent class analysis (LCA) one seeks a clustering of categorical data, such as patterns of symptoms of a patient, in terms of locally independent stochastic models. This leads to practical definitions of criteria, e.g., whether to include patients in further diagnostic examinations. The clustering is often determined by parameters that are estimated by the maximum likelihood method. The likelihood function in LCA has in many cases - especially for sparse data sets - a complicated shape with many local extrema, even for small-scale problems. Hence a global optimization must be attempted.

This paper describes an algorithm implemented to optimize the likelihood function constrained by the requirement of a good fit of the data with a minimal number of classes. The problem is formulated in the algebraic modeling language AMPL and solved via state of the art optimization solvers.

The approach is successfully applied to three real-life problems. Remarkably, the goodness-of-fit constraint makes one of the three problems identifiable by eliminating all but one of the local minimizers.

F981.

pdf file (497K)

Several frameworks for mathematics have been constructed in the literature. To avoid the paradoxes of naive set theory, Zermelo and Fraenkel constructed a type-free system of sets, based on first order predicate logic, while von Neumann, Bernays and Goedel constructed a system with two types, sets and classes. These systems suffice for the common needs of a working mathematician. But they are inconvenient to use without pervasive abuse of language and notation, which makes human-machine communication of mathematics difficult.

In this document, we construct a new framework for mathematics, designed as part of the specification system FMathL for the formalized communication of mathematics between humans and machines, in a way close to the actual practice of mathematics. The framework is described in such a way that this description can become part of the FMathL system itself.

All axioms are given in the form of familiar existence requirements and properties that are used by mathematicians on an everyday basis. Thus mathematicians trusting the axiom of choice and hence the law of excluded middle can readily convince themselves of their truth in their private (but publicly educated) view of mathematical concepts.

The exposition is such that students exposed to the typical introductory mathematics courses should have no serious difficulty understanding the material.

N982.

pdf file (340K)

This document is an assessment of the value of optimal linear interpolation enclosures and of nonstandard intervals, especially with respect to applications in computer graphics, and of the extent to which a future IEEE interval standard should support these.

It turns out that essentially all present applications of nonstandard intervals to practical problems can be matched by similarly efficient approaches based on standard intervals only. On the other hand, a number of applications were inspired by the use of nonstandard arithmetic.

This suggests requiring minimal support for nonstandard intervals, allowing implementations of nonstandard interval arithmetic to be compatible with the standard, whereas full support, in the sense of making one of the conflicting variants mandatory, seems inappropriate.

P983.

GLOBAL is a multistart-type stochastic method for bound-constrained global optimization problems. Its goal is to find all the local minima that are potentially global. For this reason it involves a combination of sampling, clustering, and local search. We report its results on the noisy problems given.

P984.

Benchmarking results with the MCS algorithm for bound-constrained global optimization on the noiseless BBOB 2009 testbed are described.

P985.

Benchmarking results with the SNOBFIT algorithm for noisy bound-constrained global optimization on the noisy BBOB 2009 testbed are described.

P986.

Benchmarking results with the MCS algorithm for bound-constrained global optimization on the noisy BBOB 2009 testbed are described.

P987.

Benchmarking results with the MCS algorithm for bound-constrained global optimization on the noiseless BBOB 2009 testbed are described.

N988.

pdf file (179K)

This document contains a detailed proposal for a future IEEE 1788 standard on interval arithmetic. It is written in a form that should not be too difficult to transform into a formal, complete and fully precise document specifying the standard to be.

Part 1 contains a concise summary of the basic assumptions (some of which may be controversial) upon which the remaining document is based. The items in Part 1 are grouped such that separate voting on each issue is meaningful.

Parts 2-5 specify the internal representation of intervals and the operations defined on them. Part 6 specifies the external representation of intervals and the conversion between internal and external representations.

Part 7 is optional and discusses useful modifications of the directed rounding behavior specified in the IEEE 754-2008 standard that would simplify an implementation of the proposed standard.

This final version incorporates many suggestions and corrections by the people mentioned above. In particular, comprehensive discussions with Michel Hack, who read and commented in detail many intermediate versions, had a strong influence on this proposal.

P989.

An experiment is described which proves, using single photons only, that the standard hidden variables assumptions (commonly used to derive Bell inequalities) are inconsistent with quantum mechanics. The analysis is very simple and transparent. In particular, it demonstrates that a classical wave model for quantum mechanics is not ruled out by experiments demonstrating the violation of the traditional hidden variable assumptions.

P990.

On the basis of new, concise foundations, this paper establishes the four laws of thermodynamics, the Maxwell relations, and the stability requirements for response functions, in a form applicable to global (homogeneous), local (hydrodynamic) and microlocal (kinetic) equilibrium.

The present, self-contained treatment needs very little formal machinery and stays very close to the formulas as they are applied by the practicing physicist, chemist, or engineer. From a few basic assumptions, the full structure of phenomenological thermodynamics and of classical and quantum statistical mechanics is recovered.

Care has been taken to keep the foundations free of subjective aspects (which traditionally creep in through information or probability). One might describe the paper as a uniform treatment of the nondynamical part of classical and quantum statistical mechanics ``without statistics'' (i.e., suitable for the definite descriptions of single objects) and ``without mechanics'' (i.e., independent of microscopic assumptions). When enriched by the traditional examples and applications, this paper may serve as the basis for a course on thermal physics.

P991.

This report discusses a case study for modeling uncertainties in a Matlab model for space system design, with the goal of determining robust feasible designs for the model.

The problem structure involves all difficulties an optimization problem can have: discrete variables, strong nonlinearities, discontinuities due to branching decisions, multiple objectives, multiple local minima.

The formal modeling of uncertainty by means of clouds, and the robust optimization of a resulting uncertain model, were considered. A heuristic approach using surrogate function modeling, corner searches, and discrete line searches was used to solve the robust optimization problem; it works satisfactorily for the model problem in the case of interval uncertainties.

Solving the model problem with uncertainty revealed significant robustness advantages of the approach using uncertainty. The influence of assumed knowledge about additional uncertainty information was illustrated for a few model choices.

S992.

Traditional engineering techniques use the least squares method (in mathematical terms, the l^2-norm) to process data. In many practical situations, however, l^p-methods with other values of p lead to better results, and different situations have different optimal p. In particular, when a piecewise smooth signal must be reconstructed, the empirically optimal value of p is close to 1. In this paper, we provide a new interval-based theoretical explanation for this empirical fact.
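A textbook illustration (not taken from the paper) of why p near 1 helps: fitting a constant c to data by minimizing sum_i |x_i - c|^p gives the mean for p = 2 but the median for p = 1, and the median is far less sensitive to a single bad data point.

```python
# Toy data with one outlier in the last position.
data = [1.0, 1.1, 0.9, 1.0, 10.0]

mean = sum(data) / len(data)           # minimizer of the l^2 misfit
median = sorted(data)[len(data) // 2]  # minimizer of the l^1 misfit

print(mean)    # about 2.8 -- pulled strongly toward the outlier
print(median)  # 1.0 -- robust against the outlier
```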

P993.

NO994.

We present formulas for rigorous constraint propagation of quadratic equality or inequality constraints involving a single nonlinear variable. Since the analysis is very elementary, probably everything in here has been known for a long time. The present approach, based on directed rounding only, provides efficient alternatives to the procedures discussed by Hansen and Walster, which employ interval arithmetic.
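The exact-arithmetic core of such a propagation step can be sketched as follows (the function name and interface are ours, for illustration only; the paper additionally controls rounding errors via directed rounding, which plain Python floats do not provide):

```python
import math

def propagate_quadratic(a, b, c, lo, hi):
    """Tighten the bound interval [lo, hi] on x using the constraint
    a*x**2 + b*x + c <= 0 with a > 0; return None if infeasible."""
    assert a > 0
    disc = b*b - 4*a*c
    if disc < 0:
        return None  # quadratic is everywhere positive: infeasible
    r = math.sqrt(disc)
    # feasible x lie between the two roots of the quadratic
    x1, x2 = (-b - r) / (2*a), (-b + r) / (2*a)
    lo2, hi2 = max(lo, x1), min(hi, x2)
    return (lo2, hi2) if lo2 <= hi2 else None

# x**2 - 4 <= 0 gives x in [-2, 2]; intersected with [-10, 1]:
print(propagate_quadratic(1.0, 0.0, -4.0, -10.0, 1.0))  # (-2.0, 1.0)
```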

S995.

Clouds, recently introduced by the author, give - within the traditional probabilistic framework - a concept for imprecise probability, with which quantitative conclusions can be derived from uncertain probabilistic information. A cloud is to a random variable more or less what an interval is to a number.

In this paper, some structural results about clouds are derived. In particular, it is shown that every cloud contains some random variable.

O996.

The goal of this document is to summarize the state of the art of algorithms for solving constrained nonlinear optimization problems.

PS997.

Based on a principal component analysis of 47 published attempts to quantify hydrophobicity in terms of a single scale, we define a representation of the 20 amino acids as points in a 3-dimensional hydrophobicity space and display it by means of a minimal spanning tree. The dominant scale is found to be close to two scales derived from contact potentials.
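For orientation, a minimal spanning tree of points in such a low-dimensional space can be computed with Prim's algorithm on pairwise Euclidean distances; the sketch below uses toy data, not the amino-acid coordinates from the paper.

```python
import math

def minimal_spanning_tree(points):
    """Prim's algorithm: return MST edges as (i, j) index pairs."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # cheapest edge leaving the current tree
        i, j = min(((i, j) for i in in_tree for j in range(n)
                    if j not in in_tree),
                   key=lambda e: math.dist(points[e[0]], points[e[1]]))
        in_tree.add(j)
        edges.append((i, j))
    return edges

# four collinear points: the tree is the chain 0-1-2-3
print(minimal_spanning_tree([(0, 0), (1, 0), (2, 0), (10, 0)]))
```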

O998.

Manuscript (1997).

A global convergence theorem for unconstrained minimization algorithms with an efficient line search is derived. The theorem applies to a new version of the conjugate gradient method derived here in terms of minimizing the effect of zigzagging. The global convergence condition makes much weaker demands on the line search than previous methods.

Local Q-linear convergence in a neighborhood of a strong local minimizer follows under additional conditions that define natural restart conditions based on line search accuracy and loss of conjugacy of successive gradients.
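To make the role of restarts concrete, here is a textbook nonlinear conjugate gradient method (not the algorithm of the paper) with a Polak-Ribiere+ update and the common safeguards: beta is clipped to 0 when conjugacy is lost, and the iteration restarts with the steepest descent direction whenever the search direction fails to be a descent direction. It is run on a simple convex quadratic purely for illustration.

```python
def f(x):
    return (x[0] - 1)**2 + 10*(x[1] - 2)**2

def grad(x):
    return [2*(x[0] - 1), 20*(x[1] - 2)]

def cg_minimize(x, iters=500):
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        gd = sum(gi*di for gi, di in zip(g, d))
        if gd >= 0:                     # restart: not a descent direction
            d = [-gi for gi in g]
            gd = -sum(gi*gi for gi in g)
        # backtracking line search on the Armijo condition
        t, fx = 1.0, f(x)
        while f([xi + t*di for xi, di in zip(x, d)]) > fx + 1e-4*t*gd:
            t *= 0.5
        x = [xi + t*di for xi, di in zip(x, d)]
        gn = grad(x)
        if sum(a*a for a in gn) < 1e-20:
            break
        # Polak-Ribiere beta with the restart rule beta <- max(beta, 0)
        beta = max(0.0, sum(a*(a - b) for a, b in zip(gn, g))
                        / sum(b*b for b in g))
        d = [-a + beta*di for a, di in zip(gn, d)]
        g = gn
    return x
```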

LS999.

We propose a classification scheme that generates a set of balls separating points belonging to different point sets in n-dimensional space such that all points of a given training set are classified correctly. The method is very fast in training. Applications discussed include breast cancer prognosis and character recognition.
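One simple ball-covering construction in this spirit (the details here are ours, not necessarily the paper's): give each training point a ball whose radius is half the distance to the nearest point of a different class, so balls of different classes never overlap and every training point is covered by its own ball.

```python
import math

def train(points, labels):
    """Return a list of (center, radius, label) balls."""
    balls = []
    for p, c in zip(points, labels):
        d = min(math.dist(p, q)
                for q, c2 in zip(points, labels) if c2 != c)
        balls.append((p, d / 2, c))
    return balls

def classify(balls, x):
    """Label of the first ball covering x, or None if uncovered."""
    for center, r, c in balls:
        if math.dist(center, x) <= r:
            return c
    return None

balls = train([(0, 0), (0, 1), (5, 0), (5, 1)], [0, 0, 1, 1])
print(classify(balls, (0.2, 0.1)))  # 0
print(classify(balls, (5.1, 0.9)))  # 1
```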

I do **not** send out paper copies of my manuscripts;
but here are some hints on how to make your own copy.

To uncompress the files obtained you need the GNU program gunzip.

Some browsers save *.dvi.gz files as *.dvi, so one appears to have a dvi file but may still need to run gunzip. Other browsers gunzip automatically while leaving the suffix .gz on the file name. In both cases, the file needs to be renamed appropriately for further processing.

See The gzip homepage or Compression for gunzip; dvi.exe, TeX Facilities, or the LaTeX Home Page for a dvi-viewer; Ghostscript for the popular free postscript viewer; and Help for printing PostScript Files.

If you still have difficulties obtaining or printing one of the papers above, please tell me details about the difficulties you encountered.

Global Optimization

Protein Folding

Interval Methods

Regularization

Mathematical Software

Computational Mathematics Links

Mathematics Links

Statistics Links

my home page (http://www.mat.univie.ac.at/~neum)

Arnold Neumaier (Arnold.Neumaier@univie.ac.at)