Gurobi Solver User Manual

Gurobi Optimization, [www.gurobi.com](http://www.gurobi.com)

Introduction

The Gurobi suite of optimization products includes state-of-the-art simplex and parallel barrier solvers for linear programming (LP) and quadratic programming (QP), a parallel barrier solver for quadratically constrained programming (QCP), as well as parallel mixed-integer linear programming (MILP), mixed-integer quadratic programming (MIQP), and mixed-integer quadratically constrained programming (MIQCP) solvers.

The Gurobi MIP solver includes shared memory parallelism, capable of simultaneously exploiting any number of processors and cores per processor. The implementation is deterministic: two separate runs on the same model will produce identical solution paths.

While numerous solving options are available, Gurobi automatically calculates and sets most options at the best values for specific problems.

Linear, Quadratic, and Quadratically Constrained Programming

Gurobi can solve LP and convex QP problems using several alternative algorithms, while the only choice for solving convex QCP is the parallel barrier algorithm. The majority of LP problems solve best using Gurobi’s state-of-the-art dual simplex algorithm, while most convex QP problems solve best using the parallel barrier algorithm. Certain types of LP problems benefit from using the parallel barrier or the primal simplex algorithms, while for some types of QP, the dual or primal simplex algorithm can be a better choice. If you are solving LP problems on a multi-core system, you should also consider using the concurrent optimizer. It runs different optimization algorithms on different cores, and returns when the first one finishes.
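
For example, to select the algorithm explicitly, set the Method option in a GAMS/Gurobi option file. A minimal sketch (mymodel and z are placeholder names):

* gurobi.opt: algorithm for continuous models
* -1=automatic, 0=primal simplex, 1=dual simplex, 2=barrier,
* 3=concurrent, 4=deterministic concurrent
method 3

and in the GAMS model:

option lp = gurobi;
mymodel.optfile = 1;
solve mymodel minimizing z using lp;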

The infeasibility finder takes an infeasible linear program and produces an irreducible inconsistent subsystem (IIS). An IIS is a set of constraints and variable bounds that is infeasible but becomes feasible if any one member of the set is dropped. The infeasibility finder is activated by the option IIS.

Gurobi supports sensitivity analysis (post-optimality analysis) for linear programs, which allows one to find out more about an optimal solution for a problem. In particular, objective ranging and constraint ranging give information about how much an objective coefficient, a right-hand side, or a variable bound can change without changing the optimal basis. In other words, they show how sensitive the optimal basis is to a change in the objective function, the bounds, or the right-hand side. Gurobi reports the sensitivity information as part of the normal solution listing. Sensitivity analysis is activated by the option Sensitivity.

The Gurobi presolve can sometimes diagnose a problem as being infeasible or unbounded. When this happens, Gurobi can rerun the problem with presolve turned off in order to get better diagnostic information. The rerun without presolve is controlled by the option ReRun. In default mode, only problems that are small (i.e., demo sized) will be rerun.
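
All three diagnostic features mentioned above are switched on through the option file; a minimal gurobi.opt sketch:

* run the IIS finder if the model turns out to be infeasible
iis 1
* report ranging information in the solution listing
sensitivity 1
* rerun without presolve if presolve diagnoses infeasible or unbounded
rerun 1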

Gurobi can either presolve a model or start from an advanced basis or primal/dual solution pair. Often, solving a presolved model from scratch outperforms solving the unpresolved model starting from an advanced basis/solution. It is impossible to determine a priori whether presolve or a start from a given advanced basis/solution without presolve will be faster. By default, Gurobi will automatically use an advanced basis or solution from a previous solve statement.

Mixed-Integer Programming

The methods used to solve pure integer and mixed integer programming problems require dramatically more mathematical computation than those for similarly sized pure linear or quadratic programs. Many relatively small integer programming models take enormous amounts of time to solve.

For problems with discrete variables, Gurobi uses a branch-and-cut algorithm that solves a series of subproblems: LP subproblems for MILP, QP subproblems for MIQP, and QCP or LP outer approximation subproblems for MIQCP. Because a single mixed integer problem generates many subproblems, even small mixed integer problems can be very compute intensive and require significant amounts of physical memory. With the option NonConvex, Gurobi can also solve nonconvex (MI)QP and (MI)QCP problems using a spatial branch-and-bound method.
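
For example, to accept a nonconvex quadratic model instead of receiving a capability error, set the NonConvex option in a gurobi.opt file (a minimal sketch):

* 2 = translate nonconvex quadratic terms and apply spatial branch-and-bound
nonconvex 2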

Gurobi supports Special Order Sets of type 1 and type 2 as well as semi-continuous and semi-integer variables.

You can provide a known solution (for example, from a MIP problem previously solved or from your knowledge of the problem) to serve as the first integer solution.

If you specify some or all values for the discrete variables and set the Gurobi option MipStart, Gurobi will check the validity of these values as an integer-feasible solution. If this check succeeds, the solution will be treated as an integer solution of the current problem.
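
In GAMS, the starting point is communicated through the level values of the variables. A minimal sketch (x, mymodel, and z are placeholder names):

* supply a known (partial) solution via level values
x.l('i1') = 1;
x.l('i2') = 0;

* gurobi.opt contains the single line: mipstart 1
mymodel.optfile = 1;
solve mymodel minimizing z using mip;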

Nonlinear Programming

Gurobi is not a general-purpose nonlinear programming solver, but it can handle certain nonlinear constraints by reformulating them into supported linear and/or quadratic constraints. The supported constraints are:

  • MAX constraint: eq1.. r =e= max(x1,x2,x3,…,c); eq2.. r =e= smax(i, x(i));
  • MIN constraint: eq1.. r =e= min(x1,x2,x3,…,c); eq2.. r =e= smin(i, x(i));
  • AND constraint: eq1.. r =e= b1 and b2 and b3 and …; eq2.. r =e= sand(i, b(i));
  • OR constraint: eq1.. r =e= b1 or b2 or b3 or …; eq2.. r =e= sor(i, b(i));
  • ABS constraint: eq.. r =e= abs(x);
  • EXP constraint: eq1.. r =e= exp(x); eq2.. r =e= x**a; eq3.. r =e= a**x; Here, a > 0, and for eq2 (x**a) the lower bound of x must be nonnegative.
  • LOG constraint: eq1.. r =e= log(x); eq2.. r =e= log2(x); eq3.. r =e= log10(x);
  • SIN / COS / TAN constraint: eq1.. r =e= sin(x); eq2.. r =e= cos(x); eq3.. r =e= tan(x);
  • NORM constraint: eq1_1.. r =e= sum(i, abs(x(i))); eq1_2.. r =e= abs(x1) + abs(x2) + abs(x3) + abs(x4); eq2_1.. r =e= edist(x1,x2,x3,…); eq2_2.. r =e= sqrt(sum(i, sqr(x(i)))); eq2_3.. r =e= sqrt(sqr(x1) + sqr(x2) + sqr(x3) + sqr(x4)); eq3_1.. r =e= smax(i, abs(x(i))); eq3_2.. r =e= max(abs(x1), abs(x2), abs(x3), abs(x4)); Note that a 2-norm constraint can lead to a non-convex quadratic model, which is much harder to solve than a convex quadratic or linear model.
  • POLY constraint: eq.. r =e= poly(x, a0, a1, a2, …);
  • SIGMOID constraint: eq1.. r =e= sigmoid(x); eq2.. r =e= 1 / (1 + exp(-x));

Note: Nonlinear constraints must match one of the above forms exactly. Otherwise, Gurobi will raise a capability error. For example, it is not possible to interchange the left-hand and right-hand sides of the above constraints.
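
A minimal sketch of a model using two of the supported forms (all identifiers are illustrative). Because of the abs and max functions, GAMS classifies the model as DNLP:

Variable x1, x2, r1, r2, obj;
Equation eabs, emax, edef;

* ABS constraint in the supported form
eabs.. r1 =e= abs(x1);
* MAX constraint in the supported form (with a constant argument)
emax.. r2 =e= max(x1, x2, 3);
edef.. obj =e= r1 + r2;

x1.lo = -10;  x1.up = 10;
x2.lo = -10;  x2.up = 10;

Model m /all/;
option dnlp = gurobi;
solve m minimizing obj using dnlp;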

Feasible Relaxation

The Infeasibility Finder identifies the causes of infeasibility by means of an irreducible inconsistent subsystem (IIS). However, you may want to go beyond diagnosis, perform an automatic correction of your model, and then proceed with delivering a solution. One approach for doing so is to build your model with explicit slack variables and other modeling constructs, so that an infeasible outcome is never a possibility. An automated approach offered in Gurobi is known as FeasOpt (for Feasible Optimization) and is turned on by the parameter FeasOpt in a Gurobi option file.

With the FeasOpt option Gurobi accepts an infeasible model and selectively relaxes the bounds and constraints in a way that minimizes a weighted penalty function. In essence, the feasible relaxation tries to suggest the least change that would achieve feasibility.

By default, all equations are candidates for relaxation and are weighted equally, while none of the variable bounds can be relaxed. This default behavior can be modified by assigning relaxation preferences to variable bounds and constraints. These preferences can be conveniently specified with the .feaspref option. The input value denotes the user's willingness to relax a constraint or bound. The larger the preference, the more likely it is that a given bound or constraint will be relaxed. More precisely, the reciprocal of the specified value is used to weight the relaxation of that constraint or bound. The user may specify a preference value less than or equal to 0 (zero), which denotes that the corresponding constraint or bound must not be relaxed. It is not necessary to specify a unique preference for each bound or range. In fact, it is conventional to use only the values 0 (zero) and 1 (one), except when your knowledge of the problem suggests assigning explicit preferences.

Preferences can be specified through a Gurobi solver option file using dot options. The syntax is:

(variable or equation).feaspref(value)

For example, suppose we have a declaration:

   Set i /i1*i5/;
   Set j /j1*j4/;
   Variable v(i,j);
   Equation e(i,j);

Then, the relaxation preference in the gurobi.opt file can be specified by:

feasopt 1
v.feaspref            1
v.feaspref('i1',*)    2
v.feaspref('i1','j2') 0

e.feaspref(*,'j1')    0
e.feaspref('i5','j4') 2

First we turn the feasible relaxation on. Furthermore, we specify that all variables v(i,j) have a preference of 1, except the variables over set element i1, which have a preference of 2. The variable over set elements i1 and j2 has preference 0. Note that preferences are assigned in a procedural fashion, so that preferences assigned later overwrite previous ones. The same syntax applies for assigning preferences to equations, as demonstrated above. If you want to assign a preference to all variables or equations in a model, use the keywords variables or equations instead of the individual variable and equation names (e.g. variables.feaspref 1).

The parameter FeasOptMode allows different strategies for finding a feasible relaxation in one or two phases. In its first phase, FeasOpt attempts to minimize its relaxation of the infeasible model, that is, it attempts to find a feasible solution that requires minimal change. In its second phase, it finds an optimal solution (using the original objective) among those that require only as much relaxation as was found necessary in the first phase. Values of the parameter FeasOptMode indicate two aspects: (1) whether to stop in phase one or continue to phase two, and (2) how to measure the relaxation (as a sum of required relaxations, as the number of constraints and bounds required to be relaxed, or as a sum of the squares of required relaxations). Please check the description of the parameter FeasOptMode for details. Also check the example models feasopt* in the GAMS Model Library.
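
A minimal gurobi.opt sketch, assuming that mode 1 selects the two-phase strategy that first minimizes the sum of relaxations and then optimizes the original objective (please verify against the FeasOptMode description):

feasopt 1
* assumed: 1 = minimize the sum of relaxations in phase one,
* then optimize the original objective in phase two
feasoptmode 1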

Parameter Tuning Tool

The Gurobi Optimizer provides a wide variety of parameters that allow you to control the operation of the optimization engines. The level of control varies from extremely coarse-grained (e.g., the Method parameter, which allows you to choose the algorithm used to solve continuous models) to very fine-grained (e.g., the MarkowitzTol parameter, which allows you to adjust the precise tolerances used during simplex basis factorization). While these parameters provide a tremendous amount of user control, the immense space of possible options can present a significant challenge when you are searching for parameter settings that improve performance on a particular model. The purpose of the Gurobi tuning tool is to automate this search.

The Gurobi tuning tool performs multiple solves on your model, choosing different parameter settings for each, in a search for settings that improve runtime. The longer you let it run, the more likely it is to find a significant improvement.

A number of tuning-related parameters allow you to control the operation of the tuning tool. The most important is probably TuneTimeLimit, which controls the amount of time spent searching for an improving parameter set. Other parameters include TuneTrials (which attempts to limit the impact of randomness on the result), TuneResults (which limits the number of results that are returned), and TuneOutput (which controls the amount of output produced by the tool).
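
A gurobi.opt sketch of a tuning run. The tool itself is invoked with the Tuning option; the assumption here is that its value names the file receiving the best parameter settings found (check the option reference for the exact syntax):

* invoke the tuning tool (assumed: value names the result file)
tuning tunedparams.prm
* search for at most one hour
tunetimelimit 3600
* run each candidate parameter set three times to damp random noise
tunetrials 3
* return only the best parameter set found
tuneresults 1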

While parameter settings can have a big performance effect for many models, they aren’t going to solve every performance issue. One reason is simply that there are many models for which even the best possible choice of parameter settings won’t produce an acceptable result. Some models are simply too large and/or difficult to solve, while others may have numerical issues that can’t be fixed with parameter changes.

Another limitation of automated tuning is that performance on a model can experience significant variations due to random effects (particularly for MIP models). This is the nature of search. The Gurobi algorithms often have to choose from among multiple, equally appealing alternatives. Seemingly innocuous changes to the model (such as changing the order of the constraints or variables), or subtle changes to the algorithm (such as modifying the random number seed), can lead to different choices. Often, breaking a single tie in a different way can lead to an entirely different search. We have seen cases where subtle changes in the search produce 100X performance swings. While the tuning tool tries to limit the impact of these effects, the final result will typically still be heavily influenced by such issues.

The bottom line is that automated performance tuning is meant to give suggestions for parameters that could produce consistent, reliable improvements on your models. It is not meant to be a replacement for efficient modeling or careful performance testing.

Compute Server

The Gurobi Compute Server allows you to use one or more servers to offload all of your Gurobi computations.

Gurobi compute servers support queuing and load balancing. You can set a limit on the number of simultaneous jobs each compute server will run. When this limit has been reached, subsequent jobs will be queued. If you have multiple compute servers, the current job load is automatically balanced among the available servers. By default, the Gurobi job queue is serviced in a First-In, First-Out (FIFO) fashion. However, jobs can be given different priorities. Jobs with higher priorities are then selected from the queue before jobs with lower priorities.

Gurobi Compute Server licenses and software are not included in GAMS/Gurobi. Contact Gurobi directly to inquire about the software and license.

Distributed Parallel Algorithms

The Gurobi Optimizer implements a number of distributed algorithms that allow you to use multiple machines to solve a problem faster. The available distributed algorithms are:

  • A distributed MIP solver, which allows you to divide the work of solving a single MIP model among multiple machines. A manager machine passes problem data to a set of worker machines in order to coordinate the overall solution process.
  • A distributed concurrent solver, which allows you to use multiple machines to solve an LP or MIP model. Unlike the distributed MIP solver, the concurrent solver doesn't divide the work associated with solving the problem among the machines. Instead, each machine uses a different strategy to solve the whole problem, with the hope that one strategy will be particularly effective and will finish much earlier than the others. For some problems, this concurrent approach can be more effective than attempting to divide up the work.
  • Distributed parameter tuning, which automatically searches for parameter settings that improve performance on your optimization model. Tuning solves your model with a variety of parameter settings, measuring the performance obtained by each set, and then uses the results to identify the settings that produce the best overall performance. The distributed version of tuning performs these trials on multiple machines, which makes the overall tuning process run much faster.

These distributed parallel algorithms are designed to be almost entirely transparent to the user. The user simply modifies a few parameters, and the work of distributing the computation to multiple machines is handled behind the scenes by Gurobi.

Specifying the Worker Pool

Once you’ve set up a set of one or more distributed workers, you should list at least one of their names in the WorkerPool parameter. You can provide either machine names or IP addresses, and they should be comma-separated.

You can provide the worker access password through the WorkerPassword parameter. All servers in the worker pool must have the same access password.

Requesting Distributed Algorithms

Once you’ve set up the worker pool through the appropriate parameters, the last step to use a distributed algorithm is to set the TuneJobs, ConcurrentJobs, or DistributedMIPJobs parameter. These parameters are used to indicate how many distinct tuning, concurrent, or distributed MIP jobs should be started on the available workers.

If some of the workers in your worker pool are running at capacity when you launch a distributed algorithm, the algorithm won’t create queued jobs. Instead, it will launch as many jobs as it can (up to the requested value), and it will run with these jobs.
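
A minimal gurobi.opt sketch that sets up the worker pool and requests distributed MIP (machine names and password are placeholders):

* comma-separated machine names or IP addresses of the workers
workerpool server1,server2,server3
* all workers must share this access password
workerpassword mypassword
* start distributed MIP jobs on up to three workers
distributedmipjobs 3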

These distributed algorithms have been designed to be nearly indistinguishable from the single machine versions. Our hope is that, if you know how to use the single machine version, you'll find it straightforward to use the distributed version. The distributed algorithms respect all of the usual parameters. For distributed MIP, you can adjust strategies, adjust tolerances, set limits, etc. For concurrent MIP, you can allow Gurobi to choose the settings for each machine automatically or specify a set of options. For distributed tuning, you can use the usual tuning parameters, including TuneTimeLimit, TuneTrials, and TuneOutput.

There are a few things to be aware of when using distributed algorithms, though. One relates to relative machine performance. Distributed algorithms work best if all of the workers give very similar performance. For example, if one machine in your worker pool were much slower than the others in a distributed tuning run, any parameter sets tested on the slower machine would appear to be less effective than if they were run on a faster machine. Similar considerations apply for distributed MIP and distributed concurrent. We strongly recommend that you use machines with very similar performance. Note that if your machines have similarly performing cores but different numbers of cores, we suggest that you use the Threads parameter to make sure that all machines use the same number of cores.

Logging for distributed MIP is very similar to the standard MIP logging. The main differences are in the progress section. The header for the standard MIP logging looks like this:

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

By contrast, the distributed MIP header looks like this:

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | ParUtil Time

Instead of showing iterations per node, the last field in the distributed log shows parallel utilization. Specifically, it shows the fraction of the preceding time period (the time since the previous progress log line) that the workers spent actively processing MIP nodes.

Here is an example of a distributed MIP progress log:

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | ParUtil Time

H    0                          157344.61033          -      -          0s
H    0                          40707.729144          -      -          0s
H    0                          28468.534497          -      -          0s
H    0                          18150.083886          -      -          0s
H    0                          14372.871258          -      -          0s
H    0                          13725.475382          -      -          0s
     0     0 10543.7611    0   19 13725.4754 10543.7611  23.2%   99%    0s
*  266                          12988.468031 10543.7611  18.8%          0s
H 1503                          12464.099984 10630.6187  14.7%          0s
* 2350                          12367.608657 10632.7061  14.0%          1s
* 3360                          12234.641804 10641.4586  13.0%          1s
H 3870                          11801.185729 10641.4586  9.83%          1s

Ramp-up phase complete - continuing with instance 2 (best bd 10661)

 16928  2731 10660.9626    0   12 11801.1857 10660.9635  9.66%   99%    2s
 135654 57117 11226.5449   19   12 11801.1857 11042.3036  6.43%   98%    5s
 388736 135228 11693.0268   23   12 11801.1857 11182.6300  5.24%   96%   10s
 705289 196412     cutoff           11801.1857 11248.8963  4.68%   98%   15s
 1065224 232839 11604.6587   28   10 11801.1857 11330.2111  3.99%   98%   20s
 1412054 238202 11453.2202   31   12 11801.1857 11389.7119  3.49%   99%   25s
 1782362 209060     cutoff           11801.1857 11437.2670  3.08%   97%   30s
 2097018 158137 11773.6235   20   11 11801.1857 11476.1690  2.75%   92%   35s
 2468495 11516     cutoff           11801.1857 11699.9393  0.86%   78%   40s
 2481830     0     cutoff           11801.1857 11801.1857  0.00%   54%   40s

One thing you may find in the progress section is that node counts may not increase monotonically. Distributed MIP tries to create a single, unified view of node numbers, but with multiple machines processing nodes independently, possibly at different rates, some inconsistencies are inevitable.

Another difference is the line that indicates that the distributed ramp-up phase is complete. At this point, the distributed strategy transitions from a concurrent approach to a distributed approach. The log line indicates which worker was the winner in the concurrent approach. Distributed MIP continues by dividing the partially explored MIP search tree from this worker among all of the workers.

Another difference in the distributed log is in the summary section. The distributed MIP log includes a breakdown of how runtime was spent:

Runtime breakdown:
  Active:  37.85s (93%)
  Sync:     2.43s (6%)
  Comm:     0.34s (1%)

This is an aggregated view of the utilization data that is displayed in the progress log lines. In this example, the workers spent 93% of runtime actively working on MIP nodes, 6% waiting to synchronize with other workers, and 1% communicating data between machines.

The installation instructions for the Gurobi Remote Services can be found on Gurobi's web page [www.gurobi.com](http://www.gurobi.com).

Gurobi Instant Cloud

An alternative to setting up your own pool of machines is to use the Gurobi Instant Cloud. You only need a GAMS/Gurobi link license when you solve your problems in the Gurobi Instant Cloud; the cost for the Gurobi license is paid on a per-use basis directly to Gurobi. If you follow the steps on the Gurobi web site, you eventually get the names of the machines Gurobi has started for you in the cloud. In order to use these machines from GAMS/Gurobi, you need to provide a Gurobi license with access instructions for the Gurobi Instant Cloud.

Solution Pool

While the default goal of the Gurobi Optimizer is to find one proven optimal solution to your model, with a possible side-effect of finding other solutions along the way, the solver provides a number of parameters that allow you to change this behavior.

By default, the Gurobi MIP solver will try to find one proven optimal solution to your model. It will typically find multiple sub-optimal solutions along the way, which can be retrieved later. However, these solutions aren’t produced in a systematic way. The set of solutions that are found depends on the exact path the solver takes through the MIP search. You could solve a MIP model once, obtaining a set of interesting sub-optimal solutions, and then solve the same problem again with different parameter settings, and find only the optimal solution.

To store (some of the) solutions found along the way, you can enable the Solution Pool feature by setting option solnpool. If you’d like more control over how solutions are found and retained, the Gurobi Optimizer has a number of parameters available for this. The first and simplest is PoolSolutions, which controls the size of the solution pool. Changing this parameter won’t affect the number of solutions that are found – it simply determines how many of those are retained.

You can use the PoolSearchMode parameter to control the approach used to find solutions. In its default setting (0), the MIP search simply aims to find one optimal solution. Setting the parameter to 1 causes the MIP search to expend additional effort to find more solutions, but in a non-systematic way. You will get more solutions, but not necessarily the best solutions. Setting the parameter to 2 causes the MIP solver to do a systematic search for the n best solutions. For both non-default settings, the PoolSolutions parameter sets the target for the number of solutions to find.

If you are only interested in solutions that are within a certain gap of the best solution found, you can set the PoolGap parameter. Solutions that are not within the specified gap are discarded.

Obtaining an OPTIMAL optimization return status when using PoolSearchMode=2 indicates that the MIP solver succeeded in finding the desired number of best solutions, or it proved that the model does not have that many distinct feasible solutions. If the solver terminated early (e.g., due to a time limit), you can use the PoolObjBound attribute (printed to the log) to evaluate the quality of the solutions that were found. This attribute gives a bound on the objective of any solution that is not already in the solution pool: further tree exploration will not find solutions better than this bound. The difference between PoolObjBound and ObjBound is that the latter gives a bound on the objective of any solution, including those already in the pool, and is therefore often looser. You can use PoolObjBound to count how many of the n best solutions you found: any solution whose objective value is at least as good as PoolObjBound is among the n best.

Solution Pool Example

Let’s continue with a few examples of how these parameters would be used. Imagine that you are solving a MIP model with an optimal (minimization) objective of 100. Further imagine that, using default settings, the MIP solver finds four solutions to this model with objectives 100, 110, 120, and 130.

If you set the PoolSolutions parameter to 3 and solve the model again, the MIP solver would discard the worst solution and return with 3 solutions in the solution pool. If you instead set the PoolGap parameter to value 0.2, the MIP solver would discard any solutions whose objective value is worse than 120 (which would also leave 3 solutions in the solution pool).

If you set the PoolSearchMode parameter to 2 and the PoolSolutions parameter to 10, the MIP solver would attempt to find the 10 best solutions to the model. An OPTIMAL return status would indicate that either (i) it found the 10 best solutions, or (ii) it found all feasible solutions to the model, and there were fewer than 10. If you also set the PoolGap parameter to a value of 0.1, the MIP solver would try to find 10 solutions with objective no worse than 110. While this may appear equivalent to asking for 10 solutions and simply ignoring those with objective worse than 110, the solve will typically complete significantly faster with this parameter set, since the solver does not have to expend effort looking for solutions beyond the requested gap.
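
A gurobi.opt sketch for the scenario just described, assuming (as with other GAMS solver links) that the solnpool value names the GDX file that receives the alternative solutions:

* export the alternative solutions (assumed: value is a GDX file name)
solnpool solnpool.gdx
* 2 = systematic search for the n best solutions
poolsearchmode 2
* target the 10 best solutions
poolsolutions 10
* keep only solutions within 10% of the best
poolgap 0.1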

Solution Pool Subtleties

There are a few subtleties associated with finding multiple solutions that we’ll cover now.

Continuous Variables

One subtlety arises when considering multiple solutions for models with continuous variables. Specifically, you may have two solutions that take identical values on the integer variables but where some continuous variables differ. By choosing different points on the line between these two solutions, you actually have an infinite number of choices for feasible solutions to the problem. To avoid this issue, we define two solutions as being equivalent if they take the same values on all integer variables (and on all continuous variables that participate in SOS constraints). A solution will be discarded if it is equivalent to another solution that is already in the pool.

Optimality Gap

The interplay between the optimality gap (MIPGap or MIPGapAbs) and multiple solutions can be a bit subtle. When using the default PoolSearchMode, a non-zero optimality gap indicates that you are willing to allow the MIP solver to declare a solution optimal, even though the model may have other, better solutions. The claim the solver makes upon termination is that no other solution would improve the incumbent objective by more than the optimality gap. Terminating at this point is ultimately a pragmatic choice – we’d probably rather have the true best solution, but the cost of reducing the optimality gap to zero can often be prohibitive.

This pragmatic choice can produce a bit of confusion when finding multiple optimal solutions. Specifically, if you ask for the n best solutions, the optimality gap plays a similar role as it does in the default case, but the implications may be a bit harder to understand. Specifically, a non-zero optimality gap means that you are willing to allow the solver to declare that it has found the n best solutions, even though there may be solutions that are better than those that were returned. The claim in this case is that any solution not among the reported n best would improve on the objective for the worst among the n best by less than the optimality gap.

If you want to avoid this source of potential confusion, you should set the optimality gap to 0 when using PoolSearchMode=2.

Logging

If you browse the log from a MIP solve with PoolSearchMode set to a non-default value, you may see the lower bound on the objective exceed the upper bound. This can’t happen with the default PoolSearchMode – if you are only looking for one optimal solution, the search is done as soon as the lower bound reaches the upper bound. However, if you are looking for the n best solutions, you have to prove that the model has no solution better than the n-th best. The objective for that n-th solution could be much worse than that of the incumbent. In this situation, the log file will include a line of the form:

Optimal solution found at node 123 - now completing solution pool...

Distributed MIP

One limitation that we should point out related to multiple solutions is that the distributed MIP solver has not been extended to support non-default PoolSearchMode settings. Distributed MIP will typically produce many more feasible solutions than non-distributed MIP, but there’s no way to ask it to find the n best solutions.

Multiple Objectives

While typical optimization models have a single objective function, real-world optimization problems often have multiple, competing objectives. For example, in a production planning model, you may want to both maximize profits and minimize late orders, or in a workforce scheduling application, you may want to minimize the number of short-staffed shifts while also respecting workers' shift preferences.

The main challenge you face when working with multiple, competing objectives is deciding how to manage the tradeoffs between them. Gurobi provides tools that simplify the task: Gurobi allows you to blend multiple objectives, to treat them hierarchically, or to combine the two approaches. In a blended approach, you optimize a weighted combination of the individual objectives. In a hierarchical or lexicographic approach, you set a priority for each objective and optimize in priority order. When optimizing for one objective, you only consider solutions that would not degrade the objective values of higher-priority objectives. Gurobi allows you to enter and manage your objectives, to provide weights for a blended approach, or to set priorities for a hierarchical approach. Gurobi will only solve multi-objective models with strictly linear objectives. Moreover, for continuous models, Gurobi will report a primal-only solution (no dual information).

Following the workforce application, the objectives would be specified as follows:

equations defObj, defNumShifts, defSumPreferences;
variables obj, numShifts, sumPreferences;

defObj..            obj =e= numShifts - 1/100*sumPreferences;
defNumShifts..      numShifts =e= ...;
defSumPreferences.. sumPreferences =e= ...;

model workforce /all/;
solve workforce minimizing obj using mip;

With the default setting, Gurobi will solve the blended objective. With the parameter MultObj, Gurobi will use a hierarchical approach instead. A hierarchical or lexicographic approach assigns a priority to each objective and optimizes for the objectives in decreasing priority order. At each step, it finds the best solution for the current objective, but only from among those that would not degrade the solution quality for higher-priority objectives. The priority is specified by the absolute value of the objective coefficient in the blended objective function (defObj). In the example, the numShifts objective with coefficient 1 has a higher priority than the sumPreferences objective with absolute objective coefficient 1/100. The sign of the objective coefficient determines the direction of the particular objective function: here numShifts will be minimized (same direction as in the solve statement) while sumPreferences will be maximized. GAMS needs to identify the various objective functions, therefore the objective variables can only appear in the blended objective function and in their particular objective-defining equation.

By default, the hierarchical approach won’t allow later objectives to degrade earlier objectives. This behavior can be relaxed through a pair of attributes: ObjNRelTol and ObjNAbsTol. By setting one of these for a particular objective, you can indicate that later objectives are allowed to degrade this objective by the specified relative or absolute amount, respectively. In our earlier example, if the optimal value for numShifts is 100, and if we set ObjNAbsTol for this objective to 20, then the second optimization step maximizing sumPreferences would find the best solution for the second objective from among all solutions with objective 120 or better for numShifts. Note that if you modify both tolerances, later optimizations would use the looser of the two values (i.e., the one that allows the larger degradation).
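
For the workforce example above, switching from the blended to the hierarchical approach only requires turning MultObj on in the option file; a minimal gurobi.opt sketch:

* 1 = optimize the objectives hierarchically; priorities are taken from
* the absolute values of the coefficients in defObj
multobj 1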

Summary of GUROBI Options

Termination options

| Option | Description | Default |
|--------|-------------|---------|
| bariterlimit | Barrier iteration limit | infinity |
| bestbdstop | Best objective bound to stop | maxdouble |
| bestobjstop | Best objective value to stop | mindouble |
| cutoff | Objective cutoff | maxdouble |
| iterationlimit | Simplex iteration limit | infinity |
| memlimit | Memory limit | maxdouble |
| nodelimit | MIP node limit | maxdouble |
| softmemlimit | Soft memory limit | maxdouble |
| solutionlimit | MIP feasible solution limit | maxint |
| timelimit | Time limit | GAMS reslim |
| worklimit | Work limit | maxdouble |

Tolerances options

| Option | Description | Default |
|--------|-------------|---------|
| barconvtol | Barrier convergence tolerance | 1e-08 |
| barqcpconvtol | Barrier QCP convergence tolerance | 1e-06 |
| feasibilitytol | Primal feasibility tolerance | 1e-06 |
| intfeastol | Integer feasibility tolerance | 1e-05 |
| markowitztol | Threshold pivoting tolerance | 0.0078125 |
| mipgap | Relative MIP optimality gap | GAMS optcr |
| mipgapabs | Absolute MIP optimality gap | GAMS optca |
| optimalitytol | Dual feasibility tolerance | 1e-06 |
| psdtol | Positive semi-definite tolerance | 1e-06 |

Simplex options

| Option | Description | Default |
|--------|-------------|---------|
| lpwarmstart | Warm start usage in simplex | 1 |
| networkalg | Network simplex algorithm | -1 |
| normadjust | Simplex pricing norm | -1 |
| perturbvalue | Simplex perturbation magnitude | 0.0002 |
| quad | Quad precision computation in simplex | -1 |
| sifting | Sifting within dual simplex | -1 |
| siftmethod | LP method used to solve sifting sub-problems | -1 |
| simplexpricing | Simplex variable pricing strategy | -1 |

Barrier options

| Option | Description | Default |
|--------|-------------|---------|
| barcorrectors | Central correction limit | -1 |
| barhomogeneous | Barrier homogeneous algorithm | -1 |
| barorder | Barrier ordering algorithm | -1 |
| crossover | Barrier crossover strategy | -1 |
| crossoverbasis | Crossover initial basis construction strategy | -1 |
| qcpdual | Compute dual variables for QCP models | 1 |

Scaling options

| Option | Description | Default |
|--------|-------------|---------|
| objscale | Objective scaling | 0 |
| scaleflag | Model scaling | -1 |

MIP options

| Option | Description | Default |
|--------|-------------|---------|
| branchdir | Branch direction preference | 0 |
| concurrentjobs | Enables distributed concurrent solver | 0 |
| concurrentmip | Enables concurrent MIP solver | 1 |
| degenmoves | Degenerate simplex moves | -1 |
| disconnected | Disconnected component strategy | -1 |
| distributedmipjobs | Enables the distributed MIP solver | 0 |
| dumpbcsol | Dump incumbents to GDX files during branch-and-cut | |
| fixoptfile | Option file for fixed problem optimization | |
| heuristics | Turn MIP heuristics up or down | 0.05 |
| improvestartgap | Trigger solution improvement | 0 |
| improvestartnodes | Trigger solution improvement | maxdouble |
| improvestarttime | Trigger solution improvement | maxdouble |
| .lazy | Lazy constraints value | 0 |
| lazyconstraints | Indicator to use lazy constraints | 0 |
| minrelnodes | Minimum relaxation heuristic control | -1 |
| mipfocus | Set the focus of the MIP solver | 0 |
| mipstart | Use mip starting values | 0 |
| mipstopexpr | Stop expression for branch and bound | |
| miqcpmethod | Method used to solve MIQCP models | -1 |
| multimipstart | Use multiple (partial) mipstarts provided via gdx files | |
| nlpheur | Controls the NLP heuristic for non-convex quadratic models | 1 |
| nodefiledir | Directory for MIP node files | . |
| nodefilestart | Memory threshold for writing MIP tree nodes to disk | maxdouble |
| nodemethod | Method used to solve MIP node relaxations | -1 |
| nonconvex | Control how to deal with non-convex quadratic programs | -1 |
| norelheurtime | Limits the amount of time (in seconds) spent in the NoRel heuristic | 0 |
| norelheurwork | Limits the amount of work performed by the NoRel heuristic | 0 |
| obbt | Controls aggressiveness of Optimality-Based Bound Tightening | -1 |
| .partition | Variable partition value | 0 |
| partitionplace | Controls when the partition heuristic runs | 15 |
| .prior | Branching priorities | 1 |
| pumppasses | Feasibility pump heuristic control | -1 |
| rins | RINS heuristic | -1 |
| solfiles | Location to store intermediate solution files | |
| solnpool | Controls export of alternate MIP solutions | |
| solnpoolmerge | Controls export of alternate MIP solutions for merged GDX solution file | |
| solnpoolnumsym | Maximum number of variable symbols when writing merged GDX solution file | 10 |
| solnpoolprefix | First dimension of variables for merged GDX solution file or file name prefix for GDX solution files | soln |
| solvefixed | Indicator for solving the fixed problem for a MIP to get a dual solution | 1 |
| startnodelimit | Node limit for MIP start sub-MIP | -1 |
| submipnodes | Nodes explored by sub-MIP heuristics | 500 |
| symmetry | Symmetry detection | -1 |
| varbranch | Branch variable selection strategy | -1 |
| zeroobjnodes | Zero objective heuristic control | -1 |

Presolve options

| Option | Description | Default |
|--------|-------------|---------|
| aggfill | Allowed fill during presolve aggregation | -1 |
| aggregate | Presolve aggregation control | 1 |
| dualreductions | Disables dual reductions in presolve | 1 |
| precrush | Allows presolve to translate constraints on the original model to equivalent constraints on the presolved model | 0 |
| predeprow | Presolve dependent row reduction | -1 |
| predual | Presolve dualization | -1 |
| premiqcpform | Format of presolved MIQCP model | -1 |
| prepasses | Presolve pass limit | -1 |
| preqlinearize | Presolve Q matrix linearization | -1 |
| presolve | Presolve level | -1 |
| presos1bigm | Controls largest coefficient in SOS1 reformulation | -1 |
| presos1encoding | Controls SOS1 reformulation | -1 |
| presos2bigm | Controls largest coefficient in SOS2 reformulation | -1 |
| presos2encoding | Controls SOS2 reformulation | -1 |
| presparsify | Presolve sparsify reduction | -1 |

Tuning options

| Option | Description | Default |
|--------|-------------|---------|
| tunecleanup | Enables a tuning cleanup phase | 0 |
| tunecriterion | Specify tuning criterion | -1 |
| tunejobs | Enables distributed tuning | 0 |
| tunemetric | Metric to aggregate results into a single measure | -1 |
| tuneoutput | Tuning output level | 2 |
| tuneresults | Number of improved parameter sets returned | -1 |
| tunetargetmipgap | A target gap to be reached | 0 |
| tunetargettime | A target runtime in seconds to be reached | 0.005 |
| tunetimelimit | Time limit for tuning | -1 |
| tunetrials | Perform multiple runs on each parameter set to limit the effect of random noise | 0 |

Multiple Solutions options

| Option | Description | Default |
|--------|-------------|---------|
| poolgap | Relative gap for solutions in pool | maxdouble |
| poolgapabs | Absolute gap for solutions in pool | maxdouble |
| poolsearchmode | Choose the approach used to find additional solutions | 0 |
| poolsolutions | Number of solutions to keep in pool | 10 |

MIP Cuts options

| Option | Description | Default |
|--------|-------------|---------|
| bqpcuts | BQP cut generation | -1 |
| cliquecuts | Clique cut generation | -1 |
| covercuts | Cover cut generation | -1 |
| cutaggpasses | Constraint aggregation passes performed during cut generation | -1 |
| cutpasses | Root cutting plane pass limit | -1 |
| cuts | Global cut generation control | -1 |
| flowcovercuts | Flow cover cut generation | -1 |
| flowpathcuts | Flow path cut generation | -1 |
| gomorypasses | Root Gomory cut pass limit | -1 |
| gubcovercuts | GUB cover cut generation | -1 |
| impliedcuts | Implied bound cut generation | -1 |
| infproofcuts | Infeasibility proof cut generation | -1 |
| liftprojectcuts | Lift-and-project cut generation | -1 |
| mipsepcuts | MIP separation cut generation | -1 |
| mircuts | MIR cut generation | -1 |
| modkcuts | Mod-k cut generation | -1 |
| networkcuts | Network cut generation | -1 |
| projimpliedcuts | Projected implied bound cut generation | -1 |
| psdcuts | PSD cut generation | -1 |
| relaxliftcuts | Relax-and-lift cut generation | -1 |
| rltcuts | RLT cut generation | -1 |
| strongcgcuts | Strong-CG cut generation | -1 |
| submipcuts | Sub-MIP cut generation | -1 |
| zerohalfcuts | Zero-half cut generation | -1 |

Distributed algorithms options

| Option | Description | Default |
|--------|-------------|---------|
| workerpassword | Password for distributed worker cluster | |
| workerpool | Distributed worker cluster | |

Other options

| Option | Description | Default |
|--------|-------------|---------|
| displayinterval | Frequency at which log lines are printed | 5 |
| .dofuncpieceerror | Error allowed for PWL translation of function constraints | 1e-3 |
| .dofuncpiecelength | Piece length for PWL translation of function constraints | 1e-2 |
| .dofuncpieceratio | Control whether to under- or over-estimate function values in PWL approximation | -1 |
| .dofuncpieces | Sets strategy for PWL function approximation | 0 |
| feasopt | Computes a minimum-cost relaxation to make an infeasible model feasible | 0 |
| feasoptmode | Mode of FeasOpt | 0 |
| .feaspref | Feasibility preference | 1 |
| feasrelaxbigm | Big-M value for feasibility relaxations | 1e+06 |
| freegamsmodel | Preserves memory by dumping the GAMS model instance representation temporarily to disk | 0 |
| funcmaxval | Maximum value for x and y variables in function constraints | 1e+06 |
| funcpieceerror | Error allowed for PWL translation of function constraint | 0.001 |
| funcpiecelength | Piece length for PWL translation of function constraint | 0.01 |
| funcpieceratio | Controls whether to under- or over-estimate function values in PWL approximation | -1 |
| funcpieces | Sets strategy for PWL function approximation | 0 |
| iis | Run the Irreducible Inconsistent Subsystem (IIS) finder if the problem is infeasible | 0 |
| iismethod | IIS method | -1 |
| integralityfocus | Set the integrality focus | 0 |
| kappa | Display approximate condition number estimates for the optimal simplex basis | 0 |
| kappaexact | Display exact condition number estimates for the optimal simplex basis | 0 |
| method | Algorithm used to solve continuous models | -1 |
| miptrace | Filename of MIP trace file | |
| miptracenode | Node interval when a trace record is written | 100 |
| miptracetime | Time interval when a trace record is written | 1 |
| multiobjmethod | Warm-start method to solve for subsequent objectives | -1 |
| multiobjpre | Initial presolve on multi-objective models | -1 |
| multobj | Controls the hierarchical optimization of multiple objectives | 0 |
| names | Indicator for loading names | 1 |
| numericfocus | Set the numerical focus | 0 |
| objnabstol | Allowable absolute degradation for objective | |
| objnreltol | Allowable relative degradation for objective | |
| printoptions | List values of all options to GAMS listing file | 0 |
| qextractalg | Quadratic extraction algorithm in GAMS interface | 0 |
| readparams | Read Gurobi parameter file | |
| rerun | Resolve without presolve in case of unbounded or infeasible | -1 |
| rngrestart | Write GAMS readable ranging information file | |
| seed | Modify the random number seed | 0 |
| sensitivity | Provide sensitivity information | 0 |
| threads | Number of parallel threads to use | GAMS threads |
| tuning | Parameter tuning | |
| usebasis | Use basis from GAMS | GAMS bratio |
| varhint | Guide heuristics and branching through variable hints | 0 |
| writeparams | Write Gurobi parameter file | |
| writeprob | Save the problem instance | |