Gurobi Optimization, [www.gurobi.com](http://www.gurobi.com)
Introduction
The Gurobi suite of optimization products includes state-of-the-art simplex and parallel barrier solvers for linear programming (LP) and quadratic programming (QP), a parallel barrier solver for quadratically constrained programming (QCP), as well as parallel mixed-integer linear programming (MILP), mixed-integer quadratic programming (MIQP), and mixed-integer quadratically constrained programming (MIQCP) solvers.
The Gurobi MIP solver includes shared memory parallelism, capable of simultaneously exploiting any number of processors and cores per processor. The implementation is deterministic: two separate runs on the same model will produce identical solution paths.
While numerous solving options are available, Gurobi automatically calculates and sets most options at the best values for specific problems.
Linear, Quadratic and Quadratically Constrained Programming
Gurobi can solve LP and convex QP problems using several alternative algorithms, while the only choice for solving convex QCP is the parallel barrier algorithm. The majority of LP problems solve best using Gurobi’s state-of-the-art dual simplex algorithm, while most convex QP problems solve best using the parallel barrier algorithm. Certain types of LP problems benefit from using the parallel barrier or the primal simplex algorithms, while for some types of QP, the dual or primal simplex algorithm can be a better choice. If you are solving LP problems on a multi-core system, you should also consider using the concurrent optimizer. It runs different optimization algorithms on different cores, and returns when the first one finishes.
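For example, a minimal gurobi.opt sketch that selects a particular algorithm (the model name `mymodel` and objective variable `z` are placeholders; the values follow Gurobi's Method parameter):

```
$onecho > gurobi.opt
* -1=automatic, 0=primal simplex, 1=dual simplex, 2=barrier, 3=concurrent
method 2
$offecho
mymodel.optfile = 1;
option lp = gurobi;
solve mymodel using lp minimizing z;
```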
The infeasibility finder takes an infeasible linear program and produces an irreducibly inconsistent set of constraints (IIS). An IIS is a set of constraints and variable bounds which is infeasible but becomes feasible if any one member of the set is dropped. The infeasibility finder is activated by the option IIS.
Gurobi supports sensitivity analysis (post-optimality analysis) for linear programs which allows one to find out more about an optimal solution for a problem. In particular, objective ranging and constraint ranging give information about how much an objective coefficient or a right-hand-side and variable bounds can change without changing the optimal basis. In other words, they give information about how sensitive the optimal basis is to a change in the objective function or the bounds and right-hand side. Gurobi reports the sensitivity information as part of the normal solution listing. Sensitivity analysis is activated by the option Sensitivity.
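Both diagnostics are enabled through the option file; a minimal sketch (model and objective names are placeholders):

```
$onecho > gurobi.opt
* run the IIS finder if the model turns out to be infeasible
iis 1
* report ranging information with the solution
sensitivity 1
$offecho
mymodel.optfile = 1;
solve mymodel using lp minimizing z;
```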
The Gurobi presolve can sometimes diagnose a problem as being infeasible or unbounded. When this happens, Gurobi can rerun the problem with presolve turned off in order to obtain better diagnostic information. The rerun without presolve is controlled by the option ReRun. By default, only small (i.e., demo-sized) problems are rerun.
Gurobi can either presolve a model or start from an advanced basis or primal/dual solution pair. Often, solving a presolved model from scratch outperforms solving the unpresolved model starting from an advanced basis/solution. It is impossible to determine a priori whether presolve or starting from a given advanced basis/solution without presolve will be faster. By default, Gurobi will automatically use an advanced basis or solution from a previous solve statement.
Mixed-Integer Programming
The methods used to solve pure integer and mixed integer programming problems require dramatically more mathematical computation than those for similarly sized pure linear or quadratic programs. Many relatively small integer programming models take enormous amounts of time to solve.
For problems with discrete variables, Gurobi uses a branch-and-cut algorithm that solves a series of subproblems: LP subproblems for MILP, QP subproblems for MIQP, and QCP subproblems or LP outer-approximation subproblems for MIQCP. Because a single mixed-integer problem generates many subproblems, even small mixed-integer problems can be very compute intensive and require significant amounts of physical memory. With the option nonconvex, Gurobi can also solve nonconvex (MI)QP and (MI)QCP problems using a spatial branch-and-bound method.
Gurobi supports Special Order Sets of type 1 and type 2 as well as semi-continuous and semi-integer variables.
You can provide a known solution (for example, from a MIP problem previously solved or from your knowledge of the problem) to serve as the first integer solution.
If you specify values for some or all of the discrete variables and set the Gurobi option MipStart, Gurobi will check the validity of these values as an integer-feasible solution. If this check succeeds, the solution will be treated as an integer solution of the current problem.
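A minimal sketch, assuming a known solution is stored in a parameter `xknown` (all names here are placeholders): the variable levels set before the solve are passed to Gurobi as the MIP start.

```
$onecho > gurobi.opt
mipstart 1
$offecho
* load a previously known solution into the variable levels
x.l(i) = xknown(i);
mymodel.optfile = 1;
solve mymodel using mip minimizing z;
```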
Nonlinear Programming
Gurobi is not a general-purpose nonlinear programming solver, but it can handle certain nonlinear constraints by reformulating them into supported linear and/or quadratic constraints. These constraints are:
- MAX constraint: `eq1.. r =e= max(x1,x2,x3,…,c);` `eq2.. r =e= smax(i, x(i));`
- MIN constraint: `eq1.. r =e= min(x1,x2,x3,…,c);` `eq2.. r =e= smin(i, x(i));`
- AND constraint: `eq1.. r =e= b1 and b2 and b3 and …;` `eq2.. r =e= sand(i, b(i));`
- OR constraint: `eq1.. r =e= b1 or b2 or b3 or …;` `eq2.. r =e= sor(i, b(i));`
- ABS constraint: `eq.. r =e= abs(x);`
- EXP constraint: `eq1.. r =e= exp(x);` `eq2.. r =e= x**a;` `eq3.. r =e= a**x;` Here, `a > 0`, and for `eq3` the lower bound of `x` must be nonnegative.
- LOG constraint: `eq1.. r =e= log(x);` `eq2.. r =e= log2(x);` `eq3.. r =e= log10(x);`
- SIN / COS / TAN constraint: `eq1.. r =e= sin(x);` `eq2.. r =e= cos(x);` `eq3.. r =e= tan(x);`
- NORM constraint: `eq1_1.. r =e= sum(i, abs(x(i)));` `eq1_2.. r =e= abs(x1) + abs(x2) + abs(x3) + abs(x4);` `eq2_1.. r =e= edist(x1,x2,x3,…);` `eq2_2.. r =e= sqrt(sum(i, sqr(x(i))));` `eq2_3.. r =e= sqrt(sqr(x1) + sqr(x2) + sqr(x3) + sqr(x4));` `eq3_1.. r =e= smax(i, abs(x(i)));` `eq3_2.. r =e= max(abs(x1), abs(x2), abs(x3), abs(x4));` Note that a 2-norm constraint can lead to a non-convex quadratic model, which is much harder to solve than a convex quadratic or linear model.
- POLY constraint: `eq.. r =e= poly(x, a0, a1, a2, …);`
- SIGMOID constraint: `eq1.. r =e= sigmoid(x);` `eq2.. r =e= 1 / (1 + exp(-x));`
Note: Nonlinear constraints must match one of the above forms exactly; otherwise, Gurobi will raise a capability error. For example, it is not possible to interchange the left-hand side and right-hand side of the above constraints.
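As an illustration, here is a small self-contained sketch of the MAX form above (the DNLP model type is used because max with variable arguments is not allowed in LP/MIP models in GAMS; the link reformulates the constraint for Gurobi):

```
Variables r, x1, x2, z;
Equations emax, eobj;

* matches the supported form: r =e= max(x1,x2,...,c)
emax.. r =e= max(x1, x2, 3);
eobj.. z =e= r + x1 + x2;

x1.lo = -5;  x1.up = 5;
x2.lo = -5;  x2.up = 5;

Model m /all/;
option dnlp = gurobi;
solve m using dnlp minimizing z;
```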
Feasible Relaxation
The Infeasibility Finder identifies the causes of infeasibility by means of an irreducibly inconsistent set of constraints (IIS). However, you may want to go beyond diagnosis, perform an automatic correction of your model, and then proceed with delivering a solution. One approach for doing so is to build your model with explicit slack variables and other modeling constructs, so that an infeasible outcome is never a possibility. An automated approach offered in Gurobi is known as FeasOpt (for Feasible Optimization) and is turned on by the parameter FeasOpt in a Gurobi option file.
With the FeasOpt option Gurobi accepts an infeasible model and selectively relaxes the bounds and constraints in a way that minimizes a weighted penalty function. In essence, the feasible relaxation tries to suggest the least change that would achieve feasibility.
By default, all equations are candidates for relaxation and are weighted equally, but none of the variables can be relaxed. This default behavior can be modified by assigning relaxation preferences to variable bounds and constraints. These preferences can be conveniently specified with the .feaspref option. The input value denotes the user's willingness to relax a constraint or bound. The larger the preference, the more likely it will be that a given bound or constraint will be relaxed. More precisely, the reciprocal of the specified value is used to weight the relaxation of that constraint or bound. The user may specify a preference value less than or equal to 0 (zero), which denotes that the corresponding constraint or bound must not be relaxed. It is not necessary to specify a unique preference for each bound or range. In fact, it is conventional to use only the values 0 (zero) and 1 (one), except when your knowledge of the problem suggests assigning explicit preferences.
Preferences can be specified through a Gurobi solver option file using dot options. The syntax is:
```
(variable or equation).feaspref (value)
```
For example, suppose we have a declaration:
```
Set i /i1*i5/;
Set j /j2*j4/;
Variable v(i,j);
Equation e(i,j);
```
Then, the relaxation preference in the gurobi.opt file can be specified by:
```
feasopt 1
v.feaspref 1
v.feaspref('i1',*) 2
v.feaspref('i1','j2') 0
e.feaspref(*,'j1') 0
e.feaspref('i5','j4') 2
```
First we turn the feasible relaxation on. Furthermore, we specify that all variables `v(i,j)` have a preference of 1, except the variables over set element `i1`, which have a preference of 2. The variable over set elements `i1` and `j2` has a preference of 0. Note that preferences are assigned in a procedural fashion, so preferences assigned later overwrite previous preferences. The same syntax applies for assigning preferences to equations, as demonstrated above. If you want to assign a preference to all variables or equations in a model, use the keywords `variables` or `equations` instead of the individual variable and equation names (e.g., `variables.feaspref 1`).
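Note that, as with all GAMS/Gurobi options, the gurobi.opt file only takes effect when the model's optfile attribute is set; a minimal sketch (the objective variable z is a placeholder):

```
Model m /all/;
m.optfile = 1;
option lp = gurobi;
solve m using lp minimizing z;
```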
The parameter FeasOptMode allows different strategies for finding a feasible relaxation in one or two phases. In its first phase, it attempts to minimize its relaxation of the infeasible model. That is, it attempts to find a feasible solution that requires minimal change. In its second phase, it finds an optimal solution (using the original objective) among those that require only as much relaxation as it found necessary in the first phase. Values of the parameter FeasOptMode indicate two aspects: (1) whether to stop in phase one or continue to phase two, and (2) how to measure the relaxation (as a sum of required relaxations, as the number of constraints and bounds required to be relaxed, or as a sum of the squares of required relaxations). Please check the description of the parameter FeasOptMode for details. Also check the example models feasopt* in the GAMS Model Library.
Parameter Tuning Tool
The Gurobi Optimizer provides a wide variety of parameters that allow you to control the operation of the optimization engines. The level of control varies from extremely coarse-grained (e.g., the Method parameter, which allows you to choose the algorithm used to solve continuous models) to very fine-grained (e.g., the MarkowitzTol parameter, which allows you to adjust the precise tolerances used during simplex basis factorization). While these parameters provide a tremendous amount of user control, the immense space of possible options can present a significant challenge when you are searching for parameter settings that improve performance on a particular model. The purpose of the Gurobi tuning tool is to automate this search.
The Gurobi tuning tool performs multiple solves on your model, choosing different parameter settings for each, in a search for settings that improve runtime. The longer you let it run, the more likely it is to find a significant improvement.
A number of tuning-related parameters allow you to control the operation of the tuning tool. The most important is probably TuneTimeLimit, which controls the amount of time spent searching for an improving parameter set. Other parameters include TuneTrials (which attempts to limit the impact of randomness on the result), TuneResults (which limits the number of results that are returned), and TuneOutput (which controls the amount of output produced by the tool).
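A gurobi.opt sketch that invokes the tuning tool under these parameters; it is assumed here that the tuning option takes the name of a file to which improved settings are written (check the option reference for the exact semantics):

```
$onecho > gurobi.opt
* assumption: the value names the file that receives the tuned settings
tuning tune.opt
tunetimelimit 300
tunetrials 3
tuneoutput 2
$offecho
mymodel.optfile = 1;
solve mymodel using mip minimizing z;
```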
While parameter settings can have a big performance effect for many models, they aren’t going to solve every performance issue. One reason is simply that there are many models for which even the best possible choice of parameter settings won’t produce an acceptable result. Some models are simply too large and/or difficult to solve, while others may have numerical issues that can’t be fixed with parameter changes.
Another limitation of automated tuning is that performance on a model can experience significant variations due to random effects (particularly for MIP models). This is the nature of search. The Gurobi algorithms often have to choose from among multiple, equally appealing alternatives. Seemingly innocuous changes to the model (such as changing the order of the constraints or variables), or subtle changes to the algorithm (such as modifying the random number seed), can lead to different choices. Oftentimes, breaking a single tie in a different way can lead to an entirely different search. We've seen cases where subtle changes in the search produce 100X performance swings. While the tuning tool tries to limit the impact of these effects, the final result will typically still be heavily influenced by such issues.
The bottom line is that automated performance tuning is meant to give suggestions for parameters that could produce consistent, reliable improvements on your models. It is not meant to be a replacement for efficient modeling or careful performance testing.
Compute Server
The Gurobi Compute Server allows you to use one or more servers to offload all of your Gurobi computations.
Gurobi compute servers support queuing and load balancing. You can set a limit on the number of simultaneous jobs each compute server will run. When this limit has been reached, subsequent jobs will be queued. If you have multiple compute servers, the current job load is automatically balanced among the available servers. By default, the Gurobi job queue is serviced in a First-In, First-Out (FIFO) fashion. However, jobs can be given different priorities. Jobs with higher priorities are then selected from the queue before jobs with lower priorities.
Gurobi Compute Server licenses and software are not included in GAMS/Gurobi. Contact Gurobi directly to inquire about the software and license.
Distributed Parallel Algorithms
Gurobi Optimizer implements a number of distributed algorithms that allow you to use multiple machines to solve a problem faster. Available distributed algorithms are:
- A distributed MIP solver, which allows you to divide the work of solving a single MIP model among multiple machines. A manager machine passes problem data to a set of worker machines in order to coordinate the overall solution process.
- A distributed concurrent solver, which allows you to use multiple machines to solve an LP or MIP model. Unlike the distributed MIP solver, the concurrent solver doesn’t divide the work associated with solving the problem among the machines. Instead, each machine uses a different strategy to solve the whole problem, with the hope that one strategy will be particularly effective and will finish much earlier than the others. For some problems, this concurrent approach can be more effective than attempting to divide up the work.
- Distributed parameter tuning, which automatically searches for parameter settings that improve performance on your optimization model. Tuning solves your model with a variety of parameter settings, measuring the performance obtained by each set, and then uses the results to identify the settings that produce the best overall performance. The distributed version of tuning performs these trials on multiple machines, which makes the overall tuning process run much faster.
These distributed parallel algorithms are designed to be almost entirely transparent to the user. The user simply modifies a few parameters, and the work of distributing the computation to multiple machines is handled behind the scenes by Gurobi.
Specifying the Worker Pool
Once you’ve set up a set of one or more distributed workers, you should list at least one of their names in the WorkerPool parameter. You can provide either machine names or IP addresses, and they should be comma-separated.
You can provide the worker access password through the WorkerPassword parameter. All servers in the worker pool must have the same access password.
Requesting Distributed Algorithms
Once you’ve set up the worker pool through the appropriate parameters, the last step to use a distributed algorithm is to set the TuneJobs, ConcurrentJobs, or DistributedMIPJobs parameter. These parameters are used to indicate how many distinct tuning, concurrent, or distributed MIP jobs should be started on the available workers.
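A gurobi.opt sketch that runs distributed MIP on two workers (the machine names, password, and model name are placeholders):

```
$onecho > gurobi.opt
workerpool server1,server2
workerpassword pass
distributedmipjobs 2
$offecho
mymodel.optfile = 1;
solve mymodel using mip minimizing z;
```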
If some of the workers in your worker pool are running at capacity when you launch a distributed algorithm, the algorithm won’t create queued jobs. Instead, it will launch as many jobs as it can (up to the requested value), and it will run with these jobs.
These distributed algorithms have been designed to be nearly indistinguishable from the single-machine versions. Our hope is that, if you know how to use the single-machine version, you'll find it straightforward to use the distributed version. The distributed algorithms respect all of the usual parameters. For distributed MIP, you can adjust strategies, adjust tolerances, set limits, etc. For concurrent MIP, you can allow Gurobi to choose the settings for each machine automatically or specify a set of options. For distributed tuning, you can use the usual tuning parameters, including TuneTimeLimit, TuneTrials, and TuneOutput.
There are a few things to be aware of when using distributed algorithms, though. One relates to relative machine performance. Distributed algorithms work best if all of the workers give very similar performance. For example, if one machine in your worker pool were much slower than the others in a distributed tuning run, any parameter sets tested on the slower machine would appear to be less effective than if they were run on a faster machine. Similar considerations apply for distributed MIP and distributed concurrent. We strongly recommend that you use machines with very similar performance. Note that if your machines have similarly performing cores but different numbers of cores, we suggest that you use the Threads parameter to make sure that all machines use the same number of cores.
Logging for distributed MIP is very similar to the standard MIP logging. The main differences are in the progress section. The header for the standard MIP logging looks like this:
```
    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
```
By contrast, the distributed MIP header looks like this:
```
    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | ParUtil Time
```
Instead of showing iterations per node, the last field in the distributed log shows parallel utilization. Specifically, it shows the fraction of the preceding time period (the time since the previous progress log line) that the workers spent actively processing MIP nodes.
Here is an example of a distributed MIP progress log:
```
    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | ParUtil Time

H    0                            157344.61033        -     -            0s
H    0                             40707.729144       -     -            0s
H    0                             28468.534497       -     -            0s
H    0                             18150.083886       -     -            0s
H    0                             14372.871258       -     -            0s
H    0                             13725.475382       -     -            0s
     0     0 10543.7611    0   19 13725.4754 10543.7611  23.2%   99%     0s
*  266                             12988.468031 10543.7611  18.8%        0s
H 1503                             12464.099984 10630.6187  14.7%        0s
* 2350                             12367.608657 10632.7061  14.0%        1s
* 3360                             12234.641804 10641.4586  13.0%        1s
H 3870                             11801.185729 10641.4586  9.83%        1s

Ramp-up phase complete - continuing with instance 2 (best bd 10661)

  16928  2731 10660.9626   0  12 11801.1857 10660.9635  9.66%   99%     2s
 135654 57117 11226.5449  19  12 11801.1857 11042.3036  6.43%   98%     5s
 388736 135228 11693.0268 23  12 11801.1857 11182.6300  5.24%   96%    10s
 705289 196412    cutoff         11801.1857 11248.8963  4.68%   98%    15s
1065224 232839 11604.6587 28  10 11801.1857 11330.2111  3.99%   98%    20s
1412054 238202 11453.2202 31  12 11801.1857 11389.7119  3.49%   99%    25s
1782362 209060    cutoff         11801.1857 11437.2670  3.08%   97%    30s
2097018 158137 11773.6235 20  11 11801.1857 11476.1690  2.75%   92%    35s
2468495 11516     cutoff         11801.1857 11699.9393  0.86%   78%    40s
2481830     0     cutoff         11801.1857 11801.1857  0.00%   54%    40s
```
One thing you may find in the progress section is that node counts may not increase monotonically. Distributed MIP tries to create a single, unified view of node numbers, but with multiple machines processing nodes independently, possibly at different rates, some inconsistencies are inevitable.
Another difference is the line that indicates that the distributed ramp-up phase is complete. At this point, the distributed strategy transitions from a concurrent approach to a distributed approach. The log line indicates which worker was the winner in the concurrent approach. Distributed MIP continues by dividing the partially explored MIP search tree from this worker among all of the workers.
Another difference in the distributed log is in the summary section. The distributed MIP log includes a breakdown of how runtime was spent:
```
Runtime breakdown:
  Active: 37.85s (93%)
  Sync:    2.43s (6%)
  Comm:    0.34s (1%)
```
This is an aggregated view of the utilization data that is displayed in the progress log lines. In this example, the workers spent 93% of runtime actively working on MIP nodes, 6% waiting to synchronize with other workers, and 1% communicating data between machines.
The installation instructions for the Gurobi Remote Services can be found on Gurobi's web page [www.gurobi.com](http://www.gurobi.com).
Gurobi Instant Cloud
An alternative to setting up your own pool of machines is to use the Gurobi Instant Cloud. You only need a GAMS/Gurobi link license when you solve your problems in the Gurobi Instant Cloud; the cost for the Gurobi license is paid on a per-use basis directly to Gurobi. If you follow the steps on the Gurobi website, you eventually get the names of the machines Gurobi has started for you in the cloud. In order to use these machines from GAMS/Gurobi, you need to provide a Gurobi license with access instructions for the Gurobi Instant Cloud.
Solution Pool
While the default goal of the Gurobi Optimizer is to find one proven optimal solution to your model, with a possible side-effect of finding other solutions along the way, the solver provides a number of parameters that allow you to change this behavior.
By default, the Gurobi MIP solver will try to find one proven optimal solution to your model. It will typically find multiple sub-optimal solutions along the way, which can be retrieved later. However, these solutions aren’t produced in a systematic way. The set of solutions that are found depends on the exact path the solver takes through the MIP search. You could solve a MIP model once, obtaining a set of interesting sub-optimal solutions, and then solve the same problem again with different parameter settings, and find only the optimal solution.
To store (some of the) solutions found along the way, you can enable the Solution Pool feature by setting option solnpool. If you’d like more control over how solutions are found and retained, the Gurobi Optimizer has a number of parameters available for this. The first and simplest is PoolSolutions, which controls the size of the solution pool. Changing this parameter won’t affect the number of solutions that are found – it simply determines how many of those are retained.
You can use the PoolSearchMode parameter to control the approach used to find solutions. In its default setting (0), the MIP search simply aims to find one optimal solution. Setting the parameter to 1 causes the MIP search to expend additional effort to find more solutions, but in a non-systematic way. You will get more solutions, but not necessarily the best solutions. Setting the parameter to 2 causes the MIP to do a systematic search for the n best solutions. For both non-default settings, the PoolSolutions parameter sets the target for the number of solutions to find.
If you are only interested in solutions that are within a certain gap of the best solution found, you can set the PoolGap parameter. Solutions that are not within the specified gap are discarded.
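Putting these together, a gurobi.opt sketch that systematically searches for the 10 best solutions within 10% of the best (the solnpool value is assumed to name the GDX file that receives the alternate solutions; see the option reference):

```
$onecho > gurobi.opt
* assumption: the value names the GDX file for exported solutions
solnpool solnpool.gdx
poolsearchmode 2
poolsolutions 10
poolgap 0.1
$offecho
mymodel.optfile = 1;
solve mymodel using mip minimizing z;
```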
Obtaining an OPTIMAL optimization return status when using PoolSearchMode=2 indicates that the MIP solver succeeded in finding the desired number of best solutions, or proved that the model does not have that many distinct feasible solutions. If the solver terminated early (e.g., due to a time limit), you can use the PoolObjBound attribute (printed to the log) to evaluate the quality of the solutions that were found. This attribute gives a bound on the objective of any solution that is not already in the solution pool; further tree exploration will not find solutions better than this bound. The difference between this attribute and ObjBound is that the latter gives a bound on the objective of any solution, which is often looser than PoolObjBound. You can use PoolObjBound to count how many of the n best solutions you actually found: any solutions whose objective values are at least as good as PoolObjBound are among the n best.
Solution Pool Example
Let’s continue with a few examples of how these parameters would be used. Imagine that you are solving a MIP model with an optimal (minimization) objective of 100. Further imagine that, using default settings, the MIP solver finds four solutions to this model with objectives 100, 110, 120, and 130.
If you set the PoolSolutions parameter to 3 and solve the model again, the MIP solver would discard the worst solution and return with 3 solutions in the solution pool. If you instead set the PoolGap parameter to value 0.2, the MIP solver would discard any solutions whose objective value is worse than 120 (which would also leave 3 solutions in the solution pool).
If you set the PoolSearchMode parameter to 2 and the PoolSolutions parameter to 10, the MIP solver would attempt to find the 10 best solutions to the model. An OPTIMAL return status would indicate that either (i) it found the 10 best solutions, or (ii) it found all feasible solutions to the model, and there were fewer than 10. If you also set the PoolGap parameter to a value of 0.1, the MIP solver would try to find 10 solutions with objective no worse than 110. While this may appear equivalent to asking for 10 solutions and simply ignoring those with objective worse than 110, the solve will typically complete significantly faster with this parameter set, since the solver does not have to expend effort looking for solutions beyond the requested gap.
Solution Pool Subtleties
There are a few subtleties associated with finding multiple solutions that we’ll cover now.
Continuous Variables
One subtlety arises when considering multiple solutions for models with continuous variables. Specifically, you may have two solutions that take identical values on the integer variables but where some continuous variables differ. By choosing different points on the line between these two solutions, you actually have an infinite number of choices for feasible solutions to the problem. To avoid this issue, we define two solutions as being equivalent if they take the same values on all integer variables (and on all continuous variables that participate in SOS constraints). A solution will be discarded if it is equivalent to another solution that is already in the pool.
Optimality Gap
The interplay between the optimality gap (MIPGap or MIPGapAbs) and multiple solutions can be a bit subtle. When using the default PoolSearchMode, a non-zero optimality gap indicates that you are willing to allow the MIP solver to declare a solution optimal, even though the model may have other, better solutions. The claim the solver makes upon termination is that no other solution would improve the incumbent objective by more than the optimality gap. Terminating at this point is ultimately a pragmatic choice – we’d probably rather have the true best solution, but the cost of reducing the optimality gap to zero can often be prohibitive.
This pragmatic choice can produce a bit of confusion when finding multiple optimal solutions. If you ask for the n best solutions, the optimality gap plays a similar role as it does in the default case, but the implications may be a bit harder to understand. Specifically, a non-zero optimality gap means that you are willing to allow the solver to declare that it has found the n best solutions, even though there may be solutions that are better than those that were returned. The claim in this case is that any solution not among the reported n best would improve on the objective for the worst among the n best by less than the optimality gap.
If you want to avoid this source of potential confusion, you should set the optimality gap to 0 when using PoolSearchMode=2.
Logging
If you browse the log from a MIP solve with PoolSearchMode set to a non-default value, you may see the lower bound on the objective exceed the upper bound. This can't happen with the default PoolSearchMode: if you are only looking for one optimal solution, the search is done as soon as the lower bound reaches the upper bound. However, if you are looking for the n best solutions, you have to prove that the model has no solution better than the n-th best. The objective for that n-th solution could be much worse than that of the incumbent. In this situation, the log file will include a line of the form:

```
Optimal solution found at node 123 - now completing solution pool...
```
Distributed MIP
One limitation that we should point out related to multiple solutions is that the distributed MIP solver has not been extended to support non-default PoolSearchMode settings. Distributed MIP will typically produce many more feasible solutions than non-distributed MIP, but there’s no way to ask it to find the n best solutions.
Multiple Objectives
While typical optimization models have a single objective function, real-world optimization problems often have multiple, competing objectives. For example, in a production planning model, you may want to both maximize profits and minimize late orders, or in a workforce scheduling application, you may want to both minimize the number of short-staffed shifts and respect workers' shift preferences.
The main challenge you face when working with multiple, competing objectives is deciding how to manage the tradeoffs between them. Gurobi provides tools that simplify the task: Gurobi allows you to blend multiple objectives, to treat them hierarchically, or to combine the two approaches. In a blended approach, you optimize a weighted combination of the individual objectives. In a hierarchical or lexicographic approach, you set a priority for each objective and optimize in priority order. When optimizing for one objective, you only consider solutions that would not degrade the objective values of higher-priority objectives. Gurobi allows you to enter and manage your objectives, to provide weights for a blended approach, or to set priorities for a hierarchical approach. Gurobi will only solve multi-objective models with strictly linear objectives. Moreover, for continuous models, Gurobi will report a primal-only solution (no dual information).
Following the workforce application, the objectives would be specified as follows:
```
Equations defObj, defNumShifts, defSumPreferences;
Variables obj, numShifts, sumPreferences;

defObj..            obj =e= numShifts - 1/100*sumPreferences;
defNumShifts..      numShifts =e= ...;
defSumPreferences.. sumPreferences =e= ...;

Model workforce /all/;
solve workforce minimizing obj using mip;
```
With the default setting, GUROBI will solve the blended objective. With the parameter MultObj, GUROBI will use a hierarchical approach. A hierarchical or lexicographic approach assigns a priority to each objective and optimizes for the objectives in decreasing priority order. At each step, it finds the best solution for the current objective, but only from among those that would not degrade the solution quality for higher-priority objectives. The priority is specified by the absolute value of the objective coefficient in the blended objective function (`defObj`). In the example, the `numShifts` objective with coefficient 1 has higher priority than the `sumPreferences` objective with absolute objective coefficient 1/100. The sign of the objective coefficient determines the direction of the particular objective function. So here `numShifts` will be minimized (the same direction as on the `solve` statement) while `sumPreferences` will be maximized. GAMS needs to identify the various objective functions; therefore, the objective variables can only appear in the blended objective function and in their particular objective-defining equations.
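A sketch of the matching gurobi.opt; setting multobj to 1 is assumed to switch from the blended to the hierarchical interpretation described above:

```
$onecho > gurobi.opt
multobj 1
$offecho
workforce.optfile = 1;
solve workforce minimizing obj using mip;
```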
By default, the hierarchical approach won't allow later objectives to degrade earlier objectives. This behavior can be relaxed through a pair of attributes: ObjNRelTol and ObjNAbsTol. By setting one of these for a particular objective, you can indicate that later objectives are allowed to degrade this objective by the specified relative or absolute amount, respectively. In our earlier example, if the optimal value for `numShifts` is 100, and if we set ObjNAbsTol for this objective to 20, then the second optimization step maximizing `sumPreferences` would find the best solution for the second objective from among all solutions with objective 120 or better for `numShifts`. Note that if you modify both tolerances, later optimizations would use the looser of the two values (i.e., the one that allows the larger degradation).
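In the option file, such a degradation tolerance would be attached to the objective variable via a dot option (assumed syntax, analogous to .feaspref above):

```
multobj 1
* assumption: allow later objectives to degrade numShifts by up to 20
numShifts.objnabstol 20
```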
Summary of GUROBI Options
Termination options
Option | Description | Default |
---|---|---|
bariterlimit | Barrier iteration limit | infinity |
bestbdstop | Best objective bound to stop | maxdouble |
bestobjstop | Best objective value to stop | mindouble |
cutoff | Objective cutoff | maxdouble |
iterationlimit | Simplex iteration limit | infinity |
memlimit | Memory limit | maxdouble |
nodelimit | MIP node limit | maxdouble |
softmemlimit | Soft memory limit | maxdouble |
solutionlimit | MIP feasible solution limit | maxint |
timelimit | Time limit | GAMS reslim |
worklimit | Work limit | maxdouble |
Tolerances options
Option | Description | Default |
---|---|---|
barconvtol | Barrier convergence tolerance | 1e-08 |
barqcpconvtol | Barrier QCP convergence tolerance | 1e-06 |
feasibilitytol | Primal feasibility tolerance | 1e-06 |
intfeastol | Integer feasibility tolerance | 1e-05 |
markowitztol | Threshold pivoting tolerance | 0.0078125 |
mipgap | Relative MIP optimality gap | GAMS optcr |
mipgapabs | Absolute MIP optimality gap | GAMS optca |
optimalitytol | Dual feasibility tolerance | 1e-06 |
psdtol | Positive semi-definite tolerance | 1e-06 |
Simplex options
Option | Description | Default |
---|---|---|
lpwarmstart | Warm start usage in simplex | 1 |
networkalg | Network simplex algorithm | -1 |
normadjust | Simplex pricing norm | -1 |
perturbvalue | Simplex perturbation magnitude | 0.0002 |
quad | Quad precision computation in simplex | -1 |
sifting | Sifting within dual simplex | -1 |
siftmethod | LP method used to solve sifting sub-problems | -1 |
simplexpricing | Simplex variable pricing strategy | -1 |
Barrier options
Option | Description | Default |
---|---|---|
barcorrectors | Central correction limit | -1 |
barhomogeneous | Barrier homogeneous algorithm | -1 |
barorder | Barrier ordering algorithm | -1 |
crossover | Barrier crossover strategy | -1 |
crossoverbasis | Crossover initial basis construction strategy | -1 |
qcpdual | Compute dual variables for QCP models | 1 |
Scaling options
Option | Description | Default |
---|---|---|
objscale | Objective scaling | 0 |
scaleflag | Model scaling | -1 |
MIP options
Option | Description | Default |
---|---|---|
branchdir | Branch direction preference | 0 |
concurrentjobs | Enables distributed concurrent solver | 0 |
concurrentmip | Enables concurrent MIP solver | 1 |
degenmoves | Degenerate simplex moves | -1 |
disconnected | Disconnected component strategy | -1 |
distributedmipjobs | Enables the distributed MIP solver | 0 |
dumpbcsol | Dump incumbents to GDX files during branch-and-cut | |
fixoptfile | Option file for fixed problem optimization | |
heuristics | Turn MIP heuristics up or down | 0.05 |
improvestartgap | Trigger solution improvement | 0 |
improvestartnodes | Trigger solution improvement | maxdouble |
improvestarttime | Trigger solution improvement | maxdouble |
.lazy | Lazy constraints value | 0 |
lazyconstraints | Indicator to use lazy constraints | 0 |
minrelnodes | Minimum relaxation heuristic control | -1 |
mipfocus | Set the focus of the MIP solver | 0 |
mipstart | Use mip starting values | 0 |
mipstopexpr | Stop expression for branch and bound | |
miqcpmethod | Method used to solve MIQCP models | -1 |
multimipstart | Use multiple (partial) mipstarts provided via gdx files | |
nlpheur | Controls the NLP heuristic for non-convex quadratic models | 1 |
nodefiledir | Directory for MIP node files | . |
nodefilestart | Memory threshold for writing MIP tree nodes to disk | maxdouble |
nodemethod | Method used to solve MIP node relaxations | -1 |
nonconvex | Control how to deal with non-convex quadratic programs | -1 |
norelheurtime | Limits the amount of time (in seconds) spent in the NoRel heuristic | 0 |
norelheurwork | Limits the amount of work performed by the NoRel heuristic | 0 |
obbt | Controls aggressiveness of Optimality-Based Bound Tightening | -1 |
.partition | Variable partition value | 0 |
partitionplace | Controls when the partition heuristic runs | 15 |
.prior | Branching priorities | 1 |
pumppasses | Feasibility pump heuristic control | -1 |
rins | RINS heuristic | -1 |
solfiles | Location to store intermediate solution files | |
solnpool | Controls export of alternate MIP solutions | |
solnpoolmerge | Controls export of alternate MIP solutions for merged GDX solution file | |
solnpoolnumsym | Maximum number of variable symbols when writing merged GDX solution file | 10 |
solnpoolprefix | First dimension of variables for merged GDX solution file or file name prefix for GDX solution files | soln |
solvefixed | Indicator for solving the fixed problem for a MIP to get a dual solution | 1 |
startnodelimit | Node limit for MIP start sub-MIP | -1 |
submipnodes | Nodes explored by sub-MIP heuristics | 500 |
symmetry | Symmetry detection | -1 |
varbranch | Branch variable selection strategy | -1 |
zeroobjnodes | Zero objective heuristic control | -1 |
Presolve options
Option | Description | Default |
---|---|---|
aggfill | Allowed fill during presolve aggregation | -1 |
aggregate | Presolve aggregation control | 1 |
dualreductions | Disables dual reductions in presolve | 1 |
precrush | Allows presolve to translate constraints on the original model to equivalent constraints on the presolved model | 0 |
predeprow | Presolve dependent row reduction | -1 |
predual | Presolve dualization | -1 |
premiqcpform | Format of presolved MIQCP model | -1 |
prepasses | Presolve pass limit | -1 |
preqlinearize | Presolve Q matrix linearization | -1 |
presolve | Presolve level | -1 |
presos1bigm | Controls largest coefficient in SOS1 reformulation | -1 |
presos1encoding | Controls SOS1 reformulation | -1 |
presos2bigm | Controls largest coefficient in SOS2 reformulation | -1 |
presos2encoding | Controls SOS2 reformulation | -1 |
presparsify | Presolve sparsify reduction | -1 |
Tuning options
Option | Description | Default |
---|---|---|
tunecleanup | Enables a tuning cleanup phase | 0 |
tunecriterion | Specify tuning criterion | -1 |
tunejobs | Enables distributed tuning | 0 |
tunemetric | Metric to aggregate results into a single measure | -1 |
tuneoutput | Tuning output level | 2 |
tuneresults | Number of improved parameter sets returned | -1 |
tunetargetmipgap | A target gap to be reached | 0 |
tunetargettime | A target runtime in seconds to be reached | 0.005 |
tunetimelimit | Time limit for tuning | -1 |
tunetrials | Perform multiple runs on each parameter set to limit the effect of random noise | 0 |
Multiple Solutions options
Option | Description | Default |
---|---|---|
poolgap | Relative gap for solutions in pool | maxdouble |
poolgapabs | Absolute gap for solutions in pool | maxdouble |
poolsearchmode | Choose the approach used to find additional solutions | 0 |
poolsolutions | Number of solutions to keep in pool | 10 |
MIP Cuts options
Option | Description | Default |
---|---|---|
bqpcuts | BQP cut generation | -1 |
cliquecuts | Clique cut generation | -1 |
covercuts | Cover cut generation | -1 |
cutaggpasses | Constraint aggregation passes performed during cut generation | -1 |
cutpasses | Root cutting plane pass limit | -1 |
cuts | Global cut generation control | -1 |
flowcovercuts | Flow cover cut generation | -1 |
flowpathcuts | Flow path cut generation | -1 |
gomorypasses | Root Gomory cut pass limit | -1 |
gubcovercuts | GUB cover cut generation | -1 |
impliedcuts | Implied bound cut generation | -1 |
infproofcuts | Infeasibility proof cut generation | -1 |
liftprojectcuts | Lift-and-project cut generation | -1 |
mipsepcuts | MIP separation cut generation | -1 |
mircuts | MIR cut generation | -1 |
modkcuts | Mod-k cut generation | -1 |
networkcuts | Network cut generation | -1 |
projimpliedcuts | Projected implied bound cut generation | -1 |
psdcuts | PSD cut generation | -1 |
relaxliftcuts | Relax-and-lift cut generation | -1 |
rltcuts | RLT cut generation | -1 |
strongcgcuts | Strong-CG cut generation | -1 |
submipcuts | Sub-MIP cut generation | -1 |
zerohalfcuts | Zero-half cut generation | -1 |
Distributed algorithms options
Option | Description | Default |
---|---|---|
workerpassword | Password for distributed worker cluster | |
workerpool | Distributed worker cluster |
Other options
Option | Description | Default |
---|---|---|
displayinterval | Frequency at which log lines are printed | 5 |
.dofuncpieceerror | Error allowed for PWL translation of function constraints | 1e-3 |
.dofuncpiecelength | Piece length for PWL translation of function constraints | 1e-2 |
.dofuncpieceratio | Control whether to under- or over-estimate function values in PWL approximation | -1 |
.dofuncpieces | Sets strategy for PWL function approximation | 0 |
feasopt | Computes a minimum-cost relaxation to make an infeasible model feasible | 0 |
feasoptmode | Mode of FeasOpt | 0 |
.feaspref | Feasibility preference | 1 |
feasrelaxbigm | Big-M value for feasibility relaxations | 1e+06 |
freegamsmodel | Preserves memory by dumping the GAMS model instance representation temporarily to disk | 0 |
funcmaxval | Maximum value for x and y variables in function constraints | 1e+06 |
funcpieceerror | Error allowed for PWL translation of function constraint | 0.001 |
funcpiecelength | Piece length for PWL translation of function constraint | 0.01 |
funcpieceratio | Controls whether to under- or over-estimate function values in PWL approximation | -1 |
funcpieces | Sets strategy for PWL function approximation | 0 |
iis | Run the Irreducible Inconsistent Subsystem (IIS) finder if the problem is infeasible | 0 |
iismethod | IIS method | -1 |
integralityfocus | Set the integrality focus | 0 |
kappa | Display approximate condition number estimates for the optimal simplex basis | 0 |
kappaexact | Display exact condition number estimates for the optimal simplex basis | 0 |
method | Algorithm used to solve continuous models | -1 |
miptrace | Filename of MIP trace file | |
miptracenode | Node interval when a trace record is written | 100 |
miptracetime | Time interval when a trace record is written | 1 |
multiobjmethod | Warm-start method to solve for subsequent objectives | -1 |
multiobjpre | Initial presolve on multi-objective models | -1 |
multobj | Controls the hierarchical optimization of multiple objectives | 0 |
names | Indicator for loading names | 1 |
numericfocus | Set the numerical focus | 0 |
objnabstol | Allowable absolute degradation for objective | |
objnreltol | Allowable relative degradation for objective | |
printoptions | List values of all options to GAMS listing file | 0 |
qextractalg | Quadratic extraction algorithm in GAMS interface | 0 |
readparams | Read Gurobi parameter file | |
rerun | Resolve without presolve in case of unbounded or infeasible | -1 |
rngrestart | Write GAMS readable ranging information file | |
seed | Modify the random number seed | 0 |
sensitivity | Provide sensitivity information | 0 |
threads | Number of parallel threads to use | GAMS threads |
tuning | Parameter tuning | |
usebasis | Use basis from GAMS | GAMS bratio |
varhint | Guide heuristics and branching through variable hints | 0 |
writeparams | Write Gurobi parameter file | |
writeprob | Save the problem instance |