
`coneprog` Algorithms

This example shows the solution times for `coneprog` with various problem sizes and all algorithms of the `LinearSolver` option. The problem is to find the distance from a point to an ellipsoid, where the point is in `n` dimensions and the ellipsoid is represented by a cone constraint with `m` rows for the constraint matrix. Choose `(n,m) = i*(100,20)` for `i` from 1 to 10. The `define_problem` helper function at the end of this example creates the problem for specified values of `m`, `n`, and the seed for the random number generator. The function creates pseudorandom cones with 10 entries of 1 in each matrix row and at least two entries of 1 in each column, and ensures that the first matrix column is a (dense) column of 1s.
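In optimization terms, the problem that the `define_problem` helper function constructs (as described in its comments) is the second-order cone program

```latex
\begin{aligned}
\min_{x,\,t} \quad & t \\
\text{subject to} \quad & \lVert Ax - b \rVert \le \gamma \\
& \lVert x - \bar{x} \rVert \le t ,
\end{aligned}
```

where \(\bar{x}\) is a given point outside the ellipsoid \(\lVert Ax - b \rVert \le \gamma\). At the solution, \(t\) is the distance from \(\bar{x}\) to the ellipsoid.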

Set the parameters for the problem generation function.

```matlab
n = 100;
m = 20;
seed = 0;
```

Set the experiment to run for ten problem sizes.

```matlab
numExper = 10;
```

Create the complete list of `LinearSolver` option values.

```matlab
linearSolvers = {'auto','augmented','normal','schur','prodchol'};
```

For this data, the `'auto'` setting causes `coneprog` to use the `'prodchol'` linear solver, so you obtain essentially the same results for those two values.

Create structures to hold the resulting timing data and the number of iterations each run takes.

```matlab
time = struct();
s = " ";
time.probsize = repmat(s,numExper,1);
% Initialize time struct to zeros.
for solver_i = linearSolvers
    time.(solver_i{1}) = zeros(numExper, 1);
end

iter = struct();
iter.probsize = repmat(s,numExper,1);
for solver_i = linearSolvers
    iter.(solver_i{1}) = zeros(numExper, 1);
end
```

To obtain meaningful timing comparisons, run `solve` (which calls `coneprog`) a few times without timing the results. This "warm-up" prepares the solver to use data efficiently and prepopulates the internal just-in-time compiler.

```matlab
[prob, x0] = define_problem(m, n, seed);
options = optimoptions('coneprog','Display','off');
for i = 1 : 4
    sol = solve(prob,x0,'options',options);
end
```

Run the solver on all the problems while recording the solution times and the number of iterations the solver takes.

```matlab
for i = 1:numExper
    % Generate problems of increasing size.
    [prob, x0] = define_problem(m*i, n*i, seed);
    time.probsize(i) = num2str(m*i)+"x"+num2str(n*i);
    iter.probsize(i) = num2str(m*i)+"x"+num2str(n*i);
    % Solve the generated problem for each algorithm and measure time.
    for solver_i = linearSolvers
        options.LinearSolver = solver_i{1};
        tic
        [~,~,~,output] = solve(prob,x0,'options',options);
        time.(solver_i{1})(i) = toc;
        iter.(solver_i{1})(i) = output.iterations;
    end
end
```

Display the timing results. The `probsize` column indicates the problem size as `"m x n"`, where `m` is the number of cone constraints and `n` is the number of variables.

```matlab
timetable = struct2table(time)
```

```
timetable =

  10×6 table

     probsize       auto      augmented    normal      schur      prodchol
    __________    ________    _________    ________    ________    ________

    "20x100"      0.020335     0.042185    0.022258    0.018266    0.019167
    "40x200"      0.028701      0.21417    0.063392     0.01956    0.030663
    "60x300"      0.026849      0.38047     0.11627     0.02042    0.027778
    "80x400"      0.032513      0.65735     0.23975    0.023377    0.034159
    "100x500"     0.040358       1.2081     0.42095    0.026024    0.038788
    "120x600"     0.089219       2.8035     0.92355    0.033922      0.0909
    "140x700"     0.098881       7.4664      2.1049    0.046021     0.10043
    "160x800"      0.11053       8.7302       2.908    0.054712     0.11306
    "180x900"      0.11439       10.485      3.5668    0.056406     0.11708
    "200x1000"    0.099195       6.7833      3.6698    0.053792    0.097791
```

The shortest times appear in the `auto`, `schur`, and `prodchol` columns. The `auto` and `prodchol` algorithms are identical for these problems, so any timing differences are due to random effects. The longest times appear in the `augmented` column, while the `normal` column times are intermediate.

Are the differences in the timing results due to differences in speed for each iteration or due to the number of iterations for each solver? Display the corresponding table of iteration counts.

```matlab
itertable = struct2table(iter)
```

```
itertable =

  10×6 table

     probsize     auto    augmented    normal    schur    prodchol
    __________    ____    _________    ______    _____    ________

    "20x100"        8         8           8        8          8
    "40x200"       11        11          11       11         11
    "60x300"        8         8           8        8          8
    "80x400"        8         8           8        8          8
    "100x500"       8         8           8        8          8
    "120x600"      19        11          11       11         19
    "140x700"      17        18          17       15         17
    "160x800"      16        16          16       16         16
    "180x900"      14        14          14       13         14
    "200x1000"     10        10          10       10         10
```

For this problem, the number of iterations is not clearly correlated to the problem size, which is a typical characteristic of the interior-point algorithm used by `coneprog`. The number of iterations is nearly the same in each row for all algorithms. The `schur` and `prodchol` iterations are faster for this problem than those of the other algorithms.
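One way to separate per-iteration speed from iteration count is to divide the recorded times by the iteration counts. This is a sketch using the `time`, `iter`, and `linearSolvers` variables defined above:

```matlab
% Average time per iteration for each linear solver.
for solver_i = linearSolvers
    name = solver_i{1};
    avgPerIter = mean(time.(name) ./ iter.(name));
    fprintf('%-10s %.4f seconds per iteration\n', name, avgPerIter);
end
```

Solvers with similar iteration counts but smaller per-iteration averages are faster overall, which is the pattern the tables show for `schur` and `prodchol`.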

The following code creates the `define_problem` helper function.

```matlab
function [prob, x0] = define_problem(m,n,seed)
%%% Generate the following optimization problem
%%%
%%% min t
%%% s.t.
%%%     || Ax - b ||   <= gamma
%%%     || x - xbar || <= t
%%%
%%% which finds the closest point of a given ellipsoid (||Ax-b|| <= gamma)
%%% to a given input point xbar.
%%%

rng(seed);

% The targeted total number of nonzeros in matrix A is
% 10 nonzeros in each row
% plus 2 nonzeros in each column
% plus a dense first column.
numNonZeros = 10*m + 2*n + m;
A = spalloc(m,n,numNonZeros);

% For each row generate 10 nonzeros.
for i = 1:m
    p = randperm(n,10);
    A(i,p) = 1;
end

% For each column generate 2 nonzeros.
for j = 2:n
    p = randperm(m,2);
    A(p,j) = 1;
end

% The first column is dense.
A(:,1) = 1;

b = ones(m, 1);
gamma = 10;

% Find a point outside of the ellipsoid.
xbar = randi([-10,10],n,1);
while norm(A*xbar - b) <= gamma
    xbar = xbar + randi([-10,10],n,1);
end

% Define the cone problem.
prob = optimproblem('Description', 'Minimize Distance to Ellipsoid');
x = optimvar('x',n);
t = optimvar('t');

prob.Objective = t;
prob.Constraints.soc1 = norm(x - xbar) <= t;
prob.Constraints.soc2 = norm(A*x - b) <= gamma;

x0.x = sparse(n,1);
x0.t = 0;
end
```