# Master’s Research

I worked under the supervision of Dr. Stephen Wright to complete a research project for my Master of Science at Miami University.

Our goal was to find efficient ways to solve the nonlinear resource allocation problem (P):

$\begin{array}{cll} \min &\sum_{i=1}^n f_i(x_i) &\\[0.5em] {\rm s.t.} &\sum_{i=1}^n g_i(x_i) \leq c, & c \in \mathbb{R} \\[0.5em] & \ell_i \leq x_i \leq u_i, & i = 1,2,\ldots, n \end{array}$
where $$f, g \colon \mathbb{R}^n \to \mathbb{R}$$ are separable, convex, and differentiable functions, i.e., $$f(x) = \sum_{i=1}^n f_i(x_i)$$ with each $$f_i \colon \mathbb{R} \to \mathbb{R}$$ (and similarly for $$g$$). We assume without loss of generality that $$f_i$$ and $$g_i$$ are monotone. In particular, we studied three specific real-world applications: stratified sampling, the quadratic knapsack problem, and manufacturing capacity planning. This work was motivated by Dr. Wright's previous experience with stratified sampling.

The nonlinear resource allocation problem has been studied extensively and can be approached using several different methods. Lagrange multiplier methods rely heavily on the Karush-Kuhn-Tucker (KKT) conditions and require maximizing a concave piecewise-linear function $$q$$ of one variable. A simple way to find the maximum is an iterative method such as bisection search. Due to the relatively nice structure of (P), we may also use more specialized algorithms. For example, the optimality conditions and the tangent criterion allow us to identify breakpoints (locations where $$q' = 0$$ or $$q'$$ is undefined). By searching these breakpoints we can find approximately where $$q' = 0$$ and then interpolate to find the location of the maximum.
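As a rough illustration of the bisection idea (a sketch only, not the thesis code, and in Python rather than the Matlab we actually used), consider a quadratic-knapsack instance of (P) with $$f_i(x_i) = \tfrac{1}{2}x_i^2 - a_i x_i$$ and $$g_i(x_i) = x_i$$. For a given multiplier $$\lambda$$, the Lagrangian minimizer is the clipped value $$x_i(\lambda) = \mathrm{clip}(a_i - \lambda, \ell_i, u_i)$$, and we bisect on $$\lambda$$ until the resource constraint holds with equality:

```python
import numpy as np

def quad_knapsack_bisection(a, l, u, c, tol=1e-12, max_iter=100):
    """Bisection on the Lagrange multiplier for the continuous quadratic
    knapsack:  min 0.5*||x||^2 - a.x  s.t. sum(x) <= c, l <= x <= u.
    (A hypothetical instance of (P); assumes sum(l) <= c for feasibility.)"""
    x_of = lambda lam: np.clip(a - lam, l, u)   # componentwise Lagrangian minimizer
    if x_of(0.0).sum() <= c:                    # constraint inactive at lam = 0
        return x_of(0.0), 0.0
    lo, hi = 0.0, float(np.max(a - l))          # at hi, every x_i sits at l_i
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if x_of(mid).sum() > c:                 # multiplier still too small
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    lam = 0.5 * (lo + hi)
    return x_of(lam), lam
```

The breakpoints of $$q'$$ in this instance are exactly the values $$a_i - \ell_i$$ and $$a_i - u_i$$; a specialized method would search those directly instead of bisecting blindly.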

An alternative approach to solving (P) is using a variable fixing (or pegging) method. A pegging algorithm fixes an active variable to its upper (or lower) bound at each iteration and then solves the smaller subproblem. Since each iteration fixes at least one variable, the complexity of this class of algorithms is at most $$O(n^2)$$. We successfully implemented an algorithm (PEG) from Bretthauer & Shetty (2002) in Matlab and improved it using work from Kiwiel (2008).
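A minimal pegging sketch on the same hypothetical quadratic-knapsack instance (simplified for illustration; not the exact Bretthauer & Shetty routine, and with an equality resource constraint for concreteness): solve the bound-free subproblem in closed form, peg the variables whose relaxed values violate a bound on the side indicated by the constraint residual, and recurse on the smaller subproblem.

```python
import numpy as np

def quad_knapsack_pegging(a, l, u, c):
    """Variable-fixing (pegging) sketch for
    min 0.5*||x||^2 - a.x  s.t. sum(x) = c, l <= x <= u."""
    n = len(a)
    x = np.empty(n)
    free = np.ones(n, dtype=bool)   # variables not yet pegged
    rhs = c                          # remaining resource for the free variables
    while free.any():
        # Closed-form solution of the relaxed (bound-free) subproblem.
        lam = (a[free].sum() - rhs) / free.sum()
        xf = a[free] - lam
        lo_viol = xf < l[free]
        hi_viol = xf > u[free]
        if not lo_viol.any() and not hi_viol.any():
            x[free] = xf             # relaxed solution is feasible: done
            break
        clipped = np.clip(xf, l[free], u[free]).sum()
        idx = np.where(free)[0]
        if clipped >= rhs:           # need a larger multiplier: lower-bound
            peg = idx[lo_viol]       # violators stay pegged at l
            x[peg] = l[peg]
        else:                        # need a smaller multiplier: upper-bound
            peg = idx[hi_viol]       # violators stay pegged at u
            x[peg] = u[peg]
        rhs -= x[peg].sum()          # shrink the subproblem and repeat
        free[peg] = False
    return x
```

Each pass pegs at least one variable, which is the source of the $$O(n^2)$$ worst-case bound mentioned above.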

The new work we accomplished in this project relied on interior point methods (IPMs). An IPM travels through the interior of the feasible region to find a solution; a barrier or penalty function prevents the algorithm from approaching the boundary. The methods we used were primal-dual path-following algorithms, in which the iterates follow an arc of strictly feasible points. By perturbing the complementary slackness conditions of the KKT system, we can identify the central path (the arc of strictly feasible points along which the complementarity products have the same value for all indices). The general framework involves solving a Newton system at each iteration, which can be done extremely efficiently by exploiting the structure of problem (P). We implemented two interior point algorithms: the first was a linearly convergent predictor-corrector method, and the second was a quadratically convergent method (IPMQ) based on work by Sun & Zhao (2000).
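To make the perturbed-complementarity idea concrete, here is a toy path-following sketch (a hypothetical simplified model with bound constraints only, not the algorithms from the thesis). For $$\min \tfrac{1}{2}\|x - a\|^2$$ s.t. $$x \geq 0$$, the perturbed KKT system is $$x - a - z = 0$$, $$x_i z_i = \mu$$ with $$x, z > 0$$; separability makes the Newton system diagonal, so each step costs $$O(n)$$:

```python
import numpy as np

def ipm_nonneg_quadratic(a, mu0=1.0, sigma=0.2, tol=1e-9, max_iter=100):
    """Toy primal-dual path-following sketch for
    min 0.5*||x - a||^2  s.t. x >= 0  (illustrative only)."""
    n = len(a)
    x = np.ones(n)
    z = np.ones(n)                # dual variables for x >= 0
    mu = mu0
    for _ in range(max_iter):
        r1 = x - a - z            # dual-feasibility residual
        r2 = x * z - mu           # perturbed complementarity residual
        if max(np.abs(r1).max(), (x * z).max()) < tol and mu < tol:
            break
        # Separable structure -> diagonal Newton system, solved in O(n).
        dx = -(r2 + x * r1) / (x + z)
        dz = dx + r1
        # Damped step keeps the iterates strictly feasible (x, z > 0).
        alpha = 1.0
        neg = dx < 0
        if neg.any():
            alpha = min(alpha, 0.9995 * float(np.min(-x[neg] / dx[neg])))
        neg = dz < 0
        if neg.any():
            alpha = min(alpha, 0.9995 * float(np.min(-z[neg] / dz[neg])))
        x += alpha * dx
        z += alpha * dz
        mu *= sigma               # follow the central path as mu -> 0
    return x, z
```

The singly-constrained problem (P) adds a rank-one coupling to this diagonal system, which can still be solved in $$O(n)$$ per iteration; that is the structure exploited above.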

Since pegging methods worked so well on problem (P), a natural extension of this project was to study variable indicators. An indicator is a function that identifies whether a variable is active or inactive. By identifying the active variables early in an iterative procedure such as a pegging method or an IPM, we may reduce the computational cost. Among the many indicator functions available, we used the Tapia indicators (1994), which are built from the quotients of successive Lagrange multipliers and of successive slack variables. To use the indicators, we require that the iterates of the algorithm converge superlinearly to a solution. We therefore combined the Tapia indicators with (IPMQ), using (PEG) as a subroutine, to obtain an algorithm we called (IPMQ-PEG).
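A small sketch of how such quotients can be read (the threshold and the helper name are illustrative choices, not taken from the Tapia paper): for a superlinearly convergent primal-dual sequence, the quotient $$x_i^{k+1}/x_i^k$$ tends to 0 when $$x_i^* = 0$$ (the variable is active at its bound) and to 1 when $$x_i^* > 0$$, while the dual quotient $$z_i^{k+1}/z_i^k$$ behaves in the opposite way.

```python
import numpy as np

def tapia_active_set(x_prev, x_curr, z_prev, z_curr, thresh=0.5):
    """Classify variables as active/inactive from two successive strictly
    positive primal-dual iterates (thresh is an illustrative cutoff)."""
    primal_ratio = x_curr / x_prev   # -> 0 if x_i* = 0, -> 1 otherwise
    dual_ratio = z_curr / z_prev     # -> 0 if z_i* = 0, -> 1 otherwise
    # Predict "active" when the primal quotient is small and the dual
    # quotient is near one.
    return (primal_ratio < thresh) & (dual_ratio > thresh)
```

In (IPMQ-PEG), predictions like these let the variables flagged as active be handed off to the pegging subroutine before the IPM has fully converged.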

Our numerical tests showed that (IPMQ-PEG) was competitive on problem (P) for the three applications we considered but was outperformed by (PEG). Since a closed-form solution exists for the subproblems solved in (PEG), we believe our algorithm would perform as well as or better than (PEG) on classes of problems where a numerical routine is needed to solve the subproblem.