Description

This dissertation describes OPOS, a C++ software library and framework for developing massively parallel continuous optimization software. We show that classical iterative optimization algorithms, such as gradient projection and augmented Lagrangian methods, can be parallelized with OPOS to run efficiently on distributed-memory machines.
In Chapter 1 we provide some background on general optimization software and algorithms, as well as parallel software for LASSO and stochastic programming problems.
Chapter 2 introduces OPOS's software development methodology. We begin by describing a set of optimization-domain-specific C++ classes and routines that form the building blocks of OPOS. The main goal of these classes and routines is to let the user write efficient, reusable, maintainable, and readily parallelizable optimization algorithms. OPOS enables the optimization software developer to build optimization algorithm classes that are independent of both the problem structure and the program's desired mode of execution.
Chapter 3 discusses a spectral projected gradient algorithm by Birgin and Martínez and its implementation, OPSPG. We first review the optimization algorithm and OPSPG's code. Next, we describe an application to the LASSO problem and a novel data distribution technique that achieves an even load balance, followed by implementation details of the objective function and gradient evaluations under this data distribution. We close the chapter by presenting computational results.
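For concreteness, the LASSO problem takes the standard form (a textbook formulation, stated here for reference rather than quoted from the text):

```latex
\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\,\|Ax - b\|_2^2 + \lambda \|x\|_1 ,
```

and a common way to make it amenable to a projected gradient method is the standard variable splitting $x = u - v$ with $u, v \ge 0$, which replaces the nondifferentiable $\ell_1$ term by a linear term over simple bound constraints:

```latex
\min_{u \ge 0,\; v \ge 0} \; \tfrac{1}{2}\,\|A(u - v) - b\|_2^2
  + \lambda\, \mathbf{1}^{\top}(u + v) .
```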
Chapter 4 introduces the basic theory behind augmented Lagrangian algorithms and a specific version, ALGENCAN, also developed by Birgin and Martínez. We then discuss the building blocks of our object-parallel augmented Lagrangian software OPAL, which is based on ALGENCAN. OPAL is applied to solving linear stochastic programming problems. We describe a scenario-based data distribution technique using PySP, a Python-based modeling package for stochastic programs, followed by implementation details of the objective function, constraint, and gradient evaluations under this data distribution. At the end of the chapter, we present our computational results.
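As background, for a problem $\min f(x)$ subject to equality constraints $h(x) = 0$ and inequality constraints $g(x) \le 0$, the Powell–Hestenes–Rockafellar augmented Lagrangian minimized at each outer iteration of ALGENCAN can be written as (a standard statement of the method, not quoted from the text):

```latex
L_{\rho}(x, \lambda, \mu) \;=\; f(x)
  + \frac{\rho}{2} \left[
      \sum_{i} \left( h_i(x) + \frac{\lambda_i}{\rho} \right)^{2}
    + \sum_{j} \max\!\left( 0,\; g_j(x) + \frac{\mu_j}{\rho} \right)^{2}
  \right],
```

where $\lambda$ and $\mu \ge 0$ are multiplier estimates updated between outer iterations and $\rho > 0$ is the penalty parameter.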
Chapter 5 summarizes the findings of our work and discusses future research opportunities for both LASSO and stochastic programming problems.