# Stochastic optimization

Tuesday, August 9, 2016 - 11:00am - 12:30pm

Jeff Linderoth (University of Wisconsin, Madison)

Continuing the first lecture, we will introduce advanced features that improve the performance of Benders-based decomposition algorithms. Aggregating scenarios and regularization approaches will be a primary focus. We will also introduce a different dual decomposition technique that can be effective for solving two-stage stochastic programs, and discuss algorithmic approaches for solving the resulting dual problem.
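
The dual decomposition idea can be sketched on a toy problem: make a copy of the first-stage variable for each scenario, relax the nonanticipativity constraint that the copies agree, and improve the multipliers with a subgradient step. The newsvendor-style instance and all cost data below are made up for illustration; a real implementation would solve the scenario subproblems with an optimization solver rather than by enumerating breakpoints.

```python
# Toy dual decomposition (progressive-hedging-style multiplier update).
# First stage: order x at unit cost c; recourse: penalty q per unit of
# unmet demand d_s. Copies x_s of x are made per scenario, the coupling
# constraint x_s = xbar is relaxed with multipliers w_s, and each
# scenario subproblem is then solved independently.

c, q, U = 1.0, 4.0, 15.0                            # made-up cost data
scenarios = [(0.3, 5.0), (0.5, 10.0), (0.2, 14.0)]  # (probability, demand)

def scenario_cost(x, d):
    return c * x + q * max(d - x, 0.0)

def solve_subproblem(d, w):
    """min_x scenario_cost(x, d) + w*x over [0, U]; the objective is
    piecewise linear and convex, so a breakpoint is optimal."""
    return min((scenario_cost(x, d) + w * x, x) for x in (0.0, d, U))[1]

w = [0.0] * len(scenarios)        # multipliers; weighted sum stays zero
xbar = 0.0
for k in range(200):
    xs = [solve_subproblem(d, w[s]) for s, (p, d) in enumerate(scenarios)]
    xbar = sum(p * x for (p, _), x in zip(scenarios, xs))
    step = 1.0 / (k + 1)          # diminishing subgradient step size
    for s, (p, d) in enumerate(scenarios):
        w[s] += step * (xs[s] - xbar)   # subgradient of the dual function

print(round(xbar, 2))             # implementable first-stage decision
```

Where Benders works with the first-stage variable directly, this decomposition splits by scenario, which is why the subproblems separate completely and can be solved in parallel.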

Monday, August 8, 2016 - 9:00am - 10:30am

Jeff Linderoth (University of Wisconsin, Madison)

This lecture gives an introduction to modeling optimization problems where parameters of the problem are uncertain. The primary focus will be on the case when the uncertain parameters are modeled as random variables. We will introduce both two-stage, recourse-based stochastic programming and chance-constrained approaches. Statistics that measure the value of computing a solution to the stochastic problem will be introduced. We will show how to create equivalent extensive-form formulations of the instances.
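
As a small concrete illustration (with made-up newsvendor data), the extensive form of a two-stage problem is just the first-stage cost plus the probability-weighted recourse cost summed over scenarios, and the classic statistics of the kind mentioned above, the expected value of perfect information (EVPI) and the value of the stochastic solution (VSS), can then be computed by solving a few related problems:

```python
# Extensive form of a toy two-stage newsvendor problem, plus EVPI and VSS.
# All cost data are made-up illustrative numbers.

c, q = 1.0, 4.0                                      # order cost, shortage penalty
scenarios = [(0.3, 5.0), (0.5, 10.0), (0.2, 14.0)]   # (probability, demand)

def expected_cost(x):
    """Extensive-form objective: first-stage cost plus the probability-
    weighted recourse cost summed over all scenarios."""
    return c * x + sum(p * q * max(d - x, 0.0) for p, d in scenarios)

# The objective is convex piecewise linear with breakpoints at the demands,
# so it suffices to check those candidates.
candidates = [0.0] + [d for _, d in scenarios]
x_sp = min(candidates, key=expected_cost)        # stochastic-programming solution

# Mean-value problem: replace demand by its mean (order exactly E[d] here).
x_ev = sum(p * d for p, d in scenarios)
vss = expected_cost(x_ev) - expected_cost(x_sp)  # value of the stochastic solution

# Wait-and-see: solve each scenario with full knowledge (order exactly d).
ws = sum(p * (c * d) for p, d in scenarios)
evpi = expected_cost(x_sp) - ws                  # expected value of perfect information

print(x_sp, round(vss, 2), round(evpi, 2))
```

The VSS compares the stochastic solution against the shortcut of planning for average demand, while the EVPI bounds how much one would pay to know the demand in advance.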

Tuesday, August 9, 2016 - 9:00am - 10:30am

Jim Luedtke (University of Wisconsin, Madison)

We present the Benders decomposition algorithm for solving two-stage stochastic optimization models. The main feature of this algorithm is that it alternates between solving a relatively compact master problem and a set of subproblems, one per scenario, which can be solved independently (hence decomposing the large problem into many small problems). After presenting and demonstrating correctness of the basic algorithm, several computational enhancements will be discussed, including effective selection of cuts and multi-cut vs. single-cut implementations.
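
A minimal single-cut (L-shaped) sketch of this loop, on a made-up newsvendor instance: the recourse function here has a closed-form value and subgradient, so each scenario "subproblem" is solved analytically, and the one-dimensional master problem is minimized by brute force over a grid rather than with the LP solver a real implementation would use.

```python
# Single-cut L-shaped (Benders) sketch on a toy newsvendor instance.
# First stage: order x at unit cost c. Recourse: Q_s(x) = q*max(d_s - x, 0)
# (shortage penalty), which is convex in x with an easy subgradient.

c, q, U = 1.0, 4.0, 15.0
scenarios = [(0.3, 5.0), (0.5, 10.0), (0.2, 14.0)]   # (probability, demand)

def recourse(x):
    """Expected recourse cost and an aggregated subgradient at x."""
    val = sum(p * q * max(d - x, 0.0) for p, d in scenarios)
    sub = sum(-p * q for p, d in scenarios if x < d)
    return val, sub

cuts = []                     # optimality cuts: theta >= a + b*x
grid = [i * 0.01 for i in range(int(U * 100) + 1)]
ub, lb = float("inf"), -float("inf")
for _ in range(20):
    # Master: min c*x + theta  s.t.  theta >= a + b*x for all cuts, theta >= 0.
    def master_obj(x):
        theta = max([0.0] + [a + b * x for a, b in cuts])
        return c * x + theta
    x_k = min(grid, key=master_obj)
    lb = master_obj(x_k)                   # master value is a lower bound
    val, sub = recourse(x_k)
    ub = min(ub, c * x_k + val)            # x_k is always feasible here
    if ub - lb < 1e-6:
        break
    cuts.append((val - sub * x_k, sub))    # theta >= val + sub*(x - x_k)

print(round(x_k, 2), round(ub, 4))
```

The alternation in the abstract is visible directly: each pass solves the master for a trial first-stage decision, evaluates the scenario subproblems at it, and returns one aggregated cut to the master.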

Monday, August 8, 2016 - 11:00am - 12:30pm

Jim Luedtke (University of Wisconsin, Madison)

This lecture introduces the concept of risk measures and their use in stochastic optimization models to enable decision makers to seek decisions that are less likely to yield a highly undesirable outcome. In particular, we focus on coherent and convex risk measures, and demonstrate the duality relationship between such risk measures and distributionally robust stochastic optimization models. The specific examples of average value-at-risk (also known as conditional value-at-risk) and mean semideviation risk measures will be presented.
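
As a concrete (if toy) illustration of average value-at-risk, the sketch below checks on a made-up loss sample that the tail-average definition agrees with the Rockafellar-Uryasev minimization formula min_t { t + E[(L - t)+] / (1 - alpha) }:

```python
# Empirical average value-at-risk (AVaR / CVaR) computed two equivalent
# ways. The loss sample below is made up for illustration.

losses = [2.0, 7.0, 1.0, 9.0, 4.0, 10.0, 3.0, 8.0, 5.0, 6.0]
alpha = 0.8                       # average over the worst 20% of outcomes

def avar_tail(losses, alpha):
    """Average of the worst (1-alpha) fraction of losses (here
    (1-alpha)*n is an integer, which keeps the formula exact)."""
    k = round((1 - alpha) * len(losses))
    worst = sorted(losses)[-k:]
    return sum(worst) / k

def avar_min(losses, alpha):
    """Rockafellar-Uryasev form; for an empirical distribution the
    minimizing t can be taken to be one of the sample points."""
    n = len(losses)
    def obj(t):
        return t + sum(max(l - t, 0.0) for l in losses) / ((1 - alpha) * n)
    return min(obj(t) for t in losses)

print(avar_tail(losses, alpha), avar_min(losses, alpha))
```

The minimization form is what makes AVaR convenient in optimization models: it turns the risk measure into an auxiliary variable plus linear constraints.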

Thursday, March 29, 2012 - 5:30pm - 6:15pm

Nathan (Nati) Srebro (Toyota Technological Institute at Chicago)

I will discuss deep connections between Statistical Learning, Online Learning and Optimization. I will show that there is a tight correspondence between the sample size required for learning and the number of local oracle accesses required for optimization, and that the same measures of complexity (e.g. the fat-shattering dimension or Rademacher complexity) control both of them. Furthermore, I will show how the Mirror Descent method, and in particular its stochastic/online variant, is in a strong sense universal for online learning.
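
A minimal sketch of the stochastic variant on the probability simplex: with the entropy mirror map, mirror descent becomes the classic exponentiated-gradient / multiplicative-weights update. The linear loss and noise model below are made up for illustration; the iterate should concentrate on the coordinate with the smallest mean loss.

```python
# Stochastic mirror descent on the simplex with the entropy mirror map.
# We observe noisy gradients of f(w) = <mu, w> and update multiplicatively.

import math
import random

random.seed(0)                    # fixed seed so the sketch is reproducible
mu = [0.2, 0.5, 0.9]              # made-up mean losses; coordinate 0 is best
w = [1.0 / len(mu)] * len(mu)     # start at the uniform distribution
eta = 0.1                         # step size

for _ in range(500):
    g = [m + random.gauss(0.0, 0.1) for m in mu]     # stochastic gradient
    w = [wi * math.exp(-eta * gi) for wi, gi in zip(w, g)]
    z = sum(w)
    w = [wi / z for wi in w]      # renormalize back onto the simplex

print([round(wi, 3) for wi in w])
```

The multiplicative form is exactly what the entropy mirror map buys: the update stays on the simplex up to a normalization, rather than requiring a Euclidean projection.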
