Coherent Optical Processing with Machine Learning

Wednesday, October 16, 2019 - 9:05am - 9:50am
Keller 3-180
Charles Bouman (Purdue University)
Coherent sensing of light has the potential to revolutionize optics in much the same way that coherent RF processing revolutionized communications and radar. The power of this approach is that once optical measurements are converted to digital form, they can be processed with advanced, nonlinear, highly intelligent algorithms that can far exceed the capabilities of analog optical devices. However, a key challenge is the need to integrate highly accurate and often nonlinear physical models of coherent optical sensors with emerging machine learning methods to achieve next-generation results.

In this talk, we discuss how coherent optical processing can be combined with advanced algorithmic techniques and machine learning to solve a previously unsolved problem: single-shot imaging through deep turbulence. We first describe the digital-holographic model-based iterative reconstruction (DH-MBIR) algorithm, which is designed to solve the nonlinear inverse problem of recovering an image that has been distorted by anisoplanatic or “deep” atmospheric turbulence. The DH-MBIR algorithm uses the expectation-maximization (EM) algorithm to dramatically simplify otherwise intractable computations required for the Bayesian image reconstruction. We then show how DH-MBIR can be combined with a deep neural network prior model to produce dramatically higher quality reconstructions of both the images and atmospheric phase distortions from a single coherent measurement of the light field. Results are shown for both synthetic and bench-top optical measurements.
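To make the alternating structure concrete, the following is a toy numerical sketch of the idea behind EM-style model-based reconstruction with a plug-and-play prior. It is not the talk's DH-MBIR implementation: the forward model is a simplified single complex-field measurement y = r·exp(iφ) + noise, and Gaussian smoothing stands in for both the smooth-turbulence phase prior and the deep neural network image prior. All variable names and parameter values here are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_complex(u, sigma):
    """Smooth a complex field component-wise (a stand-in for a phase prior)."""
    return gaussian_filter(u.real, sigma) + 1j * gaussian_filter(u.imag, sigma)

rng = np.random.default_rng(0)
n = 64

# Toy ground truth: a bright square reflectance and a smooth phase screen
# (a crude surrogate for an atmospheric turbulence distortion).
r_true = np.zeros((n, n))
r_true[16:48, 16:48] = 1.0
phi_true = gaussian_filter(rng.standard_normal((n, n)), 8)
phi_true *= 1.5 / np.abs(phi_true).max()   # keep phase within ~1.5 rad

# Simulated single coherent (complex-valued) measurement with sensor noise.
noise = 0.05 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
y = r_true * np.exp(1j * phi_true) + noise

# Alternating (EM-flavored) reconstruction:
#  - phase step: estimate the phase from the reflectance-weighted field,
#    smoothed because turbulence phase varies slowly across the aperture;
#  - image step: demodulate the measurement with the current phase and
#    apply a denoiser (here a Gaussian filter, standing in for a DNN prior).
r = np.abs(y)
phi = np.zeros((n, n))
for _ in range(10):
    phi = np.angle(smooth_complex(y * r, 2))
    r = np.clip(gaussian_filter((y * np.exp(-1j * phi)).real, 1), 0.0, None)
```

After a few iterations the demodulated image `r` recovers the bright square while the smoothed `phi` tracks the phase screen; with the crude Gaussian "prior" replaced by a trained deep denoiser, this alternation is the basic shape of a plug-and-play reconstruction.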