deep learning

Wednesday, October 16, 2019 - 3:00pm - 3:45pm
Ge Wang (Rensselaer Polytechnic Institute)
Computer vision and image analysis are major application areas of deep learning. While computer vision and image analysis deal with existing images and extract features from them (images to features), tomographic imaging produces images of multi-dimensional structures from experimentally measured “encoded” data, i.e., various tomographic features (integrals, harmonics, and so on, of the underlying images) (features to images). Recently, deep learning has been actively developed worldwide for tomographic imaging, forming a new area of imaging research.
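The “features to images” direction can be illustrated with a toy forward model. This is a minimal sketch, not the speaker's method: the operator `A` maps a small image to its row and column sums (a crude stand-in for tomographic line integrals), and a minimum-norm least-squares inverse recovers an image consistent with the measured features.

```python
import numpy as np

# Toy "features to images" setup: A maps a 4x4 image to its row and
# column sums (a crude stand-in for tomographic line integrals).
n = 4
rng = np.random.default_rng(0)
image = rng.random((n, n))

# Build A explicitly: 2n measurements (n row sums + n column sums), n*n unknowns.
A = np.zeros((2 * n, n * n))
for i in range(n):
    A[i, i * n:(i + 1) * n] = 1.0   # row-sum measurements
    A[n + i, i::n] = 1.0            # column-sum measurements

data = A @ image.ravel()            # the measured "encoded" features

# Minimum-norm least-squares reconstruction. The system is badly
# underdetermined, which is exactly why learned priors (deep networks)
# are attractive for tomographic inversion.
recon = np.linalg.pinv(A) @ data

# The reconstruction reproduces the measured features, even though the
# image itself is not uniquely determined by them.
assert np.allclose(A @ recon, data)
```

The gap between "consistent with the data" and "equal to the true image" is the ill-posedness that learned reconstruction methods aim to close.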
Wednesday, October 16, 2019 - 11:20am - 12:05pm
Jong Chul Ye (Korea Advanced Institute of Science and Technology (KAIST))
Encoder-decoder networks built on convolutional neural network (CNN) architectures have been used extensively in deep learning approaches to inverse problems thanks to their excellent performance. However, it is still difficult to obtain a coherent geometric view of why such an architecture achieves the desired performance.
Wednesday, May 9, 2018 - 4:00pm - 4:30pm
Jiequn Han (Princeton University)
Developing algorithms for solving high-dimensional stochastic control problems and high-dimensional partial differential equations (PDEs) has been an exceedingly difficult task for a long time, due to the notorious curse of dimensionality. In the first part of this talk, we develop a deep learning-based approach that directly solves high-dimensional stochastic control problems based on Monte Carlo sampling.
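The core idea of such approaches can be sketched in one dimension. This is a hedged illustration, not the speaker's algorithm: parametrize the feedback law (here a single gain `k`; in the deep-learning version, a neural network over many time steps) and minimize the Monte-Carlo-estimated expected cost by stochastic gradient descent.

```python
import numpy as np

# Illustrative one-step linear-quadratic control problem:
#   x1 = x0 + u + w,  u = -k * x0,  cost = x1**2 + c * u**2.
# The analytic optimum is k* = 1 / (1 + c); here c = 1, so k* = 0.5.
rng = np.random.default_rng(1)
c = 1.0
k = 0.0          # initial feedback gain (a neural net in the general case)
lr = 0.05

for _ in range(2000):
    x0 = rng.standard_normal(256)          # sampled initial states
    w = 0.1 * rng.standard_normal(256)     # sampled process noise
    # d/dk of the per-sample cost (x0*(1-k) + w)**2 + c * (k*x0)**2,
    # averaged over the Monte Carlo batch:
    grad = np.mean(-2 * x0 * (x0 * (1 - k) + w) + 2 * c * k * x0 ** 2)
    k -= lr * grad

print(round(k, 2))  # should be close to the analytic optimum 0.5
```

The point of the sampled-gradient formulation is that it never discretizes the state space, which is what lets the neural-network version scale to high dimensions.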
Thursday, March 8, 2018 - 10:30am - 11:30am
Garrett Goh (Pacific Northwest National Laboratory)
With access to large datasets, deep neural networks (DNN) have achieved human-level accuracy in image and speech recognition tasks. However, in chemistry, datasets are inherently small and fragmented. In this work, we develop several approaches that use rule-based models and physics-based simulations to train ChemNet, a transferable and generalizable pre-trained network for small-molecule property prediction that learns in a weakly supervised manner from large unlabeled chemical databases.
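The weak-supervision idea can be sketched with linear models. This is an illustrative toy, not ChemNet itself: pretrain on cheap rule-based labels computed for a large unlabeled pool, then fine-tune on a small set of expensive "experimental" labels, starting from the pretrained weights rather than from scratch.

```python
import numpy as np

# Hypothetical setup: w_true is the real property; w_rule is an imperfect
# rule-based proxy that can label unlimited unlabeled data for free.
rng = np.random.default_rng(0)
d = 10
w_true = rng.standard_normal(d)
w_rule = w_true + 0.1 * rng.standard_normal(d)

# Pretraining: 1000 "unlabeled molecules", labeled by the rule.
X_big = rng.standard_normal((1000, d))
w_pre, *_ = np.linalg.lstsq(X_big, X_big @ w_rule, rcond=None)

# Fine-tuning data: only 20 expensive experimental labels.
X_small = rng.standard_normal((20, d))
y_small = X_small @ w_true + 0.05 * rng.standard_normal(20)

def finetune(w0, steps=50, lr=0.01):
    """A few gradient steps of least-squares fine-tuning from init w0."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * (2 / len(y_small)) * X_small.T @ (X_small @ w - y_small)
    return w

X_test = rng.standard_normal((500, d))
err = lambda w: np.mean((X_test @ w - X_test @ w_true) ** 2)
err_pre = err(finetune(w_pre))          # pretrained initialization
err_scratch = err(finetune(np.zeros(d)))  # training from scratch
print(err_pre < err_scratch)            # pretraining should help
```

With only a handful of fine-tuning steps, the pretrained initialization sits much closer to the true solution than a cold start, which is the transfer-learning effect the abstract describes.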
Tuesday, March 6, 2018 - 9:00am - 10:00am
The use of a priori digital models to build, evaluate, and validate designs has become standard practice in the product creation process. Furthermore, their use in production, operation, and service has been increasing in recent years, for example in model predictive control, maintenance, or assist systems.
Tuesday, May 17, 2016 - 2:00pm - 2:50pm
Shai Shalev-Shwartz (Hebrew University)
I will describe two contradicting lines of work. On one hand, practical work on autonomous driving I was doing at Mobileye, in which deep learning is one of the key ingredients. On the other hand, theoretical work I was doing at the Hebrew University showing strong hardness-of-learning results. Bridging this gap is a great challenge. I will describe some approaches toward a solution, focusing on practically relevant theory and theoretically relevant practice.
Monday, February 22, 2016 - 1:15pm - 2:00pm
Stephen Wright (University of Wisconsin, Madison)
We survey some developments in machine learning and data analysis, focusing on those in which optimization is an important component. Some of these have possible relevance for industrial and energy applications; for example, constraints and covariances could be learned from process data rather than specified a priori. Some possibilities along these lines will be proposed.
Wednesday, January 27, 2016 - 4:15pm - 5:10pm
René Vidal (Johns Hopkins University)
Matrix, tensor, and other factorization techniques are used in a wide range of applications and have enjoyed significant empirical success in many fields. However, a significant disadvantage common to the vast majority of these problems is that the associated optimization problems are typically non-convex, due to a multilinear form or other convexity-destroying transformation.
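A minimal sketch of the non-convexity in question, under illustrative assumptions (random low-rank data, plain gradient descent): the loss ||X − UVᵀ||² is convex in U with V fixed and vice versa, but the bilinear product UVᵀ makes the joint problem non-convex — e.g., any solution (U, V) can be rescaled to (cU, V/c) with identical loss. Empirically, gradient descent from a small random initialization still tends to fit the data.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 20, 15, 3
# Exactly rank-3 data matrix.
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Small random initialization of both factors.
U = 0.1 * rng.standard_normal((m, r))
V = 0.1 * rng.standard_normal((n, r))
lr = 0.01
for _ in range(2000):
    R = U @ V.T - X   # residual
    # Simultaneous gradient steps on ||X - U V^T||^2 / 2 w.r.t. U and V.
    U, V = U - lr * R @ V, V - lr * R.T @ U

loss = np.linalg.norm(U @ V.T - X) ** 2

# The rescaling symmetry that destroys convexity: (2U, V/2) gives the
# same product, hence the same loss, as (U, V).
assert np.allclose(U @ V.T, (2 * U) @ (V / 2).T)
```

The rescaling symmetry means the loss landscape has continuous families of equivalent minima, so no strictly convex function could reproduce it; the empirical success of such descent methods despite this is exactly what the line of work above seeks to explain.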