Chair of Session 3
Professor David A van Dyk, Imperial College London, UK
David A van Dyk is a Professor in the Statistics Section of the Department of Mathematics at Imperial College London. After obtaining his PhD from the University of Chicago, he held faculty positions at Harvard University and the University of California, Irvine, before relocating to London in 2011. Professor van Dyk was elected Fellow of the American Statistical Association in 2006 and Fellow of the Institute of Mathematical Statistics in 2010, and received a Royal Society Wolfson Research Merit Award in 2011. His scholarly work focuses on methodological and computational issues in the Bayesian analysis of highly structured statistical models, with an emphasis on serious interdisciplinary research, especially in astronomy. He founded and coordinates the Imperial-California-Harvard AstroStatistics Collaboration (iCHASC) and is particularly interested in improving the efficiency of computationally intensive methods involving data augmentation, such as EM-type algorithms and various Markov chain Monte Carlo methods.
Distilling natural laws from experimental data: from particle physics to computational biology
Professor Hod Lipson, Cornell University, USA
Abstract
Can machines discover scientific laws automatically? For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena in nature. Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. This talk will outline a series of recent research projects, starting with self-reflecting robotic systems, and ending with machines that can formulate hypotheses, design experiments, and interpret the results, to discover new scientific laws. While the computer can discover new laws, will we still understand them? Our ability to have insight into science may not keep pace with the rate and complexity of automatically-generated discoveries. Are we entering a post-singularity scientific age, where computers not only discover new science, but now also need to find ways to explain it in a way that humans can understand? We will see examples from art to architecture, from psychology to cosmology, from big science to small science.
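The core idea of searching a space of candidate equations against experimental data can be illustrated with a deliberately minimal sketch. This is not the method used in the work described above; it is a toy random search over a handful of hypothetical expression templates, each with one free constant, where the data, templates and search settings are all invented for illustration.

```python
import random
import math

# Hypothetical measurements generated from an "unknown" law y = 3*x**2
xs = [0.1 * i for i in range(1, 20)]
ys = [3 * x ** 2 for x in xs]

# Candidate "laws": illustrative expression templates with one free constant c
templates = {
    "c*x": lambda c, x: c * x,
    "c*x**2": lambda c, x: c * x ** 2,
    "c*sin(x)": lambda c, x: c * math.sin(x),
}

def fit_error(f, c):
    """Sum of squared residuals of candidate law f with constant c."""
    return sum((f(c, x) - y) ** 2 for x, y in zip(xs, ys))

best = None
random.seed(0)
for name, f in templates.items():
    # Crude random search over the constant c for each template
    for _ in range(2000):
        c = random.uniform(-5, 5)
        err = fit_error(f, c)
        if best is None or err < best[0]:
            best = (err, name, c)

err, name, c = best
print(f"best candidate law: {name} with c ≈ {c:.2f} (error {err:.4f})")
```

The search recovers the quadratic template with a constant near 3; real systems replace the fixed template list with an evolutionary search over expression trees.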
Hod Lipson is an Associate Professor of Mechanical & Aerospace Engineering and Computing & Information Science at Cornell University in Ithaca, NY. He directs the Creative Machines Lab, which focuses on novel ways for automatic design, fabrication and adaptation of virtual and physical machines. He has led work in areas such as evolutionary robotics, multi-material functional rapid prototyping, machine self-replication and programmable self-assembly. Lipson received his PhD from the Technion - Israel Institute of Technology in 1998 and went on to postdoctoral positions at Brandeis University and MIT. His research focuses primarily on biologically inspired approaches, as they bring new ideas to engineering and new engineering insights into biology. For more information visit http://www.mae.cornell.edu/lipson.
Model-based machine learning
Professor Christopher Bishop FREng, Microsoft Research Cambridge
Abstract
Traditional machine learning is characterised by a bewildering variety of techniques, such as logistic regression, support vector machines, neural networks, Kalman filters, and many others, as well as numerous variants of these. Each has its own merits, and each has its own associated algorithms for fitting adjustable parameters to a training data set. Selecting an appropriate technique can be difficult, and adapting it to a specific application requires detailed understanding of that technique and involves corresponding modifications to the source code.
In recent years there has been growing interest in a simpler, yet much more powerful, paradigm called model-based machine learning. This allows a very broad range of machine learning models to be specified compactly within a simple development environment. Training the model becomes a task in probabilistic inference; because it is decoupled from the specification of the model itself, it can be automated. The majority of standard techniques correspond to specific choices of model and arise naturally as special cases, while variants suited to specific applications are easily constructed, and alternative related structures can readily be compared. Newcomers to the field of machine learning need only understand the model specification environment to gain access to a huge range of models. The model-based approach to machine learning is particularly powerful when enabled through a probabilistic programming language.
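The separation of model specification from inference can be sketched in a few lines. The example below is not taken from any real probabilistic programming toolkit; it uses an invented generic inference routine (a crude grid approximation over one parameter) that is reused unchanged for whatever prior and likelihood the modeller specifies.

```python
def grid_posterior(prior, likelihood, data, grid):
    """Generic inference engine: normalised posterior over a parameter grid.

    Knows nothing about the model; prior and likelihood are supplied by the
    user as plain functions, mirroring the model/inference decoupling."""
    weights = [prior(t) * likelihood(data, t) for t in grid]
    total = sum(weights)
    return [w / total for w in weights]

# Model specification: coin flips with a uniform prior on the bias theta
flips = [1, 1, 0, 1, 1, 0, 1, 1]          # six heads in eight flips
grid = [i / 100 for i in range(1, 100)]

def uniform_prior(theta):
    return 1.0

def bernoulli_lik(data, theta):
    p = 1.0
    for x in data:
        p *= theta if x == 1 else (1 - theta)
    return p

post = grid_posterior(uniform_prior, bernoulli_lik, flips, grid)
mean = sum(t * p for t, p in zip(grid, post))
print(f"posterior mean bias ≈ {mean:.3f}")  # ≈ 0.7 (Beta(7,3) posterior mean)
```

Swapping in a different likelihood or prior changes the model without touching the inference code; practical systems replace the grid with scalable algorithms such as message passing or MCMC.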
Chris Bishop is a Distinguished Scientist at Microsoft Research Cambridge, and a Professor of Computer Science at the University of Edinburgh. He is also a Fellow of Darwin College Cambridge, a Fellow of the Royal Academy of Engineering, and a Fellow of the Royal Society of Edinburgh. He was principal organiser of the six-month international research programme on Neural Networks and Machine Learning at the Isaac Newton Institute for Mathematical Sciences in Cambridge, which ran from July to December 1997. He is the author of the influential textbook Neural Networks for Pattern Recognition (Oxford University Press, 1995), which has over 22,000 citations. His most recent textbook, Pattern Recognition and Machine Learning (Springer, 2006), has over 16,000 citations and has been widely adopted.
Nonparametric probabilistic modelling
Professor Zoubin Ghahramani FRS, University of Cambridge
Abstract
Uncertainty, data, and inference play a fundamental role in modelling. Probabilistic approaches to modelling have transformed scientific data analysis, artificial intelligence and machine learning, and have made it possible to exploit the many opportunities arising from the recent explosion of big data in the sciences, society and commerce. Once a probabilistic model is defined, Bayesian statistics (which used to be called "inverse probability") can be used to make inferences and predictions from the model. Bayesian methods work best when applied to models that are flexible enough to capture the complexity of real-world data. Recent work on non-parametric Bayesian machine learning provides this flexibility. I will touch upon some of our latest work in this area, including new models for time series and for social and biological networks.
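One of the simplest nonparametric Bayesian constructions is the Chinese restaurant process, in which the number of clusters is not fixed in advance but grows with the data. The sketch below samples a partition from this process; the concentration parameter and sample size are arbitrary choices for illustration.

```python
import random

def crp_sample(n, alpha, rng):
    """Sample a partition of n items from a Chinese restaurant process.

    Each item joins an existing cluster with probability proportional to
    that cluster's size, or opens a new cluster with probability
    proportional to alpha, so the number of clusters is unbounded a priori."""
    clusters = []       # current cluster sizes
    assignments = []    # cluster index of each item
    for _ in range(n):
        weights = clusters + [alpha]   # last weight corresponds to a new cluster
        r = rng.random() * sum(weights)
        for k, w in enumerate(weights):
            r -= w
            if r <= 0:
                break
        if k == len(clusters):
            clusters.append(1)         # open a new cluster
        else:
            clusters[k] += 1           # join an existing cluster
        assignments.append(k)
    return assignments, clusters

rng = random.Random(42)
assignments, clusters = crp_sample(100, alpha=2.0, rng=rng)
print(f"{len(clusters)} clusters from 100 items, sizes {clusters}")
```

The expected number of clusters grows only logarithmically with the number of items, which is what lets such models adapt their complexity to the data.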
Professor Zoubin Ghahramani leads the Machine Learning Group at the University of Cambridge. His academic career includes concurrent appointments as one of the founding members of the Gatsby Computational Neuroscience Unit in London, and as a faculty member of CMU's Machine Learning Department for over 10 years. His current research focuses on nonparametric Bayesian modelling, scalable learning for Big Data, probabilistic programming, Bayesian optimisation, and building an AI system for Data Science: the Automatic Statistician. He has published over 200 papers, receiving over 25,000 citations (an h-index of 72). In 2013, he received a $750,000 Google Award for research on building the Automatic Statistician. He has served in a number of leadership roles as programme and general chair of the main international conferences in machine learning: AISTATS (2005), ICML (2007, 2011), and NIPS (2013, 2014).
Statistical inference for Markov jump process models via differential geometric Monte Carlo methods and the linear noise approximation
Professor Mark Girolami, Imperial College
Abstract
Bayesian analysis for Markov jump processes is a non-trivial and challenging problem. Although exact inference is theoretically possible, it is computationally demanding, and its applicability is therefore limited to a small class of problems. In this talk we describe the application of Riemann manifold MCMC methods using an approximation to the likelihood of the Markov jump process that is valid when the system modelled is near its thermodynamic limit. The proposed approach is both statistically and computationally efficient, and the convergence rate and mixing of the chains allow for fast MCMC inference. The methodology is evaluated using numerical simulations on two problems from chemical kinetics and one from systems biology.
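A Markov jump process of the kind described can be simulated exactly with Gillespie's stochastic simulation algorithm. The minimal sketch below simulates a birth-death process (rate constants are invented for illustration); near the thermodynamic limit the trajectory fluctuates around its deterministic mean, which is the regime where the linear noise approximation treats those fluctuations as Gaussian.

```python
import random

def gillespie_birth_death(k_birth, k_death, x0, t_end, rng):
    """Exact (Gillespie) simulation of a birth-death Markov jump process:
    births occur at constant rate k_birth, deaths at rate k_death * x."""
    t, x = 0.0, x0
    path = [(t, x)]
    while t < t_end:
        rates = [k_birth, k_death * x]
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)       # exponential waiting time to next event
        if rng.random() * total < rates[0]:
            x += 1                         # birth
        else:
            x -= 1                         # death
        path.append((t, x))
    return path

rng = random.Random(1)
path = gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=200.0, rng=rng)

# The deterministic (thermodynamic-limit) stationary mean is k_birth/k_death = 100;
# late-time states scatter around it with roughly Poisson-sized fluctuations.
late_states = [x for t, x in path if t > 100.0]
print(f"late-time mean ≈ {sum(late_states) / len(late_states):.1f}")
```

Exact likelihoods for such processes require summing over all possible event sequences, which is why approximations valid near the thermodynamic limit are attractive for inference.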
Mark Girolami FRSE is Chair of Statistics in the Department of Mathematics at Imperial College London, with an associated professorial appointment in the Department of Computing. He spent the first ten years of his career as an engineer with IBM. He is an EPSRC Established Career Research Fellow in Mathematics (2012-2018) and was previously an EPSRC Advanced Research Fellow in ICT (2007-2012). In 2011 he was elected to the Fellowship of the Royal Society of Edinburgh and awarded a Royal Society Wolfson Research Merit Award. His previous research has generated patents and technologies for fraud detection in both automated banking and telecoms; based on this research, currency-validation products now ship with the latest generation of cash machines from NCR (National Cash Register). In the last quarter of 2015 he worked on developing probabilistic methods for city-specific forecasting of product demand for Amazon's Global Forecasting Division in Seattle.