Organised by Dr Nick Jones and Dr Thomas Maccarone
We will bring together two vibrant research groups for an exchange of ideas: physical scientists working with challenging data and needing tools to make the most of it, and analysts not yet working in these rich scientific fields. Speakers covering applications across astrophysics, biological physics, geophysics and the earth sciences will meet those from applied mathematics, computer science, engineering and statistics. We aim to open the world of new methods for data analysis to the physical scientist and to accelerate the integration of data analysts into physical science. For further details on speakers see the final programme.
The proceedings of this meeting are scheduled to be published in a future issue of Philosophical Transactions A.
This meeting was followed by a related satellite meeting, Signal processing for the physical sciences, held at the Kavli Royal Society International Centre on 28 – 29 March 2012.
Dr Nick Jones, Imperial College London, UK
Nick Jones, Imperial Mathematics, works on topics relating to the designed disordered world around us. This concerns both how we should perform inference about the systems around us and how they in turn perform inference themselves.
Dr Tom Maccarone, University of Southampton, UK
Tom Maccarone works across a broad range of topics in astrophysics; he received his PhD from Yale University for a thesis on the variability of X-ray emission from accretion flows onto black holes and neutron stars. It was from this work that his interest in signal processing began, as these systems present a rich phenomenology of variability, much of which is poorly understood. He hopes this meeting will cross-fertilise time domain astrophysics with new techniques to apply to old problems. After his PhD he moved to Europe, taking on postdoctoral fellowships at the Scuola Internazionale Superiore di Studi Avanzati and the University of Amsterdam, before moving to a faculty position at the University of Southampton.
Professor John Sahr, University of Washington, USA – Chair of Session 1
John D. Sahr studied Electrical Engineering at the California Institute of Technology, earning a BS in 1984. After a year of graduate study at the University of California, Los Angeles, he continued his graduate study in Space Physics at Cornell University, completing his PhD in 1990. In 1991 Dr. Sahr joined the Electrical Engineering Faculty at the University of Washington in Seattle, Washington. Dr. Sahr's research has emphasized radar remote sensing of ionospheric turbulence. In 1998 Dr. Sahr and his students built the Manastash Ridge Radar, a passive bistatic radar which observes the scatter of commercial FM broadcasts at 100 MHz. Dr. Sahr also serves as Associate Dean of Undergraduate Academic Affairs for the University.
Professor Malcolm Sambridge, Australian National University, Australia – Transdimensional inverse problems in the geosciences
Except for a very thin layer at the surface, all of our knowledge of the physical properties of the Earth is based on indirect observations collected at the surface; inferring those properties from such observations is known as an inverse problem. Inverse problems occur in many areas of the sciences where an abundance of observations exist that only indirectly constrain some process or physical property of interest. Over the past forty years geophysicists have built models of various physical and chemical properties of the Earth’s interior which fit observations collected at the surface. Formal inversion methods typically involve an optimization process whereby one or more classes of data are used to constrain parameters in a mathematical representation of the subsurface. A common difficulty is that surface observations do not uniquely constrain the subsurface, meaning additional information must be introduced, usually in the form of ad hoc regularizing criteria, which are often chosen for mathematical convenience.
An alternative approach is to embrace the non-uniqueness directly and employ an inference process based on parameter space sampling. Instead of seeking a best model within an optimization framework, one seeks an ensemble of solutions and derives properties of that ensemble for inspection. While this idea has been employed for more than 30 years, it is only now gaining broad acceptance. Recently these ideas have been extended with the introduction of trans-dimensional and hierarchical sampling methods. These approaches are becoming popular because they offer novel ways of dealing with problems involving joint fitting of multiple data types, uncertain data errors and/or uncertain model parameterizations. Rather than being forced to make decisions on parameterization, level of data noise and weights between data types in advance, as is often the case in an optimization framework, these choices can be relaxed and instead constrained by the data themselves. Sampling-based approaches have limitations in that computational cost is often high for large-scale structural problems, i.e. those with many unknowns and much data. However, there are a surprising number of areas where they are now feasible. This presentation will outline transdimensional inverse methods and describe some recent applications to geophysical problems. They have potential for similar data inference problems across the physical sciences.
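To make the ensemble idea concrete, the sketch below (illustrative only, not taken from the talk) runs a simple Metropolis sampler on a toy one-parameter inverse problem with an assumed forward model g(m) = m², collecting an ensemble of acceptable models and summarising its mean and spread rather than reporting a single best fit.

```python
import math
import random

# Toy inverse problem: infer a parameter m from noisy data d = g(m) + noise,
# with an assumed forward model g(m) = m**2 (our choice, purely illustrative).
random.seed(1)
m_true, sigma = 1.5, 0.2
data = [m_true**2 + random.gauss(0.0, sigma) for _ in range(50)]

def log_likelihood(m):
    # Gaussian misfit between observed data and forward-model prediction
    return -sum((d - m**2) ** 2 for d in data) / (2.0 * sigma**2)

# Metropolis sampler: rather than seeking one "best" model, collect an
# ensemble of models consistent with the data and inspect its properties.
m, logL = 1.0, log_likelihood(1.0)
ensemble = []
for step in range(20000):
    m_prop = m + random.gauss(0.0, 0.1)               # random-walk proposal
    logL_prop = log_likelihood(m_prop)
    if math.log(random.random()) < logL_prop - logL:  # accept/reject
        m, logL = m_prop, logL_prop
    if step > 2000:                                   # discard burn-in
        ensemble.append(m)

mean = sum(ensemble) / len(ensemble)
spread = (sum((x - mean) ** 2 for x in ensemble) / len(ensemble)) ** 0.5
print(f"posterior mean ~ {mean:.2f}, spread ~ {spread:.3f}")
```

The ensemble spread directly quantifies how well the data constrain the parameter, information an optimizer's single best model does not provide.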
Malcolm Sambridge obtained a Ph.D. in geophysics from the Australian National University in 1988. He spent periods as a post-doctoral researcher at the Carnegie Institution of Washington D.C., USA, and the Institute of Theoretical Geophysics, University of Cambridge, UK. He has been at the Australian National University since 1992. He currently leads the Seismology and Mathematical Geophysics group in the Research School of Earth Sciences. His research interests include data inference methods, inverse theory, seismology and earth structure, mathematical methods, Monte Carlo methods and optimization. He has ongoing research projects in direct search methods for inversion and in particular transdimensional inverse problems.
Dr Simon Vaughan, University of Leicester, UK – Time series analysis in astronomy
Progress in astronomy comes from interpreting the signals encoded in the light received from distant objects – the distribution of light over the sky (images), over photon wavelength (spectrum), over polarization angle, and over time (usually called light curves by astronomers). In the time domain we see transient events such as supernovae, gamma-ray bursts, and other powerful explosions; we see periodic phenomena such as the orbits of planets around nearby stars, radio pulsars, and pulsations of stars in nearby galaxies; and persistent aperiodic variations (“noise”) from powerful systems like accreting black holes. In my talk I will briefly review a few of the recent and future challenges in the burgeoning area of Time Domain Astrophysics. I will discuss the recovery of reliable noise power spectra from sparsely sampled time series, higher-order properties of accreting black holes, time delays and correlations in multivariate time series, and characterisation of gamma-ray burst light curves.
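One standard tool for the sparsely and unevenly sampled case is the Lomb-Scargle periodogram. The sketch below (a generic illustration, not Dr Vaughan's own methods) implements the classical estimator and recovers a known periodic signal from irregular sampling times.

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle periodogram for unevenly sampled data."""
    y = y - y.mean()
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # The phase offset tau makes the estimate invariant to time shifts
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 200))                         # irregular times
y = np.sin(2 * np.pi * 0.2 * t) + 0.3 * rng.standard_normal(200)  # 0.2 Hz signal

freqs = np.linspace(0.01, 1.0, 500)
power = lomb_scargle(t, y, freqs)
print(f"peak at {freqs[np.argmax(power)]:.2f} Hz")  # should be near 0.2 Hz
```

Unlike the ordinary periodogram, this estimator needs no interpolation onto a regular grid, which is why it is a workhorse for astronomical light curves.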
Simon Vaughan is a lecturer in Observational Astronomy in the Department of Physics and Astronomy at the University of Leicester. His research uses observations of cosmic X-ray sources to study physics under extreme conditions, in particular, close to black holes. This includes the study the behaviour of active galactic nuclei (accreting supermassive black holes in the centres of galaxies), gamma-ray bursts (GRBs, the brightest explosions in the known Universe), and black hole X-ray binaries in our Galaxy. His particular interests include probes of strong gravity, time series astronomy, Bayesian data analysis, and dust-scattering from GRBs. Dr Vaughan led the team that discovered the first example of the latter phenomenon (in 2004).
Professor Andrew Walden, Imperial College London, UK – Rotary components and polarization ellipses: a statistical perspective
Rotary analysis decomposes vector motions on the plane into counter-rotating components, which have proved particularly useful in the study of geophysical flows influenced by the rotation of the Earth. For stationary random signals the motion at any frequency takes the form of a random polarization ellipse. Although there are numerous applications of rotary analysis, relatively little attention has been paid to the statistical properties of the random ellipses or to the estimated rotary coefficient, which measures the tendency to rotate counterclockwise or clockwise. The precise statistical structure of the polarization ellipses is reviewed, including the random behaviour of the ellipse orientation, aspect ratio and intensity. Special attention is then paid to spectral matrix estimation from physical data and to hypothesis testing and confidence intervals computed using the estimated matrices.
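The decomposition itself is easy to sketch: writing the horizontal velocity as the complex series w = u + iv, the positive-frequency part of its Fourier transform carries the counterclockwise motion and the negative-frequency part the clockwise motion, from which a rotary coefficient can be formed. The example below (an illustrative sketch with synthetic data, not from the talk) recovers a coefficient near +1 for purely counterclockwise circular motion.

```python
import numpy as np

# Rotary decomposition of a 2-D velocity record: form the complex series
# w = u + i*v; positive FFT frequencies carry the counterclockwise component,
# negative frequencies the clockwise one.
n, dt, f0 = 1000, 1.0, 0.05
t = np.arange(n) * dt
u = np.cos(2 * np.pi * f0 * t)   # purely counterclockwise ellipse (a circle)
v = np.sin(2 * np.pi * f0 * t)

W = np.fft.fft(u + 1j * v)
f = np.fft.fftfreq(n, dt)
S_ccw = np.sum(np.abs(W[f > 0]) ** 2)  # counterclockwise (positive-frequency) energy
S_cw = np.sum(np.abs(W[f < 0]) ** 2)   # clockwise (negative-frequency) energy

# Rotary coefficient: +1 purely counterclockwise, -1 purely clockwise
r = (S_ccw - S_cw) / (S_ccw + S_cw)
print(f"rotary coefficient = {r:.3f}")
```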
Professor Andrew Walden is a Professor in the Statistics Section of the Department of Mathematics at Imperial College London. He was a research scientist at BP from 1981 to 1990, specialising in statistical signal processing, particularly applied to seismic data analysis. In 1984 he was presented with the Van Weelden award of the European Association of Exploration Geophysicists for a paper analysing the statistical reflectivity of the Earth. During 1985-6 he visited the University of Washington, Seattle, USA, where he taught graduate courses in the Statistics and Geophysics Departments. He joined Imperial College London in 1990 where his research focusses on time series analysis methodology for problems in the physical and medical sciences. With Don Percival he has co-authored two popular Cambridge University Press books 'Spectral Analysis for Physical Applications' and 'Wavelet Methods for Time Series Analysis'.
Professor Neil Cornish, Montana State University, USA – Gravitational wave astronomy: needle in a haystack
A world-wide array of highly sensitive interferometers stands poised to usher in a new era in astronomy with the first direct detection of gravitational waves. The data from these instruments will provide a unique perspective on extreme astrophysical phenomena such as neutron stars and black holes, and will allow us to test Einstein's theory of gravity in the strong field, dynamical regime. To fully realize these goals we need to solve some challenging problems in signal processing and inference, such as finding rare and weak signals that are buried in non-stationary and non-Gaussian instrument noise, dealing with high dimensional model spaces, and locating what are often extremely tight concentrations of posterior mass within the prior volume. Gravitational wave detection using space based detectors and Pulsar Timing Arrays bring with them the additional challenge of having to isolate individual signals that overlap one another in both time and frequency. Promising solutions to these problems will be discussed, along with some of the challenges that remain.
Neil Cornish is Professor of Physics at Montana State University. He grew up on a sheep station in the Australian bush, and went on to study physics at the University of Melbourne and the University of Toronto, followed by postdoctoral appointments at Cambridge and Princeton. After starting out in theoretical cosmology and general relativity, his research interests have become increasingly focused on signal analysis in the newly emerging field of gravitational wave astronomy. Professor Cornish is a member of the Laser Interferometer Gravitational Observatory Scientific Collaboration, and has played a leading role in developing space based gravitational wave detectors.
Professor Guy Nason, University of Bristol, UK – Chair of Session 2
Guy Nason is Professor of Statistics and currently Head of the School of Mathematics at the University of Bristol. His research interests include non-stationary time series (locally stationary, LS, processes) and multiscale methods in statistics. He is interested in many areas of application but particularly low-intensity imaging, renewable energy and economics. His signal processing interests originated in using wavelets for denoising, initially via cross-validation methods, moving through empirical Bayes methods with complex-valued wavelets to the development in 2004 of the Haar-Fisz variance stabilizing transform (with Fryzlewicz) for non-Gaussian noise. In 2000 he introduced (with von Sachs and Kroisandt) locally stationary wavelet processes, an alternative to locally stationary Fourier processes, recognizing that, for non-stationary processes, Fourier representations are not necessarily canonical. Since then he has worked on spectral estimation, costationary series and irregularly spaced locally stationary series. He was awarded the Guy Medal in Bronze from the Royal Statistical Society in 2001 and was an EPSRC Advanced Research Fellow 2000-5.
Professor Jens Timmer, University of Freiburg, Germany – Joining forces of Bayesian and frequentist methodology: a study for inference in the presence of non-identifiability
Increasingly complex applications involve large datasets in combination with nonlinear and high-dimensional mathematical models. In this context, statistical inference is a challenging issue that calls for pragmatic approaches taking advantage of both Bayesian and frequentist methods. The elegance of Bayesian methodology is founded in the propagation of information content provided by experimental data and prior assumptions to the posterior probability distribution of model predictions. However, for complex applications experimental data and prior assumptions may constrain the posterior probability distribution insufficiently. In these situations Bayesian Markov chain Monte Carlo sampling can be infeasible. From a frequentist point of view, insufficient experimental data and prior assumptions can be interpreted as non-identifiability. The profile likelihood approach can detect non-identifiability and resolve it iteratively through experimental design, thereby constraining the posterior probability distribution further until Markov chain Monte Carlo sampling can be used safely. Using an application from cell biology we compare the two methods and show that applying them in succession facilitates a realistic assessment of uncertainty in model predictions.
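The idea of profiling out nuisance parameters to expose non-identifiability can be illustrated with a deliberately non-identifiable toy model (our construction, not from the talk): in y = a·b·x only the product a·b is constrained by the data, so the profile likelihood over a is flat.

```python
import numpy as np

# Toy non-identifiable model: y = a * b * x + noise.  Only the product a*b
# is constrained by the data, so the profile likelihood of `a` alone is flat.
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 30)
y = 2.0 * 3.0 * x + 0.1 * rng.standard_normal(30)   # true a*b = 6

def profile_sse(a):
    # Profile over the nuisance parameter b: for fixed a, the least-squares
    # optimum has the closed form b* = sum(x*y) / (a * sum(x*x)).
    b_star = (x @ y) / (a * (x @ x))
    return np.sum((y - a * b_star * x) ** 2)

a_grid = np.linspace(0.5, 10.0, 50)
profile = np.array([profile_sse(a) for a in a_grid])
print(f"profile range: {profile.max() - profile.min():.2e}")  # ~0: flat profile
```

A flat profile like this signals that no amount of the current data will pin down a alone; resolving it requires either a reparameterization or new measurements that break the a–b trade-off.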
Jens Timmer holds a Chair for Theoretical Physics and Its Applications in the Life Sciences at the University of Freiburg, Germany, and is Co-Director of the School of Life Sciences at the Freiburg Institute for Advanced Studies. His main research interest is the development and interdisciplinary application of mathematical methods for analysis and modelling of dynamic processes in the life sciences. The methods he applies range from data-based modelling by ordinary differential equations to network inference by stochastic approaches. His main applications are in the fields of neurology and cellular signal transduction. He has published more than 200 papers in peer-reviewed journals.
Dr Mukund Thattai, National Centre for Biological Sciences, India – Using topology to tame the complex biochemistry of genetic networks
Living cells are controlled by networks of interacting genes, proteins and biochemicals. Cells use the emergent collective dynamics of these networks to probe their surroundings, perform computations, and generate appropriate responses. Here we consider genetic networks, interacting sets of genes that regulate one another’s expression. It is possible to infer the interaction topology of genetic networks from high-throughput experimental measurements. However, such experiments rarely provide information on the detailed nature of each interaction. We show that topological approaches provide powerful means of dealing with the missing biochemical data. We first discuss the biochemical basis of gene regulation, and describe how genes can be connected into networks. We then show that, given weak constraints on the underlying biochemistry, topology alone determines the emergent properties of certain simple networks. Finally, we apply these approaches to the realistic example of quorum-sensing networks: chemical communication systems that co-ordinate the responses of bacterial populations. We find that the versatility of a quorum-sensing network – its ability to generate diverse response types – is determined purely by its topology. The most versatile topology is the one most commonly observed among real quorum-sensing systems, suggesting that natural selection can act to optimize topology as well as biochemistry.
Mukund Thattai obtained a B.A. in physics from Cornell University in 1999, and a Ph.D. in physics from the Massachusetts Institute of Technology in 2004. While at MIT, Dr. Thattai made pioneering contributions to the understanding of how the randomness of molecular processes can affect living cells. He is currently a tenured professor at the National Centre for Biological Sciences in Bangalore, India. His laboratory at NCBS works in the area of synthetic biology, an emerging field which attempts to combine genes into biological circuits, much as transistors are combined into electrical circuits.
Dr Max Little, MIT, USA – Signal processing for molecular and cellular biophysics: an emerging field
Recent advances in the ability of experimental biophysics to watch the molecular and cellular processes of life in action – such as atomic force microscopy, optical tweezers, and Förster resonance energy transfer – raise challenges for digital signal processing of the resulting experimental data. This talk explores the unique properties of such biophysical time series that set them apart from other signals, such as the prevalence of abrupt jumps and steps, multi-modal distributions, and autocorrelated noise. It exposes the problems with classical linear signal processing algorithms applied to this kind of data, and describes new nonlinear and non-Gaussian algorithms that are able to extract information that is of direct relevance to biophysical questions of interest. It is argued that these new methods applied in this context typify the nascent field of biophysical digital signal processing. Practical experimental examples will be discussed.
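As a toy illustration of the kind of nonlinear processing such data invite (this is a generic median-based sketch, not one of the algorithms described in the talk), the following detects abrupt steps in a noisy piecewise-constant signal by comparing medians in adjacent windows — a linear smoother would blur exactly the jumps we want to locate.

```python
import numpy as np

# A piecewise-constant "dwell and step" signal of the kind seen in molecular
# motor or ion-channel records, buried in noise.
rng = np.random.default_rng(42)
levels = np.repeat([0.0, 1.0, 3.0], 300)          # three dwell levels
signal = levels + 0.3 * rng.standard_normal(900)

# Simple nonlinear detector: at each point, compare the medians of the
# windows just before and just after it.  Medians are robust to the noise
# and do not smear the step the way a moving average would.
w = 40                                             # window length
score = np.zeros(900)
for i in range(w, 900 - w):
    score[i] = abs(np.median(signal[i:i + w]) - np.median(signal[i - w:i]))

# Steps show up as local maxima of the median-difference score
step1 = int(np.argmax(score[:450]))        # first jump, truth at index 300
step2 = 450 + int(np.argmax(score[450:]))  # second jump, truth at index 600
print(step1, step2)
```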
Max Little began his career writing software, signal processing algorithms and music for video games, and then moved on by way of a degree in mathematics to the University of Oxford. After postdoc positions in Oxford and co-founding a web-based image search business, he won a joint MIT-Wellcome Trust fellowship to follow up on his doctoral research work in biomedical signal processing. His research focuses on applied models and statistical signal processing for a range of problems across the physical sciences, including biomedicine, biology, hydrology and meteorology.
Professor Paul Vitanyi, CWI, The Netherlands – Similarity and denoising
We can discover the effective similarity between pairs of finite objects, and denoise a finite object, using the Kolmogorov complexity of those objects. The drawback is that Kolmogorov complexity is not computable. If we approximate it using a good real-world compressor, it turns out that on natural data both processes give adequate results in practice. In all cases we use the entire string. The methodology is parameter-free, alignment-free, and works on individual data. We illustrate both methods with examples.
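The compressor-based approximation leads to the normalized compression distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(·) is the compressed length. A minimal sketch using the general-purpose zlib compressor (any real-world compressor could be substituted):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: approximate the (uncomputable)
    Kolmogorov complexity K by the length of a compressor's output."""
    cx, cy = len(zlib.compress(x, 9)), len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox leaps over the lazy cat " * 20
c = bytes(range(256)) * 4   # unrelated content

print(f"ncd(a, b) = {ncd(a, b):.3f}")   # similar texts: smaller distance
print(f"ncd(a, c) = {ncd(a, c):.3f}")   # dissimilar: larger distance
```

Because the compressor supplies all the modelling, the method is parameter-free and alignment-free, exactly as the abstract describes.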
Paul M.B. Vitanyi received his Ph.D. from the Free University of Amsterdam in 1978. He is a CWI Fellow at the national research institute for mathematics and computer science in the Netherlands, CWI, and Professor of Computer Science at the University of Amsterdam. He serves or has served on the editorial boards of Distributed Computing (1987-2003), Information Processing Letters, Theory of Computing Systems, Parallel Processing Letters, International Journal of Foundations of Computer Science, Entropy, Information, Journal of Computer and Systems Sciences (guest editor), and elsewhere. He has worked on cellular automata, computational complexity, distributed and parallel computing, machine learning and prediction, physics of computation, Kolmogorov complexity, information theory and quantum computing, publishing about 200 research papers and several books. He received a Knighthood (Ridder in de Orde van de Nederlandse Leeuw) in 2007 and was elected a member of the Academia Europaea in 2011. Together with Ming Li he pioneered applications of Kolmogorov complexity and co-authored An Introduction to Kolmogorov Complexity and its Applications (Springer-Verlag, New York, 1993; 3rd edition 2008), parts of which have been translated into Chinese, Russian and Japanese. Web page: http://www.cwi.nl/~paulv/
Professor David van Dyk, Imperial College London, UK – Chair of Session 3
David van Dyk is a Professor in the Statistics Section of the Department of Mathematics at Imperial College London. After obtaining his PhD from the University of Chicago, he held faculty positions at Harvard University and the University of California, Irvine before relocating to London in 2011. Professor van Dyk was elected Fellow in the American Statistical Association in 2006, elected Fellow of the Institute of Mathematical Statistics in 2010, and received a Wolfson Merit Award in 2011. His scholarly work focuses on methodological and computational issues involved with Bayesian analysis of highly structured statistical models and emphasizes serious interdisciplinary research, especially in astronomy. He founded and coordinates the Imperial-California-Harvard AstroStatistics Collaboration (iCHASC) and is particularly interested in improving the efficiency of computationally intensive methods involving data augmentation, such as EM-type algorithms and various Markov chain Monte Carlo methods.
Professor Christopher Bishop, Microsoft Research, UK – Model-based machine learning
Traditional machine learning is characterised by a bewildering variety of techniques, such as logistic regression, support vector machines, neural networks, Kalman filters, and many others, as well as numerous variants of these. Each has its own merits, and each has its own associated algorithms for fitting adjustable parameters to a training data set. Selecting an appropriate technique can be difficult, and adapting it to a specific application requires detailed understanding of that technique and involves corresponding modifications to the source code.
In recent years there has been a growing interest in a simpler, yet much more powerful, paradigm called model-based machine learning. This allows a very broad range of machine learning models to be specified compactly within a simple development environment. Training the model becomes a task in probabilistic inference that is decoupled from the specification of the model itself, and hence can be automated. The majority of standard techniques correspond to specific choices for the model and arise naturally as special cases, while variants of these techniques to suit specific applications are easily constructed, and alternative related structures can readily be compared. Newcomers to the field of machine learning need only understand the model specification environment in order to gain access to a huge range of models. The model-based approach to machine learning is particularly powerful when enabled through a probabilistic programming language.
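The decoupling of model specification from inference can be illustrated with a deliberately tiny sketch (our construction, not a real probabilistic programming system): the inference engine below is generic, and a model is supplied purely as a prior and a likelihood, so swapping models requires no change to the engine.

```python
import math

def grid_posterior(prior, likelihood, grid, data):
    """Generic inference engine: posterior over `grid` by brute-force Bayes.
    Knows nothing about any particular model."""
    post = [prior(p) * math.exp(sum(math.log(likelihood(x, p)) for x in data))
            for p in grid]
    z = sum(post)
    return [w / z for w in post]

# Model: a coin with unknown bias, uniform prior -- specified declaratively,
# then handed to the unchanged engine above.
grid = [i / 100 for i in range(1, 100)]
coin_post = grid_posterior(
    prior=lambda p: 1.0,
    likelihood=lambda x, p: p if x == 1 else 1 - p,
    grid=grid,
    data=[1, 1, 0, 1, 1, 1, 0, 1],   # 6 heads, 2 tails
)
mean_bias = sum(p * w for p, w in zip(grid, coin_post))
print(f"posterior mean bias = {mean_bias:.2f}")
```

Real probabilistic programming languages replace the brute-force grid with scalable inference algorithms, but the division of labour — declarative model, automated inference — is the same.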
Chris Bishop is a Distinguished Scientist at Microsoft Research Cambridge, where he leads the Machine Learning and Perception group. He is also Professor of Computer Science at the University of Edinburgh, and Vice President of the Royal Institution of Great Britain. He is a Fellow of the Royal Academy of Engineering, a Fellow of the Royal Society of Edinburgh, and a Fellow of Darwin College Cambridge. His research interests include probabilistic approaches to machine learning, as well as their practical application. Chris is the author of the leading textbook "Neural Networks for Pattern Recognition" (Oxford University Press, 1995) which has over 16,000 citations, and which helped to bring statistical concepts into the mainstream of the machine learning field. His latest textbook "Pattern Recognition and Machine Learning" (Springer, 2006) has over 5,000 citations, and has been widely adopted. In 2008 he presented the 180th series of annual Royal Institution Christmas Lectures, with the title "Hi-tech Trek: the Quest for the Ultimate Computer", to a television audience of close to 5 million.
Professor Zoubin Ghahramani, University of Cambridge, UK – Nonparametric probabilistic modelling
Uncertainty, data, and inference play a fundamental role in modelling. Probabilistic approaches to modelling have transformed scientific data analysis, artificial intelligence and machine learning, and have made it possible to exploit the many opportunities arising from the recent explosion of big data in the sciences, society and commerce. Once a probabilistic model is defined, Bayesian statistics (which used to be called "inverse probability") can be used to make inferences and predictions from the model. Bayesian methods work best when they are applied to models that are flexible enough to capture the complexity of real-world data. Recent work on non-parametric Bayesian machine learning provides this flexibility. I will touch upon some of our latest work in this area, including new models for time series and for social and biological networks.
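A flavour of the non-parametric idea (an illustrative sketch, not a model from the talk): the Chinese restaurant process, the partition prior underlying Dirichlet-process mixtures, lets the number of clusters grow with the data rather than being fixed in advance.

```python
import random

# Chinese restaurant process: a draw from a Dirichlet-process prior over
# partitions.  Customer i joins an existing table k with probability
# proportional to its size, or starts a new table with probability
# proportional to alpha -- so the number of tables is not fixed in advance.
random.seed(3)

def crp(n, alpha):
    tables = []                      # tables[k] = number of customers at table k
    for i in range(n):
        r = random.uniform(0, i + alpha)
        acc = 0.0
        for k, size in enumerate(tables):
            acc += size
            if r < acc:              # join existing table k
                tables[k] += 1
                break
        else:                        # start a new table
            tables.append(1)
    return tables

tables = crp(500, alpha=2.0)
print(f"{len(tables)} clusters for 500 points")
```

The expected number of tables grows only logarithmically with the number of customers, which is how such priors stay parsimonious while remaining open-ended.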
Zoubin Ghahramani is Professor of Information Engineering at the University of Cambridge, UK, and is also Associate Research Professor of Machine Learning at Carnegie Mellon University, USA. His current research focus is on Bayesian approaches to statistical machine learning, with applications to bioinformatics, econometrics, and large-scale data modelling. He has over 200 publications in fields such as computer science, statistics, engineering, and neuroscience.
He has served on the editorial boards of several leading journals in the field, including JMLR, JAIR, Annals of Statistics, Machine Learning, Bayesian Analysis, and was Associate Editor in Chief of IEEE Transactions on Pattern Analysis and Machine Intelligence, the IEEE's highest impact journal. He also served on the Board of the International Machine Learning Society, and as Program Chair (2007) and General Chair (2011) of the International Conference on Machine Learning. More information can be found at http://learning.eng.cam.ac.uk/zoubin/
Professor Hod Lipson, Cornell University, USA – Distilling natural laws from experimental data: from particle physics to computational biology
Can machines discover scientific laws automatically? For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena in nature. Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. This talk will outline a series of recent research projects, starting with self-reflecting robotic systems, and ending with machines that can formulate hypotheses, design experiments, and interpret the results, to discover new scientific laws. While the computer can discover new laws, will we still understand them? Our ability to have insight into science may not keep pace with the rate and complexity of automatically-generated discoveries. Are we entering a post-singularity scientific age, where computers not only discover new science, but now also need to find ways to explain it in a way that humans can understand? We will see examples from art to architecture, from psychology to cosmology, from big science to small science.
Hod Lipson is an Associate Professor of Mechanical & Aerospace Engineering and Computing & Information Science at Cornell University in Ithaca, NY. He directs the Creative Machines Lab, which focuses on novel ways for automatic design, fabrication and adaptation of virtual and physical machines. He has led work in areas such as evolutionary robotics, multi-material functional rapid prototyping, machine self-replication and programmable self-assembly. Lipson received his Ph.D. from the Technion - Israel Institute of Technology in 1998, and continued to a postdoc at Brandeis University and MIT. His research focuses primarily on biologically-inspired approaches, as they bring new ideas to engineering and new engineering insights into biology. For more information visit http://www.mae.cornell.edu/lipson.
Professor Mark Girolami, University College London, UK – Statistical inference for Markov jump process models via differential geometric Monte Carlo methods and the linear noise approximation
Bayesian analysis for Markov jump processes is a non-trivial and challenging problem. Although exact inference is theoretically possible, it is computationally demanding and thus its applicability is limited to a small class of problems. In this talk we describe the application of Riemann manifold MCMC methods using an approximation to the likelihood of the Markov jump process which is valid when the system modelled is near its thermodynamic limit. The proposed approach is both statistically and computationally efficient, while the convergence rate and mixing of the chains allow for fast MCMC inference. The methodology is evaluated using numerical simulations on two problems from chemical kinetics and one from systems biology.
Mark Girolami holds the Chair of Statistics in the Department of Statistical Science at University College London (UCL). He is also Director of the Centre for Computational Statistics and Machine Learning at UCL, and holds a Professorial position in the Department of Computer Science at UCL. Prior to joining UCL Mark held the Chair of Computing and Inferential Science at the University of Glasgow. In 2011 he was elected to the Fellowship of the Royal Society of Edinburgh.
Professor Robert Palmer, University of Oklahoma, USA – Chair of Session 4
Robert Palmer received the Ph.D. degree in electrical engineering from the University of Oklahoma in 1989. From 1989 to 1991, he was a JSPS Postdoctoral Fellow with the Radio Atmospheric Science Center, Kyoto University, Japan, where his major accomplishment was the development of novel interferometric radar techniques for studies of atmospheric turbulent layers. After his stay in Japan, Dr. Palmer was with the Physics and Astronomy Department of Clemson University, South Carolina. From 1993 to 2004, he was a part of the faculty of the Department of Electrical Engineering, University of Nebraska, where his interests broadened into areas including wireless communications, remote sensing, and pedagogy. He currently holds the Tommy C. Craighead Chair with the School of Meteorology, University of Oklahoma (OU), and serves as Director of OU’s Atmospheric Radar Research Center. Since coming to OU, his research interests have focused on the application of advanced radar signal processing techniques to observations of severe weather, particularly related to phased-array radars and other innovative system designs. He has published widely in the area of radar remote sensing of the atmosphere, with an emphasis on generalized imaging problems, spatial filter design, and clutter mitigation using advanced array/signal processing techniques.
Professor Stephen Roberts, University of Oxford, UK – Sequential non-parametric Bayesian inference: approaches and applications
This talk will focus on Bayesian inference algorithms built around the elegant formalism of non-parametric models, in particular Gaussian processes. It will first introduce Gaussian processes for time-series inference problems and then extend this to consider the role of domain knowledge in the models. We show how intuitive extensions allow us to tackle many of the problems faced in time-series modelling, including forecasting, observation scheduling and changepoint detection. Examples are given from a variety of practical domains, including multi-sensor weather forecasting and astrophysical time-series modelling.
Stephen Roberts is Professor of Engineering Science at the University of Oxford, where he leads the Pattern Analysis and Machine Learning Research Group (www.robots.ox.ac.uk/~parg). His main research interests lie in the application and development of mathematical methods in data analysis and data-driven machine learning, in particular statistical learning and inference and their application to complex problems in heterogeneous information fusion. His early contributions to the field include Bayesian models for real-time data modelling and signal processing. More recent research has focused on non-parametric Bayesian models for multi-sensor data fusion, global optimisation, complex systems, game theory and network analysis. Particular emphasis is placed on the real-world applications of advanced theory, and over many years he has applied these statistical methods to diverse problems in astrophysics, biology, finance and engineering.
Dr Michael Hedlin, Scripps Institution of Oceanography, University of California, San Diego, USA
The study of atmospheric phenomena using seismic networks
Although seismic networks have been used for decades to study earthquakes and probe the structure of the Earth’s interior, they also record atmospheric phenomena, presumably through the acoustic-to-seismic coupling phenomenon. We have analyzed broadband seismic data from the 400-station USArray Transportable Array to create a catalogue of infrasonic sources or “skyquakes” in the western United States. The network detected and located several hundred skyquakes each year, many of which were not observed by regional infrasonic arrays likely due to the effects of wind noise on infrasonic microphones. A large-scale study of the detection statistics of these events demonstrates the influence of seasonal reversals of zonal stratospheric winds on infrasonic propagation. We use well-constrained explosions from the catalog to test propagation algorithms and 3D atmospheric velocity models. The seismic waveforms reveal in unprecedented detail the spread of the infrasound wavefield across the Earth’s surface within 1000 km of the source, including the penetration of sound into predicted geometric shadow zones. The seismic waveforms also consistently show long-lived packets of energy from these impulsive atmospheric sources. Infrasonic ray trace modelling of the observed arrival times may suggest that both the sound penetration and the extended duration of the signal packets are due to interaction of the infrasound wavefield with atmospheric internal gravity waves.
Michael Hedlin is the head of the Laboratory for Atmospheric Acoustics at the University of California. His research interests include probing atmospheric phenomena and large- and small-scale atmospheric structure using dense seismic and barometric network recordings of infrasound. Dr Hedlin also conducts research in nuclear and hazard monitoring using infrasound and seismic data. To support this research, Dr Hedlin and a colleague recently spearheaded the conversion of the USArray Transportable Array to a broadband seismo-acoustic network with the addition of a suite of barometers to each station. This unprecedented network comprises 400 stations on a Cartesian grid spanning 2,000,000 square km in the continental United States.
Michael Hedlin has authored or co-authored approximately 60 articles in scientific journals, including papers in Nature and Science. He is a member of the Global Seismographic Network Standing Committee and editor-in-chief of InfraMatics.
Professor Sofia Olhede, University College London, UK
Multivariate oscillations
We develop a geometric understanding of the modulated multivariate oscillation, starting from a review of the univariate, bivariate and basic multivariate modulated oscillations. We show that in higher dimensions the modulated multivariate oscillation can always, irrespectively of the dimensionality of the observed signal, instantaneously be described as a linearly, circularly or elliptically polarized signal using a set of complex vectors in conjunction with a single complex-valued signal. The evolution of this representation needs careful modelling. We show how the instantaneous rates of change of the signal can conveniently be represented as an evolution of the oscillatory structure across time, coupled with alterations of the multivariate relationships (or geometry) between the multiple signals. We describe how to calculate an intrinsic representation of the oscillation independent from the observational axes.
We show how the global dimensionality of the signal is built up from all its local one dimensional contributions, and introduce the purely unidirectional signal, to quantify how different any given signal is from the closest purely unidirectional signal. We illustrate the properties of the derived representation of the multivariate signal with synthetic and real-world data examples, and conclude with some discussion of outstanding problems of oscillatory representations.
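For the bivariate case, the polarization language used above can be made concrete with a short sketch. The snippet below forms the two counter-rotating analytic components of a bivariate pair (z = x + iy = p + q*) via the Hilbert transform and reads off the instantaneous ellipse semi-axes as |p|+|q| and ||p|−|q||. This is a standard rotary decomposition offered as illustration, not a reproduction of the speaker's multivariate construction.

```python
import numpy as np
from scipy.signal import hilbert

def ellipse_params(x, y):
    """Instantaneous ellipse semi-axes of a bivariate oscillation (x(t), y(t)).

    Decomposes z = x + iy into counter-rotating analytic components
    p (counterclockwise) and q (clockwise, conjugated), from which the
    instantaneous semi-major and semi-minor axes follow directly.
    """
    xa = hilbert(x)                    # analytic signal of x
    ya = hilbert(y)                    # analytic signal of y
    p = 0.5 * (xa + 1j * ya)           # counterclockwise-rotating component
    q = 0.5 * (xa - 1j * ya)           # clockwise-rotating component
    a = np.abs(p) + np.abs(q)          # semi-major axis
    b = np.abs(np.abs(p) - np.abs(q))  # semi-minor axis
    return a, b

t = np.linspace(0, 20 * np.pi, 4000)
# Circularly polarized pair: x leads y by 90 degrees -> a = b.
a, b = ellipse_params(np.cos(t), np.sin(t))
# Linearly (unidirectionally) polarized pair: y is identically zero -> b = 0.
a_lin, b_lin = ellipse_params(np.cos(t), np.zeros_like(t))
```

A circular pair gives equal semi-axes, while a purely unidirectional signal collapses the minor axis to zero, matching the linear/circular/elliptical trichotomy described in the abstract.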
Sofia C. Olhede was born in Spanga, Sweden, in 1977. She received the MSci and PhD degrees in mathematics from Imperial College London, London, UK, in 2000 and 2003, respectively. She held the posts of Lecturer (2002–2006) and Senior Lecturer (2006–2007) with the Mathematics Department, Imperial College London, and in 2007, she joined the Department of Statistical Science, University College London, where she is the Pearson Professor of Statistics and director of research. She holds a UK Engineering and Physical Sciences Research Council Leadership fellowship in Statistics. She is best known for her work on the complex wavelet transform, and the definition of localized spatial decompositions of image properties, having introduced the monogenic wavelet transform. Her research interests include the analysis of complex-valued stochastic processes, non-stationary time series and inhomogeneous random fields, with applications in neuroscience and oceanography.
Professor Aapo Hyvarinen, University of Helsinki, Finland
Independent component analysis: recent advances
Independent component analysis (ICA) is a probabilistic method for learning a linear transform of a random vector. The goal is to find components which are maximally independent and non-Gaussian (non-normal). Its fundamental difference from classical multivariate statistical methods is the assumption of non-Gaussianity, which enables the identification of the original, underlying components, in contrast to classical methods. The basic theory of ICA was mainly developed in the 1990s and summarized, for example, in our monograph in 2001. Here, we provide an overview of recent developments in the theory since the year 2000. The main topics are: testing independent components, analysing multiple data sets (three-way data), analysis of causal relations, modelling dependencies between the components, and improved methods for estimating the basic model.
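The basic model referred to above can be illustrated with a minimal NumPy sketch of the FastICA fixed-point algorithm, one standard estimator for it: mix two non-Gaussian sources linearly, whiten, then iterate with a tanh nonlinearity and symmetric decorrelation. The sources, the mixing matrix and the iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
t = np.linspace(0, 8, n)
# Two non-Gaussian (sub-Gaussian) sources: a square wave and uniform noise.
s1 = np.sign(np.sin(3 * t))
s2 = rng.uniform(-1, 1, n)
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.5, 1.0]])   # "unknown" mixing matrix
X = A @ S                                # observed mixtures

# Whiten: zero mean, identity covariance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# FastICA fixed-point iterations with tanh nonlinearity.
W = rng.standard_normal((2, 2))
for _ in range(100):
    Y = W @ Z
    g = np.tanh(Y)
    W = (g @ Z.T) / n - np.diag((1 - g**2).mean(axis=1)) @ W
    # Symmetric decorrelation: W <- (W W^T)^(-1/2) W, via the SVD.
    u, _, vt = np.linalg.svd(W)
    W = u @ vt

S_hat = W @ Z   # recovered components, up to permutation, sign and scale
```

Because whitening removes all second-order structure, it is exactly the non-Gaussianity of the sources that lets the fixed-point iteration pin down the remaining rotation; with Gaussian sources the model would be unidentifiable.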
Aapo Hyvarinen studied undergraduate mathematics at the universities of Helsinki (Finland), Vienna (Austria), and Paris (France), and obtained a Ph.D. degree in Information Science at the Helsinki University of Technology in 1997. After further postdoctoral work at the Helsinki University of Technology, he moved to the University of Helsinki in 2003, and since 2008 he has been Professor of Computational Data Analysis there. Aapo Hyvarinen is the main author of the books "Independent Component Analysis" (2001) and "Natural Image Statistics" (2009), and author or coauthor of more than 100 scientific articles. He is Action Editor at the Journal of Machine Learning Research and Neural Computation, Editorial Board Member of Foundations and Trends in Machine Learning, and Contributing Faculty Member of Faculty of 1000 Biology. His current work concentrates on applications of unsupervised machine learning methods to neuroscience.