Signal processing for the physical sciences

Kavli Royal Society Centre, Chicheley Hall, Newport Pagnell, Buckinghamshire, MK16 9JJ


Satellite meeting organised by Dr Nick Jones and Dr Thomas Maccarone

Event details

We aim to help bring together cutting-edge methods from data analysis with pressing data challenges in the physical sciences, with a particular focus on challenges involving time-series data. This meeting develops the themes of the preceding larger meeting in a more informal setting with more opportunity for dialogue.

This meeting was preceded by a related discussion meeting, Signal processing and inference for the physical sciences, held on 26–27 March 2012.

Biographies of the organisers and speakers are available below. Audio recordings of the talks and the programme are freely available.

Event organisers


Schedule of talks

Session 1

4 talks

Accounting for calibration uncertainty in high energy astrophysics

Professor David A van Dyk, Imperial College London, UK


Cosmological signal processing

Professor Benjamin Wandelt, Institut d'Astrophysique de Paris, France


Cosmological signal processing presents a unique combination of large data sets obtained through observation (not in a laboratory), a detailed, quantitative theoretical framework in which to interpret them, and information as a precious resource, ultimately limited by the finiteness of the observable Universe. A natural consequence of this situation is that phenomenologists have become experts at solving inference problems, often in a Bayesian framework. These analysis techniques can be understood as filters which constrain the inferences to answers that respect basic physical constraints. Much progress has been made in linear processing of signals on the sky, but in order to exploit more of the finite information available to us, the field is now moving to non-linear inference. I will show examples of computationally efficient non-linear signal processing with tens of millions of constraints and parameters, explored in a statistically rigorous fashion. These techniques are used to reconstruct the initial conditions of the Universe, constrain the expansion history and constituents of the Universe, and thus move us one step closer towards resolving the fundamental questions of dark energy, dark matter and the origin of the Cosmos.
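As a toy illustration of the linear processing step mentioned in the abstract, the following sketch applies a Wiener filter to a 1-D Gaussian field; the red signal spectrum and white noise level are made-up values, not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
k = np.fft.rfftfreq(n)

# Assumed toy spectra: a red signal spectrum and flat (white) noise.
P_s = 1.0 / (1.0 + (k / 0.02) ** 2)
P_n = np.full_like(P_s, 0.1)

def draw(P):
    """Draw a real Gaussian field with (approximately) the given spectrum."""
    coeff = np.sqrt(P * n / 2) * (rng.standard_normal(P.size)
                                  + 1j * rng.standard_normal(P.size))
    return np.fft.irfft(coeff, n)

s = draw(P_s)       # "true" sky
d = s + draw(P_n)   # observed data = signal + noise

# Wiener filter: weight each mode by P_s / (P_s + P_n).
s_hat = np.fft.irfft(np.fft.rfft(d) * P_s / (P_s + P_n), n)

mse_raw = np.mean((d - s) ** 2)
mse_wf = np.mean((s_hat - s) ** 2)
print(mse_wf < mse_raw)  # the filtered map is closer to the true signal
```

The filter down-weights modes where noise dominates, which is the sense in which such linear filters "respect" the assumed statistical properties of the signal.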


Spherical signal processing for cosmology

Dr Jason McEwen, University College London, UK


Cosmological observations are inherently made on the celestial sphere; consequently, the geometry of the manifold on which these observations are made must be taken into account in subsequent analysis.  I will discuss new developments in fundamental signal processing on the sphere, including sampling theorems, wavelet theory and compressive sensing.  I will also discuss the application of these new spherical signal processing methods to analyse observations of the cosmic microwave background (CMB), the relic radiation of the Big Bang, which can be used to unlock many of the secrets of the early Universe.
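To make the harmonic-space view concrete, here is a brute-force spherical harmonic analysis of a band-limited test signal; it uses `scipy.special.sph_harm` (scipy's convention: theta = azimuth, phi = colatitude) and simple quadrature rather than the fast sampling theorems discussed in the talk, and the coefficients are assumed test values:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import sph_harm

# Grid over the sphere (scipy convention: theta = azimuth, phi = colatitude).
phi = np.linspace(0, np.pi, 200)
theta = np.linspace(0, 2 * np.pi, 400)
T, P = np.meshgrid(theta, phi)

# Band-limited test signal with known harmonic coefficients (assumed values).
f = 2.0 * sph_harm(1, 2, T, P) + 0.5 * sph_harm(0, 3, T, P)

def coeff(m, l):
    """Recover a_lm = integral of f * conj(Y_lm) over the sphere by quadrature."""
    integrand = f * np.conj(sph_harm(m, l, T, P)) * np.sin(P)
    return trapezoid(trapezoid(integrand, theta, axis=1), phi)

print(abs(coeff(1, 2) - 2.0) < 1e-2, abs(coeff(0, 3) - 0.5) < 1e-2)
```

Orthonormality of the Y_lm is what makes the coefficients recoverable; exact sampling theorems achieve this with far fewer samples than dense quadrature.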


Understanding severe weather through advances in signal processing

Professor Robert Palmer, University of Oklahoma, USA


A brief summary of the importance of severe weather research, and of Doppler radar observations in particular, is provided. General weather radar design and data processing constraints are discussed. The role of signal processing in transforming sampled radar voltages into useful information about the structure and dynamics of storms, for both meteorologists and the general public, is outlined. Major challenges, including non-stationary clutter mitigation, moisture measurement, and sensitivity enhancement, will be discussed. In addition, the future of weather radar systems will be presented with an emphasis on phased array radars, which support advanced beamforming/spatial filtering algorithms. Many of the example systems and algorithms were developed at the University of Oklahoma, and will be presented in the context of severe weather observations in the Central Plains of the USA.
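The core Doppler moment estimation step can be sketched with the standard pulse-pair estimator, which reads radial velocity off the phase of the lag-1 autocorrelation of the slow-time voltages; all parameter values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
wavelength = 0.1   # 10 cm radar wavelength (illustrative)
prt = 1e-3         # pulse repetition time, s
v_true = 12.0      # radial velocity of the scatterers, m/s

# Simulated slow-time voltages at one range gate: Doppler phasor + noise.
m = 64
f_d = -2.0 * v_true / wavelength   # Doppler shift for a receding target
t = np.arange(m) * prt
v = np.exp(2j * np.pi * f_d * t)
v += 0.1 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

# Pulse-pair estimator: velocity from the phase of the lag-1 autocorrelation.
r1 = np.mean(v[1:] * np.conj(v[:-1]))
v_est = -wavelength / (4 * np.pi * prt) * np.angle(r1)
print(abs(v_est - v_true) < 0.5)  # True: the estimate is close to 12 m/s
```

The same autocorrelation also yields spectrum width, which is why pulse-pair processing is the workhorse of operational weather radar.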


Session 2

3 talks

Atmospheric radar and statistical signal processing

Professor John Sahr, University of Washington, USA


Radar observation of atmospheric targets challenges the experimentalist. There is an intricate interplay between the precise illumination of the target and the subsequent signal analysis. Small scattering cross sections often necessitate the use of pulse compression waveforms. The radar target frequently prohibits successful interrogation with periodic sampling due to large range extent and large Doppler content. Ingenious waveforms and algorithms have been developed to overcome these challenges, many of which are recognizable as temporal extensions of techniques developed by radio astronomers. The past decade's progress has been enabled by startling advances in high-speed, affordable computational resources. Also, digital receivers now provide extraordinary data quality very early in the receiver chain, and high-speed internet connectivity is lowering the cost and technical barriers to multistatic active and passive radar systems.
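Pulse compression, mentioned above, amounts to matched filtering of the received signal against the transmitted waveform; this sketch compresses a linear FM chirp (illustrative parameters) to recover a weak echo's delay:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1e6                         # sample rate, Hz (illustrative)
T, B = 100e-6, 200e3             # pulse length and chirp bandwidth
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)   # linear FM pulse

# Weak echo delayed by 250 samples, buried in noise.
rx = np.zeros(1024, dtype=complex)
rx[250:250 + chirp.size] += 0.1 * chirp
rx += 0.05 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))

# Pulse compression = matched filtering against the transmitted waveform
# (np.correlate conjugates its second argument, as a matched filter requires).
mf = np.abs(np.correlate(rx, chirp, mode="valid"))
print(np.argmax(mf))  # → 250, the true delay
```

The compressed pulse has the range resolution of the full bandwidth B while keeping the energy of the long pulse, which is exactly what small scattering cross sections demand.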


Highly comparative time-series analysis

Mr Ben Fulcher, University of Oxford, UK


Scientists measure, record, and analyze the dynamics of diverse systems, including the stock market, living cells, heart rates, and Earth's climate system. But despite this existing wealth of interdisciplinary time-series data, and of methods and models for analyzing it, an extensive organization of scientific time series and their analysis methods has never been performed. In this talk, I will describe the structure in a collection of over 35,000 pieces of scientific time-series data and over 9,000 associated time-series analysis methods that we have assembled. Analysis methods are organized using their behaviour on empirical time series; time-series datasets are organized according to their measured properties. We show how redundancy in our collection of scientific time-series analysis methods can be exploited to form a reduced set that can be used to compare diverse types of time series meaningfully. As well as presenting broad results on the structure of empirical time series and their methods, I will also demonstrate the broad scientific utility of a set of tools for addressing specific time-series analysis tasks, including the selection of useful models and metrics for datasets of electroencephalograms, self-affine time series, heart beat intervals, and speech signals. Lastly, I will show how the dimensionality of time-series datasets generated from models with a small number of parameters can be estimated, and how one can deduce estimates of these parameters.
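The idea of representing each time series by a vector of summary features can be sketched in miniature; the two features and the two synthetic classes below are illustrative stand-ins for the thousands of features and datasets in the actual collection:

```python
import numpy as np

rng = np.random.default_rng(3)

def features(x):
    """A tiny feature vector: lag-1 autocorrelation and skewness."""
    x = (x - x.mean()) / x.std()
    return np.array([np.mean(x[:-1] * x[1:]), np.mean(x ** 3)])

def ar1(n, phi=0.9):
    """A smooth AR(1) process, one of many possible generative classes."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.standard_normal()
    return x

noise = np.array([features(rng.standard_normal(1000)) for _ in range(20)])
ar = np.array([features(ar1(1000)) for _ in range(20)])

# The autocorrelation feature alone cleanly separates the two classes.
print(noise[:, 0].max() < ar[:, 0].min())  # True
```

Scaling this up — many features, many series — yields the feature matrix on which both the organization of methods (by behaviour) and of datasets (by measured properties) is built.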


Signal processing for digital acoustics

Dr Francisco Pinto, EPFL, Switzerland


Sound waves propagate through space and time by transference of energy between the particles in the medium, which vibrate according to the oscillation patterns of the waves. These vibrations can be captured by a microphone and translated into a digital signal, representing the amplitude of the sound pressure as a function of time. The signal obtained by the microphone characterizes the time-domain behavior of the acoustic wave field, but has no information related to the spatial domain. The spatial information can be obtained by measuring the vibrations with an array of microphones distributed at multiple locations in space. This allows the amplitude of the sound pressure to be represented not only as a function of time but also as a function of space. The goal of this work is to provide a formulation of Fourier theory that treats the wave field as a single function of space and time, and allows it to be processed as a multidimensional signal using the theory of digital signal processing (DSP).
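Treating the recorded wave field as one function of space and time leads directly to a wavenumber-frequency spectrum via a multidimensional Fourier transform; this sketch (with assumed array and wave parameters) shows a plane wave appearing as a single peak on the line |f| = c |k|:

```python
import numpy as np

c, f0 = 343.0, 1000.0            # speed of sound and wave frequency (assumed)
n_mics, n_t = 64, 256
dx, dt = 0.05, 1.0 / 8000.0      # mic spacing (m) and sample period (s)

x = np.arange(n_mics) * dx
t = np.arange(n_t) * dt
# Plane wave travelling along the array: p(x, t) = cos(2π f0 (t - x/c)).
p = np.cos(2 * np.pi * f0 * (t[None, :] - x[:, None] / c))

# 2-D Fourier transform over space and time → wavenumber-frequency spectrum.
P = np.abs(np.fft.fft2(p))
kx = np.fft.fftfreq(n_mics, dx)  # spatial frequency, cycles/m
f = np.fft.fftfreq(n_t, dt)      # temporal frequency, Hz

i, j = np.unravel_index(np.argmax(P), P.shape)
c_est = abs(f[j] / kx[i])        # the peak lies on the line |f| = c |kx|
print(abs(f[j]), round(c_est))   # peak at ~1000 Hz; speed within a few per cent of c
```

Filtering in this (k, f) domain is what lets spatio-temporal DSP separate waves by direction and speed, not just by frequency.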


Session 3

3 talks

Collaborative estimation

Dr Onkar Dabeer, Tata Institute of Fundamental Research, India


The talk has two parts, both involving pooling of data from different sources to improve the estimation task at hand. In particular, I will emphasize the modeling aspect in both parts, which may be of interest in physical sciences.

     1. In e-commerce, we often have access to ratings given by users for many of the items they have bought/experienced. In collaborative filtering, we pool together rating data from different users about different items and use it to make item recommendations for users. We propose a mathematical model to study this problem, identify fundamental performance limits for the model, exhibit schemes that achieve these limits, and test their performance on real data.

     2. We consider a collection of prediction experiments, where several experiments may share the same regression parameters (and we know which experiments are similar). By pooling data across experiments, we hope to do better. In this talk, I will show an application of this framework and discuss some methods to solve the problem.
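The collaborative filtering setting of part 1 can be caricatured with a generic low-rank completion sketch; the low-rank preference model and the hard-impute SVD loop are common textbook assumptions, not necessarily the model proposed in the talk:

```python
import numpy as np

rng = np.random.default_rng(4)
n_users, n_items, rank = 50, 40, 2

# Ground-truth low-rank preference matrix (a common modelling assumption).
R = rng.standard_normal((n_users, rank)) @ rng.standard_normal((rank, n_items))
mask = rng.random(R.shape) < 0.3          # only 30% of ratings observed

# Simple completion: alternate filling unknowns with the current estimate
# and truncating back to low rank (hard-impute SVD).
est = np.zeros_like(R)
for _ in range(100):
    filled = np.where(mask, R, est)
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    est = (U[:, :rank] * s[:rank]) @ Vt[:rank]

rel_err = np.linalg.norm((est - R)[~mask]) / np.linalg.norm(R[~mask])
print(rel_err < 0.5)  # unseen ratings are predicted far better than chance
```

Pooling is what makes this work: no single user's ratings determine the missing entries, but the shared low-rank structure across users does.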


Extra-solar planets via a Bayesian multi-planet periodogram

Professor Phil Gregory, University of British Columbia, Canada


A remarkable array of new ground-based and space-based astronomical tools is providing astronomers access to other solar systems. Over 700 planets have been discovered to date, including several super-Earths in the habitable zone. These successes on the part of the observers have spurred a significant effort to improve the statistical tools for analyzing data in this field.

I will describe a Bayesian multi-planet Kepler periodogram based on a new fusion Markov chain Monte Carlo algorithm which incorporates parallel tempering, simulated annealing and genetic crossover operations. Each of these features facilitates the detection of a global minimum in chi-squared in a multi-modal environment. By combining all three, the algorithm greatly increases the probability of realizing this goal.

The fusion MCMC is controlled by a unique two-stage adaptive control system that automates the tuning of the proposal distributions for efficient exploration of the model parameter space, even when the parameters are highly correlated. This controlled fusion MCMC algorithm is implemented in Mathematica using parallelized code and run on an 8-core PC. It is designed to be a very general tool for nonlinear model fitting. The performance of the algorithm will be illustrated with some recent successes in the exoplanet field, where it has facilitated the detection of a number of new planets.
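The parallel tempering ingredient can be sketched with a bare-bones 1-D sampler on a toy bimodal target; the fusion MCMC additionally uses simulated annealing, genetic crossover and adaptive control, none of which are shown here, and the temperature ladder is an assumed example:

```python
import numpy as np

rng = np.random.default_rng(5)

def log_post(x):
    """Toy bimodal target standing in for a multi-modal exoplanet posterior."""
    return np.logaddexp(-0.5 * ((x + 4) / 0.3) ** 2,
                        -0.5 * ((x - 4) / 0.3) ** 2)

betas = [1.0, 0.3, 0.1, 0.03]           # temperature ladder (assumed values)
x = np.zeros(len(betas))
cold = []

for step in range(20000):
    # Metropolis update for each chain at its own temperature.
    for i, b in enumerate(betas):
        prop = x[i] + rng.normal(0.0, 1.0)
        if np.log(rng.random()) < b * (log_post(prop) - log_post(x[i])):
            x[i] = prop
    # Propose swapping the states of a random adjacent pair of temperatures.
    i = int(rng.integers(len(betas) - 1))
    a = (betas[i] - betas[i + 1]) * (log_post(x[i + 1]) - log_post(x[i]))
    if np.log(rng.random()) < a:
        x[i], x[i + 1] = x[i + 1], x[i]
    cold.append(x[0])

cold = np.array(cold[2000:])
# The cold chain visits both modes; a single-temperature sampler would
# almost never cross the barrier between them.
print((cold < -2).any() and (cold > 2).any())  # True
```

Hot chains see a flattened posterior and cross between modes freely; the swap moves then carry those mode changes down to the cold chain, which is what makes multi-modal exploration tractable.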


From astrophysics to fusion plasmas: signal processing and system optimization analysis for ITER

Dr Duccio Testa, Ecole Polytechnique Fédérale de Lausanne, Switzerland


Efficient, real-time and unsupervised data analysis is one of the key elements for achieving scientific success in complex engineering and physical systems, three examples of which are the currently operating Joint European Torus (JET), the soon-to-be-built International Thermonuclear Experimental Reactor (ITER), and the Square Kilometre Array (SKA) telescope.

There is a wealth of signal processing techniques that are being applied to data analysis in such complex systems, and here we wish to present some examples of the synergies that can be exploited when combining ideas and methods from different fields, such as astronomy and astrophysics and thermonuclear fusion plasmas.

One problem which is common to these subjects is the determination of pulsation modes from irregularly sampled time series. We have used recent signal processing techniques from astronomy and astrophysics, based on the Sparse Representations of Signals, to solve current questions arising in thermonuclear fusion plasmas. Two examples are the detection of magneto-hydrodynamic instabilities, which is now performed routinely at JET in real time on a sub-millisecond time-scale, and the studies leading to the optimization of the magnetic diagnostic system in ITER. These questions have been solved by formulating them as inverse problems, despite the fact that these application areas are extremely different from the classical uses of Sparse Representations, from both the theoretical and computational points of view.
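The mode-determination problem from irregular samples can be sketched with a greedy least-squares flavour of sparse spectral estimation; the frequencies, noise level, and the simple two-atom greedy loop are all illustrative, not the specific algorithms used at JET:

```python
import numpy as np

rng = np.random.default_rng(6)

# Irregularly spaced sample times, as in gappy diagnostic acquisition.
t = np.sort(rng.uniform(0, 10, 200))
y = np.sin(2 * np.pi * 1.7 * t) + 0.5 * np.sin(2 * np.pi * 3.2 * t)
y += 0.1 * rng.standard_normal(t.size)

freqs = np.arange(0.05, 5.0, 0.01)   # candidate mode frequencies, Hz

def fit(res, f):
    """Least-squares fit of a single sinusoid at frequency f (Lomb-Scargle-like)."""
    M = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(M, res, rcond=None)
    model = M @ coef
    return np.sum(model ** 2), model

# Greedy sparse recovery: repeatedly pick the frequency explaining the most
# energy and subtract its fit from the residual.
found, r = [], y.copy()
for _ in range(2):
    k = int(np.argmax([fit(r, f)[0] for f in freqs]))
    found.append(freqs[k])
    r = r - fit(r, freqs[k])[1]

print(sorted(np.round(found, 2)))  # close to the true mode frequencies 1.7 and 3.2
```

Because the sampling is irregular, there is no FFT shortcut; sparsity in the frequency dictionary is what makes the inverse problem well posed.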

Requirements, prospects and ideas for the signal processing and real-time data analysis applications of this method to routine operation of ITER and of the SKA telescope will be discussed.

Finally, we will conclude with an example of a potential application of the Sparse Representation method to the analysis of electrical prospections (using the so-called Schlumberger diagram) in an Etruscan necropolis and in an Etruscan fortress town located close to Rome, both sites dating from around the fifth century BC.



P Blanchard, A Fasoli, J B Lister, Ecole Polytechnique Fédérale de Lausanne, Switzerland.

S Bourguignon, Institut de Recherche en Communications et Cybernétique, France

H Carfantan, Université de Toulouse, France

A Goodyear, Culham Centre for Fusion Energy, UK

G Vayakis, ITER organization, France

P Blanchard, Ecole Polytechnique Fédérale de Lausanne, Switzerland and JET-EFDA Close Support Unit, Culham Science Centre, UK

A Klein, formerly Massachusetts Institute of Technology, USA.

T Panis, formerly Ecole Polytechnique Fédérale de Lausanne, Switzerland

JET-EFDA contributors; see the Appendix of F Romanelli et al, Nuclear Fusion 51 (2011) 094008 (Proceedings of the 23rd IAEA Fusion Energy Conference 2010, Daejeon, Korea)

The Gruppo Archeologico Romano, Rome section of the Gruppi Archeologici d’Italia


Session 4

3 talks

From Maxwell’s equations to efficient filter flow and its application to blind image deconvolution

Dr Michael Hirsch, University College London, UK and Max Planck Institute for Intelligent Systems, Germany


Digital image restoration is a key area in signal and image processing due to its many applications in both scientific imaging and everyday photography. An important sub-discipline, which is receiving ever-increasing interest from the academic as well as the industrial world, is the field of image deconvolution, which enjoys this interest due to both its theoretical and practical implications. While classical or non-blind image deconvolution aims at restoring a sharp latent image assuming the blur is known, blind image deconvolution addresses the much harder but also more realistic case where the degradation is unknown. An estimate of the original image must be obtained using only its blurred and possibly noise-corrupted observations.

Blind image deconvolution involves many challenging problems, including modeling the image formation process, formulating tractable priors incorporating generic image statistics, and devising efficient methods for optimization. This renders it an intriguing but also intricate task, which has recently seen much attention as well as progress in the image and signal processing communities as well as in computer vision and graphics.

In this context, we present a mathematically sound and physically well-motivated framework which allows expressing and efficiently computing spatially-varying blur [1]. We derive our “Efficient Filter Flow” framework as a discrete approximation of the incoherent imaging equation and devise expressions for its efficient implementation using the short-time Fourier transform [2]. By extending the commonly employed invariant blur model, our framework substantially broadens the application range of blind deconvolution methods.
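The underlying idea can be caricatured in 1-D: blur each windowed region with its own filter and overlap-add the results, using windows that sum to one. This is only a sketch of the general principle with made-up kernels and windows, not the paper's actual STFT-based implementation:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 256
x = rng.random(n)                     # a 1-D "image" for brevity

def gauss(sigma, width=21):
    u = np.arange(width) - width // 2
    g = np.exp(-0.5 * (u / sigma) ** 2)
    return g / g.sum()

kernels = [gauss(4.0), gauss(1.0)]    # wide blur on the left, narrow on the right

# Smooth windows that sum to one: each region is blurred with its own filter
# and the pieces are blended back together (the overlap-add idea).
w = np.ones(n)
w[96:160] = 0.5 * (1 + np.cos(np.linspace(0, np.pi, 64)))
w[160:] = 0.0
windows = [w, 1.0 - w]

blurred = sum(np.convolve(win * x, kern, mode="same")
              for win, kern in zip(windows, kernels))

# Far from the transition, the result matches the corresponding invariant blur.
left = np.convolve(x, kernels[0], mode="same")
right = np.convolve(x, kernels[1], mode="same")
print(np.allclose(blurred[:20], left[:20]), np.allclose(blurred[-20:], right[-20:]))
```

Because each patch is an ordinary convolution, the whole spatially-varying operator inherits FFT-speed evaluation, which is what makes the model practical inside blind deconvolution loops.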

In a number of challenging real-world applications we demonstrate both the validity and versatility of our approach. In particular, we utilise our model for reconstructing a sharp latent image from a sequence of short-exposure images degraded by atmospheric turbulence [1]. To capitalise on the abundance of data available in astronomical imaging, we develop a blind deconvolution algorithm, which bypasses the computational burden of current blind deconvolution methods that are restricted in the number of observations they can process [3].

Another challenging application which proves the usefulness of our framework is the problem of removing camera shake from a single image. We extend our model to incorporate the particularities of camera shake and develop an efficient algorithm that outperforms state-of-the-art methods in both restoration quality and computation time [4].

Finally, we show how our framework, combined with a simple measurement procedure, can be used to substantially improve the quality of images taken with photographic lenses that suffer severe optical aberrations [5].


[1] M Hirsch, S Sra, B Schölkopf and S Harmeling. Efficient Filter Flow for Space-Variant Multiframe Blind Deconvolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010.

[2] T Stockham Jr. High-speed convolution and correlation. In Proceedings of the 1966 Spring Joint Computer Conference, pages 229–233. ACM, 1966.

[3] M Hirsch, S Harmeling, S Sra and B Schölkopf. Online Multi-frame Blind Deconvolution with Super-resolution and Saturation Correction. Astronomy and Astrophysics, 531, A9, 2011.

[4] M Hirsch, C J Schuler, S Harmeling and B Schölkopf. Fast Removal of Non-uniform Camera Shake. In Proceedings of the 13th International Conference on Computer Vision (ICCV), 2011.

[5] C J Schuler, M Hirsch, S Harmeling and B Schölkopf. Non-stationary Correction of Lens Aberrations. In Proceedings of the 13th International Conference on Computer Vision (ICCV), 2011.


Polynomial matrix factorization for broadband adaptive signal processing

Professor John McWhirter FRS FREng, Cardiff University, UK


This talk will outline some of my recent research into techniques for factorizing polynomial matrices. It will present algorithms for computing the polynomial matrix eigenvalue decomposition (PEVD), QR decomposition (PQRD) and singular value decomposition (PSVD). Polynomial matrices play an important role in the context of broadband sensor array signal processing including, for example, multiple-input multiple-output (MIMO) systems for wireless communications. In this case, the (i, j) element of the matrix is simply a polynomial representing the transfer function of the propagation channel from the ith transmitter to the jth receiver. In the talk, I will show how these new polynomial matrix factorization techniques have been successfully applied to numerical models for MIMO communications and also in the context of underwater acoustic surveillance arrays.
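As a minimal illustration of the algebra involved (not of the PEVD/PQRD/PSVD algorithms themselves), a polynomial matrix can be stored as an array of coefficient matrices, with matrix multiplication becoming a convolution along the degree axis; correctness is checked by evaluating both sides at an arbitrary point:

```python
import numpy as np

rng = np.random.default_rng(8)

def polymat_mult(A, B):
    """Multiply polynomial matrices stored as arrays of shape (deg+1, p, q):
    the (i, j) entry is a polynomial whose coefficients are A[:, i, j]."""
    da, p, q = A.shape
    db, q2, r = B.shape
    assert q == q2
    C = np.zeros((da + db - 1, p, r))
    for k in range(da):
        for l in range(db):
            C[k + l] += A[k] @ B[l]   # convolution of coefficient matrices
    return C

def evaluate(A, z):
    """Evaluate the polynomial matrix at a scalar point z."""
    return sum(A[k] * z ** k for k in range(A.shape[0]))

A = rng.standard_normal((3, 2, 2))   # a 2x2 matrix of degree-2 polynomials
B = rng.standard_normal((4, 2, 2))   # a 2x2 matrix of degree-3 polynomials
z0 = 0.7
lhs = evaluate(polymat_mult(A, B), z0)
rhs = evaluate(A, z0) @ evaluate(B, z0)
print(np.allclose(lhs, rhs))  # True
```

Iterative factorization algorithms such as the PEVD work on exactly this coefficient-array representation, applying sequences of elementary paraunitary operations to drive off-diagonal polynomial entries toward zero.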


Scattering representations of stochastic processes

Dr Joan Bruna, Centre de Mathématiques Appliquées, Ecole Polytechnique, France


Scattering operators cascade wavelet modulus decompositions to obtain delocalized, translation invariant signal representations which are stable to deformations.

Scattering operators provide a new spectral representation of stationary processes which characterizes high-order moments and thus captures non-Gaussian properties. In this talk we will concentrate on three families of processes: Gaussian processes, point processes and Lévy processes.

For each of them, we will show how scattering coefficients provide consistent estimates, which will allow the identification and discrimination of several properties of the process.

First, we derive an expected scattering decay for Gaussian processes, using Hermite chaos expansions and central limit theorems, and derive a weak consistency result. We then introduce variable-volatility Gaussian processes, which model long-range fluctuations at different scales. Next, we study the scattering of point processes, starting from Poisson processes. Random deformations of Poisson processes supply a rich family of point processes which incorporates geometric information. Finally, we combine these results to characterize compound point processes, leading to a characterization of the Lévy measure.
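The modulus-then-average cascade can be sketched at first order in 1-D; the crude bandpass filters and scales below are illustrative stand-ins for a proper wavelet filter bank, and circular convolution is used so that translation invariance is exact:

```python
import numpy as np

rng = np.random.default_rng(9)

def bandpass(n, xi, sigma):
    """A crude complex bandpass filter standing in for a Morlet wavelet."""
    u = np.arange(n) - n // 2
    return np.exp(1j * xi * u) * np.exp(-0.5 * (u / sigma) ** 2)

def scatter1(x, scales=(2, 4, 8, 16)):
    """First-order scattering: global average of the wavelet modulus per scale."""
    out = []
    for s in scales:
        psi = np.fft.fft(bandpass(x.size, np.pi / s, float(s)))
        u = np.abs(np.fft.ifft(np.fft.fft(x) * psi))  # circular |x * psi_s|
        out.append(u.mean())                          # averaging buys invariance
    return np.array(out)

x = rng.standard_normal(4096)
s1 = scatter1(x)
s2 = scatter1(np.roll(x, 137))
# The modulus + average cascade is invariant to translation of the input.
print(np.allclose(s1, s2))  # True
```

Cascading the same operation on the modulus signals u yields the second-order coefficients, which is where the sensitivity to high-order, non-Gaussian structure enters.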
