Satellite meeting organised by Dr Nick Jones and Dr Thomas Maccarone
We aim to help bring together cutting-edge methods from data analysis with pressing data challenges in the physical sciences. In particular, we focus on challenges involving time-series data. This meeting develops the themes of the preceding larger meeting in a more informal setting, with more opportunity for dialogue.
This meeting was preceded by a related discussion meeting, Signal processing and inference for the physical sciences, 26–27 March 2012.
Biographies of the organisers and speakers are available below. Audio recordings are freely available and the programme can be downloaded here.
Dr Nick Jones, University of Oxford, UK
Nick Jones, Imperial Mathematics, works on topics relating to the designed disordered world around us. This concerns both how we should perform inference about the systems around us and how they in turn perform inference themselves.
Dr Thomas Maccarone, University of Southampton, UK
Tom Maccarone works across a broad range of topics in astrophysics, but he received his PhD at Yale University for a thesis on the variability of X-ray emission from accretion flows onto black holes and neutron stars.
It was from this work that his interest in signal processing began, as these systems present a rich phenomenology of variability, much of which is poorly understood. He hopes this meeting will cross-fertilize time-domain astrophysics with new techniques to apply to old problems. After his PhD he moved to Europe, taking on postdoctoral fellowships at the Scuola Internazionale Superiore di Studi Avanzati and the University of Amsterdam, before taking up a faculty position at the University of Southampton.
Professor Benjamin Wandelt, Institut d'Astrophysique de Paris, France
Cosmological signal processing
Professor Wandelt received his PhD in astrophysics and theoretical physics from Imperial College, London. He worked as a postdoctoral fellow at the Theoretical Astrophysics Centre in Copenhagen from 1997 to 1999, and as a research associate at Princeton University from 1999 to 2001. He joined the faculty of the Departments of Physics and Astronomy at the University of Illinois in 2001, and received tenure in 2006. In January 2010 he was named International Chair at the Université Pierre et Marie Curie (Paris VI) and the Institut d'Astrophysique de Paris, and in June 2010 he was awarded an Excellence Chair by the Agence Nationale de Recherche. Since 2011 he has been a founder and co-director of the Initiative in Cosmology and Astroparticle Physics at IAP. He has been recognized by international awards such as the Bessel prize and the Sofja Kovalevskaja award, and has been granted Planck Scientist status within the Planck mission. His research connects questions in fundamental physics with astronomical data on scales ranging from the inner halos of galaxies to the largest scales accessible to observations.
Cosmological signal processing presents a unique combination of large data sets obtained through observations (not in a laboratory), a detailed, quantitative theoretical framework in which to interpret them, and a setting in which information is a precious resource, ultimately limited by the finiteness of the observable Universe. A natural consequence of this situation is that phenomenologists have become experts at solving inference problems, often in a Bayesian framework. These analysis techniques can be understood as filters which constrain the inferences to answers that respect basic physical constraints. Much progress has been made in linear processing of the signals on the sky, but in order to exploit more of the finite information available to us, the field is now moving to non-linear inferences. I will show examples of computationally efficient non-linear signal processing with tens of millions of constraints and parameters, explored in a statistically rigorous fashion. These techniques are used to reconstruct the initial conditions of the Universe, constrain the expansion history and constituents of the Universe, and thus move us one step closer towards resolving the fundamental questions of dark energy, dark matter and the origin of the Cosmos.
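As a hedged illustration of the linear, filter-like processing mentioned above (not the non-linear methods of the talk), the sketch below applies a Wiener filter to noisy harmonic coefficients of a sky signal; the signal and noise spectra and the toy coefficients are assumptions made purely for illustration.

```python
import numpy as np

def wiener_filter_alm(alm_data, cl_signal, cl_noise):
    """Wiener-filter harmonic coefficients of a noisy sky map.

    For each multipole l the filter is C_l^signal / (C_l^signal + C_l^noise),
    the minimum-variance linear estimate of the underlying signal.
    alm_data is assumed indexed as a dict {(l, m): complex coefficient}.
    """
    filtered = {}
    for (l, m), a in alm_data.items():
        w = cl_signal[l] / (cl_signal[l] + cl_noise[l])
        filtered[(l, m)] = w * a
    return filtered

# Toy usage with made-up spectra (illustrative only)
lmax = 4
cl_signal = np.array([0.0, 1.0, 0.5, 0.25, 0.125])
cl_noise = np.full(lmax + 1, 0.1)
rng = np.random.default_rng(0)
alm = {(l, m): rng.normal() + 1j * rng.normal()
       for l in range(1, lmax + 1) for m in range(0, l + 1)}
alm_wf = wiener_filter_alm(alm, cl_signal, cl_noise)
```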
Professor Robert Palmer, University of Oklahoma, USA
Understanding severe weather through advances in signal processing
Dr Robert Palmer received the PhD degree in electrical engineering from the University of Oklahoma in 1989. From 1989 to 1991, he was a JSPS Postdoctoral Fellow with the Radio Atmospheric Science Center, Kyoto University, Japan, where his major accomplishment was the development of novel interferometric radar techniques for studies of atmospheric turbulent layers. After his stay in Japan, Dr Palmer was with the Physics and Astronomy Department of Clemson University, South Carolina. From 1993 to 2004, he was a part of the faculty of the Department of Electrical Engineering, University of Nebraska, where his interests broadened into areas including wireless communications, remote sensing, and pedagogy. He currently holds the Tommy C Craighead Chair with the School of Meteorology, University of Oklahoma (OU), and serves as Director of OU’s Atmospheric Radar Research Center. Since coming to OU, his research interests have focused on the application of advanced radar signal processing techniques to observations of severe weather, particularly related to phased-array radars and other innovative system designs. He has published widely in the area of radar remote sensing of the atmosphere, with an emphasis on generalized imaging problems, spatial filter design, and clutter mitigation using advanced array/signal processing techniques.
A brief summary of the importance of severe weather research and, in particular, Doppler radar observations is provided. General weather radar design and data processing constraints are discussed. The role of signal processing in transforming sampled radar voltages into useful information about the structure and dynamics of storms, for both meteorologists and the general public, is outlined. Major challenges, including non-stationary clutter mitigation, moisture measurement, and sensitivity enhancement, will be discussed. In addition, the future of weather radar systems will be presented with an emphasis on phased array radars, which support advanced beamforming/spatial filtering algorithms. Many of the example systems and algorithms were developed at the University of Oklahoma, and will be presented in the context of severe weather observations in the Central Plains of the USA.
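As a hedged illustration of how sampled radar voltages become storm information, the sketch below implements the classical pulse-pair estimator of mean Doppler velocity; the wavelength, pulse repetition time and synthetic I/Q dwell are illustrative assumptions, not parameters of any system described in the talk.

```python
import numpy as np

def pulse_pair_velocity(iq, wavelength, prt):
    """Estimate mean Doppler velocity from a dwell of complex I/Q samples.

    Uses the lag-1 autocorrelation R(1); the mean radial velocity is
    v = -wavelength / (4 * pi * prt) * angle(R(1)).
    """
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))
    return -wavelength / (4 * np.pi * prt) * np.angle(r1)

# Synthetic dwell: a target receding at 15 m/s plus noise (illustrative values)
wavelength, prt, n = 0.10, 1e-3, 64          # 10 cm radar, 1 ms pulse repetition time
v_true = 15.0
t = np.arange(n) * prt
iq = np.exp(-4j * np.pi * v_true * t / wavelength) + 0.1 * (
    np.random.randn(n) + 1j * np.random.randn(n))
print(pulse_pair_velocity(iq, wavelength, prt))   # ≈ 15 m/s, within the ±25 m/s Nyquist interval
```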
Dr Jason McEwen, University College London, UK
Spherical signal processing for cosmology
Jason McEwen received a B.E. (Hons) degree in Electrical and Electronic Engineering from the University of Canterbury, New Zealand, in 2002 and a Ph.D. in Astrophysics from the University of Cambridge in 2006. He held a Research Fellowship at Clare College, Cambridge, from 2007 to 2008 and worked as a Quantitative Analyst from 2008 to 2010, before returning to academia in 2010. He recently held a postdoctoral position at Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, followed by a Leverhulme Trust Early Career Fellowship at University College London, where he remains as a Newton International Fellow, supported by the Royal Society and the British Academy. His research interests are focused on spherical signal processing, including sampling theorems and wavelets on the sphere, compressed sensing and Bayesian statistics, and applications of these theories to cosmology and radio interferometry.
Cosmological observations are inherently made on the celestial sphere; consequently, the geometry of the manifold on which these observations are made must be taken into account in subsequent analysis. I will discuss new developments in fundamental signal processing on the sphere, including sampling theorems, wavelet theory and compressive sensing. I will also discuss the application of these new spherical signal processing methods to analyse observations of the cosmic microwave background (CMB), the relic radiation of the Big Bang, which can be used to unlock many of the secrets of the early Universe.
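For readers who want a concrete starting point, the snippet below runs a forward and inverse spherical harmonic transform with the healpy package; HEALPix is only one widely used pixelisation of the sphere, and the exact sampling theorems and wavelets discussed in the talk go beyond this generic illustration.

```python
import numpy as np
import healpy as hp   # HEALPix pixelisation of the sphere (assumed available)

# Analyse a toy full-sky map: forward and inverse spherical harmonic transforms.
nside, lmax = 64, 128
cl = 1.0 / (np.arange(lmax + 1) + 10.0) ** 2      # made-up angular power spectrum
sky = hp.synfast(cl, nside)                        # random Gaussian sky realisation

alm = hp.map2alm(sky, lmax=lmax)                   # spherical harmonic analysis
sky_back = hp.alm2map(alm, nside, lmax=lmax)       # synthesis back to pixel space

# A simple harmonic-space operation: low-pass the sky by zeroing l > 32
fl = np.where(np.arange(lmax + 1) <= 32, 1.0, 0.0)
sky_lowpass = hp.alm2map(hp.almxfl(alm, fl), nside, lmax=lmax)
```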
Professor David A van Dyk, Imperial College London, UK
Accounting for calibration uncertainty in high energy astrophysics
David A van Dyk is a Professor in the Statistics Section of the Department of Mathematics at Imperial College London. After obtaining his PhD from the University of Chicago, he held faculty positions at Harvard University and the University of California, Irvine before relocating to London in 2011. Professor van Dyk was elected a Fellow of the American Statistical Association in 2006, elected a Fellow of the Institute of Mathematical Statistics in 2010, and received a Wolfson Merit Award in 2011. His scholarly work focuses on methodological and computational issues involved with Bayesian analysis of highly structured statistical models and emphasizes serious interdisciplinary research, especially in astronomy. He founded and coordinates the Imperial-California-Harvard AstroStatistics Collaboration (iCHASC) and is particularly interested in improving the efficiency of computationally intensive methods involving data augmentation, such as EM-type algorithms and various Markov chain Monte Carlo methods.
The analysis of high-energy spectra and images in astronomy relies on prelaunch and space-based analysis of the operating characteristics of the photon detectors used for space-based data collection. This involves the observation and analysis of known sources along with sophisticated computer models of the telescopes. The resulting calibration products include point-spread functions, exposure maps, effective area curves, and redistribution matrices for the photon energies. Although these products are only known approximately and with complex correlation structures among their components, they are typically taken as known in the final analyses. In this talk we explore the effect of calibration uncertainty on parameter estimation and uncertainty assessment and develop a suite of statistical methods that aim to properly account for calibration uncertainty. Proposed methods vary from a relatively simple but approximate technique based on multiple imputation to a fully Bayesian technique. Our Bayesian model fitting relies on Markov chain Monte Carlo for posterior simulation and involves the use of Metropolis-Hastings updates within a Partially Collapsed Gibbs sampler. Finally, we use a sample of radio loud quasars to illustrate the substantial effect that properly accounting for calibration uncertainty can have on the error bars of the fitted parameters in high-energy spectral analysis.
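A minimal sketch of the multiple-imputation idea mentioned above: the spectrum is refitted under several plausible draws of a calibration product and the results are pooled with Rubin's combining rules. The helper names fit_spectrum and calibration_draws are hypothetical placeholders, not part of the methods presented in the talk.

```python
import numpy as np

def fit_with_calibration_uncertainty(data, calibration_draws, fit_spectrum):
    """Multiple-imputation sketch: refit the spectrum under each plausible
    calibration product and pool the results with Rubin's combining rules.

    `calibration_draws` is a list of sampled calibration products (e.g. effective
    area curves) and `fit_spectrum(data, calib)` is a hypothetical fitter that
    returns (estimate, variance) for the parameter of interest.
    """
    estimates, variances = [], []
    for calib in calibration_draws:
        est, var = fit_spectrum(data, calib)
        estimates.append(est)
        variances.append(var)
    m = len(estimates)
    pooled_est = np.mean(estimates)
    within = np.mean(variances)                    # average within-fit variance
    between = np.var(estimates, ddof=1)            # variance across calibrations
    total_var = within + (1 + 1 / m) * between     # Rubin's combining rule
    return pooled_est, total_var
```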
Professor John Sahr, University of Washington, USA
Atmospheric radar and statistical signal processing
John D Sahr studied Electrical Engineering at the California Institute of Technology, earning a BS in 1984. After a year of graduate study at the University of California, Los Angeles, he continued his graduate study in Space Physics at Cornell University, completing his PhD in 1990. In 1991 Dr Sahr joined the Electrical Engineering Faculty at the University of Washington in Seattle, Washington. Dr Sahr's research has emphasized radar remote sensing of ionospheric turbulence. In 1998 Dr Sahr and his students built the Manastash Ridge Radar, a passive bistatic radar which observes the scatter of commercial FM broadcasts at 100 MHz. Dr Sahr also serves as Associate Dean of Undergraduate Academic Affairs for the University.
Radar observation of atmospheric targets challenges the experimentalist. There is an intricate interplay between the precise illumination of the target and the subsequent signal analysis. Small scattering cross sections often necessitate the use of pulse compression waveforms. The radar target frequently prohibits successful interrogation with periodic sampling due to large range extent and large Doppler content. Ingenious waveforms and algorithms have been developed to overcome this challenge, many of which are recognizable as temporal extensions of techniques developed by radio astronomers. The past decade's progress has been enabled by startling advances in high-speed, affordable computational resources. Also, digital receivers are now providing extraordinary data quality very early in the receiver chain, and high-speed internet connectivity is lowering the cost and technical barriers to multistatic active and passive radar systems.
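As a hedged illustration of the pulse compression waveforms mentioned above, the sketch below matched-filters a received signal against a linear FM chirp; all parameters and the simulated target are illustrative.

```python
import numpy as np

# Pulse compression sketch: correlate the received signal with the transmitted
# chirp (matched filter) so that range resolution is set by the bandwidth
# rather than the pulse length. All parameters are illustrative.
fs, pulse_len, bandwidth = 1e6, 200e-6, 100e3      # 1 MHz sampling, 200 us chirp
t = np.arange(0, pulse_len, 1 / fs)
chirp = np.exp(1j * np.pi * (bandwidth / pulse_len) * t ** 2)

# Received signal: the chirp delayed by 300 samples, buried in noise
rx = np.zeros(2048, dtype=complex)
rx[300:300 + chirp.size] = 0.2 * chirp
rx += 0.05 * (np.random.randn(rx.size) + 1j * np.random.randn(rx.size))

compressed = np.correlate(rx, chirp, mode="valid")  # matched filter output
print(np.argmax(np.abs(compressed)))                # ≈ 300, the target delay
```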
Dr Francisco Pinto, EPFL, Switzerland
Signal processing for digital acoustics
Francisco Pinto holds a PhD from the École Polytechnique Fédérale de Lausanne (EPFL) in the field of Computer and Information Sciences. His expertise is in digital signal processing and theoretical acoustics. He has worked for the Swiss-based hearing aids company Phonak SA, and is a recipient of the IEEE Best Paper Award and the Calouste Gulbenkian PhD Fellowship.
Sound waves propagate through space and time by transference of energy between the particles in the medium, which vibrate according to the oscillation patterns of the waves. These vibrations can be captured by a microphone and translated into a digital signal, representing the amplitude of the sound pressure as a function of time. The signal obtained by the microphone characterizes the time-domain behavior of the acoustic wave field, but has no information related to the spatial domain. The spatial information can be obtained by measuring the vibrations with an array of microphones distributed at multiple locations in space. This allows the amplitude of the sound pressure to be represented not only as a function of time but also as a function of space. The goal of this work is to provide a formulation of Fourier theory that treats the wave field as a single function of space and time, and allows it to be processed as a multidimensional signal using the theory of digital signal processing (DSP).
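A toy sketch of the space-time view described above: a plane wave recorded by a uniform line of microphones is treated as a single two-dimensional signal p(x, t) and analysed with a 2-D Fourier transform; the array geometry and source are assumptions made for illustration.

```python
import numpy as np

# Treat an array recording as one space-time signal p(x, t) and take a 2-D
# Fourier transform; a plane wave shows up as energy concentrated along the
# line  k_x = (f / c) * sin(theta).  All parameters are illustrative.
c, fs, n_mics, spacing = 343.0, 16000, 32, 0.05      # speed of sound, 5 cm spacing
theta = np.deg2rad(30)                               # direction of arrival
f0 = 1000.0                                          # 1 kHz tone

t = np.arange(1024) / fs
x = np.arange(n_mics) * spacing
# Plane wave: the delay across the array depends on microphone position
p = np.cos(2 * np.pi * f0 * (t[None, :] - (x[:, None] * np.sin(theta)) / c))

P = np.fft.fftshift(np.fft.fft2(p))                           # space-time spectrum
k_axis = np.fft.fftshift(np.fft.fftfreq(n_mics, d=spacing))   # cycles per metre
f_axis = np.fft.fftshift(np.fft.fftfreq(t.size, d=1 / fs))    # Hz
i, j = np.unravel_index(np.argmax(np.abs(P)), P.shape)
print(k_axis[i], f_axis[j])    # ≈ ±(f0/c)·sin(theta) ≈ ±1.46 m^-1 at ∓1000 Hz
```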
Mr Ben Fulcher, University of Oxford, UK
Highly comparative time-series analysis
Ben Fulcher obtained a BSc (Adv) (Hons) in Physics, Chemistry, and Maths at the University of Sydney in 2007, after which he obtained a Masters in Physics on the topic of physiologically-based sleep modeling in 2008, also at the University of Sydney. This research involved analyzing a simple neuronal population-based model that captures the key physiological interactions that underlie sleep-wake dynamics. His DPhil thesis at the University of Oxford, recently submitted, was on the topic of highly comparative time-series analysis. This very broad, empirical research on time-series analysis involved much interaction with the time-series analysis community at the University of Oxford: through University-wide Signals Days, and as part of a time series seminar series he co-organized in 2011 for the Balliol Interdisciplinary Institute.
Scientists measure, record, and analyze the dynamics of diverse systems, including the stock market, living cells, heart rates, and Earth's climate system. But despite this existing wealth of interdisciplinary time-series data, and of methods and models for analyzing it, an extensive organization of scientific time series and their analysis methods has never been performed. In this talk, I will describe the structure in a collection of over 35,000 pieces of scientific time-series data and over 9,000 associated time-series analysis methods that we have assembled. Analysis methods are organized using their behaviour on empirical time series, while time-series datasets are organized according to their measured properties. We show how redundancy in our collection of scientific time-series analysis methods can be exploited to form a reduced set that can be used to compare diverse types of time series meaningfully. As well as presenting broad results on the structure of empirical time series and their methods, I will also demonstrate the broad scientific utility of a set of tools for addressing specific time-series analysis tasks, including the selection of useful models and metrics for datasets of electroencephalograms, self-affine time series, heart beat intervals, and speech signals. Lastly, I will show how the dimensionality of time-series datasets generated from models with a small number of parameters can be estimated, and how one can deduce estimates of these parameters.
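A toy sketch of the feature-based approach described above: each time series is reduced to a small vector of summary statistics and series are then compared in that feature space. The three features below are illustrative stand-ins for the thousands of analysis methods in the actual collection.

```python
import numpy as np

def features(x):
    """Summarise a time series by a small vector of generic statistics
    (stand-ins for the thousands of methods in the real collection)."""
    x = (x - np.mean(x)) / np.std(x)
    lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]           # linear autocorrelation
    spread = np.percentile(x, 95) - np.percentile(x, 5)
    sign_changes = np.mean(np.diff(np.sign(x)) != 0)  # zero-crossing rate
    return np.array([lag1, spread, sign_changes])

# Two qualitatively different toy series: a noisy sine wave and white noise
rng = np.random.default_rng(1)
t = np.linspace(0, 20 * np.pi, 2000)
series = {"noisy_sine": np.sin(t) + 0.3 * rng.normal(size=t.size),
          "white_noise": rng.normal(size=t.size)}
for name, x in series.items():
    print(name, np.round(features(x), 2))
# Series can now be clustered or classified by distances in this feature space.
```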
Dr Onkar Dabeer, Tata Institute of Fundamental Research, India
Collaborative estimation
Onkar Dabeer received the BTech and MTech degrees in Electrical Engineering from the Indian Institute of Technology, Bombay, in 1996 and 1998 respectively, and the PhD degree in Electrical Engineering from the University of California at San Diego in June 2002. He was a postdoctoral researcher at the University of California, Santa Barbara from July 2002 to July 2003. From August 2003 to August 2004, he served as a Senior Engineer at Qualcomm Inc, San Diego.
Since September 2004 he has been on the faculty at the School of Technology and Computer Science, Tata Institute of Fundamental Research, Mumbai (Bombay), India. He is an Editor for the IEEE Transactions on Wireless Communications and IEEE Wireless Communications Letters. He is a recipient of the Prof. Revankar Prize for outstanding graduate student (IIT Bombay, 1998), the Homi Bhabha Fellowship (2006-2009), and the Indian National Science Academy's Young Scientist Medal (2009), and he is an Associate (2009-2012) of the Indian Academy of Sciences. His research interests include estimation theory (with current emphasis on problems arising in web, social media, and sensor networks) and multi-Gigabit wireless networks.
The talk has two parts, both involving pooling of data from different sources to improve the estimation task at hand. In particular, I will emphasize the modeling aspect in both parts, which may be of interest in physical sciences.
Dr Duccio Testa, Ecole Polytechnique Fédérale de Lausanne, Switzerland
From astrophysics to fusion plasmas: signal processing and system optimization analysis for ITER
After finishing high school with A-levels in classical studies, Dr Duccio Testa enrolled in a physics degree at the University of Torino while working as an archaeologist around Torino and Roma. He received his university (bachelor's) degree in physics in Torino in July 1994, with a master's dissertation on collective phenomena induced by high-energy relativistic electrons.
After teaching in high school and working at the observatory of Pino Torinese for about one year, he began his PhD studies in October 1995 at Imperial College, London (UK), working on the interaction between fast ions and waves in the ion cyclotron and lower hybrid range of frequencies, on experiments performed at the Joint European Torus in Abingdon (UK). He obtained the PhD degree in plasma physics from Imperial College in October 1998. He then held a postdoc position at the Plasma Science and Fusion Centre, Massachusetts Institute of Technology, Boston (USA), from October 1998 to June 2002 and then at the Centre de Recherches en Physique des Plasmas, Ecole Polytechnique Fédérale de Lausanne (CH) from July 2002 to June 2005, in both cases working on the Alfvén Eigenmodes Active Diagnostic System at JET.
Since June 2005 he has been a permanent member of staff at CRPP-EPFL, where he currently works on the high-frequency magnetic diagnostic system for ITER, on the Alfvén Eigenmodes Active Diagnostic system for JET and on operation of the CRPP Tokamak à Configuration Variable.
Efficient, real-time and unsupervised data analysis is one of the key elements for achieving scientific success in complex engineering and physical systems, of which three examples are the currently operating Joint European Torus (JET), the soon-to-be-built International Thermonuclear Experimental Reactor (ITER) and the Square Kilometre Array (SKA) telescope.
There is a wealth of signal processing techniques that are being applied to data analysis in such complex systems, and here we wish to present some examples of the synergies that can be exploited when combining ideas and methods from different fields, such as astronomy and astrophysics and thermonuclear fusion plasmas.
One problem which is common to these subjects is the determination of pulsation modes from irregularly sampled time series. We have used recent techniques of signal processing in astronomy and astrophysics, based on the Sparse Representations of Signals, to solve current questions arising in thermonuclear fusion plasmas. Two examples are the detection of magneto-hydrodynamic instabilities, which is now performed routinely in JET in real time on a sub-millisecond time-scale, and the studies leading to the optimization of the magnetic diagnostic system in ITER. These questions have been solved by formulating them as inverse problems, despite the fact that these applicative frameworks are extremely different from the classical use of Sparse Representations, from both theoretical and computational points of view.
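A hedged sketch of the sparse-representation idea for irregularly sampled data: the signal is expanded in an overcomplete dictionary of sinusoids and a few active frequencies are recovered with an l1-penalised fit. The dictionary, solver and toy signal below are illustrative and are not the real-time implementation used at JET.

```python
import numpy as np
from sklearn.linear_model import Lasso   # l1-penalised least squares

# Irregularly sampled signal containing two oscillation modes (toy example)
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 1.0, 200))                  # non-uniform sample times
y = np.sin(2 * np.pi * 37 * t) + 0.5 * np.sin(2 * np.pi * 61 * t)
y += 0.1 * rng.normal(size=t.size)

# Overcomplete dictionary of candidate sinusoids on a dense frequency grid
freqs = np.arange(1, 101)                              # 1..100 Hz candidates
A = np.hstack([np.sin(2 * np.pi * freqs * t[:, None]),
               np.cos(2 * np.pi * freqs * t[:, None])])

model = Lasso(alpha=0.01, max_iter=50000).fit(A, y)
amp = np.hypot(model.coef_[:freqs.size], model.coef_[freqs.size:])
print(freqs[amp > 0.1])        # expected to pick out ≈ 37 Hz and 61 Hz
```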
Requirements, prospects and ideas for the signal processing and real-time data analysis applications of this method to routine operation of ITER and of the SKA telescope will be discussed.
Finally, we will conclude with an example of a potential application of the Sparse Representation method to the analysis of electrical prospections (using the so-called Schlumberger diagram) in an Etruscan necropolis and in an Etruscan fortress town located close to Rome, both sites dating from around the fifth century BC.
P Blanchard, A Fasoli, J B Lister, Ecole Polytechnique Fédérale de Lausanne, Switzerland.
S Bourguignon, Institut de Recherche en Communications et Cybernétique, France
H Carfantan, Université de Toulouse, France
A Goodyear, Culham Centre for Fusion Energy, UK
G Vayakis, ITER organization, France
P Blanchard, Ecole Polytechnique Fédérale de Lausanne, Switzerland and JET-EFDA Close Support Unit, Culham Science Centre, UK
A Klein, formerly Massachusetts Institute of Technology, USA.
T Panis, formerly Ecole Polytechnique Fédérale de Lausanne, Switzerland
JET-EFDA contributors, see Appendix of F Romanelli et al, Nuclear Fusion 51 (2011) 094008 (Proceedings of the 23rd IAEA Fusion Energy Conference 2010, Daejeon, Korea)
The Gruppo Archeologico Romano, Rome section of the Gruppi Archeologici d’Italia
Professor Phil Gregory, University of British Columbia, Canada
Extra-solar planets via a Bayesian multi-planet periodogram
Phil Gregory received a PhD in physics from the University of Manchester in 1969, specializing in space physics and radio astronomy at the Jodrell Bank Observatory with an experiment on the first all-British satellite, Ariel III. He returned to Canada in 1970 as a Postdoctoral Fellow in the Astronomy Department of the University of Toronto. In 1972, he discovered the first giant radio outburst emanating from the newly discovered X-ray source Cygnus X-3. A special edition of Nature was devoted to the discovery and follow-up observations. The radio outbursts coincide with the ejection of jets traveling at close to the speed of light from either a black hole or neutron star. The discovery led to a faculty position in the Physics Department of the University of British Columbia in 1973. In 1976, he started the Galactic Radio Patrol Project to search for transient radio sources using the giant Green Bank 300 ft telescope of the US National Radio Astronomy Observatory. The project continued until 1988, when the telescope suddenly collapsed (apparently to prevent him from discovering intelligent life on other worlds!). Amongst the project discoveries was a new supernova remnant with a central 7 s pulsar, which was featured on the cover of Nature in 1981. In 1989 he developed a keen interest in Bayesian data analysis, which led in 1992 (together with Tom Loredo of Cornell University) to the Gregory-Loredo Bayesian algorithm for the detection of periodic signals of unknown shape. His textbook "Bayesian Logical Data Analysis for the Physical Sciences" was published by Cambridge University Press (2005 & 2010). He has recently pioneered a very general fusion Markov chain Monte Carlo Bayesian method for nonlinear model fitting. He is exploiting this new tool in extra-solar planet research, where it functions as a multi-planet Kepler periodogram.
A remarkable array of new ground-based and space-based astronomical tools is providing astronomers access to other solar systems. Over 700 planets have been discovered to date, including several super-Earths in the habitable zone. These successes on the part of the observers have spurred a significant effort to improve the statistical tools for analyzing data in this field. I will describe a Bayesian multi-planet Kepler periodogram based on a new fusion Markov chain Monte Carlo algorithm which incorporates parallel tempering, simulated annealing and genetic crossover operations. Each of these features facilitates the detection of a global minimum in chi-squared in a multi-modal environment. By combining all three, the algorithm greatly increases the probability of realizing this goal. The fusion MCMC is controlled by a unique two-stage adaptive control system that automates the tuning of the proposal distributions for efficient exploration of the model parameter space, even when the parameters are highly correlated. This controlled fusion MCMC algorithm is implemented in Mathematica using parallelized code and run on an 8-core PC. It is designed to be a very general tool for nonlinear model fitting. The performance of the algorithm will be illustrated with some recent successes in the exoplanet field, where it has facilitated the detection of a number of new planets.
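The sketch below illustrates only the parallel-tempering ingredient of such an algorithm, on a deliberately bimodal one-dimensional toy target; it is not the fusion MCMC described in the talk, and all settings are illustrative.

```python
import numpy as np

def log_target(x):
    """A deliberately multimodal toy posterior (two well-separated modes)."""
    return np.logaddexp(-0.5 * ((x + 5) / 0.5) ** 2, -0.5 * ((x - 5) / 0.5) ** 2)

def parallel_tempering(n_steps=20000, betas=(1.0, 0.3, 0.1), step=1.0, seed=3):
    """Minimal parallel tempering: one Metropolis chain per temperature,
    with occasional state swaps between adjacent temperatures."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(betas))
    samples = []
    for _ in range(n_steps):
        for k, beta in enumerate(betas):           # Metropolis update per chain
            prop = x[k] + step * rng.normal()
            if np.log(rng.uniform()) < beta * (log_target(prop) - log_target(x[k])):
                x[k] = prop
        k = rng.integers(len(betas) - 1)           # propose a swap between chains k, k+1
        log_r = (betas[k] - betas[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
        if np.log(rng.uniform()) < log_r:
            x[k], x[k + 1] = x[k + 1], x[k]
        samples.append(x[0])                       # keep the beta = 1 chain
    return np.array(samples)

draws = parallel_tempering()
print(np.mean(draws > 0))    # both modes visited, so roughly 0.5
```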
Professor John McWhirter FRS FREng, Cardiff University, UK
Polynomial matrix factorization for broadband adaptive signal processing
John McWhirter gained a First Class Honours degree in Mathematics (1970) and a PhD in Theoretical Physics (1973) from the Queen’s University of Belfast. In 1973 he joined the Royal Radar Establishment in Malvern (later to become the Royal Signals and Radar Establishment (RSRE), and now part of QinetiQ Ltd). He left QinetiQ in 2007 to take up a post as Distinguished Research Professor in the School of Engineering at Cardiff University.
John McWhirter has been conducting research on adaptive sensor array signal processing since 1980. In the process he built up the RSRE Signal Processing Group which carried out research on all aspects of digital signal processing for defence, with particular emphasis on adaptive filtering and beamforming. This group soon became renowned as a centre of excellence worldwide and, in recognition of this, he received the EURASIP Group Technical Achievement Award in 2003. He has published more than 170 research papers and holds numerous patents. In 1994 he was awarded the J J Thomson Medal by the Institution of Electrical Engineers for his personal research into systolic array processors. He is currently pursuing another programme of highly original research into broadband sensor arrays, convolutive blind signal separation, and polynomial matrix decomposition techniques.
John McWhirter was elected as a Fellow of the Royal Academy of Engineering in 1996 and the Royal Society in 1999. He is a Fellow of the Institute of Mathematics and its Applications (IMA) and served as President of the IMA in 2002 and 2003.
This talk will outline some of my recent research into techniques for factorizing polynomial matrices. It will present algorithms for computing the polynomial matrix eigenvalue decomposition (PEVD), QR decomposition (PQRD) and singular value decomposition (PSVD). Polynomial matrices play an important role in the context of broadband sensor array signal processing including, for example, multiple-input multiple-output (MIMO) systems for wireless communications. In this case, the (i, j) element of the matrix is simply a polynomial representing the transfer function of the propagation channel from the ith transmitter to the jth receiver. In the talk, I will show how these new polynomial matrix factorization techniques have been successfully applied to numerical models for MIMO communications and also in the context of underwater acoustic surveillance arrays.
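The PEVD algorithms themselves are not reproduced here; the hedged sketch below only shows how a polynomial (FIR) MIMO channel matrix can be stored as an array of coefficient matrices and how an ordinary eigenvalue decomposition can be applied bin-by-bin on the unit circle, which is the conventional point of comparison for a true polynomial EVD.

```python
import numpy as np

# A 2x2 polynomial (FIR) MIMO channel: element (i, j) is the impulse response
# from transmitter j to receiver i, stored as an array of shape (taps, 2, 2).
H = np.zeros((3, 2, 2))
H[:, 0, 0] = [1.0, 0.5, 0.0]       # illustrative channel coefficients
H[:, 0, 1] = [0.0, 0.3, 0.1]
H[:, 1, 0] = [0.2, 0.0, 0.4]
H[:, 1, 1] = [1.0, -0.4, 0.2]

# Evaluate H(z) on the unit circle and diagonalise each frequency bin with an
# ordinary EVD of the para-Hermitian product H(e^{jw}) H(e^{jw})^H. A true PEVD
# instead finds polynomial eigenvectors that diagonalise all bins jointly.
n_fft = 64
Hf = np.fft.fft(H, n=n_fft, axis=0)
for k in range(n_fft):
    R = Hf[k] @ Hf[k].conj().T                 # covariance-like matrix at bin k
    eigvals, eigvecs = np.linalg.eigh(R)       # independent diagonalisation per bin
print(np.round(eigvals, 3))                    # eigenvalues at the last frequency bin
```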
Dr Joan Bruna, Centre de Mathématiques Appliquées, Ecole Polytechnique, France
Scattering representations of stochastic processes
Joan Bruna obtained a BSc in Mathematics from the Universitat Politècnica de Catalunya (UPC Barcelona) in 2002, followed by an MSc in Telecommunications Engineering from UPC in 2004 and an MSc in Applied Mathematics from the Ecole Normale Supérieure in 2005. He then joined the image-processing startup Let it Wave, founded by Professor Stéphane Mallat, and contributed to the development of several video processing chips using spatio-temporal multiscale decompositions.
Since 2008 he has been pursuing a PhD at École Polytechnique under the supervision of Stéphane Mallat, while working as an image processing consultant at CSR. He is the author of several image processing patents as well as seminal papers on scattering operators applied to image recognition.
Scattering operators cascade wavelet modulus decompositions to obtain delocalized, translation invariant signal representations which are stable to deformations.
Scattering operators provide a new spectral representation of stationary processes, which characterizes high-order moments and thus captures non-Gaussian properties. In this talk we will concentrate on three families of processes: Gaussian processes, point processes and Lévy processes.
For each of them, we will show how scattering coefficients provide consistent estimates, which will allow the identification and discrimination of several properties of the process.
First, we derive an expected scattering decay for Gaussian processes, using Hermite chaos expansions and central limit theorems, and derive a weak consistency result. We then introduce variable-volatility Gaussian processes, which model long-range fluctuations at different scales. Next, we study the scattering of point processes, starting from Poisson processes. Random deformations of Poisson processes supply a rich family of point processes which incorporates geometric information. Finally, we combine these results to characterize compound point processes, leading to a characterization of the Lévy measure.
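For orientation, here is a minimal first- and second-order scattering sketch for 1-D signals, built from hand-made Gabor-like band-pass filters; the filter design, normalisation and averaging are simplified relative to the scattering networks used in the talk.

```python
import numpy as np

def bandpass(n, centre, width):
    """Crude Gabor-like band-pass filter defined directly in the Fourier domain."""
    f = np.fft.fftfreq(n)
    return np.exp(-0.5 * ((f - centre) / width) ** 2)

def wavelet_modulus(x, centres, width=0.02):
    """|x * psi_lambda| for a small bank of band-pass filters."""
    X = np.fft.fft(x)
    return [np.abs(np.fft.ifft(X * bandpass(x.size, c, width))) for c in centres]

def scattering(x, centres=(0.05, 0.1, 0.2, 0.4)):
    """First- and second-order scattering coefficients: cascaded wavelet
    modulus operators followed by time averaging (here a plain mean)."""
    first = wavelet_modulus(x, centres)
    S1 = [u.mean() for u in first]                       # order-1 coefficients
    S2 = [v.mean() for u in first for v in wavelet_modulus(u, centres)]
    return np.array(S1), np.array(S2)

# Compare a Poisson-like spike train with Gaussian noise of the same variance
rng = np.random.default_rng(4)
spikes = (rng.uniform(size=4096) < 0.02).astype(float)
noise = rng.normal(0, spikes.std(), size=4096)
print(scattering(spikes)[1][:4])
print(scattering(noise)[1][:4])    # second-order coefficients separate the two
```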
Dr Michael Hirsch, University College London, UK and Max Planck Institute for Intelligent Systems, Germany
From Maxwell’s equations to efficient filter flow and its application to blind image deconvolution
Michael Hirsch studied physics and mathematics at the University of Erlangen and at Imperial College London. He received a Diploma in theoretical physics in 2007, before joining the Department of Empirical Inference headed by Prof. Dr. Bernhard Schölkopf at the Max Planck Institute for Intelligent Systems (formerly the MPI for Biological Cybernetics). Since 2011 he has been working as a postdoctoral researcher at the interface of machine learning and cosmology at University College London. His research interests cover a wide range of signal and image processing problems in scientific imaging, as well as computational photography.
Digital image restoration is a key area in signal and image processing due to its many applications in both scientific imaging and everyday photography. An important sub-discipline, which is receiving ever-increasing interest from both the academic and the industrial world, is the field of image deconvolution, which enjoys this interest due to both its theoretical and practical implications. While classical or non-blind image deconvolution aims at restoring a sharp latent image assuming the blur is known, blind image deconvolution addresses the much harder but also more realistic case where the degradation is unknown. An estimate of the original image must be obtained using only its blurred and possibly noise-corrupted observations.
Blind image deconvolution involves many challenging problems, including modeling the image formation process, formulating tractable priors incorporating generic image statistics, and devising efficient methods for optimization. This renders it an intriguing but also intricate task, which has recently seen much attention and progress in the image and signal processing communities as well as in the computer vision and graphics communities.
In this context, we present a mathematically sound and physically well-motivated framework, which allows expressing and efficiently computing spatially-varying blur. We derive our “Efficient Filter Flow” framework as a discrete approximation of the incoherent imaging equation and devise expressions for its efficient implementation using the short-time Fourier transform. By extending the commonly employed invariant blur model, our framework substantially broadens the application range of blind deconvolution methods.
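A simplified, hedged sketch of the spatially-varying blur idea in one dimension: the signal is multiplied by smooth windows that sum to one, each windowed piece is convolved with its own local kernel, and the pieces are summed. The windows, kernels and signal are illustrative and omit the short-time Fourier machinery of the actual Efficient Filter Flow framework.

```python
import numpy as np
from scipy.signal import fftconvolve

def efficient_filter_flow_1d(x, kernels):
    """Spatially-varying blur in 1-D: y = sum_r k_r * (w_r ⊙ x), where the
    smooth windows w_r sum to one and each patch r has its own kernel k_r.
    A simplified sketch of the Efficient Filter Flow idea."""
    n, r = x.size, len(kernels)
    centres = np.linspace(0, n - 1, r)
    windows = np.array([np.exp(-0.5 * ((np.arange(n) - c) / (n / r)) ** 2)
                        for c in centres])
    windows /= windows.sum(axis=0)                 # partition of unity
    y = np.zeros(n)
    for w, k in zip(windows, kernels):
        y += fftconvolve(w * x, k, mode="same")    # local convolution per patch
    return y

# Blur that changes across the signal: narrow kernel on the left, wide on the right
x = np.zeros(512)
x[::64] = 1.0                                      # a train of impulses
kernels = [np.ones(width) / width for width in (3, 9, 21)]
y = efficient_filter_flow_1d(x, kernels)
```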
In a number of challenging real-world applications we demonstrate both the validity and versatility of our approach. In particular, we utilise our model for reconstructing a sharp latent image from a sequence of short-exposure images degraded by atmospheric turbulence. To capitalise on the abundance of data available in astronomical imaging, we develop a blind deconvolution algorithm which bypasses the computational burden of current blind deconvolution methods that are restricted in the number of observations they can process.
Another challenging application which proves the usefulness of our framework is the problem of removing camera shake from a single image. We extend our model to incorporate the particularities of camera shake and develop an efficient algorithm that outperforms state-of-the-art methods in both restoration quality and computation time.
Finally, we show how our framework combined with a simple measurement procedure can be used to substantially improve the quality of images taken with photographic lenses that suffer severe optical aberrations.