This page is archived

Links to external sources may no longer work as intended. The content may not represent the latest thinking in this area or the Society’s current position on the topic.

Computation by natural systems

21 - 22 March 2018 09:00 - 17:00

Theo Murphy scientific meeting organised by Dr Dominique Chu, Professor Christian Ray and Professor Mikhail Prokopenko

Over recent years it has become clear across a range of sciences that many natural systems perform computations. Research into the properties of these natural computers remains fragmented along disciplinary boundaries between computer science, physics, engineering and biology. The objective of this meeting was to overcome this fragmentation by bringing together researchers from different fields to discuss their latest findings on natural computation.

Enquiries: please contact the Scientific Programmes team

Schedule

09:20 - 09:45 Fundamental limits on the thermodynamics of circuits

The thermodynamics of computation specifies the minimum amount by which entropy must increase in the environment of any physical system that implements a given computation, when there are no constraints on how the system operates (the so-called “Landauer limit”). However, common engineered computers use digital circuits that physically connect separate gates in a specific topology. Each gate in the circuit performs its own “local” computation, with no a priori constraints on how it operates. In contrast, the circuit’s topology introduces constraints on how the aggregate physical system implementing the overall “global” computation can operate. These constraints cause additional minimal entropy increase in the environment of the overall circuit, beyond that caused by the individual gates.

Here we analyze the relationship between a circuit’s topology and this additional entropy increase, which we call the “circuit Landauer cost”. We also compute a second kind of circuit cost, the “circuit mismatch cost”. This is the extra entropy that is generated if a physical system is designed to achieve minimal entropy production for a particular distribution q over its inputs, but is instead used with an input distribution p that differs from q.
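
For reference, mismatch cost has a simple closed form in stochastic thermodynamics. The following is a sketch of the standard single-device expression, with notation assumed here rather than quoted from the talk:

```latex
% Standard mismatch-cost expression (illustrative; notation assumed):
% a device designed for minimal entropy production on input distribution q,
% but run on inputs distributed as p, generates extra entropy
\[
  \Delta S_{\mathrm{mismatch}} \;=\; D(p \,\|\, q) \;-\; D(p' \,\|\, q'),
\]
% where p' and q' are the output distributions obtained by applying the
% device's dynamics to p and q, and D(\cdot\,\|\,\cdot) is the relative entropy.
```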

We show that whereas the circuit Landauer cost cannot be negative, circuits can have positive or negative mismatch cost. In fact, the total circuit cost (given by summing the two types of cost) can be positive or negative. Thus, the total amount of entropy increase in the environment can be either larger or smaller when a particular computation is implemented with a circuit rather than with an unconstrained system.

Furthermore, in general different circuits computing the same Boolean function have both different Landauer costs and different mismatch costs. This provides a new set of challenges, never before considered, for how to design a circuit to implement a given computation with minimal thermodynamic cost. As a first step in addressing these challenges, we use tools from the computer science field of circuit complexity to analyze the scaling of thermodynamic costs for different computational tasks.

Professor David Wolpert, Santa Fe Institute, USA

10:00 - 10:30 Entropy and information in natural complex systems

The metaphor of a potential epigenetic differentiation landscape broadly suggests that during differentiation a stem cell follows the steepest descending gradient toward a stable equilibrium state which represents the final cell type. It has been conjectured that there is an analogy to the concept of entropy in statistical mechanics. To assess these predictions, Wiesner et al. computed the Shannon entropy of time-resolved single-cell gene expression data in two different experimental setups of haematopoietic differentiation. They found that the behaviour of this entropy measure contradicts these predictions.

This is one of very few examples where information theory gives direct insight into a measurable biochemical process. This talk will discuss these findings and their interpretation, and will give an overview of information-theoretic measures of complexity that might quantify the computation performed by such processes.
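
A minimal sketch of the kind of estimator involved: Shannon entropy of a single-cell expression snapshot via histogram binning (the bin count, estimator choice and simulated data are assumptions, not details from the study):

```python
import numpy as np

def shannon_entropy(values, bins=30):
    """Estimate Shannon entropy (bits) of a 1-D sample by histogram binning."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts / counts.sum()       # empirical bin probabilities
    p = p[p > 0]                    # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

# Illustrative data: log-normal expression levels at one time point
rng = np.random.default_rng(0)
expression = rng.lognormal(mean=1.0, sigma=0.5, size=1000)
print(f"H = {shannon_entropy(expression):.2f} bits")
```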

Associate Professor Karoline Wiesner, University of Bristol, UK

11:00 - 11:30 Thermodynamics of computation with chemical reaction networks

Computing can be seen as a transformation between nonequilibrium states. In stochastic thermodynamics, the second law can be used to provide a direct connection between the thermodynamic cost needed to induce such transformations and the distance from equilibrium (measured as a relative entropy) of these nonequilibrium states. This nonequilibrium Landauer principle holds not only for detailed-balanced dynamics but also for driven, non-detailed-balanced ones. Remarkably, a closely related result also holds for open chemical reaction networks described by deterministic mass-action kinetics, even in the presence of diffusion. These results lay the foundations for a systematic study of the thermodynamic cost needed to produce chemical spatio-temporal signals, which play a key role in biology and chemical computing.
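
One common statement of the nonequilibrium Landauer principle, in its textbook form (notation assumed here, not taken from the talk):

```latex
% Nonequilibrium Landauer principle (standard form; notation assumed):
% transforming an initial distribution p_i into a final distribution p_f
% at temperature T requires work
\[
  W - \Delta F^{\mathrm{eq}} \;\ge\; k_B T
    \left[ D\!\left(p_f \,\|\, p_f^{\mathrm{eq}}\right)
         - D\!\left(p_i \,\|\, p_i^{\mathrm{eq}}\right) \right],
\]
% where each relative entropy D measures the distance of the corresponding
% state from its equilibrium distribution.
```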

Professor Massimiliano Esposito, Université du Luxembourg, Luxembourg

11:45 - 12:30 Computation in the ground state: from spin models to memristors

I will describe two aspects of ongoing research on (classical and analog) computation using spin models and memristors (resistors with memory). I will first discuss how a simple Ising spin model can be turned into a bottom-up, programmable logic-gate computational model. As I will show, it is possible to encode logic gates in the ground state of a spin system using only local external fields, and this model can serve as a proof of concept for more complicated computations using nanomagnets. I will then briefly describe why a completely different physical device, the memristor, performs a similar yet different type of computation. Memristors are simply resistors which change resistance as current flows through them, and are thought of as low-energy, two-port computational devices which can be used for neuromorphic (brain-inspired) computing. I will conclude by discussing future experiments to provide a more accurate mapping of memristive annealers to Ising models at an effective (hopefully low) temperature.
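
To make the ground-state idea concrete, here is a toy sketch: a standard QUBO penalty whose minima are exactly the valid rows of an AND gate. It illustrates encoding logic in a ground state in general, not the specific spin model or field configuration from the talk:

```python
from itertools import product

def and_penalty(x1, x2, y):
    # Textbook QUBO penalty for y = x1 AND x2: zero on valid rows, >= 1 otherwise
    return x1 * x2 - 2 * x1 * y - 2 * x2 * y + 3 * y

states = list(product((0, 1), repeat=3))
ground = min(and_penalty(*s) for s in states)
print([s for s in states if and_penalty(*s) == ground])
# -> [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]: the AND truth table
```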

Dr Francesco Caravelli, Los Alamos National Laboratory, USA

13:30 - 14:00 Robust accessible states allow efficient training of neural networks with very low precision stochastic synapses

The problem of training neural networks is in general non-convex and in principle computationally hard. This is true even for single neurons, when synapses are only allowed a few bits of precision. In practice, however, many fairly greedy heuristics achieve surprisingly good results.

A large-deviations analysis performed with statistical-physics tools has shown that efficient learning is made possible in these models by the existence of rare but very dense regions of optimal configurations, which have appealing robustness and generalization properties. The analysis also shows that the capacity of the networks saturates quickly with the number of synaptic values, indicating that very few bits are indeed sufficient for effective learning. Further analyses show that these regions can be directly targeted by learning algorithms in various ways: a particularly simple one consists of exploiting synaptic fluctuations, offering an appealing avenue of investigation for interpreting the high levels of stochasticity observed in biological neurons.
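
As a toy illustration of the low-precision setting (all details here are assumptions, not the talk's algorithms): a perceptron whose effective weights are binarised to ±1, trained through a hidden integer state so that learning can proceed despite the one-bit synapses:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 101, 30                          # synapses, random patterns to store
X = rng.choice([-1, 1], size=(m, n))
y = rng.choice([-1, 1], size=m)

h = np.zeros(n, dtype=int)              # hidden integer state per synapse
for epoch in range(500):
    errors = 0
    for x, t in zip(X, y):
        w = np.where(h >= 0, 1, -1)     # effective one-bit synapses
        if np.sign(w @ x) != t:         # n is odd, so w @ x is never zero
            h += t * x                  # perceptron-style update on hidden state
            errors += 1
    if errors == 0:                     # convergence is typical at this low load,
        break                           # though not guaranteed in general
print(f"epochs: {epoch + 1}, errors in final pass: {errors}")
```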

Professor Carlo Baldassi, Bocconi University, Milan, Italy

14:15 - 14:45 Computations and control in brain systems

The human brain is a complex organ characterized by a heterogeneous pattern of structural connections that supports long-range functional interactions. Non-invasive imaging techniques now allow for these patterns to be carefully and comprehensively mapped in individual humans, paving the way for a better understanding of how the complex network architecture of structural wiring supports computation, cognition, and behavior. While a large body of work now focuses on descriptive statistics to characterize these wiring patterns, a critical open question lies in how the organization of these networks constrains the potential repertoire of brain dynamics. In this talk, I will describe an approach for understanding how perturbations to brain dynamics propagate through complex wiring patterns, driving the brain into new states of activity. Drawing on a range of disciplinary tools – from graph theory to network control theory and optimization – I will identify control points in brain networks, characterize trajectories of brain activity states following perturbation to those points, and propose a mechanism for how network control evolves in our brains as we grow from children into adults. I will close with a more general discussion of the importance of understanding the role of structural network architecture in computation and cognition.
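
A sketch of one tool from this toolbox, as a generic illustration rather than the talk's exact pipeline: "average controllability" of a node in a linear network model, computed as the trace of a finite-horizon controllability Gramian:

```python
import numpy as np

def average_controllability(A, node, horizon=100):
    """Trace of the controllability Gramian of x[t+1] = A x[t] + B u[t],
    with a single control input at `node` (the normalization is an assumption)."""
    n = A.shape[0]
    A = A / (1 + np.abs(np.linalg.eigvals(A)).max())  # rescale for stability
    B = np.zeros((n, 1))
    B[node, 0] = 1.0
    G = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(horizon):
        G += Ak @ B @ B.T @ Ak.T        # accumulate Gramian terms A^k B B^T (A^k)^T
        Ak = Ak @ A
    return float(np.trace(G))

W = np.random.default_rng(2).random((10, 10))
W = (W + W.T) / 2                       # toy symmetric "connectome"
print(average_controllability(W, node=0))
```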

Professor Danielle S. Bassett, University of Pennsylvania, USA

15:30 - 16:00 Energy-efficient information transfer in neural networks

Abstract not yet available.

Professor Renaud Jolivet, University of Geneva and CERN, Switzerland

16:15 - 16:45 The evolution of cellular individuality

Abstract not yet available.

Professor Eric Deeds, University of Kansas, USA

09:00 - 09:30 Novel computing substrates

Natural systems do not compute. Humans interpret their space-time development as a computation. Anything around us can be used to design computing circuits. We demonstrate this with examples of collision-based logical circuits implemented in experimental laboratory prototypes and computer models of reaction-diffusion chemical systems, living swarms, slime mould, plants, liquid marbles and protein polymers. In collision-based computers, logical values are represented by the space-time positions of mobile objects: excitation wavefronts, protoplasmic tubes, solitons, plant roots. Logical gates are realized in interactions between the objects. The experimental and theoretical designs uncovered might not be better than existing silicon architectures, but they offer us worlds of alternative computation. The principles of information processing in spatially extended chemical systems, slime mould, plants and polymers might inspire us to develop efficient algorithms, design optimal architectures and manufacture working prototypes of future and emergent computing devices.
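
A minimal software toy of the collision-based idea (an abstraction of the classic interaction gate, not a model of any specific substrate above): the presence of a mobile object encodes logical 1, and the objects' post-collision trajectories carry different Boolean functions of the inputs:

```python
def interaction_gate(a: bool, b: bool) -> dict:
    """Two objects head for a collision point; each output 'path' is a
    Boolean function of the inputs, as in billiard-ball-style computing."""
    return {
        "deflected path (a AND b)": a and b,
        "a passes straight (a AND NOT b)": a and not b,
        "b passes straight (b AND NOT a)": b and not a,
    }

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), interaction_gate(a, b))
```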

Professor Andrew Adamatzky, University of the West of England, UK

09:45 - 10:15 Computing during development Abstract not yet available.

Professor Nick Monk, University of Sheffield, UK

11:00 - 11:30 Exploiting a structured environment: biochemical approaches for constructing information-harvesting devices

Organisms exploit correlations in their environment to survive and grow. This fact holds across scales, from bacterial chemotaxis, which leverages the spatial clustering of food molecules, to the loss of leaves by deciduous trees, which is worthwhile because sunlight exposure is highly correlated from day to day. At the most basic level, correlations within the environment that are not sustained by physical interactions indicate a non-equilibrium state that can be exploited via information processing.

Here, it is demonstrated that relatively simple molecular systems can extract useful work from correlations ("feed on the information"). The use of concrete biochemical designs helps to demystify the thermodynamic role of information, and highlights the constraints under which information-exploiting devices must operate. Both free-running devices and devices that require external manipulation (but, importantly, not feedback control) are shown. Furthermore, competing strategies for exploiting a correlated environment are contrasted. These simple constructions illustrate a fundamental dilemma between exploring and exploiting a resource.
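
A textbook bound gives a feel for the size of the exploitable resource (notation assumed here; this is not a result quoted from the talk). For a device that randomises a tape of binary symbols whose entropy rate is h bits per symbol:

```latex
% Standard information-reservoir bound (illustrative; notation assumed):
% maximum work extractable per symbol from a tape with entropy rate h
\[
  W_{\max} \;=\; k_B T \ln 2 \,\bigl(1 - h\bigr).
\]
% Correlations push the entropy rate h below the single-symbol entropy
% H(X_1); the gap H(X_1) - h is work available only to devices with
% memory, i.e. those that actually exploit the correlations.
```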

Dr Thomas Ouldridge, Imperial College London, UK

11:45 - 12:15 Programming DNA circuits

Biological organisms use complex molecular networks to navigate their environment and regulate their internal state. The development of synthetic systems with similar capabilities could lead to applications such as smart therapeutics or fabrication methods based on self-organization. To achieve this, molecular control circuits need to be engineered to perform integrated sensing, computation and actuation. In this talk, it will be demonstrated how DNA hybridization and strand displacement can be used to implement the computational core of such control circuits. To assist the design process, domain-specific programming languages have been developed to specify a sequence-level circuit design, and compile to chemical reaction networks, a well-established formalism for describing and simulating chemistry. Furthermore, parameter inference techniques have been embedded in this design platform, which facilitates design-build-test cycles via model-based characterization and circuit prediction. A first example will introduce the design and construction of a DNA implementation of the approximate majority algorithm, which seeks to establish consensus in a population of agents (molecules). A second example will illustrate how DNA circuits can be considerably accelerated by tethering DNA hairpin molecules to a fixed template, overcoming molecular diffusion as a rate-limiting step.
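
A sketch of the approximate-majority scheme at the chemical-reaction-network level (the textbook reactions X + Y -> 2B, B + X -> 2X, B + Y -> 2Y, simulated stochastically; rates and counts are assumptions, not the DNA implementation from the talk):

```python
import random

def approximate_majority(x, y, b=0, max_steps=100_000, seed=3):
    rng = random.Random(seed)
    for _ in range(max_steps):
        rates = [x * y, b * x, b * y]   # mass-action propensities, unit rate constants
        total = sum(rates)
        if total == 0:
            break                       # consensus: one species holds the population
        r = rng.uniform(0, total)
        if r < rates[0]:
            x, y, b = x - 1, y - 1, b + 2   # X + Y -> 2B
        elif r < rates[0] + rates[1]:
            b, x = b - 1, x + 1             # B + X -> 2X
        else:
            b, y = b - 1, y + 1             # B + Y -> 2Y
    return x, y, b

print(approximate_majority(x=60, y=40))  # typically converges to (100, 0, 0)
```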

Dr Neil Dalchau, Microsoft Research, UK

13:30 - 14:00 What time is it?

Even without looking at a watch, we have an inner feeling for time. How do we measure time? Our body, and in particular each of our cells, has an inner clock enabling all our rhythmic biological activities, such as sleeping. Astonishingly, prokaryotic cyanobacteria, which can divide more than once a day, also use an inner timing system to anticipate the daily changes of light and temperature and to regulate their physiology and behavior in 24-hour cycles. Their underlying biochemical oscillator fulfills all criteria of a true circadian clock even though it is made of only three proteins. It ticks with a robust 24-hour period even under fluctuating or continuous conditions. Nevertheless, it can be entrained by light, temperature or nutrients. Reconstituted from the purified protein components KaiC, KaiB, and KaiA, the cyanobacterial protein clock can tick autonomously in a test tube for weeks. This apparent simplicity has made it an ideal system for answering questions about information transfer and robustness of circadian clocks. Computational modeling has identified various aspects of this system, such as nested feedback loops, robustness of the period against noise, and its seemingly conflicting synchronisation with the environment.
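
A toy sketch of entrainment (all parameters assumed; this is a generic phase-oscillator caricature, not the KaiABC phosphorylation dynamics): a self-sustained clock with a free-running period near, but not equal to, 24 hours locks onto a 24-hour light-dark drive:

```python
import numpy as np

def entrained_period(intrinsic_period=25.0, coupling=0.3, days=20, dt=0.01):
    omega0 = 2 * np.pi / intrinsic_period   # intrinsic angular frequency
    omega_L = 2 * np.pi / 24.0              # light-dark driving frequency
    phi, t = 0.0, 0.0
    while t < days * 24:
        # Adler-type equation: weak sinusoidal forcing toward the drive's phase
        phi += dt * (omega0 + coupling * np.sin(omega_L * t - phi))
        t += dt
    return 2 * np.pi * t / phi              # effective period over the run

print(f"entrained period ~ {entrained_period():.2f} h")  # locks near 24 h
```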

Professor Dr Ilka Maria Axmann, Heinrich-Heine-Universität Düsseldorf, Germany

14:15 - 14:45 Dynamics, feedback, and noise in natural and synthetic gene regulatory networks

Cells live in uncertain, dynamic environments and have many feedback mechanisms for sensing and responding to changes in their surroundings. This talk will discuss examples of both natural and engineered feedback circuits and how they can be used to alter the dynamics of gene expression. Using a combination of time-lapse microscopy experiments and stochastic modeling, the talk will show how E. coli bacteria use feedback to generate dynamics and noise in the expression of a key regulatory protein, providing transient antibiotic resistance at the single-cell level. In addition, it will highlight diverse examples of how feedback can allow cells to respond to changing environments.
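
A minimal stochastic sketch (rates and the feedback form are arbitrary assumptions, not the talk's E. coli circuit): Gillespie-style simulation of protein copy number with and without negative autoregulation, comparing the resulting noise via the Fano factor:

```python
import random

def simulate(feedback, t_end=2000.0, seed=4):
    rng = random.Random(seed)
    n, t, samples = 0, 0.0, []
    while t < t_end:
        k_prod = 10.0 / (1 + n / 20.0) if feedback else 5.0  # production rate
        k_deg = 0.1 * n                                      # degradation rate
        total = k_prod + k_deg
        t += rng.expovariate(total)                          # time to next event
        if rng.uniform(0, total) < k_prod:
            n += 1
        else:
            n -= 1
        samples.append(n)          # per-event samples (untimed; rough but adequate)
    mean = sum(samples) / len(samples)
    fano = sum((s - mean) ** 2 for s in samples) / len(samples) / mean
    return mean, fano

for fb in (False, True):
    print(f"feedback={fb}: mean, Fano = {simulate(fb)}")  # feedback lowers the Fano factor
```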

Professor Mary Dunlop, Boston University, USA

15:30 - 16:00 Computing with motile biological agents exploring networks

Many mathematical problems, e.g., in cryptography and network routing, require the exploration of a large number of candidate solutions. Because the time required to solve these problems grows exponentially with their size, electronic computers, which operate sequentially, cannot solve them in a reasonable time. In contrast, biological organisms routinely process information in parallel for essential tasks, e.g., foraging and searching for space. This opens three possible biocomputing avenues.

Biomimetic algorithms translate biological procedures, e.g., space searching, chemotaxis, etc., into mathematical algorithms. This approach was used to derive fungi-inspired algorithms for searching space and bacterial chemotaxis-inspired algorithms for finding the edges of geometrical patterns.

Biosimulation uses the procedures of large numbers of motile biological agents directly, without any translation into formal mathematical algorithms, thus by-passing computation proper. The agents explore complex networks that mimic real situations, e.g., traffic. This approach has focused almost entirely on traffic optimization, using amoeboid organisms placed in confined geometries, with chemotactic ‘cues’, e.g., nutrients, placed at node coordinates.

Computing with biological agents in networks uses very large numbers of agents exploring microfluidic networks purposefully designed to encode hard mathematical problems. The foundations were reported of a parallel-computation system in which a combinatorial problem (SUBSET SUM) is encoded into a graphical, modular network embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent agents, e.g., molecular-motor-propelled cytoskeletal filaments, bacteria, or algae, solves the mathematical problem. This device uses orders of magnitude less energy than conventional computers, additionally addressing issues related to the implementation of parallel computing.
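
A software abstraction of the network encoding (this mimics the combinatorial structure described above, not the physical device): at the junction row for element a_i, each agent either keeps its lane (exclude a_i) or shifts by a_i lanes (include it), so the occupied exit lanes enumerate the achievable subset sums:

```python
def reachable_sums(elements):
    lanes = {0}                                  # all agents start in lane 0
    for a in elements:
        lanes |= {lane + a for lane in lanes}    # split junction: pass, or shift by a
    return lanes

print(sorted(reachable_sums([2, 5, 9])))
# -> [0, 2, 5, 7, 9, 11, 14, 16]: every subset sum of {2, 5, 9}
```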

Professor Dan V. Nicolau, McGill University, Canada

16:15 - 17:00 Panel discussion