
Verified trustworthy software systems

Event

Starts: 4 April 2016, 09:00

Ends: 5 April 2016, 17:00

Location

The Royal Society, 6-9 Carlton House Terrace, London, SW1Y 5AG

Overview

Scientific discussion meeting organised by Professor Philippa Gardner, Professor Peter O'Hearn, Professor Mike Gordon FRS, Professor Greg Morrisett and Professor Fred B. Schneider

Verification is used to find bugs, and to describe and prove properties of industrial software.

Synopsis

Software faults cause widespread problems in society. Symbolic verification provides promising solutions to these problems. The field is coming of age: verification techniques scale; tools are tractable; and real-world programs can be verified. This meeting brings together academics, industrial experts and government scientists to identify key challenges in establishing verification at the heart of the software design process. 

Download the meeting programme (pdf).

Expand the speaker profiles to listen to audio recordings of the talks.

This event has now taken place.

This meeting is intended for researchers, industrialists and government employees interested in software verification.

Enquiries: Contact the events team 


Schedule of talks

04 April

09:00-12:30

Session 1

4 talks

Chairs

Dr John Launchbury

09:10-09:45 Industrial hardware and software verification with ACL2

Professor J Strother Moore, University of Texas at Austin, USA

Abstract

ACL2 has seen sustained use over the past decade or two at various companies including AMD, Centaur, Intel, Kestrel Institute, Oracle, and Rockwell-Collins. This paper focuses on how and why ACL2 is used in industry. The industrial applications of ACL2 include verification of the functional correctness of floating-point hardware designs in microcode, RTL, and Verilog, verification of other properties of such circuits, analysis of other transition systems including reactive systems and asynchronous transaction systems, basic results for elliptic curve cryptography, verification of information flow properties of operating systems, and modelling and verification of software in low level languages including machine code, microcode, JVM bytecode, and LLVM. ACL2 is well-suited to many of these applications because it is an integrated programming/proof environment supporting a subset of the ANSI standard Common Lisp programming language. As a programming language ACL2 permits the coding of efficient and robust programs; as a prover ACL2 can be fully automatic but provides many features permitting domain specific human-supplied guidance at various levels of abstraction. ACL2 specifications and models often serve as efficient execution engines for the modelled artefacts while permitting formal analysis and proof of properties. ACL2 also provides support for the development and verification of other formal analysis tools. However, ACL2 did not find its way into industrial use merely because of its technical features. The core ACL2 user/development community has a shared vision of making mechanised verification routine when appropriate and has been committed to this vision for the quarter century since the Computational Logic, Inc., Verified Stack. The community has focused on demonstrating the viability of the tool by taking on industrial projects (often at the expense of not being able to publish much) and is responsive to the needs of the industrial users in the community.


09:45-10:15 miTLS: Verified reference implementations for TLS

Abstract


10:45-11:20 Using formal methods to eliminate exploitable bugs

Professor Kathleen Fisher, Tufts University, USA

Abstract

For decades, formal methods have offered the promise of software that doesn’t have exploitable bugs. Until recently, however, it hasn’t been possible to verify software of sufficient complexity to be useful. That situation has now changed. seL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. It has been proven to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, etc., and to enforce integrity and confidentiality properties. The CompCert verified C compiler maps source C programs to provably equivalent assembly language, ensuring the absence of exploitable bugs in the compiler. A number of factors have enabled this revolution in the formal methods community, including increased processor speed, better infrastructure like the Isabelle/HOL and Coq theorem provers, specialised logics for reasoning about low-level code, increasing levels of automation afforded by tactic languages and SAT/SMT solvers, and the decision to move away from trying to verify existing artefacts and instead focus on co-developing the code and the correctness proof. In this talk I will explore the promise and limitations of current formal methods techniques for producing useful software that provably does not contain exploitable bugs. I will discuss these issues in the context of DARPA’s HACMS program, which has as its goal the creation of high-assurance software for vehicles, including quad-copters, helicopters, and automobiles.


11:20-11:55 Formal verification: will the seeding ever flower?

Neil White, Altran, UK

Abstract

In one sense, formal specification and verification has been highly successful: techniques have been developed in pioneering academic research, transferred to software companies through training and partnerships, and successfully deployed into systems with national significance. Altran UK have been in the vanguard of this movement and the talk will summarise some of the key deployments of formal techniques over the past 20 years. However, the impact of formal techniques remains within an industrial niche, and whilst government and OEMs across industry search for solutions to the problems of poor quality code, the wider software industry remains resistant to adoption of this proven solution. The talk will conclude by reflecting on some of the challenges we face as a community in ensuring that formal techniques achieve their true potential impact for society.


13:30-17:00

Session 2

4 talks

Chairs

Dr Warren Hunt

13:35-14:10 Program analysis using syntax-guided synthesis engines

Professor Daniel Kroening, University of Oxford, UK

Abstract

We show how a broad range of program analysis tasks can be solved using a second-order satisfiability solver, and present an instance of this approach that uses expression synthesis to solve quantified formulas with bit-vector constraints.
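The talk's approach can be illustrated in miniature with a counterexample-guided synthesis (CEGIS) loop over a small bit-vector domain. This is a hypothetical sketch, not the solver described in the abstract: a real tool would discharge both the synthesis and verification steps with an SMT solver, whereas here both are brute-force searches, which is enough to show the loop structure.

```python
# Counterexample-guided synthesis (CEGIS) in miniature: find an 8-bit
# constant c such that  x * c == x << 3  (mod 256)  for every x.
# Both the "synthesiser" and the "verifier" are brute-force searches
# over the small domain; a real engine would use an SMT solver.

MASK = 0xFF  # 8-bit bit-vectors

def spec(x, c):
    return (x * c) & MASK == (x << 3) & MASK

def synthesize():
    counterexamples = [0]            # start from one seed input
    while True:
        # Synthesis step: find a candidate consistent with all
        # counterexamples collected so far.
        candidate = next(c for c in range(256)
                         if all(spec(x, c) for x in counterexamples))
        # Verification step: check the candidate on every input.
        cex = next((x for x in range(256) if not spec(x, candidate)), None)
        if cex is None:
            return candidate         # verified for all 8-bit inputs
        counterexamples.append(cex)  # refine and iterate

print(synthesize())  # prints 8, since x*8 == x<<3 (mod 256)
```

The loop alternates between guessing a candidate that explains the inputs seen so far and searching for an input that refutes it, terminating when no refutation exists.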


14:10-14:45 Provably trustworthy systems

Professor Gerwin Klein, Data61 and UNSW Australia, Australia

Abstract

While many software systems are trusted, few are worthy of that trust. Formal proof, which can provide strong evidence for the highest levels of trustworthiness, is often discounted as infeasible for real software systems.

To show that this does not need to be the case, I will give an overview of the formal verification of the seL4 microkernel, from high-level classical security properties down to the binary code level. This verification was cheaper than current best practice industrial methods for achieving similar levels of assurance. However, it is still more expensive than standard software construction, which delivers reasonable quality, albeit not high assurance. I will highlight our current research in proof engineering and code/proof co-generation to make deep formal software verification economically more viable for high-quality software.


15:10-15:45 Using Crash Hoare logic for certifying the FSCQ file system

Professor Nickolai Zeldovich, MIT CSAIL, USA

Abstract

FSCQ is the first file system with a machine-checkable proof (using the Coq proof assistant) that its implementation meets its specification and whose specification includes crashes.  FSCQ provably avoids bugs that have plagued previous file systems, such as performing disk writes without sufficient barriers or forgetting to zero out directory blocks.  If a crash happens at an inopportune time, these bugs can lead to data loss.  FSCQ's theorems prove that, under any sequence of crashes followed by reboots, FSCQ will recover the file system correctly without losing data.

To state FSCQ's theorems, we introduce the Crash Hoare logic (CHL), which extends traditional Hoare logic with a crash condition, a recovery procedure, and logical address spaces for specifying disk states at different abstraction levels.  CHL also reduces the proof effort for developers through proof automation.  Using CHL, we developed, specified, and proved the correctness of the FSCQ file system.  Although FSCQ's design is relatively simple, experiments with FSCQ running as a user-level file system show that it is sufficient to run Unix applications with usable performance.  FSCQ's specifications and proofs required significantly more work than the implementation, but the work was manageable even for a small team of a few researchers.
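The crash-condition idea can be illustrated with a toy simulation. This is a hedged sketch, not FSCQ's actual code or proof: it performs an atomic single-value update via a write-ahead log, simulates a crash after every prefix of the low-level disk writes, and checks that recovery always restores either the old value or the new one, never a torn state.

```python
# Toy illustration of a crash condition: an atomic update via a
# write-ahead log, checked under a crash at every possible point.

def update(new):
    """The low-level disk writes for one logged update."""
    return [("log", new), ("commit", True), ("data", new),
            ("commit", False)]

def recover(disk):
    if disk["commit"]:            # crashed after commit: replay the log
        disk["data"] = disk["log"]
        disk["commit"] = False
    return disk                   # crashed before commit: log is ignored

def check():
    writes = update("new")
    for crash_point in range(len(writes) + 1):
        disk = {"data": "old", "log": None, "commit": False}
        for key, value in writes[:crash_point]:   # crash mid-sequence
            disk[key] = value
        recover(disk)
        assert disk["data"] in ("old", "new")     # no torn writes
        assert not disk["commit"]
    return "ok"

print(check())  # prints "ok"
```

CHL generalises this exhaustive check into a logic: the crash condition describes every intermediate disk state, and the recovery procedure's specification must re-establish the abstraction from any of them.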


15:45-16:20 Industrial concurrency specification for C/C++ and Nvidia GPUs

Dr Mark Batty, University of Kent, UK

Abstract

Modern computer systems have intricate memory hierarchies that violate the intuition of a global timeline of interleaved memory accesses. In these so-called relaxed-memory systems, it can be dangerous to use intuition, specifications are universally unreliable, and the outcome of testing is both wildly nondeterministic and dependent on hardware that is getting more permissive of odd behaviour with each generation. These complications pervade the whole system, and have led to bugs in language specifications, deployed processors, compilers, and vendor-endorsed programming idioms – it is clear that current engineering practice is severely lacking.

I will describe my part of a sustained effort in the academic community to tackle these problems by applying a range of techniques, from exhaustive testing to mechanised formal specification and proof. I will focus on two cases of industrial impact: first, the formalisation, refinement and validation of the 2011/14 C and C++ concurrency design, leading to changes in the ratified international standard; and second, an engagement with Nvidia, where I have helped to define and validate the relaxed memory model of their next generation of graphics processors. This work represents an essential enabling step for verification of modern concurrent systems.


05 April

09:00-12:30

Session 3

4 talks

Chairs

Professor Gordon Plotkin

09:00-09:35 Moving fast with software verification

Professor Peter O'Hearn, Facebook and University College London, UK

Abstract

This is a story of transporting ideas from theoretical research in reasoning about programs into the fast-moving engineering culture of Facebook. The context is that I landed at Facebook in September of 2013, when we brought the Infer static analyser with us from the verification startup Monoidics. Infer is based on recent research in program analysis, which applied a relatively recent development in logics of programs, separation logic. Infer is deployed internally, running continuously to verify select properties of every code modification in Facebook’s mobile apps; these include the main Facebook apps for Android and iOS, Facebook Messenger, Instagram, and other apps which are used by over a billion people in total. This talk describes our experience deploying verification technology inside Facebook, some of the challenges we faced and lessons learned, and speculates on prospects for broader impact of verification technology.


09:35-10:10 Understanding and verifying JavaScript programs

Professor Philippa Gardner, Imperial College London, UK

Abstract

JavaScript is the most widely used language for client-side web applications. The dynamic nature of JavaScript, together with its complicated semantics, makes the understanding and development of JavaScript code notoriously difficult. Because of this, JavaScript programmers still have very little tool support for catching errors early in development, compared with the abundance of tools (such as IDEs and specialised static analysis tools) that are available for more traditional languages such as C and Java.

The development of such analysis tools for JavaScript web applications is known to be a challenging task. In this talk, I will describe:

--JSCert, a Coq mechanised specification of JavaScript which is line-by-line close to the English standard.  It has an accompanying reference interpreter, which is provably correct with respect to JSCert and tested using the official test suite.

--JSIL, an intermediate language for JavaScript with a semantics-driven compiler which has been proved correct for a significant fragment and properly tested. In future, we will develop JSIL front-ends for verification tools based on symbolic execution, such as CBMC used by Amazon, Infer developed at Facebook and Rosette.

--JSVerify, a fledgling symbolic execution tool which has currently been used to verify JSIL implementations of the internal and built-in functions of JavaScript. In future, we will develop a substantial JavaScript verification tool, requiring a formal link between the JavaScript and JSIL program logics, a large engineering effort to make the verification tractable, and the verification of substantial program examples such as those from Google’s V8 array library.

Our ultimate goal is to develop a trusted service for understanding and verifying JavaScript programs.


10:40-11:15 Avoiding fatal flaws with formal methods

Professor Fred B. Schneider, Cornell University, USA

Abstract

Formal methods are once again in vogue as a tool for building software systems that can resist attacks. The approach, however, has disappointed before. What might the future hold? We discuss the reasons for those past failures, and we explore what has changed (as well as what hasn't).


11:15-11:50 A rigorous approach to consistency in cloud databases

Dr Alexey Gotsman, IMDEA Software Institute, Spain

Abstract

Modern databases underlying large-scale cloud services guarantee immediate availability and tolerate network failures at the expense of providing only weak data consistency guarantees. Unfortunately, we currently lack sophisticated theory and tools that would help programmers in using such databases correctly despite the effects of weak consistency. I will present our ongoing work that uses formal techniques to study how weak consistency in cloud databases affects the correctness of applications using them. Building on it, we aim to improve the programmability and performance of cloud databases.
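A small sketch, not taken from the talk's formalism, shows the kind of anomaly weak consistency introduces and how a convergent data type avoids it: two replicas accept writes concurrently and exchange state later. A naive register that syncs by overwriting silently loses one increment, while a grow-only counter (a simple CRDT) merges by pointwise maximum, so concurrent increments commute and both replicas converge to the correct total.

```python
# Weakly consistent replicas: a naive register loses an update,
# while a grow-only counter CRDT converges correctly.

# Naive register: each replica reads 0, adds 1 locally, then "syncs"
# by taking the other replica's value -- one increment is lost.
a = b = 0
a += 1          # replica A increments concurrently...
b += 1          # ...with replica B
a = b           # sync by overwrite
assert a == 1   # two increments happened, but the register shows 1

# G-counter CRDT: each replica owns a slot; merge is pointwise max.
def merge(c1, c2):
    return {k: max(c1.get(k, 0), c2.get(k, 0))
            for k in c1.keys() | c2.keys()}

ca, cb = {"A": 0, "B": 0}, {"A": 0, "B": 0}
ca["A"] += 1
cb["B"] += 1
ca, cb = merge(ca, cb), merge(cb, ca)
assert sum(ca.values()) == sum(cb.values()) == 2  # replicas converge
print("converged total:", sum(ca.values()))
```

Reasoning formally about which application invariants survive such merge-based propagation is exactly the problem the talk addresses.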


13:30-17:00

Session 4

4 talks

Chairs

Professor David Sands, Chalmers University of Technology

13:30-14:05 imPACT: Privacy, Accountability, Compliance and Trust in tomorrow's internet

14:05-14:40 Combining static analysis and machine learning for industrial-quality information-flow-security enforcement

Marco Pistoia

Abstract

Security auditing of industry-scale software systems mandates automation. Static taint analysis enables deep and exhaustive tracking of suspicious data flows for detection of potential leakage and integrity violations, such as cross-site scripting (XSS), SQL injection (SQLi) and log forging. Research in this area has taken two directions: program slicing and type systems. Both of these approaches suffer from a high rate of false findings, which limits the usability of analysis tools. Attempts to reduce the number of false findings have resulted in analyses that are either unsound (suffering from the dual problem of false negatives) or too expensive (due to their high precision, thereby failing to scale to real-world applications). We investigate a novel approach that combines static analysis and machine learning for increasing analysis scalability while reducing the number of false positives. From a static-analysis perspective, the key observation is that taint analysis is a demand-driven problem. This enables lazy and scalable computation of vulnerable information flows. With the analysis being scalable to large codebases, the user is still left to review thousands of warnings, and classify them as either actionable or spurious. This is both burdensome and error prone, leaving developers disenchanted. We address this challenge by applying statistical learning to the warnings output by the analysis based on user feedback on a small set of warnings. This leads to an interactive solution, whereby the user classifies a small fragment of the issues reported by the analysis, and the learning algorithm then classifies the remaining warnings automatically.
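The interactive triage loop can be sketched as follows. This is a hypothetical illustration with made-up warning data, not the IBM tool: the analysis emits many taint warnings, the user labels a small sample, and a simple nearest-neighbour classifier over coarse warning features predicts labels for the rest. A production system would use far richer features and a proper statistical model.

```python
# Sketch of warning triage: label a few taint warnings by hand,
# classify the rest by nearest neighbour over feature vectors.

def hamming(w1, w2):
    # Number of differing feature positions.
    return sum(a != b for a, b in zip(w1, w2))

def classify(warning, labelled):
    # 1-NN: copy the label of the most similar labelled warning.
    return min(labelled, key=lambda lw: hamming(warning, lw[0]))[1]

# Feature vectors: (source kind, sink kind, passes through sanitizer?)
# All entries below are invented for illustration.
labelled = [
    (("http_param", "sql_query", False), "actionable"),
    (("http_param", "sql_query", True),  "spurious"),
    (("config_file", "log_write", False), "spurious"),
]
unlabelled = [
    ("http_param", "html_output", False),   # XSS-shaped flow
    ("config_file", "sql_query", True),     # sanitised, trusted source
]
for w in unlabelled:
    print(w, "->", classify(w, labelled))
```

The first unlabelled warning is closest to the unsanitised user-input flow and is predicted actionable; the second is closest to the sanitised and trusted-source examples and is predicted spurious.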


15:10-15:45 Trust in programming tools: the formal verification of compilers and static analysers

Dr Xavier Leroy, Inria Paris, France

Abstract

The formal verification of software has made great progress since 2000, resulting in verification tools that establish crucial safety and security properties of software and are beginning to be used in industry. However, the confidence we can have in the results of these tools is limited by two risks: unsoundness of verification tools, which could fail to account for all behaviours of the analysed program; and miscompilation of the verified program, causing incorrect executable code to be produced from a correct program. Those risks can be ruled out by applying program proof to the verification tools and to compilers themselves. In this talk I will briefly present two such formally-verified tools developed at Inria: the CompCert C compiler and the Verasco static analyser based on abstract interpretation. I will conclude with some future directions and challenges in this line of work.


15:45-16:20 Mind the gap: from crypto to code

Professor Greg Morrisett, Cornell University, USA

Abstract

The field of cryptography has led the way in casting questions of security and privacy into a mathematical science. But there is still a large gap between the paper-and-pencil proofs of security done by cryptographers, and the code that is intended to realize them. I will discuss recent advances in formulating machine-checked proofs of security, not only for protocols, but also for the actual code.
