Links to external sources may no longer work as intended. The content may not represent the latest thinking in this area or the Society’s current position on the topic.
Verified trustworthy software systems
Scientific discussion meeting organised by Professor Philippa Gardner, Professor Peter O'Hearn, Professor Mike Gordon FRS, Professor Greg Morrisett and Professor Fred B. Schneider
Synopsis
Software faults cause widespread problems in society. Symbolic verification provides promising solutions to these problems. The field is coming of age: verification techniques scale; tools are tractable; and real-world programs can be verified. This meeting brings together academics, industrial experts and government scientists to identify key challenges in establishing verification at the heart of the software design process.
Download the meeting programme (pdf).
Expand the speaker profiles below to find audio recordings of the talks.
This event has now taken place.
This meeting is intended for researchers, industrialists and government employees interested in software verification.
Enquiries: Contact the events team
Organisers
Schedule
Chair
Dr John Launchbury
Dr John Launchbury is the Director of the Information Innovation Office (I2O) at DARPA. In this role he develops strategy, and works with I2O program managers to develop new programs and transition program products.
Before joining DARPA, Dr Launchbury was chief scientist of Galois, Inc., which he founded in 1999 to address challenges in information assurance through the application of functional programming and formal methods. Under his leadership, the company experienced strong growth and was recognized for thought leadership in high-assurance technology development.
Prior to founding Galois, Dr Launchbury was a full professor at the OGI School of Science and Engineering at OHSU (Oregon). He earned awards for outstanding teaching and gained international recognition for his work on the analysis and semantics of programming languages, the Haskell programming language in particular.
Dr Launchbury received first-class honours in mathematics from Oxford University, holds a Ph.D. in computing science from the University of Glasgow and won the British Computer Society's distinguished dissertation prize. In 2010, Dr Launchbury was inducted as a Fellow of the Association for Computing Machinery (ACM).
09:10 - 09:45
Industrial hardware and software verification with ACL2
ACL2 has seen sustained use over the past decade or two at various companies including AMD, Centaur, Intel, Kestrel Institute, Oracle, and Rockwell Collins. This paper focuses on how and why ACL2 is used in industry. The industrial applications of ACL2 include verification of the functional correctness of floating-point hardware designs in microcode, RTL, and Verilog, verification of other properties of such circuits, analysis of other transition systems including reactive systems and asynchronous transaction systems, basic results for elliptic curve cryptography, verification of information flow properties of operating systems, and modelling and verification of software in low-level languages including machine code, microcode, JVM bytecode, and LLVM. ACL2 is well-suited to many of these applications because it is an integrated programming/proof environment supporting a subset of the ANSI standard Common Lisp programming language. As a programming language ACL2 permits the coding of efficient and robust programs; as a prover ACL2 can be fully automatic but provides many features permitting domain-specific, human-supplied guidance at various levels of abstraction. ACL2 specifications and models often serve as efficient execution engines for the modelled artefacts while permitting formal analysis and proof of properties. ACL2 also provides support for the development and verification of other formal analysis tools. However, ACL2 did not find its way into industrial use merely because of its technical features. The core ACL2 user/development community has a shared vision of making mechanised verification routine when appropriate, and has been committed to this vision for the quarter century since the Computational Logic, Inc., Verified Stack. The community has focused on demonstrating the viability of the tool by taking on industrial projects (often at the expense of not being able to publish much) and is responsive to the needs of the industrial users in the community.
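As a schematic illustration (our gloss, not a theorem from any of the projects above), the machine-code and microcode results mentioned here typically take the form of a commuting-diagram theorem relating an executable implementation model to its specification:

$$\forall s.\ \mathit{good}(s) \;\Rightarrow\; \mathit{abs}(\mathit{step}_{\mathrm{impl}}(s)) \;=\; \mathit{step}_{\mathrm{spec}}(\mathit{abs}(s))$$

Read: running the implementation for a step and then abstracting the state yields the same result as abstracting first and running the specification. Because ACL2 models are executable Lisp, the same $\mathit{step}_{\mathrm{impl}}$ that is proved correct can also run as a fast simulator for the modelled artefact.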
Professor J Strother Moore, University of Texas at Austin, USA
J Strother Moore held the Admiral B.R. Inman Centennial Chair in Computing Theory at the University of Texas at Austin until his retirement in 2015. He is the author of many books and papers on automated theorem proving and mechanical verification of computing systems. Along with Boyer he is a co-author of the Boyer-Moore theorem prover and the Boyer-Moore fast string searching algorithm. With Matt Kaufmann he is the co-author of the ACL2 theorem prover. In 1999, Boyer and Moore were awarded the Herbrand Award for their work in automatic theorem proving. Boyer, Moore, and Kaufmann were awarded the 2005 ACM Software Systems Award for the Boyer-Moore theorem prover. Moore is a Fellow of the American Association for Artificial Intelligence, the ACM, the Royal Society of Edinburgh (Corresponding Fellow), and the US National Academy of Engineering.
09:45 - 10:15
miTLS: Verified reference implementations for TLS
10:45 - 11:20
Using formal methods to eliminate exploitable bugs
For decades, formal methods have offered the promise of software that doesn’t have exploitable bugs. Until recently, however, it hasn’t been possible to verify software of sufficient complexity to be useful. That situation has now changed. seL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. It has been proven to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, and the like, and to enforce integrity and confidentiality properties. The CompCert verified C compiler maps source C programs to provably equivalent assembly language, ensuring that the compiler introduces no exploitable bugs. A number of factors have enabled this revolution in the formal methods community, including increased processor speed, better infrastructure like the Isabelle/HOL and Coq theorem provers, specialised logics for reasoning about low-level code, increasing levels of automation afforded by tactic languages and SAT/SMT solvers, and the decision to move away from trying to verify existing artefacts and instead focus on co-developing the code and the correctness proof. In this talk I will explore the promise and limitations of current formal methods techniques for producing useful software that provably does not contain exploitable bugs. I will discuss these issues in the context of DARPA’s HACMS program, which has as its goal the creation of high-assurance software for vehicles, including quad-copters, helicopters, and automobiles.
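For concreteness, here is a minimal sketch (illustrative only; not code from seL4, CompCert, or HACMS) of the kind of exploitable bug that a proof of full functional correctness rules out by construction:

```c
#include <string.h>

/* Copying attacker-controlled data into a fixed-size stack buffer with
   no bounds check: the classic exploitable buffer overflow. */
void handle_packet(const char *payload)
{
    char buf[64];
    strcpy(buf, payload);   /* overflows buf whenever payload is 64+ bytes */
    /* ... parse the packet in buf ... */
}

int main(void)
{
    handle_packet("hello");   /* fine; a long payload would corrupt the stack */
    return 0;
}
```

A functional-correctness proof obliges the developer to show that every execution matches an abstract specification, and an out-of-bounds write has no corresponding behaviour in the specification, so code like the above simply cannot be verified.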
Professor Kathleen Fisher, Tufts University, USA
Kathleen Fisher is Professor in the Computer Science Department at Tufts University. Previously, she was a Principal Member of the Technical Staff at AT&T Labs Research, a Consulting Faculty Member in the Computer Science Department at Stanford University, and a program manager at DARPA where she started and managed the HACMS and PPAML programs. Kathleen's research focuses on advancing the theory and practice of programming languages and on applying ideas from the programming language community to the problem of ad hoc data management. The main thrust of her work has been in domain-specific languages to facilitate programming with massive amounts of ad hoc data. Kathleen is an ACM Fellow. She has served as program chair for FOOL, ICFP, CUFP, and OOPSLA and as general chair for ICFP. Kathleen is past Chair of the ACM Special Interest Group in Programming Languages (SIGPLAN), past Co-Chair of CRA's Committee on the Status of Women (CRA-W), a former editor of the Journal of Functional Programming, and an associate editor of TOPLAS. Kathleen is a recipient of SIGPLAN’s Distinguished Service Award.
11:20 - 11:55
Formal verification: will the seedling ever flower?
In one sense, formal specification and verification have been highly successful: techniques have been developed in pioneering academic research, transferred to software companies through training and partnerships, and successfully deployed in systems of national significance. Altran UK has been in the vanguard of this movement, and the talk will summarise some of the key deployments of formal techniques over the past 20 years. However, the impact of formal techniques remains confined to an industrial niche: whilst governments and OEMs across industry search for solutions to the problem of poor-quality code, the wider software industry remains resistant to adopting this proven solution. The talk will conclude by reflecting on some of the challenges we face as a community in ensuring that formal techniques achieve their true potential impact for society.
Neil White, Altran, UK
Neil White is the Head of Engineering for Altran UK. He has worked on bespoke safety-critical software for nearly 20 years. In that time he has delivered signature projects in a variety of markets, including air traffic control, aerospace, defence, and finance. His projects have included the first deployments to new regulatory standards and the industrial introduction of new technologies. He is a long-term proponent of formal methods and demonstrably correct systems. He is currently interested in the sustainable evolution required to keep high-tech companies at the forefront of their markets.
Chair
Dr Warren Hunt
Dr Warren A. Hunt, Jr. is a Professor at the University of Texas Computer Science Department, where he teaches formal methods and computer architecture, and where he investigates and develops methods for microprocessor specification and program verification, automated theorem-proving methods, and computational biology tools. Dr Hunt is currently the PI for DARPA's CRASH effort at UT.
Dr Hunt has been active in the hardware verification area for more than 25 years, and he has applied formal verification tools and methods to a litany of microprocessor designs: FM8501, FM8502, FM9001, Motorola CAP DSP, FM9801, VIA Nano, and Oracle SPARC. Dr Hunt carried out the first complete mechanical verification of a microprocessor design in 1985, and he, along with Bishop Brock, specified, designed, and mechanically verified the 32-bit FM9001, the first and only such verified microprocessor ever to be built. Dr Hunt is the steering committee chairman of the FMCAD conference series, and he serves as an associate editor of the journal Formal Methods in System Design.
Prior to his 2002 arrival at UT, Dr Hunt worked as a Research Staff Member and Manager at IBM's Austin Research Laboratory from 1997 to 2002, where he was involved with formal verification and high-performance computing as one of the founders and PIs of IBM's DARPA PERCS project. From 1986 until 1997, he served as Vice President of Hardware Engineering at Computational Logic, Inc. From 1982 until 1985, Hunt served as Hardware and Systems Manager for Cyb Systems. Dr Hunt has a BSEE from Rice University and a PhD in computer science from UT Austin.
13:35 - 14:10
Program analysis using syntax-guided synthesis engines
We show how a broad range of program analysis tasks can be solved using a second-order satisfiability solver, and present an instance of this approach that uses expression synthesis to solve quantified formulas with bit-vector constraints.
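As one standard instance (a sketch of the general setting, not necessarily the formulation used in the talk), proving a safety property of a transition system can be posed as a second-order query asking for an inductive invariant $I$:

$$\exists I.\ \forall s, s'.\ \big(\mathit{Init}(s) \Rightarrow I(s)\big) \,\land\, \big(I(s) \land \mathit{Trans}(s, s') \Rightarrow I(s')\big) \,\land\, \big(I(s) \Rightarrow \mathit{Safe}(s)\big)$$

A syntax-guided synthesis engine discharges the existential by searching a grammar of candidate expressions, here over bit-vectors, for a witness for $I$.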
Professor Daniel Kroening, University of Oxford, UK
Daniel Kroening received the M.E. and doctoral degrees in computer science from Saarland University, Saarbrücken, Germany, in 1999 and 2001, respectively. He joined the Model Checking group in the Computer Science Department at Carnegie Mellon University, Pittsburgh, PA, USA, in 2001 as a postdoc. He was an assistant professor at the Swiss Federal Institute of Technology (ETH) in Zurich, Switzerland, from 2004 to 2007. He is now Professor of Computer Science at the University of Oxford. His research interests include automated formal verification of hardware and software systems, concurrency, decision procedures, embedded systems, and hardware/software co-design.
14:10 - 14:45
Provably trustworthy systems
While many software systems are trusted, few are worthy of that trust. Formal proof, which can provide strong evidence for the highest levels of trustworthiness, is often discounted as infeasible for real software systems. To show that this need not be the case, I will give an overview of the formal verification of the seL4 microkernel, from high-level classical security properties down to the binary code level. This verification was cheaper than current best-practice industrial methods for achieving similar levels of assurance. However, it is still more expensive than standard software construction, which delivers reasonable quality, albeit not high assurance. I will highlight our current research in proof engineering and code/proof co-generation to make deep formal software verification economically more viable for high-quality software.
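Roughly (our sketch of the published proof structure, not a formal statement), the seL4 verification stacks up as a chain of refinements, so that properties established of the abstract specification carry down to the kernel binary:

$$\mathit{behaviours}(\mathit{binary}) \;\subseteq\; \mathit{behaviours}(\mathit{C\ code}) \;\subseteq\; \mathit{behaviours}(\mathit{abstract\ spec})$$

Security properties such as integrity and confidentiality are proved over the abstract specification and then shown to transfer down the chain, so the running binary cannot exhibit behaviour those proofs exclude.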
Professor Gerwin Klein, Data61 and UNSW Australia, Australia
Gerwin Klein is a Senior Principal Researcher at Data61 (previously NICTA), and Conjoint Professor at UNSW in Sydney, Australia. He is leading Data61's Formal Methods research in the Trustworthy Systems Group and is the leader of the seL4 verification effort that created the first machine-checked proof of functional correctness of a general-purpose microkernel. He joined NICTA in 2003 after receiving his PhD from Technische Universität München, Germany, where he formally proved type-safety of the Java Bytecode Verifier in the theorem prover Isabelle/HOL. His research interests are in interactive formal verification, programming languages, and systems code. Gerwin Klein has won a number of awards, among them together with his team the 2011 MIT TR-10 award for the top ten emerging technologies world-wide.
15:10 - 15:45
Using Crash Hoare logic for certifying the FSCQ file system
FSCQ is the first file system with a machine-checkable proof (using the Coq proof assistant) that its implementation meets its specification and whose specification includes crashes. FSCQ provably avoids bugs that have plagued previous file systems, such as performing disk writes without sufficient barriers or forgetting to zero out directory blocks. If a crash happens at an inopportune time, these bugs can lead to data loss. FSCQ's theorems prove that, under any sequence of crashes followed by reboots, FSCQ will recover the file system correctly without losing data. To state FSCQ's theorems, we introduce the Crash Hoare logic (CHL), which extends traditional Hoare logic with a crash condition, a recovery procedure, and logical address spaces for specifying disk states at different abstraction levels. CHL also reduces the proof effort for developers through proof automation. Using CHL, we developed, specified, and proved the correctness of the FSCQ file system. Although FSCQ's design is relatively simple, experiments with FSCQ running as a user-level file system show that it is sufficient to run Unix applications with usable performance. FSCQ's specifications and proofs required significantly more work than the implementation, but the work was manageable even for a small team of a few researchers.
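Schematically (a sketch based on the description above, not FSCQ's concrete Coq syntax), a CHL specification extends the usual Hoare triple with a crash condition:

$$\{\,P\,\}\ \mathit{proc}\ \{\,Q\,\}\ \{\,C\,\}$$

If $P$ holds initially, then $Q$ holds whenever $\mathit{proc}$ runs to completion, and $C$ holds at every intermediate point at which a crash could strike; the recovery procedure is then verified to restore a consistent state from any disk satisfying $C$.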
Professor Nickolai Zeldovich, MIT CSAIL, USA
Nickolai Zeldovich is an Associate Professor at MIT's department of Electrical Engineering and Computer Science, and a member of the Computer Science and Artificial Intelligence Laboratory. His research interests are in building practical secure systems, from operating systems and hardware to programming languages and security analysis tools. He received his PhD from Stanford University in 2008, where he developed HiStar, an operating system designed to minimize the amount of trusted code by controlling information flow. In 2005, he co-founded MokaFive, a company focused on improving desktop management and mobility using x86 virtualization. His current research is focused on formal verification of systems software.
15:45 - 16:20
Industrial concurrency specification for C/C++ and Nvidia GPUs
Modern computer systems have intricate memory hierarchies that violate the intuition of a global timeline of interleaved memory accesses. In these so-called relaxed-memory systems, it can be dangerous to use intuition, specifications are universally unreliable, and the outcome of testing is both wildly nondeterministic and dependent on hardware that is getting more permissive of odd behaviour with each generation. These complications pervade the whole system, and have led to bugs in language specifications, deployed processors, compilers, and vendor-endorsed programming idioms; it is clear that current engineering practice is severely lacking. I will describe my part of a sustained effort in the academic community to tackle these problems by applying a range of techniques, from exhaustive testing to mechanised formal specification and proof. I will focus on two cases of industrial impact: first, the formalisation, refinement and validation of the 2011/14 C and C++ concurrency design, leading to changes in the ratified international standard; and second, an engagement with Nvidia, where I have helped to define and validate the relaxed memory model of their next generation of graphics processors. This work represents an essential enabling step for verification on modern concurrent systems.
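The classic 'store buffering' litmus test, written here with C11 relaxed atomics (a minimal sketch for illustration; not drawn from the formalisation itself), shows how badly the interleaving intuition fails:

```c
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

/* Under any interleaving of the four accesses, at least one thread
   must observe the other's store, so r0 == 0 && r1 == 0 looks
   impossible. The C11 relaxed-memory model (and x86, Power and GPU
   hardware) nevertheless allows both loads to read 0. */
atomic_int x = 0, y = 0;
int r0, r1;

int thread0(void *arg) {
    (void)arg;
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    r0 = atomic_load_explicit(&y, memory_order_relaxed);
    return 0;
}

int thread1(void *arg) {
    (void)arg;
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    r1 = atomic_load_explicit(&x, memory_order_relaxed);
    return 0;
}

int main(void) {
    thrd_t t0, t1;
    thrd_create(&t0, thread0, NULL);
    thrd_create(&t1, thread1, NULL);
    thrd_join(t0, NULL);
    thrd_join(t1, NULL);
    printf("r0=%d r1=%d\n", r0, r1);   /* r0=0 r1=0 is a permitted outcome */
    return 0;
}
```

Formal memory models of the kind discussed in the talk exist precisely to say, unambiguously, which such outcomes an implementation may produce.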
Dr Mark Batty, University of Kent, UK
Mark Batty is an expert on language concurrency models. Prior to his work, C and C++ did not have a well-defined concurrency model; Batty worked with the ISO on the 2011 language revisions to define a rigorous formal model in tight correspondence with the standard. In the process, he identified major problems, proposing fixes that were adopted prior to ratification: a rare example of formal semantics directly impacting mainstream industrial practice. With others, Batty proved sound the proposed industry-standard compiler mappings from C++11 to x86 and Power, he proved (in HOL4) that simple race-free programs have SC behaviour (DRF-SC), a central tenet of the design, and he defined a sound principle for compositional reasoning in C++11. Despite these positive results, Batty showed that C/C++11 has fundamental problems: to admit compiler optimisations, the semantics has been made too weak. Batty's recent work focuses on GPGPU concurrency: he has identified bugs in a nascent AMD GPU design feature and worked with Nvidia on their internal specifications.
Chair
Professor Gordon Plotkin
After an undergraduate degree in Mathematics and Physics, obtained from Glasgow University, Gordon Plotkin obtained a doctorate in Artificial Intelligence from Edinburgh University in 1972. He then joined the faculty at Edinburgh, becoming a full professor in 1986. He is a Fellow of the Royal Society, a Fellow of the Royal Society of Edinburgh, and a member of Academia Europaea. He may be best known for his work on the operational semantics of programming languages, in particular for Structural Operational Semantics. He has also contributed to many other areas of the semantics and logic of programming languages, and, more recently, to simulation languages for systems biology. His prizes include the ACM Programming Languages Achievement Award and the Royal Society Milner Award.
09:00 - 09:35
Moving fast with software verification
This is a story of transporting ideas from theoretical research in reasoning about programs into the fast-moving engineering culture of Facebook. The context is that I landed at Facebook in September 2013, when we brought the Infer static analyser with us from the verification startup Monoidics. Infer is based on recent research in program analysis, which applied a relatively recent development in logics of programs, separation logic. Infer is deployed internally, running continuously to verify select properties of every code modification in Facebook’s mobile apps; these include the main Facebook apps for Android and iOS, Facebook Messenger, Instagram, and other apps which are used by over a billion people in total. This talk describes our experience deploying verification technology inside Facebook and some of the challenges we faced, draws out lessons learned, and speculates on prospects for broader impact of verification technology.
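The key rule that separation logic adds, and the one that makes Infer compositional enough to analyse every code change rather than whole programs, is the frame rule (shown schematically; the usual side condition on modified variables is omitted):

$$\frac{\{P\}\ c\ \{Q\}}{\{P * R\}\ c\ \{Q * R\}}$$

A proof about the small memory footprint $P$ that a command $c$ actually touches remains valid in any larger heap $P * R$, so each procedure can be analysed once, in isolation, and its summary reused at every call site; this is what lets the analysis run on just the code modified in a diff rather than the whole codebase.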
Professor Peter O'Hearn, Facebook and University College London, UK
Peter O’Hearn works as an Engineering Manager at Facebook with the Static Analysis Tools team, and as a Professor of Computer Science at University College London. Peter held academic positions at Syracuse and Queen Mary before UCL, and is a past recipient of a Royal Society Wolfson Research Merit Award, a Most Influential POPL Paper Award, and a Royal Academy of Engineering/Microsoft Research Chair. His research has been in the broad areas of programming languages and logic, ranging from new logics and mathematical models to industrial applications of program proof. With John Reynolds he developed separation logic, a theory which opened up new practical possibilities for program proof. In 2009 he cofounded a software verification start-up company, Monoidics Ltd, which was acquired by Facebook in 2013. The Facebook Infer program analyser, recently open-sourced, runs on every modification to the code of Facebook’s mobile apps, in a typical month issuing millions of calls to a custom separation logic theorem prover and resulting in hundreds of bugs being fixed before they reach production.
09:35 - 10:10
Understanding and verifying JavaScript programs
JavaScript is the most widely used language for client-side web applications. The dynamic nature of JavaScript, together with its complicated semantics, makes the understanding and development of JavaScript code notoriously difficult. Because of this, JavaScript programmers still have very little tool support for catching errors early in development, compared with the abundance of tools (such as IDEs and specialised static analysis tools) available for more traditional languages such as C and Java. The development of such analysis tools for JavaScript web applications is known to be a challenging task. In this talk, I will describe:
- JSCert, a Coq mechanised specification of JavaScript which is line-by-line close to the English standard. It has an accompanying reference interpreter, which is provably correct with respect to JSCert and tested using the official test suite.
- JSIL, an intermediate language for JavaScript with a semantics-driven compiler which has been proved correct for a significant fragment and properly tested. In future, we will develop JSIL front-ends for verification tools based on symbolic execution, such as CBMC used by Amazon, Infer developed at Facebook, and Rosette.
- JSVerify, a fledgling symbolic execution tool which has so far been used to verify JSIL implementations of the internal and built-in functions of JavaScript. In future, we will develop a substantial JavaScript verification tool, requiring a formal link between the JavaScript and JSIL program logics, a large engineering effort to make the verification tractable, and the verification of substantial program examples such as those from Google’s V8 array library.
Our ultimate goal is to develop a trusted service for understanding and verifying JavaScript programs.
Professor Philippa Gardner, Imperial College London, UK
Philippa Gardner is a professor in the Department of Computing at Imperial. Her current research focuses on program verification: in particular, reasoning about web programs (JavaScript and DOM) and reasoning about concurrent programs. She completed her PhD thesis, supervised by Professor Gordon Plotkin FRS at Edinburgh, in 1992. She moved to Cambridge in 1998 on an EPSRC Advanced Fellowship, hosted by Professor Robin Milner FRS. She obtained a lectureship at Imperial in 2001, and became professor in 2009. She held a Microsoft Research Cambridge/Royal Academy of Engineering Senior Fellowship from 2005 to 2010 at Imperial. She is the Director of the Research Institute in Automated Program Analysis and Verification, funded by GCHQ in association with EPSRC.
10:40 - 11:15
Avoiding fatal flaws with formal methods
Formal methods are once again in vogue as a tool for building software systems that can resist attacks. The approach, however, has disappointed before. What might the future hold? We discuss the reasons for those past failures, and we explore what has changed (as well as what hasn't).
Professor Fred B. Schneider, Cornell University, USA
Fred B. Schneider is Samuel B. Eckert Professor of Computer Science at Cornell University and chair of the department. Schneider's research has focused on various aspects of trustworthy systems: systems that will perform as expected, despite failures and attacks. Schneider was named Professor-at-Large at the University of Tromsø (Norway) in 1996 and was awarded a Doctor of Science honoris causa by the University of Newcastle-upon-Tyne in 2003 for his work in computer dependability and security. He received the 2012 IEEE Emanuel R. Piore Award for ‘contributions to trustworthy computing through novel approaches to security, fault-tolerance and formal methods for concurrent and distributed systems’. The U.S. National Academy of Engineering elected Schneider to membership in 2011, and the Norges Tekniske Vitenskapsakademi (Norwegian Academy of Technological Sciences) named him a foreign member in 2010.
11:15 - 11:50
A rigorous approach to consistency in cloud databases
Modern databases underlying large-scale cloud services guarantee immediate availability and tolerate network failures at the expense of providing only weak data consistency guarantees. Unfortunately, we currently lack sophisticated theory and tools that would help programmers in using such databases correctly despite the effects of weak consistency. I will present our ongoing work that uses formal techniques to study how weak consistency in cloud databases affects the correctness of applications using them. Building on it, we aim to improve the programmability and performance of cloud databases.
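To make 'weak consistency' concrete, here is a toy illustration (our sketch, not an example from the talk) of a classic anomaly: two replicas accept concurrent updates and reconcile with a last-writer-wins rule, silently losing one of them:

```c
#include <stdio.h>

/* A replica holds a value plus the timestamp of the write that produced it. */
typedef struct { int value; int timestamp; } replica_t;

/* Last-writer-wins reconciliation: keep the write with the later timestamp. */
static void merge(replica_t *dst, const replica_t *src)
{
    if (src->timestamp > dst->timestamp)
        *dst = *src;
}

int main(void)
{
    replica_t a = {100, 0}, b = {100, 0};   /* an account balance of 100 */

    a.value += 20; a.timestamp = 1;   /* client 1 deposits 20 at replica A */
    b.value += 30; b.timestamp = 2;   /* client 2 deposits 30 at replica B */

    merge(&a, &b);                    /* replicas later exchange state */
    merge(&b, &a);

    /* Both replicas now read 130, not 150: client 1's deposit is lost. */
    printf("A=%d B=%d\n", a.value, b.value);
    return 0;
}
```

Formal techniques of the kind described in the talk aim to tell programmers when an application can tolerate such anomalies and when it needs stronger, more expensive consistency.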
Dr Alexey Gotsman, IMDEA Software Institute, Spain
Alexey Gotsman is an Assistant Research Professor at the IMDEA Software Institute in Madrid, Spain. Before joining IMDEA, he was a postdoctoral fellow at the University of Cambridge, where he also received his PhD. Alexey's research interests are in programming models and verification techniques for concurrent and distributed software.
Chair
Professor David Sands, Chalmers University of Technology
David Sands (PhD, Imperial College, 1990) is Professor of Computer Science (since 2001) at Chalmers University of Technology in Sweden. His main research areas are programming languages, computer security, and their combination. He founded the ProSec research group (Programming Language Based Methods for Security) at Chalmers at the turn of the millennium, and was awarded the prestigious Individual Grant for the Advancement of Research Leaders from the Swedish Foundation for Strategic Research. He is known for his work on the foundations of information-flow security, and more recently has worked on the development of security-policy-compliant programming languages. Sands served as chairman of the Computer Science evaluation panel of the Science Council of Sweden (2011-2013), and now serves as an elected member of its scientific board. He is one of four PIs of the Wallenberg Autonomous Systems Program (WASP), the largest single research grant ever awarded in Sweden.
13:30 - 14:05
imPACT: Privacy, Accountability, Compliance and Trust in tomorrow's internet
14:05 - 14:40
Combining static analysis and machine learning for industrial-quality information-flow-security enforcement
Security auditing of industry-scale software systems mandates automation. Static taint analysis enables deep and exhaustive tracking of suspicious data flows for detection of potential leakage and integrity violations, such as cross-site scripting (XSS), SQL injection (SQLi) and log forging. Research in this area has taken two directions: program slicing and type systems. Both of these approaches suffer from a high rate of false findings, which limits the usability of analysis tools. Attempts to reduce the number of false findings have resulted in analyses that are either unsound (suffering from the dual problem of false negatives) or too expensive (due to their high precision, thereby failing to scale to real-world applications). We investigate a novel approach that combines static analysis and machine learning to increase analysis scalability while reducing the number of false positives. From a static-analysis perspective, the key observation is that taint analysis is a demand-driven problem, which enables lazy and scalable computation of vulnerable information flows. Even with the analysis scaled to large codebases, however, the user is still left to review thousands of warnings and classify them as either actionable or spurious. This is both burdensome and error-prone, leaving developers disenchanted. We address this challenge by applying statistical learning to the warnings output by the analysis, based on user feedback on a small set of warnings. This leads to an interactive solution, whereby the user classifies a small fragment of the issues reported by the analysis, and the learning algorithm then classifies the remaining warnings automatically.
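As a minimal sketch of what a taint analysis tracks (illustrative only; not IBM's tool, and in C rather than the web languages named above), consider attacker-controlled input flowing unsanitised from a source to a security-sensitive sink; run_query is a hypothetical stand-in for a real database API:

```c
#include <stdio.h>

/* Hypothetical stand-in for a database call: the sensitive sink. */
static void run_query(const char *q) { printf("executing: %s\n", q); }

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    const char *name = argv[1];     /* SOURCE: attacker-controlled input */
    char query[256];
    snprintf(query, sizeof query,
             "SELECT * FROM users WHERE name='%s'", name);  /* taint propagates */
    run_query(query);               /* SINK: SQL injection if name contains,
                                       say, ' OR '1'='1 */
    return 0;
}
```

A demand-driven analysis starts from sinks like run_query and works backwards, computing only the flows that could reach them; the machine-learning step then ranks or filters the resulting warnings using the auditor's feedback.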
Marco Pistoia, IBM Thomas J. Watson Research Center, USA
Marco Pistoia, Ph.D., is a Distinguished Researcher and Senior Manager at the IBM Thomas J. Watson Research Center in New York. In January 2010, he was bestowed the title of IBM Master Inventor. He is the inventor of 115 patents and 171 patent applications. Dr Pistoia received a Ph.D. in Mathematics from New York University in 2005. He has lectured at numerous universities worldwide, written 10 books, and published extensively on various aspects of program analysis, language-based security and mobile computing. In the course of his career, he has received three ACM Distinguished Paper Awards (2007, 2011 and 2014). In 2007, the Italian Ministry of Education, University and Research, the National Committee of the Italian Presidents of Faculties of Sciences and Technologies, and Confindustria (Italy's leading organization representing Italian manufacturing and service companies) named Pistoia one of the 70 most successful mathematicians to have graduated from an Italian university between 1980 and 2000.
15:10 - 15:45
Trust in programming tools: the formal verification of compilers and static analysers
The formal verification of software has made great progress since 2000, resulting in verification tools that establish crucial safety and security properties of software and are beginning to be used in industry. However, the confidence we can have in the results of these tools is limited by two risks: unsoundness of verification tools, which could fail to account for all behaviours of the analysed program; and miscompilation of the verified program, causing incorrect executable code to be produced from a correct program. Those risks can be ruled out by applying program proof to the verification tools and to compilers themselves. In this talk I will briefly present two such formally-verified tools developed at Inria: the CompCert C compiler and the Verasco static analyser based on abstract interpretation. I will conclude with some future directions and challenges in this line of work.
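Roughly (a paraphrase, not the exact Coq statement), CompCert's semantic-preservation theorem has the form:

$$\forall p,\, p'.\ \ \mathit{compile}(p) = \mathrm{OK}(p') \;\Rightarrow\; \mathit{behaviours}(p') \subseteq \mathit{behaviours}(p)$$

Every observable behaviour of the generated code is one the source program was allowed to have, so for a program with no undefined behaviour the compiler can introduce no bugs of its own; a verified static analyser such as Verasco plays the complementary role of establishing that the source has no undefined behaviour in the first place.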
Dr Xavier Leroy, Inria Paris, France
Xavier Leroy is a senior research scientist at Inria, where he leads the Gallium research team. His research focuses on programming languages and tools, and on the formal verification of software using program proof and static analysis. He is the architect and one of the main developers of the OCaml functional programming language and of the CompCert formally-verified C compiler.
15:45 - 16:20
Mind the gap: from crypto to code
The field of cryptography has led the way in casting questions of security and privacy into a mathematical science. But there is still a large gap between the paper-and-pencil proofs of security done by cryptographers and the code that is intended to realize them. I will discuss recent advances in formulating machine-checked proofs of security, not only for protocols but also for the actual code.
Professor Greg Morrisett, Cornell University, USA
Greg Morrisett is the Dean of Computing and Information Sciences (CIS) at Cornell University, which houses the departments of Computer Science, Information Science, and Statistical Sciences. Previously, he held the Allen B. Cutting Chair in Computer Science at Harvard University from 2004 to 2015. At Harvard, he also served as the Associate Dean for Computer Science and Electrical Engineering and as the Director of the Center for Research on Computation and Society. Before Harvard, Morrisett spent eight years on the faculty of Cornell's Computer Science Department. He received his bachelor's degree from the University of Richmond and both his Master's and Doctorate degrees from Carnegie Mellon University. Professor Morrisett's research focuses on the application of programming language technology for building secure, reliable, and high-performance software systems. A common theme is the focus on systems-level languages and tools that can help detect or prevent common vulnerabilities in software. Past examples include typed assembly language, proof-carrying code, software fault isolation, and control-flow isolation. More recently, his research has focused on building provably correct and secure software, including cryptographic schemes, machine learning, and compilers. Morrisett is a Fellow of the ACM and has received a number of awards for his research on programming languages, type systems, and software security, including a Presidential Early Career Award for Scientists and Engineers, an IBM Faculty Fellowship, an NSF Career Award, and an Alfred P. Sloan Fellowship. He served as Chief Editor for the Journal of Functional Programming and as an associate editor for ACM Transactions on Programming Languages and Systems, Information Processing Letters, and The Journal of the ACM. He currently serves as co-editor-in-chief for the Research Highlights column of Communications of the ACM. In addition, Morrisett has served on the DARPA Information Science and Technology Study (ISAT) Group, the NSF Computer and Information Science and Engineering (CISE) Advisory Council, Microsoft Research's Technical Advisory Board, Microsoft's Trustworthy Computing Academic Advisory Board, and the Fortify Technical Advisory Board.