Floating-Point Analysis Research

This page is a compilation of research efforts and software tools devoted to program analysis of floating-point code. This research area draws people from many different backgrounds: programming languages (PL), high-performance computing, numerical analysis, and systems software. With this page, I hope to help form connections between people and projects that might otherwise never encounter each other. A similar list is located here, and there was recently a Dagstuhl seminar where many of the authors below met to discuss issues related to the field of floating-point analysis.

If you are doing research or software development in this area, please let me know and I will be happy to add you and your project. Alternatively, if you are the author of a project already listed here and have updates or more details you'd like to add, please send them to me. Finally, if you encounter a broken link, please report it and I will fix it as soon as possible.


I also maintain a curated BibTeX file with most of the papers listed below. There are still a few missing; if you have a BibTeX entry that I'm missing please send it to me!

I also have a set of slides from the SC'17 BoF session on improving numeric computation.


AMP: Automated Mixed Precision

AMP is a tool developed by Ralph Nathan and Daniel Sorin (Duke University) for mixed-precision analysis. It begins with a single-precision version of a program and uses a variety of metrics and heuristics to gradually raise certain portions of the program to higher precision in order to achieve the desired accuracy in the outputs.
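As a sketch of the kind of tradeoff AMP explores, the snippet below simulates single precision in pure Python (the `to_f32` helper is an illustrative assumption; AMP itself analyzes real programs) and shows how raising just the accumulator to double precision improves a summation:

```python
import struct

def to_f32(x):
    # Round a Python double to the nearest IEEE single-precision value
    # (illustrative stand-in for actual float32 hardware operations).
    return struct.unpack('f', struct.pack('f', x))[0]

def sum_single(xs):
    # every operation rounded to single precision
    acc = 0.0
    for x in xs:
        acc = to_f32(acc + to_f32(x))
    return acc

def sum_mixed(xs):
    # mixed precision: inputs stay single, but the accumulator is raised
    # to double precision -- the kind of change a tool like AMP searches for
    acc = 0.0
    for x in xs:
        acc += to_f32(x)
    return acc

xs = [0.1] * 100000
print(sum_single(xs), sum_mixed(xs))  # the mixed version is far closer to 10000
```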

CRAFT: Configurable Runtime Analysis for Floating-Point Tuning

CRAFT was my own dissertation project harnessing the Dyninst toolkit to perform various dynamic floating-point analyses, including cancellation detection, dynamic range tracking, and automated mixed-precision analysis. CRAFT can perform an automated search of a program's instruction space, determining the level of precision necessary in the result of each instruction to pass a user-provided verification routine assuming all other operations are done in double precision. This gives a general overview of the approximate precision requirements of the program, providing insights and guidance for mixed-precision implementation.
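To illustrate what cancellation detection looks for, here is a small sketch (a hypothetical simplification of the heuristic, not CRAFT's actual implementation) that estimates how many significand bits are lost in an addition by comparing operand and result exponents:

```python
import math

def cancellation_bits(a, b):
    """Estimate how many significand bits are cancelled when computing a + b.

    Cancellation shows up when the result's binary exponent is much
    smaller than the larger operand's exponent.
    """
    result = a + b
    if result == 0.0:
        return 53  # total cancellation of a double's 53-bit significand
    max_exp = max(math.frexp(a)[1], math.frexp(b)[1])
    res_exp = math.frexp(result)[1]
    return max(0, max_exp - res_exp)

# Subtracting nearly equal values cancels most of the significand:
print(cancellation_bits(1.0000001, -1.0))  # large loss
print(cancellation_bits(1.0, 2.0))         # no loss
```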

There is a branch of CRAFT by Andrew Shewmaker (OpenEye Scientific, formerly LANL) that adds a new analysis mode for extracting histograms of all floating-point values encountered during a program run. He has also been building a variant of this tool based on SHVAL (described below).


Herbie / Herbgrind

A group at the University of Washington is working on two analysis tools. The first is Herbie, a tool that performs random sampling alongside arbitrary precision to identify code modifications that will improve accuracy. It also provides a web interface.
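The kind of rewrite Herbie discovers can be sketched in a few lines: for large x, sqrt(x+1) - sqrt(x) cancels catastrophically, while the algebraically equivalent 1/(sqrt(x+1) + sqrt(x)) does not. The snippet below (an illustration of the approach, not Herbie's code) samples random inputs and scores both forms against a high-precision oracle:

```python
import math
import random
from decimal import Decimal, getcontext

getcontext().prec = 50  # high-precision "ground truth" oracle

def naive(x):
    return math.sqrt(x + 1) - math.sqrt(x)

def rewritten(x):
    # algebraically equivalent, but avoids subtracting nearly equal values
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

def oracle(x):
    d = Decimal(x)
    return (d + 1).sqrt() - d.sqrt()

# Sample the input space at random and compare each form's relative error.
random.seed(0)
for x in (random.uniform(1e10, 1e12) for _ in range(3)):
    exact = oracle(x)
    err_naive = abs((Decimal(naive(x)) - exact) / exact)
    err_rewritten = abs((Decimal(rewritten(x)) - exact) / exact)
    print(f"x={x:.3g}  naive={err_naive:.1e}  rewritten={err_rewritten:.1e}")
```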

The second tool is Herbgrind, a shadow value analysis tool built on top of Valgrind, which tracks dependencies in numeric code to help identify root causes of floating-point issues.


Precimonious

Precimonious is a tool written by Cindy Rubio González (currently UC Davis) that leverages the LLVM framework to tweak variable declarations in order to build and prototype mixed-precision configurations.

More recent work uses shadow value analysis to speed up the search.

SHVAL: SHadow Value Analysis Library

A more recent project of mine, SHVAL uses the Intel Pin instrumentation framework to perform full heavyweight shadow-value analysis for floating-point arithmetic, effectively allowing developers to simulate any real-numbered representation for which a plugin can be developed.
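The core idea can be sketched in a few lines of Python (an illustration only; SHVAL works on unmodified x86 binaries): carry a high-precision shadow alongside every native double and compare the two on demand:

```python
from decimal import Decimal, getcontext

getcontext().prec = 40  # precision of the shadow representation

class Shadow:
    """Pair a hardware double with a high-precision shadow value.

    A minimal sketch of shadow-value analysis: every operation runs both
    in native floating point and in the shadow representation, so the
    accumulated rounding error can be queried at any point. (SHVAL does
    this at the binary level via Pin, with pluggable shadow types.)
    """
    def __init__(self, value):
        self.fp = float(value)
        self.shadow = Decimal(value)

    @classmethod
    def _wrap(cls, fp, shadow):
        s = cls.__new__(cls)
        s.fp, s.shadow = fp, shadow
        return s

    def __add__(self, other):
        return Shadow._wrap(self.fp + other.fp, self.shadow + other.shadow)

    def __mul__(self, other):
        return Shadow._wrap(self.fp * other.fp, self.shadow * other.shadow)

    def error(self):
        # difference between the native double and its shadow
        return abs(Decimal(self.fp) - self.shadow)

# Summing 0.1 ten times: the double accumulates rounding error,
# and the shadow makes that error observable.
total = Shadow(0.0)
tenth = Shadow(0.1)
for _ in range(10):
    total = total + tenth
print(total.fp, float(total.error()))
```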

This work was extended by Ramy Medhat (U. of Waterloo) to perform per-address or per-instruction floating-point error analysis. Using this framework, he developed several mixed-precision versions of an image processing library, demonstrating how the tool can assist with making performance, energy, and accuracy tradeoffs in the embedded domain.

  • Paper: "Managing the Performance/Error Tradeoff of Floating-point Intensive Applications" (To appear, EMSOFT'17)
  • Source: github.com/ramymedhat/shval


STOKE

STOKE is a general stochastic optimization tool from Stanford that has been extended to handle floating-point computation.


Verificarlo

Verificarlo is a tool written by Pablo de Oliveira Castro (Université de Versailles St-Quentin-en-Yvelines), Eric Petit, and Christophe Denis (ENS Cachan). It leverages the LLVM framework for automated Monte Carlo Arithmetic analysis.


cohd / syhd

Laurent Thévenoux at LIP, ENS de Lyon has written tools for source-to-source error compensation and synthesis of floating-point code. The source code is not yet available.


A group from CERTH, the University of Thessaly, and RWTH Aachen has written a significance-analysis system using a combination of interval arithmetic and automatic differentiation. The software does not yet appear to be publicly available.


Floating-point stability analysis via SMT solving and theorem proving.

FPTaylor / FPTuner

A group at the University of Utah describes a way of using partial derivatives and symbolic Taylor series expansions to bound floating-point roundoff error; this achieves much tighter bounds than interval/affine/SMT methods.
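As a toy illustration of the idea (not FPTaylor itself, which handles arbitrary expressions and rigorously bounds the second-order remainder), the following sketch computes a first-order roundoff bound for f(x, y, z) = (x + y) * z over an input box; evaluating only the corners suffices here because the expression is multilinear:

```python
# First-order symbolic-Taylor roundoff bound for f(x, y, z) = (x + y) * z
# (a hypothetical simplification of the FPTaylor approach).
EPS = 2.0 ** -53  # unit roundoff for IEEE double precision

def taylor_bound(x_iv, y_iv, z_iv):
    # fl(fl(x+y)*z) = (x+y)*z * (1+e1)*(1+e2) with |e1|, |e2| <= EPS.
    # To first order the error is (x+y)*z * (e1 + e2), so bound
    # max |(x+y)*z| over the input box and multiply by 2*EPS.
    # (x+y)*z is multilinear, so its extremes occur at box corners.
    corners = [(x, y, z) for x in x_iv for y in y_iv for z in z_iv]
    peak = max(abs((x + y) * z) for x, y, z in corners)
    return 2 * EPS * peak

# Bound the roundoff error for x, y, z all in [1, 2]:
print(taylor_bound((1.0, 2.0), (1.0, 2.0), (1.0, 2.0)))
```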

They have also built a mixed-precision tool for Python code based on this technique. Note that it also requires the commercial Gurobi Optimizer.


Tool for interval arithmetic.


LifeJacket

LifeJacket uses the Alive optimization verifier to verify floating-point optimizations in LLVM.


PRECiSA

PRECiSA is a tool from the NIA and NASA that calculates symbolic error bounds using the PVS language. The authors also provide a web interface.


Victor Magron and others present a framework for providing upper bounds on absolute roundoff errors using semidefinite programming and sums of squares certificates. The results can be checked using the Coq theorem prover.

Rosa / Daisy / Anton

Rosa is a source-to-source translator written by Eva Darulova (now at MPI-SWS, previously EPFL) for the Scala language. It uses an SMT solver to annotate a program with mixed-precision types.

Eva Darulova also has another Scala project called Daisy that combines various types of static and dynamic analysis of floating-point code, and is currently a work-in-progress. Eva also has a paper with Anastasiia Izycheva on computing sound relative errors. They find that this can produce much tighter error bounds than analyzing absolute error, and an implementation is included in Daisy.

Finally, Eva has a paper with Einar Horn and Saksham Sharma on mixed-precision tuning in Scala and C. The software for this project does not appear to have been released yet.


Salsa

Salsa is an abstract-interpretation-based floating-point analysis tool that suggests transformations to minimize floating-point error.



Astrée

Astrée is a commercial static analysis tool for proving the absence of run-time errors in C code, including errors in floating-point computation.


Fluctuat

Fluctuat is a commercial, abstract-interpretation-based static analysis tool for analyzing numerical code in C or Ada. It was originally written by Eric Goubault, Matthieu Martel, and Sylvie Putot.



CompCert

CompCert is a compiler that has been formally verified to preserve floating-point semantics using Coq.

FLiT: Floating-point Litmus Tests

FLiT is a test infrastructure for detecting variability in floating-point code due to differences in compilers and execution environments. It is currently developed and maintained by Michael Bentley at the University of Utah.


FPBench

FPBench is a community benchmark project for floating-point examples.


Symbolic execution engine for LLVM with a benchmark suite of example floating-point programs. There is also a fork that supports reasoning over floating-point constraints.


S3FP

Another project from the University of Utah (see FPTaylor and FPTuner above), S3FP is a tool for finding floating-point inputs that maximize error.

SOAP (Structural Optimisation of Arithmetic Programs)

SOAP is a tool for optimizing FPGA-based implementations of floating-point code.

Unum / Posit arithmetic

Unums ("universal numbers") are a novel binary representation for real numbers, proposed by John Gustafson, that exactly quantifies the uncertainty of a stored value. The format extends IEEE floating point by making both the exponent and significand fields variable-sized, and it adds an "exact" bit that is set if the number is exact and unset if the number represents a range between representable numbers. This provides a way to obtain mathematically rigorous results in numeric code at the cost of computational efficiency.

Posits are the newest iteration of Gustafson's unum alternative to IEEE real-number representation. Posits have a fixed overall width, but the number of exponent and significand bits varies according to a run-length-encoded "regime" field. They provide higher accuracy and a greater dynamic range than identically-sized IEEE formats.
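To make the regime encoding concrete, here is a sketch of a decoder for small posits. It assumes the original es = 0 variant of posit-8 (the 2022 posit standard fixes es = 2 for all widths), and `decode_posit` is a hypothetical helper, not code from any project above:

```python
def decode_posit(bits, n=8, es=0):
    """Decode an n-bit posit bit pattern into a Python float."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float('nan')  # NaR ("not a real")
    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:
        bits = (-bits) & mask  # negative posits use two's complement
    # Regime: run length of identical bits after the sign bit
    first = (bits >> (n - 2)) & 1
    run, i = 0, n - 2
    while i >= 0 and ((bits >> i) & 1) == first:
        run += 1
        i -= 1
    k = run - 1 if first else -run
    i -= 1  # skip the terminating (opposite) bit, if any
    # Exponent: up to `es` bits (none when es = 0)
    exp = 0
    for _ in range(es):
        exp <<= 1
        if i >= 0:
            exp |= (bits >> i) & 1
            i -= 1
    # Fraction: remaining bits, with an implicit leading 1
    frac, scale = 1.0, 0.5
    while i >= 0:
        frac += scale * ((bits >> i) & 1)
        scale /= 2
        i -= 1
    return sign * 2.0 ** (k * (1 << es) + exp) * frac

# A few 8-bit examples: longer regime runs reach larger magnitudes.
for code in (0b01000000, 0b01010000, 0b00100000, 0b11000000, 0b01111111):
    print(f"{code:08b} -> {decode_posit(code)}")
```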


VFLOAT

VFLOAT is a variable-precision floating-point library for FPGAs, developed and maintained by Miriam Leeser at Northeastern University.