Below are some of the ongoing projects in our group. Most involve a combination of theory (e.g. program analysis, synthesis) and practice (e.g. concurrent algorithms, code discovery).
If you are interested in finding out more, please drop us a line.
S3: Statistical Software Synthesis
In this project we investigate new data-driven directions for automating programming tasks by combining program analysis with statistical and search techniques (e.g. language models, A* search). The basic idea is to learn from examples (for instance, from the massive amount of programming data already available on the Web, or from examples provided by the user) in order to perform various tasks, such as automated code completion, code refactoring, translation between programming languages, and automated tutoring.
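To make the "learn from examples" idea concrete, here is a minimal sketch of the statistical intuition behind code completion: train a toy bigram model on tokenized programs and suggest the most frequent next token. All names and the tiny corpus below are hypothetical; real systems train far richer models on millions of programs.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count successor frequencies for each token (a toy 'language model')."""
    model = defaultdict(Counter)
    for tokens in corpus:
        for prev, nxt in zip(tokens, tokens[1:]):
            model[prev][nxt] += 1
    return model

def complete(model, prev_token):
    """Suggest the most frequent token observed after prev_token."""
    if prev_token not in model:
        return None
    return model[prev_token].most_common(1)[0][0]

# Hypothetical tokenized corpus standing in for Web-scale training data.
corpus = [
    ["for", "i", "in", "range", "(", "n", ")", ":"],
    ["for", "x", "in", "items", ":"],
    ["if", "x", "in", "items", ":"],
]
model = train_bigram(corpus)
print(complete(model, "in"))  # → items ('in' is followed by 'items' most often)
```

The same frequency-based ranking, scaled up and combined with program analysis to filter out suggestions that would not even compile, is the starting point for statistical completion engines.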
EventRacer: Analysis of Event-Driven Applications
try it out: http://eventracer.org
Many end-user applications such as web sites and smartphone apps are written in an event-driven style. Unfortunately, these applications can exhibit erroneous behaviors fundamentally caused by mishandling the asynchrony that arises from the application's interaction with the external world (e.g. user clicks, network interrupts).
In this project, we develop new program analysis techniques (based on dynamic analysis and graph algorithms) that can detect and correct such issues automatically in real-world applications (e.g. Google Maps).
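The graph-based core of such an analysis can be sketched in miniature. Assuming a trace of memory accesses by event handlers and a set of happens-before edges between them (e.g. "handler 0 posted handler 2"), two conflicting accesses to the same variable race if neither is ordered before the other in the transitive closure. This is a simplified illustration, not EventRacer's actual implementation.

```python
from itertools import combinations

def reachable(edges, n):
    """Transitive closure of the happens-before edges over events 0..n-1."""
    hb = [[False] * n for _ in range(n)]
    for a, b in edges:
        hb[a][b] = True
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if hb[i][k] and hb[k][j]:
                    hb[i][j] = True
    return hb

def races(accesses, edges):
    """Report conflicting same-variable accesses unordered by happens-before."""
    n = len(accesses)
    hb = reachable(edges, n)
    found = []
    for i, j in combinations(range(n), 2):
        vi, wi = accesses[i]  # (variable, is_write)
        vj, wj = accesses[j]
        if vi == vj and (wi or wj) and not hb[i][j] and not hb[j][i]:
            found.append((i, j))
    return found

# Three handlers: 0 writes x, 1 reads x, 2 writes y; only 0 -> 2 is ordered.
accesses = [("x", True), ("x", False), ("y", True)]
edges = [(0, 2)]
print(races(accesses, edges))  # → [(0, 1)]: the write and read of x are unordered
```

Real event-driven traces contain millions of events, so EventRacer-style tools rely on much more efficient graph algorithms than the cubic closure above, but the race condition being checked is the same.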
The project is a collaboration between ETH Zurich and Samsung Research.
Fender: Programming with Relaxed Memory Models
more details: http://practicalsynthesis.org/fender/
To improve performance, virtually all modern processors (e.g. Intel) support relaxed memory models. Unfortunately, programming against these relaxed models is difficult and error-prone, yet exploiting them is highly desirable for systems software (e.g. the Linux kernel, concurrent algorithms).
In this project, we develop new automated program analysis (e.g. predicate abstraction, abstract interpretation, model checking) and synthesis techniques that enable us to build systems software that exploits these chips correctly and efficiently.
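The kind of behavior such analyses must reason about is captured by the classic store-buffering litmus test: thread 0 runs `x = 1; r0 = y` while thread 1 runs `y = 1; r1 = x`. Under sequential consistency both registers cannot end up 0, but with per-thread store buffers (as on x86-TSO-like machines) they can. The sketch below is a tiny explicit-state exploration illustrating the idea; it is an assumption-laden toy, not Fender's analysis.

```python
def explore(tso=True):
    """Enumerate final (r0, r1) values of the store-buffering litmus test:
       Thread 0: x = 1; r0 = y      Thread 1: y = 1; r1 = x
    With tso=True, stores go into a per-thread buffer that flushes later."""
    results = set()
    # state: (pc0, pc1, buf0, buf1, mem, r0, r1); buffers are (var, val) tuples
    init = (0, 0, (), (), (("x", 0), ("y", 0)), None, None)
    stack, seen = [init], set()
    while stack:
        st = stack.pop()
        if st in seen:
            continue
        seen.add(st)
        pc0, pc1, buf0, buf1, mem, r0, r1 = st
        memd = dict(mem)
        if pc0 == 2 and pc1 == 2 and not buf0 and not buf1:
            results.add((r0, r1))
            continue
        succs = []
        if pc0 == 0:  # thread 0: x = 1
            if tso:
                succs.append((1, pc1, buf0 + (("x", 1),), buf1, mem, r0, r1))
            else:
                m = dict(memd); m["x"] = 1
                succs.append((1, pc1, buf0, buf1, tuple(sorted(m.items())), r0, r1))
        elif pc0 == 1:  # thread 0: r0 = y (reads its own buffer first)
            val = next((v for var, v in reversed(buf0) if var == "y"), memd["y"])
            succs.append((2, pc1, buf0, buf1, mem, val, r1))
        if pc1 == 0:  # thread 1: y = 1
            if tso:
                succs.append((pc0, 1, buf0, buf1 + (("y", 1),), mem, r0, r1))
            else:
                m = dict(memd); m["y"] = 1
                succs.append((pc0, 1, buf0, buf1, tuple(sorted(m.items())), r0, r1))
        elif pc1 == 1:  # thread 1: r1 = x
            val = next((v for var, v in reversed(buf1) if var == "x"), memd["x"])
            succs.append((pc0, 2, buf0, buf1, mem, r0, val))
        for who, buf in ((0, buf0), (1, buf1)):  # nondeterministic buffer flush
            if buf:
                var, v = buf[0]
                m = dict(memd); m[var] = v
                nm = tuple(sorted(m.items()))
                if who == 0:
                    succs.append((pc0, pc1, buf[1:], buf1, nm, r0, r1))
                else:
                    succs.append((pc0, pc1, buf0, buf[1:], nm, r0, r1))
        stack.extend(succs)
    return results

print((0, 0) in explore(tso=False))  # False: forbidden under sequential consistency
print((0, 0) in explore(tso=True))   # True: store buffering allows r0 = r1 = 0
```

A fence-inference tool works against exactly this gap: it searches for the cheapest placement of fences (forced buffer flushes) that removes the relaxed-only outcomes the specification forbids.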
The project is a collaboration between ETH Zurich and the Technion.
Analysis of Structured Parallel Programs
Programming languages are increasingly adopting some form of structured parallelism in order to manage the complexity of writing parallel programs (e.g. X10, Cilk, Scala, F#, GPU CUDA kernels, etc). In this project we develop program analysis algorithms that analyze such structured programs. These program analyzers aim to leverage the structure in order to obtain better space and time complexity than analyzers for unstructured parallelism.
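One example of exploitable structure: in series-parallel programs (the shape produced by constructs like X10's finish/async or Cilk's spawn/sync), whether two tasks may run in parallel is decided by their lowest common ancestor in the series-parallel tree, with no general graph reachability needed. The `Seq`/`Par` classes below are a hypothetical miniature, not any particular analyzer's representation.

```python
class Seq:
    """Sequential composition: children run one after another."""
    def __init__(self, *children): self.children = children

class Par:
    """Parallel composition: children may run concurrently (e.g. finish/async)."""
    def __init__(self, *children): self.children = children

def may_happen_in_parallel(root, a, b):
    """Two leaf tasks may run in parallel iff their lowest common
    ancestor in the series-parallel tree is a Par node."""
    def path(node, leaf, acc):
        if node is leaf:
            return acc + [node]
        if isinstance(node, (Seq, Par)):
            for c in node.children:
                p = path(c, leaf, acc + [node])
                if p:
                    return p
        return None
    pa, pb = path(root, a, []), path(root, b, [])
    lca = None
    for x, y in zip(pa, pb):
        if x is y:
            lca = x
        else:
            break
    return isinstance(lca, Par)

# finish { async t1; t2 }; t3  —  t1 may run in parallel with t2, both before t3
t1, t2, t3 = "t1", "t2", "t3"
prog = Seq(Par(t1, t2), t3)
print(may_happen_in_parallel(prog, t1, t2))  # → True
print(may_happen_in_parallel(prog, t1, t3))  # → False
```

For unstructured task graphs the same query degenerates to reachability over an arbitrary DAG, which is exactly the space and time cost that structure-aware analyzers avoid.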
Practical Lower Bounds for Concurrent Algorithms
Concurrent programming involves a delicate balance between correctness and efficiency: using too much synchronization leads to inefficiency, using too little produces incorrect programs. Here, we define a wide class of concurrent programs and prove that any program in this class must use certain synchronization, or it is incorrect. These results are particularly important when it comes to libraries (e.g. Java collections), systems programming (e.g. the Linux kernel) and modern processor architectures (e.g. PowerPC, Intel, ARM).
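The "too little synchronization" half of the trade-off can be seen by exhaustively enumerating interleavings of two unsynchronized increments; some interleaving loses an update, so the program is incorrect without stronger primitives (a lock or an atomic read-modify-write). This is only an illustration of the trade-off, not the lower-bound proof itself.

```python
from itertools import permutations

def final_counts():
    """Enumerate all interleavings of two threads each running the
    non-atomic sequence: tmp = c; c = tmp + 1   (starting from c = 0)."""
    results = set()
    steps = [("load", 0), ("store", 0), ("load", 1), ("store", 1)]
    for order in set(permutations(steps)):
        # keep only orders where each thread's load precedes its own store
        if order.index(("load", 0)) > order.index(("store", 0)):
            continue
        if order.index(("load", 1)) > order.index(("store", 1)):
            continue
        c, tmp = 0, {}
        for op, tid in order:
            if op == "load":
                tmp[tid] = c
            else:
                c = tmp[tid] + 1
        results.add(c)
    return results

print(final_counts())  # → {1, 2}: outcome 1 is a lost update
```

The lower-bound results go further: they show that for a broad class of specifications, every correct implementation must pay for an expensive synchronization pattern (such as an atomic read-modify-write or a store-load fence) on some execution, no matter how clever the algorithm.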
Papers: POPL'11. Talk: PDF, PPT. Linux Weekly News features the result.
Paraglide: Synthesis of Concurrent Systems Software
In this project, we explored new directions for computer-assisted construction of concurrent systems. The work ranged from new techniques for automatic synchronization inference based on semantic search, to verification-driven synthesis combining program analysis and inference, to synthesis of concurrent algorithms such as concurrent data structures. Using the techniques developed here, we managed to synthesize new concurrent set and memory management algorithms which surprised even algorithmic experts. This project won the A-level Outstanding Project Award at IBM Research.
Quality Virtual Machine
The goal of this project was to scale important dynamic program analysis algorithms (e.g. resource usage) by integrating the analysis with the language runtime (i.e., the virtual machine). Making the runtime aware of the analysis enables various performance optimizations: fine-grained feedback-directed control of the analysis overhead, efficient snapshot algorithms, and more. Among other things, we extended a real-world Java VM (IBM's J9) with various analyzers as well as techniques to control the overhead (the resulting system was called QVM). Using QVM, we were able to analyze massive applications (without any changes to the application) and find and repair errors that would have been very difficult or impossible to detect otherwise.
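The essence of feedback-directed overhead control can be sketched as a small controller: measure how much time the analysis is consuming relative to the application, and adapt the sampling rate to track a user-specified budget. The class and the multiplicative update rule below are hypothetical simplifications; QVM's actual controller is considerably more refined.

```python
import random

class OverheadController:
    """Feedback-directed sampling: adapt the sampling probability so the
    fraction of time spent in the analysis tracks a user budget.
    (A sketch of the general idea only, not QVM's implementation.)"""
    def __init__(self, budget, rate=1.0):
        self.budget = budget          # target overhead, e.g. 0.05 = 5%
        self.rate = rate              # current sampling probability
        self.app_time = 0.0
        self.analysis_time = 0.0

    def should_sample(self):
        """Decide whether to run the analysis on the next program event."""
        return random.random() < self.rate

    def record(self, app_time, analysis_time):
        """Report measured times for the last interval, then adapt the rate."""
        self.app_time += app_time
        self.analysis_time += analysis_time
        total = self.app_time + self.analysis_time
        overhead = self.analysis_time / total if total else 0.0
        # Simple multiplicative feedback: back off when over budget.
        if overhead > self.budget:
            self.rate = max(0.01, self.rate * 0.5)
        else:
            self.rate = min(1.0, self.rate * 1.1)

ctl = OverheadController(budget=0.05)
ctl.record(app_time=90.0, analysis_time=10.0)   # 10% overhead: back off
print(ctl.rate)  # rate halved
ctl.record(app_time=199.0, analysis_time=0.5)   # now under budget: ramp up
print(ctl.rate)  # rate increased again
```

The key design point is that the controller lives inside the runtime, so it can throttle individual analyses at a fine granularity instead of turning the whole tool on or off.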