Reliable and Interpretable Artificial Intelligence

Semester: Fall 2017
Lecturer: Prof. Dr. Martin Vechev
TAs: Dr. Petar Tsankov, Dr. Dana Drachsler-Cohen, Dimitar Dimitrov, Matthew Mirman
EDoz: EDoz link
Credits: 4
Lecture Time and Place: Tuesday, 10am-12pm in HG E3
Exercise Time and Place: Tuesday, 2-3pm in HG F26.3; Wednesday, 11am-12pm in CAB G59


Creating reliable and explainable probabilistic models is a major challenge in solving the artificial intelligence problem. This course covers some of the latest advances that bring us closer to constructing such models. These advances span the areas of program synthesis/induction, programming languages, machine learning, and probabilistic programming. The material consists of three interconnected parts:

Part I: Program Synthesis/Induction
Synthesis is a new frontier in AI where the computer programs itself from user-provided examples. Synthesis has significant applications for non-programmers as well as for programmers, where it can provide massive productivity increases (e.g., data wrangling for data scientists). Modern synthesis techniques excel at learning functions over discrete spaces from (partial) intent. There have been a number of recent, exciting breakthroughs in techniques that discover complex, interpretable/explainable functions from a few examples, partial sketches, and other forms of supervision.
Topics covered:
- Theory of program synthesis: version spaces, counterexample-guided inductive synthesis (CEGIS) with SAT/SMT (a minimal sketch follows this list), synthesis from noisy examples, learning with few examples, compositional synthesis, lower bounds on learning.
- Applications of techniques: synthesis for end users (e.g., spreadsheets), data analytics and financial computing, interpretable machine learning models for structured data.
- Combining neural networks and synthesis.
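
To make the CEGIS loop concrete, here is a minimal, self-contained sketch using Z3's Python bindings (z3-solver). The toy specification f(x) = 2x and the linear template with two integer holes are illustrative assumptions, not examples from the lecture material:

# A minimal CEGIS loop with Z3 (pip install z3-solver).
# Assumption for illustration only: spec f(x) = 2*x, template a*x + b.
from z3 import Int, IntVal, Solver, Not, sat

def spec(x, y):
    # Specification the synthesized function must satisfy: y = f(x) = 2*x.
    return y == 2 * x

def template(x, a, b):
    # Program sketch with two holes: f(x) = a*x + b.
    return a * x + b

def cegis(max_rounds=32):
    a, b, x = Int('a'), Int('b'), Int('x')
    guess = (0, 0)      # current values for the holes
    cexs = []           # counterexample inputs collected so far
    for _ in range(max_rounds):
        # Verification: search for an input where the current guess violates the spec.
        verifier = Solver()
        verifier.add(Not(spec(x, template(x, IntVal(guess[0]), IntVal(guess[1])))))
        if verifier.check() != sat:
            return guess    # no counterexample: the guess is correct for all inputs
        cexs.append(verifier.model().eval(x, model_completion=True).as_long())
        # Synthesis: find hole values consistent with all counterexamples seen so far.
        synthesizer = Solver()
        for c in cexs:
            synthesizer.add(spec(IntVal(c), template(IntVal(c), a, b)))
        if synthesizer.check() != sat:
            return None     # the template cannot realize the specification
        m = synthesizer.model()
        guess = (m.eval(a, model_completion=True).as_long(),
                 m.eval(b, model_completion=True).as_long())
    return None

print(cegis())  # expected: (2, 0), i.e. f(x) = 2*x + 0

The two solvers alternate: the verifier tries to refute the current candidate, and the synthesizer repairs the holes against all counterexamples collected so far, which is exactly the CEGIS interaction covered in the lecture.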

Part II: Robustness of Deep Learning
Deep learning methods based on neural networks have made impressive advances in recent years. A fundamental challenge with these models is understanding what the trained neural network has actually learned: for example, how stable/robust the network is to slight variations of the input (e.g., an image or a video), how easy it is to fool the network into misclassifying obvious inputs, etc.
Topics covered:
- Basics of neural networks: fully connected, convolutional networks, residual networks, activation functions
- Finding adversarial examples in deep learning with SMT
- Methods and tools to guarantee robustness of deep nets (e.g., via affine arithmetic, SMT solvers); synthesis of robustness specs (a small certification sketch follows this list).
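
To illustrate the flavor of such certification methods, below is a minimal sketch of interval bound propagation for a tiny fully connected ReLU network in plain Python/NumPy. It uses the simpler interval domain rather than affine arithmetic or SMT, and the random weights, input point, and epsilon are illustrative assumptions:

# Robustness certification via interval bound propagation (plain NumPy).
# The network, weights, input, and eps below are illustrative assumptions.
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate the box [lo, hi] through y = W x + b (sound per-neuron bounds).
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps interval bounds to interval bounds directly.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def output_bounds(x, eps, layers):
    # Bounds on every output logit over all inputs within L-infinity distance eps of x.
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:     # ReLU on hidden layers only
            lo, hi = interval_relu(lo, hi)
    return lo, hi

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 2)), rng.normal(size=4)),   # hidden layer: 2 -> 4
          (rng.normal(size=(2, 4)), rng.normal(size=2))]   # output layer: 4 -> 2
x = np.array([0.5, -0.2])
lo, hi = output_bounds(x, eps=0.05, layers=layers)
# If the lower bound of the predicted class exceeds the upper bound of the other
# class, no perturbation within eps can change the prediction.
print("logit bounds:", lo, hi)
print("certified robust for class 0:", bool(lo[0] > hi[1]))

If the bound check fails, the result is inconclusive: the interval domain over-approximates, which is why the lecture discusses more precise abstractions and SMT-based analyses.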

Part III: Probabilistic Programming
Probabilistic programming is an emerging direction whose goal is to democratize the construction of probabilistic models. In probabilistic programming, the user specifies a model while inference is left to the underlying solver. The idea is that the higher level of abstraction makes it easier to express, understand, and reason about probabilistic models (a small inference sketch follows the topic list below).
Topics covered:
- Inference: MCMC samplers and tactics (approximate), symbolic inference (exact).
- Semantics: basic measure theoretic semantics of probability; bridging measure theory and symbolic inference.
- Frameworks and languages: WebPPL (MIT/Stanford), PSI (ETH), Picture/Venture (MIT), Anglican (Oxford).
- Synthesis for probabilistic programs: this connects to Part I.
- Applications of probabilistic programming: using the above solvers for reasoning about bias in machine learning models (connects to Part II), reasoning about computer networks, security protocols, approximate computing, cognitive models, rational agents.
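
As a taste of what "inference is left to the underlying solver" means, here is a minimal hand-rolled Metropolis-Hastings sampler for a tiny probabilistic program: inferring a coin's bias from observed flips under a uniform prior. The model, data, and sampler settings are illustrative assumptions; the frameworks listed above automate this step:

# Approximate inference by Metropolis-Hastings for a toy model (plain Python).
# The observations, prior, and sampler settings are illustrative assumptions.
import math
import random

observations = [1, 1, 0, 1, 1, 1, 0, 1]   # 6 heads, 2 tails

def log_joint(p):
    # log prior (uniform on [0, 1]) + log likelihood of the flips
    if not 0.0 < p < 1.0:
        return float('-inf')
    heads = sum(observations)
    tails = len(observations) - heads
    return heads * math.log(p) + tails * math.log(1.0 - p)

def metropolis_hastings(num_samples=20000, step=0.1, seed=0):
    rng = random.Random(seed)
    p = 0.5                                  # initial state of the chain
    samples = []
    for _ in range(num_samples):
        proposal = p + rng.gauss(0.0, step)  # symmetric random-walk proposal
        # Accept with probability min(1, posterior(proposal) / posterior(p)).
        log_accept = log_joint(proposal) - log_joint(p)
        if log_accept >= 0 or rng.random() < math.exp(log_accept):
            p = proposal
        samples.append(p)
    return samples

samples = metropolis_hastings()
print("posterior mean of p:", sum(samples) / len(samples))  # analytic answer: 0.7

A probabilistic programming system lets the user write only the model (the prior and the observations) and chooses a sampler or an exact symbolic method behind the scenes, which is the division of labor the lectures examine.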

Lectures (Tentative Schedule):

1. Sept 19: Motivation, topics, organization. Slides: PDF.

Part I: Program Synthesis/Induction
2. Sept 26: Sketching, counterexample-guided inductive synthesis (CEGIS). Slides: PDF; blackboard: PDF. Exercises: PDF, Z3. Solutions: Z3.
3. Oct 3: Exact Learning. Slides: PDF. Exercises: PDF. Solutions: PDF.
4. Oct 10: ML for Synthesis, Synthesis for Interpretable ML. Slides: PDF, PDF; RNNs on blackboard. Exercises: PDF. Solutions: PDF.
5. Oct 17: Neural Turing Machines, Synthesis for Interpretability. Slides: PDF; RNNs summary. Exercises: PDF. Solutions: PDF.

Part II: Robustness of Deep Learning
Part III: Probabilistic Programming

Readings (Optional):

Lecture 2

Lecture 3

Lecture 4

Lecture 5

Synthesis Cheat Sheet: