misailo@illinois.edu


 Associate Professor
 Department of Computer Science
 University of Illinois
 4110 Siebel Center
 201 N Goodwin Avenue
 Urbana, IL 61801, USA


Sasa Misailovic

Pronunciation: /Sa-sha Mee-sail-oveech/

Research Summary

I work at the intersection of programming languages, compilers, and software engineering. I am pursuing an ambitious research agenda that rethinks programming systems for modern applications in which noisy data, unreliable communication, approximate computation, and uncertain decisions are fundamental traits. Such applications are pervasive in many fields, including machine learning, data analytics, multimedia processing, and robotics. My research aims to attack three “grand challenges” for future programming systems, spanning languages, compilers, runtime systems, developer-productivity tools, and software-hardware interfaces:

  1. How can next-generation programming systems and hardware interoperate to deliver acceptably accurate results with maximum performance and energy savings?

  2. How can these systems rigorously quantify the inaccuracy and noise from the inputs and computation?

  3. How can we apply these principles to improve the reliability of software operating in uncertain environments?

My research seeks to answer these questions by combining empirical techniques from programming languages and computer systems with solid mathematical foundations provided by probability theory, mathematical optimization, logic, and statistics. At Illinois, my work redesigns the entire computing stack, from applications to hardware, with abstractions for approximation and uncertainty as central tenets.

I have published my research broadly, in top conferences in programming languages (PLDI, POPL, OOPSLA, ASPLOS, PPoPP, ESOP, ECOOP, CGO, SAS) and software engineering (FSE, ICSE, ISSTA, ICST), but also in formal methods (CAV, ATVA, RV), computer architecture (HPCA), high-performance computing (ICS), computer systems (USENIX ATC), and embedded systems (DAC). As of December 2021, my papers have accumulated over 3500 citations according to Google Scholar (eleven papers have over 100 citations each) and my h-index is 25. Three papers won Distinguished Paper Awards [OOPSLA’13, OOPSLA’14, ICSE-NIER’20] and one was recognized with a Test-of-Time Award honorable mention [FSE’11]. My group's research has been funded by NSF, DARPA, USDA, and industry gifts. My tools, which are publicly available, have helped identify significant performance improvement opportunities and exposed numerous accuracy-related errors in existing software. My work at the University of Illinois has explored three main directions:

  1. Programming systems for principled computing with approximations;

  2. Programming systems for reliable, accurate, and efficient probabilistic software;

  3. Programming systems for improving software reliability, including support for autonomous distributed systems.

Programming Systems for Principled Computing with Approximations

Approximate computing explores system-level techniques for reducing the time and energy of computation by trading off a small amount of accuracy. My Ph.D. work pioneered accuracy-aware compiler transformations and probabilistic accuracy analyses [MITTR'09, ICSE’10, ASPLOS’11, FSE’11, SAS’11, POPL’12, PLDI’12, RACES’12, TECS’13, OOPSLA’13, OOPSLA’14, MITPHD'15, CACM’16]. An interested reader may find a brief summary of these contributions here. While the community developed many promising early results at individual levels of the computing stack, it remained an open question how to safely apply and systematically compose different approximations, especially for parallel and heterogeneous systems (most previous analyses focused on sequential programs). At Illinois, I have been working on building a practical ecosystem of techniques for principled program approximation. My goal is to simplify the development of approximate programs by helping programmers ensure safety and quantitative reliability, and manage accuracy-performance tradeoffs on parallel and heterogeneous hardware. To accomplish this goal, my research extends across the full computing stack.

We designed the first language/verification system for reasoning about approximate asynchronous message-passing programs [OOPSLA’19a]. We showed that a handful of language constructs (typed channels, conditional send/receive primitives, and random choice) are general enough to express most communication-oriented approximations proposed in the literature. We also designed a novel automated sequentialization approach that lifts an arbitrary sequential approximation analysis to reason about message-passing programs satisfying the symmetric nondeterminism property, and that ensures the approximations do not cause deadlocks. We demonstrated the effectiveness and generality of our approach by representing and proving the correctness of four analyses in our system (approximation types, relative safety, reliability, and accuracy) and using them jointly to automatically verify eight approximate programs and eight common parallel kernels under various communication-oriented approximations. We also extended the language with recovery blocks and a precise quantitative reliability analysis [CGO’20], and incorporated low-overhead runtime accuracy monitoring [RV’21].
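To give a flavor of these constructs, here is a minimal Python sketch (not the actual OOPSLA’19a language) of a typed channel whose send primitive makes a random choice to drop messages; the class name, `drop_prob` parameter, and the default-value fallback on receive are all illustrative assumptions.

```python
import queue
import random

class Channel:
    """Toy typed channel with an approximate (probabilistic) send."""
    def __init__(self, elem_type, drop_prob=0.0):
        self.elem_type = elem_type    # "typed": element type checked on send
        self.drop_prob = drop_prob    # random choice: probability of dropping
        self.q = queue.Queue()

    def send(self, value):
        assert isinstance(value, self.elem_type)
        # Approximate send: occasionally skip the message to save
        # communication, as in communication-oriented approximations.
        if random.random() >= self.drop_prob:
            self.q.put(value)

    def recv(self, default):
        # Conditional receive: fall back to a default (e.g., a stale value)
        # instead of blocking when no message has arrived.
        try:
            return self.q.get_nowait()
        except queue.Empty:
            return default

ch = Channel(float, drop_prob=0.1)
ch.send(3.14)
print(ch.recv(default=0.0))   # 3.14 with probability 0.9, otherwise 0.0
```

Reasoning about programs written against such primitives, rather than ad-hoc dropping logic, is what makes the analyses above composable.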

We developed a novel compiler for heterogeneous systems with approximate hardware (e.g., custom accelerators) and software components (e.g., algorithmic approximation). It tackles the challenges of navigating the large tradeoff spaces induced by approximations and of maintaining accuracy/performance portability across different hardware platforms. We designed a set of LLVM IR extensions for tensor-based programs augmented with accuracy specifications, and developed a three-level predictive auto-tuning approach that combines hardware-agnostic approximations (which can be selected ahead of time, while developing the program) with hardware-specific ones (which can be selected only after installing the program on the target platform) [OOPSLA’19b, PPoPP’21]. For a set of real-world DNN benchmarks, we obtained ~2x speedup with a small classification-accuracy loss, while our predictive approximation search took 10x less time than empirical tuning.
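As a rough illustration of predictive tuning, the toy Python sketch below composes hypothetical per-knob models to prune configurations before any empirical measurement. The knob names, their predicted values, and the compositional model (multiplicative speedups, additive accuracy losses) are assumptions chosen for exposition, not the actual OOPSLA’19b/PPoPP’21 models.

```python
import itertools

# Hypothetical per-knob models: setting -> (predicted speedup, accuracy loss).
# In a real system these would come from hardware-agnostic calibration
# (ahead of time) and hardware-specific profiling (after installation).
KNOBS = {
    "conv_precision": {"fp32": (1.0, 0.0), "fp16": (1.6, 0.004)},
    "sampling":       {"none": (1.0, 0.0), "perforate_25": (1.3, 0.008)},
}
ACCURACY_BUDGET = 0.01   # maximum tolerated predicted accuracy loss

def predict(config):
    # Simple compositional model: speedups multiply, losses add.
    speedup, loss = 1.0, 0.0
    for knob, setting in config.items():
        s, l = KNOBS[knob][setting]
        speedup *= s
        loss += l
    return speedup, loss

candidates = []
for settings in itertools.product(*(KNOBS[k] for k in KNOBS)):
    config = dict(zip(KNOBS, settings))
    speedup, loss = predict(config)
    if loss <= ACCURACY_BUDGET:        # prune before any empirical run
        candidates.append((speedup, config))

# Only the surviving candidates would be measured empirically.
for speedup, config in sorted(candidates, key=lambda c: -c[0]):
    print(f"predicted {speedup:.2f}x: {config}")
```

The point of the prediction step is exactly this pruning: empirical tuning pays the full execution cost per configuration, while the model discards unprofitable combinations for free.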

We built adaptive runtime systems for input-aware and phase-aware approximation that leverage dynamic changes in accuracy at run time and thus provide finer-grained control over result accuracy. For video processing pipelines, we expose input-awareness via error calibration (inspired by 'canary inputs') on adaptively sampled frames [ATC’18]. For graph analytics programs, our input-aware algorithm systematically reduces the graph during pre-processing and uses an adaptive post-processing scheme to improve the final result [ICS’20]. For iterative numerical programs, our runtime system adapts the approximation level in each identified execution phase (e.g., beginning, middle, or end) [CGO’17].
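The following toy Python sketch conveys the canary-style calibration idea under simplifying assumptions: `exact` and `approximate` are stand-ins for real pipeline stages, and the one-step up/down control policy is a deliberately simple version of what an actual runtime would use.

```python
import random

def exact(frame):
    # Stand-in for the precise computation on a video frame.
    return sum(frame)

def approximate(frame, level):
    # Stand-in approximation: process every `level`-th element and rescale.
    return sum(frame[::level]) * level

ERROR_BOUND = 0.05     # acceptable relative error
CANARY_PERIOD = 10     # calibrate on every 10th frame
level = 4              # current approximation level (1 = exact)

for i in range(100):
    frame = [random.random() for _ in range(1000)]
    if i % CANARY_PERIOD == 0:
        # Calibration on a sampled "canary" frame: run both versions
        # and measure the observed relative error.
        err = abs(approximate(frame, level) - exact(frame)) / abs(exact(frame))
        # Adapt: back off when the error exceeds the bound, otherwise
        # try a more aggressive level on subsequent frames.
        level = max(1, level - 1) if err > ERROR_BOUND else level + 1
    result = approximate(frame, level)
```

Because the calibration runs only periodically, the runtime pays the cost of the exact computation on a small fraction of inputs while tracking input-dependent accuracy.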

We co-designed a new shared-memory multiprocessor with a wireless network-on-chip, and developed a programming model and runtime system that leverage approximate communication [ASPLOS’19a, HPCA’21]. Each CPU core is equipped with a wireless antenna that broadcasts select data to the other cores and can selectively drop messages to reduce contention on the wireless channel. Our programming system automatically identifies data in shared-memory programs that can benefit from wireless communication and/or tolerate dropped messages. For a set of graph and scientific benchmarks, it achieved ~1.9x speedup (with high accuracy) over a conventional architecture with a wired network-on-chip.

Programming Systems for Reliable, Accurate, and Efficient Probabilistic Software

Probabilistic algorithms have emerged in various areas of computer science. Notably, probabilistic programming (PP) aims to democratize Bayesian inference by representing complex models as simple high-level programs, while the underlying PP systems hide the intricate details of inference. PP has recently shown exciting potential and gained significant interest from industry (e.g., Facebook, Google, Microsoft, Uber) and the scientific community (e.g., epidemic modeling and single-cell genomics). However, this area is still in its infancy and needs an ecosystem of tools for improving the reliability and accuracy of PP systems. My work identified that many errors in probabilistic computations and the underlying systems are caused by randomness and/or the handling of probability distributions, and are therefore hard to detect using conventional testing and analysis approaches. My research presents novel techniques for testing probabilistic algorithms and systems [FSE’18, ICSE’19, FSE’19, ISSTA’20, ISSTA’21, FSE’21] and novel analyses for reasoning about the accuracy of probabilistic programs [ATVA’18, PLDI’18, ESOP’20, ATVA’21, DAC’21].

We developed the foundation for testing PP systems [FSE’18]. Our fuzzing-based framework has several analogies with compiler fuzz-testing. However, to make fuzzing effective for PP systems, we showed that it must incorporate domain-specific information about probability distributions, dependence analysis, and accuracy tests to guide the search for errors. It helped us discover and fix over 50 new bugs in three popular PP systems (Stan, Pyro, Edward) and two underlying libraries (PyTorch and TensorFlow). We also built a taxonomy of all errors previously reported in these systems, which can guide future researchers. To help debug PP systems, we extended our framework with an analysis-driven algorithm for reducing error-revealing probabilistic programs [FSE’19].
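The sketch below illustrates the accuracy-oracle idea at toy scale, assuming a conjugate normal-normal model with a known analytic posterior: a "fuzzer" draws random model parameters and data, runs a stand-in sampler, and flags results that deviate too far from ground truth. The sampler, tolerance, and trial counts are illustrative assumptions, not the FSE’18 framework itself.

```python
import math
import random
import statistics

def exact_posterior_mean(prior_mu, prior_sigma, obs, obs_sigma):
    # Analytic normal-normal conjugate posterior mean (the ground truth).
    precision = 1 / prior_sigma**2 + len(obs) / obs_sigma**2
    return (prior_mu / prior_sigma**2 + sum(obs) / obs_sigma**2) / precision

def sampled_posterior_mean(prior_mu, prior_sigma, obs, obs_sigma, iters=10000):
    # Stand-in for the inference system under test: a toy Metropolis sampler.
    def log_density(v):
        lp = -((v - prior_mu)**2) / (2 * prior_sigma**2)
        return lp - sum((o - v)**2 for o in obs) / (2 * obs_sigma**2)
    x, samples = prior_mu, []
    for _ in range(iters):
        proposal = x + random.gauss(0.0, 0.5)
        if random.random() < math.exp(min(0.0, log_density(proposal) - log_density(x))):
            x = proposal
        samples.append(x)
    return statistics.mean(samples[iters // 2:])   # discard burn-in

for trial in range(10):
    # "Fuzzer": draw a random model configuration and data set.
    mu, sigma = random.uniform(-5, 5), random.uniform(0.5, 3.0)
    obs = [random.gauss(mu, 1.0) for _ in range(random.randint(1, 20))]
    expected = exact_posterior_mean(mu, sigma, obs, 1.0)
    actual = sampled_posterior_mean(mu, sigma, obs, 1.0)
    # Accuracy oracle: large deviations point to potential inference bugs
    # (the 0.3 tolerance is generous so sampling noise does not cause
    # spurious reports).
    if abs(actual - expected) > 0.3:
        print(f"trial {trial}: expected {expected:.3f}, got {actual:.3f}")
```

Distinguishing genuine bugs from sampling noise is exactly why the accuracy oracle and distribution-aware guidance matter more here than in conventional compiler fuzzing.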

We developed new programming systems that make testing of probabilistic software more effective and easier to understand. While many probabilistic algorithms come with strong theoretical guarantees (e.g., Bloom filters, locality-sensitive hashing), these specifications are rarely used to check the algorithm implementations. We thus developed a specification language for probabilistic accuracy predicates and a framework that uses statistical hypothesis testing to check whether an implementation satisfies its specification [ICSE’19]. It helped us find and fix five previously unknown bugs in popular implementations of probabilistic algorithms. However, even if a randomized program is correct, its tests may occasionally fail (i.e., be flaky). We developed an approach that uses statistical convergence testing to identify flaky tests in PP and machine learning (ML) systems [ISSTA’20], which detected 11 new flaky tests. We extended this approach to optimize test run time [ISSTA’21], resulting in a 2.2x speedup on 160 tests. Inspired by these works, we recently developed the first approach for fixing assertion bounds in PP and ML systems, based on extreme value theory from statistics, which proposed fixes to 28 tests [FSE’21].
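As an illustration of checking such a probabilistic accuracy predicate, the sketch below tests a toy Bloom filter's false-positive rate with a one-sided binomial test (via SciPy). The filter parameters, the slack added to the bound, and the sample size are assumptions chosen for the example, not the ICSE’19 framework itself.

```python
import hashlib
from scipy.stats import binom

class BloomFilter:
    def __init__(self, m_bits, k_hashes):
        self.m, self.k = m_bits, k_hashes
        self.bits = [False] * m_bits

    def _indexes(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for ix in self._indexes(item):
            self.bits[ix] = True

    def __contains__(self, item):
        return all(self.bits[ix] for ix in self._indexes(item))

# Accuracy specification: with m = 9585 bits and k = 7 hash functions,
# a filter holding n = 1000 items should have a false-positive rate of
# about 1% (the standard Bloom filter bound).
bf = BloomFilter(9585, 7)
for i in range(1000):
    bf.add(f"item-{i}")

trials = 10000
false_positives = sum(f"other-{i}" in bf for i in range(trials))

# One-sided exact binomial test of H0: fp_rate <= 0.011 (spec plus slack).
# A small p-value is statistical evidence that the implementation
# violates its accuracy specification.
p_value = binom.sf(false_positives - 1, trials, 0.011)
print(f"{false_positives} false positives, p-value = {p_value:.4f}")
```

Treating the bound as a statistical hypothesis, rather than asserting on a single run, is what keeps such a test from being flaky while still catching real violations.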

We leveraged static analysis to improve the accuracy and performance of probabilistic programs. Building on our symbolic solver for probabilistic programs [CAV’16], we developed analyses for symbolic inference with accuracy guarantees [ATVA’21], a symbolic sensitivity analysis of a probabilistic program's result with respect to noise in its inputs or prior distributions [ATVA’18], and a domain-specific language for reasoning about congestion and reliability in software-defined networks [PLDI’18]. We further identified accuracy issues when naively converting mixed discrete/continuous programs to continuous ones (for which inference is faster). Our approach poses PP continualization as a program synthesis problem, combining program analysis and numerical optimization to minimize the accuracy loss of continualized programs [ESOP’20]. Recently, we developed an optimizing PP compiler for embedded systems that tunes inference code to make optimal use of fixed-point arithmetic [DAC’21]. It achieved speedups over floating-point code of ~11x on Arduino, ~4x on PocketBeagle, and ~2x on Raspberry Pi systems.
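A minimal sketch of the fixed-point idea, assuming a Q16.16 representation and a toy discrete Bayes update; the actual DAC’21 compiler selects formats and rewrites inference code automatically, while this example only shows why integer arithmetic can stand in for floating point.

```python
# Q16.16 fixed-point sketch: values are 32-bit integers with 16
# fractional bits (a format a microcontroller handles natively).
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def to_fix(x):
    return int(round(x * ONE))

def to_float(q):
    return q / ONE

def fix_mul(a, b):
    # Multiply, then shift to restore the Q16.16 scale.
    return (a * b) >> FRAC_BITS

def fix_div(a, b):
    # Pre-shift the numerator to keep fractional precision.
    return (a << FRAC_BITS) // b

# Toy discrete Bayes update for a coin bias theta in {0.3, 0.5, 0.7}
# after observing heads (the likelihood of heads equals theta).
prior = [to_fix(1 / 3)] * 3
likelihood = [to_fix(0.3), to_fix(0.5), to_fix(0.7)]

unnormalized = [fix_mul(p, l) for p, l in zip(prior, likelihood)]
total = sum(unnormalized)
posterior = [fix_div(u, total) for u in unnormalized]

print([round(to_float(p), 4) for p in posterior])  # approx [0.2, 0.3333, 0.4667]
```

The compiler's job is to choose the number of fractional bits per intermediate value so that such rescaling never overflows while keeping the quantization error within the accuracy budget.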

Programming Systems for Improving Reliability of Software in Uncertain Environments

My research also investigated techniques for coping with uncertainty from the environment that can impact a computation and its accuracy. Distributed robotic applications are a prominent example, with noisy sensors, ad-hoc interfaces to hardware, and potentially unreliable communication with other robots. We developed a novel language and verification system [OOPSLA’20] with abstractions that make control and coordination code portable and modularly verifiable. Our abstractions for platform-independent code include port interfaces for sensing and control and distributed shared memory for coordination. We defined an executable formal semantics in the K framework. Our formalization enables the developer to explicitly state the assumptions about the environment needed to prove important safety properties with off-the-shelf solvers (e.g., for reachability), which we demonstrated on distributed delivery and mapping applications.
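The following Python sketch conveys the flavor of these abstractions under stated assumptions: `GpsPort`, `SharedRegistry`, the noise bound, and the separation check are hypothetical stand-ins for the OOPSLA’20 port-interface and distributed-shared-memory constructs, written against the abstraction rather than any platform.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float

class GpsPort:
    """Hypothetical port interface: platform-independent sensing with an
    explicit environment assumption (bounded sensor noise)."""
    NOISE_BOUND = 0.5   # assumption: GPS position error is within 0.5 m

    def __init__(self, driver):
        self.driver = driver          # platform-specific backend

    def read(self) -> Pose:
        return self.driver.read_pose()

class SharedRegistry:
    """Toy distributed shared memory: each robot publishes its pose and
    coordination code reads a local snapshot."""
    def __init__(self):
        self.poses = {}

    def publish(self, robot_id, pose):
        self.poses[robot_id] = pose

    def snapshot(self):
        return dict(self.poses)

def separated(a: Pose, b: Pose, min_dist=2.0):
    # Safety property stated against the abstraction, not the platform:
    # robots keep a minimum separation even under worst-case sensor noise.
    slack = 2 * GpsPort.NOISE_BOUND
    return ((a.x - b.x)**2 + (a.y - b.y)**2) ** 0.5 >= min_dist + slack

class StubDriver:                      # stand-in hardware backend
    def read_pose(self):
        return Pose(0.0, 0.0)

registry = SharedRegistry()
registry.publish("r1", GpsPort(StubDriver()).read())
registry.publish("r2", Pose(5.0, 0.0))
snap = registry.snapshot()
print(separated(snap["r1"], snap["r2"]))   # True: 5.0 >= 2.0 + 2*0.5
```

Because the noise bound is an explicit, named assumption of the port interface, a verifier can discharge the safety property modularly and flag exactly which environment assumptions the proof depends on.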

The ability of programs to produce multiple acceptable results opens new opportunities to redefine the goals of program testing and synthesis through the lens of uncertainty. Resiliency analysis (which studies the impact of non-deterministic soft errors on program reliability) and approximate program analysis are similar in that both rely on a notion of acceptable error, although the sources of inaccuracy differ (bit flips vs. intentional approximations). We showed how to adapt several foundational techniques from software testing to speed up error-injection campaigns for resiliency analysis [ASPLOS’19b, DSN’19]. We refined example-based synthesis by identifying perturbation properties that relate changes in program outputs to changes in program inputs, and by automatically generating new input-output examples to reduce ambiguity [POPL’20].
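A minimal sketch of an error-injection campaign, assuming single bit flips in floating-point inputs and a simple relative-error acceptability criterion; the program under study, tolerance, and campaign size are illustrative (the ASPLOS’19b/DSN’19 work additionally prunes and prioritizes injections using software-testing techniques, which this sketch omits).

```python
import random
import struct

def flip_bit(x: float, bit: int) -> float:
    # Reinterpret the float as 64 raw bits, flip one, reinterpret back.
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (out,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return out

def program(xs):
    return sum(xs) / len(xs)          # computation under study

xs = [float(i) for i in range(100)]
golden = program(xs)                  # fault-free reference output
TOLERANCE = 0.01                      # acceptable relative error

outcomes = {"benign": 0, "acceptable": 0, "unacceptable": 0}
for _ in range(1000):
    # Inject one fault: flip a random bit of a random input element.
    corrupted = list(xs)
    i = random.randrange(len(xs))
    corrupted[i] = flip_bit(corrupted[i], random.randrange(64))
    err = abs(program(corrupted) - golden) / abs(golden)
    if err == 0:
        outcomes["benign"] += 1
    elif err <= TOLERANCE:
        outcomes["acceptable"] += 1
    else:                             # also catches NaN/Inf corruptions
        outcomes["unacceptable"] += 1

print(outcomes)
```

The shared acceptability criterion is what lets testing techniques transfer: injections predicted to land in the benign or acceptable buckets can be skipped or deprioritized, shrinking the campaign.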

Future Directions

The growing complexity of software and the diversity of hardware platforms will continue to make uncertainty and approximation central concerns in computer systems. Many emerging applications will be distributed, adaptive, and autonomous, and each component (e.g., for perception, control, and decision-making) will require different mathematical and empirical tools that must be combined for end-to-end analysis and optimization in the face of uncertainty. The long-term goal of my research is to redefine programming systems to support such complex applications while ensuring safety and, at the same time, discovering profitable tradeoffs between important quantitative properties: accuracy, robustness, security, privacy, and others. To support this goal, in the near future my research will explore the three lines of work described next.

The first line of work will build robust programming systems, developed on solid theoretical foundations, to sustain the democratization of programming with probabilities. The shift of computing toward AI and machine learning is creating a growing need for probabilistic and statistical reasoning. Seasoned and novice programmers alike often have little experience with this kind of reasoning and will benefit significantly from automated programming systems. I plan to develop approaches for systematic analysis, testing, debugging, and repair of probabilistic software that combine data-driven and static analyses to obtain acceptable accuracy and scalability (e.g., by continuing from FSE’18, FSE’19, ICSE’19, ISSTA’20, ATVA’21, ISSTA’21, and FSE’21).

The second line of work will expand the reach of automated probabilistic reasoning to improve the reliability and performance of distributed autonomous applications. Autonomous robotics applications have multiple complex components, and reasoning about their fault tolerance poses new challenges. I plan to build on the OOPSLA’20 work to define new domain-specific languages (e.g., for precision agriculture and drone operation) and to extend the languages and verification techniques with probabilistic abstractions for distributed computations (inspired by PLDI’18 and OOPSLA’19a). I also plan to compile these high-level languages efficiently (based on OOPSLA’19b, PPoPP’21, and DAC’21), while enabling low-overhead accuracy monitoring and energy-efficient inference.

The third line of work will invent new foundational techniques for analyzing and optimizing quantitative tradeoffs that span parallel/distributed programs and the hardware on which they run. I will explore the abstractions and interfaces between the layers of the computing stack, as well as mathematical optimization approaches for multi-objective program tuning. I will establish a stronger connection between quantitative program properties and the costs of computation and communication. Combining (soft) probabilistic and (hard) logical constraints with empirical or surrogate accuracy/performance/robustness models (e.g., by extending the approach from PPoPP’21) opens an interesting avenue for future adaptive analysis/optimization algorithms that can further incorporate user feedback (e.g., as in POPL’20) to improve their precision.