News Archives

[Colloquium] Bias in Computer Systems Experiments

March 2, 2010

Watch Colloquium: 

QuickTime file (484 MB)
AVI file (476 MB)


  • Date: Tuesday, March 2, 2010 
  • Time: 11 am — 12:15 pm 
  • Place: Mechanical Engineering, Room 218

Todd Mytkowicz
Dept. of Computer Science
University of Colorado

Abstract: To evaluate an innovation in computer systems, a performance analyst measures execution time or other metrics using one or more standard workloads. In short, the analyst runs an experiment. To ensure the experiment is free from error, s/he carefully minimizes the amount of instrumentation, controls the environment in which the measurement takes place, repeats the measurement multiple times, and uses statistical techniques to characterize the data. Unfortunately, even with such a responsible approach, the analyst's experiment may still be misleading because of bias. An experiment is biased when its setup, or the environment in which the measurements are carried out, inadvertently favors one outcome over others. In this talk, I demonstrate that bias is large enough to mislead systems experiments and common enough that the systems community cannot ignore it. I describe tools and methodologies that my co-authors and I developed to mitigate the impact of bias on our experiments. Finally, I conclude with my plans for future research: tools that help performance analysts understand the complex behavior of their systems.
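
As an illustration of the experimental setup the abstract describes, the sketch below repeatedly times a workload, summarizes the runs statistically, and then redoes the measurement while an incidental detail of the environment (here, the size of the UNIX environment) is varied. The workload name ./benchmark, the repeat count, and the padding sizes are illustrative assumptions, not details from the talk; the point is only that a setting unrelated to the program under test can still shift the measured numbers.

    import os
    import statistics
    import subprocess
    import time

    def measure(command, repeats=30):
        """Time a workload several times and summarize the runs."""
        times = []
        for _ in range(repeats):
            start = time.perf_counter()
            # "./benchmark" is a hypothetical stand-in for any standard workload.
            subprocess.run(command, check=True, stdout=subprocess.DEVNULL)
            times.append(time.perf_counter() - start)
        return statistics.mean(times), statistics.stdev(times)

    # Varying an incidental factor of the setup -- the size of the UNIX
    # environment -- while the program under test stays unchanged. If the
    # summary statistics shift with the padding, the setup itself is
    # favoring some outcomes over others, i.e. the experiment is biased.
    for padding in (0, 1024, 4096):
        os.environ["PADDING"] = "x" * padding
        mean, stdev = measure(["./benchmark"])
        print(f"env padding {padding:5d} bytes: {mean:.4f}s +/- {stdev:.4f}s")
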

Bio: Todd Mytkowicz recently defended his Ph.D. in Computer Science at the University of Colorado, advised by Amer Diwan and co-advised by Elizabeth Bradley. During his graduate tenure he was lucky enough to intern at both Xerox PARC and IBM's T.J. Watson research lab. He was also a visiting scholar at the University of Lugano, Switzerland. His research interests focus on performance analysis of computer systems; specifically, he develops tools that aid programmers in understanding and optimizing their systems.