2019 Computer Science Colloquium Series


Real-time Heterogeneous Data to Guide Infectious Disease Forecasting Models

Sara Del Valle, Los Alamos National Lab

Wednesday, May 1, 2019
Centennial Engineering Center 1041
2:00-3:00 PM

Abstract:

Globalization has created health problems that can no longer be adequately analyzed and mitigated using traditional data analysis techniques and data sources. Whether the goal is to monitor the rapid spread of infectious diseases, societal changes initiated by climate change, or the emergence of a new deadly virus being subtly observed by citizenry, there is a near-real-time digital signature associated with the disruptive event. In this presentation, I will describe an approach that combines heterogeneous data streams, such as Internet and satellite imagery, with mathematical models to monitor and forecast infectious diseases. Our goal is to improve decision support by assimilating real-time information into predictive models to reduce global disease burden.

Bio:

Sara Del Valle is a scientist and deputy group leader in the Information Systems and Modeling Group at Los Alamos National Laboratory. She has a Ph.D. in Applied Mathematics and works on developing, integrating, and analyzing mathematical, computational, and statistical models for the spread of infectious diseases such as smallpox, anthrax, influenza, malaria, HIV, hepatitis C, MERS-CoV, Zika, chikungunya, dengue, and Ebola. In addition, she has modeled the potential effects of mass casualties on the Healthcare and Public Health Sector, including resource allocation and dependencies on other infrastructures. Most recently, she has been investigating the role of heterogeneous data streams such as satellite imagery, social media, and climate in detecting, monitoring, and forecasting infectious diseases in real time.


Inexact computing at LANL

Laura Monroe, Los Alamos National Lab

Wednesday, April 24, 2019
Centennial Engineering Center 1041
2:00-3:00 PM

Abstract:

Smaller feature size in the late-CMOS era is expected to increase the number of faults occurring in systems, and this trend is exacerbated by the larger number of parts needed to field exascale and larger systems. Inexact computing may be one way to address these problems. Inexact computing includes both probabilistic and approximate computing techniques. Probabilistic computation dates back to the 1950s. The calculation is done non-deterministically and the result may be correct or “correct enough”. Approximate computation is deterministic, and produces results that are “close enough”, where the lack of precision may be due to word length limits or numerical methods. In this talk, we discuss LANL work in the area of inexact computing, and the challenges that occur in this type of computation.

Bio:

Dr. Laura Monroe is a research scientist at Los Alamos National Laboratory. She received her Ph.D. in Mathematics and Computer Science from the University of Illinois at Chicago, where she studied the theory of error-correcting codes with Vera Pless. She worked at NASA Glenn following graduation, and joined LANL in 2000. She now works at LANL’s Ultrascale Systems Research Center in the field of probabilistic computing for high-performance applications. She has received several Defense Program Awards of Excellence, an R&D 100 award in 2006 as part of the PixelVizion team, and recently received a 2019 NM Women in Tech award. She has published in the fields of probabilistic computing, resilience, error-correcting codes and visualization, and her interests are in those fields as well as in the mathematical bridge between the computer as physical object and as ideal system.


An Efficient Multiple-Placed Foraging Algorithm for Scalable Swarms

Qi Lu, The University of New Mexico

Wednesday, April 17, 2019
Centennial Engineering Center 1041
2:00-3:00 PM

Abstract:

Scalability is a significant challenge for robot swarms. Generally, larger groups of cooperating robots produce more inter-robot collisions, and in swarm robot foraging, larger search arenas result in larger travel costs. This paper demonstrates a scale-invariant swarm foraging algorithm that ensures that each robot finds and delivers resources to a central collection zone at the same rate, regardless of the size of the swarm or the search area. Dispersed mobile depots aggregate locally foraged resources and transport them to a central place via a hierarchical branching transportation network. This approach is inspired by ubiquitous fractal branching networks such as tree branches and animal cardiovascular networks that deliver resources to cells and determine the scale and pace of life. We demonstrate that biological scaling laws predict how quickly robots forage in simulations of up to thousands of robots searching over thousands of square meters. We then use biological scaling predictions to determine the capacity of depot robots in order to overcome scaling constraints and produce scale-invariant robot swarms.

Bio:

Qi Lu is a Ph.D. candidate in Computer Science at the University of New Mexico, supervised by Dr. Moses. His areas of interest are robot swarms, biologically inspired robots, and autonomous robots. He designed the multiple-placed foraging model to improve the foraging performance of robot swarms, and he built a bio-inspired hierarchical branching network for scalable robot swarms. His goal is to design an efficient foraging algorithm for large robot swarms. His work has been published in top robotics conferences and journals, including ICRA, IROS, and Autonomous Robots. He is on the program committee for the International Symposium on Multi-Robot and Multi-Agent Systems (MRS). He has an M.S. in Computer Science from Aalborg University, Denmark. More info: https://www.cs.unm.edu/~lukey11 and videos: tinyurl.com/youtube-lukey


Decentralized Allocation of Tasks with Temporal and Precedence Constraints to a Team of Robots

Maria Gini, University of Minnesota

Wednesday, April 3, 2019
Centennial Engineering Center 1041
2:00-3:00 PM

Abstract:

We propose an auction-based method for a team of robots to allocate and execute tasks that have temporal and precedence constraints. Temporal constraints are expressed as time windows, within which a task must be executed. The robots use our priority-based iterated sequential single-item auction algorithm to allocate tasks among themselves and keep track of their individual schedules. A key innovation is in decoupling precedence constraints from temporal constraints and dealing with them separately. We demonstrate the performance of the allocation method and show how it can be extended to handle failures and delays during task execution. We leverage the power of simulation as a tool to analyze the robustness of schedules. Data collected during simulations are used to compute well-known indexes that measure the risk of delay and failure in the robots’ schedules. We demonstrate the effectiveness of our method in simulation and with real robot experiments.

Bio:

Maria Gini is a Professor in the Department of Computer Science and Engineering at the University of Minnesota. She studies decision making for autonomous agents in applications ranging from distributed methods for allocation of tasks to robots, to methods for robots to explore an unknown environment, teamwork for search and rescue, and navigation in dense crowds. She is a Fellow of the Association for the Advancement of Artificial Intelligence and of the IEEE. She is Editor in Chief of Robotics and Autonomous Systems, and is on the editorial board of numerous journals, including Artificial Intelligence, and Autonomous Agents and Multi-Agent Systems.

https://agile-mfg.unm.edu/


Automated Program Verification via Data-Driven Inference

He Zhu, Galois, Inc

Monday, March 25, 2019
Centennial Engineering Center 1041
2:00-3:00 PM

Abstract:

Despite decades of investigation, software systems remain vulnerable to code defects. Even fewer assurance guarantees can be made about the reliability of systems in new domains, such as autonomous systems controlled by neural networks. The structure of a neural network poses significant challenges in reasoning about its lifetime operational behavior in a complex real-world environment.

In this talk, inspired by recent successful applications of data-driven techniques to real-world problems (e.g., AlphaGo), I will describe how to address the aforementioned conundrum by exploring data-driven techniques to discover high-level abstract software specifications, and demonstrate how such techniques can enable scalable automated formal verification of programming systems.

Viewing a software verifier as a source of data, providing examples, counterexamples, and input-output examples to a hypothesis program specification, I will first present SynthHorn, a verification framework that allows machine learning algorithms to interact with a verification engine to synthesize meaningful specifications for real-world programming systems.

Second, as a continued exploration of the data-driven verification theme found in SynthHorn, I will show how to leverage data-driven formal verification to realize trustworthy machine-learning systems. Specifically, I will present SynthML, a general program synthesis framework that synthesizes programmatic specifications of complex reinforcement learning models (e.g., neural networks). It samples environment states that can be encountered by a cyber-physical system controlled by a deep neural network, synthesizing a deterministic program that closely approximates the behavior of the neural controller on these states and that additionally satisfies desired safety constraints. Executing a neural policy in tandem with a verified program distilled from it can retain performance, provided by the neural policy, while maintaining safety, provided by the program.

I will conclude the talk by discussing my two-fold vision: 1) applying machine-learning-driven verification techniques to build trustworthy distributed systems, operating systems, and emerging AI-based systems, as well as 2) enabling transparent, robust, and safe machine learning by means of language-based verification techniques.

Bio:

He Zhu is a researcher at Galois, Inc. He received his Ph.D. from Purdue University, advised by Prof. Suresh Jagannathan. He is broadly interested in improving software reliability and security via machine learning, program analysis, and program synthesis, with a special focus on type systems, automated reasoning, and formal verification. He authored a PLDI Distinguished Paper in 2018 and received the Maurice H. Halstead Memorial Award in 2016.


Privacy in the field: Protecting Sensitive Data for AI Applications

Ferdinando Fioretto, Georgia Institute of Technology

Friday, March 8, 2019
Centennial Engineering Center 1026
2:00-3:00 PM

Abstract:

Advances in artificial intelligence and data science have allowed the development of products that leverage individuals' data to provide valuable services. However, the use of this massive quantity of personal information raises fundamental privacy concerns. Differential Privacy (DP) has emerged as the de facto standard for addressing the sensitivity of such information and can be used to release privacy-preserving datasets. Despite its large theoretical value, when these private datasets are used as inputs to complex machine learning or optimization tasks, they may produce results that are fundamentally different from those obtained on the original data.

In this talk, I will focus on the problem of releasing privacy-preserving data for complex data analysis tasks. I will introduce the notion of Constrained-Based Differential Privacy (CBDP), which allows us to cast the data release problem as an optimization problem whose goal is to preserve the salient features of the original dataset. Finally, I will discuss two applications of CBDP to large socio-technical systems, related to the optimization of operations in transportation systems and energy networks.

Bio:

Ferdinando Fioretto is a postdoctoral researcher at the Georgia Institute of Technology. His research focuses on artificial intelligence, data privacy, and multiagent coordination. Ferdinando has published in several top-ranked AI journals and conferences. He has organized workshops and special tracks, given tutorials at AAAI, AAMAS, and CP, and served on the program committees of various AI conferences, including AAAI, IJCAI, AAMAS, and CP. He is the recipient of a best student paper award (CMSB, 2013), a most visionary paper award (AAMAS workshop series, 2017), and a best AI dissertation award (AI*IA, 2017).


Optimization for machine learning

Yifan Sun, University of British Columbia

Wednesday, March 6, 2019
Centennial Engineering Center 1041
2:00-3:00 PM

Abstract:

In recent years, huge advances have been made in machine learning, which has transformed many fields such as computer vision, speech processing, and even games. A key "secret sauce" in the success of these models is the ability of certain architectures to learn good representations of complex data; that is, to preprocess the data in a way that makes the optimization either more robust or more efficiently solvable.

We investigate two instances where encoding structure in the feature variable facilitates optimization. In the first case, we look at word vectors as language-modeling mathematical constructs, and their use in data mining applications. Then, we investigate the curious ability of proximal methods to quickly identify sparsity patterns in an optimization variable, which greatly facilitates feature selection. These two examples illustrate the diversity of machine learning optimization problems, but also highlight the prominence of underlying themes in structured representations.

Bio:

Yifan Sun received her PhD in Electrical Engineering from UCLA in 2015. She worked at Technicolor Research in Palo Alto, California, for two years after graduate school, focusing on machine learning projects. She is now a postdoctoral researcher at the University of British Columbia in Vancouver. Her research interests are convex optimization, semidefinite optimization, first-order and stochastic methods, and machine learning interpretability.


Behavioral Analytics for Interactive Systems

Adam C. Lammert, faculty candidate

Friday, March 1, 2019
Centennial Engineering Center 1026
2:00-3:00 PM

Abstract:

Systems equipped with algorithms for understanding human states can better adapt the nature and extent of machine engagement in human-machine collaborations, potentially improving collaborative outcomes and human performance. Such algorithms may also shed light onto the fundamental principles and mechanisms underlying human behavior, thereby improving their potential impact in health technologies, as well as technologies for the advancement of neural and cognitive science. My research is focused on behavioral analytics for interactive systems, developing novel statistical and computational models that enable machines to find meaningful patterns in human behavior, and gain awareness of human neurocognitive states. Progress toward algorithms that can monitor and understand complex human behaviors is associated with many challenges, most of which center on effectively modeling the wide variety and many sources of behavioral variability exhibited across individuals and contexts. Fresh insights into the development of algorithms that can uncover the mechanisms and sources of variability in human behavior have recently been enabled by an interdisciplinary confluence of empirical, computational, and theoretical advances. My research, situated at this confluence, incorporates each of these elements. This talk will present three related efforts in this domain, aimed at (a) cognitive status estimation from vocal biomarkers, based on advanced analytics and machine learning, (b) classification of sensorimotor deficits in traumatic brain injury using immersive virtual environments and combined statistical/mechanistic modeling, and (c) analysis and modeling of speech production variability with magnetic resonance imaging and neurocomputational models [work supported by NIH, NSF and DoD].


Memory Fabric: Systems Software for Seamless Scaling Across Complex Memory Topologies

Ada Gavrilovska, Georgia Tech

Wednesday, February 20, 2019
Centennial Engineering Center 1041
2:00-3:00 PM

Abstract:

Emerging memory technologies – from fast, but small-capacity High Bandwidth Memories (HBMs), to much slower, larger, and persistent non-volatile memories (NVMs) – are transforming the way systems are being built. In response, future systems software stacks will need to be designed for a much richer and more complex environment. Using data-intensive scientific and analytics applications as motivation, our work addresses the complex memory fabrics of heterogeneous memory components, with different, and potentially configurable, capacity, persistence, sharing, and access properties. In this talk I will present our research on re-architecting the systems software toward harnessing greater benefits from heterogeneous memory systems. I will present an overview of our systems software solutions for platforms with non-volatile main memory, our recent work on automating memory management across heterogeneous memory components, and on in-fabric acceleration for disaggregated memory systems.

Bio:

Ada Gavrilovska is an associate professor in the School of Computer Science at Georgia Tech, where she leads the KERNEL research group. Her research is largely driven by emerging hardware technologies and modern workloads, and focuses on addressing performance, scalability and efficiency problems across the systems software stack. Recent projects include operating system and hypervisor methods for dealing with platform-wide compute and memory heterogeneity, dynamic resource management for large-scale multicores and server systems with high-performance fabrics, and systems support for tapping into the increased client-side resource diversity at the edges of the network. Gavrilovska's research has been supported by the National Science Foundation, the US Department of Energy, and industry grants from Cisco, HP, IBM, Intel, Intercontinental Exchange, LexisNexis, VMware, and others. She has published over ninety peer-reviewed papers, and edited a book, High Performance Communications: A Vertical Approach (CRC Press, 2009).


Studying biological mechanisms with computational comparative analysis

Xiuwei Zhang, faculty candidate

Friday, February 15, 2019
Centennial Engineering Center 1041
2:00-3:00 PM

Abstract:

Studies of biological mechanisms (including biological networks and gene-expression levels) can greatly benefit from comparative analysis between different biological systems. Comparative analysis can be performed at various resolutions: from different species to individuals, tissues, and single cells. Comparative analysis on an evolutionary time scale, that is, cross-species analysis, can reveal the evolutionary sources of biological mechanisms and thus bring new insights into our knowledge of present-day species. At a much higher resolution, variation between single cells allows researchers to study biological processes within a heterogeneous population of cells, for example, the differentiation process from normal cells to cancer cells.

Computational modeling of the variation and similarity between different species or single cells is crucial for these comparative studies. I will present two representative works, at the species level and the single-cell level respectively: a probabilistic graphical model to improve the quality of regulatory networks of multiple species, and an in silico simulator for single-cell RNA-Seq data, which models the three levels of variation in this type of data. This simulator can generate datasets with statistical properties similar to real data, can be used to benchmark various computational methods for single-cell RNA-Seq data, and can assist in wet-lab experimental design. Comparative studies at different levels are related: mechanisms learned at the single-cell level can be integrated into an evolutionary framework for further refinement. Such knowledge can potentially be used to design new vaccines, drugs, and treatments for human disease.


Incorporating Real-World Semantics into Program Analysis of Robot Systems

John-Paul Ore

Wednesday, February 13, 2019
Centennial Engineering Center 1041
2:00-3:00 PM

Abstract:

Robotic software is plagued both by faults that menace all software (null pointers, index out of bounds) and by faults specific to its physical interaction with the real world, such as dimensional inconsistencies. These software hazards occur when developers incorrectly manipulate real-world quantities with physical units, such as confusing force with torque or measuring an angle in degrees instead of radians—something we have shown frequently happens in practice. We also found that existing solutions to these problems are time-consuming and error-prone. To advance the state of the art, we designed a program analysis technique and its corresponding tool, Phys, to automatically detect dimensional inconsistencies in robotic software with minimal developer burden. Phys uses probabilistic reasoning and dataflow analysis to infer what variables mean in the real world. Phys works on systems that use the popular Robot Operating System (ROS). I will present an evaluation showing that Phys has an 85% true positive rate. I will show that dimensional inconsistencies lurk in at least 6% (211/3,484) of open-source robotic software repositories. I will further show the results of an empirical study showing that developers correctly identify the physical units of variables only 51% of the time, motivating our future work on automatically suggesting physical unit types. Finally, I will present a vision of future robotic software research enabled by our techniques that aims to help developers build robots with more reliable software.

Bio:

John-Paul Ore is a Ph.D. candidate with the Computer Science and Engineering department at the University of Nebraska–Lincoln. His research is in software engineering and field robotics. His Ph.D. work focuses on how to automatically detect dimensional inconsistencies in robotic software without time-consuming developer annotations. Specifically, he builds techniques and tools that infer physical unit types (like ‘meters-per-second’) using probabilistic reasoning to combine facts from dataflow with evidence from uncertain sources like variable names. He also performs empirical studies of developers to assess their ability to make decisions about robotic software. Overall, his goal is to help robotic system developers create better and safer systems. John-Paul received an Othmer fellowship, a US Patent for Aerial Water Sampling (#US9606028B2), ‘Best Masters Thesis’ Award (2014), ‘Best Tool Demonstration’ (ISSTA’17), and is on the program committee for Robotic Software Engineering Workshop (RoSE, part of ICSE’19). He has a B.A. in Philosophy from the University of Chicago. More info and videos: https://cse.unl.edu/~jore


The Price of Ignorance in Swarm Robotic Central Place Foraging: An Analytical Perspective

Abhinav Aggarwal

Wednesday, February 6, 2019
Centennial Engineering Center 1041
2:00-3:00 PM

Abstract:

Renewed interest in Central Place Foraging has resulted from the need for robotic in-situ resource utilization solutions as a means to support habitation on other worlds. Practical algorithms that are provably scalable are needed. A key factor limiting the performance of foraging algorithms is the awareness of the bot(s) about the location of food items around the nest. From this perspective, a metric called Price of Ignorance is introduced. It measures how much time an ignorant bot takes relative to an omniscient forager for complete collection of food items in the arena.

In this talk, I will discuss this metric and use one deterministic foraging algorithm (DASA) and one randomized foraging algorithm (BalCPFA) in a comparison study. These two algorithms are based on the top two algorithms from the recent Swarmathon competition. The analysis confirms the recent empirical claim by Lu et al. (ICRA 2019), based on a Gazebo simulation, that a deterministic spiral search can be expected to outperform a random-walk-based search. It also shows that even when the bots deploy perfect site fidelity, BalCPFA is unable to outperform DASA in expectation. Furthermore, by analyzing the effect of the depletion of food items from the arena on foraging efficiency, we conclude that this effect is likely one of the key factors causing BalCPFA to lag behind DASA.

Bio:

Abhinav Aggarwal is a Ph.D. candidate in the Computer Science Department at the University of New Mexico, working with Prof. Jared Saia on robust interactive communication protocols and resource-competitive analysis. He enjoys working on mathematically challenging problems, mainly at the intersection of security and the theoretical aspects of distributed computing. He has interned at Microsoft, Google, VISA Research, and Cornell University on projects spanning different aspects of secure and fault-tolerant distributed systems. He has also served in various graduate student organizations on campus, including CSGSA, GPSA, and ISA.


Complex and High-Dimensional Motion Planning Under Uncertain Conditions

Lydia Tapia, PhD

Wednesday, January 30, 2019
Centennial Engineering Center 1041
2:00-3:00 PM

Abstract:

Mankind is on the cusp of a robotics revolution. Soon, cars will drive themselves and our packages will be autonomously delivered by flying robots. Despite these advances, there is one aspect of navigation that robots are currently unable to handle well: uncertainty. Navigation uncertainty comes from many sources, both internal to the robot, e.g., control or localization uncertainty, and external to the robot, e.g., changes in or uncertainty about the world around the robot. In this talk, we will address multiple forms of uncertainty that impact autonomous navigation. First, we consider navigation in environments that are changing stochastically. Our methods are the first to directly integrate stochastic changes that occur during navigation and provide real-time capable solutions for navigation. Next, we consider transition uncertainty that occurs when an action is taken but the outcome is unexpected. Through adaptation of learned plans, we demonstrate adjustment to certain forms of transition uncertainty. Finally, we consider model uncertainty, a lack of precision or error in the world model used to navigate. While this form of uncertainty often causes unintended collisions, we demonstrate how to quantify and adjust to predicted collisions. Application of our methods spans both robotics and biological domains. In the robotics domain, we will demonstrate our solutions on autonomous vehicle navigation, aerial vehicles, and manipulation. In the biological domain, we investigate the impact of uncertainty in the simulation of antibody assembly, where multiple molecules are moving and interacting on a cell membrane.

Bio:

Lydia Tapia, PhD, is an Associate Professor in the Department of Computer Science at The University of New Mexico. She received her Ph.D. in Computer Science from Texas A&M University and her B.S. in Computer Science from Tulane University. Her research contributions are focused on the development of computationally efficient algorithms for the simulation and analysis of high-dimensional motions for robots and molecules. Specifically, she explores problems in computational structural biology, motion under stochastic uncertainty, and reinforcement learning. Based on this work, she has been awarded two patents, one on a novel unmanned aerial vehicle design and another on a method to design allergen treatments. Lydia is the recipient of the 2016 Denice Denton Emerging Leader ABIE Award from the Anita Borg Institute, a 2016 NSF CAREER Award for her work on simulating molecular assembly, and the 2017 Computing Research Association Committee on the Status of Women in Computing Research (CRA-W) Borg Early Career Award.


The Tale of the Three Little Speakers and the Big Bad Wolf, or Why Do We Even Come to this Seminar?

Trilce Estrada, PhD

Wednesday, January 23, 2019
Centennial Engineering Center 1041
2:00-3:00 PM

Abstract:

Have you ever wondered why colloquium is a required class in the Department of Computer Science? Have you ever asked yourself why (oh why) you have to listen to a talk, or what you even gain from attending a research seminar? By now, research seminars are part of your academic life. They are designed to showcase a broad variety of topics or to bring different and fresh perspectives to the table. But depending on the culture of the department, the colloquium can be vibrant and exciting, or the most boring and pointless experience. In this meta-colloquium we will talk about the intrinsic vs. extrinsic motivators for attending a research seminar, we will discuss strategies for formulating questions and engaging with speakers, and we will interactively analyze a few case studies. Finally, the goal of this talk is to set up foundations to maximize your gain, as a student, from this colloquium series.

Bio:

Trilce Estrada, PhD, is an assistant professor in the Department of Computer Science at the University of New Mexico. Her research interests include self-managed distributed systems, Big Data analysis, crowdsourcing, and machine learning. Recently, she was awarded the National Science Foundation's Early Career Award for the proposal entitled CAREER: Enabling Distributed and In-Situ Analysis for Multidimensional Structured Data.


Computational Design and Fabrication for All

Leah Buechley, PhD

Wednesday, January 16, 2019
Centennial Engineering Center 1041
2:00-3:00 PM

Abstract:

Computer Science for All (CS4All) is a new effort whose goal is to provide all K-12 students in the US with access to a CS education. Since it was announced in 2016, the initiative has gathered steam and school districts across the country are teaching their first computing classes. It is an exciting time for researchers in computer science and education; there is tremendous opportunity to shape the foundation of a new educational movement.

This talk will advocate for an approach to K-12 CS education that prioritizes young people's interests and engagement. I will argue that integrations of computing with design and hands-on making provide especially promising opportunities for deep engagement and learning in CS. I will survey relevant educational research, and present examples of how students from diverse backgrounds can create beautiful, meaningful artifacts by blending CS, design, and fabrication. I will present my own work in this area and discuss the exciting array of research opportunities presented by the intersection of CS4All with the emerging field of computational fabrication.

Bio:

Leah Buechley is a designer, engineer, and educator. Her work explores integrations of computing, electronics, and design. She has done foundational work in paper and fabric-based computing. Her inventions include the LilyPad Arduino, a construction kit for sew-able electronics. She currently runs a design firm, Rural / Digital, that explores playful integrations of technology and design. Previously, she was an associate professor at the MIT Media Lab, where she founded and directed the High-Low Tech group. Her research was the recipient of an NSF CAREER Award and the 2017 Edith Ackerman award for Interaction Design and Children. Her work has been exhibited internationally in venues including the Exploratorium, the Victoria and Albert Museum, and Ars Electronica and has been featured in publications including The New York Times, Boston Globe, and Wired. Leah received a PhD in computer science from the University of Colorado at Boulder and a BA in physics from Skidmore College.