Christopher De Sa, Cornell University
Scalable Machine Learning Algorithms
Recent advances in machine learning have led to exciting new AI capabilities across many domains. Much of this progress has been driven by the ability of machine learning systems to process and learn from very large data sets using highly complex models. Continuing to scale up in this way presents a computational challenge, as power, memory, and time all limit performance. In these lectures, we will explore the standard methods used to enable scaling in today's learning algorithms and the principles that underlie them. We will also cover some more recent research, to gain a perspective on what the future of this field will look like.
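The "standard methods used to enable scaling" that the abstract alludes to center on stochastic optimization. As an illustrative sketch (not material from the lectures), here is mini-batch stochastic gradient descent on a small synthetic linear-regression problem; all names, sizes, and constants are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a stand-in for a large data set we cannot
# afford to process all at once.
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=1000)

def minibatch_sgd(X, y, lr=0.1, batch_size=32, epochs=20):
    """Fit weights by gradient steps on small random mini-batches,
    so each update touches only batch_size examples, not all of X."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        idx = rng.permutation(n)               # reshuffle each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            # Gradient of mean squared error on this mini-batch only.
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

w = minibatch_sgd(X, y)
```

Because each step uses a noisy but cheap gradient estimate, the cost per update is independent of the total data-set size, which is exactly what makes this family of methods scale.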
Peter Druschel, MPI-SWS
Mobile Privacy and the Power of Encounters
In these lectures, we review privacy threats due to data captured by location-based services, personal devices, autonomous devices, and devices in the Internet-of-Things (IoT). We then study secure encounters, a technology that reconciles user privacy with communication and services involving such devices. Finally, we review powerful new applications enabled by secure encounters.
A secure encounter is an agreement by two anonymous devices to have met at a given time and place. An associated shared secret enables the devices to subsequently confirm their encounter and communicate securely. We will see how this simple idea enables fascinating new forms of privacy-preserving, contextual, secure communication among personal, autonomous, and IoT devices, and lets users selectively provide evidence of their personhood and physical whereabouts without revealing their identity.
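The core mechanism described above, a shared secret established at the meeting that later proves the encounter happened, can be sketched with standard primitives. This is a toy model, not the actual protocol from the lectures: in a real system the secret would come from an authenticated key exchange over a short-range radio link, whereas here it is simply sampled.

```python
import hmac
import hashlib
import secrets

def new_encounter(place, timestamp):
    """At the meeting: both anonymous devices end up holding the same
    fresh secret, bound to a time and place. Returns one copy each."""
    secret = secrets.token_bytes(32)
    record = {"place": place, "time": timestamp, "secret": secret}
    return dict(record), dict(record)

def prove(record, challenge):
    """Later: answer a challenge using the encounter secret, without
    revealing the secret (or any identity) itself."""
    return hmac.new(record["secret"], challenge, hashlib.sha256).digest()

def verify(record, challenge, proof):
    """The peer recomputes the MAC with its own copy of the secret."""
    expected = hmac.new(record["secret"], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

# Devices A and B meet; B can later confirm it really was A's counterpart.
a_rec, b_rec = new_encounter("lobby", 1700000000)
challenge = secrets.token_bytes(16)
ok = verify(b_rec, challenge, prove(a_rec, challenge))
```

Note how neither device ever learns the other's identity: the proof shows only possession of the encounter secret, which is the property that makes selective evidence of whereabouts possible.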
Anja Feldmann, MPI-INF
The Internet: A Complex Ecosystem
While the Internet is a hugely successful, human-made artifact that has changed society fundamentally, it has become a complex system with many challenges. In this talk, I will outline some of them and also point out a number of surprises regarding the mental models of the Internet we have developed over the years. Next, I will focus on the evolution of the Internet and discuss, for example, methods for detecting Internet infrastructure outages, how to successfully enable cooperation between Internet players, and how to revisit congestion control. I will end with an outlook on how we may evolve the Internet to tackle the future challenges of ubiquitous data availability from sensors and devices everywhere.
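One of the topics to be revisited, congestion control, has a classic rule at its heart: additive increase, multiplicative decrease (AIMD), as used by TCP. A minimal sketch, with an invented fixed "capacity" standing in for the loss signal a real network would give:

```python
def aimd(capacity, rounds=50, cwnd=1.0):
    """Simulate AIMD: grow the congestion window by 1 each round,
    halve it whenever it exceeds the (here, fixed) link capacity."""
    history = []
    for _ in range(rounds):
        if cwnd > capacity:   # stand-in for a packet-loss signal
            cwnd /= 2         # multiplicative decrease
        else:
            cwnd += 1         # additive increase
        history.append(cwnd)
    return history

h = aimd(capacity=10)
```

The resulting sawtooth, probing upward until loss and then backing off sharply, is the behavior that lets many independent senders share a link without coordination.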
Tom Goldstein, University of Maryland
What Makes Deep Learning Work? An Intuitive Perspective Combining Theory and Experiments
My lectures will study neural networks from a theoretical and empirical perspective. I will explore loss landscape geometry and discuss how it impacts our ability to train networks. I will also discuss the issue of generalization of neural networks. After exploring what makes generalization a mystery, I will discuss some possible explanations for generalization that are supported by experiments. Finally, I will discuss the vulnerability of neural networks to various attacks, including adversarial examples and dataset poisoning. Using theoretical and experimental tools, we can understand the causes of these vulnerabilities and possible ways to avoid them.
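Adversarial examples, one of the vulnerabilities mentioned above, are perturbations small enough to look innocuous yet large enough to flip a model's prediction. A toy sketch in the spirit of gradient-sign attacks, using a plain linear classifier (invented for this example) in place of a neural network; for a linear model the gradient of the score with respect to the input is just the weight vector:

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy linear classifier standing in for a trained network.
w = rng.normal(size=20)
b = 0.0

def predict(x):
    return 1 if x @ w + b > 0 else 0

def adversarial(x):
    """Perturb x along the sign of the input gradient, scaled just
    enough to push the score across the decision boundary."""
    score = x @ w + b
    eps = 1.1 * abs(score) / np.abs(w).sum()
    direction = -np.sign(score) * np.sign(w)   # toward the wrong class
    return x + eps * direction

x = rng.normal(size=20)
label = predict(x)
x_adv = adversarial(x)
```

For deep networks the same idea applies with the gradient computed by backpropagation, and the surprising part, which theory and experiments in the lectures address, is how small the required perturbation tends to be.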
Immanuel Trummer, Cornell University
Learning to Process Data Faster via Reinforcement Learning
Database systems have many tuning parameters that have a huge impact on how long data processing takes. Processing time may drop from days to seconds with the right parameter settings. But figuring out the best configuration is hard. Existing systems rely on hand-crafted rules and performance models for tuning. This approach, however, is unreliable and all too often leads to disastrous performance.
In this lecture, we will see how machine learning can help us build more performance-robust database systems. Specifically, systems can learn optimal processing plans without user input, simply by trying different plans on small data samples and observing their performance. The lecture starts with a high-level overview of how database systems process data. Next, we will examine the most important tuning parameters and the challenges involved in configuring them. Finally, we will see how reinforcement learning can help us overcome those challenges.
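The idea of "trying different plans on small data samples and observing their performance" is, in its simplest form, a multi-armed bandit problem. A hypothetical sketch with epsilon-greedy selection; the plan names, costs, and noise model are all invented, and in a real system each "arm" would be an alternative query plan timed on a data sample:

```python
import random

random.seed(0)

# Hypothetical candidate plans with unknown true cost per sample run.
true_cost = {"plan_a": 5.0, "plan_b": 1.0, "plan_c": 3.0}
plans = list(true_cost)

def run_on_sample(plan):
    """Stand-in for executing a plan on a small sample: the observed
    cost is the true cost plus measurement noise."""
    return true_cost[plan] + random.gauss(0, 0.5)

totals = {p: 0.0 for p in plans}
counts = {p: 0 for p in plans}

# Try every plan once so each average is defined.
for p in plans:
    totals[p] += run_on_sample(p)
    counts[p] += 1

# Epsilon-greedy: mostly run the cheapest plan seen so far,
# occasionally explore another one.
for _ in range(500):
    if random.random() < 0.1:
        plan = random.choice(plans)
    else:
        plan = min(plans, key=lambda p: totals[p] / counts[p])
    totals[plan] += run_on_sample(plan)
    counts[plan] += 1

best = min(plans, key=lambda p: totals[p] / counts[p])
```

Real learned query optimizers face harder versions of this problem (enormous plan spaces, correlated costs), but the exploration-versus-exploitation trade-off sketched here is the core of the reinforcement-learning framing.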
David Van Horn, University of Maryland
Programming Languages for Trustworthy Software
Since the late 1960s, computer scientists have struggled with what has come to be known as the software crisis: society's ever-increasing reliance on computing systems, coupled with the growing gap between the ubiquity and power of these systems and the difficulty of writing useful, efficient programs economically. I believe the solution to this crisis rests in the effective use of programming language (PL) technology, which has the potential to turn the power of computing toward resolving the very crisis it creates. Today, programming languages and their associated tools can guide good design, categorically eliminate large classes of errors and vulnerabilities, and provide substantial aid to programmers throughout the software development life-cycle.
My lectures will cover the foundational tools PL researchers use to design and reason about languages and survey some recent work on fortified programming languages that offer strong correctness guarantees to programmers. By the end of the lectures, students should understand the basic tools and techniques of the PL researcher and be able to read and understand papers from the field.
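Among the foundational tools mentioned above, type systems are the canonical example of machinery that "categorically eliminates large classes of errors." A minimal sketch of a type checker for the simply typed lambda calculus with integers, encoded as Python tuples (the term encoding is invented for this example, not from the lectures):

```python
# Types: "int" or ("fun", arg_type, ret_type)
# Terms: ("lit", n) | ("var", name)
#      | ("lam", name, arg_type, body) | ("app", fn, arg)

def typecheck(term, env=None):
    """Return the type of term, or raise TypeError if it is ill-typed."""
    env = env or {}
    tag = term[0]
    if tag == "lit":
        return "int"
    if tag == "var":
        return env[term[1]]          # free variables raise KeyError
    if tag == "lam":
        _, name, arg_ty, body = term
        ret_ty = typecheck(body, {**env, name: arg_ty})
        return ("fun", arg_ty, ret_ty)
    if tag == "app":
        fn_ty = typecheck(term[1], env)
        arg_ty = typecheck(term[2], env)
        if fn_ty[0] != "fun" or fn_ty[1] != arg_ty:
            raise TypeError("ill-typed application")
        return fn_ty[2]
    raise ValueError(f"unknown term {tag}")

# (\x:int. x) 3 is well-typed with type int.
identity = ("lam", "x", "int", ("var", "x"))
result = typecheck(("app", identity, ("lit", 3)))
```

A term like `("app", ("lit", 1), ("lit", 2))` is rejected before it ever runs, which is the sense in which a type system rules out a whole class of errors rather than catching them one by one at runtime.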