Greetings! Welcome to the IAP Newsletter with recent and upcoming research publications, news and events. Research includes applications and infrastructure for AI and machine learning, hardware acceleration, operating systems, networking, security, storage and data management - all in the context of distributed systems.


Join our new series of webinars on topics in AI, presented by prominent experts in machine learning.
Thursday, March 25, 2021, 11am-12pm PT

Prof. Ana Klimovic, ETH Zurich

Ingesting and Processing Data Efficiently for Machine Learning

Pre-registration is required. Please see the abstract, bio and registration page.

Abstract: Machine learning applications have sparked the development of specialized software frameworks and hardware accelerators. Yet, in today’s machine learning ecosystem, one important part of the system stack has received far less attention and specialization for ML: how we store and preprocess training data. This talk will describe the key challenges for implementing high-performance ML input data processing pipelines. We analyze millions of ML jobs running in Google's fleet and find that input pipeline performance significantly impacts end-to-end training performance and resource consumption. Our study shows that ingesting and preprocessing data on-the-fly during training consumes 30% of end-to-end training time, on average. Our characterization of input data pipelines motivates several systems research directions, such as disaggregating input data processing from model training and caching commonly reoccurring input data computation subgraphs. We present the multi-tenant input data processing service that we are building at ETH Zurich, in collaboration with Google, to improve ML training performance and resource usage.
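The caching idea in the abstract (reusing commonly reoccurring input-data computation subgraphs across epochs) can be illustrated with a minimal, framework-free sketch. The class and function names below are invented for illustration; real systems such as tf.data operate on dataset graphs rather than Python iterators.

```python
import time

def decode(x):
    # Stand-in for an expensive, deterministic preprocessing step
    # (e.g., decode + resize). The sleep models per-element cost.
    time.sleep(0.001)
    return x * 2

class CachedPipeline:
    """Caches the deterministic part of an input pipeline so that
    repeated epochs skip recomputation -- the essence of reusing
    recurring input-data computation subgraphs."""
    def __init__(self, source, transform):
        self.source = list(source)
        self.transform = transform
        self._cache = {}

    def __iter__(self):
        for i, x in enumerate(self.source):
            if i not in self._cache:
                self._cache[i] = self.transform(x)
            yield self._cache[i]

pipeline = CachedPipeline(range(100), decode)
t0 = time.perf_counter(); epoch1 = list(pipeline); cold = time.perf_counter() - t0
t0 = time.perf_counter(); epoch2 = list(pipeline); warm = time.perf_counter() - t0
```

After the first (cold) epoch pays the preprocessing cost, the second (warm) epoch reads from the cache and runs orders of magnitude faster, which is why on-the-fly preprocessing can dominate training time when no such reuse exists.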

Bio: Ana Klimovic is an Assistant Professor in the Systems Group of the Computer Science Department at ETH Zurich. Her research interests span operating systems, computer architecture, and their intersection with machine learning. Ana's work focuses on computer system design for large-scale applications such as cloud computing services, data analytics, and machine learning. Before joining ETH in August 2020, Ana was a Research Scientist at Google Brain and completed her Ph.D. in Electrical Engineering at Stanford University in 2019. Her dissertation research was on the design and implementation of fast, elastic storage for cloud computing.

Below, Ana receives the Best Poster Award at the 2018 Stanford-UCSC Workshop.
Thursday, May 13, 2021, 11am-12pm PT

Prof. Manya Ghobadi, MIT

Optimizing AI Systems with Optical Technologies

Pre-registration is required. Please see the abstract, bio and registration page.

Abstract: Our society is rapidly becoming reliant on deep neural networks (DNNs). New datasets and models are invented frequently, increasing the memory and computational requirements for training. The explosive growth has created an urgent demand for efficient distributed DNN training systems. In this talk, I will discuss the challenges and opportunities for building next-generation DNN training clusters. In particular, I will propose optical network interconnects as a key enabler for building high-bandwidth ML training clusters with strong scaling properties. Our design enables accelerating the training time of popular DNN models using reconfigurable topologies by partitioning the training job across GPUs with hybrid data and model parallelism while ensuring the communication pattern can be supported efficiently on an optical interconnect. Our results show that compared to similar-cost interconnects, we can improve the training iteration time by up to 5x.
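The abstract's point about ensuring the communication pattern of a partitioned training job can be supported on an optical interconnect can be sketched with a toy example. The layer names, GPU names, and placement below are invented for illustration, not taken from the talk.

```python
def traffic_pattern(placement):
    """Given an ordered layer -> GPU placement for a model-parallel
    training job, return the set of GPU pairs that must exchange
    activations between consecutive layers. Each pair corresponds to
    a link (or optical circuit) the topology must provide."""
    placed = list(placement.items())
    pairs = set()
    for (_, g1), (_, g2) in zip(placed, placed[1:]):
        if g1 != g2:
            pairs.add((g1, g2))
    return pairs

# Placing the first two layers on gpu0 and the last two on gpu1 yields
# a single cross-GPU edge, so one circuit covers this partition.
placement = {"embed": "gpu0", "conv": "gpu0", "attn": "gpu1", "fc": "gpu1"}
print(traffic_pattern(placement))  # {('gpu0', 'gpu1')}
```

A reconfigurable optical topology can then be set up to match exactly this pattern, rather than provisioning uniform bandwidth between all GPU pairs.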

Bio: Manya Ghobadi is an assistant professor in the EECS department at MIT. Before MIT, she was a researcher at Microsoft Research and a software engineer at Google Platforms. Manya is a computer systems researcher with a networking focus and has worked on a broad set of topics, including data center networking, optical networks, transport protocols, and network measurement. Her work has won the best dataset and best paper awards at the ACM Internet Measurement Conference (IMC), as well as a Google research excellent-paper award.


Friday June 11, 2021 - ONLINE

The AI Workshop is being co-organized by Prof. Arvind Krishnamurthy, UW (photo above), Prof. Manya Ghobadi and Prof. Mohammad Alizadeh, MIT CSAIL (photos below).
Apache TVM and other leading machine learning solutions were, and continue to be, developed at UW with widespread support from the community.

Hear from faculty and students from interdisciplinary research groups at UW and MIT, such as SAMPL (System, Architecture, Machine learning, and Programming languages), SYSLab (Computer Systems Lab), MODE (Machine learning, Optimization, Distributed systems, and Estadística), PLSE (Programming Languages and Software Engineering), and SAMPA.
AI Workshop co-organizers, Prof. Manya Ghobadi and Prof. Mohammad Alizadeh, MIT CSAIL

HIPEAC 2021 - European Network on High Performance and Embedded Architecture and Compilation - January 17-19, 2021

Exploiting Parallelism Opportunities with Deep Learning Frameworks, Yu Emma Wang, Carole-Jean Wu, Xiaodong Wang, Kim Hazelwood, and David Brooks

SMAUG: End-to-End Full-Stack Simulation Infrastructure for Deep Learning Workloads, Sam (Likun) Xi, Yuan Yao, Kshitij Bhardwaj, Paul Whatmough, Gu-Yeon Wei, and David Brooks

FAST '21 - The 19th USENIX Conference on File and Storage Technologies - February 23–25, 2021

Mania Abdi, Northeastern University; Amin Mosayyebzadeh, Boston University; Mohammad Hossein Hajkazemi, Northeastern University; Emine Ugur Kaynar, Boston University; Ata Turk, State Street; Larry Rudolph, Two Sigma; Orran Krieger, Boston University; Peter Desnoyers, Northeastern University

Luis Ceze, University of Washington, and Karin Strauss, Microsoft Research

Ian Neal, Gefei Zuo, Eric Shiple, and Tanvir Ahmed Khan, University of Michigan; Youngjin Kwon, KAIST; Simon Peter, University of Texas at Austin; Baris Kasikci, University of Michigan

HPCA-27 - The 27th IEEE International Symposium on High-Performance Computer Architecture, Seoul, South Korea - March 1-3, 2021

Cheetah: Optimizing and Accelerating Homomorphic Encryption for Private Inference
Brandon Reagen (NYU); Woo-Seok Choi (Seoul National University); Yeongil Ko (Harvard); Vincent T. Lee, Hsien-Hsin S. Lee (Facebook); Gu-Yeon Wei, David Brooks (Harvard)

SynCron: Efficient Synchronization Support for Near-Data-Processing Architectures
Christina Giannoula (National Technical University of Athens / ETH Zürich); Nandita Vijaykumar (University of Toronto / ETH Zürich); Nikela Papadopoulou, Vasileios Karakostas (National Technical University of Athens); Ivan Fernandez (University of Malaga / ETH Zürich); Juan Gómez-Luna, Lois Orosa (ETH Zürich); Nectarios Koziris, Georgios Goumas (National Technical University of Athens); Onur Mutlu (ETH Zürich)

CAPE: A Content-Addressable Processing Engine
Helena Caminal, Kailin Yang (Cornell University); Srivatsa Srinivasa, Akshay Krishna Ramanathan (Pennsylvania State University); Khalid Al-Hawaj, Tianshu Wu (Cornell University); Vijaykrishnan Narayanan (The Pennsylvania State University); Christopher Batten, José F. Martínez (Cornell University)

Chasing Carbon: The Elusive Environmental Footprint of Computing
Udit Gupta (Harvard University / Facebook); Young Geun Kim (Arizona State University); Sylvia Lee, Jordan Tse, Hsien-Hsin S. Lee (Facebook); Gu-Yeon Wei (Harvard University); David Brooks (Harvard University / Facebook); Carole-Jean Wu (Facebook / Arizona State University)

tinyML Summit 2021 & tinyML Research Symposium 2021 - March 22-26, 2021

Keynote: Putting AI on a Diet: TinyML and Efficient Deep Learning
Prof. Song Han, MIT

Keynote: milliJoules for 1000 Inferences: Machine Learning Systems “on the Cheap”
Prof. Diana Marculescu, University of Texas at Austin

NSDI '21 - 18th USENIX Symposium on Networked Systems Design and Implementation - April 12–14, 2021

Eric Campbell, Cornell University; William Hallahan, Yale University; Priya Srikumar, Cornell University; Carmelo Cascone, Open Networking Foundation; Jed Liu, Intel; Vignesh Ramamurthy, Infosys; Hossein Hojjat, University of Tehran & Tehran Institute for Advanced Studies; Ruzica Piskac and Robert Soulé, Yale University; Nate Foster, Cornell University

Guyue Liu and Hugo Sadok, Carnegie Mellon University; Anne Kohlbrenner, Princeton University; Bryan Parno, Vyas Sekar, and Justine Sherry, Carnegie Mellon University

Silvery Fu, UC Berkeley; Akhil Jakatdar, Saurabh Gupta, and Radhika Mittal, UIUC; Sylvia Ratnasamy, UC Berkeley

Amedeo Sapio, Marco Canini, and Chen-Yu Ho, KAUST; Jacob Nelson, Microsoft Research; Panos Kalnis, KAUST; Changhoon Kim, Barefoot Networks; Arvind Krishnamurthy, University of Washington; Masoud Moshref, Barefoot Networks; Dan Ports, Microsoft Research; Peter Richtarik, KAUST

MLSys 2021 - Fourth Conference on Machine Learning and Systems - April 4-7, 2021
MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers 
Igor Fedorov, et al.

CPR: Understanding and Improving Failure Tolerant Training for Deep Learning Recommendation with Partial Recovery
K. Maeng, S. Bharuka, I. Gao, M. Jeffrey, V. Saraph, B.-Y. Su, C. Trippel, J. Yang, M. Rabbat, B. Lucia, and Carole-Jean Wu

TT-Rec: Tensor Train Compression for Deep Learning Recommendation Model Embeddings
C. Yin, B. Acun, X. Liu, and Carole-Jean Wu

ASPLOS 2021 - The 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems - April 19-23, 2021

Yanqi Zhang, Weizhe Hua, Zhuangzhuang Zhou, Edward Suh, Christina Delimitrou (Cornell University)

Yu Gan, Mingyu Liang (Cornell University); Sundar Dev, David Lo (Google); Christina Delimitrou (Cornell University)

Guowei Zhang, Nithya Attaluri (MIT); Joel Emer (MIT & NVIDIA); Daniel Sanchez (MIT)

Irina Calciu (VMware Research); M. Talha Imran (The Pennsylvania State University); Ivan Puddu (ETH Zurich); Sanidhya Kashyap (Georgia Institute of Technology); Hasan Al Maruf (University of Michigan); Onur Mutlu (ETH Zurich); Aasheesh Kolli (The Pennsylvania State University and Google)

Joshua Landgraf, Tiffany Yang, Will Lin (UT Austin); Christopher J. Rossbach (UT Austin and VMware Research and Katana Graph); Eric Schkufza (Amazon)

Mark Wilkening, Udit Gupta, Samuel Hsia (Harvard University); Caroline Trippel (Facebook); Carole-Jean Wu (Facebook/ASU); David Brooks, Gu-Yeon Wei (Harvard University)

ISCA 48 - The International Symposium on Computer Architecture 
Valencia, Spain - May 22-26, 2021
Program unavailable at time of print.

1st IEEE Workshop on Theory and Practice of Programmable Forwarding - June 28th, 2021
Call for papers: submissions are due March 5.

Hacking Distributed is a blog hosted by Prof. Emin Gün Sirer for "everyday techies building real systems people use, and their still-with-it-and-technical CTOs".


2020 ACM Fellows. The ACM Fellows program recognizes the top 1% of ACM Members for their outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community. Fellows are nominated by their peers, with nominations reviewed by a distinguished selection committee.


Enzian is a research computer designed by ETH Zurich for computer systems software research rather than any particular commercial workload. An Enzian node pairs a large server-class Marvell ThunderX CPU, closely coupled via cache coherence to a large Xilinx FPGA, with ample main memory and network bandwidth on both sides. For more info, see the demo at the recent IAP Workshop and the paper at the 2020 Conference on Innovative Data Systems Research (CIDR), Amsterdam: Tackling Hardware/Software co-design from a database perspective.


February 17, 2021

January 25, 2021

January 13, 2021

December 7, 2020

November 16, 2020

October 28, 2020

September 13, 2020

August 25, 2020

IAP Workshop Testimonials
Professor Christos Kozyrakis, Stanford - “As a starting point, I think of these IAP workshops as ‘Hot Chips meets ISCA’, i.e., an intersection of industry’s newest solutions in hardware (Hot Chips) with academic research in computer architecture (ISCA); but more so, these workshops additionally cover new subsystems and applications, and in a smaller venue where it is easy to discuss ideas and cross-cutting approaches with colleagues.” 
Professor Heiner Litz, UC Santa Cruz - "The IAP workshops represent extremely valuable events for all attendees including industry members, students and faculty. On my side, multiple project collaborations and student internships have evolved from these meetings leading to a win-win-win situation for all participants.” 

Ana Klimovic, PhD student, Stanford - “I have attended three IAP workshops and I am consistently impressed by the quality of the talks and the breadth of the topics covered. These workshops bring top-tier industry and academia together to discuss cutting-edge research challenges. It is a great opportunity to exchange ideas and get inspiration for new research opportunities." 
Nathan Pemberton, PhD student, UC Berkeley - "IAP workshops provide a valuable chance to explore emerging research topics with a focused group of participants, and without all the time/effort of a full-scale conference. Instead of rushing from talk to talk, you can slow down and dive deep into a few topics with experts in the field." 

Would you like to support a unique tech forum that brings academia and industry together under your company's banner?
Please feel free to contact us regarding sponsorship opportunities, or for more info about any of the items above.
Jim Ballingall
Executive Director
Industry-Academia Partnership (IAP)
cell: 408-212-1035
Copyright © 2013-2021 Industry-Academia Partnership
Prof. Christos Kozyrakis (left) and Prof. Heiner Litz welcome attendees at the 2018 Stanford-UCSC Workshop.