EVENT 1 - Machine Learning for EDA (May 7, 2020)
6PM-8PM Pacific Time (USA and Canada)
9PM-11PM Eastern Time (USA and Canada)
9AM-11AM China Standard Time (May 8)
Zoom Meeting Link
Please email one of the hosts, Yiran Chen or Tsung-Yi Ho, for the meeting password.
For more meeting information: DAWN_Attendance_Guidance.pdf
The Zoom meeting room has a capacity of 300 attendees. Please be on time!
Talk 1 (0-20 min)
Talk 2 (20-35 min)
Talk 3 (35-50 min)
Talk 4 (50-65 min)
Talk 5 (65-80 min)
Panel (80-120 min)
Q&A
Talk 1

Reinforcement Learning for Placement Optimization

In the past decade, computer systems and chips have played a key role in the success of AI. Our vision in Google Brain's ML for Systems team is to use AI to transform the way systems and chips are designed. Many core problems in systems and hardware design are combinatorial optimization or decision-making tasks with state and action spaces orders of magnitude larger than those of common AI benchmarks in robotics and games. In this talk, we will go over some of our research on tackling such optimization problems. First, we discuss our work on deep reinforcement learning models that learn to do computational resource allocation, a combinatorial optimization problem that appears repeatedly in systems. Our method is end-to-end and abstracts away the complexity of the underlying optimization space; the RL agent learns the implicit tradeoffs between computation and communication of the underlying resources and optimizes the allocation using only the true reward function (e.g., the runtime of the generated allocation). We will then discuss our work on optimizing chip placement with reinforcement learning. Our approach can learn from past experience and improve over time, enabling the RL policy to generalize to unseen blocks. Our objective is to minimize PPA (power, performance, and area), and we show that, in under 6 hours, our method can generate placements that are superhuman or comparable on modern accelerator chips, whereas existing baselines require human experts in the loop and can take several weeks.
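As a toy illustration of this framing (a minimal sketch under assumptions of my own, not Google's actual system), the snippet below casts macro placement as a sequential decision problem: at each step an agent places one macro on a free cell of a small grid, the only reward is the negative half-perimeter wirelength (HPWL) of the finished placement, and the policy is trained with plain REINFORCE. The grid size, four-macro netlist, and tabular per-step policy are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 4                                    # 4x4 placement canvas
N_MACROS = 4
NETS = [(0, 1), (1, 2), (2, 3), (0, 3)]     # toy netlist: two-pin nets between macros

def hpwl(coords):
    """Half-perimeter wirelength summed over all two-pin nets."""
    return sum(abs(coords[a][0] - coords[b][0]) + abs(coords[a][1] - coords[b][1])
               for a, b in NETS)

def rollout(logits, greedy=False):
    """Place macros one at a time; return chosen cells and terminal reward."""
    free, chosen = list(range(GRID * GRID)), []
    for m in range(N_MACROS):
        l = logits[m, free]
        probs = np.exp(l - l.max()); probs /= probs.sum()
        idx = int(probs.argmax()) if greedy else rng.choice(len(free), p=probs)
        chosen.append(free.pop(idx))
    coords = [(c // GRID, c % GRID) for c in chosen]
    return chosen, -hpwl(coords)            # reward = negative wirelength

# REINFORCE on a table of per-step logits (a stand-in for a learned policy net)
logits = np.zeros((N_MACROS, GRID * GRID))
baseline, lr = 0.0, 0.1
for _ in range(2000):
    cells, reward = rollout(logits)
    baseline += 0.01 * (reward - baseline)  # moving-average reward baseline
    free = list(range(GRID * GRID))
    for m, cell in enumerate(cells):        # replay decisions to compute grads
        l = logits[m, free]
        probs = np.exp(l - l.max()); probs /= probs.sum()
        grad = -probs
        grad[free.index(cell)] += 1.0       # d log pi(cell) / d logits
        logits[m, free] += lr * (reward - baseline) * grad
        free.remove(cell)

print("greedy placement reward:", rollout(logits, greedy=True)[1])
```

A production-scale system would replace the logit table with a learned policy network over the netlist and fold power, performance, and area terms into the reward.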



Azalia Mirhoseini

Google Brain
Azalia Mirhoseini is a Senior Research Scientist at Google Brain. She is the co-founder/tech-lead of the Machine Learning for Systems team at Brain, which focuses on deep reinforcement learning-based approaches to solving problems in computer systems, as well as on meta-learning. She has a Ph.D. in Electrical and Computer Engineering from Rice University. She has received a number of awards, including the MIT Technology Review 35 Under 35 award, the Best Ph.D. Thesis Award at Rice, and a Gold Medal in the National Math Olympiad in Iran. Her work has been covered in various media outlets, including MIT Technology Review and IEEE Spectrum.



Anna Goldie

Google Brain
Anna Goldie is a Senior Software Engineer at Google Brain and co-founder/tech-lead of the Machine Learning for Systems team, which focuses on deep reinforcement learning approaches to problems in computer systems. She is also a PhD student in the Stanford NLP Group, where she is advised by Professor Chris Manning. At MIT, she earned a Master's in Computer Science and Bachelor's degrees in Computer Science and Linguistics. She speaks fluent Mandarin, Japanese, and French, as well as conversational Spanish, Italian, German, and Korean. She has given high-profile keynotes in Mandarin Chinese, and her work has been covered in various media outlets, including MIT Technology Review and IEEE Spectrum.



Talk 2

AI-Enabled Agile IC Physical Design and Manufacturing

In this talk, I will give an overview of our recent efforts leveraging modern AI advances, with domain-specific customizations, for agile IC physical design and manufacturing closure. I will first show how we leverage deep learning hardware and software to develop a new open-source VLSI placement engine, DREAMPlace [DAC'19], which is over 30x faster than the previous state-of-the-art academic placer with similar quality of results. I will then present the MAGICAL (Machine Generated Analog IC Layout) system, funded by the DARPA IDEA program, which produces fully automated, no-human-in-the-loop analog layouts from netlists to GDSII with very promising results [ICCAD'19]. I will further show how we leverage the recent AI breakthrough in generative adversarial networks (GANs) to develop end-to-end lithography modeling with orders-of-magnitude speedup [DAC'19, ISPD'20]. I will conclude by discussing the closed loop between AI and IC design.
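The observation that makes DREAMPlace fast is conceptual: analytical placement has the same shape as neural-network training, so cell coordinates can be treated as trainable parameters and a smooth wirelength objective minimized with a deep learning toolkit's autograd and GPU kernels. Below is a minimal sketch of that casting, not the DREAMPlace codebase; the toy netlist, the log-sum-exp wirelength smoothing, and the crude repulsion term standing in for a real density penalty are all assumptions.

```python
import torch

torch.manual_seed(0)
N_CELLS, GAMMA = 64, 0.5
nets = [torch.randint(0, N_CELLS, (4,)) for _ in range(100)]  # toy 4-pin nets

pos = torch.nn.Parameter(torch.rand(N_CELLS, 2) * 10.0)      # (x, y) per cell
opt = torch.optim.Adam([pos], lr=0.05)

def lse_wirelength(p):
    """Log-sum-exp smoothing of half-perimeter wirelength, differentiable."""
    wl = torch.zeros(())
    for net in nets:
        pins = p[net]
        for d in range(2):                  # x direction, then y
            wl = wl + GAMMA * (torch.logsumexp(pins[:, d] / GAMMA, 0)
                               + torch.logsumexp(-pins[:, d] / GAMMA, 0))
    return wl

def repulsion(p):
    """Crude pairwise repulsion, a stand-in for a real density penalty."""
    return torch.exp(-torch.cdist(p, p).pow(2)).sum()

for step in range(200):                     # "training" here is placing
    opt.zero_grad()
    loss = lse_wirelength(pos) + 0.1 * repulsion(pos)
    loss.backward()
    opt.step()

print("smoothed wirelength:", lse_wirelength(pos).item())
```

Because the loop is ordinary autograd over tensor operations, every step runs on GPU for free, which is where the reported speedup over CPU-based analytical placers comes from.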



David Z. Pan

University of Texas at Austin
David Z. Pan is currently Engineering Foundation Professor in the Department of Electrical and Computer Engineering, University of Texas at Austin. His research interests include machine learning for EDA, cross-layer IC design for manufacturing, reliability, security, hardware acceleration, CAD for analog/mixed-signal designs, and emerging technologies. He has published over 370 refereed journal/conference papers and holds 8 US patents. He has served on many journal editorial boards and conference committees, including in various leadership roles such as ICCAD 2019 General Chair, ASP-DAC 2017 TPC Chair, and ISPD 2008 General Chair. He has received many awards, including the SRC Technical Excellence Award, 18 Best Paper Awards (ASP-DAC 2020, DAC 2019, GLSVLSI 2018, VLSI Integration 2018, HOST 2017, SPIE 2016, ISPD 2014, ICCAD 2013, ASP-DAC 2012, ISPD 2011, the IBM Research 2010 Pat Goldberg Memorial Best Paper Award in CS/EE/Math, ASP-DAC 2010, DATE 2009, ICICDT 2009, and SRC Techcon in 1998, 2007, 2012, and 2015), the DAC Top 10 Author Award in Fifth Decade, the ASP-DAC Frequently Cited Author Award, a Communications of the ACM Research Highlight, the ACM/SIGDA Outstanding New Faculty Award, the NSF CAREER Award, the IBM Faculty Award (4 times), and many international CAD contest awards. He has graduated 32 PhD students, who have also won many awards, including First Place in the ACM Student Research Competition Grand Finals in 2018, the ACM/SIGDA Student Research Competition Gold Medal (twice), the ACM Outstanding PhD Dissertation in EDA Award (twice), and the EDAA Outstanding Dissertation Award (twice). He is a Fellow of IEEE and SPIE.



Talk 3

Plug-in Use of Machine Learning and Beyond

The wave of machine learning has splashed into almost every corner of the world, and EDA is no exception. Indeed, machine learning brings significant improvements over conventional EDA in various places. On the other hand, applications of machine learning tend to be straightforward plug-in use, largely due to the poor explainability of many ML techniques. Nevertheless, we believe that the merit of ML EDA research hinges on deep interaction with domain knowledge and customization wherever possible. To this end, two examples of such an endeavor are presented: one on functional verification acceleration and the other on design rule violation prediction.
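To make the first example concrete, here is one plausible shape such an endeavor can take (an illustrative sketch of coverage-directed test filtering, not necessarily the speaker's method): a classifier trained on earlier simulation outcomes scores candidate random stimuli, and only the highest-scoring ones are sent to the expensive simulator. The stimulus encoding and the stand-in "simulator" below are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(stimulus):
    """Stand-in for an expensive RTL simulation returning a coverage hit."""
    return stimulus[:4].sum() > 3.0         # a rare corner-case condition

# bootstrap: simulate a small random batch to obtain labeled training data
train_x = rng.normal(size=(500, 16))
train_y = np.array([simulate(s) for s in train_x])

clf = LogisticRegression(max_iter=1000).fit(train_x, train_y)

# filter a large candidate pool: simulate only the top-scoring stimuli
pool = rng.normal(size=(10000, 16))
scores = clf.predict_proba(pool)[:, 1]
top = pool[np.argsort(scores)[-500:]]

hit_random = np.mean([simulate(s) for s in pool[:500]])
hit_ml = np.mean([simulate(s) for s in top])
print(f"random batch hit rate: {hit_random:.3f}, ML-filtered: {hit_ml:.3f}")
```

The point of the example is the division of labor: the model never replaces the simulator, it only decides what is worth simulating, which is exactly the kind of domain-aware customization the talk argues for.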



Jiang Hu

Texas A&M University
Jiang Hu is a professor in the Department of Electrical and Computer Engineering at Texas A&M University. His research interests include ML EDA, computer resource management, hardware security, and hardware-software interplay. He received best paper awards at DAC, ICCAD, and the IEEE International Conference on Vehicular Electronics and Safety. He has served as general chair of ISPD and as an associate editor for TCAD and TODAES. He is a Fellow of IEEE.



Talk 4

Efficient AI, TinyML, and Model Compression

I will talk about the Once-for-All network for efficient neural architecture search. Conventional NAS methods are computationally prohibitive (emitting as much CO2 as five cars over their lifetimes) and thus unscalable. In this work, we propose to train a Once-for-All Network (OFA, ICLR'20) that can be specialized for different hardware platforms without retraining. It consistently outperforms state-of-the-art NAS methods, including those behind MobileNetV3 and EfficientNet, while reducing GPU hours and CO2 emissions by orders of magnitude, and it received first place in the 3rd and 4th Low-Power Computer Vision Challenges (LPCVC). I will also talk about ProxylessNAS, which received first place in the Google Visual Wake Words Challenge and has been integrated into community frameworks including PyTorch and AutoGluon.
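The gist of OFA, sketched below in a deliberately tiny form (with simplifications that are my own assumptions: random width sampling instead of OFA's progressive shrinking, elastic widths only, and a toy task), is to train a single over-parameterized supernet whose layers can shrink by weight slicing, so that after one training run any sub-network can be extracted and deployed without retraining.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
WIDTHS = [16, 32, 64]                       # widths every hidden layer supports

class ElasticLinear(torch.nn.Module):
    """Linear layer whose width can shrink by slicing its weight matrix."""
    def __init__(self, in_f, out_f):
        super().__init__()
        self.full = torch.nn.Linear(in_f, out_f)
    def forward(self, x, width):
        w = self.full.weight[:width, :x.shape[1]]   # slice rows and cols
        b = self.full.bias[:width]
        return F.linear(x, w, b)

class SuperNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.h1 = ElasticLinear(8, max(WIDTHS))
        self.h2 = ElasticLinear(max(WIDTHS), max(WIDTHS))
        self.out = ElasticLinear(max(WIDTHS), 2)
    def forward(self, x, width):
        x = F.relu(self.h1(x, width))
        x = F.relu(self.h2(x, width))
        return self.out(x, 2)

net = SuperNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
x = torch.randn(256, 8)
y = (x.sum(dim=1) > 0).long()               # toy binary labels

for step in range(300):
    width = WIDTHS[torch.randint(len(WIDTHS), (1,)).item()]  # sample a subnet
    opt.zero_grad()
    loss = F.cross_entropy(net(x, width), y)
    loss.backward()
    opt.step()

# every width now works as a standalone model, with no retraining
for w in WIDTHS:
    acc = (net(x, w).argmax(1) == y).float().mean().item()
    print(f"width {w}: accuracy {acc:.2f}")
```

Deployment then reduces to picking the sub-network that fits a target platform's latency budget, which is how one training run serves many devices.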



Song Han

MIT
Song Han is an assistant professor in MIT's Department of Electrical Engineering and Computer Science. His research focuses on efficient deep learning computing. He proposed the "deep compression" technique, which can reduce neural network size by an order of magnitude without losing accuracy, and the hardware implementation "Efficient Inference Engine," which first exploited model compression and weight sparsity in deep learning accelerators. Recently he has been interested in AutoML and NAS methods for efficient TinyML models. He received best paper awards at ICLR'16 and FPGA'17. He is a recipient of the NSF CAREER Award and MIT Technology Review Innovators Under 35. Many of his pruning, compression, and acceleration techniques have been integrated into commercial AI chips. He was the co-founder and chief scientist of DeePhi Tech (acquired by Xilinx). He earned a PhD in electrical engineering from Stanford University.



Talk 5

Pin Access Optimization Using Machine Learning

With the advance of semiconductor process nodes, pin access has become one of the major factors driving design rule violations (DRVs), owing to complex design rules and limited routing resources. To tackle this problem, many recent works apply machine learning techniques to predict whether a local region will contain DRVs, using global routing (GR) congestion and local pin density as the main features during training. Empirically, however, DRV occurrence is not necessarily strongly correlated with these two features in advanced nodes. In this talk, I will present two of our works: DRV prediction using pin patterns as the major feature, and model-guided placement refinement for DRV reduction [DAC'19, ISPD'20].
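Framing the pin pattern itself as the feature naturally suggests image-style models. The sketch below (not the models from the cited papers; the window size, architecture, and purely synthetic labels are assumptions) rasterizes a local window's pins into a binary map and trains a small CNN to flag DRV-prone windows.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
WIN = 16                                    # 16x16-pixel pin-pattern window

class DRVNet(torch.nn.Module):
    """Small CNN classifying a pin-pattern window as DRV-prone or clean."""
    def __init__(self):
        super().__init__()
        self.c1 = torch.nn.Conv2d(1, 8, 3, padding=1)
        self.c2 = torch.nn.Conv2d(8, 16, 3, padding=1)
        self.fc = torch.nn.Linear(16 * (WIN // 4) ** 2, 2)
    def forward(self, x):
        x = F.max_pool2d(F.relu(self.c1(x)), 2)
        x = F.max_pool2d(F.relu(self.c2(x)), 2)
        return self.fc(x.flatten(1))

# synthetic stand-in data, only to make the sketch self-contained; real
# training pairs would be rasterized pin maps with post-route DRV labels
maps = (torch.rand(512, 1, WIN, WIN) < 0.15).float()
pins = maps.sum(dim=(1, 2, 3))
labels = (pins > pins.median()).long()

net = DRVNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(50):
    opt.zero_grad()
    loss = F.cross_entropy(net(maps), labels)
    loss.backward()
    opt.step()

acc = (net(maps).argmax(1) == labels).float().mean().item()
print(f"training accuracy: {acc:.2f}")
```

Once trained, such a model can score every window of a placed design, and the second work's refinement step then perturbs the placement in the highest-risk windows before routing.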



Shao-Yun Fang

National Taiwan University of Science and Technology
Shao-Yun Fang received the B.S. degree in electrical engineering from National Taiwan University (NTU), Taipei, Taiwan, in 2008, and the Ph.D. degree from the Graduate Institute of Electronics Engineering, NTU, in 2013. She is currently an Associate Professor in the Department of Electrical Engineering, National Taiwan University of Science and Technology (NTUST), Taipei, Taiwan. Her current research interests focus on physical design and design for manufacturability for integrated circuits. She won First Place in the 2012 ACM/SIGDA Student Research Competition (Graduate Student Category) and the Silver Award in the 2012 TSMC Outstanding Student Research Award (Category I: Circuit Design Technologies), and received two Best Paper Awards, from the 2016 International Conference on Computer Design (ICCD) and the 2016 International Symposium on VLSI Design, Automation, and Test (VLSI-DAT), as well as two Best Paper nominations, from the 2012 and 2013 International Symposium on Physical Design (ISPD).