Tutorials

ASP-DAC 2024 offers attendees intensive three-hour introductions to specific topics. Tutorial registrants may select two of the eight topics. This year, all tutorials will be held in person.

  • Date: Monday, January 22, 2024 (9:30 — 17:00)
9:30 — 12:30 (KST)
  • Tutorial-1 (Room 204): Tutorial to NeuroSim: A Versatile Benchmark Framework for AI Hardware
  • Tutorial-2 (Room 205): Toward Robust Neural Network Computation on Emerging Crossbar-based Hardware and Digital Systems
  • Tutorial-3 (Room 206): Morpher: A Compiler and Simulator Framework for CGRA
  • Tutorial-4 (Room 207): Machine Learning for Computational Lithography

14:00 — 17:00 (KST)
  • Tutorial-5 (Room 204): Low Power Design: Current Practice and Opportunities
  • Tutorial-6 (Room 205): Leading the industry, Samsung CXL Technology
  • Tutorial-7 (Room 206): Sparse Acceleration for Artificial Intelligence: Progress and Trends
  • Tutorial-8 (Room 207): CircuitOps and OpenROAD: Unleashing ML EDA for Research and Education

Tutorial-1: Monday, January 22, 9:30—12:30 (KST) @ Room 204

Tutorial to NeuroSim: A Versatile Benchmark Framework for AI Hardware

Speakers:
Shimeng Yu (Georgia Institute of Technology, USA)

Abstract:

NeuroSim is a widely used open-source simulator for benchmarking AI hardware. It is primarily developed for compute-in-memory (CIM) accelerators for deep neural network (DNN) inference and training, with hierarchical design options spanning the device, circuit, and algorithm levels. It is timely to hold a tutorial introducing the research community to the new features and recent updates of NeuroSim, including technology support down to the 1 nm node, new modes of operation such as digital CIM (DCIM) and compute-in-3D-NAND, and heterogeneous 3D integration of chiplets to support ultra-large AI models. The tutorial will also include a real-time demo that gives attendees hands-on experience using and modifying the NeuroSim framework to suit their own research purposes.
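
To make the device-to-algorithm modeling concrete, the short sketch below (illustrative only, not the NeuroSim API; the layer size, bit width, and variation level are invented for this example) quantizes the weights of a toy layer onto a limited set of conductance levels and injects device-to-device variation, then reports the resulting output error of a matrix-vector multiply. This is the kind of non-ideality analysis a CIM benchmark framework automates.

```python
# Illustrative sketch (not the NeuroSim API): quantize DNN weights to a limited
# number of conductance levels and inject device-to-device variation, the kind
# of device-level non-ideality a CIM benchmark framework models.
import numpy as np

rng = np.random.default_rng(0)

def quantize_to_levels(w, n_bits=4):
    """Uniform symmetric quantization to 2**n_bits - 1 levels."""
    levels = 2 ** n_bits - 1
    w_max = np.abs(w).max()
    step = 2 * w_max / (levels - 1)
    return np.clip(np.round(w / step), -(levels // 2), levels // 2) * step

def add_device_variation(w, sigma=0.02):
    """Multiplicative Gaussian variation applied per weight cell."""
    return w * (1.0 + sigma * rng.standard_normal(w.shape))

# A toy fully connected layer: y = W x
w = rng.standard_normal((64, 128)) * 0.1
x = rng.standard_normal(128)

y_ideal = w @ x
w_cim = add_device_variation(quantize_to_levels(w, n_bits=4))
y_cim = w_cim @ x

rel_err = np.linalg.norm(y_cim - y_ideal) / np.linalg.norm(y_ideal)
print(f"relative output error from quantization + variation: {rel_err:.3%}")
```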

Tutorial target and outline:

  • i. Introduction to AI hardware and CIM paradigm
  • ii. NeuroSim hardware-level modeling methodologies: area, latency, and energy estimation
  • iii. NeuroSim software-level modeling methodologies: supporting quantization and other non-ideal device effects
  • iv. Examples of running inference engines and training accelerators for convolutional neural networks with various technology choices
  • v. NeuroSim extension 1: technology updates to the 1 nm node and DCIM support
  • vi. NeuroSim extension 2: TPU-like architecture benchmark with novel global buffer memory designs
  • vii. NeuroSim extension 3: chiplet based integration for ultra-large-scale transformer model
  • viii. NeuroSim extension 4: 3D NAND based CIM for hyperdimensional computing
  • ix. Demo of running inference engine benchmarking (DNN+NeuroSim V1.4)

Biographies:

Prof. Shimeng Yu is a full professor of electrical and computer engineering at the Georgia Institute of Technology. He received the Ph.D. degree in electrical engineering from Stanford University. Among Prof. Yu's honors, he was a recipient of the NSF CAREER Award in 2016, the IEEE Electron Devices Society (EDS) Early Career Award in 2017, the ACM Special Interest Group on Design Automation (SIGDA) Outstanding New Faculty Award in 2018, the Semiconductor Research Corporation (SRC) Young Faculty Award in 2019, and the ACM/IEEE Design Automation Conference (DAC) Under-40 Innovators Award in 2020, and he served as an IEEE Circuits and Systems Society (CASS) Distinguished Lecturer for 2021-2022 and an IEEE Electron Devices Society (EDS) Distinguished Lecturer for 2022-2023. Prof. Yu is active in service to the EDA community as a TPC member of DAC, ICCAD, DATE, etc. He has given multiple short-course presentations at IEDM, ISCAS, and ESSCIRC, as well as workshop presentations at DAC and the Design Automation Summer School (DASS).



Tutorial-2: Monday, January 22, 9:30—12:30 (KST) @ Room 205

Toward Robust Neural Network Computation on Emerging Crossbar-based Hardware and Digital Systems

Speakers:
Yiyu Shi (University of Notre Dame, USA)
Masanori Hashimoto (Kyoto University, Japan)

Abstract:

As a promising alternative to traditional neural network computation platforms, Compute-in-Memory (CiM) neural accelerators based on emerging devices have been intensively studied. These accelerators present an opportunity to overcome memory access bottlenecks. However, they face significant design challenges. The non-ideal conditions resulting from the manufacturing process of these devices induce uncertainties. Consequently, the actual weight values in deployed accelerators may deviate from those trained offline in data centers, leading to performance degradation. The first part of this tutorial will cover:
  1. Efficient worst-case analysis for neural network inference using emerging device-based CiM,
  2. Enhancement of worst-case performance through noise-injection training,
  3. Co-design of software and neural architecture specifically for emerging device-based CiMs.
Deep Neural Networks (DNNs) are currently run on GPUs in both cloud servers and edge-computing devices, with recent applications extending to safety-critical areas such as autonomous driving. Accordingly, the reliability of DNNs and their hardware platforms is garnering increased attention. The second part of this tutorial focuses on soft errors, predominantly caused by cosmic rays, which are a major error source over a device's operational lifetime. While DNNs are inherently robust against bit flips, such errors can still lead to severe miscalculations through weight and activation perturbations, bit flips in AI accelerators, and errors in their interfaces with microcontrollers. This part will discuss:
  1. Identification of vulnerabilities in neural networks,
  2. Reliability analysis and enhancement of AI accelerators for edge computing,
  3. Reliability assessment of GPUs against soft errors.
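
As a concrete illustration of the noise-injection training mentioned in the first part, the sketch below is a minimal example assuming a standard PyTorch setup; it is not the speakers' code, and the layer sizes, noise level, and toy data are invented for illustration. Gaussian perturbations are added to the weights during the forward pass so that training accounts for the weight deviations seen on CiM hardware.

```python
# Minimal sketch of noise-injection training (illustrative, not the speakers' code):
# perturb the weights with Gaussian noise during the forward pass so the trained
# network tolerates CiM weight deviations at deployment time.
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    def __init__(self, in_f, out_f, weight_noise_std=0.05):
        super().__init__(in_f, out_f)
        self.weight_noise_std = weight_noise_std

    def forward(self, x):
        if self.training and self.weight_noise_std > 0:
            noise = torch.randn_like(self.weight) * self.weight_noise_std
            return nn.functional.linear(x, self.weight + noise, self.bias)
        return super().forward(x)

model = nn.Sequential(NoisyLinear(16, 32), nn.ReLU(), NoisyLinear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Toy training loop on random data, just to show where the noise enters.
x = torch.randn(128, 16)
y = (x.sum(dim=1) > 0).long()
for _ in range(20):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.3f}")
```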

Biographies:

Prof. Yiyu Shi is currently a professor in the Department of Computer Science and Engineering at the University of Notre Dame, the site director of the National Science Foundation I/UCRC on Alternative and Sustainable Intelligent Computing, and the director of the Sustainable Computing Lab (SCL). He received his B.S. in Electronic Engineering from Tsinghua University, Beijing, China, in 2005, and his M.S. and Ph.D. degrees in Electrical Engineering from the University of California, Los Angeles, in 2007 and 2009, respectively. His current research interests focus on hardware intelligence and biomedical applications. In recognition of his research, more than a dozen of his papers have been nominated for or received best paper awards in top journals and conferences, including the 2023 ACM/IEEE William J. McCalla ICCAD Best Paper Award and the 2021 IEEE Transactions on Computer-Aided Design Donald O. Pederson Best Paper Award. He is also the recipient of the Facebook Research Award, the IBM Invention Achievement Award, the NSF CAREER Award, the IEEE Region 5 Outstanding Individual Achievement Award, and the IEEE Computer Society Mid-Career Research Achievement Award, among others. He has served on the technical program committees of many international conferences. He is the deputy editor-in-chief of the IEEE VLSI CAS Newsletter and an associate editor of various IEEE and ACM journals. He is an IEEE CEDA Distinguished Lecturer and an ACM Distinguished Speaker.



Prof. Masanori Hashimoto received the B.E., M.E., and Ph.D. degrees in communications and computer engineering from Kyoto University, Kyoto, Japan, in 1997, 1999, and 2001, respectively. He is now a Professor in the Department of Communications and Computer Engineering, Kyoto University. His current research interests include VLSI design and CAD, especially design for reliability, soft error characterization, timing and power integrity analysis, reconfigurable computing, and low-power circuit design. He served as the TPC chair for ASP-DAC 2022 and MWSCAS 2022, and he has served on the technical program committees of international conferences including DAC, ICCAD, DATE, ITC, the Symposium on VLSI Circuits, and IRPS. He serves/served as the Editor-in-Chief of Microelectronics Reliability and as an Associate Editor for IEEE Trans. VLSI Systems, IEEE Trans. CAS-I, and ACM Trans. Design Automation of Electronic Systems.



Tutorial-3: Monday, January 22, 9:30—12:30 (KST) @ Room 206

Morpher: A Compiler and Simulator Framework for CGRA

Speakers:
Tulika Mitra (National University of Singapore, Singapore)
Zhaoying Li (National University of Singapore, Singapore)
Thilini Kaushalya Bandara (National University of Singapore, Singapore)

Abstract:

Coarse-Grained Reconfigurable Architecture (CGRA) provides a promising pathway to scale the performance and energy efficiency of computing systems by accelerating compute-intensive loop kernels. However, no end-to-end open-source toolchain for CGRA has supported architectural design space exploration, compilation, simulation, and FPGA emulation for real-world applications. This hands-on tutorial presents Morpher, an open-source end-to-end compilation and simulation framework for CGRA, featuring state-of-the-art mappers, assisting in design space exploration, and enabling application-level testing of CGRAs. Morpher can take a real-world application with a compute-intensive kernel and a user-provided CGRA architecture as input, compile the kernel using different mapping methods, and automatically validate the compiled binaries through cycle-accurate simulation using test data extracted from the application. Thanks to its feature-rich compiler, Morpher can handle real-world application kernels rather than being limited to simple toy kernels. The Morpher architecture description language allows users to easily specify a variety of architectural features such as complex interconnects, multi-hop routing, and memory organizations. Morpher is available online at https://github.com/ecolab-nus/morpher.
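
For readers unfamiliar with CGRA compilation, the sketch below is purely illustrative and does not use Morpher's input format or APIs. It shows the kind of compute-intensive loop kernel a CGRA compiler targets, expressed as a small dataflow graph whose operation nodes a mapper would place onto processing elements and whose edges it would route over the interconnect.

```python
# Illustrative only (not Morpher's input format or APIs): one iteration of the
# loop kernel y[i] = a * x[i] + y[i], expressed as a tiny dataflow graph of the
# sort a CGRA mapper places and routes onto a processing-element array.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    op: str                 # e.g. "load", "mul", "add", "store"
    operands: list = field(default_factory=list)

def build_axpy_dfg():
    """Dataflow graph for one iteration of y[i] = a * x[i] + y[i]."""
    ld_x = Node("ld_x", "load", ["x[i]"])
    ld_y = Node("ld_y", "load", ["y[i]"])
    mul  = Node("mul",  "mul",  [ld_x, "a"])
    add  = Node("add",  "add",  [mul, ld_y])
    st_y = Node("st_y", "store", [add, "y[i]"])
    return [ld_x, ld_y, mul, add, st_y]

for node in build_axpy_dfg():
    deps = [o.name if isinstance(o, Node) else o for o in node.operands]
    print(f"{node.name:5s} {node.op:5s} <- {deps}")
```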

Biographies:

Prof. Tulika Mitra is Provost's Chair Professor of Computer Science and Vice Provost (Academic Affairs) at the National University of Singapore (NUS). Her research interests include the design automation of embedded real-time systems with particular emphasis on software timing analysis/optimizations, application-specific processors, energy-efficient computing, and heterogeneous computing.


Mr. Zhaoying Li is currently working toward his Ph.D. degree at the National University of Singapore. His current research interests include reconfigurable architectures and compiler optimizations.


Ms. Thilini Kaushalya Bandara is a fourth-year PhD student at the School of Computing, National University of Singapore. Her research interests include hardware-software co-design of power-efficient reconfigurable architectures and their design space exploration.



Tutorial-4: Monday, January 22, 9:30—12:30 (KST) @ Room 207

Machine Learning for Computational Lithography

Speakers:
Yonghwi Kwon (Synopsys, USA)
Haoyu Yang (NVIDIA Research, USA)

Abstract:

As the technology node shrinks, the number of mask layers and the pattern density have been increasing exponentially. This has led to a growing need for faster and more accurate mask optimization techniques to achieve high manufacturing yield and fast turn-around time (TAT). Machine learning has emerged as a promising solution to this challenge, as it can be used to automate and accelerate the mask optimization process. This tutorial will introduce recent studies on using machine learning for computational lithography. We will start with a comprehensive introduction to computational lithography, including its challenges and how machine learning can be applied to address them. We will then present recent research in four key areas:
  • (1) Mask optimization: This is the most time-consuming step in the resolution enhancement technique (RET) flow. This tutorial compares different machine learning-based mask optimization approaches in terms of their features and model architectures.
  • (2) Lithography modeling: An accurate and fast lithography model is essential for every step of the RET flow. Machine learning can be used to develop more accurate and efficient lithography models, by incorporating physical properties into the model and learning from real-world data.
  • (3) Sampling and synthesis of test patterns: A comprehensive set of test patterns is needed for efficient modeling and machine learning training. Machine learning can be used to identify effective sampling methods and generate new patterns for better coverage.
  • (4) Hotspot prediction and correction: Lithography hotspots can lead to circuit failure. Machine learning can be used to predict hotspots and develop correction methods that can improve the yield of manufactured chips.
We will also discuss how machine learning is being used in industry for computational lithography, and what the future directions of this research are. This tutorial is intended for researchers and engineers who are interested in learning about the latest advances in machine learning for computational lithography. It will provide a comprehensive overview of the field, and will introduce the most promising research directions.
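
As one concrete example of area (4), the sketch below is illustrative only and does not reflect the speakers' models; the clip size, network, and data are invented. It frames hotspot prediction as binary classification of rasterized layout clips with a small convolutional network.

```python
# Minimal sketch (illustrative, not the speakers' models): hotspot prediction as
# binary classification of rasterized layout clips with a small CNN.
import torch
import torch.nn as nn

class HotspotCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)  # 64x64 clip -> 16x16 maps

    def forward(self, clips):
        feats = self.features(clips)
        return self.classifier(feats.flatten(1))

# Random stand-ins for rasterized 64x64 layout clips and hotspot labels.
clips = torch.randn(32, 1, 64, 64)
labels = torch.randint(0, 2, (32,))

model = HotspotCNN()
loss = nn.CrossEntropyLoss()(model(clips), labels)
loss.backward()
print(f"untrained loss on random data: {loss.item():.3f}")
```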

Biographies:

Dr. Yonghwi Kwon is currently a Senior R&D Engineer at Synopsys, Sunnyvale, CA, where he works on machine learning-based mask optimization and lithography modeling techniques. He received his B.S. and Ph.D. from the School of Electrical Engineering at the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2018 and 2023, respectively. He is a recipient of the 2022 SPIE Nick Cobb Memorial Scholarship, the 2021 IEEE TSM Best Paper Award, and the 2022 IEEE TSM Best Paper Honorable Mention Award.



Dr. Haoyu Yang is a Senior Research Scientist at NVIDIA Research, Austin, TX, where he actively conducts research on AI for semiconductor manufacturing. Prior to NVIDIA Research, he was a Lead Software Engineer at Cadence Design Systems, San Jose, CA, and a Postdoctoral Fellow at The Chinese University of Hong Kong (CUHK), Hong Kong. Haoyu received his B.S. in 2015 from Tianjin University, Tianjin, China, and his Ph.D. in 2020 from CUHK, Hong Kong. He is a recipient of the 2019 ASP-DAC Student Research Forum Best Poster Award and the 2019 SPIE Nick Cobb Memorial Scholarship. He has authored and co-authored more than 40 high-quality papers in peer-reviewed international conferences and journals.



Tutorial-5: Monday, January 22, 14:00—17:00 (KST) @ Room 204

Low Power Design: Current Practice and Opportunities

Speaker:
Gang Qu (University of Maryland, USA)

Abstract:

Power and energy efficiency have been among the most critical design criteria for the past several decades. This tutorial consists of three parts that will help both academic researchers and industrial practitioners understand the current state of the art and the new advances, challenges, and opportunities in low power design. After a brief motivation, we will review some of the most popular low power design technologies, including dynamic voltage and frequency scaling (DVFS), clock gating, and power gating. We will then cover recent advances such as approximate computing and in-memory computing. Finally, we will share with the audience some of the security pitfalls in implementing these low power methods. This tutorial is designed for graduate students and professionals from industry and government working in the general fields of EDA, embedded systems, and the Internet of Things. Previous knowledge of low power design and security is not required.
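
To give a feel for why DVFS saves energy, the back-of-the-envelope sketch below uses the standard dynamic-power relation P ≈ αCV²f together with the fact that a fixed workload's runtime scales as 1/f. All numeric values (activity factor, capacitance, operating points) are illustrative only.

```python
# Back-of-the-envelope sketch of why DVFS saves energy: dynamic power scales as
# P ~= alpha * C * V^2 * f, while the runtime of a fixed workload scales as 1/f,
# so energy per task scales roughly with V^2. All numbers below are illustrative.
def dynamic_power(alpha, c_eff, vdd, freq_hz):
    return alpha * c_eff * vdd ** 2 * freq_hz

def energy_per_task(alpha, c_eff, vdd, freq_hz, cycles):
    runtime_s = cycles / freq_hz
    return dynamic_power(alpha, c_eff, vdd, freq_hz) * runtime_s

ALPHA, C_EFF, CYCLES = 0.2, 1e-9, 2e9   # activity factor, effective capacitance (F), cycles

nominal = energy_per_task(ALPHA, C_EFF, vdd=1.0, freq_hz=2.0e9, cycles=CYCLES)
scaled  = energy_per_task(ALPHA, C_EFF, vdd=0.8, freq_hz=1.2e9, cycles=CYCLES)

print(f"energy at 1.0 V / 2.0 GHz : {nominal * 1e3:.0f} mJ")
print(f"energy at 0.8 V / 1.2 GHz : {scaled * 1e3:.0f} mJ "
      f"({100 * (1 - scaled / nominal):.0f}% less, though the task runs longer)")
```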

Biography:

Prof. Gang Qu is a professor in the Department of Electrical and Computer Engineering at the University of Maryland, College Park. He has worked extensively in the areas of low power design and hardware security, with more than 300 publications, and has delivered more than 150 invited talks. He has led tutorials at many conferences, including ASP-DAC (2005, 2020, 2022), DAC (2023), DATE (2019), HOST (2020, 2022), ISCAS (2016, 2017), and VTTW (2017). Dr. Qu has served on the ACM SIGDA Low Power Technical Committee, and he is a fellow of IEEE.



Tutorial-6: Monday, January 22, 14:00—17:00 (KST) @ Room 205

Leading the industry, Samsung CXL Technology

Speakers:
Jeonghyeon Cho (Samsung Electronics, Republic of Korea)
Jinin So (Samsung Electronics, Republic of Korea)
Kyungsan Kim (Samsung Electronics, Republic of Korea)

Abstract:

1. Leading the industry, Samsung CXL Technology: The rapid development of data-intensive technology has driven increasing demand for new architectural solutions with scalable, composable, and coherent computing environments. Recent efforts on Compute Express Link (CXL) are a key enabler in accelerating the shift toward memory-centric architecture. CXL is an industry-standard interconnect protocol that lets various processors efficiently expand memory capacity and bandwidth with a memory-semantic protocol. In addition, CXL-attached memory supports the handshaking communication needed to integrate a processing-near-memory (PNM) engine into the memory device. As a leader in memory solutions, Samsung Electronics has developed CXL-enabled memory solutions: CXL-MXP and CXL-PNM. CXL-MXP allows more flexible memory expansion than current DIMM-based memory solutions, and CXL-PNM is the world's first CXL-based PNM solution for GPT inference acceleration. By adopting the CXL protocol, these memory solutions will expand the CXL memory ecosystem while strengthening Samsung's presence in the next-generation memory solutions market.
2. Expanding Memory Boundaries through CXL-Enabled Devices: With the growth of data volumes in data analytics and machine learning applications, memory system performance is becoming increasingly critical. To support such computation, memory technology has evolved in both bandwidth and capacity through CXL-enabled devices. CXL technology can improve the efficiency of various solutions such as CPUs/GPUs, ML accelerators, and memory. Memory chipmakers have focused on CXL's ability to support both memory and accelerators, pursued preemptive research and development, and consequently delivered the latest integrated solutions. In this tutorial, CXL-enabled devices and solutions will be discussed: the CXL-based memory expander (MXP) and processing-near-memory (PNM). MXP provides robust disaggregated memory pooling and expansion capability for processors and accelerators to overcome memory bandwidth and capacity constraints. PNM enables performance several times faster than dozens of CPU cores in memory-intensive computations. The CXL protocol's challenges and various research topics will be discussed as well.
3. Software Challenges of a Heterogeneous CXL Compute Pool, and SMDK: CXL is leading a new architecture of heterogeneous compute systems with various device types and connectivity topologies. The technology complies with open-standard connectivity and inherits experience from conventional memory tiering systems. In reality, however, because of the novelty of CXL devices and their architectural deployment, the technology raises a number of challenges across software layers in properly utilizing the expanded computing resource pool consisting of CXL memory and accelerators, CXL-MXP and CXL-PNM (processing near memory). This tutorial explains software considerations for the CXL compute pool and SMDK, Samsung's CXL software suite for leveraging the pool. SMDK provides a variety of software functionalities in an optimized way and has been open-sourced for the CXL industry and researchers since March 2022.

Biographies:

Dr. Jeonghyeon Cho joined Samsung Electronics, Gyeonggi-do, South Korea, in 1998, where he is currently a Vice President of Technology. His current research interests include DRAM solution methodology, covering signal integrity (SI), power integrity (PI), electromagnetic interference (EMI), and electrostatic discharge (ESD). He received a Ph.D. degree in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, in 2011.



Mr. Jinin So is a Principal Engineer at Samsung Electronics in Hwaseong, Gyeonggi-do, South Korea. He has over 20 years of experience at Samsung Electronics and SK Hynix. His research interests include high-performance and energy-efficient memory sub-system architecture and DIMM/CXL-based domain-specific architecture. He received a B.S. degree in computer engineering from Hongik University and has authored numerous PNM research publications.



Mr. Kyungsan Kim is a Principal Engineer at Samsung Electronics. His research interests include the architecture and optimization of heterogeneous memory systems across the full software stack. His team has deployed SMDK, Samsung's CXL software suite for the CXL compute pool. He is a maintainer of the Samsung open memory platform development kit, https://github.com/OpenMPDK. He received an M.S. degree in computer science and engineering from Chung-Ang University. Contact him at ks024.kim@samsung.com.



Tutorial-7: Monday, January 22, 14:00—17:00 (KST) @ Room 206

Sparse Acceleration for Artificial Intelligence: Progress and Trends

Speakers:
Guohao Dai (Shanghai Jiao Tong University, China)
Xiaoming Chen (Chinese Academy of Sciences, China)
Mingyu Gao (Tsinghua University, China)
Zhenhua Zhu (Tsinghua University, China)

Abstract:

After decades of advancements, artificial intelligence algorithms have become increasingly sophisticated, with sparse computing playing a pivotal role in their evolution. On the one hand, sparsity is an important method for compressing neural network models and reducing computational workload. Moreover, generative algorithms like Large Language Models (LLMs) have brought AI into the 2.0 era, and the large computational complexity of LLMs makes using sparsity to reduce workload even more crucial. On the other hand, for real-world sparse applications like point cloud and graph processing, emerging AI algorithms have been developed to process sparse data. In this tutorial, we will review and summarize the characteristics of sparse computing in AI 1.0 and 2.0, and then discuss the development trend of sparse computing from the circuit level to the system level. This tutorial includes three parts:
  • (1) From the circuit perspective, emerging Processing-In-Memory (PIM) circuits have demonstrated attractive performance potential compared to von Neumann architectures. We will first explain the opportunities and challenges of deploying irregular sparse computing onto dense PIM circuits. Then, we will introduce several PIM circuit and sparse algorithm co-optimization strategies that improve PIM energy efficiency and reduce communication latency overhead.
  • (2) From the architecture perspective, this tutorial will first present several domain-specific architectures (DSAs) for efficient sparse processing, including application-dedicated accelerators and Near Data Processing (NDP) architectures. These DSAs achieve performance improvements of one to two orders of magnitude over CPUs in graph mining, recommendation systems, and related workloads. After that, we will discuss the design ideas behind, and envision the future of, general sparse processing for various sparsity patterns and sparse operators.
  • (3) From the system perspective, we will introduce sparse kernel optimization strategies on GPU systems. Based on these studies, an open-source sparse kernel library, dgSPARSE, will be presented. The dgSPARSE library outperforms commercial libraries on various graph neural network models and sparse operators. We will also discuss the application of these system-level design methodologies to LLM optimization.
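
As a concrete example of the system-level kernels discussed in part (3), the sketch below is illustrative only and does not use dgSPARSE; the graph size and density are invented. It runs the sparse-dense matrix multiply (SpMM) that underlies graph neural network aggregation, with a random CSR adjacency matrix standing in for a real graph.

```python
# Illustrative sketch (not dgSPARSE): the sparse-dense matrix multiply (SpMM) at
# the heart of graph neural network aggregation, using a CSR adjacency matrix.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
num_nodes, num_feats, density = 1000, 64, 0.01

# Random sparse adjacency matrix (CSR) and dense node-feature matrix.
adj = sp.random(num_nodes, num_nodes, density=density, format="csr", random_state=0)
features = rng.standard_normal((num_nodes, num_feats))

# SpMM: each output row aggregates the features of a node's neighbors.
aggregated = adj @ features

nnz_ratio = adj.nnz / (num_nodes * num_nodes)
print(f"adjacency nnz ratio: {nnz_ratio:.2%}, output shape: {aggregated.shape}")
```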

Biographies:

Prof. Guohao Dai is a tenure-track Associate Professor at Shanghai Jiao Tong University. He received the B.S. and Ph.D. (with honors) degrees from Tsinghua University, Beijing, in 2014 and 2019, respectively. Prof. Dai has published more than 50 papers in leading international journals and conferences in the fields of Electronic Design Automation (EDA), heterogeneous computing, and system/architecture design, and his work has been cited more than 1000 times on Google Scholar. His papers have won the ASP-DAC 2019 Best Paper Award, Best Paper Nominations at DATE 2023, DAC 2022, and DATE 2018, and the WAIC 2022 Outstanding Youth Paper Award. He has personally received honors such as the WAIC 2022 Yunfan Award, the global championship of the NeurIPS 2021 BigANN competition, Beijing's outstanding doctoral graduates, Tsinghua University's outstanding doctoral graduates, and Tsinghua University's outstanding doctoral thesis. Prof. Dai has guided students to third place worldwide in the ACM 2021 SRC and first place worldwide in the MICRO 2020 SRC. He currently serves as PI/Co-PI for several projects with a personal share of over RMB 10 million.



Prof. Xiaoming Chen is an Associate Professor with the Institute of Computing Technology, Chinese Academy of Sciences. He received the B.S. and Ph.D. degrees in Electronic Engineering from Tsinghua University in 2009 and 2014, respectively. His current research mainly focuses on design automation for computing-in-memory architectures. He has published 120+ papers in top conferences and journals including MICRO, HPCA, DAC, ICCAD, IEEE TCAD, and IEEE TPDS. He was awarded the Excellent Young Scientists Fund of the National Natural Science Foundation of China in 2021, and he received the ASP-DAC 2022 Best Paper Award and the 2016 EDAA Outstanding Dissertation Award.



Prof. Mingyu Gao is a tenure-track assistant professor of computer science in the Institute for Interdisciplinary Information Sciences (IIIS) at Tsinghua University in Beijing, China. He received his Ph.D. and M.S. degrees in Electrical Engineering from Stanford University, and his B.S. degree in Microelectronics from Tsinghua University. His research interests lie in computer architecture and systems, including efficient memory architectures, scalable data processing, and hardware system security, with a special emphasis on data-intensive applications like artificial intelligence and big data analytics. He has published in top-tier computer architecture conferences including ISCA, ASPLOS, HPCA, OSDI, and SIGMOD, and has been granted several patents. He won the IEEE Micro Top Picks paper award in 2016 and was a recipient of Forbes China 30 Under 30 in 2019.



Dr. Zhenhua Zhu received his B.S. and Ph.D. degrees from the Department of Electronic Engineering, Tsinghua University, China, in 2018 and 2024, respectively. He is currently a postdoctoral researcher in the Department of Electronic Engineering, Tsinghua University. His research interests include processing-in-memory, near-memory computing, computer architecture, and EDA. He has published more than 30 conference and journal papers in venues including IEEE TCAD, DAC, ISCA, MICRO, and ICCAD. He serves as a reviewer for IEEE TCAD, ACM TODAES, AICAS, etc.



Tutorial-8: Monday, January 22, 14:00—17:00 (KST) @ Room 207 (*with possible overflow into the 17:00~18:00 time period)

CircuitOps and OpenROAD: Unleashing ML EDA for Research and Education

Speakers:
Andrew B. Kahng (University of California San Diego, USA)
Vidya A. Chhabria (Arizona State University, USA)
Bing-Yue Wu (Arizona State University, USA)

Abstract:

This tutorial will first present NVIDIA's CircuitOps approach to modeling chip data and the generation of chip data using the open-source OpenROAD infrastructure (in particular, OpenDB and OpenSTA, along with Python APIs). The tutorial will highlight how the integration of CircuitOps and OpenROAD has created an ML EDA infrastructure that serves as a playground for users to directly experiment with generative and reinforcement learning-based ML techniques within an open-source EDA tool. Recently developed Python APIs around OpenROAD allow CircuitOps both to query data from OpenDB and to modify the design through ML-algorithm callbacks into OpenDB. As part of the tutorial, participants will work with OpenROAD's Python interpreter and leverage CircuitOps to (i) represent and query chip data in ML-friendly data formats such as graphs, numpy arrays, pandas dataframes, and images, and (ii) modify circuit netlist information through a simple implementation of a reinforcement learning framework for logic gate sizing. Several detailed examples will show how ML EDA applications can be built on the OpenROAD and CircuitOps ML EDA infrastructure. The tutorial will also survey the rapidly evolving landscape of ML EDA, spanning generative methods, reinforcement learning, and other approaches that build on open design data, data formats, and tool APIs. Attendees will receive pointers to optional pre-reading and exercises in case they would like to familiarize themselves with the subject matter before attending the tutorial. The tutorial is designed to be as broadly interesting and useful as possible to students, researchers, faculty, and practicing engineers in both EDA and design.
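
To preview the ML-friendly data representations covered in the hands-on sessions, the sketch below is a conceptual example only: the table columns, instance names, and values are invented and do not reflect CircuitOps' actual schema or OpenROAD's Python API. It simply shows gate properties held in a pandas DataFrame and netlist connectivity held in a graph, together with a typical slack-based query an ML EDA application might issue.

```python
# Conceptual sketch only: hand-made example data, not CircuitOps' actual schema or
# OpenROAD's Python API. It shows the general idea of holding gate properties in a
# pandas DataFrame and the netlist connectivity in a graph for ML consumption.
import pandas as pd
import networkx as nx

# Hypothetical per-instance properties, as a query over a design might return them.
cells = pd.DataFrame(
    {
        "instance": ["u1", "u2", "u3"],
        "cell_type": ["NAND2_X1", "INV_X2", "DFF_X1"],
        "slack_ns": [0.12, -0.03, 0.40],
        "load_ff": [2.1, 4.7, 1.3],
    }
)

# Netlist connectivity as a directed graph (driver -> sink).
netlist = nx.DiGraph()
netlist.add_edges_from([("u1", "u2"), ("u2", "u3")])

# A typical ML-EDA style query: candidate gates for upsizing (negative slack).
candidates = cells[cells["slack_ns"] < 0]
print(candidates)
print("fan-out of u2:", list(netlist.successors("u2")))
```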

Outline:
  1. Background: what an ML EDA infrastructure requires; the CircuitOps and OpenROAD infrastructure; data formats, APIs, and algorithms
  2. Key Python APIs for reading from and writing to the database, and the types of ML algorithms the infrastructure enables
  3. Hands-on-session: Demonstration of the query-based APIs and ML-friendly data formats supported within OpenROAD (images, graphs, dataframes)
  4. Architecture and details of example ML EDA applications built on CircuitOps + OpenROAD
  5. Hands-on-session: Demonstration of callback APIs and RL-based gate sizing iterations with CircuitOps + OpenROAD
  6. The current landscape of ML-EDA, emerging standards, data models/formats, and similar efforts worldwide

Biographies:

Prof. Andrew B. Kahng is Distinguished Professor of CSE and ECE and holder of the endowed chair in high-performance computing at UC San Diego. He was a visiting scientist at Cadence (1995-97) and founder/CTO at Blaze DFM (2004-06). He is coauthor of 3 books and over 500 journal and conference papers, holds 35 issued U.S. patents, and is a fellow of ACM and IEEE. He was the 2019 Ho-Am Prize laureate in Engineering. He has served as general chair of DAC, ISPD, and other conferences, and from 2000 to 2016 served as international chair/co-chair of the International Technology Roadmap for Semiconductors (ITRS) Design and System Drivers working groups. He has been principal investigator of "OpenROAD" (https://theopenroadproject.org/) since June 2018, and until August 2023 served as principal investigator and director of "TILOS" (https://tilos.ai/), a U.S. NSF AI Research Institute.



Prof. Vidya A. Chhabria is an assistant professor in the School of Electrical, Computer, and Energy Engineering at Arizona State University. She received her Ph.D. and M.S. degrees in Electrical Engineering from the University of Minnesota in December 2022 and May 2018, respectively. She completed graduate research internships at Qualcomm in 2017 and at NVIDIA Research (ASIC VLSI Research Group) in 2020 and 2021. Her research interests lie in computer-aided design (CAD) for VLSI systems and primarily revolve around physical design, optimization, and analysis algorithms. She has received the ICCAD Best Paper Award (2021), a Doctoral Dissertation Fellowship (2021), the Louise T. Dosdall Fellowship (2020), and a Cadence Women in Technology Scholarship (2020).



Mr. Bing-Yue Wu is a Ph.D. student in the School of Electrical, Computer, and Energy Engineering at Arizona State University. He received his B.S. and M.S. degrees in Electrical Engineering from National Taiwan University of Science and Technology (Taiwan Tech), Taipei, Taiwan, in 2020 and 2023, respectively. His research interests include machine learning-based EDA solutions and open-source EDA infrastructure.