Pengfei Su received a Ph.D. degree in Computer Science from William & Mary in 2021. He is an Assistant Professor in the Department of Computer Science and Engineering at the University of California, Merced. His research interests lie in programming languages, program analysis, and high-performance computing, with an emphasis on providing tool support for performance analysis. He has extensive experience in building tools to identify software and hardware inefficiencies in modern and emerging architectures. Some of his tools have been deployed to industrial data centers (e.g., Uber) and DOE national laboratories (e.g., Jefferson Lab) to improve code execution performance and increase system throughput. His paper at ICSE'19 won the ACM SIGSOFT Distinguished Paper Award, and his paper at PPoPP'19 won the Best Paper Award.
Prof. Pengfei Su
The memory system is becoming heterogeneous and intelligent. Multiple memory components with different properties (e.g., latency, bandwidth, and capacity) are put together, introducing memory heterogeneity; some memory components may have computing capability and be equipped with machine learning models, introducing memory intelligence. Memory heterogeneity and intelligence are useful for increasing memory capacity, reducing production cost, and avoiding data movement. However, they also introduce new challenges in memory allocation and reclamation, data migration, programming methods, and so on.
This session aims to discuss how to manage emerging heterogeneous and intelligent memory systems. The purpose of this workshop is to bring together computer scientists and domain scientists from academia and industry to share recent advances in heterogeneous and intelligent memory systems. The discussion in this session will cover the fields of computer architecture, operating systems, programming models, and applications.
• Heterogeneous memory systems
• Disaggregated memory systems
• Persistent memory-based heterogeneous memory systems
• Interconnect between heterogeneous memory systems
• Big memory systems and architectures
• Big data applications
Prof. Xiaoyi Lu, University of California Merced, USA
Title: Designing Fast and Scalable Storage Systems for Heterogeneous Memory
Abstract: Fast and scalable storage systems are becoming increasingly important for Big Data analytics and management. The emerging heterogeneous memory technologies, in the form of NVMe SSDs and PMEMs, offer persistence with unprecedented performance. These technologies have the potential to change the fundamental design principles of storage systems. In this talk, we rethink the assumptions and design paradigms of traditional storage systems and propose new storage schemes that take advantage of heterogeneous memory technologies. Specifically, we will first present DStore, a decoupled storage model that builds a fast and persistent object store on PMEM. We will then present NVMe-CR, which aims to scale checkpoint I/O workloads on supercomputers using NVMe-over-Fabrics. Finally, we will discuss RDMP-KV, which proposes PMEM-aware RDMA-based communication protocols for persistent key-value stores.
Biography: Dr. Xiaoyi Lu is an Assistant Professor in the Department of Computer Science and Engineering at the University of California, Merced, USA. He is the founder and director of the Parallel and Distributed Systems Laboratory (PADSYS Lab). Previously (2018-2020), he was a Research Assistant Professor at The Ohio State University (OSU). His current research interests include parallel and distributed computing, high-performance interconnects, advanced I/O technologies, Big Data analytics, virtualization, cloud computing, and deep learning system software. He has published more than 100 papers in major international conferences, workshops, and journals, with multiple Best (Student) Paper Awards or nominations. He has delivered more than 100 invited talks, tutorials, and presentations worldwide, and has been actively involved in professional activities for academic journals and conferences. Many of Dr. Lu's research outcomes (e.g., PMIdioBench, HiBD, MVAPICH2-Virt, DataMPI, LingCloud, NeuroHPC) are publicly available to the community and are currently used by hundreds of organizations all over the world. More details about Dr. Lu can be found at http://faculty.ucmerced.edu/luxi.
Xiao Bai received the B.Eng. degree in computer science from Beihang University, Beijing, China, in 2001, and the Ph.D. degree in computer science from the University of York, York, U.K., in 2006. He was a Research Officer (Fellow, Scientist) with the Computer Science Department, University of Bath, until 2008. He is currently a Full Professor with the School of Computer Science and Engineering, Beihang University. He has authored or co-authored more than 120 papers in high-quality journals and refereed conferences. His current research interests include pattern recognition, image processing, and remote sensing image analysis. He is an Associate Editor of the journals Pattern Recognition and Signal Processing, and an Area Editor of Displays. He is also the Vice Chair of TC2 (Structural Pattern Recognition) of the IAPR (International Association for Pattern Recognition).
Prof. Xiao Bai
Xin Ning received the B.S. degree in software engineering in 2012 and the Ph.D. degree in electronic circuits and systems from the University of Chinese Academy of Sciences in 2017. He is currently an Associate Professor with the Laboratory of Artificial Neural Networks and High-Speed Circuits, Institute of Semiconductors, Chinese Academy of Sciences. His current research interests include neural networks, intelligent systems, and computer vision. He has published more than 45 papers as first or corresponding author in journals and refereed conferences. He serves as a young associate editor of CAAI Transactions on Intelligent Systems and a guest editor of the Elsevier journal Displays. He is also a guest editor of Connection Science and Concurrency and Computation: Practice and Experience. He was the Website Chair of IEEE HPBD&IS 2020 and the Publication Chair of IEEE HPBD&IS 2021.
Prof. Xin Ning
Robots, unmanned aerial vehicles, intelligent home terminals, smartphones, tablet computers, and other edge devices demand efficient image processing algorithms. Deep neural networks are widely used in computer vision tasks such as image classification and object detection, and they have made impressive improvements over the last few years. Owing to their huge commercial value, deep learning and convolutional neural networks have become research hotspots, and much excellent work has been conducted. At present, traditional deep neural networks are designed to extract more expressive deep features via very deep network structures. This presents a huge challenge to the deployment of convolutional neural networks on various hardware platforms, especially mobile and edge devices, and has severely limited the development and application of deep neural networks on portable devices. The key to improving the efficiency and ability of mobile terminals to process image and video data, and to meeting the constraints of storage space and power consumption, lies in the lightweight design, model compression, and acceleration of deep neural networks; this has been highlighted by both academia and industry. This special session aims to promote the deployment and implementation of lightweight deep neural network models on edge devices.
Specific topics include, but are not limited to the following:
• Efficient image processing based on deep neural networks
• Efficient pattern recognition based on deep neural networks
• Efficient visual navigation based on deep neural networks
• Lightweight deep neural network structure design
• Parameter pruning and sharing in deep neural networks
• Pruning and thinning in deep neural networks
• Quantization of deep neural networks
• Knowledge distillation of deep neural networks
• Neural Architecture Search and AutoML
Prof. Hongzhi Yin, University of Queensland, Australia
Title: Information Network Embedding for a New Generation of Geo-Social Recommendations
Abstract: The rapid development of the mobile Internet, location acquisition, and 5G communication technologies has fostered a profusion of geo-social networks (e.g., Foursquare, Yelp, and Google Places). They provide users with an online platform to check in at points of interest (e.g., cinemas, galleries, and hotels) and share their life experiences in the physical world via mobile devices. The new dimension of location implies extensive knowledge about user behaviours and interests by bridging the gap between online social networks and the physical world. It is crucial to develop new geo-social recommendation services for both individual users and groups, so that they can explore new places, attend new events, and find potential partners to attend these events together. This keynote will introduce three emerging geo-social recommendation paradigms and their new challenges: spatial item recommendation for mobile users, spatial item recommendation for dynamic groups, and joint spatial item and partner recommendation. This talk will also explore how to adopt and advance network embedding techniques to address the new challenges in these three geo-social recommendation services.
Biography: Prof. Hongzhi Yin is an ARC Future Fellow and Associate Professor at The University of Queensland, Australia. He was recognized as a Field Leader in Data Mining & Analysis in The Australian's Research 2020 magazine. He received his doctoral degree from Peking University in July 2014, and his PhD thesis won the highly competitive Distinguished Doctoral Thesis Award of Peking University. His current main research interests include recommender systems, graph representation learning, chatbots, edge machine learning, trustworthy machine learning, decentralized and federated learning, and smart healthcare. He has published 180+ papers, including 15 publications in the Top 1% (CNCI), 100 in CCF A venues, and 60+ in CCF B venues. He has won six Best Paper Awards, including the ICDE'19 (CCF A) Best Paper Award, the DASFAA'20 (CCF B) Best Student Paper Award, and ACM Computing Reviews' 21 Annual Best of Computing Notable Books and Articles, as well as one invited paper in the special issue of KAIS on the best papers of ICDM 2018. He currently serves as an editorial board member and guest editor for over 10 leading journals, such as ACM Transactions on Intelligent Systems and Technology, Information Systems, World Wide Web, Journal of Computer Science and Technology, Information, and Frontiers in Big Data.
Prof. Chen Gong, Nanjing University of Science and Technology, China
Title: Towards a Unified Framework for Weakly-Supervised Learning
Abstract: As a classic learning problem, weakly supervised learning has so far induced a variety of specific learning paradigms. To handle insufficient, indefinite, and inaccurate supervision, various methods such as semi-supervised learning, PU learning, multi-instance learning, and label noise learning have emerged. Although there are many kinds of weakly supervised learning methods, previous studies on the different methods have been isolated from each other. Therefore, whether there is a unified weakly supervised learning framework that can fundamentally model various weakly supervised situations is worthy of further exploration. With this in mind, this talk mainly presents a general weakly supervised learning framework called "Centroid Estimation with Guaranteed Efficiency". The core of the framework is to design an unbiased and efficient empirical risk estimator for various weakly supervised situations via loss decomposition and centroid estimation. The designed framework covers a variety of typical weakly supervised learning methods, such as semi-supervised learning, PU learning, multi-instance learning, and label noise learning, and shows encouraging performance on different types of benchmark datasets.
Biography: Chen Gong received dual doctoral degrees from Shanghai Jiao Tong University (SJTU) and the University of Technology Sydney (UTS) in 2016 and 2017, respectively. Currently, he is a full professor in the School of Computer Science and Engineering, Nanjing University of Science and Technology. His research interests mainly include machine learning, data mining, and learning-based vision problems. He has published more than 100 technical papers in prominent journals and conferences such as JMLR, IEEE T-PAMI, IEEE T-NNLS, IEEE T-IP, ICML, NeurIPS, ICLR, and CVPR. He also serves as a reviewer for more than 30 international journals, such as AIJ, IJCV, JMLR, and IEEE T-PAMI, and as an SPC/PC member of several top-tier conferences, such as ICML, NeurIPS, ICLR, CVPR, ICCV, AAAI, and IJCAI. He received the "Excellent Doctoral Dissertation" award from Shanghai Jiao Tong University (SJTU) and the Chinese Association for Artificial Intelligence (CAAI). He was selected for the "Young Elite Scientists Sponsorship Program" of Jiangsu Province and the China Association for Science and Technology. He is also a recipient of the "Wu Wen-Jun AI Excellent Youth Scholar Award".
Dr. Yuqing Ma, Beihang University, China
Title: Efficient Object Detection in the Open World
Abstract: Object detection, a hot research topic in artificial intelligence and computer vision, has been widely applied to robot navigation, video surveillance, industrial inspection, etc. Traditional object detection models, containing millions of parameters, rely heavily on large-scale, well-annotated datasets in a closed experimental setting, which is inflexible in unexpected situations. In the real, open world, however, such as X-ray security inspection, key information tends to be interfered with by complex backgrounds or malicious camouflage, and training data for certain categories are hard to collect. Worse still, the hardware deployment environment is also restricted. Therefore, object detection in the open world faces the challenges of weak target signals, sparse training data, and limited computing resources, which deviates from the closed experimental setting created by traditional large-scale datasets. This talk will focus on efficient object detection in the open world and present a series of accurate, reliable, and efficient object detection approaches based on signal inhibition, semantic evolution, and information retention, respectively. In addition, three high-quality annotated benchmarks based on real-world scenarios will be introduced in this keynote to promote research on object detection in the open world. The talk will also discuss the future development of efficient object detection to face the challenges of openness, complexity, and rivalry in the open world.
Biography: Yuqing Ma is a postdoctoral researcher in the Computer Science Department of Beihang University. She received her doctoral degree from Beihang University in June 2021 and was honored as an excellent graduate of Beijing. Her research is centered around artificial intelligence and computer vision. She is particularly interested in inference in the open world with incomplete information, including sparse data, weak signals, complex dynamics, etc. She has published 16 high-quality academic papers at CCF-A conferences, including IJCAI, ICCV, ACM MM, and AAAI, and in Q1 journals such as IEEE TIP and IEEE TNNLS. She has released three high-quality real-world datasets, which have attracted extensive attention worldwide. She has won the national scholarship and a 2018 IJCAI Travel Grant, and joined the Tencent Rhino-Bird Elite Training Program. She also serves as a key member of multiple national major projects, carrying out her research in public security inspection, frontier inspection, and so on.
Dr. Jian Zhou is an Associate Professor in the Wuhan National Laboratory for Optoelectronics at Huazhong University of Science and Technology. He received his first Ph.D. in Computer Science from Huazhong University of Science and Technology in 2016 and his second Ph.D. in Computer Engineering from the University of Central Florida in 2018. Between 2018 and 2020, he worked as a Postdoctoral Fellow at the University of Central Florida. His research interests include advanced computer architecture, non-volatile memory (NVM) technologies, hardware security, solid-state drive (SSD) technologies, Big Data, and near-data processing. He has published more than 20 papers in top conferences and journals in computer architecture, such as ISCA, FAST, DAC, ToC, TPDS, ToS, EuroSys, IPDPS, ICDCS, DATE, and ICCD. See this website for more details about Dr. Jian Zhou and his research: https://haslab.org/.
Dr. Jian Zhou
Huazhong University of Science and Technology, China
As the new golden age of computer architecture coincides with the rise of Artificial Intelligence, both computing and storage systems are becoming increasingly complex. Rearchitecting computer systems to fundamentally reduce latency and energy is the key to future Artificial Intelligence. However, as computing, storage, and network equipment all embrace breaking changes and become more intelligent, developing Intelligent Architectures that efficiently support large-scale Artificial Intelligence algorithms and ease programming is challenging.
This workshop aims to bring scientists from academia and industry together to share recent advances and experiences in Intelligent Architecture design and applications. This session will discuss innovative architectures and abstractions that promote efficient and practical artificial intelligence.
Specific topics include, but are not limited to the following:
• Data-Centric Architectures and Systems
• Data-Driven Computing and Memory Systems
• Data-Aware Intelligent System Design
• Processing Using Memory
• Processing Near Memory
• Interconnection between Intelligent Computing and Storage Devices
• Distributed Intelligent Computing and Memory Systems
Prof. Tong Zhang, Electrical, Computer and Systems Engineering Department at Rensselaer Polytechnic Institute, USA
Title: Computational Storage: Another Fantasy or A Real Big Thing?
Abstract: The rapidly growing interest in heterogeneous computing has recently moved computational storage into the limelight. It has a beautifully simple rationale: moving computational tasks closer to where data reside could improve overall system performance and efficiency. Intuitively, this simple rationale makes perfect sense and cannot possibly be refuted. However, its large-scale commercial success has remained elusive so far, despite many awesome research papers and hundreds of millions of dollars spent on its R&D. This disappointing status quo warrants doubt and skepticism: Will it turn out to be an over-hyped fantasy just like many others we have seen over the years? Are there any fatal flaws in this simple idea? Facing these questions, proponents of computational storage must be brutally honest with themselves and humbly search for the (inconvenient) truth, rather than conveniently blaming the industry's reluctance or laziness in embracing disruptive technologies. In this talk, I will discuss the pitfalls of prior and ongoing R&D efforts and present the correct (or at least the best) way to commence the commercialization journey. I will also show that there is still large room for innovation in this area, despite the many papers published over the past 20 years. Finally, although this talk focuses solely on computational storage, the lessons we have learned could also help prevent "in-memory computing" or "computational memory" (another hot topic today) from becoming yet another academic fantasy.
Biography: Tong Zhang is currently a Professor in the Electrical, Computer and Systems Engineering Department at Rensselaer Polytechnic Institute (RPI), NY. He received the Ph.D. degree in electrical engineering from the University of Minnesota in 2002 and joined the faculty of RPI. He has graduated 19 PhD students and authored/co-authored over 160 papers, with over 5,000 citations and an h-index of 40. Among his research accomplishments, he made pioneering contributions to enabling the pervasive use of low-density parity-check (LDPC) codes in commercial HDDs/SSDs and to establishing the research area of flash memory signal processing. In 2014, he co-founded ScaleFlux (San Jose, CA) to spearhead the commercialization of computational storage drives, and he currently serves as its Chief Scientist. He is an IEEE Fellow.
Prof. Yu Wang, Tsinghua University, China
Title: Towards Energy-efficient System and Architecture for Artificial Intelligence
Abstract: Artificial Intelligence (AI) has empowered many key areas, such as security monitoring, autonomous driving, and national defense equipment. However, the computation volume of AI has shown explosive growth. Taking security monitoring as an example, security cameras around the world generate about 2500 PB of data every day, and processing these data requires the memory capacity of 10^8 Nvidia T4 GPUs. The corresponding overall computing power consumption is as high as 7.5×10^6 kW for one year, which is equivalent to the annual power generation of 0.66 Three Gorges hydropower stations. To promote the rapid implementation of artificial intelligence in various life scenarios, it is necessary to map artificial intelligence algorithms efficiently to hardware systems, which poses a huge challenge to the design of intelligence-oriented systems and hardware architectures. On the one hand, there is a contradiction between the rapid increase in the computing power requirements of intelligent algorithms and the slow improvement of general hardware energy efficiency. On the other hand, there is a contradiction between unstructured sparse data structures and structured hardware architectures. This talk will introduce structured sparse pruning and dynamic low-bit quantization technologies, which reduce computation and bandwidth requirements by 10-20 times without loss of accuracy. The designed FPGA-specific accelerators achieve up to 40x and 16x energy-efficiency improvements compared with CPUs and GPUs, respectively. We propose the "pruning-quantizing-customization" neural network software-hardware co-design method, which is widely used in industry.
We also propose an FPGA-based coarse-grained instruction set architecture and a neural network layer-fusion compilation flow, reducing the hardware deployment cost of any neural network model to the order of hundreds of seconds and addressing the long development cycle of dedicated hardware for intelligent algorithms. Furthermore, for larger and more sparse Graph Neural Network (GNN) scenarios, we target accelerating GNNs on GPUs from the following perspectives: accelerating operators, unifying interfaces, and optimizing the computational graph. Our sparse kernel on GPU achieves up to a 14.01x speedup compared with the state-of-the-art cuSPARSE design. For typical GNN models, our design achieves an acceleration of up to 3.67x compared with traditional frameworks. We have formed a complete library of both sparse operators and GNN models. All of this research is included in an open-source project named dgSPARSE. We hope that researchers and developers in related domains will join this open ecosystem and contribute to the community.
Biography: Prof. Yu Wang received the B.S. and Ph.D. (with honors) degrees from Tsinghua University, Beijing, in 2002 and 2007, respectively. He is currently a tenured professor and chair of the Department of Electronic Engineering, Tsinghua University. His research interests include brain-inspired computing, application-specific hardware computing, parallel circuit analysis, and power/reliability-aware system design methodology. He has authored and coauthored more than 300 papers in refereed journals and conferences. He has received Best Paper Awards at ASP-DAC 2019, FPGA 2017, NVMSA 2017, and ISVLSI 2012, and a Best Poster Award at HEART 2012, along with 10 Best Paper Nominations. He is a recipient of the DAC Under-40 Innovators Award (2018) and the IBM X10 Faculty Award (2010). He served as TPC chair for ICFPT 2019 and 2011 and ISVLSI 2018, finance chair of ISLPED 2012-2016, and track chair for DATE 2017-2019 and GLSVLSI 2018, and has served as a program committee member for leading conferences in these areas, including top EDA conferences such as DAC, DATE, ICCAD, and ASP-DAC, and top FPGA conferences such as FPGA and FPT. He has served as co-editor-in-chief of the ACM SIGDA E-Newsletter; associate editor of the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, the IEEE Transactions on Circuits and Systems for Video Technology, ACM Transactions on Embedded Computing Systems, ACM Transactions on Design Automation of Electronic Systems, IEEE Embedded Systems Letters, and the Journal of Circuits, Systems, and Computers; and special issue editor of the Microelectronics Journal. He is now with the ACM SIGDA EC and the DAC 2021 EC. He is a co-founder of Deephi Tech (acquired by Xilinx in 2018), a leading deep learning computing platform provider.
Dr. Bo Tang is an Assistant Professor in the Department of Electrical and Computer Engineering at Mississippi State University. He received the Ph.D. degree in electrical engineering from the University of Rhode Island (Kingston, RI) in 2016. From 2016 to 2017, he was an Assistant Professor in the Department of Computer Science at Hofstra University, Hempstead, NY. His research interests lie in the general areas of statistical machine learning and data mining, as well as their various applications in cyber-physical systems, including robotics, autonomous driving, and remote sensing.
Dr. Bo Tang
Dr. Li Li is currently an Assistant Professor at the University of Macau. He received his Ph.D. degree from The Ohio State University in 2018, the M.S. degree from The Ohio State University in 2014, and the B.S. degree from Tianjin University in 2011. He has research experience at different research institutions, such as the Shenzhen Institute of Technology, Chinese Academy of Sciences, Huawei Research, and Microsoft Research. He has published in refereed journals and conference proceedings such as INFOCOM, RTSS, ICDCS, NDSS, MM, TMC, and TDSC.
University of Macau, China
The integration of embedded computation, communication, sensors, and actuators has led to the emergence and development of Cyber-Physical Systems (CPS). Such systems cover vast application areas such as power grids, transportation, healthcare, manufacturing, remote sensing, and structural health monitoring, just to name a few.
Thanks to their ability to interact with the environment they are deployed in, the sensor platforms associated with CPS collect large amounts of data. By embedding intelligence in the application, researchers and engineers can enable new functions not previously possible, leading to many smart-X systems such as smart grid, smart healthcare, smart elderly care, smart agriculture, smart transportation, and smart building, among others. Computational intelligence and machine learning-based data mining techniques constitute the basis of this intelligence by handling the high volume, variety, veracity, and velocity (the 4V challenges) of big data. The goal of this special session is to unveil these challenges and present state-of-the-art research activities and results on all facets of data mining and knowledge discovery in CPS.
Specific topics include, but are not limited to the following:
• Continual learning or lifelong learning
• Learning with limited or inaccurate supervision
• Data and information fusion
• Federated or distributed learning
• Dimensionality reduction
• Data mining in Smart-X environments and the IoT
• AI in intelligent transportation systems
• AI in manufacturing
• AI in smart healthcare and elderly care
• AI in smart agriculture
• Big data model in CPS
Prof. Haifeng Li, Central South University, China
Title: Continual Learning: A Brain-Inspired Perspective
Abstract: Continual learning, a new and promising learning paradigm for handling open-world learning scenarios, has recently attracted a lot of attention. The success of continual learning will ultimately help achieve Artificial General Intelligence (AGI). Intuitively, the human memory system is the key to ensuring that one can learn, fine-tune, and transfer knowledge and skills over a lifetime. In this talk, we will first review recent research progress from the human brain perspective. Then, we will show how that research could inspire the continual learning community to develop advanced methods and algorithms toward AGI. Finally, we will share some of our recent research findings and discuss potential future work on this topic.
Biography: Prof. Haifeng Li is a professor in the School of Geoscience and Information Physics at Central South University, a Ph.D. supervisor, head of the Department of Geoinformation, and a member of the Longcheng Talent Program and the 321 Talents Program of Central South University. He worked as a senior research assistant at Hong Kong Polytechnic University in 2011 and was a visiting scholar on spatial big data and service intelligence at the University of Illinois at Urbana-Champaign, USA, in 2014. He is the PI of a number of research projects. He has published more than 60 papers in top journals such as Nature Food (cover paper), IEEE TNNLS, ISPRS Journal of Photogrammetry and Remote Sensing, IEEE TGRS, IEEE TITS, RSE, and ERL. One of his papers has been selected as ESI Top 1‰ and two as ESI Top 1%. He is an editorial board member of SCI journals and a reviewer for many top journals. His main research interests include geographic/remote sensing big data, machine/deep learning, and artificial/brain-inspired intelligence.
Dr. Song Jiang is currently a professor in the CSE Department at the University of Texas at Arlington. His research interests include system infrastructure for big data processing, ML/AI for system optimization, and HPC. Dr. Jiang's research has had a substantial impact on the IT industry, where several of his proposed algorithms for memory and storage management have been officially adopted into mainstream systems, including the Linux kernel, the NetBSD kernel, and the storage engine of MySQL. More information about his research can be found at http://ranger.uta.edu/~sjiang/
Prof. Song Jiang
University of Texas
at Arlington, USA
With the success of Machine Learning (ML), continued growth in data volume, and the increasing availability of large-scale and highly complex computing systems, interactions between ML and systems have drawn much attention in the ML/AI and systems research communities and the IT industry. People are studying how to build systems to better support recent advances in machine learning (Systems for ML) and how to leverage machine learning to improve systems (ML for Systems). There have been workshops and conference sessions dedicated to each of these directions.
On the one hand, new hardware and software systems, such as new generations of GPUs, hardware accelerators (e.g., the TPU), and open-source frameworks (e.g., TensorFlow and Apache Spark), have enabled training increasingly complex models on ever-larger datasets. On the other hand, ML has been employed in system optimizations in aspects such as scheduling, data structure design, microarchitecture, compilers, memory management, database systems, and resource allocation and service quality control in warehouse-scale computing systems.
It is clear that advances in ML and systems can positively impact each other, and these impacts can be significant. This panel will invite prominent scholars and practitioners who are conducting research and product development on systems and ML technologies in academia and the IT industry to share their experience and offer their opinions on various issues concerning the interactions between ML and systems. The panel presentations and discussions are expected to engage the audience on the potential and limits, pros and cons, and success stories and lessons learned in leveraging ML and systems for each other's optimization, and to provide suggestions for identifying opportunities to leverage one to improve the other.
Tentative topics to be discussed on the panel include:
• To what extent can bottlenecks on ML/AI performance and quality be ameliorated with more powerful and optimized hardware/software supports?
• What are the major gaps for today's hardware/software supports to meet the demand of running ML/AI applications?
• To what extent do the system designs need to be customized for accelerating ML/AI applications, including designs of computer architecture, networking, I/O and storage systems, OS, and fault tolerance and recovery?
• What system design problems can be addressed more effectively by ML than by empirical approaches?
• What are the limits for ML/AI to help with system designs, and what are the risks and concerns in the effort?
• Since an ML/AI model functions largely as a black box, we may not be able to easily interpret and understand its decisions or obtain insights into the interaction between systems and these workloads. How can we remedy this potential problem?
Florida International University, USA
Dr. XUE Chun Jason
City University of Hong Kong, China
Dr. Yuan-Hao Chang
Institute of Information Science (IIS), Academia Sinica, China
Dr. Rubao Lee
Rateup Inc, China
Dr. Xiao-Feng Li
Co-Founder and CEO