Program

Keynote 1 (DAY-1 : Feb 6 9:20 - 10:10)

Session Chair : Hitoshi Murai (RIKEN R-CCS)

  • Satoshi Matsuoka, Director, R-CCS
  • "Towards Fugaku-NEXT: Debunking the HPC Myths, Pursuing Science Instead."

  • Following up on the success of Fugaku, RIKEN R-CCS is now on course to realize its successor, Fugaku-NEXT, ideally by the end of the 2020s. However, the next generation breakthrough in performance is being inhibited by a variety of factors pertaining to the slowdown of Moore's Law. In fact, this difficulty is generating a series of technically unfounded views on the evolutionary path of computing, or 'myths', that are distorting the right way of moving forward. As an institute of leading-edge science, we are undertaking a scientific, methodological approach to how next-generation machines may achieve more than an order of magnitude performance gain over Fugaku, while retaining its other trait of being broad and general purpose, applicable to the wide-ranging problems we face as a society today.

Invited Talk 1 (DAY-1 : Feb 6 10:30 - 11:00)

Session Chair : Hirofumi Tomita (RIKEN R-CCS)

  • Eugenia Kalnay, Department of Atmospheric and Oceanic Science, University of Maryland, College Park, MD
  • "Historical Review of Our Work on Data Assimilation and Ensemble Forecasting Over my Career with my students and colleagues"

  • In this presentation, I review my work on ensemble forecasting and data assimilation with my graduate students, postdocs, and colleagues at the University of Maryland and elsewhere. Building upon the success of the breeding method (Toth and Kalnay, 1993) for atmospheric ensemble forecasting, we further demonstrated the feasibility of using the breeding method and bred-vector-based energy equations to analyze the instabilities of other Earth-system components (e.g., ocean, waves) and other planets (e.g., the Martian atmosphere). In particular, we showed the advantage of bred vectors over other ensemble forecasting methods given their capability of isolating the error growth of either fast or slow modes in a multiscale coupled system. To prove that "what 4D-Var can do, the Ensemble Kalman Filter (EnKF) can do as well", we developed several equivalent components of 4D-Var (e.g., the quasi-outer-loop and weight interpolation) within the Local Ensemble Transform Kalman Filter (LETKF) to make the EnKF competitive with 4D-Var. To make the EnKF even more powerful, we developed the Ensemble Forecast Sensitivity to Observations (EFSO, Kalnay et al., 2012) to determine and quantify the impact of each observation on subsequent forecasts. Different applications based on EFSO are introduced, ranging from online impact monitoring of existing observing systems to accelerated assimilation development for new observing systems. We then discuss the most powerful EFSO-based application, Proactive Quality Control (PQC), a purely flow-dependent algorithm that further improves the analysis by mitigating the negative impacts of detrimental observations. We illustrate the power of PQC using real-observation experiments with state-of-the-art atmospheric and oceanic models. In the last section, we discuss coupled data assimilation, a topic that has recently gained much attention. Using a hierarchy of coupled models ranging from the simple coupled Lorenz model (Peña and Kalnay, 2004) to the state-of-the-art NOAA Climate Forecast System version 2 (CFSv2), we demonstrate the advantage of the strongly coupled ensemble data assimilation approach over other coupled data assimilation strategies when a sufficiently large ensemble is used. We also show that the correlation-cutoff method we developed can boost the performance of strongly coupled data assimilation when the ensemble size is small. Lastly, we demonstrate the feasibility of combining a neural network with the correlation-cutoff method for improved coupled analysis using an intermediate-complexity coupled general circulation model.
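
As background for the breeding method mentioned above, the following minimal Python sketch illustrates its grow-and-rescale cycle on the Lorenz-63 system. The model, rescaling amplitude, and cycle lengths are toy assumptions chosen for brevity, not the configuration of Toth and Kalnay (1993).

```python
# Toy illustration of the breeding method: run a control and a perturbed
# forecast, let their difference grow, then periodically rescale it to a
# fixed amplitude. The surviving direction is the bred vector.
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Time derivative of the Lorenz-63 state."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def step(x, dt=0.01):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz63(x)
    k2 = lorenz63(x + 0.5 * dt * k1)
    k3 = lorenz63(x + 0.5 * dt * k2)
    k4 = lorenz63(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def breed(x0, amplitude=0.1, n_cycles=50, steps_per_cycle=8):
    """Return the bred vector after repeated grow-and-rescale cycles."""
    rng = np.random.default_rng(0)
    control = x0.copy()
    perturbed = x0 + amplitude * rng.standard_normal(3)
    for _ in range(n_cycles):
        for _ in range(steps_per_cycle):      # let the perturbation grow
            control = step(control)
            perturbed = step(perturbed)
        bv = perturbed - control
        bv *= amplitude / np.linalg.norm(bv)  # rescale to fixed amplitude
        perturbed = control + bv              # re-seed the next cycle
    return bv

bred_vector = breed(np.array([1.0, 1.0, 1.0]))
print("bred vector:", bred_vector)  # aligned with the fastest-growing local mode
```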

Session 1 (DAY-1 : Feb 6 11:00 - 12:00)

Session Chair : Hirofumi Tomita (RIKEN R-CCS)

  • Yasuhiro Matsunaga, Associate Professor, Graduate School of Science and Engineering, Saitama University
  • "Integrative modeling of biomolecular dynamics from simulations and experiments"

  • An atomically detailed description of the conformational dynamics of biomolecules is often crucial to understanding biomolecular functions. Combining experimental measurements with molecular simulations significantly improves these descriptions. Ensemble refinement, in which simulations are used to refine ensemble-averaged data from NMR, SAXS, or cryo-EM, is a popular approach in integrative structural biology. On the other hand, time-series data obtained with recent single-molecule measurement techniques provide richer temporal information about biomolecular dynamics. Recently, we have developed a data assimilation framework based on machine learning for integrating single-molecule time-series and simulation data. Our method can bridge the time-scale gap between single-molecule experiments and simulations. In this presentation, we will begin with an overview of integrative modeling methods and then present recent applications of our integrative modeling to single-molecule FRET (Förster resonance energy transfer) and high-speed AFM (atomic force microscopy) data.

  • Norihiko Sugimoto, Professor, Keio University
  • "Introduction of AFES-Venus (Venus GCM) and ALEDAS-V (AFES LETKF Data Assimilation System for Venus)"

  • We have developed a Venusian GCM (general circulation model) named AFES-Venus (Atmospheric GCM for the Earth Simulator for Venus). Furthermore, in order to make use of observations by the Venus Climate Orbiter "Akatsuki", we have also developed the first data assimilation system for Venus, ALEDAS-V (AFES-LETKF Data Assimilation System for Venus), based on the LETKF (Local Ensemble Transform Kalman Filter). We will introduce important recent results from AFES-Venus and ALEDAS-V. An objective analysis assimilating Akatsuki horizontal winds was published this fiscal year.
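
For readers unfamiliar with the ensemble transform Kalman filter underlying ALEDAS-V, the sketch below shows its ensemble-space analysis update in the form given by Hunt et al. (2007) for the LETKF, which applies this update locally around each grid point. The array shapes, the identity observation operator, and the random test data are illustrative assumptions only.

```python
# Minimal sketch of the (L)ETKF analysis step: the update is computed with
# small k-by-k matrices in the space spanned by the k ensemble members.
import numpy as np

def etkf_analysis(X, y_obs, obs_err, H=None):
    """X: (n_state, k) forecast ensemble; y_obs: (n_obs,) observations."""
    n, k = X.shape
    H = np.eye(n) if H is None else H             # identity obs operator (toy)
    x_mean = X.mean(axis=1)
    Xp = X - x_mean[:, None]                      # state perturbations
    Y = H @ X
    y_mean = Y.mean(axis=1)
    Yp = Y - y_mean[:, None]                      # obs-space perturbations
    Rinv = np.eye(len(y_obs)) / obs_err**2
    Pa = np.linalg.inv((k - 1) * np.eye(k) + Yp.T @ Rinv @ Yp)
    w_mean = Pa @ Yp.T @ Rinv @ (y_obs - y_mean)  # mean weight vector
    # symmetric square root gives the analysis perturbation weights
    evals, evecs = np.linalg.eigh((k - 1) * Pa)
    W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    return x_mean[:, None] + Xp @ (w_mean[:, None] + W)

rng = np.random.default_rng(1)
X_f = rng.standard_normal((5, 4))                 # 5 state variables, 4 members
X_a = etkf_analysis(X_f, y_obs=rng.standard_normal(5), obs_err=0.5)
print(X_a.shape)                                  # (5, 4) analysis ensemble
```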

  • KaMan Kong, Computational Climate Science Research Team, R-CCS, RIKEN
  • "Introduction to a data-assimilation-based parameter estimation method and its application in weather simulation"

  • Weather forecasting and simulation are complex across temporal and spatial scales, in part because a broad range of physical parameterizations is required. Recently, a novel method of parameter estimation using an ensemble-based data assimilation (DA) technique has been established and applied to physical schemes in climatological simulations (e.g., Kotsuki et al., 2018, 2020; Sueki et al., 2022). In this talk, I will briefly introduce the DA-based parameter estimation method (e.g., by an extended Kalman filter) and a weather-DA system (SCALE-LETKF) produced by the Data Assimilation Research Team and the Computational Climate Science Research Team. Finally, I will share the three main research topics I am now working on with this weather-DA system: estimating parameters of a microphysics scheme in an idealized two-dimensional squall-line experiment, parameters of a cumulus scheme for real tropical-cyclone-induced precipitation, and parameters of a dust scheme in a real dust-emission simulation.
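
To make the state-augmentation idea behind DA-based parameter estimation concrete, here is a hedged toy sketch: an unknown relaxation rate is appended to the state vector, and the ensemble Kalman update corrects it through its sampled correlation with the observed state. The model and all numbers are illustrative assumptions, not the SCALE-LETKF configuration.

```python
# Toy parameter estimation by state augmentation: estimate the unknown rate
# `a` in dx/dt = -a*(x - F(t)) from noisy observations of x alone.
import numpy as np

rng = np.random.default_rng(0)
a_true, dt, obs_err, k = 0.5, 0.1, 0.05, 50          # truth, step, obs error, members

x = 1.0 + 0.1 * rng.standard_normal(k)               # state ensemble
a = 1.0 + 0.3 * rng.standard_normal(k)               # parameter ensemble (unknown)
x_t = 1.0                                            # synthetic truth

for n in range(300):
    F = np.sin(0.2 * n * dt)                         # external forcing
    x_t += -a_true * (x_t - F) * dt                  # truth integration
    x += -a * (x - F) * dt                           # forecast; parameters persist
    y = x_t + obs_err * rng.standard_normal()        # noisy observation of x

    # EnKF update of the augmented state [x, a] with a scalar observation
    dx, da = x - x.mean(), a - a.mean()
    gain_x = np.mean(dx * dx) / (np.mean(dx * dx) + obs_err**2)
    gain_a = np.mean(da * dx) / (np.mean(dx * dx) + obs_err**2)
    innov = y - x + obs_err * rng.standard_normal(k) # perturbed-obs innovation
    x += gain_x * innov
    a += gain_a * innov
    a = a.mean() + 1.02 * (a - a.mean())             # mild inflation vs. collapse

print(f"estimated a = {a.mean():.2f} (truth = {a_true})")
```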

Invited Talk 2 (DAY-1 : Feb 6 13:30 - 14:00)

Session Chair : Masaaki Kondo (RIKEN R-CCS)

  • Simon Hammond, U.S. Department of Energy, Federal Program Manager, Office of Advanced Simulation and Computing and Institutional Research and Development Programs
  • "DOE NNSA's Post-Exascale Computing Strategy"

  • The Advanced Simulation and Computing (ASC) program, in the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA), is responsible for providing simulation capabilities and computational resources for the NNSA Stockpile Stewardship Program. Within recent years, ASC has experienced major, disruptive computing architecture changes causing numerous code performance and portability challenges. We will present some details of the 2022 ASC Computing Strategy which addresses these challenges and provides an updated approach to the development and procurement of high-performance computing platforms through a balanced portfolio of systems and technology investments to provide a stable, production-level computing service while tracking progress of the computing industry. Partnering with industry and fielding advanced architecture testbeds and prototypes to keep pace with the technology changes will allow the ASC program to anticipate and prepare for potential disruptions. Sustained, tightly coordinated co-design collaborations with industry partners and other U.S. agencies will enable the optimal use of these technologies to meet stockpile mission requirements.

Session 2 (DAY-1 : Feb 6 14:00 - 15:00)

Session Chair : Masaaki Kondo (RIKEN R-CCS)

  • Kentaro Sano, RIKEN R-CCS
  • "Feasibility Study at the Architecture Research Group"

  • In the feasibility study of the next-generation supercomputing infrastructure, which started in August 2022, we have been investigating architecture candidates based on available future technologies and system requirements. We present this investigation and its current status, introducing the organization of the Architecture Research Group, its fundamental goal, assumed evaluation criteria, technological trends, and available approaches for the next-generation system.

  • Kento Sato, RIKEN R-CCS
  • "Feasibility Study at the System Software and Library Research Group"

  • Towards the next-generation advanced computing infrastructures, feasibility studies are important for selecting the best architecture for scientific and industrial applications. While hardware and architecture, from CPUs and memory to interconnects and storage, are indispensable components of computing infrastructures, system software and libraries are also essential to drive the advanced hardware. Therefore, it is necessary to study the current and future software ecosystem surrounding the candidate architectures and to investigate their subsequent commercial development. In this talk, we introduce the goals and plans of the System Software and Library Research Group in the RIKEN system research team for the feasibility studies.

  • Takashi Shimokawabe, Associate Professor, Information Technology Center, The University of Tokyo
  • "Feasibility Study at the Application Research Group"

  • We are working as an Application Research Group in the activities conducted by the RIKEN team for the Feasibility Studies on Next-Generation Supercomputing Infrastructures by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan. The Application Research Group consists of subgroups conducting research activities in the fields of computational science, data science, and social science, as well as subgroups conducting research activities mainly in the field of computer science. In this presentation, we will introduce the activities of the Application Research Group and some of the efforts of the subgroups.

Session 3 (DAY-1 : Feb 6 15:20 - 16:20)

Session Chair : Mohamed Wahib (RIKEN R-CCS)

  • Kazuto Ando, Senior Technical Staff, Operations and Computer Technologies Division, RIKEN Center for Computational Science
  • "Nonlinear Reduced-Order Modeling for Turbulent Flow by Large-Scale Distributed Machine Learning on Fugaku"

  • By performing large-scale distributed machine learning on Fugaku, we have been developing mode decomposition and reduced-order-model construction methods that use neural networks for three-dimensional flow fields generated by fluid dynamics simulations. In this work, we replace the autoencoder used so far with a variational autoencoder, so that the "latent vector" representing the state of the system in the reduced-order space is output as a random sample from a normal distribution. We apply this method to a turbulent flow field around a cylinder at a Reynolds number of 1000 and show that the original flow field can be reproduced with high accuracy using a small number of modes. We also evaluate its applicability to more complex flow fields, such as the flow around vehicles.
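
The following PyTorch sketch illustrates the variational-autoencoder change described above: the encoder outputs a mean and a log-variance, and the latent vector is drawn from the corresponding normal distribution via the reparameterization trick. Layer sizes and the loss weighting are toy placeholders, not the Fugaku-scale network used for 3-D flow fields.

```python
# Minimal VAE sketch: latent vector sampled from N(mu, sigma^2) rather than
# computed deterministically, trained with reconstruction + KL terms.
import torch
import torch.nn as nn

class FlowVAE(nn.Module):
    def __init__(self, n_grid=1024, n_latent=8):
        super().__init__()
        self.enc = nn.Linear(n_grid, 64)
        self.mu = nn.Linear(64, n_latent)        # mean of latent Gaussian
        self.logvar = nn.Linear(64, n_latent)    # log-variance of latent Gaussian
        self.dec = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                 nn.Linear(64, n_grid))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return self.dec(z), mu, logvar

def vae_loss(x, x_rec, mu, logvar, beta=1e-3):
    rec = ((x - x_rec) ** 2).mean()                         # reconstruction error
    kl = -0.5 * (1 + logvar - mu**2 - logvar.exp()).mean()  # KL to N(0, I)
    return rec + beta * kl

x = torch.randn(16, 1024)          # a batch of flattened toy flow snapshots
x_rec, mu, logvar = FlowVAE()(x)
print(vae_loss(x, x_rec, mu, logvar).item())
```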

  • Kohei Fujita, Associate Professor, Earthquake Research Institute, The University of Tokyo
  • "Enhanced Earthquake Simulation with Data-Driven Methods and Stochastic Analysis"

  • In this talk, I will introduce our recent work on scalable finite-element earthquake simulation methods on Fugaku. First, I will introduce a scalable finite-element solver accelerated by data-driven methods: the accuracy of the initial solutions used in the iterative solvers of time-history simulations is improved using past time-step data, leading to significant speedup. Second, I will introduce a scalable solver based on a stochastic finite-element method to conduct uncertainty quantification of huge-scale problems using the full Fugaku system.
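
A hedged toy sketch of the first idea: predict the iterative solver's initial guess from past time-step solutions so that conjugate gradients start closer to the answer. The linear extrapolation predictor and the random test system below are illustrative assumptions, not the solver's actual data-driven predictor.

```python
# Time-history solves with data-driven initial guesses: later steps start
# from an extrapolation of past solutions and need fewer CG iterations.
import numpy as np

def cg(A, b, x0, tol=1e-8, max_iter=1000):
    """Plain conjugate gradients; returns solution and iteration count."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    for it in range(max_iter):
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, max_iter

n = 200
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                       # SPD system matrix
history = []                                      # past time-step solutions
for step in range(5):
    b = np.sin(0.1 * step + np.linspace(0, 1, n))  # slowly varying RHS
    if len(history) >= 2:
        x0 = 2 * history[-1] - history[-2]        # extrapolate initial guess
    else:
        x0 = np.zeros(n)
    x, iters = cg(A, b, x0)
    history.append(x)
    print(f"step {step}: {iters} CG iterations")  # iteration count drops
```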

  • Satoru Miyano, Director, M&D Data Science Center, Tokyo Medical and Dental University
  • "Cancer Research Accelerated Supercomputers and AI"

  • At the Institute of Medical Science, University of Tokyo, we have been working on whole-genome cancer clinical sequencing using the supercomputer SHIROKANE and AI tools. The challenge was to transform thousands to millions of genomic aberrations per case into cancer precision medicine. It is what we now call "digital transformation." IBM Watson for Genomics was employed for this purpose until the end of 2020 (currently, Qiagen Clinical Insight and an expert system are used). In the process, we identified the effectiveness of AI, the indispensability of specialists' intervention, and several bottlenecks. We recognized that natural language processing technologies such as BERT and Google's Knowledge Graph are important, and that automatic document creation is also a crucial issue. AI with an accuracy of X% was not the goal. What is needed is not a black box, but explainable AI (XAI) that explains "why" in a human-understandable way. Watson for Genomics satisfied this request to some extent.
    For basic cancer research on the epithelial-mesenchymal transition (EMT) and anti-cancer drug resistance, we developed novel methodologies and XAIs in collaboration with Fujitsu Laboratories. EMT is considered to play key roles in tumor metastasis and drug resistance. Cancer gains drug resistance during anti-cancer drug therapies, but the detailed mechanisms behind this are still under investigation. We present recent results unraveled by XAI and a gene-network analysis tool on Fugaku from huge public gene expression datasets provided by the Sanger Institute and the Broad Institute.

Invited Talk 3 (DAY-1 : Feb 6 16:20 - 16:50)

Session Chair : Kazuyoshi Ikeda (RIKEN R-CCS)

  • Joe Ledsam, Google Health, Japan
  • "Google's research challenges to apply artificial intelligence to healthcare"

  • Over the last decade, Google has played a leading role in the development and application of machine learning models. With particular reference to clinical medicine and computational biology, Dr. Joe Ledsam will provide an overview of some of Google's efforts to apply artificial intelligence to research challenges in science and healthcare. He will discuss approaches to the design of applied AI studies, and include clinical and technical learnings from Google and DeepMind's work.

Poster Session (DAY-1 : Feb 6 17:10 - 18:20)

List of accepted posters

Keynote 2 (DAY-2 : Feb 7 9:00 - 9:50)

Session Chair : Hitoshi Murai (RIKEN R-CCS)

  • Kimmo Koski, CSC - IT Center for Science Ltd., CEO
  • "LUMI EuroHPC Supercomputer for Simulation, Big Data and AI in the Exascale Era"

  • The #1 European supercomputer LUMI, hosted by CSC in Finland, is an investment of over 200 MEUR by 10 European countries and the European Commission's EuroHPC Joint Undertaking. The benefits of the project include the high performance of the HPE/Cray system - over half an exaflop - with a cost-efficient and environmentally friendly data center built in an old paper mill in Kajaani, in central Finland.

    The architecture of the system aims to satisfy a broad range of advanced research needs. Having LUMI installed and operational at its full capability is only the beginning of the story. The next steps include providing world-class computing capability for various applications and research projects, addressing the grand challenges of computing in different areas of simulation, big data, and AI.

    LUMI is hosted by a large consortium of European countries, and the aim is to extend the collaboration overseas. The talk discusses LUMI and its targets towards exascale, the opportunities provided by global collaboration, the impact of sustainability, and future directions for LUMI and related infrastructures in Europe.

Invited Talk 4 (DAY-2 : Feb 7 9:50 - 10:20)

Session Chair : Mohamed Wahib (RIKEN R-CCS)

  • Rick Stevens, Professor of Computer Science at the University of Chicago; Associate Laboratory Director of the Computing, Environment and Life Sciences (CELS) Directorate; and Argonne Distinguished Fellow at Argonne National Laboratory
  • "A View of Post-Exascale Computational Science and the Emerging Mix of HPC, AI and Quantum"

  • In this talk I will attempt to outline where I think computational science is going over the next twenty years, and how the emergence of new platforms that complement and challenge traditional HPC may impact the types of problems the community works on, the platforms that centers design and deploy, and the research that gets funded. As we launch into the post-exascale epoch, we face a computing landscape that is quite different from the one that motivated the international push for exascale HPC systems. We see the emergence of powerful AI methods, from generative language models that are transforming research, teaching (and exams!), to AI-HPC hybrid (or surrogate) models that promise orders-of-magnitude performance gains for some problems, as well as the promise of a potentially transformative future enabled by quantum computers and quantum algorithms.
    I will focus on weaving together how these emerging capabilities will change the landscape of problems researchers pursue, and on when and how the large-scale scientific computing community is likely to evolve as it both absorbs new approaches and sorts out what is real and works from what is not yet ready for doing science.
    Future platforms need to be designed so that they are well suited to the problems the community wants to solve in the near term, but they also need to help lead the community to new approaches that offer sustained impact across many disciplines.

Session 4 (DAY-2 : Feb 7 10:40 - 11:40)

Session Chair : Mohamed Wahib (RIKEN R-CCS)

  • Toyotaro Suzumura, Professor, Graduate School of Information Science and Technology, The University of Tokyo
  • "Artificial Intelligence for Real-World Industrial Applications"

  • In this talk, I will introduce our recent research projects related to artificial intelligence and high-performance computing - contributing to realizing Society 5.0 - such as graph neural networks for financial applications, geospatial analysis, molecular property prediction, machine learning for medical treatment, and deep reinforcement learning for fair job recommendation.

  • Rio Yokota, Tokyo Institute of Technology, Global Scientific Information and Computing Center
  • "Can we Pretrain Vision Transformers with Synthetic Images?"

  • Transformers are now the dominant architecture in natural language processing and have replaced convolutional neural networks in many vision tasks. The true potential of transformers is achieved when both the dataset and the model are scaled to extreme limits, a phenomenon known as the "scaling law". However, the largest vision dataset, JFT, is available only to a limited number of researchers at Google. Further, large image datasets raise many ethical issues, such as societal bias, copyright infringement, and privacy concerns. Synthetic datasets, on the other hand, are free from these ethical issues, and we can generate arbitrarily large datasets without the need for manual labeling. In this talk, I will describe our recent efforts to achieve the same pretraining effect as JFT using only synthetic datasets.
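
A toy sketch of the "synthetic images with free labels" idea: each image is rendered from a formula, and its label is derived from the formula's parameters, so no manual annotation is needed. The real work uses far richer generators (such as fractal categories); the sinusoidal gratings below are purely illustrative.

```python
# Generate an arbitrarily large labeled image dataset from a formula: the
# class label is the grating orientation bin, known by construction.
import numpy as np

def synthetic_batch(n_images=32, size=64, n_classes=10, rng=None):
    rng = rng or np.random.default_rng(0)
    ys, xs = np.mgrid[0:size, 0:size] / size
    images, labels = [], []
    for _ in range(n_images):
        label = int(rng.integers(n_classes))       # class = orientation bin
        theta = np.pi * label / n_classes          # orientation from the label
        freq = rng.uniform(4.0, 12.0)              # nuisance variation
        phase = rng.uniform(0, 2 * np.pi)
        img = np.sin(2 * np.pi * freq * (xs * np.cos(theta) + ys * np.sin(theta)) + phase)
        images.append(img.astype(np.float32))
        labels.append(label)
    return np.stack(images), np.array(labels)

imgs, labels = synthetic_batch()
print(imgs.shape, labels[:8])   # (32, 64, 64) labeled images, no human labeling
```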

  • Irina Rish, Full Professor at Université de Montréal; Core Faculty Member at Mila - Quebec Artificial Intelligence Institute
  • "Towards Genuinely Open AI: the Role of High-Performance Computer"

  • The rapidly expanding field of large-scale self-supervised models (a.k.a. "foundation models") continues to make significant advances towards solving long-standing AI challenges such as broad generalization and transfer, bringing the AI field closer to its "holy grail" of achieving Artificial General Intelligence (AGI) and impacting a wide range of scientific and societal applications. However, this type of research requires an amount of compute currently unavailable to academic and nonprofit organizations, thus widening the "compute gap" with industry and leading to a concentration of AI power in a few leading companies.
    This motivated us - a rapidly growing international collaboration across several universities and non-profit organizations - to join forces and initiate an effort towards developing common objectives and tools for advancing the field of large-scale foundation models. Our long-term, overarching goal is to develop a wide international collaboration united by the objective of building foundation models that are increasingly more powerful while at the same time safe, robust, and aligned with human values. Such models aim to serve as the foundation for numerous AI applications, from industry to healthcare to scientific discovery - i.e., AI-powered applications of great societal value. We aim to avoid the accumulation of the most advanced AI technology in a small set of large companies, while jointly advancing the field of AI and keeping it open (the "democratization" of AI). Obtaining access to large-scale computational resources would greatly facilitate the development of open AI research worldwide and ensure a collaborative, collective solution to the challenge of making the AI systems of the future not only highly advanced but maximally beneficial for the whole of humanity.

Panel Discussion (DAY-2 : Feb 7 13:30 - 15:00)

Panel Discussion : Expectations for the Next Generation of Supercomputers

Moderator : Kento Sato, R-CCS

Panelists :

Session 5 (DAY-2 : Feb 7 15:20 - 16:20)

Session Chair : Teruki Honma (RIKEN R-CCS)

  • Mayumi Kamada, Graduate School of Medicine, Kyoto University, Associate Professor
  • "Protein Structural Dynamics and its Application to Genomic Medicine"

  • Genomic medicine uses the differences in each individual's genetic background to diagnose, decide on treatment plans, and prevent diseases. Particularly in cancer, many drugs have been developed targeting molecules with genomic variants, and it is expected that optimal drug selection will be possible based on the presence or absence of a variant. However, many variants have not been annotated with any clinical interpretation because their effects on molecular function and disease mechanisms have not been revealed. These variants of uncertain significance (VUS) are a bottleneck for genomic medicine because they make drug selection and treatment decisions difficult.
    In order to reveal the effect of a variant on molecular function, it is necessary to understand its effect on the protein encoded by the gene. Moreover, the structures of proteins and their interactions with other molecules play an essential role in the expression of molecular functions. Therefore, machine learning and large-scale molecular simulation approaches have recently been used to estimate the effects of VUS on protein structures and functions. In this presentation, I would like to introduce our efforts to overcome the barrier of VUS and their application to genomic medicine.

  • Kei-ichi Okazaki, Institute for Molecular Science, Associate Professor
  • "Molecular Simulation and Statistical Inference of Functional Motions of Biomolecular Machines"

  • Biomolecular machines such as molecular motors and transporters are nano-machines developed by nature. These biomolecular machines are dynamic, changing their conformation as they function. Simulations of these molecular systems are challenging due to their slow time scales and large system sizes. To elucidate the mechanisms of these dynamic biomolecular machines, we perform atomistic and coarse-grained molecular dynamics simulations as well as statistical inference of their reaction coordinates. First, I will introduce a rare-event sampling method to overcome slow time scales and its application to a transporter protein called the Na+/H+ antiporter. Then, I will show an ongoing attempt to overcome the huge system size of a membrane-remodeling protein called dynamin.

  • Kei Terayama, Associate Professor, Graduate School of Medical Life Science, Yokohama City University
  • "De novo molecular design based on the collaboration of simulation and machine learning"

  • The computational design of novel molecules is a challenging task attracting attention in areas ranging from organic solar cells to drug discovery. Designing a novel, as-yet-unreported molecule with data-driven approaches is inherently difficult in most cases due to the lack of, or scarcity of, data on the molecule or similar ones. However, molecular simulations based on physical principles enable us to predict, to some extent, the properties of unknown molecules. In this talk, I will introduce a method for designing novel functional molecules by combining molecular simulation with machine learning-based molecular generation techniques. I will also present examples of successful molecular designs and their validation through experiments.
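
A minimal sketch of the generate-and-score loop described above, with RDKit's logP as a stand-in for an expensive physics-based property simulation and random SMILES edits as a stand-in for a learned molecular generator. Everything here is an illustrative assumption, not the presented method.

```python
# Naive generate-and-score molecular design loop: propose candidates, score
# the valid ones with a property calculator, keep the best.
import random
from rdkit import Chem, RDLogger
from rdkit.Chem import Descriptors

RDLogger.DisableLog("rdApp.*")                    # silence invalid-SMILES warnings

def score(smiles):
    """Surrogate 'simulation': return logP, or None for invalid molecules."""
    mol = Chem.MolFromSmiles(smiles)
    return None if mol is None else Descriptors.MolLogP(mol)

def mutate(smiles, rng):
    """Naive generator: insert one character; most edits are invalid and get
    filtered by the scorer, which is why learned generators help."""
    alphabet = "CNOc1()=F"
    i = rng.randrange(len(smiles))
    return smiles[:i] + rng.choice(alphabet) + smiles[i:]

rng = random.Random(0)
population = ["CCO", "c1ccccc1", "CC(=O)O"]       # seed molecules
for generation in range(200):
    child = mutate(rng.choice(population), rng)
    if score(child) is not None:                  # keep valid, high-scoring ones
        population.append(child)
        population = sorted(population, key=score, reverse=True)[:10]

print("best candidates:", population[:3])
```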

Invited Talk 5 (DAY-2 : Feb 7 16:20 - 16:50)

Session Chair : Yuji Sugita (RIKEN R-CCS)

  • Rosana Collepardo, Professor of Computational and Molecular Biophysics, Departments of Chemistry and Genetics, University of Cambridge
  • "Multiscale modelling of chromatin liquid-liquid phase separation"

  • The three-dimensional organisation of the DNA is one of the great marvels of physical biology. By winding around a special class of proteins, the metre-long DNA manages to compress enormously to fit inside tiny (6 μm) nuclei, avoid entanglement and, moreover, maintain exquisite control over the accessibility of the information it carries. The structure of this remarkable complex of DNA and proteins, known as chromatin, determines how easily the DNA can be accessed and, thus, it is intimately linked to gene expression regulation. In this talk, I will present our multiscale modelling techniques designed to investigate the structure of chromatin in conditions that mimic those inside cells (Farr et al, Nature Communications, 2021). I will discuss why nucleosomes, the building blocks of chromatin, should be viewed as highly plastic particles that foster multivalent interactions and promote chromatin’s liquid-like properties.