All programs are shown in JST timezone. (GMT+9:00)


Keynote 1 (DAY-1 : Feb 15 8:25 - 9:15)

Session Chair: Nobuyasu Ito

Session 1 (DAY-1 : Feb 15 9:15 - 9:55)

Session Chair: Nobuyasu Ito

Development and operation of Fugaku

  • Yutaka Ishikawa, RIKEN R-CCS

  • “History of Supercomputer Fugaku Development”

  • Supercomputer Fugaku is the deliverable of the FLAGSHIP 2020 project, which officially started in FY2014 and will end in March 2021. We will summarize, from the current perspective, what was challenging. One of the big challenges was to define the target performance based on a prediction of microfabrication technology two generations ahead. Due to the slowdown in the development of microfabrication technology, we had to delay the development schedule to wait for 7nm technology. This delay allowed us to integrate a half-precision floating-point unit into the Fugaku CPU, A64FX, which accelerates AI-related workloads.

  • Fumiyoshi Shoji, RIKEN R-CCS

  • “New services and operation on Fugaku”

Session 2 (Invited talks) (DAY-1 : Feb 15 10:10 - 11:10)

Session Chair: Mitsuhisa Sato

Outstanding Benchmark Results on Fugaku

  • Toshiyuki Shimizu, Fujitsu Ltd.
  • “TOP500 & HPCG performance on Fugaku”

  • Supercomputer Fugaku, developed by RIKEN and Fujitsu, is the first supercomputer in history to take first place in the three major supercomputer rankings, TOP500, HPCG, and Graph500, at the same time, in June 2020. In addition, it also took first place in the HPL-AI ranking. It repeated first place in all four rankings in November 2020.
    In this presentation, the contributions of the A64FX CPU and the system, including system software, to the TOP500 and HPCG benchmark results will be presented and discussed.

  • Koichi Shirahata, Fujitsu Laboratories Ltd.
  • “MLPerf HPC on Fugaku”

  • The MLPerf HPC benchmark suite measures the time to train emerging scientific machine learning models on HPC systems, including data staging from parallel file systems into accelerated and/or on-node storage systems. We submitted CosmoFlow benchmark results on Fugaku, training a 3D convolutional neural network on N-body cosmological simulation data to predict cosmological parameters. The dataset was reformatted to reduce data staging time, and LLIO (Lightweight Layered IO Accelerator) was used to exploit the temporary local file system. An optimized oneAPI Deep Neural Network Library (oneDNN) for A64FX was developed to exploit the performance of the new chip. Since the accuracy could not reach the target with large batch sizes, a hybrid approach combining data parallelism and model parallelism was introduced. About 1/10 of Fugaku was utilized to set a record for CPU-based supercomputers, achieving a processing speed 14 times faster than that of other CPU-based systems.
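The hybrid scheme above caps the data-parallel degree (and hence the global batch size) while spreading each model replica over several workers. A minimal sketch of such a worker partitioning; the function name and numbers are illustrative, not from the actual MLPerf HPC submission:

```python
# Hybrid data/model parallelism sketch (illustrative, not the Fugaku code):
# with W workers, fix the data-parallel degree D (and hence the global
# batch size), and split each model replica across M = W // D workers.
def partition_workers(world_size, data_parallel):
    model_parallel = world_size // data_parallel
    # One group per model replica; ranks within a group share one model.
    return [[d * model_parallel + m for m in range(model_parallel)]
            for d in range(data_parallel)]

groups = partition_workers(world_size=8, data_parallel=4)
print(groups)  # [[0, 1], [2, 3], [4, 5], [6, 7]]
```

Keeping `data_parallel` fixed while scaling `world_size` is what lets the batch size stay below the accuracy-degrading regime.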

Session 3 (DAY-1 : Feb 15 11:10 - 11:50)

Session Chair: Mitsuhisa Sato

Outstanding Benchmark Results on Fugaku

  • Toshiyuki Imamura, RIKEN R-CCS

  • “HPL-AI benchmark on Fugaku”

  • The HPL-AI benchmark [1] was initiated in November 2019, and the first run was demonstrated on the Summit system at ORNL with 445 PFlop/s. Our HPL-AI benchmark run on the supercomputer Fugaku was awarded first position at the 55th and 56th TOP500 [2]. We submitted the first score report with an effective performance of 1.42 EFlop/s, benchmarked on 126,720 nodes, five-sixths of the full supercomputer Fugaku, in normal mode (2.0 GHz). It was the world's first result to break the exascale barrier in a floating-point arithmetic benchmark. The second submission improved the score with the full system configuration of Fugaku (152,064 nodes) and boost mode (2.2 GHz), raising it to 2.0 EFlop/s, an outstanding performance value.
    Compared to HPL, which allows only double-precision floating-point arithmetic, several challenges are hidden in the large-scale benchmark from a low-precision numerical viewpoint, such as avoiding underflow, overflow, and loss of accumulated information. A preliminary analysis of the HPL-AI matrix revealed that it is insufficient to simply replace FP64 operations with FP32 or FP16 operations, even though the matrix's numerical properties are unexpectedly favorable. At the least, for a full-scale benchmark on Fugaku, careful preliminary numerical analysis for lower-precision arithmetic was needed [3, 4]. We are currently working on broader support for other platforms and aim to release our HPL-AI code as OSS, disabling FP16 calculations as needed. We expect to increase the number of HPL-AI benchmark submissions and to make the effectiveness of mixed-precision benchmarking widely recognized.

    [1] HPL-AI Homepage,
    [2] Top500 Homepage,
    [3] S. Kudo, K. Nitadori, T. Ina, and T. Imamura: Prompt Report on Exa-Scale HPL-AI Benchmark, Proc. IEEE CLUSTER 2020, pp. 418-419 (2020)
    [4] S. Kudo, K. Nitadori, T. Ina, and T. Imamura: Implementation and Numerical Techniques for One EFlop/s HPL-AI Benchmark on Fugaku, Proc. 2020 IEEE/ACM 11th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (ScalA20), pp. 69-76 (2020)
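The core idea behind HPL-AI, a low-precision LU factorization refined back to FP64 accuracy, can be sketched in a few lines. This is an illustration only: FP32 stands in for the FP16 used on A64FX, and `mixed_precision_solve` is an invented name, not the Fugaku implementation:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, iters=5):
    """Solve Ax = b via low-precision LU plus FP64 iterative refinement.

    Sketch only: FP32 stands in for the FP16 used by HPL-AI on hardware
    with fast half-precision units.
    """
    lu, piv = lu_factor(A.astype(np.float32))        # cheap low-precision factorization
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                # residual computed in FP64
        dx = lu_solve((lu, piv), r.astype(np.float32))
        x += dx.astype(np.float64)                   # correction drives x toward FP64 accuracy
    return x

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)      # well conditioned, as the HPL-AI matrix is designed to be
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b)) # FP64-level relative residual
```

The expensive O(n^3) factorization runs in low precision; only the cheap O(n^2) residual and update run in FP64, which is why the benchmark rewards fast half-precision units.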

  • Masahiro Nakao, RIKEN R-CCS

  • “Performance of the Supercomputer Fugaku for Graph500 Benchmark”

  • I present the performance of the supercomputer Fugaku for the breadth-first search (BFS) problem in the Graph500 benchmark, a ranking benchmark used to evaluate large-scale graph processing performance on supercomputer systems. Fugaku is a Japanese exascale supercomputer that consists of 158,976 nodes connected by the Tofu interconnect D (TofuD). Our team has developed a BFS implementation that can extract the full performance of Fugaku. We also optimize the number of processes per node, one-to-one communication, the performance-to-power ratio, and process mapping in the six-dimensional mesh/torus topology of TofuD. We evaluate the BFS performance for a large-scale graph consisting of about 2.2 trillion vertices and 35.2 trillion edges using the whole Fugaku system, and achieve 102,956 giga-traversed edges per second (GTEPS), taking first position in the Graph500 BFS ranking in November 2020. This performance is 3.3 times higher than that of Fugaku's predecessor, the K computer.
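As a back-of-the-envelope reading of the figures above (not official timing): GTEPS is simply edges traversed divided by BFS time, so the record implies each BFS sweep over the 35.2-trillion-edge graph finished in roughly a third of a second.

```python
def gteps(num_edges, bfs_seconds):
    """Giga-traversed edges per second, the Graph500 BFS metric."""
    return num_edges / bfs_seconds / 1e9

# Invert the reported 102,956 GTEPS on the ~35.2-trillion-edge graph
# to get the implied duration of a single BFS sweep.
edges = 35.2e12
seconds = edges / (102956 * 1e9)
print(f"{seconds:.2f} s per BFS")  # 0.34 s per BFS
```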

Session 4 (Invited talks) (DAY-1 : Feb 15 13:20 - 13:50)

Session Chair: Takahito Nakajima

Distinguished Achievements for COVID-19 on Fugaku

  • Shigenori Tanaka, Kobe University
  • “Fragment molecular orbital calculations for SARS-CoV-2 proteins”

  • Electron-correlated fragment molecular orbital (FMO) calculations for SARS-CoV-2 proteins have been performed on the Fugaku supercomputer.
    First, an FMO-based interaction analysis of a complex between the SARS-CoV-2 main protease (Mpro) and its peptide-like inhibitor N3 was carried out on the basis of PDB structure 6LU7 [1]. The ligand-protein interaction energies, represented by inter-fragment interaction energies (IFIEs), were decomposed into several contributions, clarifying the characteristics of hydrogen bonding and dispersion stabilization. From this study, His41, His163, His164, and Glu166 were found to be the most important amino acid residues of Mpro interacting with the inhibitor, mainly through hydrogen bonding.
    Furthermore, a combination of classical molecular dynamics (MD) simulation and ab initio FMO calculation was applied to the Mpro-N3 complex to analyze interactions within the complex while incorporating structural fluctuations under physiological conditions [2]. A statistical evaluation of the interaction energies between N3 and the amino acid residues was performed by processing a thousand structure samples, demonstrating that the relative importance of each residue is altered by the structural fluctuation.
    Along with the statistical evaluation of IFIEs between the N3 ligand and surrounding amino-acid residues over a thousand dynamical structure samples, we have also applied novel approaches based on principal component analysis (PCA) and singular value decomposition (SVD) to the IFIE data in order to extract the dynamically correlated interactions between the ligand and residues. We found that the ligand is dynamically bound in the pharmacophore through collective interactions formed by multiple residues, providing new insight into structure-based drug discovery.
    At the stage of SARS-CoV-2 infection of human cells, the spike protein, consisting of three chains, A, B, and C, with a total of 3300 residues, plays a key role, and thus the structural nature of the spike protein and its binding to host human cells or neutralizing antibodies have attracted considerable interest. We performed interaction analyses of the spike protein in both the closed and open structures, based on large-scale FMO calculations at levels up to fourth-order Møller–Plesset perturbation theory with singles, doubles, and quadruples (MP4(SDQ)) [3]. Inter-chain interaction energies were evaluated for both structures, and a mutual comparison indicated considerable losses of stabilization energy in the open structure, especially in the receptor binding domain (RBD) of chain B. The role of charged residues in inter-chain interactions was illuminated as well. Two separate calculations for the RBD complexes with angiotensin-converting enzyme 2 (ACE2) and the B38 Fab antibody showed that binding with ACE2 or the antibody partially compensates for this stabilization loss of the RBD. The SVD technique was also applied to extract the correlated interactions among the residues involved in this huge protein complex system.

    [1] R. Hatada, K. Okuwaki, Y. Mochizuki, Y. Handa, K. Fukuzawa, Y. Komeiji, Y. Okiyama, S. Tanaka, J. Chem. Inf. Model. 60 (2020) 3593.
    [2] R. Hatada, K. Okuwaki, K. Akisawa, Y. Mochizuki, Y. Handa, K. Fukuzawa, Y. Komeiji, Y. Okiyama, S. Tanaka, Appl. Phys. Exp., in press.
    [3] K. Akisawa, R. Hatada, K. Okuwaki, Y. Mochizuki, K. Fukuzawa, Y. Komeiji, S. Tanaka, RSC Adv. 11 (2021) 3272.

Session 5 (DAY-1 : Feb 15 13:50 - 14:50)

Session Chair: Takahito Nakajima

Distinguished Achievements for COVID-19 on Fugaku

  • Makoto Tsubokura, RIKEN R-CCS

  • “Droplet and Aerosol Dispersion Simulation on the supercomputer Fugaku and Fight Back against COVID-19”

  • Virus droplet infection caused by sneezing, coughing, or talking is strongly influenced by the flow, temperature, and humidity of the air around an infected person and potential victims. Especially for COVID-19, the possibility of aerosol infection by atomized droplets has been suggested, in addition to the usual droplet infection. Because smaller aerosol particles drift in the air for a longer time, it is imperative to predict their scattering route and to estimate how the surrounding airflow affects infection. With this information, the risk of droplet infection can be properly assessed and effective measures to reduce infection can be proposed. In this project, massively parallel coupled simulations of virus droplet scattering, together with airflow and heat transfer, are conducted for indoor environments such as commuter trains, offices, classrooms, and hospital rooms. Countermeasures to reduce the risk are also proposed from the viewpoint of controlling the airflow. The Complex Unified Simulation framework CUBE, developed at RIKEN R-CCS and implemented on the supercomputer Fugaku, is mainly used; it makes it possible to execute the world's largest and most accurate virus droplet simulations ever conducted. These simulation outputs can help protect living and working environments from virus droplet infection and contribute to an earlier recovery of our socio-economic activities.

  • Nobuyasu Ito / Hiroyasu Inoue, RIKEN R-CCS / University of Hyogo

  • “The economic effect of the restriction by Japanese government under COVID-19”

  • In order to prevent the spread of COVID-19, governments have often required regional or national lockdowns (in Japan, restrictions), which have caused extensive economic stagnation over broad areas as the shock of the restrictions diffused to other regions through supply chains. Using supply-chain data for 1.6 million firms in Japan, this study estimates how much economic loss a restriction causes. Through adjustments of our simulator based on the actual economic reactions to restrictions, we find that coordinated restrictions yield smaller GDP losses than uncoordinated restrictions. Furthermore, we test practical scenarios in which Japan's 47 regions impose restrictions over three months and find that GDP losses are lower if nationwide restrictions are coordinated than if they are uncoordinated.

  • Yuji Sugita, RIKEN R-CCS

  • “Intrinsic Conformational Flexibility of SARS-CoV-2 Spike-Protein Simulated on Fugaku”

  • The spike protein of SARS-CoV-2 consists of trimeric polypeptide chains with glycosylated residues on the surface. It sits on the surface of the coronavirus and triggers viral entry into a host cell. Just after the COVID-19 pandemic started, atomic structures of the spike protein were determined using cryo-electron microscopy. The closed structure of the spike protein in the inactive form is distinct from the active forms, in which at least one of the receptor binding domains (RBDs) takes an “up” conformation. Despite the experimental structures, the mechanisms of the conformational changes between the “closed” and “up” forms remain elusive. In this study [1], we performed conventional molecular dynamics (MD), targeted MD (TMD), and enhanced conformational sampling based on the generalized replica exchange with solute tempering (gREST) [2, 3], starting from the “closed” or “up” conformations of the spike protein in explicit solvent, using the GENESIS software [4] on the Fugaku supercomputer. Since gREST does not require reaction coordinates to enhance conformational sampling, we can examine the intrinsic conformational flexibility of the spike protein in solution. The transition pathways between the “closed” and “1 up” forms predicted by gREST are largely affected by protein-glycan interactions and differ from those obtained from the TMD simulations.

    [1] T. Mori et al. Biophys. J. (2021) in press; H.M. Dokainish et al. in preparation.
    [2] M. Kamiya, Y. Sugita, J. Chem. Phys. 149, 072304 (2018).
    [3] H.M. Dokainish, Y. Sugita, Int. J. Mol. Sci. (2021).
    [4] J. Jung, T. Mori, et al. WIREs Comp. Mol. Sci. 5, 310-323 (2015); C. Kobayashi, J. Jung, et al. J. Comp. Chem. 38, 2193-2206 (2017).

Session 6 (Invited talks) (DAY-1 : Feb 15 15:55 - 16:05)

Session Chair: Yuji Sugita

Gordon Bell Prize Finalists' talks

  • Hisashi Yashiro, National Institute for Environmental Studies
  • “World's Largest Ensemble Weather Data Assimilation on Fugaku”

  • Real-world applications demand more than raw computing performance. Software for weather forecasting consists of program parts with various characteristics and, consequently, requires performance from every component of the supercomputer system, from CPU to storage. To evaluate the overall Fugaku system, we conducted an experiment combining high-resolution global atmospheric simulations and ensemble data assimilation (DA). A 3.5 km mesh, 1024-member NICAM-LETKF benchmark was realized using 82% of the nodes in Fugaku. In this calculation, 1.3 PiB of data was passed from the simulation part to the DA part. We will present the application development details, based on the system-application co-design carried out over the past six years, that made this world's largest meteorological calculation possible.

  • Chisachi Kato, University of Tokyo
  • “Toward Realization of Numerical Towing-Tank Tests by Wall-Resolved Large Eddy Simulation based on 32 billion grid Finite-Element Computation”

  • To realize numerical towing-tank tests by substantially shortening the time to solution, a general-purpose finite-element flow solver, named FrontFlow/blue (FFB), has been fully optimized to achieve the maximum possible sustained memory throughput in three of its four hot kernels. A single-node sustained performance of 179.0 GFLOPS, corresponding to 5.3% of the peak performance, has been achieved on Fugaku, Japan's new flagship computer. A weak-scaling benchmark test has confirmed that FFB runs with a parallel efficiency of over 85% up to 5,505,024 compute cores, and an overall sustained performance of 16.7 PFLOPS has been achieved. As a result, the time needed for a large-eddy simulation using 32 billion grid points has been reduced from almost two days to only 37 min., a factor of 71. This clearly indicates that a numerical towing tank could actually be built for ship hydrodynamics within a few years.

Session 7 (Invited talks) (DAY-1 : Feb 15 16:05 - 16:35)

Session Chair: Yuji Sugita

Society 5.0 Simulation

  • Gerhard Hummer, Max Planck Institute of Biophysics
  • “Molecular simulations in times of COVID-19”

  • Mateusz Sikora (1,2), Laura Schulz (1), Florian Blanc (1), Sören von Bülow (1), Michael Gecht (1), Roberto Covino (1,3), Ahmad Reza Mehdipour (1), Gerhard Hummer (1,3,4,5)

    (1) Department of Theoretical Biophysics, Max Planck Institute of Biophysics, 60438 Frankfurt am Main, Germany
    (2) Faculty of Physics, University of Vienna, 1090 Vienna, Austria
    (3) Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany
    (4) Department of Physics, Goethe University Frankfurt, 60438 Frankfurt am Main, Germany
    (5) Buchmann Institute for Molecular Life Sciences, 60438 Frankfurt am Main, Germany

    We use molecular dynamics (MD) simulations to study key molecular processes of the SARS-CoV-2 virus. We concentrate on the structure of the spike (S) protein at the viral surface, its interactions with the host cell, and on viral modulation of the host immune response. In MD simulations of full-length S with a palmitoylated transmembrane domain and a fully glycosylated ectodomain, we identified three hinges in the stalk connecting the S head to the viral membrane. Hinge flexibility and glycosylation have been confirmed by high-resolution cryo-electron tomography (Turonova, Sikora, Schürmann et al., Science 2020). We are now using the detailed structural and dynamic models for a computational antibody epitope scan (Sikora et al., bioRxiv). In addition, we study the interactions of S with the host-cell receptor ACE2 (Mehdipour, Hummer, bioRxiv). MD simulations of the SARS-CoV-2 papain-like protease PLpro provided detailed insight into its function as an immunomodulator that suppresses the host interferon (IFN) and NF-κB pathways through preferential cleavage of ISG15 (Shin et al., Nature 2020). Overall, MD simulations help us uncover some remarkable biology associated with viral infection and, we hope, guide our fight against COVID-19.

    Acknowledgments. Our special thanks go to Martin Beck and Beata Turonova plus team (MPIBP and EMBL) for the electron tomography, to Jacomine Krijnse Locker, Michael Mühlebach and Christoph Schürmann plus team (Paul Ehrlich Institute) for the virus preparation and purification, and to Donghyuk Shin and Ivan Dikic plus team (Goethe University and MPIBP) for the structural and mechanistic studies of PLpro.

Poster Session (DAY-1 : Feb 15 16:45 - 17:45)

Session Chair: Jens Domke, RIKEN R-CCS

"List of Accepted Posters"

Keynote 2 (DAY-2 : Feb 16 8:05 - 8:55)

Session Chair: Kento Sato

  • Bronis R. de Supinski, Lawrence Livermore National Laboratory
  • “The Livermore Large-Scale Computing Strategy”

  • Lawrence Livermore National Laboratory's (LLNL's) supercomputing center, Livermore Computing (LC), is one of the world's leading large-scale computing sites. This talk will provide a brief overview of LC's current systems, including details of Sierra, currently the number 3 system on the TOP500 list. It will then detail LC's anticipated near-term systems, including El Capitan, which will be the first exascale system of the National Nuclear Security Administration's (NNSA's) Advanced Simulation and Computing (ASC) program when it is accepted in 2023. The overall focus of this talk is how these systems fit into LLNL's long-term strategy of adopting heterogeneous system architectures that better support LC's overall workload.

Session 8 (Invited talks) (DAY-2 : Feb 16 8:55 - 9:25)

Session Chair: Kento Sato

Next generation systems

  • John Linford, Arm
  • “Experiences with HPC Toolchains on A64FX”

  • The Arm architecture is now established as a strong foundation for high performance computing systems, and the Arm HPC ecosystem is growing rapidly. Fugaku's commanding TOP500 scores mark the beginning of a long-term trend in computing. This talk will present an overview of the Arm HPC ecosystem and the Arm technologies that enable the exascale HPC platform, including the Scalable Vector Extension, the Scalable Matrix Extension, and tools and programming approaches for optimizing applications for Arm-based systems.

Session 9 (DAY-2 : Feb 16 9:40 - 10:40)

Session Chair: Kentaro Sano

Next generation systems

  • Kengo Nakajima, University of Tokyo/RIKEN R-CCS

  • “h3-Open-BDEC: Innovative Software Platform for Scientific Computing in the Exascale Era”

  • We propose an innovative method for the sustainable promotion of scientific discovery on supercomputers in the exascale era by integrating simulation, data, and learning (S+D+L). The Information Technology Center, The University of Tokyo (ITC/UTokyo) considers the integration of (S+D+L) essential for establishing Society 5.0, the super-smart, human-centered society achieved by digital innovation and by the integration of cyberspace and physical space. ITC/UTokyo has been planning to introduce the BDEC (Big Data & Extreme Computing) system as the platform for integration of (S+D+L), and recently decided to introduce the Wisteria/BDEC-01 system, which starts operation in May 2021. Wisteria/BDEC-01, the first of the BDEC platforms, is a hierarchical, hybrid, heterogeneous (h3) system consisting of computing nodes for computational science (Odyssey, 7,680 nodes of Fujitsu PRIMEHPC FX1000 (A64FX)) and nodes for data science/machine learning (Aquarius, 45 Intel Xeon Ice Lake nodes with 360 NVIDIA A100 GPUs). The total peak performance is 33.1 PFLOPS with a memory bandwidth of 8.38 PB/sec. We are developing an innovative open-source software platform, h3-Open-BDEC, for integration of (S+D+L), and evaluating the effects of the integration on Wisteria/BDEC-01. h3-Open-BDEC is designed to extract the maximum performance from supercomputers with minimum energy consumption, focusing on (1) innovative methods for numerical analysis based on a new principle of computing with adaptive precision, accuracy verification, and automatic tuning, and (2) a hierarchical data-driven approach (hDDA) based on machine learning. The hDDA automatically constructs simplified models for efficient generation of training data using feature detection, MOR, UQ, sparse modeling, and AMR.
    h3-Open-BDEC is the first software platform to realize integration of (S+D+L) on supercomputers in the exascale era, enabling computational scientists to achieve such integration without support from other experts. This integration by h3-Open-BDEC enables significant reductions in computation and power consumption compared to conventional simulations. This work is supported by the Japanese Government from FY2019 to FY2023 (JSPS Grant-in-Aid for Scientific Research (S), P.I.: Kengo Nakajima).

  • Masaaki Kondo, RIKEN R-CCS / The University of Tokyo

  • “Community-Wide Activities toward Next-Generation Advanced Computing Infrastructure”

  • The supercomputer Fugaku made its debut this year and is set to begin full operation soon. It is already time to think about the development of the next-generation supercomputer system. The sustainable evolution of high-performance computers and further integration of AI and big data technologies will enable a number of new innovations in scientific applications as well as in the field of Society 5.0. However, there are many technical challenges in developing next-generation supercomputer systems as we face the end of Moore's law. Since community-wide efforts are indispensable for studying and discussing the technical issues and the necessary R&D elements, we launched a community activity, Next-Generation Advanced Computing Infrastructure (NGACI), to discuss the roadmap of future advanced computing platforms. This talk introduces the activity and briefly summarizes the white paper on next-generation supercomputer systems written by the NGACI community.

Panel discussion: Next generation systems (DAY-2 : Feb 16 10:40 - 11:40)

Moderator : Satoshi Matsuoka

Panelists :

Session 10 (Invited talks) (DAY-2 : Feb 16 13:10 - 14:40)

Session Chair: Yasumichi Aoki

Distinguished Achievements in AI, Big Data and Simulations supporting Society5.0 from Priority Applications

  • Satoru Miyano, Tokyo Medical and Dental University
  • “Digesting Cancer Big Networks by Explainable AI and Supercomputers”

  • We are running a Fugaku project, “Unravelling origin of cancer and diversity by large-scale data analysis and AI technology.” As part of this project, we revisited a large-scale gene expression data set of 762 cancer cell lines published by the Sanger Institute more than ten years ago. Our aim is to comprehensively explore the mechanisms behind the epithelial-mesenchymal transition (EMT) from this large data set. EMT is a process in which epithelial cells lose cell-cell adhesion, gain migration ability, and finally change into mesenchymal cells. In cancer research, EMT has received intensive attention because it is observed to be related to the initiation of invasion and metastasis in cancer progression, but its complexity has prevented comprehensive understanding. Our first challenge was to reveal how gene networks change from epithelial cells to mesenchymal cells. We developed a method named NetworkProfiler and computed 762 networks, each with more than 13,000 genes including microRNAs. In 2010, this took three months on a computer with a peak of 75 TFLOPS. The networks, however, were too large for detailed investigation. We therefore focused on the gene E-cadherin and its parent and grandparent genes in the 762 large networks; 24 genes were suggested as candidate EMT inducers. Half of them had already been validated as EMT inducers, but the remaining 12 had not been investigated at all from the viewpoint of EMT. We found that the gene KLF5 induces EMT as a new inducer (Shimamura T et al. PLoS One. 6(6): e20804, 2011). We have now implemented NetworkProfiler on Fugaku; a small number of Fugaku nodes were sufficient to perform in less than a day the same process that took three months ten years ago. A big obstacle remains, however: the interpretation of a large number of complex networks. To resolve this problem, we developed a strategy based on an explainable AI methodology using tensor decomposition.
    Maruhashi K et al. (AAAI 2018, 3770-3777) developed Deep Tensor, and we then developed Tensor Reconstruction-based Interpretable Prediction (TRIP; Maruhashi K et al.) for learning multiway relations; both are deep learning approaches using tensor decomposition. Deep Tensor is implemented on Fugaku, and we explore the massive multiple gene networks using TRIP. The interpretable AI method based on tensor decomposition enabled us to overcome the limitations of existing gene network analysis. Surprisingly, many EMT-related biological mechanisms discovered during the last ten years, along with new insights, were revealed from the data set published ten years ago. To the best of our knowledge, this is the first study to reveal gene networks using an explainable AI method. The result of the EMT analysis is published in Park H et al. (PLoS One. 15(11): e0241508, 2020). We are also expanding our AI and network methodology by combining mutation data to explore the mechanisms by which cancers acquire drug resistance to gefitinib and erlotinib.

  • Takane Hori, JAMSTEC
  • “Large-scale numerical simulation of earthquake generation, wave propagation and soil amplification”

  • In one of the programs for Promoting Researches on the Supercomputer Fugaku, entitled “Large-scale numerical simulation of earthquake generation, wave propagation and soil amplification,” we have two large research topics. One is the application of the developed simulation codes, based on large-scale finite element modeling, to earthquake and tsunami damage prediction methods, aiming at the damage estimates formulated by the Japanese government. For this purpose, we make the necessary improvements to the simulation codes in cooperation with national agencies. We are also developing a computing platform on which construction and civil engineering companies can use the same framework of earthquake and tsunami damage prediction as the government uses in its damage estimates. The other topic is the development and improvement of computational methods for large-scale earthquake damage prediction by Prof. Ichimura's group at the Earthquake Research Institute, the University of Tokyo, where state-of-the-art techniques of computational and computer science are applied to achieve high-performance computation on Fugaku.

  • Shinobu Yoshimura, The University of Tokyo
  • “Super-simulations of Clean Energy Systems”

  • No source of energy is superior in all aspects: cost, environmental impact, safety, and use of natural resources. Therefore, various types of energy systems, including clean energy systems, have been developed worldwide. To accelerate the development of such energy systems, computational simulations are strongly expected to play a key role. However, the core physics in such systems is very complex and tends to be multi-scale and multi-physics, coupling fluid, thermal, solid, electromagnetic, and other phenomena. We target the following innovative clean energy systems: carbon-free coal gasification plants and large-scale offshore wind farms. These sub-issues are independent of each other in terms of energy sources, but they have many commonalities in their central physical phenomena (structure, fluid, heat, material degradation, etc.) and in the computational science and engineering problems to be overcome. Coal gasification is one of the key technologies for drastically reducing CO2 emissions from coal-fired power generation. Coal is crushed into fine particulate matter and then partially burned into gas in a high-pressure, elevated-temperature environment. We perform a large-scale two-way coupled simulation of thermo-combustion-fluid-melting-structure interaction in a full-scale testing reactor. An offshore wind farm for power generation typically consists of tens to hundreds of large-scale wind turbines. To promote wind energy in Japan, we need to account for heavy weather conditions and narrower offshore sites. To improve the power generation performance of the whole wind farm and the reliability of individual wind turbines, it is necessary to improve the accuracy of evaluating the degradation of a wind turbine's power generation caused by wake interaction, as well as the reliability evaluation of individual wind turbines exposed to the wake.
    In this talk, focusing on the coal gasification system, I explain the latest developments and achievements of multiphysics simulations in which a unique parallel coupler, REVOCAP_Coupler, is used to integrate highly parallelized independent solvers such as the LES-based flow and combustion solver FFR-Comb and the thermal conduction and solid solvers ADVENTURE_Thermal and ADVENTURE_Solid.

Session 11 (DAY-2 : Feb 16 14:55 - 15:55)

Session Chair: Hirofumi Tomita

Distinguished Achievements in AI, Big Data and Simulations supporting Society5.0

  • Takemasa Miyoshi, RIKEN R-CCS

  • “Fugaku’s Illuminating a Path to the Future of Numerical Weather Prediction”

  • At RIKEN, we have been exploring a fusion of big data and big computation, now augmented with AI and machine learning (ML) techniques. Japan's new flagship supercomputer "Fugaku" is designed to be efficient for both double-precision big simulations and reduced-precision machine learning applications, aiming to play a pivotal role in creating the super-smart "Society 5.0." Our group at RIKEN has been pushing the limits of numerical weather prediction (NWP) through computations two orders of magnitude larger using Japan's previous flagship, the K computer. Now, with Fugaku, we have been exploring ideas for fusing Big Data Assimilation and AI. The data produced by NWP models are becoming bigger, and moving those data to other computers for ML may not be feasible. Having a next-generation computer like Fugaku, good for both big NWP computation and ML, may bring a breakthrough toward a new methodology that fuses data-driven (inductive) and process-driven (deductive) approaches in meteorology. With neural networks trained on big computational data on Fugaku, high-precision NWP that can run only on Fugaku becomes available and portable to many users, leading to a broader impact. This approach makes the most of Fugaku's high peak and broad foothills. This presentation will introduce the most recent results from data assimilation and NWP experiments using Fugaku, followed by perspectives on future developments and challenges of DA-AI fusion.

  • Satoru Oishi, RIKEN R-CCS

  • “Deep Learning From Simulated Data for Flood and Debris-Flow Mapping”

  • We propose a framework that estimates the maximum water level and debris-flow-induced topographic deformation from remote sensing imagery by integrating deep learning and numerical simulation. A water and debris-flow simulator generates training data for various artificial disaster scenarios. We show that regression models based on the Attention U-Net and LinkNet architectures, trained on synthetic data, can predict the maximum water level and topographic deformation from a remote-sensing-derived change detection map and a digital elevation model. The proposed framework has an inpainting capability that mitigates the false negatives that are inevitable in remote sensing image analysis. Our framework breaks the limits of remote sensing and enables rapid estimation of inundation depth and topographic deformation, information essential for emergency response, including rescue and relief activities.

  • Seiji Yunoki, RIKEN R-CCS

  • “Quantum simulations for quantum many-body systems”

  • As R. Feynman originally suggested in 1982, quantum many-body systems such as quantum materials are one of the most promising classes of applications for quantum computing. In this talk, I will first briefly introduce some of the research activities at R-CCS on quantum simulations for quantum computing using classical computers. I will then discuss quantum algorithms for quantum simulation on near-term quantum computers such as NISQ devices, focusing on quantum-classical hybrid algorithms with parametrized quantum circuits [1,2] and approaches beyond the variational quantum circuit ansatz [3].

    [1] “Symmetry-adapted variational quantum eigensolver”, K. Seki, T. Shirakawa, and S. Yunoki, Phys. Rev. A 101, 052340 (2020).
    [2] “Discretized quantum adiabatic process for free fermions and comparison with the imaginary-time evolution”, T. Shirakawa, K. Seki, and S. Yunoki, Phys. Rev. Research 3, 013004 (2021).
    [3] “Quantum power method by a superposition of time-evolved states”, K. Seki and S. Yunoki, arXiv:2008.03661 (to be published in PRX Quantum).

Session 12 (Invited talks) (DAY-2 : Feb 16 15:55 - 16:25)

Session Chair: Kento Sato

Distinguished Achievements in AI, Big Data and Simulations supporting Society5.0

  • France Boillod-Cerneux, CEA
  • “AI and simulation for health at CEA”

  • In this talk, we will present recent challenges and scientific progress in health. CEA has been focusing on health for decades, but combining AI algorithms, supercomputers, and large data sets allows our research teams to exploit the data and design new models that accelerate patient care.
    We will first focus on brain imaging and show how our teams are working together on AI problems that cut across many scientific areas, such as astrophysics.

List of Accepted Posters

Room : LINKS-1
16:45-16:51 Arata Amemiya, Shlok Mohta and Takemasa Miyoshi, "Application of the Long-Short Term Memory neural networks to model bias correction: idealized experiments with the Lorenz-96 model"
16:52-16:58 Daichi Mukunoki, Katsuhisa Ozaki, Takeshi Ogita, Toshiyuki Imamura and Roman Iakymchuk, "High-Precision, Accurate, and Reproducible Linear Algebra Operations using Ozaki Scheme"
16:59-17:05 Shun Ohishi, Tsutomu Hihara, Hidenori Aiki, Joji Ishizaka, Yasumasa Miyazawa, Misako Kachi and Takemasa Miyoshi, "Development of an ensemble Kalman filter-based regional ocean data assimilation system"
17:06-17:12 Jorji Nonaka, Naohisa Sakamoto, Go Tamura and Masaaki Terai, "Large Data Visualization Environment on the K Pre-Post Cloud Testbed"
17:13-17:19 Jaewoon Jung, Chigusa Kobayashi, Kento Kasahara, Cheng Tan, Michael Feig and Yuji Sugita, "Optimization of GENESIS MD software on Fugaku supercomputer"
17:20-17:26 Nils Meyer, Tilo Wettig, Dirk Pleiter, Stefan Solbrig and Peter Georg, "Lattice QCD on Fujitsu A64FX"
17:27-17:33 Miki Nakano, Osamu Miyashita and Florence Tama, "Requirement for 3D-reconstruction of biomolecule structure from single particle analysis using X-ray free electron laser diffraction images"
17:34-17:45 Open discussion round
Room : SX-3/44R
16:45-16:51 Maha Mdini, Takemasa Miyoshi and Shigenori Otsuka, "Accelerating Climate Model Computation by Neural Networks: A Comparative Study"
16:52-16:58 Tianfeng Hou, Staf Roels and Hans Janssen, "The use of proper orthogonal decomposition for the Monte Carlo based uncertainty analysis"
16:59-17:05 Kohei Takatama, John C. Wells and Takemasa Miyoshi, "Simulating rapid water level decrease of Lake Biwa due to Typhoon Jebi (2018)"
17:06-17:12 Ivan R. Ivanov, Jens Domke, Akihiro Nomura and Toshio Endo, "Improved failover for HPC interconnects through localised routing restoration"
17:13-17:19 Yoshifumi Nakamura, "Benchmarking QCD Wide SIMD Library (QWS) on Fugaku"
17:20-17:26 Yoshifumi Nakamura, "Finite temperature phase transition for three flavor QCD with domain wall fermions"
17:27-17:33 Issaku Kanamori, Sinya Aoki, Yasumichi Aoki, Tatsumi Aoyam, Takumi Doi, Shoji Hashimoto, Ken-Ichi Ishikawa, Takashi Kaneko, Hideo Matsufuru, Tomoya Nagai and Yoshifumi Nakamura, "Preparation of LQCD code for Fugaku"
17:34-17:45 Open discussion round
Room : Numerical Wind Tunnel
16:45-16:51 Shigenori Otsuka, Yasumitsu Maejima, Pierre Tandeo and Takemasa Miyoshi, "Toward an integrated NWP-DA-AI system for precipitation prediction"
16:52-16:58 Yasumitsu Maejima, Tomoo Ushio and Takemasa Miyoshi, "Toward assimilation of dense and frequent 3-D lightning location data for severe thunderstorm forecast"
16:59-17:05 Yutaka Ishikawa, Atsushi Hori and Balazs Gerofi, "System Software Research for Fugaku"
17:06-17:12 Yoshinori Kusama, Noriyuki Shiobara, Motoi Okuda and Takaaki Noguchi, "Overview of User Support Activities by RIST in HPCI in Supercomputer Fugaku Era in Japan"
17:13-17:19 Ryuta Tsunashima, Ryohei Kobayashi, Norihisa Fujita, Taisuke Boku, Seyong Lee, Jeffrey Vetter, Hitoshi Murai, Masahiro Nakao and Mitsuhisa Sato, "Multi-device Programming Environment for GPU and FPGA Cooperative Acceleration"
17:20-17:26 Atsushi Tokuhisa, "Implementation of an integrated workflow for biomolecular multi conformational analysis by template matching method on the supercomputer Fugaku"
17:27-17:33 Eisuke Kawashima and Takahito Nakajima, "Organoborane Polymer for Lithium-Ion Battery Electrolyte"
17:34-17:45 Open discussion round
Room : Earth Simulator
16:45-16:51 Hiroshi Murakami, "Solving Eigenvalue Problems Using Filters Computed with Mixed-Precision"
16:52-16:58 Toshiyuki Imamura, Yusuke Hirota, Takuya Ina, Shuhei Kudo, Takeshi Terao, Katsuhisa Ozaki and Takeshi Ogita, "EigenExa and related eigensolver projects -- high performance to numerical verification"
16:59-17:05 Daisuke Kadoh, Hideaki Oba and Shinji Takeda, "Triad second renormalization group"
17:06-17:12 Koji Terasaki and Takemasa Miyoshi, "A 1024-Member Data Assimilation and Forecast Experiment with NICAM-LETKF Using Fugaku: A Heavy Rainfall Event in Kyushu in July 2020"
17:13-17:19 Go Tamura, Naohisa Sakamoto, Yasumitsu Maejima and Jorji Nonaka, "Probabilistic Isosurface based Visual Analytics System for Time-Varying Ensemble Weather Simulation Data"
17:20-17:26 Yiyu Tan, Toshiyuki Imamura and Masaaki Kondo, "Design and Implementation of FPGA-based High-order FDTD Method for Room Acoustics"
17:27-17:33 Ryohei Kobayashi, Norihisa Fujita, Yoshiki Yamaguchi, Taisuke Boku, Kohji Yoshikawa, Makito Abe and Masayuki Umemura, "Preliminary Evaluation of Multi-hybrid Acceleration for Radiative Transfer Simulation by OpenACC"
17:34-17:45 Open discussion round
Room : K computer
16:45-16:51 Jianyu Liang, Koji Terasaki and Takemasa Miyoshi, "A purely data-driven approach to satellite simulator in numerical weather prediction"
16:52-16:58 Bhaskar Dasgupta, Osamu Miyashita and Florence Tama, "Modelling multiple types of three-dimensional hexameric conformations of bacterial heat-shock protein ClpB from low-resolution AFM images"
16:59-17:05 Toshiki Matsushima, Seiya Nishizawa and Shin-ichiro Shima, "Numerical method for resolving turbulence induced cloud microphysical variability"
17:06-17:12 Atsuya Uno, "Consideration of Operation of the supercomputer Fugaku"
17:13-17:19 Takaaki Miyajima, Tomohiro Ueno, Jens Huthmann, Kentaro Sano and Atsushi Koshiba, "Research activity on a high-performance off-loading engine with multiple FPGAs"
17:20-17:26 Norihisa Fujita, Ryohei Kobayashi, Yoshiki Yamaguchi, Taisuke Boku, Kohji Yoshikawa, Makito Abe and Masayuki Umemura, "Implementation and Performance Evaluation of Space Radiative Transfer Code on multiple-FPGAs"
17:27-17:45 Open discussion round
Room : Fugaku
16:45-16:51 Takaaki Fukai and Kento Sato, "Measurement of I/O performance for distributed deep neural networks on Fugaku"
16:52-16:58 Akiyoshi Kuroda, Toshiyuki Imamura, Ikuo Miyoshi, Kazuo Minami and Satoshi Matsuoka, "Evaluation of Power Consumption of MLPerf HPC by Comparison HPL-AI on the Supercomputer Fugaku"
16:59-17:05 Takumi Honda, Yousuke Sato and Takemasa Miyoshi, "Potential Impacts of Lightning Flash Observations on Numerical Weather Prediction with Explicit Lightning Processes"
17:06-17:12 Keiji Yamamoto, Shinichi Miura, Masaaki Terai and Fumiyoshi Shoji, "Design of cloud-like functions on the supercomputer Fugaku"
17:13-17:19 Ai Shinobu, Chigusa Kobayashi, Yasuhiro Matsunaga and Yuji Sugita, "Coarse-grained simulations of multiple intermediates along conformational transition pathways of multi-domain proteins"
17:20-17:26 Qiwen Sun, Takemasa Miyoshi and Serge Richard, "Ensemble Kalman filter experiments with an extended SIR model for COVID-19"
17:27-17:45 Open discussion round
Room : Tsubame
16:45-16:51 Konstantinos N. Anagnostopoulos, Takehiro Azuma, Kohta Hatakeyama, Mitsuaki Hirasawa, Yuta Ito, Jun Nishimura, Stratos Kovalkov Papadoudis and Asato Tsuchiya, "Complex Langevin studies of the spacetime structure in the Lorentzian type IIB matrix model"
16:52-16:58 James Taylor, Takumi Honda, Arata Amemiya and Takemasa Miyoshi, "Optimizing the localization scale for a convective-scale ensemble radar data assimilation system"
16:59-17:05 Kenta Sueki, Seiya Nishizawa, Tsuyoshi Yamaura and Hirofumi Tomita, "Parameter estimation for cloud microphysics scheme based on EnKF-based method: Idealized twin experiment"
17:06-17:12 Hiroshi Harada, Chihiro Shibano and Hidetomo Kaneyama, "System utilization rate in HPCI Shared Storage"
17:13-17:19 Ken Furukawa, Hideyuki Sakamoto, Marimo Ohhigashi, Shin-ichiro Shima, Travis Sluka, Takemasa Miyoshi and Qiwen Sun, "Particle Filter Based Data Assimilation with the Two-dimensional Three-state Cellular Automata"
17:20-17:26 Sandhya Tiwari, Florence Tama and Osamu Miyashita, "Retrieving potential three-dimensional biological shape matches from a small number of two-dimensional single particle XFEL diffraction patterns"
17:27-17:45 Open discussion round