The 260th R-CCS Cafe (Jan 19, 2024)
| Date | Fri, Jan 19, 2024 |
|---|---|
| Time | 2:00 pm - 3:45 pm (2:00 pm - 3:45 pm Talks, 3:45 pm - Free discussion and coffee break) |
| City | Kobe, Japan / Online |
| Place | Lecture Hall (6th floor) at R-CCS / Online seminar on Zoom |
| Language | Presentation Language: English / Presentation Material: English |
| Speakers | Yasumitsu Maejima (Data Assimilation Research Team), Serge G. Petiton (University of Lille; CNRS, France), Nahid Emad (University of Paris Saclay/Versailles, Maison de la Simulation; LI-PaRAD, France) |
Talk Titles and Abstracts
1st Speaker: Yasumitsu Maejima
Title:
Observing system simulation experiments of a rich phased array weather radar network covering Kyushu for the July 2020 heavy rainfall event
Abstract:
In early summer, a monsoon front called the "Baiu front" brings a rainy season to Japan and occasionally causes catastrophic disasters. On July 4, 2020, southern Kumamoto experienced extremely heavy rainfall associated with the Baiu front, which caused catastrophic flooding of the Kuma River. This study investigates the potential impact of a rich phased array weather radar (PAWR) network covering Kyushu, Japan, on numerical weather prediction (NWP) of this historic heavy rainfall. Perfect-model, identical-twin observing system simulation experiments (OSSEs) with 17 PAWRs are performed with a 30-second-update local ensemble transform Kalman filter (LETKF) coupled with a regional NWP model, the Scalable Computing for Advanced Library and Environment-Regional Model (SCALE-RM), at 1-km resolution. Assimilating the PAWR data every 30 seconds significantly improves the heavy-rainfall prediction, mainly up to a 1-hour lead time, compared with the experiment without data assimilation.
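For readers unfamiliar with the method, the sketch below shows a single LETKF analysis step in the standard ensemble-transform formulation (weights computed in ensemble space). It is a minimal NumPy illustration only, not the SCALE-LETKF code used in the study; the function name, array sizes, and inflation handling are assumptions made for the example.

```python
# Minimal sketch of one LETKF analysis step for a single local region.
# Not the SCALE-LETKF implementation; names and sizes are illustrative.
import numpy as np

def letkf_analysis(Xb, Yb, yo, R, inflation=1.0):
    """Xb: (n, k) background ensemble; Yb: (p, k) ensemble in observation space;
    yo: (p,) observations; R: (p, p) observation-error covariance."""
    n, k = Xb.shape
    xb_mean = Xb.mean(axis=1, keepdims=True)
    Xp = Xb - xb_mean                                   # state perturbations
    yb_mean = Yb.mean(axis=1, keepdims=True)
    Yp = Yb - yb_mean                                   # obs-space perturbations

    Rinv = np.linalg.inv(R)
    C = Yp.T @ Rinv                                     # (k, p)
    Pa_tilde = np.linalg.inv((k - 1) / inflation * np.eye(k) + C @ Yp)
    w_mean = Pa_tilde @ C @ (yo - yb_mean.ravel())      # mean-update weights

    # symmetric square root for the perturbation update
    evals, evecs = np.linalg.eigh((k - 1) * Pa_tilde)
    Wa = evecs @ np.diag(np.sqrt(evals)) @ evecs.T

    return xb_mean + Xp @ (w_mean[:, None] + Wa)        # analysis ensemble (n, k)

# toy usage: 5 state variables, 4 members, 3 observations
rng = np.random.default_rng(0)
Xb = rng.normal(size=(5, 4))
H = np.zeros((3, 5)); H[0, 0] = H[1, 2] = H[2, 4] = 1.0  # simple observation operator
Xa = letkf_analysis(Xb, H @ Xb, rng.normal(size=3), 0.5 * np.eye(3))
```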
2nd Speaker: Serge G. Petiton
Title:
Sequences of irregular sparse large matrix computation for iterative methods on Fugaku
Abstract:
Exascale machines are now available, based on several different arithmetics (from 64-bit down to 16- and 32-bit, including mixed-precision versions and some that are no longer IEEE compliant) and on different architectures. Recent brain-scale applications, for example from machine learning and AI, manipulate huge graphs or meshes that lead to very sparse non-symmetric linear algebra problems. These applications generate irregularly structured data (graphs, meshes, or directly sparse matrices) that make it possible to distribute a few compressed rows or columns of extreme-scale, very sparse matrices across supercomputer nodes, but that do not allow a dense vector of the same order to be stored on each node, as is usually assumed for computing distributed matrix-vector products (even using MapReduce).
In this talk, after a short description of recent developments that have an important impact on our results, in particular for parallel and distributed iterative methods, I present results obtained on Fugaku with Japanese and French colleagues, based on sequences of sparse non-symmetric matrix products optimized for very irregular, very large sparse matrices. I discuss the performance with respect to the sparsity and size of the matrices, the formats used to compress them, the number of processes and nodes, and two different interconnection network topologies. I also analyze the impact of on-chip networks interconnecting subsets of cores that do not share memory, with respect to the irregular sparsity patterns of the matrices. I conclude by proposing some research perspectives and potential collaborations.
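As a rough illustration of the data-distribution constraint described above, the sketch below partitions a CSR matrix into blocks of rows and lets each block multiply using only the vector entries its column indices actually reference, so no "node" ever holds a full dense vector of the matrix order. This is a single-process SciPy toy under assumed names and sizes, not the optimized Fugaku implementation discussed in the talk.

```python
# Illustrative sketch only: a sequence of sparse matrix-vector products where
# each "node" owns a CSR block of rows and fetches only the vector entries
# referenced by its column indices.
import numpy as np
import scipy.sparse as sp

def needed_columns(block):
    """Columns referenced by a CSR row block; only these vector entries are gathered."""
    return np.unique(block.indices)

def spmv_sequence(A, x, n_iter=10, n_blocks=4):
    n = A.shape[0]
    bounds = np.linspace(0, n, n_blocks + 1, dtype=int)
    A = A.tocsr()
    blocks = [A[bounds[i]:bounds[i + 1], :] for i in range(n_blocks)]
    cols = [needed_columns(b) for b in blocks]            # per-"node" communication pattern
    for _ in range(n_iter):                               # e.g. a power/Krylov-type iteration
        y = np.empty(n)
        for b, c, lo, hi in zip(blocks, cols, bounds[:-1], bounds[1:]):
            # each block multiplies using only the gathered entries x[c]
            y[lo:hi] = b[:, c] @ x[c]
        x = y / np.linalg.norm(y)                         # normalize between products
    return x

# toy usage: a random 10,000 x 10,000 matrix with about 5 nonzeros per row
A = sp.random(10_000, 10_000, density=5e-4, format="csr", random_state=0)
x = spmv_sequence(A, np.ones(10_000))
```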
3rd Speaker: Nahid Emad
Title:
Parallel Numerical Computation and AI
Abstract:
Many machine learning techniques are strongly linked to linear algebra methods, such as those for solving the eigenvalue problem or, more generally, the singular value problem. Sparse computation is, moreover, a subject common to both fields. We show how to take advantage of these interactions and commonalities to propose new approaches to problem solving in either domain. An innovative machine learning approach based on the Unite and Conquer methods used in linear algebra will be presented. In addition to its efficiency in terms of accuracy, the important characteristics of this inherently parallel and scalable technique make it well suited to multi-level and heterogeneous parallel and/or distributed architectures. Experimental results, obtained partly on the Fugaku supercomputer, will be presented, demonstrating the value of the approach for efficient data analysis in applications such as clustering and anomaly detection.
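As one possible reading of the Unite and Conquer idea applied to machine learning: several co-instances of an iterative method run with different starting points and periodically exchange their best intermediate result. The toy sketch below applies that pattern to k-means clustering; the exchange rule, perturbation scale, and all names are illustrative assumptions, not the speaker's actual algorithm or code.

```python
# Toy illustration of a Unite-and-Conquer-style collaboration between several
# co-instances of an iterative method (here, k-means). Not the presented method.
import numpy as np

def kmeans_step(X, centers):
    """One Lloyd iteration: assign points, recompute centers, return inertia."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(1)
    new_centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(len(centers))])
    return new_centers, d[np.arange(len(X)), labels].sum()

def unite_and_conquer_kmeans(X, k=3, n_instances=4, n_cycles=5, iters_per_cycle=3, seed=0):
    rng = np.random.default_rng(seed)
    instances = [X[rng.choice(len(X), k, replace=False)] for _ in range(n_instances)]
    for _ in range(n_cycles):
        scores = []
        for i, centers in enumerate(instances):
            for _ in range(iters_per_cycle):        # each co-instance iterates independently
                centers, inertia = kmeans_step(X, centers)
            instances[i] = centers
            scores.append(inertia)
        best = instances[int(np.argmin(scores))]    # "unite": share the best intermediate result
        # restart the other instances from small perturbations of the best centers
        instances = [best] + [best + 0.05 * rng.normal(size=best.shape)
                              for _ in range(n_instances - 1)]
    return instances[0]

# toy usage on two Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
centers = unite_and_conquer_kmeans(X, k=2)
```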
Important Notes
- Please turn off your video and microphone when you join the meeting.
- The broadcast may be interrupted or terminated depending on network conditions or other unexpected events.
- The program schedule and contents may be modified without prior notice.
- Depending on your device and network environment, you may not be able to watch the session.
- All rights to the broadcast material belong to the organizer and the presenters; it is prohibited to copy, modify, or redistribute all or part of the broadcast material without prior permission from RIKEN.
(Dec 28, 2023)