AI for Science Platform Division
Learning Optimization Platform Development Unit
Unit Leader Mohamed Wahib
mohamed.attia [at] riken.jp (please replace [at] with @) (Lab location: Tokyo)
- 2024
- Unit Leader, Learning Optimization Platform Development Unit, AI for Science Platform Division, R-CCS, RIKEN (-present)
- 2022
- Team Leader, High Performance Artificial Intelligence Systems Research Team, R-CCS, RIKEN (-present)
- 2017
- Senior Research Scientist, Real World Big Data Computing Open Innovation Laboratory (RWBC-OIL), AIST
- Visiting Researcher, AICS (renamed R-CCS in 2018), RIKEN
- Specially Appointed Researcher, Tokyo Tech
- 2012
- Postdoctoral Researcher, AICS, RIKEN
- 2012
- Ph.D., Hokkaido University
Keywords
- AI-based Science
- Infrastructure for Foundation Models (training and inference)
- Generative AI in Science
- Integration of AI in Science
Research summary
The Learning Optimization Platform Development Unit is an R-CCS unit that develops AI software infrastructure and models to expand the use of AI beyond commercial use cases and into scientific domains, in particular by adapting to the scientific requirements of semi-structured multimodal data, complex representations, and attribution. Specifically, we conduct research on next-generation AI systems focused on the following topics:
1. Investigate and develop novel optimization methods for scaling generative AI techniques to address complex optimization problems across different science domains. The focus will be on enhancing optimization performance, scalability, and adaptability through the integration of generative AI models.
2. Research, develop, and implement parallel and distributed inference techniques for large language models (LLMs) to enable efficient scientific applications built on top of LLMs. The focus will be on improving the inference speed, scalability, and resource utilization of LLMs for real-time and large-scale scientific applications.
3. Research, develop, and implement parallel and distributed DNNs to leverage the computational power of modern hardware architectures and accelerate optimization processes. The focus will be on improving scalability, efficiency, and robustness of optimization algorithms for large-scale problems across various domains.
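The core idea behind the parallel and distributed training work in topic 3 can be illustrated with a toy data-parallel sketch (a hypothetical example for illustration, not the unit's actual software): each worker computes a gradient on its own data shard, the gradients are averaged (the role an all-reduce collective plays on real systems), and every replica applies the identical update.

```python
# Toy data-parallel SGD sketch (hypothetical illustration).
# Model: 1-D linear regression y = w * x, trained on sharded data.

def grad(w, xs, ys):
    # Gradient of the mean squared error for y = w * x on one shard.
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def allreduce_mean(grads):
    # Stand-in for an all-reduce collective across workers:
    # every worker ends up with the average gradient.
    return sum(grads) / len(grads)

def train_step(w, shards, lr=0.05):
    local = [grad(w, xs, ys) for xs, ys in shards]  # one gradient per worker
    g = allreduce_mean(local)                       # average across workers
    return w - lr * g                               # identical update on all replicas

# Two workers, each holding a shard of data generated from y = 3x.
shards = [([1.0, 2.0], [3.0, 6.0]), ([3.0, 4.0], [9.0, 12.0])]
w = 0.0
for _ in range(200):
    w = train_step(w, shards)
print(round(w, 3))  # converges toward the true slope 3.0
```

In production frameworks the `allreduce_mean` step is the communication bottleneck, which is why research in this area centers on overlapping communication with computation and on reducing the volume of gradient traffic.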
Main research results
- Development of practical and scalable solutions for machine learning, particularly in areas such as model training efficiency and optimization of neural network architectures for scientific applications (NeurIPS’23, TEVC’24, TPDS’22, HPDC’21, SC’20)
- Development of AI solutions for practical real-world science problems: molecular dynamics simulations (Nature Sci. Data’22), neuromorphic computing chip simulations (TPDS’23), image reconstruction and processing (SC’22, SC’21), and interplanetary flight trajectories for the European Space Agency (ESA) (SoftwareX’21).
Representative papers
- Enzhi Zhang, Isaac Lyngaas, Peng Chen, Xiao Wang, Jun Igarashi, Yuankai Huo, Masaharu Munetomo, Mohamed Wahib. "Adaptive Patching for High-resolution Image Segmentation with Transformers." International Conference for High Performance Computing, Networking, Storage, and Analysis (SC 2024)
- Du Wu, Jintao Meng, Peng Chen, Mohamed Wahib, Xiao Wang, Minwen Deng, Wenxi Zhu, Luo Tao, Yanjie Wei. "autoGEMM: Pushing the Limits of Irregular Matrix Multiplication on Arm Architectures." International Conference for High Performance Computing, Networking, Storage, and Analysis (SC 2024)
- *Yu Xue, Jiajie Zha, Danilo Pelusi, Peng Chen, Tao Luo, Liangli Zhen, Yan Wang, Mohamed Wahib. "Neural Architecture Search With Progressive Evaluation and Sub-Population Preservation." IEEE Transactions on Evolutionary Computation, doi: 10.1109/TEVC.2024.3393304
- Thao Nguyen Truong, Balazs Gerofi, Edgar Josafat Martinez-Noriega, Francois Trahay, Mohamed Wahib. "KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training." Advances in Neural Information Processing Systems 2023 (NeurIPS 2023)
- *Huaipeng Zhang, Nhut-Minh Ho, Yigit Polat Dogukan, Peng Chen, Mohamed Wahib, Truong Thao Nguyen, Jintao Meng, Rick Siow Mong Goh, Satoshi Matsuoka, Tao Luo, Weng-Fai Wong. "Simeuro: A Hybrid CPU-GPU Parallel Simulator for Neuromorphic Computing Chips." IEEE Transactions on Parallel and Distributed Systems, vol. 34, pp. 2767-2782 (2023)
- *Lingqi, Mohamed Wahib, Peng Chen, Jintao Meng, Xiao Wang, Toshio Endo, Satoshi Matsuoka. "PERKS: a Locality-Optimized Execution Model for Iterative Memory-bound GPU Applications." ACM 37th International Conference on Supercomputing (ACM ICS 2023)
- *Jintao Meng, Peng Chen, Mingjun Yang, Mohamed Wahib, Yanjie Wei, Shengzhong Feng, Wei Liu, Liangzhen Zheng. "Boosting the Predictive Performance with Aqueous Solubility Dataset Curation." Nature Scientific Data, March 2022